
Introduce reference implementation for standard EKS cluster using Terraform #6

Merged
merged 20 commits into master from feature/terraform-module on Sep 20, 2022

Conversation

jpolchlo
Collaborator

@jpolchlo jpolchlo commented Aug 11, 2022

The core function of this entire repository is to describe an Azavea-standard Kubernetes deployment. What are the hardware, system, and application requirements that meet our minimum needs? What kind of interface for deploying this (potentially complex) infrastructure should we provide? How should users of this infrastructure interact with it as they deploy their own custom applications? These are some of the questions that we will need to answer in due time as we learn more about this space.

This PR is an initial answer to how we should proceed, suggesting one potential method for deploying a Kubernetes cluster to AWS. The approach is an iteration on a Terraform-based deployment used for other projects at Azavea, and is hopefully a refinement of those earlier efforts. The goal is a standard cluster architecture that application-specific repositories can target when they want to place resources onto company-wide clusters. The shape of these standard clusters is emerging, but not yet fixed. See the documentation in this repo for more details.

The Terraform code in this PR has been segmented into several stages: (1) configuring the cluster hardware and API access, including OIDC for IRSA and some RBAC setup; (2) setting up basic cluster services, in this case, Karpenter, and possibly an ingress controller; and (3) provisioning system-wide applications, beginning with Franklin.
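To make stage (1) concrete, here is a minimal, hypothetical sketch of an EKS cluster plus the OIDC identity provider that IRSA depends on; the resource names, cluster name, and variable wiring are illustrative only and do not reflect the actual module layout in this branch:

```hcl
# Sketch only: names and values are hypothetical.
variable "private_subnet_ids" {
  type = list(string)
}

variable "cluster_role_arn" {
  type = string
}

resource "aws_eks_cluster" "cluster" {
  name     = "azavea-staging" # hypothetical cluster name
  role_arn = var.cluster_role_arn

  vpc_config {
    subnet_ids = var.private_subnet_ids
  }
}

# Register the cluster's OIDC issuer as an IAM identity provider so that
# Kubernetes service accounts can assume IAM roles (IRSA).
data "tls_certificate" "oidc" {
  url = aws_eks_cluster.cluster.identity[0].oidc[0].issuer
}

resource "aws_iam_openid_connect_provider" "irsa" {
  url             = aws_eks_cluster.cluster.identity[0].oidc[0].issuer
  client_id_list  = ["sts.amazonaws.com"]
  thumbprint_list = [data.tls_certificate.oidc.certificates[0].sha1_fingerprint]
}
```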

This basic setup is delivered through an updated STRTA infrastructure. The standard approach to deployment will consist of executing cibuild followed by cipublish. The latter script is aware of both the AWS region we are targeting and the target environment. (The reference deployment of this system currently lives in us-west-2, with only a staging environment.) I've also improved the script infrastructure for iterating on these deployments, adding a console script that facilitates interaction with the infra script during development.
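As a rough illustration of how the region and environment targeting could be threaded through to Terraform (the variable names and the apply invocation shown here are assumptions, not necessarily what cipublish does):

```hcl
variable "aws_region" {
  type    = string
  default = "us-west-2"
}

variable "environment" {
  type    = string
  default = "staging"

  validation {
    condition     = contains(["staging", "production"], var.environment)
    error_message = "environment must be one of: staging, production."
  }
}

provider "aws" {
  region = var.aws_region
}

# A wrapper script (e.g. cipublish) could then select the target with
# something like:
#   terraform apply -var="aws_region=us-west-2" -var="environment=staging"
```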

The basic structures suggested by this PR should be considered a starting point for discussion as we develop best practices.

This is still a bit WIPpy. Some amount of work is still required to

jpolchlo added 20 commits July 11, 2022 15:53
… variables; allows for easier configuration of AWS region and targeting deployments (staging, production, etc)
… when creating a new cluster from scratch; aws-auth ConfigMap doesn't exist by the time the module attempts to customize it
…n RDS instance and places a Franklin deployment and LoadBalancer service onto the cluster; Franklin is reachable via the resulting ELB; still need to supply a route53 resource to create a DNS alias to this ELB
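For the DNS alias mentioned in the last commit note above, a sketch along the following lines would work, though the Service definition, hostname, and zone wiring here are hypothetical rather than taken from this branch:

```hcl
# Hypothetical Service and Route 53 record; not the code in this branch.
variable "hosted_zone_id" {
  type = string
}

resource "kubernetes_service" "franklin" {
  metadata {
    name = "franklin"
  }

  spec {
    type = "LoadBalancer"

    selector = {
      app = "franklin"
    }

    port {
      port        = 80
      target_port = 8081
    }
  }
}

# Point a friendly DNS name at the ELB hostname that the Service created.
resource "aws_route53_record" "franklin" {
  zone_id = var.hosted_zone_id
  name    = "franklin.staging.example.com" # hypothetical hostname
  type    = "CNAME"
  ttl     = 300
  records = [kubernetes_service.franklin.status[0].load_balancer[0].ingress[0].hostname]
}
```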
@jpolchlo jpolchlo mentioned this pull request Aug 15, 2022
@jpolchlo
Collaborator Author

I've deferred the outstanding tasks in this PR to separate issues so that I can just merge the feature. This code, after all, does work as desired, even if it isn't perfect. We can improve things at a later date.

@jpolchlo jpolchlo merged commit a294c27 into master Sep 20, 2022
@jpolchlo jpolchlo deleted the feature/terraform-module branch October 19, 2022 21:04