Commit

#1 updating S3 bucket for GCS incl. instructions

matthewmc1 committed Jan 27, 2022
1 parent 935dea3 commit 5c6743b

Showing 15 changed files with 413 additions and 0 deletions.
19 changes: 19 additions & 0 deletions GCP/1-Configure-Credentials-To-Access-GCP.md
@@ -0,0 +1,19 @@
# Configure Credentials To Access GCP At The Programmatic Level


The purpose of this lab is to configure IAM credentials on your local computer so that you can access GCP programmatically (SDKs, CLI, Terraform, etc.)

## Install gcloud CLI
1. [Cloud SDK](https://cloud.google.com/sdk/docs/install)

## Billing Account

You should have a billing account associated with your account before starting. If you have never used GCP before, signing up also entitles you to free credits you can use for this project, but make sure to destroy the resources afterwards so that you are not charged.

## Login & Create Project
1. To authenticate locally, first run `gcloud auth application-default login` - this logs you in using the Google sign-in flow.
2. Create the project by running `gcloud projects create devops-the-hardway`
3. Confirm the project was created with `gcloud projects list`
4. Link the billing account to the project: `gcloud beta billing projects link devops-the-hardway --billing-account {BILLING-ID}`
5. Set the default project: `gcloud config set project devops-the-hardway`
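
Once you are logged in, Terraform's Google provider picks up these Application Default Credentials automatically. A minimal sketch of a provider block that targets the project created above (the region is an assumption for illustration):

```
provider "google" {
  # Credentials come from `gcloud auth application-default login` (ADC),
  # so nothing secret needs to be hard-coded here.
  project = "devops-the-hardway"
  region  = "europe-west2" # assumed region - change to whatever you prefer
}
```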

@@ -0,0 +1,38 @@
# Create a Google Cloud Storage bucket to store Terraform state files

In this lab you will create a Google Cloud Storage bucket that will be used to store Terraform state files.

## Create The Terraform Configurations

1. You can find the Terraform configuration for the Google Cloud Storage bucket [here](https://github.com/mmcgibbon1/DevOps-The-Hard-Way-GCP/tree/trunk/Terraform-GCP-Services-Creation/terraform-state-gcs-bucket). The Terraform configuration files are used to create a Google Cloud Storage bucket that will store your Terraform state (`.tfstate`) files.

The Terraform `main.tf` will do a few things:
- Create the Google Cloud Storage bucket in the `EU` multi-region for regional availability
- Enable object versioning (`enabled = true`) so previous versions of the state file are retained
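
For reference, a minimal sketch of what that `main.tf` could look like, assuming the bucket name shown in the plan output below (treat it as an illustration rather than the canonical file):

```
resource "google_storage_bucket" "terraform_state" {
  name          = "terraform-state-devopsthehardway-gcp"
  location      = "EU"
  storage_class = "STANDARD"

  # Keep a history of state files so a bad apply can be rolled back
  versioning {
    enabled = true
  }

  uniform_bucket_level_access = true
}
```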


2. Create the bucket by running the following:
- `terraform init` - To initialize the working directory and pull down the provider
- `terraform plan -out gcs.tfplan` - To go through a "check" and confirm the configurations are valid and create a plan file based on the name provided.
- `terraform apply gcs.tfplan` - To create the resource

3. Sample output from `terraform plan -out gcs.tfplan`

```
  # google_storage_bucket.terraform_state will be created
  + resource "google_storage_bucket" "terraform_state" {
      + force_destroy               = false
      + id                          = (known after apply)
      + location                    = "EU"
      + name                        = "terraform-state-devopsthehardway-gcp"
      + project                     = (known after apply)
      + self_link                   = (known after apply)
      + storage_class               = "STANDARD"
      + uniform_bucket_level_access = true
      + url                         = (known after apply)
      + versioning {
          + enabled = true
        }
    }
```
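
Once the bucket exists, later Terraform configurations can point their state at it with a `gcs` backend block. A sketch, assuming the bucket name above and a made-up prefix:

```
terraform {
  backend "gcs" {
    bucket = "terraform-state-devopsthehardway-gcp"
    prefix = "example-service" # hypothetical prefix - use one per configuration
  }
}
```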
17 changes: 17 additions & 0 deletions Terraform-GCP-Services-Creation/2-Create-ECR.md
@@ -0,0 +1,17 @@
# Create an Elastic Container Registry Repository

In this lab you will create a repository to store the Docker image that you created for the Uber app.

## Create the ECR Terraform Configuration

1. You can find the Terraform configuration for ECR [here](https://github.com/AdminTurnedDevOps/DevOps-The-Hard-Way-AWS/tree/main/Terraform-AWS-Services-Creation/ECR). The Terraform configuration files are used to create a repository in Elastic Container Registry (ECR).

The Terraform `main.tf` will do a few things:
- Use a Terraform backend to store the `.tfstate` in an S3 bucket
- Use the `us-east-1` region, but feel free to change that if you'd like
- Use the `aws_ecr_repository` Terraform resource to create a new repository.

2. Create the repository by running the following:
- `terraform init` - To initialize the working directory and pull down the provider
- `terraform plan` - To go through a "check" and confirm the configurations are valid
- `terraform apply` - To create the resource
@@ -0,0 +1,19 @@
# Create An EKS Cluster and IAM Role/Policy

In this lab you will create:
- The appropriate IAM role and policy for EKS.
- The EKS cluster

## Create the EKS Terraform Configuration

1. You can find the Terraform configuration for EKS [here](https://github.com/AdminTurnedDevOps/DevOps-The-Hard-Way-AWS/tree/main/Terraform-AWS-Services-Creation/EKS-With-Worker-Nodes). The Terraform configuration files are used to create an EKS cluster and IAM Role/Policy for EKS.

The Terraform `main.tf` will do a few things:
- Use a Terraform backend to store the `.tfstate` in an S3 bucket
- Use the `us-east-1` region, but feel free to change that if you'd like
- Use the `aws_iam_role` and `aws_iam_role_policy_attachment` Terraform resources to create a new IAM configuration.

2. Create the cluster and IAM resources by running the following:
- `terraform init` - To initialize the working directory and pull down the provider
- `terraform plan` - To go through a "check" and confirm the configurations are valid
- `terraform apply` - To create the resources
37 changes: 37 additions & 0 deletions Terraform-GCP-Services-Creation/4-Run-CICD-For-EKS-Cluster.md
@@ -0,0 +1,37 @@
# Create EKS Cluster With CICD

In this lab, you'll learn how to create an EKS cluster using GitHub Actions. The code can be found [here](https://github.com/AdminTurnedDevOps/DevOps-The-Hard-Way-AWS/tree/main/Terraform-AWS-Services-Creation/EKS-With-Worker-Nodes)


## Secrets
Prior to running the pipeline, you'll need to set up authentication from GitHub to AWS. To do that, you'll set up secrets.

You'll need an AWS Access Key ID and an AWS Secret Access Key as those are the two secrets you'll be adding into the GitHub repository. These two secrets will allow you to connect to AWS from GitHub Actions.

1. In the code repository, go to Settings --> Secrets
2. Add in two secrets:
`AWS_ACCESS_KEY_ID`
`AWS_SECRET_ACCESS_KEY`

The values should come from an AWS access key ID and secret access key. The key pair must belong to an IAM user that has policies attached covering the resources being created in AWS.

3. Save the secrets.

## Pipeline
Now that the secrets are created, it's time to create the pipeline.

1. Under the GitHub repository, click on the **Actions** tab
2. Under **Get started with Actions**, click the *set up a workflow yourself* button
3. Inside of the workflow, copy in the contents that you can find [here](https://github.com/AdminTurnedDevOps/DevOps-The-Hard-Way-AWS/blob/main/.github/workflows/main.yml)

The pipeline does a few things:
- On line 4, you'll see `workflow_dispatch`, which means the pipeline only runs when you kick it off manually. You can, of course, change this so the pipeline runs automatically when, for example, code is pushed to the `dev` or `main` branch.
- The code is checked out
- Authentication to AWS occurs
- Terraform is set up
- Terraform init occurs
- Terraform format occurs
- Terraform plan occurs
- Terraform apply occurs

4. Run the pipeline and watch as it automatically creates the EKS cluster
25 changes: 25 additions & 0 deletions Terraform-GCP-Services-Creation/ECR/main.tf
@@ -0,0 +1,25 @@
terraform {
  backend "s3" {
    bucket = "terraform-state-devopsthehardway"
    key    = "ecr-terraform.tfstate"
    region = "us-east-1"
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_ecr_repository" "devopsthehardway-ecr-repo" {
  name                 = var.repo_name
  image_tag_mutability = "MUTABLE"

  image_scanning_configuration {
    scan_on_push = true
  }
}
1 change: 1 addition & 0 deletions Terraform-GCP-Services-Creation/ECR/terraform.tfvars
@@ -0,0 +1 @@
repo_name = "devopsthehardway-gcp"
5 changes: 5 additions & 0 deletions Terraform-GCP-Services-Creation/ECR/variables.tf
@@ -0,0 +1,5 @@
variable "repo_name" {
  type        = string
  default     = "devopsthehardway"
  description = "ECR repo to store a Docker image"
}
93 changes: 93 additions & 0 deletions Terraform-GCP-Services-Creation/EKS-Fargate/main.tf
@@ -0,0 +1,93 @@
terraform {
  backend "s3" {
    bucket = "terraform-state-devopsthehardway"
    key    = "eks-terraform.tfstate"
    region = "us-east-1"
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}


# IAM Role for EKS to have access to the appropriate resources
resource "aws_iam_role" "eks-iam-role" {
  name = "devopsthehardway-eks-iam-role"

  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

}

## Attach the IAM policy to the IAM role
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks-iam-role.name
}

## Create the EKS cluster
resource "aws_eks_cluster" "devopsthehardway-eks" {
  name     = "devopsthehardway-cluster"
  role_arn = aws_iam_role.eks-iam-role.arn

  vpc_config {
    subnet_ids = [var.subnet_id_1, var.subnet_id_2]
  }

  depends_on = [
    aws_iam_role.eks-iam-role,
  ]
}


resource "aws_iam_role" "eks-fargate" {
  name = "eks-fargate-devopsthehardway"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "eks-fargate-pods.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "AmazonEKSFargatePodExecutionRolePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy"
  role       = aws_iam_role.eks-fargate.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy-fargate" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks-fargate.name
}

resource "aws_eks_fargate_profile" "devopsthehardway-eks-serverless" {
  cluster_name           = aws_eks_cluster.devopsthehardway-eks.name
  fargate_profile_name   = "devopsthehardway-serverless-eks"
  pod_execution_role_arn = aws_iam_role.eks-fargate.arn
  subnet_ids             = [var.private_subnet_id_1]

  selector {
    namespace = "default"
  }
}
14 changes: 14 additions & 0 deletions Terraform-GCP-Services-Creation/EKS-Fargate/variables.tf
@@ -0,0 +1,14 @@
variable "subnet_id_1" {
  type    = string
  default = "subnet-0724276a66ddfe51e"
}

variable "subnet_id_2" {
  type    = string
  default = "subnet-0bc007da6e373a517"
}

variable "private_subnet_id_1" {
  type    = string
  default = "subnet-0a18be575c2cd0968"
}
116 changes: 116 additions & 0 deletions Terraform-GCP-Services-Creation/EKS-With-Worker-Nodes/main.tf
@@ -0,0 +1,116 @@
terraform {
  backend "s3" {
    bucket = "terraform-state-devopsthehardway"
    key    = "eks-terraform-workernodes.tfstate"
    region = "us-east-1"
  }
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}


# IAM Role for EKS to have access to the appropriate resources
resource "aws_iam_role" "eks-iam-role" {
  name = "devopsthehardway-eks-iam-role"

  path = "/"

  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

}

## Attach the IAM policy to the IAM role
resource "aws_iam_role_policy_attachment" "AmazonEKSClusterPolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
  role       = aws_iam_role.eks-iam-role.name
}
resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly-EKS" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.eks-iam-role.name
}

## Create the EKS cluster
resource "aws_eks_cluster" "devopsthehardway-eks" {
  name     = "devopsthehardway-cluster"
  role_arn = aws_iam_role.eks-iam-role.arn

  vpc_config {
    subnet_ids = [var.subnet_id_1, var.subnet_id_2]
  }

  depends_on = [
    aws_iam_role.eks-iam-role,
  ]
}

## Worker Nodes
resource "aws_iam_role" "workernodes" {
  name = "eks-node-group-example"

  assume_role_policy = jsonencode({
    Statement = [{
      Action = "sts:AssumeRole"
      Effect = "Allow"
      Principal = {
        Service = "ec2.amazonaws.com"
      }
    }]
    Version = "2012-10-17"
  })
}

resource "aws_iam_role_policy_attachment" "AmazonEKSWorkerNodePolicy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEKS_CNI_Policy" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "EC2InstanceProfileForImageBuilderECRContainerBuilds" {
  policy_arn = "arn:aws:iam::aws:policy/EC2InstanceProfileForImageBuilderECRContainerBuilds"
  role       = aws_iam_role.workernodes.name
}

resource "aws_iam_role_policy_attachment" "AmazonEC2ContainerRegistryReadOnly" {
  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
  role       = aws_iam_role.workernodes.name
}

resource "aws_eks_node_group" "worker-node-group" {
  cluster_name    = aws_eks_cluster.devopsthehardway-eks.name
  node_group_name = "devopsthehardway-workernodes"
  node_role_arn   = aws_iam_role.workernodes.arn
  subnet_ids      = [var.subnet_id_1, var.subnet_id_2]
  instance_types  = ["t3.xlarge"]

  scaling_config {
    desired_size = 1
    max_size     = 1
    min_size     = 1
  }

  depends_on = [
    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,
    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,
    #aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,
  ]
}