These HCL modules work together to create a customizable and secure network infrastructure consisting of two VPCs linked via a VPC peering connection, a NAT gateway, an Internet gateway, EC2 instances, and a Bastion host that allows connectivity from external administrators.
The modules are designed to be applied in a specific order.
The folder structure is as seen below:
├── environments
│   ├── nonprod
│   │   ├── network
│   │   │   └── plans
│   │   └── servers
│   │       ├── keys
│   │       └── plans
│   └── prod
│       ├── network
│       │   └── plans
│       └── servers
│           ├── keys
│           └── plans
├── modules
│   ├── globalvars
│   ├── network
│   ├── nonprodvars
│   ├── prodvars
│   └── servers
├── routing
│   └── plans
├── s3_state
│   └── plans
└── vpc_peering
    └── plans
The modules should be run in the following order. Deviations from this order might cause unexpected results.
- s3_state (Creates all the buckets to be used)
- nonprod/network (Creates VPC, subnets, Internet and NAT gateways)
- nonprod/servers (Creates the EC2 instances, security groups, etc.)
- prod/network (Creates VPC and subnets)
- prod/servers (Creates the EC2 instances, security groups, etc.)
- vpc_peering (Takes VPC information from each environment to configure the peer link)
- routing (Creates routing tables and routes, then assigns them to subnets)
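The apply order above can be sketched as a small dry run. The wrapper below is hypothetical (not part of the repository); it only prints the commands it would run, in the required order, so you can review them or pipe them to `sh`:

```shell
# Hypothetical helper: print the terraform commands in the required apply order.
# The two servers modules additionally need -var-file=default.tfvars (see below).
apply_order() {
  for dir in \
      s3_state \
      environments/nonprod/network \
      environments/nonprod/servers \
      environments/prod/network \
      environments/prod/servers \
      vpc_peering \
      routing
  do
    extra=""
    case "$dir" in
      */servers) extra=" -var-file=default.tfvars" ;;
    esac
    echo "cd $dir && terraform init && terraform plan$extra && terraform apply$extra && cd -"
  done
}

apply_order
```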
- Browse to the s3_state directory.
- Run the `terraform init`, `terraform plan` and `terraform apply` commands to create 4 buckets: production, nonproduction, vpc peering, and routing.
- Browse to the environments/nonprod/network directory.
- Run the `terraform init`, `terraform plan` and `terraform apply` commands to create the nonproduction network infrastructure.
- Browse to the environments/nonprod/servers directory.
- Create a keys directory via the `mkdir keys` command.
- Run `ssh-keygen -t rsa -f keys/nonprod-key` to create a key pair for EC2 creation. Optionally, you may provide a passphrase for the private key.
- Run the `terraform init`, `terraform plan` and `terraform apply -var-file=default.tfvars` commands to create the nonproduction EC2 instances. Note the `-var-file=default.tfvars` argument, which contains some of the configuration for the EC2 instances.
- Save the Bastion public IP address shown at the end of the apply operation. It will be used later.
- Browse to the environments/prod/network directory.
- Run the `terraform init`, `terraform plan` and `terraform apply` commands to create the production network infrastructure.
- Browse to the environments/prod/servers directory.
- Create a keys directory via the `mkdir keys` command.
- Run `ssh-keygen -t rsa -f keys/prod-key` to create a key pair for EC2 creation. Optionally, you may provide a passphrase for the private key.
- Run the `terraform init`, `terraform plan` and `terraform apply -var-file=default.tfvars` commands to create the production EC2 instances. Note the `-var-file=default.tfvars` argument, which contains some of the configuration for the EC2 instances.
- Browse to the vpc_peering directory.
- Run the `terraform init`, `terraform plan` and `terraform apply` commands to create the VPC peering connection.
- Browse to the routing directory.
- Run the `terraform init`, `terraform plan` and `terraform apply` commands to create the route tables and routes across both VPCs.
Note: Alternatively, custom `.tfvars` files can be used as long as they adhere to the schema seen in `default.tfvars`.
Schema:

```hcl
config_input = [
  {
    # EC2 Instance Object 1
    "name"    : <EC2 instance name, string value>,
    "type"    : <EC2 instance type, string value>,
    "counter" : <Number of EC2 instances, integer value>,
    "az_name" : <Availability Zone name, string value>
  },
  {
    # EC2 Instance Object 2
    "name"    : <EC2 instance name, string value>,
    "type"    : <EC2 instance type, string value>,
    "counter" : <Number of EC2 instances, integer value>,
    "az_name" : <Availability Zone name, string value>
  }
]
```
Note: You may add more instance objects as needed, but if you want multiple copies of the same resource, increment the counter attribute instead of duplicating the object.
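For illustration, a custom `.tfvars` file following this schema might look like the example below. The names, instance type, and Availability Zones are example values, not defaults from the repository:

```hcl
config_input = [
  {
    # Two identical web servers in one AZ (counter deploys two copies)
    "name"    : "web",
    "type"    : "t2.micro",
    "counter" : 2,
    "az_name" : "us-east-1a"
  },
  {
    # A single database server in another AZ
    "name"    : "db",
    "type"    : "t2.micro",
    "counter" : 1,
    "az_name" : "us-east-1b"
  }
]
```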
- On an admin machine running OpenSSH, open a terminal window.
- Collect the Bastion public IP address (the `bastion_pub_ip` output shown after deploying the environments/nonprod/servers module).
- Collect the prod and nonprod private keys you've generated and place them in reachable locations.
- At the command prompt, run `ssh -i <private key path> ec2-user@<Bastion public IP> -L <local port>:<private instance IP>:<remote port>`
- An SSH tunnel has now been established with the Bastion EC2 instance, via which traffic to the private EC2 instances will traverse (see SSH Tunneling).
- To access websites on the nonprod private EC2 instances, set the `-L` values to `<local port>:<private instance IP>:80` and browse to `http://localhost:<local port>` via a web browser.
- To access SSH servers on any of the private EC2 instances, set the `-L` values to `<local port>:<private instance IP>:22` and use a separate CLI terminal window to run this command: `ssh -i <private key path> ec2-user@localhost -p <local port>`
- To access the "MySQL server" bonus, set the `-L` values to `<local port>:<private instance IP>:3306` and browse to `http://localhost:<local port>` via a web browser.
- Note: this is a simulation where the TCP port for MySQL has been configured as the listening port for a Python HTTP web server module, so browsing to the localhost address mentioned here will show a basic webpage listing a directory's contents.
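The tunnel invocations above all share one shape. The helper below is hypothetical (not part of the repository); it just assembles the command string from its parts, and the key path and IP addresses in the example are illustrative placeholders only:

```shell
# Hypothetical helper: compose the SSH tunnel command from its parts.
tunnel_cmd() {
  key="$1"      # path to the private key, e.g. keys/nonprod-key
  bastion="$2"  # Bastion public IP (the bastion_pub_ip output)
  lport="$3"    # local port to listen on
  target="$4"   # private IP of the target EC2 instance
  rport="$5"    # remote port: 80 (web), 22 (SSH), 3306 (MySQL bonus)
  echo "ssh -i $key ec2-user@$bastion -L $lport:$target:$rport"
}

# Example: forward local port 8080 to the web server on a private instance.
# 203.0.113.10 and 10.0.1.25 are example addresses, not real outputs.
tunnel_cmd keys/nonprod-key 203.0.113.10 8080 10.0.1.25 80
# → prints: ssh -i keys/nonprod-key ec2-user@203.0.113.10 -L 8080:10.0.1.25:80
```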
Destroying the deployed resources requires visiting the module directories in reverse order.
- Browse to the routing directory.
- Run the `terraform destroy` command to remove the route tables and routes across both VPCs.
- Browse to the vpc_peering directory.
- Run the `terraform destroy` command to remove the VPC peering connection.
- Browse to the environments/nonprod/servers directory.
- Run the `terraform destroy -var-file=default.tfvars` command to remove the nonproduction EC2 instances. Note the `-var-file=default.tfvars` argument, which contains some of the configuration for the EC2 instances.
- Browse to the environments/nonprod/network directory.
- Run the `terraform destroy` command to remove the nonproduction network infrastructure.
- Browse to the environments/prod/servers directory.
- Run the `terraform destroy -var-file=default.tfvars` command to remove the production EC2 instances. Note the `-var-file=default.tfvars` argument, which contains some of the configuration for the EC2 instances.
- Browse to the environments/prod/network directory.
- Run the `terraform destroy` command to remove the production network infrastructure.
- Browse to the s3_state directory.
- Run the `terraform destroy` command to remove the 4 buckets: production, nonproduction, vpc peering, and routing.
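As with the apply walkthrough, the teardown sequence can be sketched as a dry run. This hypothetical wrapper (not part of the repository) only prints the destroy commands in the order listed above:

```shell
# Hypothetical helper: print the terraform destroy commands in teardown order.
destroy_order() {
  for dir in \
      routing \
      vpc_peering \
      environments/nonprod/servers \
      environments/nonprod/network \
      environments/prod/servers \
      environments/prod/network \
      s3_state
  do
    extra=""
    case "$dir" in
      */servers) extra=" -var-file=default.tfvars" ;;
    esac
    echo "cd $dir && terraform destroy$extra && cd -"
  done
}

destroy_order
```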