Instructions for setting up CKAN infrastructure on AWS
- Terraform uses the AWS CLI; install it: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-install.html
- Make sure you configure the CLI: https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
- Install Terraform: https://learn.hashicorp.com/terraform/getting-started/install.html
Quick reference on macOS with Homebrew:

```
brew install terraform
```
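As an optional sanity check, these commands confirm both tools are on your PATH and let you set up CLI credentials:

```
aws --version
terraform version
aws configure   # prompts for access key ID, secret key, default region, and output format
```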
- In a console, move to the `/infrastructure` directory
- Copy `terraform.tfvars.example` to `terraform.tfvars` and fill it out with your choices
- Run `terraform init` to initialize local Terraform state
- After any infrastructure changes, run `terraform apply` (the full sequence is summarized below)
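The full sequence, assuming you start from the repository root:

```
cd infrastructure
cp terraform.tfvars.example terraform.tfvars
# edit terraform.tfvars with your choices, then:
terraform init
terraform apply
```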
- Assume that your domain is hosted in Route 53
- Get the Hosted Zone ID, such as `A5GJ576DARR2YZ` (a CLI lookup follows this list)
- Update `hosted_zone` in your local `terraform.tfvars`
- This will create a DNS record used to generate an SSL certificate for CKAN
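If you prefer the CLI to the Route 53 console, this standard AWS CLI command lists your zones and their IDs. Note the `Id` field is returned with a `/hostedzone/` prefix; drop the prefix when filling in `hosted_zone`:

```
aws route53 list-hosted-zones --query 'HostedZones[].{Name: Name, Id: Id}' --output table
```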
- The administrative IP list takes CIDR block ranges in the format `a.b.c.d/z`; see https://en.wikipedia.org/wiki/Classless_Inter-Domain_Routing
- If you want to add a single IP address, the suffix is `/32`, i.e. `1.2.3.4/32`
- Add all wanted IPs to your `terraform.tfvars` file (see the sketch after this list)
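A hedged `terraform.tfvars` sketch: `hosted_zone` is named in this document, but the allowlist variable name used here (`admin_cidrs`) is a placeholder — check `terraform.tfvars.example` for the real names:

```
# "admin_cidrs" is a hypothetical name; use the variable names
# from terraform.tfvars.example.
hosted_zone = "A5GJ576DARR2YZ"   # your Route 53 Hosted Zone ID

admin_cidrs = [
  "203.0.113.0/24",   # an example office range
  "198.51.100.7/32",  # a single administrative IP
]
```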
OpsWorks manages public SSH keys on instances for access by your team. Instances in the autoscale group are added to OpsWorks.
- (Note: you must connect from one of the administrative IPs.)
- Import the IAM users you want to give access to into OpsWorks (https://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-users-manage-import.html)
- If you don't already have a public/private keypair set up, create one using this guide; adding the SSH key to the agent is optional (a sample invocation follows this list): https://help.github.com/en/articles/generating-a-new-ssh-key-and-adding-it-to-the-ssh-agent#generating-a-new-ssh-key
- Copy and paste the contents of the public SSH key (NOT the private `key.pem` file) into the OpsWorks user (https://docs.aws.amazon.com/opsworks/latest/userguide/security-settingsshkey.html)
- Grant the user SSH access to the instances in OpsWorks
- Get SSH connection instructions from the OpsWorks web console
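If you are generating a fresh keypair per the GitHub guide linked above, a typical invocation looks like this:

```
ssh-keygen -t ed25519 -C "you@example.com"
# paste the .pub contents into your OpsWorks user's SSH key field:
cat ~/.ssh/id_ed25519.pub
```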
EFS (Elastic File System) is used to store files that are uploaded to CKAN and any site configuration changes. The directory is mounted on all ECS cluster member hosts at `/mnt/efs/`; hosts reading the contents of the directory are pointed to a network volume shared among all hosts.

The mounted EFS directory must be owned by user/group ID 92, which is `ckan` in the container. Without this, the application inside the container cannot write to the mounted EFS volume on the host. SSH in to any ECS host (listed in OpsWorks) and run:

```
sudo chown -R 92:92 /mnt/efs/ckan
```

This only needs to be done once per EFS filesystem.
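To confirm the change took effect, list the directory with numeric IDs; the owner and group columns should both read 92:

```
ls -ln /mnt/efs/ckan
```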
The RDS (Relational Database Service) database needs to be initialized with a user and database. Terraform will generate an empty database. Log in to the database using a SQL tool (such as pgAdmin) and run the SQL in `templates/rds-bootstrap.sh` to create the `datastore_ro` role and the `datastore` database.
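A minimal sketch of what the bootstrap amounts to — the authoritative statements live in `templates/rds-bootstrap.sh`, and the endpoint, user, and password below are placeholders, not values from this repo:

```
# Sketch only: run the real SQL from templates/rds-bootstrap.sh.
# Connection values and the password are placeholders (assumptions).
psql "host=<your-rds-endpoint> user=<master-user> dbname=postgres" <<'SQL'
CREATE ROLE datastore_ro WITH LOGIN PASSWORD 'change-me';
CREATE DATABASE datastore;
SQL
```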
(This step is optional: by default, Terraform stores state locally, wherever the terraform binary runs.)
- Rename `backend.tf.example` to `backend.tf`
- Manually create an S3 bucket to store state; Terraform recommends enabling versioning on the bucket
- Manually create a DynamoDB table to store lock state; the table's primary partition key MUST BE `LockID` (String)
- Specify the S3 bucket and DynamoDB table in `backend.tf` (see the sketch after this list)
- Ensure your IAM user has access to the S3 bucket and DynamoDB table
- Run `terraform init`
- Your state and locks are now stored remotely
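A minimal `backend.tf` sketch using Terraform's standard S3 backend; the bucket name, key, region, and table name here are placeholders for whatever you created above:

```
terraform {
  backend "s3" {
    bucket         = "my-ckan-terraform-state"  # placeholder bucket name
    key            = "ckan/terraform.tfstate"   # placeholder state path
    region         = "us-east-1"                # assumption: use your region
    dynamodb_table = "terraform-locks"          # placeholder table name
    encrypt        = true
  }
}
```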