Example FastAPI app in GKE

FAQ

What is it?

This is a reworked Python API backend from the great Full Stack FastAPI PostgreSQL template, intended to be deployed to a GKE cluster. The Python code was refactored, inspired by the excellent Architecture Patterns with Python. The app aims to implement some of the 12-factor app principles.

Basic features:

  • Python API backend:
    • FastAPI
    • SQLAlchemy ORM with a PostgreSQL DB
    • Cloud Logging integration (see the sketch after this list)
    • Cloud Pub/Sub integration (for Cloud Scheduler usage)
  • Terraform configuration for setting up the Google Cloud infrastructure
  • Helm charts for deployment into GKE
  • CI/CD using Cloud Build
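
For orientation, here is a minimal sketch of how the FastAPI app and the Cloud Logging integration fit together. It is illustrative only: the endpoint path, app title, and module name are assumptions, not the project's actual code.

# main.py - illustrative sketch, not the project's actual entry point.
import logging

import google.cloud.logging  # official Cloud Logging client library
from fastapi import FastAPI

# Attach the Cloud Logging handler so standard `logging` calls end up in Cloud Logging.
# Locally this needs GOOGLE_APPLICATION_CREDENTIALS; on GKE the cluster's credentials are used.
logging_client = google.cloud.logging.Client()
logging_client.setup_logging()

app = FastAPI(title="fastapi-terraform-gke-example")

@app.get("/api/v1/health")  # hypothetical endpoint, for illustration only
def health() -> dict:
    logging.getLogger(__name__).info("health check called")
    return {"status": "ok"}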

Why is it needed?

  • It is a self-education project: I wanted to understand how Terraform, Kubernetes, and the cloud-native approach work.
  • I did not find anything similar that combined an API, an architectural approach, and cloud-native deployment at once.
  • It is easy to get going: follow the instructions and get a minimal API deployed to a GKE cluster, with authentication and simple user management (yes, it really worked on a GKE cluster, but it was quite expensive).

What is its status?

The basic version is complete:

  • HTTP REST API endpoint
  • Deployed to a minimal GKE cluster
  • Pub/Sub pull subscriber listening for Cloud Scheduler jobs (a sketch of the listener follows below)
  • The HTTP server and the Pub/Sub listener sit within a single container and are managed by supervisord (separate containers would be more convenient and more appropriate for the cloud-native approach, but also more expensive)
  • Cloud Logging receives the logs (it is a bit confusing that several messages are pushed as errors; that is probably because of supervisord's default way of printing logs)
  • Emailing probably does not work: it is just a copy-paste from the original repo and has not been tested in GKE.
  • Sentry and Flower integration does not work either; their configuration should be ignored.

Please note: this configuration is pretty expensive for simple pet projects.
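
As mentioned in the list above, the container also runs a Pub/Sub pull subscriber that reacts to Cloud Scheduler messages. A rough sketch of such a listener using the official google-cloud-pubsub client is shown below; the project ID, subscription name, and handler body are placeholders, not the project's actual values.

# pubsub_listener.py - rough sketch of a pull subscriber; names are placeholders.
from google.cloud import pubsub_v1

PROJECT_ID = "project-name-314159"
SUBSCRIPTION_ID = "scheduler-jobs-sub"  # hypothetical subscription name

def handle(message: pubsub_v1.subscriber.message.Message) -> None:
    # Cloud Scheduler publishes a message on its schedule; do the periodic work here.
    print(f"Received scheduled job: {message.data!r}")
    message.ack()

def main() -> None:
    subscriber = pubsub_v1.SubscriberClient()
    subscription_path = subscriber.subscription_path(PROJECT_ID, SUBSCRIPTION_ID)
    streaming_pull_future = subscriber.subscribe(subscription_path, callback=handle)
    with subscriber:
        streaming_pull_future.result()  # block forever; supervisord keeps this process alive

if __name__ == "__main__":
    main()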

Requirements

Configuration

Copy docker/compose/.env.tpl to docker/compose/.env and fill in the necessary settings.

Backend local development

Running dev application locally

  • Start the stack with Docker Compose:

    docker-compose -f docker/compose/docker-compose.dev.yml up -d
  • Open your browser and interact with these URLs:

Note: the first time you start the stack, it might take a minute to become ready while the backend waits for the database and configures everything. You can check the logs to monitor it.
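
Below is a minimal sketch of the kind of wait-for-database check a backend can run at startup (the actual project may do this differently); the connection URI and retry parameters here are placeholders, not the project's real settings.

# wait_for_db.py - minimal startup check sketch; the DB URI is a placeholder.
import time

from sqlalchemy import create_engine, text

DATABASE_URI = "postgresql://postgres:changeme@db:5432/app"  # placeholder

def wait_for_db(retries: int = 60, delay: float = 1.0) -> None:
    engine = create_engine(DATABASE_URI)
    for attempt in range(1, retries + 1):
        try:
            with engine.connect() as conn:
                conn.execute(text("SELECT 1"))  # cheap connectivity check
            return
        except Exception:
            print(f"Database not ready (attempt {attempt}/{retries}), retrying...")
            time.sleep(delay)
    raise RuntimeError("Database did not become available in time")

if __name__ == "__main__":
    wait_for_db()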

To check the logs, run:

docker-compose logs

To check the logs of a specific service, add the name of the service, e.g.:

docker-compose logs backend

To rebuild the app container, run:

docker-compose -f docker/compose/docker-compose.dev.yml build

Please note that there is another Dockerfile, docker/backend.dockerfile, and compose configuration, docker/compose/docker-compose.yml; they are for production (and for running the production setup locally).

General development workflow

  1. Dependencies are managed with Poetry; go to its site and install it.

  2. Go to the project root and install all the dependencies with:

    $ poetry install
  3. Start a shell session with the new environment:

    $ poetry shell
  4. Open your editor and make sure your editor uses the environment you just created with Poetry.

Docker Images

Unlike the original template, this one has two separate Dockerfiles and docker-compose configurations:

  • docker/backend.dev.dockerfile - development configuration with hot reload (compose file: docker/compose/docker-compose.dev.yml)
  • docker/backend.dockerfile - production configuration (compose file: docker/compose/docker-compose.yml)

To get inside the container with a bash session, start the stack with:

$ docker-compose up -d

and then exec inside the running container:

$ docker-compose exec backend bash

You should see an output like:

root@7f2607af31c3:/app#

which means you are in a bash session inside your container, as the root user, in the /app directory.

Testing

To test the app from the dev environment, go to the project root and run:

$ pytest

To run the local tests with coverage reports:

$ pytest --cov=.
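
For reference, tests in this kind of setup typically exercise the API through FastAPI's TestClient. The import path and endpoint below are illustrative assumptions, not the project's actual names.

# test_api.py - illustrative test; adjust the import to the actual application module.
from fastapi.testclient import TestClient

from src.main import app  # hypothetical module path

client = TestClient(app)

def test_openapi_schema_available():
    # The auto-generated OpenAPI schema should be served by the app.
    response = client.get("/openapi.json")  # path may differ if a custom openapi_url is set
    assert response.status_code == 200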

Code style and static checks

Run code formatter:

$ black .

Run linters:

$ flake8
$ pylint src/

Run static type checker:

$ mypy src/

Migrations

During local development your app directory is mounted as a volume inside the container, so you can run migrations with alembic commands inside the container and the migration code will end up in your app directory (instead of only inside the container). This way you can add it to your git repository.

Make sure you create a "revision" of your models and "upgrade" your database with that revision every time you change them, as this is what updates the tables in your database. Otherwise, your application will have errors.

  • Start an interactive session in the backend container:
$ docker-compose exec backend bash
  • After changing a model (for example, adding a column; see the model sketch after this list), create a revision inside the container, e.g.:
$ alembic revision --autogenerate -m "Add column last_name to User model"
  • Commit the files generated in the alembic directory to the git repository.

  • After creating the revision, run the migration in the database (this is what will actually change the database):

$ alembic upgrade head
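
To make the revision example above concrete, a change like "Add column last_name to User model" corresponds to editing the SQLAlchemy model roughly as follows. This fragment is illustrative; the real User model and declarative base live in the project's source, and the other fields are assumptions.

# models.py - illustrative fragment only; field names other than last_name are assumptions.
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "user"

    id = Column(Integer, primary_key=True, index=True)
    email = Column(String, unique=True, index=True, nullable=False)
    last_name = Column(String, nullable=True)  # new column picked up by `alembic revision --autogenerate`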

If you don't want to start with the default models and want to remove/modify them from the beginning without having any previous revision, remove the revision files (.py Python files) under ./alembic/versions/. Then create a first migration as described above.

After completing the first migration, initial data can be pre-filled using the API endpoint:

POST %domain:port%/api/v1/basic_utils/test-pubsub/prefill_db
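
For example, the call can be made with a few lines of Python (requests is used here purely for illustration; the host and port, and whether the endpoint requires authentication, depend on your deployment):

# prefill.py - illustrative call to the prefill endpoint; the base URL is a placeholder.
import requests

BASE_URL = "http://localhost:8000"  # replace with your %domain:port%

response = requests.post(f"{BASE_URL}/api/v1/basic_utils/test-pubsub/prefill_db")
print(response.status_code, response.text)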

Setting up a CI/CD pipeline using GitHub and GKE

Note: I use project-name-314159 as the project ID in Google Cloud.

Articles used while setting up the process:

Setting up infrastructure with Terraform

Assume that the project and billing have already been set up.

Install the Cloud SDK and kubectl, then initialize the SDK:

gcloud init

This gcloud configuration has been called [fastapi-gke].

Use Terraform to roll out a cluster:

wget https://releases.hashicorp.com/terraform/0.14.8/terraform_0.14.8_linux_amd64.zip
unzip terraform_0.14.8_linux_amd64.zip
sudo mv terraform /opt/terraform
sudo ln -s /opt/terraform /usr/local/bin/terraform

Enable the Google Cloud APIs that will be used:

gcloud services enable compute.googleapis.com
gcloud services enable servicenetworking.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
gcloud services enable container.googleapis.com
gcloud services enable sqladmin.googleapis.com
gcloud services enable containerregistry.googleapis.com
gcloud services enable logging.googleapis.com
gcloud services enable pubsub.googleapis.com
gcloud services enable cloudscheduler.googleapis.com
gcloud services enable appengine.googleapis.com

Then create a service account named terraform-gke:

gcloud iam service-accounts create terraform-gke

Now grant the necessary roles for our service account to create a GKE cluster and the associated resources:

gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/container.admin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/compute.admin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/iam.serviceAccountAdmin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/resourcemanager.projectIamAdmin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/cloudsql.admin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/storage.admin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/logging.admin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/pubsub.admin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/cloudscheduler.admin
gcloud projects add-iam-policy-binding project-name-314159 --member serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com --role roles/appengine.appAdmin

Finally, create and download into the current directory a key file that Terraform will use to authenticate as the service account against the Google Cloud Platform API:

gcloud iam service-accounts keys create terraform-gke-keyfile.json --iam-account=terraform-gke@project-name-314159.iam.gserviceaccount.com

The Terraform configuration is stored in the terraform/ directory. To work with it, cd into it and copy terraform-gke-keyfile.json inside.

It is also recommended to create a dedicated GCS bucket named fastapi-terraform-gke-example-tf-gke for the Terraform state of that configuration:

gsutil mb -p project-name-314159 -c regional -l us-east1 gs://fastapi-terraform-gke-example-tf-gke/

then activate versioning:

gsutil versioning set on gs://fastapi-terraform-gke-example-tf-gke/

and grant read/write permissions to the service account:

gsutil iam ch serviceAccount:terraform-gke@project-name-314159.iam.gserviceaccount.com:legacyBucketWriter gs://fastapi-terraform-gke-example-tf-gke/

Create an App Engine app in order to use Cloud Scheduler (creating it with Terraform requires the "Owner" role, which I would like to avoid):

gcloud app create --region=us-east1

Configure the GKE cluster appropriately; variable values are set in variables.auto.tfvars (there is a template file).

Then run

cd terraform
terraform init
terraform plan

and if everything in the plan looks ok

terraform apply

Note: there could be errors with Cloud SQL user creation; they were fixed by re-running terraform apply.

To destroy any created resources, run

terraform destroy

If any resources are removed manually, the Terraform state can get out of sync, and manual state cleanup might be needed:

terraform state rm "%resource name%"

When Terraform is done, we can check the status of the cluster and configure the kubectl command line tool to connect to it with:

gcloud container clusters list
gcloud container clusters get-credentials gke-cluster --region=us-east1

Setting up deployment with Helm

The resources described in tiller.yaml allow the tiller pod to create resources in the cluster; apply it with:

kubectl apply -f tiller.yaml

kubectl is already configured, so it can create the service account for tiller.

Build the production backend image:

cd %project%
docker build -f docker/backend.dockerfile -t gcr.io/project-name-314159/fastapi-terraform-gke-example .

To push the image, one needs to add GCR to the Docker config (for Linux-based systems):

gcloud auth configure-docker

Then push created image:

docker push gcr.io/project-name-314159/fastapi-terraform-gke-example

Fill in the project settings and secrets (there is a template). Deploy the project secrets:

kubectl create secret generic fastapi-terraform-gke-example --from-env-file=secrets.txt

Then deploy the chart:

cd kubernetes
helm install fastapi-terraform-gke-example ./

Get the IP address of the ingress (it may take some time to be assigned):

kubectl get ingresses

Check the reserved IP address:

gcloud compute addresses describe global-cluster-ip --global

To uninstall the chart, run:

helm delete fastapi-terraform-gke-example

For debugging:

kubectl get pods
kubectl logs %pod-name% -c %container-name%
kubectl describe pod %pod-name%

Setting up CI/CD

Configure the repository for the Cloud Build pipeline according to the documentation, then set up triggers for the CI/CD pipeline.

Continuous Integration

Configuration is set in cloudbuild-ci.yaml.

Go to the trigger page and follow the official documentation.

Create a trigger with the following settings:

  • Name: fastapi-terraform-gke-example-ci
  • Event: Pull request
  • Configuration: Cloud Build configuration file
  • Cloud Build configuration file location: /cloudbuild-ci.yaml

The trigger (linting and running tests) will fire on every pull request.

Continuous Deployment

Build and push the custom Helm builder image:

cd %somewhere%
git clone https://github.com/GoogleCloudPlatform/cloud-builders-community.git
cd cloud-builders-community/helm
docker build -t gcr.io/project-name-314159/helm .
docker push gcr.io/project-name-314159/helm

Set the appropriate values in cloudbuild.yaml:

  _CUSTOM_REGION: us-east1
  _CUSTOM_CLUSTER: gke-cluster

Go to the trigger page and follow the official documentation.

Create a trigger with the following settings:

  • Name: fastapi-terraform-gke-example-deploy-helm
  • Event: Push new tag
  • Tag (regex): v.*
  • Configuration: Cloud Build configuration file
  • Cloud Build configuration file location: /cloudbuild.yaml

Open Cloud Build -> Settings -> Service account permissions and enable "Kubernetes Engine Developer" for the project service account.

The trigger (building a new container and deploying it) will fire on every new tag; just run:

$ git tag v.x.y.z
$ git push origin main --tags

Troubleshooting

Local application logs are written to application.log (rotated).

Cloud logs can be viewed in the Logging dashboard using these filters:

resource.type="k8s_container"
resource.labels.project_id="project-name-314159"
resource.labels.cluster_name="gke-cluster"
resource.labels.container_name=~"fastapi-terraform-gke-example*"
