
CloudCore Pipeline

Tech Stack

Terraform · AWS · GitHub Actions · Docker · Discord · Shell Script


This repository is the blueprint for a complete, production-grade cloud automation framework. I didn't just follow a tutorial; I built this from the ground up to solve a real problem: making deployments fast, safe, and completely hands-off.

The philosophy is simple: a git push should be a release, not the start of a nervous, multi-hour manual checklist.

This project achieves that by treating both the application and the infrastructure as code. The entire lifecycle—from a developer committing a line of code to that change being live, tested, and monitored on the internet—is handled by a series of intelligent, automated workflows I built using Terraform and GitHub Actions.


How It Works

This project isn't just a simple deployment script. It's a multi-stage system with checks and balances built in:

  • Infrastructure as Code (IaC): The entire AWS environment (S3 buckets, CloudFront CDN, IAM roles, and CloudWatch monitoring) is defined as code using Terraform. There's no manual setup required.

  • Automated CI/CD: When code is pushed to the main branch, a GitHub Actions workflow kicks off. This workflow handles everything from testing to deployment to sending notifications.

  • Quality Gates: Before any code gets deployed, an HTML validation test runs automatically. If the test fails, the pipeline stops, preventing a bad release (a local version of this check is sketched after this list).

  • Infrastructure CI: I also built a separate pipeline for the Terraform code itself. When a Pull Request is opened that changes the infrastructure, it automatically runs a terraform plan and posts the output as a comment. This lets me review the exact impact of a change before it's merged (a rough CLI equivalent is sketched after this list).

  • Monitoring & Alerting: Once deployed, the site doesn't just run in the dark (see the alarm check sketched after this list).

    • CloudWatch Alarms are set up to watch for spikes in server or client errors (5xx/4xx).
    • If an alarm is triggered, SNS Notifications send an alert email.
    • A custom CloudWatch Dashboard gives a clear overview of the site's health.

  • Post-Deployment Canary: After a successful deployment, a final "canary" job uses Playwright to visit the live website and verify that the main headline is correct. This is a crucial final check to make sure the deployment actually worked (a simplified shell stand-in is sketched after this list).

  • Notifications: The pipeline reports its status (success, failure, and canary health) to a Discord channel, so I always know what's going on (the webhook call itself is sketched below).
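
A few of these stages are easy to illustrate from a terminal. For the HTML quality gate, a local pre-check might look like the sketch below; it assumes HTML Tidy is installed, and the workflow itself may use a different validator.

    # Minimal local version of the quality gate (assumes HTML Tidy is installed;
    # the pipeline's actual validator may differ).
    tidy -q -e index.html || { echo "HTML validation failed - fix before pushing"; exit 1; }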
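The infrastructure CI job runs inside GitHub Actions, but a rough equivalent of "plan it and post it on the PR" can be reproduced with the GitHub CLI. The PR number and file name below are placeholders.

    # Generate a readable plan and attach it to an open pull request (PR 42 is a placeholder).
    cd terraform
    terraform init -input=false
    terraform plan -no-color > plan.txt
    gh pr comment 42 --body-file plan.txt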
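Once the stack is up, alarm state can also be checked from the terminal. This assumes the AWS CLI is authenticated against the same account and region as the Terraform deployment.

    # List any CloudWatch alarms that are currently firing (empty output means all clear).
    aws cloudwatch describe-alarms --state-value ALARM \
        --query 'MetricAlarms[].AlarmName' --output table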
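The real canary job drives Playwright against the live site; as a much simpler stand-in, the same "is the headline there?" idea can be expressed in shell. The URL and expected text are placeholders.

    # Simplified canary: fetch the live page and confirm the expected headline text is present.
    SITE_URL="https://dxxxxxxxxxxxx.cloudfront.net"   # placeholder - your CloudFront URL
    EXPECTED="CloudCore"                              # placeholder - text the headline should contain
    if curl -fsSL "$SITE_URL" | grep -q "$EXPECTED"; then
        echo "Canary passed: headline found"
    else
        echo "Canary failed: headline missing" >&2
        exit 1
    fi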
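Finally, the Discord notifications are just an HTTP POST to a webhook URL, so the status messages can be tested by hand. DISCORD_WEBHOOK_URL is the same value that later gets stored as a GitHub secret.

    # Send a simple status message to the Discord channel behind the webhook.
    curl -fsS -H "Content-Type: application/json" \
         -d '{"content": "CloudCore pipeline: deployment succeeded"}' \
         "$DISCORD_WEBHOOK_URL"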


Project Showcase

The full pipeline in action: Test -> Deploy -> Canary Health Check.

Successful Pipeline Run

The infrastructure CI pipeline commenting on a Pull Request with a terraform plan.

Terraform Plan in PR

Real-time status updates are sent to Discord, including the canary health report.

Discord Notifications

The final result: a live website deployed and monitored by the pipeline.

Live Website


How to Run This Project

Here’s a guide to getting this pipeline up and running yourself. I've fought the bugs so you don't have to.

  1. Prerequisites: You'll need an AWS account, Terraform, and Git installed.

  2. Clone the Repository:

    git clone https://github.com/Ayushmore1214/K-Stack.git
    cd K-Stack
  3. Configure Variables: Open terraform/variables.tf. You'll need to change the default values for project_name (it's used to name the S3 bucket, so it must be globally unique) and alert_email.

  4. Set Up AWS Credentials: Make sure your terminal is authenticated with AWS.

    aws configure
    • Important Note for Codespaces/Cloud IDEs: These environments often inject temporary AWS credentials through environment variables, which can override your configured profile. You'll likely need to run the following commands before every apply or destroy to clear them out:
      unset AWS_ACCESS_KEY_ID
      unset AWS_SECRET_ACCESS_KEY
      unset AWS_SESSION_TOKEN
  5. Deploy the Infrastructure: Navigate to the Terraform directory and run the commands to build the AWS resources.

    cd terraform
    terraform init
    terraform apply --auto-approve

    This will output the keys and IDs you need for the next step.

  6. Configure GitHub Secrets: In your own fork of this repository, go to Settings > Secrets and variables > Actions and add the following secrets. Use the outputs from the terraform apply command (or script it with the GitHub CLI, as sketched after these steps).

    • AWS_ACCESS_KEY_ID
    • AWS_SECRET_ACCESS_KEY
    • S3_BUCKET_NAME
    • CLOUDFRONT_ID
    • SITE_URL
    • DISCORD_WEBHOOK_URL (You can get this from your Discord server's Integrations > Webhooks settings)
  7. Confirm the Alert Email: Check your inbox for an email from "AWS Notification." You have to click the confirmation link inside to start receiving SNS alerts.

  8. Trigger the Pipeline: Commit and push a change to the main branch. This will kick off your first run.

    git push origin main
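
A tip for step 6: if you'd rather not copy the Terraform outputs into GitHub by hand, the GitHub CLI can set the secrets for you from inside a clone of your fork. The output names below are assumptions; run terraform output to see the names your configuration actually defines.

    # Run from the terraform/ directory after `terraform apply`.
    # Output names (s3_bucket_name, cloudfront_id, site_url) are assumptions - adjust to match `terraform output`.
    gh secret set S3_BUCKET_NAME --body "$(terraform output -raw s3_bucket_name)"
    gh secret set CLOUDFRONT_ID --body "$(terraform output -raw cloudfront_id)"
    gh secret set SITE_URL --body "$(terraform output -raw site_url)"
    gh secret set AWS_ACCESS_KEY_ID       # prompts for the value interactively
    gh secret set AWS_SECRET_ACCESS_KEY   # prompts for the value interactively
    gh secret set DISCORD_WEBHOOK_URL     # prompts for the webhook value interactively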
