- This repo mainly covers building a serverless environment with AWS Lambda and AWS Fargate.
- Purpose : lower the cost
- This demo sets up an API that stores some random data in AWS RDS (MySQL)
- Flow :

    Client enters URL
    |── Call ──> AWS API Gateway
    |── Trigger ──> AWS Lambda
    |── Insert random data ──> DB
    |── Trigger ──> AWS Fargate
    |── Insert random data ──> DB
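The Lambda step in the flow above can be sketched as a handler that inserts one random row. This is a sketch only: the table name `demo`, its columns, and the RDS connection details are placeholders you would replace with your own.

```python
import json
import random
import string


def random_row():
    """Generate a random (name, value) pair to insert into the demo table."""
    name = "".join(random.choices(string.ascii_lowercase, k=8))
    value = random.randint(0, 9999)
    return name, value


def lambda_handler(event, context):
    # Placeholder connection details -- replace with your RDS endpoint/credentials.
    import pymysql
    conn = pymysql.connect(host="your-rds-endpoint", user="admin",
                           password="secret", database="demo")
    try:
        name, value = random_row()
        with conn.cursor() as cur:
            cur.execute("INSERT INTO demo (name, value) VALUES (%s, %s)",
                        (name, value))
        conn.commit()
    finally:
        conn.close()
    return {"statusCode": 200, "body": json.dumps({"inserted": name})}
```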
- Package the code with Docker
- Two choices for pushing the container to a public registry:
    - Docker Hub (ref: HoiDam/PythonIPYNB_Playground)
    - AWS ECR (no ref here; not yet tried)
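A minimal Dockerfile for packaging the Fargate task might look like the following sketch; the script name `task.py` and the Python 3.7 base image are assumptions for illustration.

```dockerfile
FROM python:3.7-slim
WORKDIR /app
# Install dependencies (e.g. pymysql) listed in requirements.txt
COPY requirements.txt .
RUN pip install -r requirements.txt
# Copy the task code and run it when the container starts
COPY task.py .
CMD ["python", "task.py"]
```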
- A Task Definition stores the container configuration that is pulled each time a task is triggered. Setup procedure:
- Enter the front page of AWS ECS
- Go to Task Definitions
- Create a new Task Definition
- Choose Fargate
- Type your task definition name
- Go to Container Definitions and click "Add container"
- Click Create
- A Cluster is where all your tasks run and where their logs are saved. Setup procedure:
- Go to Clusters
- Click "Networking only" and "next step"
- Type your cluster name
- Click create
- Get AWS access keys for the program to run
- The keys need certain permissions. Setup procedure:
- Go to AWS IAM
- Go to Users
- Click "Add user"
- Enter Username
- Choose access type : Programmatic access
- Choose "Attach existing policies directly" and select the policy below:
AmazonECS_FullAccess
- Skip the remaining steps and click "Create user"
- Click the user that you created
- Go to security credentials
- Create access key
- Copy the keys to your secret file (DO NOT EXPOSE IT)
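One way to keep the keys out of your code is a small JSON secrets file that is excluded from version control. A sketch, where the file name `secrets.json` and the key names are assumptions:

```python
import json


def load_aws_keys(path="secrets.json"):
    """Read the access/secret key pair from a local JSON secrets file.

    Expected file content (illustrative):
        {"aws_access_key_id": "...", "aws_secret_access_key": "..."}
    """
    with open(path) as f:
        data = json.load(f)
    return data["aws_access_key_id"], data["aws_secret_access_key"]
```

Usage: `key, pw = load_aws_keys()`, then pass them to `boto3.client(...)` as shown below. Remember to add the secrets file to `.gitignore`.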
- Using the boto3/pymysql libraries
- Create the client connection:

    import boto3

    client = boto3.client(
        "ecs",
        aws_access_key_id=key,
        aws_secret_access_key=pw,
        region_name="ap-east-1",
    )

    # key / pw    = the access + secret keys saved in your secret file
    # region_name = your current AWS region code (e.g. ap-east-1)
    response = client.run_task(
        cluster='fargatetest',            # name of the cluster you created
        launchType='FARGATE',
        taskDefinition='fargatetest:1',   # your task definition name and revision
        count=1,
        platformVersion='LATEST',
        networkConfiguration={
            'awsvpcConfiguration': {
                'subnets': [
                    'subnet-123456',      # replace with your public subnet, or a private one with NAT
                    'subnet-233456',      # second is optional, but it is a good idea to have two
                ],
                'securityGroups': [
                    'sg-123456'
                ],
                # Docker Hub container: ENABLED ; AWS ECR: DISABLED
                'assignPublicIp': 'ENABLED',
            }
        }
    )

    # cluster              = the cluster name you created
    # taskDefinition       = the task definition name you created
    # networkConfiguration = use the same subnets as your AWS RDS instance
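run_task returns a response dict; a small helper can pull out the started task's ARN and surface any failures instead of silently ignoring them. This follows the documented ECS response shape (`tasks` and `failures` lists):

```python
def started_task_arn(response):
    """Return the ARN of the first started task, or raise if ECS reported a failure."""
    if response.get("failures"):
        raise RuntimeError(f"run_task failed: {response['failures']}")
    return response["tasks"][0]["taskArn"]
```

Usage: `task_arn = started_task_arn(response)` right after the `client.run_task(...)` call above.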
Credit : Bwong951
- server.py is an AWS Lambda wrapper written for local testing
- Run

    python server.py

  to test your code written in lambda_handler.py
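A minimal server.py-style harness could look like the sketch below: it builds a fake API Gateway HTTP API event and calls the handler directly, so you can test without deploying. The event shape and the `echo_handler` stand-in are assumptions for illustration, not the repo's actual server.py.

```python
def invoke_locally(handler, category="demo", action="insert"):
    """Build a fake API Gateway HTTP API event and invoke the handler locally.

    A real invocation would also carry headers, body, etc.; this is the
    minimum needed to exercise path-parameter handling.
    """
    event = {
        "pathParameters": {"category": category, "action": action},
        "requestContext": {"http": {"method": "POST"}},
    }
    return handler(event, None)  # no real Lambda context when running locally


# Example stand-in handler that just echoes the path parameters:
def echo_handler(event, context):
    return {"statusCode": 200, "body": str(event["pathParameters"])}
```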
Setup procedure:
- Enter the front page of AWS Lambda
- Click on "Create Function"
- Select "Author from scratch", type a function name and select "Python 3.7" for "Runtime". Then click "Create Function"
- Click "Edit" on the "Basic settings" panel.
- Type "lambda_handler.handler" under the "Handler" field
- Click "Save"
Setup procedure:
- Enter the front page of your Lambda function
- Select the same VPC as your database instance under the "VPC" field
- Select the same subnets under as your database under the "Subnets" field
- Select the same security group as your database under the "Security groups" field
  (note that if the security group does not allow inbound connections from itself, you cannot access the API)
Setup procedure:
- Under the VPC panel, navigate to the "Security Groups" page and click on the security group used by the target RDS.
- Under "Inbound Rules", click "Edit Inbound Rules".
- Add a rule with "MYSQL/Aurora" (or "All Traffic"; I have not tested with "MYSQL/Aurora" yet) as "Type",
  set "Source" to "Custom", and set the value to the Security Group ID of the current security group (e.g. sg-xxxxxx).
Setup procedure:
- Under the VPC panel, navigate to the "Security Groups" page and click on the security group used by the target RDS.
- Under "Inbound Rules", click "Edit Inbound Rules".
- Add a rule with "MYSQL/Aurora" as "Type", and set "Source" as "Custom" and set the value as the Elastic IP of that NAT Gateway.
- Click "Save rules".
Setup procedure:
- Enter the front page of your Lambda function
- Click "Edit" on the "Basic settings" panel
- Click on the blue text "View the role" to enter the summary page of this role
- Attach a policy to the role that allows the following EC2 actions:
    ec2:CreateNetworkInterface
    ec2:DescribeNetworkInterfaces
    ec2:DeleteNetworkInterface (optional)
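Note that these are IAM actions rather than managed policy names; they can be granted as an inline policy like the sketch below (alternatively, attaching the managed AWSLambdaVPCAccessExecutionRole policy grants the same actions).

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:CreateNetworkInterface",
      "ec2:DescribeNetworkInterfaces",
      "ec2:DeleteNetworkInterface"
    ],
    "Resource": "*"
  }]
}
```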
Setup procedure (HTTP API):
- Create an HTTP API under the API Gateway console, select the lambda function for this project as the integration.
- Configure the default route method as POST.
- Set the resource path as /{category}/{action}
- Go to the CORS panel of your API.
- Click "Edit".
- Type "*" under Access-Control-Allow-Origin and click "Add".
- Type "*" under Access-Control-Allow-Headers and click "Add".
- Pick POST and GET under "Access-Control-Allow-Methods".
- Click "Save".
Since Lambda does not provide a server on which to install packages, you need to upload your code together with its library files.
- Remarks:
    There is a limit on the upload file size (10 MB). If your package is larger, push it to AWS S3 first and link the URL.