fix typos, grammar #9

Open
wants to merge 1 commit into base: main
58 changes: 29 additions & 29 deletions README.md
@@ -6,7 +6,7 @@ Real-time advertising platforms running bidding, ad serving, and verification wo

The Real-Time Bidding on AWS Solution is a deployable reference architecture demonstrating the "art of the possible" and enabling DSPs to jumpstart development of innovative smart bidding services. Using the Real-Time Bidding on AWS Solution, demand side partners can rapidly deploy and build upon an open source architecture to enable the fast assessment of ad opportunities at scale, and focus on the bid assessment rather than non-differentiated processing logistics.

The Real-Time Bidder Solution on AWS consist of 5 modules:
The Real-Time Bidder Solution on AWS consists of 5 modules:

1. **Data generator** to generate synthetic data for the device, campaign, budget, and audience. The size of each table can be defined in the configuration file; it is recommended that 1 billion devices be generated for the devices table, 1 million campaigns and associated budgets, and one hundred thousand audiences.

@@ -16,7 +16,7 @@ The Real-Time Bidder Solution on AWS consist of 5 modules:

4. **Data repository** for the device, campaign, budget, and audience data. You can choose between DynamoDB and Aerospike.

5. **Data Pipeline** that receives, aggregates and writes the bid request and the bid request response to a data repository, and /6 a set of tools to ingest the metrics generated by the bidder and display the results for evaluation.
5. **Data Pipeline** that receives, aggregates and writes the bid request and the bid request response to a data repository, and a set of tools to ingest the metrics generated by the bidder and display the results for evaluation.

6. **Monitoring Tools**: Grafana and Prometheus on the EKS cluster to collect and display application metrics such as bid requests per second and latency.

@@ -36,7 +36,7 @@ The Real-Time Bidder Solution on AWS consist of 5 modules:

* Administrator and Programmatic access
* Git security credentials.
* If you already are using cloud9 in the AWS Account you also have AWSCloud9SSMInstanceProfile policy in your IAM Policy and you don't have to create it again. Comment AWSCloud9SSMInstanceProfile object in deployment/infrastructure/templates/cloud9.yaml file (line number 40)
* If you already are using Cloud9 in the AWS Account you also have AWSCloud9SSMInstanceProfile policy in your IAM Policy and you don't have to create it again. Comment AWSCloud9SSMInstanceProfile object in deployment/infrastructure/templates/cloud9.yaml file (line number 40)

### 4. Service Limits -

@@ -100,7 +100,7 @@ cdk deploy
git clone <URL>
```

11. You have clone the empty repo that was created as part of the CDK deployment. Now, copy the contents of the downloaded repo in step 1 to the new cloned codecommit repo and commit the changes.
11. You have cloned the empty repo that was created as part of the CDK deployment. Now, copy the contents of the downloaded repo in step 1 to the new cloned codecommit repo and commit the changes.

```
git add .
@@ -113,17 +113,17 @@ git push

13. Once the deployment is complete, go to the CloudFormation console and navigate to the root stack (this will be the stack with the name that you provided in the cdk.json file in step 4). Go to the Outputs tab and copy `ApplicationStackName`, `ApplicationStackARN`, `EKSAccessRoleARN`, `EKSWorkerRoleARN`, `Cloud9IDE-URL`, and `Cloud9EnvID`. We will be using them in the next steps.
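If you prefer the AWS CLI to the console, you can also list these outputs with a command along the following lines (the stack name is a placeholder for the root stack name you set in cdk.json):
```
aws cloudformation describe-stacks --stack-name <ROOT_STACK_NAME> \
  --query "Stacks[0].Outputs" --output table
```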

14. As part of the deployment pipeline, you have also deployed a cloud9 instance. Run the following command to enable access to the Cloud9 instance. you can find more info [here](https://docs.aws.amazon.com/cloud9/latest/user-guide/share-environment.html)
14. As part of the deployment pipeline, you have also deployed a Cloud9 instance. Run the following command to enable access to the Cloud9 instance. you can find more info [here](https://docs.aws.amazon.com/cloud9/latest/user-guide/share-environment.html)
```
aws cloud9 create-environment-membership --environment-id <Environment ID> --user-arn USER_ARN --permissions PERMISSION_LEVEL
```
>Note: If the cloud9 deployment fails or doesnt create an instance, use the [cloud9.yaml file](./deployment/infrastructure//templates/cloud9.yaml) to manually deploy the cloud9 instance
>Note: If the Cloud9 deployment fails or doesn't create an instance, use the [cloud9.yaml file](./deployment/infrastructure//templates/cloud9.yaml) to manually deploy the Cloud9 instance.

15. Access the Cloud9 IDE using the URL that you have copied from step 13. Deploy the pre-requisits (`Helm`, `Kubectl`and `jq`) on the Cloud9 Instance as mentioned in the prereqisits session.
>Tip: Commands for steps 15 - 26 are inlcuded in a shell script [cloud9-setup.sh](./cloud9-setup.sh) and is accessible from the code repo copy that gets pulled down automatically when you lauch the Cloud9 instance created by the stack. Navigate to the directory and change the script permissions `chmod 700 cloud9-setup.sh` before executing the script.
15. Access the Cloud9 IDE using the URL that you copied in step 13. Deploy the prerequisites (`Helm`, `Kubectl`, and `jq`) on the Cloud9 instance as mentioned in the prerequisites section.
>Tip: Commands for steps 15 - 26 are included in a shell script [cloud9-setup.sh](./cloud9-setup.sh), which is accessible from the code repo copy that gets pulled down automatically when you launch the Cloud9 instance created by the stack. Navigate to the directory and change the script permissions `chmod 700 cloud9-setup.sh` before executing the script.
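
If you set up the tools by hand instead of running the script, commands along these lines typically work on an Amazon Linux Cloud9 instance (this is an illustrative sketch; the versions, URLs, and package manager may differ from what cloud9-setup.sh actually uses):
```
# jq from the distribution packages
sudo yum install -y jq

# kubectl (choose a version compatible with your EKS cluster)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# Helm 3 via the official install script
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
```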
16. In the Cloud9 terminal, clone the CodeCommit repo as you did in step 10, for example:
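An HTTPS clone of a CodeCommit repository generally looks like the following; the region and repository name are placeholders for the values from your deployment:
```
git clone https://git-codecommit.<region>.amazonaws.com/v1/repos/<repository-name>
```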

17. On terminal, navigate to the repository folder and run the following commands. The follwing commands will set the variables in your terminal which are used to connect to EKS cluster and run benchmarks
17. On terminal, navigate to the repository folder and run the following commands to set the variables in your terminal which are used to connect to the EKS cluster and run benchmarks.

```
export AWS_ACCOUNT=<Account Number>
@@ -147,42 +147,42 @@ export STACK_NAME=$ROOT_STACK
CREDS_JSON=`aws sts assume-role --role-arn $EKS_ACCESS_ROLE_ARN \
--role-session-name EKSRole-Session --output json`
```
>Note: You may need to manually configure AWS CLI credentials using `aws configure` if the temporary cloud9 tokens doesnt work.
19. As output of above command you will get AccessKeyId, SecretAccessKey, and SessionToken. Copy them and pass them in to variables as shown below.
>Note: You may need to manually configure AWS CLI credentials using `aws configure` if the temporary Cloud9 tokens don't work.
19. The above command will output AccessKeyId, SecretAccessKey, and SessionToken. Copy them and pass them into the variables as shown below.
```
export AWS_ACCESS_KEY_ID=`echo $CREDS_JSON | jq '.Credentials.AccessKeyId' | tr -d '"'`
export AWS_SECRET_ACCESS_KEY=`echo $CREDS_JSON | jq '.Credentials.SecretAccessKey' | tr -d '"'`
export AWS_SESSION_TOKEN=`echo $CREDS_JSON | jq '.Credentials.SessionToken' | tr -d '"'`
CREDS_JSON=""
```
20. Now call the make eks@grant-access target file to access EKS cluster by using the command below (This command has to be run in the code repository folder in terminal).
20. Now grant access to the EKS cluster by using the command below (This command has to be run in the code repository folder in terminal).
```
make eks@grant-access
```
21. We now have creds to login to EKS cluster. Unset the EksAccessRole using below command.
21. We now have creds to login to the EKS cluster. Unset the EksAccessRole using below command.
```
unset AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
```
22. Run the following command to connect to EKS cluster.
22. Run the following command to connect to the EKS cluster.
```
make eks@use
```
23. The following command will get you the pods in cluster and you must see the pods as shown in the screenshot below.
23. The following command will get you the pods in the cluster; see the screenshot below for example output.
```
kubectl get pods
```
![Clone Repo](./images/getpods.png)

24. The below command will clean up the existing load generator container that was deployed during the initial deployment. You need to run this command everytime you want to run a new benchmark.
24. The below command will clean up the existing load generator container that was deployed during the initial deployment. You need to run this command every time you want to run a new benchmark.
```
make benchmark@cleanup
```

25. Trigger the benchmark by initiating the load-generator along with the parameters.
25. Trigger the benchmark by initiating the load generator along with the parameters.
```
make benchmark@run TIMEOUT=100ms NUMBER_OF_JOBS=1 RATE_PER_JOB=200 NUMBER_OF_DEVICES=10000 DURATION=500s
```
_You can supply following parameters to load-generator and perform benchmarks_
_You can supply the following parameters to load-generator and perform benchmarks_
```
TIMEOUT=100ms # Request timeout (default 100ms)
DURATION=500s # duration of the load generation
@@ -192,11 +192,11 @@ NUMBER_OF_JOBS=1 # number of parallel instances of the load generator
SLOPE=0 # slope of requests per second increase (zero for a constant rate; see <https://en.wikipedia.org/wiki/Slope>)
ENABLE_PROFILER=1 # used to start profiling session, leave unset to disable
```
26. Once the load-generator is triggered you can run the following port-forward command to connect to Grafana Dashboard.
26. Once the load generator is triggered, you can run the following port-forward command to connect to the Grafana dashboard.
```
kubectl port-forward svc/prom-grafana 8080:80
```
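Note that `kubectl port-forward` keeps the terminal occupied while it runs. If you prefer, you can run it in the background and stop it when you are finished, for example:
```
kubectl port-forward svc/prom-grafana 8080:80 > /dev/null 2>&1 &
# ...when you are done with the dashboard
kill %1
```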
27. On your cloud9 instance click and preview button on the top to open a browser window/tab and access Grafana Dashboard. Use the following credentials to login (Turn off enhanced tracking off if you are using Firefox)
27. On your Cloud9 instance, click the preview button at the top to open a browser window/tab and access the Grafana dashboard. Use the following credentials to log in (turn off enhanced tracking if you are using Firefox).

```
username: admin
@@ -205,15 +205,15 @@ Password: prom-operator

![Clone Repo](./images/Grafanalogin.png)

28. Once you login, click on the dashboard button on the left hand menu and select manage as shown in the figure below.
28. Once you log in, click the dashboards button in the left-hand menu and select "Manage" as shown in the figure below.

![Clone Repo](./images/managedashboards.png)

29. Search for and open the 'bidder' dashboard from the list.

![Clone Repo](./images/bidderdashboard.png)

30. You will see the bid request that are being generated on the right hand side and latency on the left hand side of the dashboard as shown in the figure below.
30. You will see the bid requests that are being generated on the right hand side and latency on the left hand side of the dashboard as shown in the figure below.

![Clone Repo](./images/benchmarksresults.png)

@@ -222,15 +222,15 @@ The metrics include:
* Bid requests generated
* Bid requests received
* No Bid responses
* Latency on 99, 95 and 90 pecentile
* Latency on 99th, 95th and 90th percentile

This benchmarks will help to demonstrate that the AdTech Real-time-bidder application performace on AWS Graviton instances.
These benchmarks demonstrate the AdTech Real-time-bidder application performance on AWS Graviton instances.


# Notes
* You can disable the data pipeline by setting KINESIS_DISABLE: "ture" in deployment/infrastructure/charts/bidder/values.yaml file
* We have used unsafe pointers to optimise the heap allocation and are not converting the pointer types in the code. If this code is used in production, we strongly recommand you to look at you current code and set the pointer type in ./apps/bidder/code/stream/producer/batch_compiler.go file.
* For the ease of deployment, we have pre-configured user credentials for Grafana Dashboard. This is not a best practice and we strongly recommand you to configure access via AWS IAM/SSO. (./deployment/Infrastructure/deployment/prometheus/prometheus-values.yaml, ./tools/benchmark-utils/function.sh)
* Since CloudTrail is enabled on the AWS account by default. we strongly recommand not to disable it. We have not made any cloudtrail configuration on the codekit to enable it if it is disabled.
>IMPORTANT : Executing the bench mark, collecting data and deleting the stack is recommended to keep costs under control
* You can disable the data pipeline by setting KINESIS_DISABLE: "true" in deployment/infrastructure/charts/bidder/values.yaml file
* We have used unsafe pointers to optimise the heap allocation and are not converting the pointer types in the code. If this code is used in production, we strongly recommend that you set the pointer type in ./apps/bidder/code/stream/producer/batch_compiler.go file.
* For ease of deployment, we have pre-configured user credentials for Grafana Dashboard. This is not a best practice and we strongly recommend that you configure access via AWS IAM/SSO. (./deployment/Infrastructure/deployment/prometheus/prometheus-values.yaml, ./tools/benchmark-utils/function.sh)
* CloudTrail is enabled on the AWS account by default; we strongly recommend that you not disable it. We have not configured CloudTrail in CodeKit to enable it if it is disabled.
>IMPORTANT : To keep costs under control, we recommend that you delete this application's CloudFormation stacks after you have executed benchmarks and analyzed the data.