Terminus is a microservice performance modeling and sandboxing tool. The name is derived from the word "terminal", which means the end point of something. Terminus consists of components that implement a microservices architecture; these components can be scaled up and down on demand. The tool automates the setup of a Kubernetes cluster and deploys the monitoring services and a load generator. It was developed using Golang and Python. Terminus comprises both an API and a user interface, allowing the user to easily interact with the tool and conduct comprehensive performance modeling for an arbitrary microservice application.
The architecture of the tool and the communication paths between its components in a typical use case are shown in the figure below:
Currently, only the AWS cloud is supported, so the tool can only be run on AWS. You can use either the AWS CLI or the AWS console for the initial setup tasks.
On macOS, Windows, and Linux:

- The officially supported way of installing the AWS CLI is with `pip`:

```
pip install awscli
```

- On macOS, you can also grab the tool with Homebrew, although this is not officially supported by AWS:

```
brew update && brew install awscli
```

- On Windows, you can download the MSI installer from https://docs.aws.amazon.com/cli/latest/userguide/awscli-install-windows.html and follow the steps through the installer; it requires no other dependencies.
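After installation, you can verify the CLI and set up your credentials. A quick sanity check (the keys and region are whatever you create in the steps below):

```
$ aws --version
$ aws configure
```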
- In order to build clusters within AWS, we'll create a dedicated IAM user for `kops`. This user requires API credentials in order to use `kops`.
- The `kops` user will require the following IAM permissions to function properly:

```
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
```

You can create the kops IAM user from the command line using the following (if the AWS CLI is installed):

```
aws iam create-group --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonEC2FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonRoute53FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/IAMFullAccess --group-name kops
aws iam attach-group-policy --policy-arn arn:aws:iam::aws:policy/AmazonVPCFullAccess --group-name kops
aws iam create-user --user-name kops
aws iam add-user-to-group --user-name kops --group-name kops
aws iam create-access-key --user-name kops
```

You should record the SecretAccessKey and AccessKeyID from the returned JSON output.
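You will typically export these credentials so that subsequent `aws` and `kops` commands can use them. A minimal sketch (the values are placeholders for the keys recorded above):

```
$ export AWS_ACCESS_KEY_ID=<AccessKeyID>
$ export AWS_SECRET_ACCESS_KEY=<SecretAccessKey>
```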
Alternatively, you can create the user and credentials using the AWS console. The `kops` user will require the following IAM permissions to function properly:
```
AmazonEC2FullAccess
AmazonRoute53FullAccess
AmazonS3FullAccess
IAMFullAccess
AmazonVPCFullAccess
```
Add the above permissions to the user account and note down the SecretAccessKey and AccessKeyID.
- Go to Services -> EC2 -> Network & Security -> Key Pairs.
- Create a new key pair and note down its name.
- Go to Services -> EC2 -> Network & Security -> Security Groups.
- Create a new security group that allows all traffic from any source (for the time being) for both inbound and outbound rules. Note down its ID.
- Go to Services -> VPC -> Subnets.
- Note down the default subnet ID.
- Create a t2.large VM on AWS running Ubuntu 18.04.
- Add a storage disk of at least 90 GB.
- Attach the security group created above, which allows all traffic from any source.
- SSH into the VM.
- MongoDB needs to be installed on the machine from which the restore is done. Follow these commands to install it on Ubuntu 18.04:
$ sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 9DA31620334BD75D9DCB49F368818C72E52529D4
$ echo "deb [ arch=amd64 ] https://repo.mongodb.org/apt/ubuntu bionic/mongodb-org/4.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-4.0.list
$ sudo apt-get update
$ sudo apt-get install -y mongodb-org
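You can quickly check that the restore tooling used later is available (a simple sanity check):

```
$ mongorestore --version
```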
- Install s3cmd
$ sudo apt-get install -y s3cmd
- Configure s3cmd
$ s3cmd --configure
- Provide your Access Key and Secret Key, set the default region to ap-southeast-2, leave the rest at their defaults, and save the configuration.
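To verify the configuration, you can list the buckets visible to your credentials (the bucket used later in this guide is `terminusinfluxdatamain`):

```
$ s3cmd ls
```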
- Install InfluxDB:
$ curl -sL https://repos.influxdata.com/influxdb.key | sudo apt-key add -
$ source /etc/lsb-release
$ echo "deb https://repos.influxdata.com/${DISTRIB_ID,,} ${DISTRIB_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
$ sudo apt-get update && sudo apt-get install -y influxdb
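After installation, start the service. Note that the restore step later connects to port 8088; InfluxDB 1.x binds this backup/restore RPC port to 127.0.0.1 by default, so editing the config is needed if you restore over the VM's public IP (adjusting the config this way is an assumption about your setup):

```
$ sudo systemctl enable --now influxdb
$ # if restoring over the VM's public IP, set
$ #   bind-address = "0.0.0.0:8088"
$ # in /etc/influxdb/influxdb.conf and restart the service
```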
After SSHing into the VM, follow the procedure below.
- Clone the repository.
- Fill in the following AWS values in the `Terminus_Interface/.env` file:

```
AWS_KEY=
AWS_SECRET=
AWS_DEFAULT_REGION=us-east-2
AWS_KEY_PAIR_NAME=
AWS_SUBNET_ID=          # example: subnet-3eb5a357
AWS_SECURITY_GROUP=     # example: sg-07ba209aa8a4db1bd
KUBE_CLUSTER_NAME=<name>.k8s.local
KOPS_CONTAINER_NAME=<containerName>
KOPS_S3_BUCKET_NAME=<name>-kops-state-store
```
- `cd` into the `Terminus_Interface/scripts` directory.
- Run the deployment script using the commands:

```
chmod +x deploy_app.sh
sudo sh deploy_app.sh
```
- Then use a web browser to visit `http://VM_IP:8082`.
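To confirm that the interface is up before opening the browser, you can poll the port from the VM itself (a simple check, assuming the service listens on 8082 as above):

```
$ curl -I http://localhost:8082
```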
- All the MongoDB dump data is present inside the `Terminus_Interface/mongo/mongodump` directory.
- It can be restored to the started MongoDB instance by running these commands:
$ cd Terminus_Interface/mongo/mongodump
$ mongorestore -h <VM_PUBLIC_IP> -u <username> -p <password> ./dump
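To confirm that the restore succeeded, you can list the databases on the instance (the credential placeholders are the same as above):

```
$ # depending on how the user was created, you may need --authenticationDatabase admin
$ mongo --host <VM_PUBLIC_IP> -u <username> -p <password> --eval "db.adminCommand('listDatabases')"
```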
The data only needs to be downloaded if you want to perform training again or view data other than the MSC. All the collected monitoring data is stored in the S3 bucket. To use those datasets for the initial setup, follow the procedure below.
- Make a directory to contain all the data
$ sudo mkdir /data
- Now, get all the compressed InfluxDB files by running:
$ sudo s3cmd get --recursive s3://terminusinfluxdatamain/ /data
- Now that all the data is inside the directory, restore it back into InfluxDB by running the following command:
$ sudo influxd restore -portable -host <VM_public_IP>:8088 /data/sept/
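To verify the restore, you can list the databases through the `influx` CLI (a quick check; the database names depend on the dump):

```
$ influx -execute 'SHOW DATABASES'
```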
- From a web browser, visit `http://VM_IP:8082`.
- Do all types of training for each service by selecting them from the menu. The training will take some time to complete.
- After the training, click on Store All MSCs. This will also take some time to complete.
- After that, click on Store MSC Prediction Data.
- Lastly, do the KubeEventAnalysis for all the services.
- Go to MSC -> Conduct Test -> MSC Determine.
- Fill in the following values:
1. Service Name: Choose from the available services.
2. Replicas: The number of replicas for that service.
3. Limits/Non-Limits: Whether or not to apply limits on the service pod.
4. Main-Instance Type: The instance type of the slave nodes.
5. Limiting-Instance Type: The limits to be applied on the slave nodes. Keep in mind that this should always be smaller than the Main-Instance type.
6. Master Node Instance Type: The instance type of the master node.
7. Deployment Instance Type: The instance type used to deploy the agent and generate load.
8. MAX RPS: The maximum RPS up to which the testing should be done.
9. Services: The number of services present in the file.
10. NodeCount: The number of slave nodes. This depends on the number of replicas required.
- Click Start Test.
- Wait for the test to complete. Real-time data and logs can be seen by going to MSC -> Ongoing/Completed Tests -> Select the Test. From there, choose Grafana to view the Kubernetes and load-generation data, and Kibana to view the logs.
- Once everything is done, go to Analyzed Results -> MSC Analysis ALL -> select the service -> click Submit. You will see graphs for the experimental and predicted MSC.
- Other results, such as the comparison of limits and non-limits data, can be viewed from the menu.
- To test the sandboxing approach, a docker-compose file needs to be uploaded to `Terminus_Interface/sandbox_microservices_service/data/docker_compose_files`.
- After uploading, go to Sandboxing-Microservice -> Conduct Test -> API Intercept.
- Select the file name, then enter the main service name and API endpoint.
- Then click Submit to record the API responses.
- After the response interception is done, go to Sandboxing-Microservice -> Conduct Test -> API Intercept -> Get Yaml for Service.
- Select the file name and enter the main service name.
- Then click Submit. A graph structure for all the services will be formed.
- Next, go to Sandboxing-Microservice -> Conduct Test -> API Intercept -> Sandboxed Tree.
- Enter the service name and choose whether to get the compose file from the new version or the old one.
- Then click Submit. A docker-compose.yml file will be provided.
- Afterwards, this file can be converted to a Kubernetes YAML file, and testing can be done on it to build a performance model (see the sketch below).
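Terminus does not prescribe a converter for this step; one common way to do the conversion is with `kompose` (using kompose here is an illustrative assumption, not a built-in part of the tool):

```
$ # emits Kubernetes deployment/service YAML files in the current directory
$ kompose convert -f docker-compose.yml
```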
The tree formed from the docker-compose.yml file of this application can be seen in the figure below as an example.
The API documentation is automatically created for all new API endpoints that are added. A header comment needs to be added above every API handler. Here is an example for one:
```
// ConductTestToCalculatePodBootTime godoc
// @Summary Collect the data for pod booting time
// @Description Collect the data for pod booting time /conductTestToCalculatePodBootTime/1/1/1/t2.xlarge/40/t2.large/compute/primeapp/_api_prime
// @Tags START_TEST
// @Accept text/html
// @Produce json
// @Param numsvs query string true "number of services in microservice"
// @Param appname query string true "name of application [primeapp, movieapp, webacapp]"
// @Param apptype query string true "application type [compute, dbaccess, web]"
// @Param replicas query string true "number of replicas of service"
// @Param instancetype query string true "host instance type"
// @Param limitinginstancetype query string true "limiting instance type on main instance"
// @Param apiendpoint query string true "api end point, specified like _api_test for /api/test"
// @Param maxRPS query string true "max RPS"
// @Success 200 {string} string "ok"
// @Failure 400 {string} string "ok"
// @Failure 404 {string} string "ok"
// @Failure 500 {string} string "ok"
// @Router /conductTestToCalculatePodBootTime/{numsvs}/{nodecount}/{replicas}/{instancetype}/{mastertype}/{testvmtype}/{maxRPS}/{limitinginstancetype}/{apptype}/{appname}/{mainservicename}/{apiendpoint}/ [get]
```
- Add new headers or functions as needed.
- Download swag by using:
$ go get -u github.com/swaggo/swag/cmd/swag
- Run `swag init` in the `Terminus_Interface/server/` folder, which contains the `main.go` file. This will parse your comments and generate the required files (the `docs` folder and `docs/docs.go`):
$ swag init
- Copy `Terminus_Interface/server/docs/swagger/swagger.yaml` to `Terminus_Interface/server/assets/swagger.yaml`.
- Once the website is running, you can access the swagger.yaml file at `http://VM_IP:8082/assets/swagger.yaml`.
- This YAML file can then be used on https://editor.swagger.io/ to generate API documentation.
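If you want a local copy of the spec to paste into the editor, you can download it, for example:

```
$ curl -o swagger.yaml http://VM_IP:8082/assets/swagger.yaml
```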
Some of the testing Python notebooks and other useful scripts can be found in the `importantScripts` folder. These notebooks are test versions of the linear regression with different parameters and data.
Please open an issue if you have a question or find a problem.
Pull requests are welcome too!
Contributions are most welcome. Please message me if you like the idea and want to contribute.