Continuous Benchmarking Credential Migration
This article contains a checklist of accounts and credentials that need to be updated prior to the completion of FYPs to ensure the continued operation of daily experiments and CI/CD operations on STeLLAR.
No action is needed for AWS as all credentials are under EASELab's account.
The following values are needed for STeLLAR to deploy to Azure on the CI/CD pipeline:
- `AZURE_SUBSCRIPTION_ID`
- `AZURE_TENANT_ID`
- `AZURE_CLIENT_ID`
- `AZURE_CLIENT_SECRET`
You should already have an Azure account with an `AZURE_SUBSCRIPTION_ID`. The remaining values can be obtained via the Azure CLI or via the Azure console.
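If you take the CLI route, the subscription and tenant IDs can be queried directly. A minimal sketch, assuming you have the Azure CLI installed and can log in:

```shell
# Log in interactively first.
az login

# AZURE_SUBSCRIPTION_ID of the active subscription
az account show --query id --output tsv

# AZURE_TENANT_ID of the active subscription
az account show --query tenantId --output tsv
```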
Create an Azure Service Principal with a secret via the Azure CLI:
```shell
az ad sp create-for-rbac --name "STeLLAR GitHub Actions" \
  --role contributor \
  --scopes /subscriptions/<your_azure_subscription_id> \
  --sdk-auth
```
The command will output the credentials in JSON format.
- Register an application on Microsoft Entra ID. The optional fields can be left empty. The `AZURE_CLIENT_ID` and `AZURE_TENANT_ID` will be displayed on the dashboard of the application.
- Assign a role to the application. The application should have a “Contributor” role.
- Create a new client secret for the application. The `AZURE_CLIENT_SECRET` will be displayed.
Finally, add the four values you have obtained as secrets on the STeLLAR repository.
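The secrets can be added via the repository's Settings > Secrets and variables > Actions page, or with the GitHub CLI. A sketch, assuming `gh` is authenticated with access to the repository (the `<org>` owner and the placeholder values are assumptions to fill in):

```shell
# Store each credential as an Actions secret on the STeLLAR repository.
gh secret set AZURE_SUBSCRIPTION_ID --repo <org>/STeLLAR --body "<subscription_id>"
gh secret set AZURE_TENANT_ID       --repo <org>/STeLLAR --body "<tenant_id>"
gh secret set AZURE_CLIENT_ID       --repo <org>/STeLLAR --body "<client_id>"
gh secret set AZURE_CLIENT_SECRET   --repo <org>/STeLLAR --body "<client_secret>"
```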
Create an Azure VM with the following configuration:
- Region: (US) West US
  - Note: This is the region all of our current VMs are running in. Benchmarked Azure Functions are also deployed to this region.
- Image: Ubuntu Server 22.04 LTS - x64 Gen2
- VM architecture: x64
- Size: Standard_B1ms - 1 vcpu, 2GiB memory (US$18.10/month)
  - Note: A VM with at least 2GiB of memory is required for image size experiments. Smaller VMs are known to run out of memory and crash when executing 100MB experiments.
- OS disk type: Standard HDD
  - Note: Standard HDD is cheaper and generally sufficient for our experiment needs.
- Delete NIC when VM is deleted: Checked
  - Note: Optional. Enabling this makes resource cleanup easier if you need to remove this self-hosted runner in the future.
- You may use the default options for any settings not specified above.
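The configuration above can also be created from the Azure CLI. A minimal sketch, assuming a resource group already exists; the group, VM, and user names are placeholders:

```shell
az vm create \
  --resource-group <stellar-rg> \
  --name <stellar-runner> \
  --location westus \
  --image Ubuntu2204 \
  --size Standard_B1ms \
  --storage-sku Standard_LRS \
  --nic-delete-option Delete \
  --admin-username <azureuser> \
  --generate-ssh-keys
```

`Standard_LRS` selects the Standard HDD disk type, and `--nic-delete-option Delete` corresponds to the "Delete NIC when VM is deleted" checkbox.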
Execute the setup script to install the STeLLAR dependencies:
```shell
chmod +x ./scripts/setup.sh
./scripts/setup.sh
```
Add the VM you created as a self-hosted runner for GitHub Actions.
The final `./run.sh` command in GitHub’s instructions for setting up the self-hosted runner should be executed in a tmux session so that it continues running after the SSH session ends.
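For example, the runner can be launched in a detached tmux session roughly as follows (the session name is arbitrary):

```shell
# Launch the runner in a detached tmux session so it survives SSH disconnects.
tmux new-session -d -s github-runner './run.sh'

# Reattach later to check on it; detach again with Ctrl-b d.
tmux attach -t github-runner
```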
Google Cloud Run’s STeLLAR project (Project number: 567992090480, Project ID: stellar-benchmarking) is under EASELab’s organisation account ([email protected]), and the credentials are linked to the following service account (viewable under IAM dashboard in GCloud):
However, a billing account needs to be added to the organisation in order to deploy to GCR. Search “Create a new billing account” in the GCloud console search bar:
Afterwards, the project needs to be shifted from the old billing account to the new billing account. Google provides a comprehensive tutorial on this process (see “Change the Cloud Billing account linked to a project”).
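With the `gcloud` CLI, re-linking the project can be sketched as follows (the billing account ID is a placeholder):

```shell
# List billing accounts visible to your credentials.
gcloud billing accounts list

# Link the STeLLAR project to the new billing account.
gcloud billing projects link stellar-benchmarking \
  --billing-account=XXXXXX-XXXXXX-XXXXXX
```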
Self-hosted runners for GCR experiments use Google Cloud’s Compute Engine service. The list of all running Compute Engine instances can be seen on the GCloud Console (search for “Compute Engine” in the search bar):
To minimise setup and configuration of the dependencies needed for STeLLAR, a machine image based on the existing configuration is available and can be used to create new VMs. Choose “New VM instance from machine image” after selecting “CREATE INSTANCE”:
Choose the following configuration:
- Region: us-west1
- Zone: us-west1-a
- Machine Configuration: e2
- Machine Type: e2-small (e2-micro is unable to handle Java function builds and may fail Java deployments to GCR)
Reference configuration:
Note that 16GB of disk space may not be enough: experiments running over a long period risk running into disk space issues. It is recommended to extend the disk size to 64GB.
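Creating the VM can also be sketched from the CLI; the machine image name below is a placeholder for whichever image was prepared for STeLLAR:

```shell
gcloud compute instances create stellar-gcr-runner \
  --zone=us-west1-a \
  --machine-type=e2-small \
  --source-machine-image=<stellar-machine-image>
```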
Select the VM you wish to change from the Compute Engine dashboard and click on the boot disk in the Boot Disk section:
Select “Edit” and input your new Disk Size:
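Alternatively, the boot disk can be resized from the CLI (the disk name is a placeholder; disks can only be grown, never shrunk):

```shell
gcloud compute disks resize <stellar-gcr-runner> \
  --zone=us-west1-a \
  --size=64GB

# On Ubuntu images the filesystem usually grows automatically on the next boot;
# otherwise grow the partition manually (e.g. with growpart and resize2fs).
```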
Deployment of Cloudflare workers requires a Cloudflare Account as well as a single API Token.
To create an API Token, go to “My Profile” > “API Tokens” > “Create” :
Select the pre-defined template for editing Cloudflare Workers:
Update the `CLOUDFLARE_API_TOKEN` credential under GitHub Actions secrets to deploy Cloudflare Workers under the new account and API token:
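Before updating the secret, a new token can be sanity-checked against Cloudflare's token verification endpoint (the token value is a placeholder):

```shell
curl -s "https://api.cloudflare.com/client/v4/user/tokens/verify" \
  -H "Authorization: Bearer <your_cloudflare_api_token>"
# A valid token returns a JSON body with "success": true and status "active".
```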
As Cloudflare does not currently offer a virtual machine compute service, AWS EC2 instances are used instead.
EC2 instances running Cloudflare experiments are located in us-east-2 (Ohio), as it was determined to be the closest region to the deployed Cloudflare Workers. This may change in the future, so checking the geographical location of the deployed Cloudflare Workers is recommended when deploying from the new account.
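One way to check where a deployed Worker is being served from is to inspect the `cf-ray` response header, whose suffix is the IATA airport code of the Cloudflare colo that handled the request (the Worker URL below is a placeholder):

```shell
curl -sI "https://<your-worker>.<your-subdomain>.workers.dev" | grep -i cf-ray
# e.g. a header ending in "-ORD" means the request was served near Chicago.
```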