chore: refactor and productionize #38
Conversation
Pull Request Overview
This PR refactors the k8s deployment setup to support multiple environments (mainnet and testnet) using kustomize, renaming the service from "service-quality-oracle" to "rewards-eligibility-oracle" throughout all configurations and creating a production-ready deployment structure.
Key changes:
- Restructured k8s deployment using kustomize with base manifests and environment-specific overlays
- Renamed service from "service-quality-oracle" to "rewards-eligibility-oracle" across all configurations
- Added separate deployment configurations for mainnet and testnet environments with appropriate settings
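A kustomize layout like the one described typically pairs a base kustomization with per-environment overlays. The following is a minimal sketch of what the base might look like; the exact file list beyond the manifests named in this PR is an assumption:

```yaml
# k8s/base/kustomization.yaml — common resources shared by all environments (sketch)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
  - configmap.yaml
  - persistent-volume-claim.yaml
```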
Reviewed Changes
Copilot reviewed 29 out of 29 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| `k8s/secrets.yaml.example` | Updated service name references from `service-quality-oracle` to `rewards-eligibility-oracle` |
| `k8s/restart-deployments.sh` | New script for restarting deployments with updated service name |
| `k8s/persistent-volume-claim.yaml` | Updated PVC names to use `rewards-eligibility-oracle` naming |
| `k8s/environments/testnet/*` | New testnet environment configuration with Arbitrum Sepolia settings |
| `k8s/environments/mainnet/*` | New mainnet environment configuration with Arbitrum One settings |
| `k8s/deployment.yaml` | Updated deployment configuration with new service name |
| `k8s/configmap.yaml` | Updated configmap name to `rewards-eligibility-oracle` |
| `k8s/base/*` | New base kustomize configuration with common resources |
| `k8s/README.md` | Comprehensive documentation update for new structure and deployment process |
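Per the file list above, each environment directory overlays the shared base. A hypothetical testnet overlay could look like this; the suffix, patch file name, and patch contents are illustrative assumptions, since the PR summary does not show them:

```yaml
# k8s/environments/testnet/kustomization.yaml — hypothetical overlay sketch
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base          # pull in the shared manifests
nameSuffix: -testnet     # distinguish testnet resources from mainnet
patches:
  - path: deployment-patch.yaml  # e.g. Arbitrum Sepolia RPC settings
```

Rendering or applying an overlay would then be `kubectl kustomize k8s/environments/testnet` or `kubectl apply -k k8s/environments/testnet`.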
```diff
@@ -0,0 +1,45 @@
+# Mainnet Secrets for Service Quality Oracle# IMPORTANT: This is a TEMPLATE file - DO NOT commit actual secrets!
```
Missing space after 'Oracle' in the comment header. Should be '# Mainnet Secrets for Service Quality Oracle # IMPORTANT:'
```diff
-# Mainnet Secrets for Service Quality Oracle# IMPORTANT: This is a TEMPLATE file - DO NOT commit actual secrets!
+# Mainnet Secrets for Service Quality Oracle # IMPORTANT: This is a TEMPLATE file - DO NOT commit actual secrets!
```
```yaml
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/rewards-eligibility-oracle%40graph-mainnet.iam.gserviceaccount.com"
}

# Blockchain private key for Arbitrum Sepolia transactions (without 0x prefix)
```
The comment says 'Arbitrum Sepolia transactions', which is appropriate here in the testnet config. However, the mainnet version of this comment at line 26 in config.secret.yaml still says 'Arbitrum Sepolia'; for mainnet it should read 'Arbitrum One'.
This makes the k8s deployment compatible with internal infra, so it can deploy to different environments such as mainnet and testnet with kustomize. For the simple version, maybe we can move it into the docs folder and include a note in the Readme.md @MoonBoi9001 ?
The idea is to leave the deployment setup in this repo for now while development is ongoing, and eventually migrate it into the main infra repo. Alternatively, we can leave this PR on a separate branch off of main and keep things simple for end-users.