Nightly Mongo Dev Updater (#748)
* Create mongo-reset.yaml for nightly update workflow

* temp push

* Update mongo-reset.yaml after registering in actions tab

* check triggers

* push to register action again

* pull cron while migrating to k8s cronjob

* Create mongo-reset.yaml cronjob in k8s infra

* Delete .github/workflows/mongo-reset.yaml

* Create mongo-reset.yaml for manual triggering

* add image version, latest tag doesn't exist for alpine/k8s

* add service account

* drop dev database

* polish cronjob

* update github action to log output (succeed on proper job execution), increase job deletion time to 60s after terminating

* register github action first run

* remove push trigger for github action, fix typo

* rename files and k8s role

* forbid concurrency

* add bt prefix to name

* fix comment in helm chart

* remove chart

* add labels to datapuller

* fix name

* fix pod query

---------

Co-authored-by: maxmwang <[email protected]>
klhftco and maxmwang authored Feb 2, 2025
1 parent 1d2c12a commit 2433619
Showing 6 changed files with 89 additions and 7 deletions.
27 changes: 27 additions & 0 deletions .github/workflows/runbook-reset-dev-mongo.yaml
@@ -0,0 +1,27 @@
name: Reset Dev Mongo

on:
  workflow_dispatch:

jobs:
  reset-mongo:
    name: SSH and Reset Dev MongoDB State
    runs-on: ubuntu-latest
    steps:
      - name: SSH and Reset MongoDB
        uses: appleboy/[email protected]
        with:
          host: ${{ secrets.SSH_HOST }}
          username: ${{ secrets.SSH_USERNAME }}
          key: ${{ secrets.SSH_KEY }}
          script: |
            set -e # Exit immediately if a command fails
            # Create Mongo job from mongo-reset
            kubectl create job --from=cronjob/bt-base-reset-dev-mongo bt-base-reset-dev-mongo-ga-manual
            echo "MongoDB reset scheduled."
            # Wait for job_pod log output
            job_pod=$(kubectl get pods -o custom-columns=NAME:.metadata.name --no-headers -n bt | grep 'bt-base-reset-dev-mongo-ga-manual')
            kubectl wait --for=condition=ready pod/$job_pod -n bt --timeout=30s
            kubectl logs -f $job_pod -n bt
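Once this workflow lands on the default branch, it can also be triggered from the GitHub CLI instead of the Actions tab. A minimal sketch, assuming an authenticated gh session against this repository (the workflow file name comes from the diff above):

# Kick off the manual reset and follow the run
gh workflow run runbook-reset-dev-mongo.yaml
gh run list --workflow=runbook-reset-dev-mongo.yaml --limit 1
gh run watch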
2 changes: 1 addition & 1 deletion infra/app/templates/cleanup.yaml
@@ -9,7 +9,7 @@ metadata:
 spec:
   template:
     spec:
-      serviceAccountName: bt-app-cleanup
+      serviceAccountName: bt-k8s-role
       containers:
         - name: cleanup
           image: alpine/helm
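With cleanup.yaml now pointing at the renamed service account, a quick pre-flight check is to confirm the account actually exists in the namespace the other templates in this commit use (bt):

# Should list the renamed service account; an error here means the rename missed a manifest
kubectl get serviceaccount bt-k8s-role -n bt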
2 changes: 2 additions & 0 deletions infra/app/templates/datapuller.yaml
@@ -19,6 +19,8 @@ spec:
 spec:
   template:
     spec:
+      labels:
+        {{- include "bt-app.datapullerLabels" $root | nindent 12 }}
       containers:
         - name: datapuller-{{ $jobName }}
           image: {{ printf "%s/%s:%s" $jobConfig.image.registry $jobConfig.image.repository ( toString $jobConfig.image.tag ) }}
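The labels here land under the pod spec rather than the template's metadata, so rendering the chart offline is a worthwhile check of where the include actually ends up. A sketch, assuming the release name bt-app; the chart path infra/app comes from this diff:

# Render only the datapuller template and eyeball the label placement
helm template bt-app infra/app --show-only templates/datapuller.yaml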
2 changes: 1 addition & 1 deletion infra/base/templates/issuer.yaml
@@ -1,4 +1,4 @@
-{{ /* https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/#api-tokens */ }}
+{{- /* https://cert-manager.io/docs/configuration/acme/dns01/cloudflare/#api-tokens */ -}}
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
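The change swaps {{ ... }} for {{- ... -}}, which chomps whitespace on both sides so the comment-only line no longer emits a blank first line in the rendered manifest. A quick check, assuming the release name bt-base (the chart path infra/base is from this diff):

# The first line of output should be apiVersion, not an empty line
helm template bt-base infra/base --show-only templates/issuer.yaml | head -n 1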
@@ -1,14 +1,14 @@
 apiVersion: v1
 kind: ServiceAccount
 metadata:
-  name: bt-app-cleanup
+  name: bt-k8s-role

---

 apiVersion: rbac.authorization.k8s.io/v1
 kind: Role
 metadata:
-  name: bt-app-cleanup
+  name: bt-k8s-role
 rules:
   - apiGroups: ["*"]
     resources: ["*"]
@@ -19,12 +19,12 @@ rules:
 apiVersion: rbac.authorization.k8s.io/v1
 kind: RoleBinding
 metadata:
-  name: bt-app-cleanup
+  name: bt-k8s-role
 subjects:
   - kind: ServiceAccount
-    name: bt-app-cleanup
+    name: bt-k8s-role
     apiGroup: ""
 roleRef:
   kind: Role
-  name: bt-app-cleanup
+  name: bt-k8s-role
   apiGroup: "rbac.authorization.k8s.io"
53 changes: 53 additions & 0 deletions infra/base/templates/reset-dev-mongo.yaml
@@ -0,0 +1,53 @@
apiVersion: batch/v1
kind: CronJob
metadata:
  name: {{ .Release.Name }}-reset-dev-mongo
  namespace: bt
spec:
  schedule: "0 5 * * *" # Daily at 5 AM, after datapuller
  timeZone: America/Los_Angeles
  concurrencyPolicy: Forbid
  suspend: false
  jobTemplate:
    spec:
      ttlSecondsAfterFinished: 180
      template:
        spec:
          serviceAccountName: bt-k8s-role
          containers:
            - name: reset-dev-mongo
              image: alpine/k8s:1.29.11
              command:
                - sh
                - -c
                - |
                  set -e # Exit immediately if a command fails
                  # Find stage and dev MongoDB pods
                  stage_pod=$(kubectl get pods -o custom-columns=NAME:.metadata.name --no-headers -n bt | grep 'bt-stage-mongo')
                  dev_pod=$(kubectl get pods -o custom-columns=NAME:.metadata.name --no-headers -n bt | grep 'bt-dev-mongo')
                  # Dump staging MongoDB state
                  echo "Dumping staging MongoDB state..."
                  kubectl exec --namespace=bt \
                    "$stage_pod" -- mongodump --archive=/tmp/stage_backup.gz --gzip
                  kubectl cp --namespace=bt \
                    "$stage_pod:/tmp/stage_backup.gz" /tmp/stage_backup.gz
                  kubectl exec --namespace=bt \
                    "$stage_pod" -- rm /tmp/stage_backup.gz
                  # Restore dump into dev MongoDB
                  echo "Restoring dump into dev MongoDB..."
                  kubectl cp --namespace=bt \
                    /tmp/stage_backup.gz "$dev_pod:/tmp/stage_backup.gz"
                  kubectl exec --namespace=bt \
                    "$dev_pod" -- mongosh bt --eval "db.dropDatabase()"
                  kubectl exec --namespace=bt \
                    "$dev_pod" -- mongorestore --archive=/tmp/stage_backup.gz --gzip --drop
                  kubectl exec --namespace=bt \
                    "$dev_pod" -- rm /tmp/stage_backup.gz
                  # Cleanup local files
                  rm /tmp/stage_backup.gz
                  echo "MongoDB reset completed successfully!"
          restartPolicy: Never
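Operationally, the cronjob can be inspected, run ahead of schedule, or paused with standard kubectl; the manual GitHub Action above uses the same create-job-from-cronjob pattern. A sketch, with the bt-base release name implied by the workflow:

# Inspect the schedule and the last run
kubectl get cronjob bt-base-reset-dev-mongo -n bt
# Trigger the reset immediately instead of waiting for 5 AM
kubectl create job --from=cronjob/bt-base-reset-dev-mongo reset-dev-mongo-adhoc -n bt
# Suspend the nightly reset during maintenance windows
kubectl patch cronjob bt-base-reset-dev-mongo -n bt -p '{"spec":{"suspend":true}}'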
