This project is not yet officially supported or endorsed by Red Hat.
3scale dump is an unofficial shell script for dumping a Red Hat 3scale On-premises project. It collects more, and better-formatted, information than the regular OpenShift dump script.
Stable version at: https://raw.githubusercontent.com/estevaobk/3scaledump/1.0-stable/3scale-dump.sh
After downloading the script, make it executable:
$ chmod +x 3scale-dump.sh
$ ./3scale-dump.sh <3scale Project> [Compress Format] 2>&1 | tee 3scale-dump-logs.txt
- `3scale Project`: the official project hosting 3scale inside Red Hat OpenShift.
- `Compress Format`: how the log files from the pods are going to be compressed. Possible values are: `gzip`, `xz` or `auto` for auto-detect. NOTE: leaving this value empty is the same as `auto`.
The text file `3scale-dump-logs.txt` provides more information about the data retrieval process, including whether anything went wrong.
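The `auto` detection logic is not documented in detail here; a minimal sketch of how such a detection could work (hypothetical, the actual logic in `3scale-dump.sh` may differ) is:

```shell
#!/bin/sh
# Hypothetical sketch of 'auto' compress-format detection;
# the real 3scale-dump.sh logic may differ.
detect_compress() {
    if command -v xz >/dev/null 2>&1; then
        echo "xz"
    elif command -v gzip >/dev/null 2>&1; then
        echo "gzip"
    else
        echo "none"
    fi
}

COMPRESS_FORMAT="${1:-auto}"   # an empty argument behaves like 'auto'
[ "$COMPRESS_FORMAT" = "auto" ] && COMPRESS_FORMAT="$(detect_compress)"
echo "Compress format: $COMPRESS_FORMAT"
```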
After the script executes successfully, a file named `3scale-dump.tar` containing the 3scale project information will exist in the current directory. Note that it is not compressed (e.g. `.tar.gz` or `.tar.xz`): the logs from the pods have already been compressed individually, so the archive's sole purpose is to bundle all the information together.
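Because the archive itself is uncompressed, a plain `tar -xf 3scale-dump.tar` is enough to unpack it. A small self-contained illustration of the same layout (`demo-dump` is just a stand-in name for this example):

```shell
# Build a stand-in for the dump layout: a plain tar archive whose log
# files are individually gzip-compressed, mirroring 3scale-dump.tar.
mkdir -p demo-dump/logs
echo "sample log line" | gzip > demo-dump/logs/system-app.gz

tar -cf demo-dump.tar demo-dump   # no -z/-J flag: tar only bundles the files

# Unpacking needs no compression flag either:
tar -xf demo-dump.tar
tar -tf demo-dump.tar
```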
The directory `3scale-dump` is created under the current working directory and is used as a temporary location to store the configuration files while they are retrieved, before they are archived into the dump file. It is typically cleaned up automatically, unless an unexpected file or directory is present inside it.
With the exception of the Pods and Events information, OpenShift-related configuration is fetched both as a Single File and as several Object Files. The information assembled in the Single File is the same as that distributed across the Object Files; it is up to the Engineer to choose the preferred format for reading the retrieved data.
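A hedged sketch of how the two formats can be produced (the `oc` commands are shown as comments; a stub object list stands in for `oc get ... -o name` so the snippet runs anywhere, and all names below are illustrative):

```shell
mkdir -p demo/dc

# Single File - the real command would be something like:
#   oc -n "$PROJECT" get dc -o yaml > demo/dc.yaml
printf 'kind: List\nitems: []\n' > demo/dc.yaml

# Object Files - the real object listing would come from:
#   oc -n "$PROJECT" get dc -o name
objects="apicast-staging apicast-production system-app"
for obj in $objects; do
    # Real command: oc -n "$PROJECT" get dc "$obj" -o yaml > "demo/dc/$obj.yaml"
    printf 'kind: DeploymentConfig\nmetadata:\n  name: %s\n' "$obj" > "demo/dc/$obj.yaml"
done
ls demo/dc
```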
- Pods:
  - All Pods: `/status/pods-all.txt`
  - Running Pods: `/status/pods.txt`
- Events: `/status/events.txt`
- DeploymentConfigs:
  - Single: `3scale-dump/dc.yaml`
  - Objects: `3scale-dump/dc/[object].yaml`
- Logs:
  - Files: `3scale-dump/logs/[pod].[gz,xz]`

NOTE: a shell script is included at `3scale-dump/logs/uncompress-logs.sh` to uncompress all the logs.
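`uncompress-logs.sh` itself is not reproduced here; a minimal equivalent, assuming the logs sit under `3scale-dump/logs` and are suffixed `.gz` or `.xz`, could look like:

```shell
# Minimal stand-in for uncompress-logs.sh (the bundled script may differ):
# decompress every .gz/.xz log file in the given directory, in place.
uncompress_logs() {
    dir="${1:-3scale-dump/logs}"
    for f in "$dir"/*.gz; do
        [ -e "$f" ] && gunzip "$f"
    done
    for f in "$dir"/*.xz; do
        [ -e "$f" ] && unxz "$f"
    done
}
```

For example, `uncompress_logs 3scale-dump/logs` after unpacking the dump archive.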
- Secrets:
  - Single: `3scale-dump/secrets.yaml`
  - Objects: `3scale-dump/secrets/[object].yaml`
- Routes:
  - Single: `3scale-dump/routes.yaml`
  - Objects: `3scale-dump/routes/[object].yaml`
- Services:
  - Single: `3scale-dump/services.yaml`
  - Objects: `3scale-dump/services/[object].yaml`
- Images:
  - Single: `3scale-dump/images.yaml`
  - Objects: `3scale-dump/images/[object].yaml`
- ConfigMaps:
  - Single: `3scale-dump/configmaps.yaml`
  - Objects: `3scale-dump/configmaps/[object].yaml`
- Persistent Volumes (PV):
  - Single: `3scale-dump/pv.yaml` and `3scale-dump/pv/describe.txt`
  - Objects: `3scale-dump/pv/[object].yaml` and `3scale-dump/pv/describe/[object].txt`
- Persistent Volume Claims (PVC):
  - Single: `3scale-dump/pvc.yaml` and `3scale-dump/pvc/describe.txt`
  - Objects: `3scale-dump/pvc/[object].yaml` and `3scale-dump/pvc/describe/[object].txt`
- Service Accounts:
  - Single: `3scale-dump/serviceaccounts.yaml`
  - Objects: `3scale-dump/serviceaccounts/[object].yaml`
- Node:
  - File: `/status/node.txt`
The directories `apicast-staging` and `apicast-production` are created inside `/status` and should contain information related to both pods (if running), along with some additional debug (stderr) information from the retrieval process.
- Files: `/status/apicast-[staging|production]/3scale-echo-api-[staging|production].txt`
- Files: `/status/apicast-[staging|production]/apicast-[staging|production].json`
- Debug: `/status/apicast-[staging|production]/apicast-[staging|production]-json-debug.txt`
The following files depend on the value of the `APICAST_MANAGEMENT_API` variable on both the Staging and Production APIcast pods:

- Management API - Debug: `/status/apicast-[staging|production]/mgmt-api-debug.json`
- Management API - Status: `/status/apicast-[staging|production]/mgmt-api-debug-status-[info|live|ready].txt`
NOTE: a shell script is included at `/status/apicast-[staging|production]/python-json.sh` to convert all the `.json` files inside the `/status/apicast-[staging|production]` directories from a single line into multiple lines, in case the `python` utility is installed locally.
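`python-json.sh` is not reproduced here; a minimal equivalent using Python's standard `json.tool` module (falling back to `python3` when only that binary is installed) could be:

```shell
# Minimal stand-in for python-json.sh (the bundled script may differ):
# reflow every single-line .json file in a directory into indented,
# multi-line JSON using Python's stdlib json.tool module.
pretty_json() {
    dir="${1:-.}"
    py="$(command -v python3 || command -v python)" || return 1
    for f in "$dir"/*.json; do
        [ -e "$f" ] || continue
        "$py" -m json.tool "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    done
}
```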
- Files: `/status/apicast-[staging|production]/certificate.txt` and `/status/apicast-[staging|production]/certificate-showcerts.txt`
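For reference, certificate files of this kind are typically the output of `openssl s_client` against the APIcast routes (the exact hosts and flags used by the script may differ). A self-contained illustration of the inspection involved, using a throwaway self-signed certificate (`demo.example.com` is purely illustrative):

```shell
# Generate a throwaway self-signed certificate for demonstration only.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=demo.example.com" -keyout demo.key -out demo.crt 2>/dev/null

# Inspect it the way a dumped certificate.txt can be inspected:
openssl x509 -in demo.crt -noout -subject -dates

# Against a live route, the data would come from something like:
#   openssl s_client -connect "$APICAST_ROUTE:443" -showcerts < /dev/null
```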
- Files: `/status/project.txt` and `/status/pods-run-as-user.txt`

NOTE: knowing which user the pods use to mount the PVs/PVCs helps to further troubleshoot database-level issues.
- File: `/status/sidekiq.txt`
- This project needs to be added to the official 3scale repositories after its proper validation.
- On 2.6 On-premises, the `apicast-wildcard-router` pod no longer exists. This was the single pod containing the `openssl` utility used to validate both the APIcast Staging and Production certificates. This process needs to be executed from inside a pod, since the OpenShift node already adds any self-generated certificate as a valid Certificate Authority (CA).
- The script is not tested or validated against OpenShift Container Platform (OCP) 4.X, only 3.11; however, OCP 4.X is still not widely used.
- Several items raised on the JIRA THREESCALE-2588 will need to be addressed in a future stable release (most likely `2.0-stable`).