BCDA gives eligible model entities access to Medicare claims data via FHIR resources.
The Beneficiary Claims Data API (BCDA) follows the Bulk FHIR Implementation Guide to provide eligible model entities access to their attributed enrollees’ Medicare Parts A, B, and D claims data, including data from any care received outside their organization.
A list of core team members responsible for the code and documentation in this repository can be found in COMMUNITY.md.
Find API documentation on the BCDA website
The steps below are necessary to run the project.
To get started, install some dependencies:
- Install Go
- Install Docker
- Install Docker Compose
- Install SOPS and related tools for secrets management:
brew install sops yq gettext
- Install Ansible Vault and its dependencies (legacy). For further Ansible documentation, see https://docs.ansible.com/ansible/2.4/vault.html
- Install Pre-commit with Gitleaks
- Ensure all dependencies installed above are on your PATH and can be executed directly from the command line.
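For example, you can quickly confirm that each tool resolves from the command line (version output will vary):

# Verify each dependency is on PATH; exact versions will differ
go version
docker --version
docker compose version
sops --version
ansible-vault --version
pre-commit --version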
BCDA uses two approaches for managing secrets and configuration:
For production configuration management, BCDA uses SOPS with AWS KMS encryption to store sensitive and non-sensitive configuration in AWS SSM Parameter Store.
Setup:
- Install required tools:
brew install sops yq gettext
- Navigate to the config module:
cd ops/services/10-config
- Initialize and deploy:
tofu init && tofu apply -target module.sops.local_file.sopsw[0]
Editing Configuration:
# Edit environment-specific configuration
./bin/sopsw -e values/dev.sopsw.yaml
./bin/sopsw -e values/test.sopsw.yaml
./bin/sopsw -e values/sandbox.sopsw.yaml
./bin/sopsw -e values/prod.sopsw.yaml
Deploying Changes:
# Review changes before applying
tofu plan -var env=dev
# Apply changes
tofu apply -var env=dev
For detailed instructions, see ops/services/10-config/README.md.
Note: This section is maintained for backward compatibility with existing workflows.
The files committed in the shared_files/encrypted directory hold secret information, and are encrypted with Ansible Vault.
Setup:
- Create a file named .vault_password in the root directory of the repository.
- You should have been given access to Box. In Box, search for vault_password.txt, copy the text, and paste it into your .vault_password file.
Decrypt/Encrypt Secrets:
# Decrypt secrets
./ops/secrets --decrypt
# Copy to decrypted directory
cp shared_files/encrypted/* shared_files/decrypted/
# After editing, re-encrypt each secret file in the encrypted folder (as these files are committed)
./ops/secrets --encrypt <filename>
Never put passwords, keys, or secrets of any kind in application code. Instead, use the strategy outlined here:
For Production Configuration (Recommended): Use the SOPS approach described above to manage configuration in AWS SSM Parameter Store. This provides centralized, encrypted configuration management across environments.
For Local Development:
- In the project root bcda-app/ directory, create a file called .env.sh. This file is ignored by git and will not be committed:
$ touch .env.sh
- Edit .env.sh to include the bash shebang and any necessary environment variables (see shared_files/decrypted/local.env as well as the section titled 'Environment variables'). It should look like this:
#!/bin/bash
export BCDA_SSAS_CLIENT_ID="<clientID>"
export BCDA_SSAS_SECRET="<clientSecret>"
<other needed env vars>
- Source the file to add the variables to your local development environment:
$ source .env.sh
Optionally, you can edit your ~/.zshrc or ~/.bashrc file to eliminate the need to source the file at each shell start by appending this line:
source [src-path]/bcda-app/.env.sh
where [src-path] is your relative path to the bcda-app repo.
Before we can run the application locally, we need to build the docker images and load the fixtures:
make docker-bootstrap
After that has completed successfully, we can start the containers. Include the --watch flag to automatically rebuild the API and worker containers on code changes.
docker compose up --watch
Once the containers are running, you will need to generate a set of credentials for an ACO so that you can get a token. The loaded fixtures include some ACOs with beneficiaries already attributed to them. The ACOs loaded in the previous step are A9994 and A9996, but you can also look in the application database to view and modify more.
ACO_CMS_ID=<> make credentials
This will generate a client ID and secret that can be used to acquire a token from the System-to-System Authentication Service (SSAS):
curl --location --request POST 'http://localhost:3003/token' \
--header 'Accept: application/json' \
--user '<clientid:secret>'
After we successfully retrieve a token, we can make a request to any of the available endpoints. The Postman collections under test/postman_test/... can be imported into Postman directly and used to make requests, or you can use any tool like curl.
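For instance, a hypothetical curl request kicking off a bulk Patient export might look like this (a sketch only; the port and API version are assumptions, so check docker-compose.yml for the actual port and the BCDA documentation for the endpoints):

# Hypothetical request; localhost:3000 and the v2 endpoint are assumptions
curl --location --request GET 'http://localhost:3000/api/v2/Patient/$export' \
--header 'Accept: application/fhir+json' \
--header 'Prefer: respond-async' \
--header 'Authorization: Bearer <token>'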
Prerequisite: Before running the tests and producing test metrics, you must complete the Build Images step from the Start the API section.
Note: make unit-test will automatically run the command below, so this step is not necessary if you simply want to run the unit tests.
Spin up the Postgres container & run migrations:
$ make unit-test-db
If you are running any tests that require localstack, spin up localstack as well:
$ make unit-test-localstack
Source the required environment variables from ./.vscode/settings.json (under go.testEnvVars) and ./shared_files/decrypted/local.env.
Note: Since we're connecting to Postgres externally, we need to use the local host/port instead.
For VSCode users, these variables are already set by the workspace settings file (./.vscode/settings.json).
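If you are not using VSCode, one way to source those variables is to extract them with jq (a sketch; it assumes .vscode/settings.json is plain JSON without comments):

# Export each key/value pair from go.testEnvVars, then source local.env
eval "$(jq -r '."go.testEnvVars" | to_entries[] | "export \(.key)=\"\(.value)\""' .vscode/settings.json)"
source ./shared_files/decrypted/local.env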
Note: If this is the first time running the tests, follow the instructions in the 'Running unit tests locally' section of this README. Then run:
make load-fixtures
In order to keep the test feedback loop optimized, the following items must be handled by the caller and are not handled by the test targets (a sample setup is sketched after this list):
- Ensuring the compose stack is up and running
- Ensuring the database has been seeded
- Managing images/containers (if Dockerfile changes have occurred, an image rebuild is required and won't occur as part of the test targets)
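For example, a typical pre-test setup might look like this (a sketch; service names may differ, see docker-compose.yml):

# One possible pre-test setup; adjust to your workflow
docker compose up -d            # ensure the compose stack is up and running
make load-fixtures              # ensure the database has been seeded
docker compose build api worker # rebuild images if a Dockerfile has changed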
1. Run the golang linter and gosec:
make lint
2. Run unit tests (this places results and a coverage report in test_results/):
make unit-test
3. Run postman integration tests:
make postman env=local maintenanceMode=""
4. Run smoke tests:
make smoke-test env=local maintenanceMode=""
5. Run the full test suite (executes items 1-4 above):
make test
6. Run performance tests (primarily to be utilized by Jenkins in AWS):
make performance-test
After you have finished updating the Postgres database used for unit testing with the new data, you can update the seed data by running the following command:
make unit-test-db-snapshot
This script will update the ./db/testing/docker-entrypoint-initdb.d/dump.pgdata file.
This file is used to initialize the Postgres db with all of the necessary data needed for the various unit tests.
For more information on initialization, please see db/testing/docker-entrypoint-initdb.d/01-restore.sh.
This script is executed when the Postgres container is launched.
Note: The updated dump.pgdata should be committed with the other associated changes.
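Putting it together, a snapshot update might look like this (a sketch using the targets described above):

# A possible snapshot workflow
make unit-test-db            # start the Postgres test container and run migrations
# ... apply your data changes (psql, fixtures, migrations) ...
make unit-test-db-snapshot   # regenerate dump.pgdata from the updated database
git add db/testing/docker-entrypoint-initdb.d/dump.pgdata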
This step assumes that the user has installed VSCode, the Go language extension available here, and has successfully imported test data to their local database.
To run tests from within VSCode: in a FILENAME_test.go file, a green arrow appears to the left of each test method name, and clicking this arrow runs that single test locally. Tests should not depend on other tests, but if a known-good test is failing, you can run all tests in a given file by going to View -> Command Palette -> Go: Test Package. Alternatively, in some instances, the init() method can be commented out to enable testing of single functions.
Testify mocks can be automatically generated using mockery. Installation and other runtime instructions can be found here. Mockery uses the /.mockery.yml file to define configuration, including which interfaces to build mocks for. To regenerate mocks, simply run mockery.
Run FHIR Conformance tests against your local deployment:
make fhir_testing
See FHIR Testing here for more info on the Inferno tests and the FHIR Scan workflow.
The various BCDA services (api, worker, ssas) require multiple environment variables and config files.
- Environment variables are injected directly into the container environment. For the local docker container, this is done via docker-compose. Deployed Fargate environments may require a larger superset of environment variables, which are managed in param store and listed explicitly in bcda-ops.
- Configuration files (api yaml, certificates, etc.) are stored in S3 and synced to each container via its entrypoint script. For the local environment, this setup is replicated via localstack.
While both environment variables and config files are managed in shared_files and injected into docker containers, they can also be configured for running the bcda and bcdaworker apps outside of docker by setting the following environment variables. The full list of required variables can be found in the docker-compose file for the local environment and in bcda-ops for deployed environments.
bcda:
BCDA_ERROR_LOG <file_path>
BCDA_REQUEST_LOG <file_path>
BCDA_BB_LOG <file_path>
BB_CLIENT_CERT_FILE <file_path>
BB_CLIENT_KEY_FILE <file_path>
BB_SERVER_LOCATION <url>
FHIR_PAYLOAD_DIR <directory_path>
JWT_EXPIRATION_DELTA <integer> (time in hours that JWT access tokens are valid for)
bcdaworker:
BCDA_WORKER_ERROR_LOG <file_path>
BCDA_BB_LOG <file_path>
BB_CLIENT_CERT_FILE <file_path>
BB_CLIENT_KEY_FILE <file_path>
BB_SERVER_LOCATION <url>
FHIR_PAYLOAD_DIR <directory_path>
BB_TIMEOUT_MS <integer>
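For example, an environment file for running bcdaworker outside of docker might look like this (a sketch only; every value below is a placeholder, see docker-compose.yml and bcda-ops for real values):

#!/bin/bash
# Sketch of a worker environment file; all values are placeholders
export BCDA_WORKER_ERROR_LOG=/var/log/bcdaworker-error.log      # hypothetical path
export BCDA_BB_LOG=/var/log/bcda-bb.log                         # hypothetical path
export BB_CLIENT_CERT_FILE=shared_files/decrypted/bb-cert.pem   # hypothetical path
export BB_CLIENT_KEY_FILE=shared_files/decrypted/bb-key.pem     # hypothetical path
export BB_SERVER_LOCATION=https://<blue-button-host>            # placeholder URL
export FHIR_PAYLOAD_DIR=/tmp/bcda-payloads                      # hypothetical path
export BB_TIMEOUT_MS=10000                                      # example timeout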
BCDA maintains database views and automated exports for analytics and metrics tracking. These are designed to provide insights into BCDA usage without exposing PHI/PII or internal database structures.
Location: All insights views and export configurations are managed in the CDAP repository under terraform/services/insights/.
Database views are organized by service (ab2d, bcda) in the views/ directory:
- BCDA Views:
cdap/terraform/services/insights/views/bcda/
Each view is defined in its own file following the naming convention: {env}-{view-name}.view.sql
Note: The original migration file db/migrations/manual/20250331-create_metric_views.up.sql in this repository contains the initial view definitions. However, the authoritative source for these views is now the CDAP repository.
Export configurations are organized by service in the db-exports/ directory:
- BCDA Exports:
cdap/terraform/services/insights/db-exports/bcda/
Each export file schedules a cron job to automatically dump view data to S3 every 6 hours. Export files follow the naming convention: {env}-{view-name}.sql
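As an illustration, a hypothetical view named job-metrics in the prod environment would map to the following files under the two conventions above:

# Hypothetical layout for a view named "job-metrics" in the prod environment
cdap/terraform/services/insights/views/bcda/prod-job-metrics.view.sql   # view definition
cdap/terraform/services/insights/db-exports/bcda/prod-job-metrics.sql   # scheduled export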
Note: These S3 buckets are used to feed AWS QuickSight for the aforementioned metrics and analytics.
You can use docker to run commands against the running containers.
Example: Use docker to look at the api database with psql.
docker run --rm --network bcda-app-net -it postgres psql -h bcda-app-db-1 -U postgres bcda
See docker-compose.yml for the password.
Example: Use docker to run the CLI against an API instance:
docker exec -it bcda-app-api-1 sh -c 'bcda -h'
Follow the installing go + vscode setup guide.
Additional settings found under .vscode/settings.json allow tests to be run within vscode.
To attach to either the api or worker process within its respective docker container (so you can debug, stop at breakpoints, and view local variables during execution), run the appropriate debug command:
make debug-api
or
make debug-worker
Once the containers are up, the program to debug will wait to be connected to dlv. Use dlv directly, or use a remote-attach debugging configuration in vscode.
{
"version": "0.2.0",
"configurations": [
{
"name": "Connect to server",
"type": "go",
"request": "attach",
"mode": "remote",
"port": 4040,
"host": "127.0.0.1"
}
]
}
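To attach with the Delve CLI instead (assuming the debug server is exposed on port 4040, as in the configuration above):

# Connect the Delve CLI to the waiting debug server
dlv connect 127.0.0.1:4040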
Thank you for considering contributing to an Open Source project of the US Government! For more information about our contribution guidelines, see CONTRIBUTING.md.
Anyone committing to this repo must use the pre-commit hook to lower the likelihood that secrets will be exposed. You can install pre-commit using the macOS package manager Homebrew as shown below, or use one of the installation options found in the pre-commit documentation:
brew install pre-commit
Before you can install the hooks, you will need to manually install goimports:
go install golang.org/x/tools/cmd/goimports@latest
After that is installed, we can install the hooks:
pre-commit install
This will download and install the pre-commit hooks specified in .pre-commit-config.yaml, which include gitleaks for secret scanning and go-imports to ensure that any added, copied, or modified Go files are formatted properly.
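Optionally, you can verify the installation by running all configured hooks against the entire repository:

# Run every configured hook (including gitleaks and go-imports) on all files
pre-commit run --all-files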
The project uses Go Modules, allowing you to clone the repo outside of the $GOPATH. This also means that running go get inside the repo will add the dependency to the project, not globally.
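For example, running go get from anywhere inside the repo records the dependency in go.mod and go.sum rather than installing it globally (the module path below is hypothetical):

# Adds the dependency to go.mod/go.sum (hypothetical module path)
go get github.com/example/somelib@v1.2.3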
The BCDA team is taking a community-first and open source approach to the product development of this tool. We believe government software should be made in the open and be built and licensed such that anyone can download the code, run it themselves without paying money to third parties or using proprietary software, and use it as they will.
We know that we can learn from a wide variety of communities, including those who will use or will be impacted by the tool, who are experts in technology, or who have experience with similar technologies deployed in other spaces. We are dedicated to creating forums for continuous conversation and feedback to help shape the design and development of the tool.
We also recognize capacity building as a key part of involving a diverse open source community. We are doing our best to use accessible language, provide technical and process documents, and offer support to community members with a wide variety of backgrounds and skillsets.
Principles and guidelines for participating in our open source community can be found in COMMUNITY.md. Please read them before joining or starting a conversation in this repo or one of the channels listed below. All community members and participants are expected to adhere to the community guidelines and code of conduct when participating in community spaces, including: code repositories, communication channels and venues, and events.
If you have ideas for how we can improve or add to our capacity building efforts and methods for welcoming people into our community, please let us know at [email protected]. If you would like to comment on the tool itself, please let us know by filing an issue on our GitHub repository.
We adhere to the CMS Open Source Policy. If you have any questions, just shoot us an email.
Submit a vulnerability: Vulnerability reports can be submitted through Bugcrowd. Reports may be submitted anonymously. If you share contact information, we will acknowledge receipt of your report within 3 business days.
For more information about our Security, Vulnerability, and Responsible Disclosure Policies, see SECURITY.md.
A Software Bill of Materials (SBOM) is a formal record containing the details and supply chain relationships of various components used in building software.
In the spirit of Executive Order 14028 - Improving the Nation's Cyber Security, an SBOM for this repository is provided here: https://github.com/CMSgov/bcda-app/network/dependencies.
For more information and resources about SBOMs, visit: https://www.cisa.gov/sbom.
This project is in the public domain within the United States, and copyright and related rights in the work worldwide are waived through the CC0 1.0 Universal public domain dedication as indicated in LICENSE.
All contributions to this project will be released under the CC0 dedication. By submitting a pull request or issue, you are agreeing to comply with this waiver of copyright interest.