S3-compatible object storage powered by NATS JetStream - Lightweight, fast, and cloud-native.
NATS is a high‑performance distributed messaging system with pub/sub at its core and a built‑in persistence layer (JetStream) enabling Streaming, Key‑Value, and Object Store. It includes authentication/authorization, multi‑tenancy, and rich deployment topologies.
+-----------------------+ +-----------------------+
| Clients | | Clients |
| (Publish / Subscribe)| | (Apps/Services) |
+-----------+-----------+ +-----------+-----------+
| |
v v
+-----+-----------------------------------+-----+
| NATS Cluster |
| +-----------+ +-----------+ +-----------+ |
| | Server | | Server | | Server | |
| +-----------+ +-----------+ +-----------+ |
+------------------------------------------------+
Modern object stores like MinIO, SeaweedFS, JuiceFS, and AIStore expose S3‑compatible HTTP APIs for simple integration. NATS‑S3 follows this approach to provide S3 access to NATS JetStream Object Store.
NATS-S3 Gateway
+-------------------+ HTTP (S3 API) +--------------------+
| S3 Clients +------------------------->+ nats-s3 |
| (AWS CLI/SDKs) | | HTTP Gateway |
+-------------------+ +----------+---------+
|
|
+--------------------+
| NATS Cluster |
| JetStream Object |
| Store |
+--------------------+
Follow these steps to spin up NATS-S3 alongside a local NATS server and use the AWS CLI to work with a bucket and objects.
- Prerequisites
- Docker (for NATS)
- AWS CLI (v2 recommended)
- Start NATS (with JetStream) via Docker

```sh
docker run -p 4222:4222 -ti nats:latest -js
```

- Create a credentials file

```sh
cat > credentials.json <<EOF
{
  "credentials": [
    {
      "accessKey": "my-access-key",
      "secretKey": "my-secret-key"
    }
  ]
}
EOF
```

- Start the nats-s3 gateway

```sh
# In a separate terminal
./nats-s3 \
  --listen 0.0.0.0:5222 \
  --natsServers nats://127.0.0.1:4222 \
  --s3.credentials credentials.json
```

- Configure your AWS CLI to use the same credentials

```sh
export AWS_ACCESS_KEY_ID=my-access-key
export AWS_SECRET_ACCESS_KEY=my-secret-key
# SigV4 scope requires a region; us-east-1 is common
export AWS_DEFAULT_REGION=us-east-1
```

- Create a bucket

```sh
aws s3 mb s3://bucket1 --endpoint-url=http://localhost:5222
```

- List buckets

```sh
aws s3 ls --endpoint-url=http://localhost:5222
```

- Put an object

```sh
echo "hello world" > file.txt
aws s3 cp file.txt s3://bucket1/hello.txt --endpoint-url=http://localhost:5222
```

- List bucket contents

```sh
aws s3 ls s3://bucket1 --endpoint-url=http://localhost:5222
```

- Download the object

```sh
aws s3 cp s3://bucket1/hello.txt ./hello_copy.txt --endpoint-url=http://localhost:5222
```

- Delete the object

```sh
aws s3 rm s3://bucket1/hello.txt --endpoint-url=http://localhost:5222
```

- Optional: delete the bucket

```sh
aws s3 rb s3://bucket1 --endpoint-url=http://localhost:5222
```

- Prereqs: Go 1.22+, a running NATS server (with JetStream enabled for Object Store).
Build

```sh
make build
```

Run

```sh
./nats-s3 \
  --listen 0.0.0.0:5222 \
  --natsServers nats://127.0.0.1:4222 \
  --s3.credentials credentials.json
```

Flags

- `--listen`: HTTP bind address for the S3 gateway (default `0.0.0.0:5222`).
- `--natsServers`: Comma‑separated NATS server URLs (default from `nats.DefaultURL`).
- `--natsUser`, `--natsPassword`: Optional NATS credentials for connecting to the NATS server.
- `--natsToken`: NATS server token for token-based authentication.
- `--natsNKeyFile`: NATS server NKey seed file path for NKey authentication.
- `--natsCredsFile`: NATS server credentials file path for JWT authentication.
- `--replicas`: Number of NATS replicas for each JetStream element (default 1).
- `--s3.credentials`: Path to the S3 credentials file (JSON format, required).
- `--log.format`: Log output format: `logfmt` or `json` (default `logfmt`).
- `--log.level`: Log level: `debug`, `info`, `warn`, or `error` (default `info`).
- `--http.read-timeout`: HTTP server read timeout (default 15m).
- `--http.write-timeout`: HTTP server write timeout (default 15m).
- `--http.idle-timeout`: HTTP server idle timeout (default 120s).
- `--http.read-header-timeout`: HTTP server read header timeout (default 30s).
Generate coverage profile and HTML report locally:

```sh
make coverage         # writes coverage.out
make coverage-report  # prints total coverage summary
make coverage-html    # writes coverage.html
```

In CI, coverage is generated and uploaded as an artifact.
Tagged releases are built and published via GoReleaser.
- Create and push a tag like `v0.2.0`:

```sh
git tag -a v0.2.0 -m "v0.2.0"
git push origin v0.2.0
```

CI will build multi‑platform archives and attach them to the GitHub Release. Local dry run:

```sh
goreleaser release --snapshot --clean
```

Build the image

```sh
docker build -t nats-s3:dev .
```

Run with a locally running NATS on the same Docker network
```sh
# Start NATS (JetStream) on the host network
# (with --network host, -p mappings are ignored; ports bind directly on the host)
docker run --network host nats:latest -js

# Start nats-s3 on the host network (it listens on port 5222)
# Note: mount the credentials.json file into the container
docker run --network host \
  -v $(pwd)/credentials.json:/credentials.json \
  wpnpeiris/nats-s3:latest \
  --listen 0.0.0.0:5222 \
  --natsServers nats://127.0.0.1:4222 \
  --s3.credentials /credentials.json
```

Test with AWS CLI
```sh
export AWS_ACCESS_KEY_ID=my-access-key
export AWS_SECRET_ACCESS_KEY=my-secret-key
export AWS_DEFAULT_REGION=us-east-1
aws s3 ls --endpoint-url=http://localhost:5222
```

Pull and run the published container image from GitHub Container Registry.
Set a version tag (example: v0.0.2)

```sh
IMAGE_TAG=v0.0.2
docker pull ghcr.io/wpnpeiris/nats-s3:${IMAGE_TAG}
```

Run against a host‑running NATS
```sh
# Start NATS locally (JetStream enabled)
docker run --network host nats:latest -js

# Start nats-s3 on the host network, pointing at the local NATS server
docker run --network host \
  -v $(pwd)/credentials.json:/credentials.json \
  ghcr.io/wpnpeiris/nats-s3:${IMAGE_TAG} \
  --listen 0.0.0.0:5222 \
  --natsServers nats://127.0.0.1:4222 \
  --s3.credentials /credentials.json
```

Test with AWS CLI
```sh
export AWS_ACCESS_KEY_ID=my-access-key
export AWS_SECRET_ACCESS_KEY=my-secret-key
export AWS_DEFAULT_REGION=us-east-1
aws s3 ls --endpoint-url=http://localhost:5222
```

nats-s3 uses AWS Signature Version 4 (SigV4) for S3 API authentication. The gateway loads credentials from a JSON file and supports multiple users.
The credential store is a JSON file containing one or more AWS-style access/secret key pairs:
```json
{
  "credentials": [
    {
      "accessKey": "AKIAIOSFODNN7EXAMPLE",
      "secretKey": "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
    },
    {
      "accessKey": "AKIAI44QH8DHBEXAMPLE",
      "secretKey": "je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY"
    }
  ]
}
```

Requirements:
- Access keys must be at least 3 characters
- Secret keys must be at least 8 characters
- Access keys cannot contain reserved characters (`=` or `,`)
- Each access key must be unique
See credentials.example.json for a complete example.
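The requirements above can be checked mechanically before starting the gateway. Here is a minimal illustrative sketch in Python (the gateway itself is written in Go; `validate_credentials` is a hypothetical helper, not part of nats-s3):

```python
import json

RESERVED_CHARS = {"=", ","}  # reserved by the credential format


def validate_credentials(raw: str) -> list[str]:
    """Check a credentials JSON document against the documented rules.

    Returns a list of human-readable violations (empty means valid).
    """
    errors = []
    seen = set()
    for entry in json.loads(raw).get("credentials", []):
        access = entry.get("accessKey", "")
        secret = entry.get("secretKey", "")
        if len(access) < 3:
            errors.append(f"access key {access!r} is shorter than 3 characters")
        if len(secret) < 8:
            errors.append(f"secret key for {access!r} is shorter than 8 characters")
        if any(c in RESERVED_CHARS for c in access):
            errors.append(f"access key {access!r} contains a reserved character (= or ,)")
        if access in seen:
            errors.append(f"duplicate access key {access!r}")
        seen.add(access)
    return errors


# A well-formed file passes; a short key with a reserved character does not.
print(validate_credentials(
    '{"credentials": [{"accessKey": "my-access-key", "secretKey": "my-secret-key"}]}'
))  # []
print(validate_credentials(
    '{"credentials": [{"accessKey": "a=", "secretKey": "short"}]}'
))
```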
- Start the gateway with a credentials file:

```sh
./nats-s3 \
  --listen 0.0.0.0:5222 \
  --natsServers nats://127.0.0.1:4222 \
  --s3.credentials credentials.json
```

- Clients sign S3 requests using SigV4 with any valid credential from the file:

```sh
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-east-1
aws s3 ls --endpoint-url=http://localhost:5222
```

- The gateway verifies the SigV4 signature and, on success, processes the request.
- Both header-based SigV4 and presigned URLs (query-string SigV4) are supported.
- Time skew of ±5 minutes is allowed; presigned URLs honor X-Amz-Expires.
- Multiple users can share the same gateway, each with their own credentials.
- NATS server authentication (`--natsUser`/`--natsPassword`) is independent of S3 credentials.
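Under SigV4 the secret key is never sent over the wire; client and gateway each derive a signing key via AWS's documented HMAC-SHA256 chain over the date, region, and service, then compare signatures. A minimal Python sketch of that standard derivation (illustrative only; nats-s3's actual implementation is in Go):

```python
import hashlib
import hmac


def _hmac(key: bytes, msg: str) -> bytes:
    return hmac.new(key, msg.encode("utf-8"), hashlib.sha256).digest()


def derive_signing_key(secret_key: str, date: str, region: str, service: str = "s3") -> bytes:
    """Derive the SigV4 signing key: HMAC chain seeded with "AWS4" + secret."""
    k_date = _hmac(("AWS4" + secret_key).encode("utf-8"), date)  # date is YYYYMMDD
    k_region = _hmac(k_date, region)
    k_service = _hmac(k_region, service)
    return _hmac(k_service, "aws4_request")


key = derive_signing_key("wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY", "20250101", "us-east-1")
# The request signature is then HMAC-SHA256(key, string_to_sign), hex-encoded.
print(key.hex())
```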
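The ±5 minute skew window mentioned above amounts to a simple absolute-difference check on the request timestamp. An illustrative Python sketch (the gateway's Go implementation may differ in details):

```python
from datetime import datetime, timedelta, timezone

MAX_SKEW = timedelta(minutes=5)


def within_skew(request_time: datetime, now: datetime, max_skew: timedelta = MAX_SKEW) -> bool:
    """Accept a request whose signed timestamp is within +/- max_skew of server time."""
    return abs(now - request_time) <= max_skew


now = datetime(2025, 1, 1, 12, 0, 0, tzinfo=timezone.utc)
print(within_skew(now - timedelta(minutes=4), now))  # True: 4 minutes old
print(within_skew(now - timedelta(minutes=6), now))  # False: outside the window
```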
- See ROADMAP.md for planned milestones.
- Contributions welcome! See CONTRIBUTING.md for how to get started.
- Please follow our CODE_OF_CONDUCT.md.
NATS-S3 achieves 100% pass rate on core S3 API operations with comprehensive conformance testing. See CONFORMANCE.md for:
- Full test coverage details (25+ tests)
- S3 API feature matrix
- Instructions for running tests locally