A simple object storage system with a FastAPI backend and Python client. The system provides a REST API for storing and retrieving binary objects, with support for pagination, prefix filtering, and proper error handling.
- Simple key-value object storage
- RESTful API with FastAPI backend
- Python client with retry logic and connection pooling
- Support for pagination when listing objects
- Proper error handling and status codes
- Rate limiting support
- API key authentication
Install the package with the deployment extras:

```shell
pip install .[deploy]
```

The `RestStore` client provides a simple interface for interacting with the object store:
```python
from client import RestStore

# Initialize the client
store = RestStore.create(
    base_url='http://localhost:8000',
    api_key='your-api-key'
)

# Store an object
store.put('my-key', b'Hello, World!')

# Check if an object exists
if store.exists('my-key'):
    # Retrieve an object
    data = store.get('my-key')
    print(data.decode())  # Hello, World!

# Delete an object
store.delete('my-key')

# List all objects (with automatic pagination)
for key in store.keys():
    print(key)
```

For better resource management, you can use the client as a context manager:
```python
with RestStore.create(base_url='http://localhost:8000', api_key='your-api-key') as store:
    store.put('temp-key', b'temporary data')
    data = store.get('temp-key')
# Connection is automatically closed after the context exits
```

When initializing the client, you can configure several parameters:
```python
store = RestStore.create(
    base_url='http://localhost:8000',
    api_key='your-api-key',
    timeout=30.0,     # Request timeout in seconds
    max_retries=3,    # Maximum number of retry attempts
    retry_delay=1.0   # Initial delay between retries (doubles after each attempt)
)
```

The client provides proper error handling through exceptions:
```python
# Missing keys raise KeyError
try:
    data = store.get('nonexistent-key')
except KeyError as e:
    print(f"Object not found: {e}")

# Backends that cannot enumerate keys raise NotImplementedError
try:
    for key in store.keys():
        print(key)
except NotImplementedError as e:
    print(f"Listing keys not supported: {e}")
```

Start the FastAPI server using uvicorn:
```shell
uvicorn objectstore.app:app --host 0.0.0.0 --port 8000
```

Then create an API token:

```shell
python -m objectstore.auth_tokens add api-user --ttl 365 --scope read --scope write --scope delete
```

This creates a tokens.json file with your API credentials.
Create a `.env` file from the template:

```shell
cp dotenv.template .env
```

Edit `.env` based on your storage backend choice (see below).
Use case: Quick testing, data is NOT persisted

Configuration:

- Leave `STORAGE_CONFIG` commented out in `.env`
- No changes needed to `docker-compose.yml`

Start the service:

```shell
docker compose up -d
```

Use case: Persistent local file storage
Configuration:

1. In `.env`, uncomment and set:

   ```shell
   STORAGE_CONFIG=./storage.yaml
   HOST_STORAGE_PATH=./data
   ```

2. In `docker-compose.yml`, uncomment both volume mounts:

   ```yaml
   # Optional: Uncomment for YAML-based storage config (S3, etc.)
   - ${STORAGE_CONFIG}:/app/storage.yaml:ro
   # Optional: Uncomment for filesystem-based storage
   - ${HOST_STORAGE_PATH}:/data
   ```

3. Create the storage config:

   ```shell
   cp storage.yaml.template storage.yaml
   ```

   The default config uses `AsyncFilesystemStore` with `/data` as the root path.

4. Start the service:

   ```shell
   docker compose up -d
   ```
Use case: AWS S3, MinIO, or other S3-compatible object storage
Configuration:

1. Create an S3 storage config file `storage.yaml`:

   ```yaml
   stores:
     s3:
       type: AsyncBucketStore
       config:
         bucket_name: ${S3_BUCKET}
         endpoint_url: ${S3_ENDPOINT}
         s3_access_key: ${S3_ACCESS_KEY}
         s3_secret_key: ${S3_SECRET_KEY}
   main: s3
   ```

2. In `.env`, set:

   ```shell
   STORAGE_CONFIG=./storage.yaml
   # S3 credentials (adjust as needed)
   S3_BUCKET=your-bucket-name
   S3_ENDPOINT=https://s3.amazonaws.com
   S3_ACCESS_KEY=your-access-key
   S3_SECRET_KEY=your-secret-key
   ```

3. In `docker-compose.yml`, uncomment the storage config volume mount:

   ```yaml
   # Optional: Uncomment for YAML-based storage config (S3, etc.)
   - ${STORAGE_CONFIG}:/app/storage.yaml:ro
   ```

4. Start the service:

   ```shell
   docker compose up -d
   ```
You can use any storage backend provided by amplify-storage-utils by creating a `storage.yaml` file with the appropriate configuration. See the amplify-storage-utils documentation for available store types and their configuration options. All stores follow the same YAML format shown in the examples above.
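For instance, a filesystem-backed config in the same format would plausibly look like the sketch below. `AsyncFilesystemStore` and the `/data` root path come from the default template described earlier, but the exact option name (`root_path` here) is an assumption; check the amplify-storage-utils documentation for the real schema.

```yaml
# Sketch only: option names under `config` may differ in amplify-storage-utils
stores:
  local:
    type: AsyncFilesystemStore
    config:
      root_path: /data   # assumed key name for the store's root directory
main: local
```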
```shell
# Check service health
curl http://localhost:8000/health

# Check logs
docker compose logs -f object-store
```

Environment variables can be configured in the `.env` file (copy from `dotenv.template`).
- `HOST_PORT` - Port exposed on host (default: 8000)
- `TOKENS_FILE` - Path to tokens.json on host (default: ./tokens.json)
- `STORAGE_CONFIG` - Path to storage YAML on host (optional, requires volume mount)
- `HOST_STORAGE_PATH` - Path to data directory on host for filesystem storage (optional, requires volume mount)
- `WORKERS` - Number of uvicorn workers (default: 1)
- `LOG_LEVEL` - Logging level: debug, info, warning, error (default: info)
- `STORAGE_NAME` - Specific store name from config when multiple stores are defined (optional)
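Putting those variables together, a `.env` for the filesystem-backed setup might look like the following sketch (all values are illustrative):

```shell
# Illustrative .env for filesystem-backed storage
HOST_PORT=8000
TOKENS_FILE=./tokens.json
STORAGE_CONFIG=./storage.yaml
HOST_STORAGE_PATH=./data
WORKERS=1
LOG_LEVEL=info
```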
The project includes a comprehensive test suite. To use it, make sure you install the `test` optional dependencies (e.g. `pip install .[test]`).
You will need a running service to test against (see above) and a token with read, write, and delete scopes.
You can create the token as follows:
```shell
python -m objectstore.auth_tokens add my-token --ttl 30 --scope read --scope write --scope delete
```

This will print a token, which you should then set as the value of the `TEST_API_KEY` environment variable:

```shell
export TEST_API_KEY={your token here}
```

By default, token metadata is stored in `tokens.json`, which is where the server expects to find it.
Then run tests:
```shell
python tests/test.py
python tests/async_test.py
```

- `PUT /objects/{key}` - Store an object
- `GET /objects/{key}` - Retrieve an object
- `HEAD /objects/{key}` - Check if an object exists
- `DELETE /objects/{key}` - Delete an object
- `GET /objects` - List objects (supports pagination and prefix filtering)
Note that keys with special characters must be URL-encoded and decoded. This is handled automatically by the client and server-side implementations.
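As an illustration of that encoding, using only the Python standard library (the exact encoding rules the client applies may differ):

```python
from urllib.parse import quote, unquote

# Percent-encode a key containing '/' and spaces so it is safe in a URL path.
# safe='' ensures '/' is encoded too, so it is not mistaken for a path separator.
key = "reports/2024 summary.txt"
encoded = quote(key, safe="")
print(encoded)   # reports%2F2024%20summary.txt

# The server side reverses the transformation to recover the original key.
print(unquote(encoded) == key)   # True
```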