Quokka Trading Platfo®m is a minimal viable product (MVP) for trading. It is capable of:
- User management
  - registration
  - authentication
- Order management
  - creation of
    - limit orders, filling upon reaching the provided price
    - market orders, filling at the current price
  - deletion of pending limit orders
- Asset management
  - registering an asset for price tracking
  - registering an order for price tracking
  - connecting to an external data provider for tracking the prices of the requested assets
- Postman collection for API testing purposes
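The two order types above differ only in when they fill: a market order fills immediately at the current price, while a limit order stays pending until the market reaches its limit. A minimal sketch of that rule (illustrative names, not the actual OMS code):

```python
# Minimal sketch of the two order types; names are illustrative,
# not the actual OMS classes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Order:
    symbol: str
    quantity: int
    limit_price: Optional[float] = None  # None means market order
    filled_at: Optional[float] = None

def try_fill(order: Order, current_price: float) -> bool:
    """Fill a market order at the current price; fill a buy limit
    order only once the price reaches the limit or below."""
    if order.filled_at is not None:
        return False  # already filled
    if order.limit_price is None or current_price <= order.limit_price:
        order.filled_at = current_price
        return True
    return False
```

For example, a buy limit order at 100 stays pending while ticks arrive at 105, and fills on the first tick at or below 100.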
Under the hood, Quokka Trading Platfo®m has a high-performance order management engine. Its mission-critical state is handled fully in memory, while every change to that state eventually gets persisted.
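The in-memory-first, eventually-persisted pattern described above can be sketched as follows. This is a simplified illustration, not the engine's actual code: the deque stands in for the durable RabbitMQ write-ahead log, and a separate persister would drain it into the database.

```python
# Sketch of the in-memory-first pattern: every state change is applied
# to the in-memory store immediately and appended to a write-ahead log,
# which a persister drains into the database later. The deque stands in
# for the durable RabbitMQ queue; all names are illustrative.
from collections import deque

class OrderBook:
    def __init__(self):
        self.pending = {}   # mission-critical state, kept in memory
        self.wal = deque()  # stand-in for the durable queue

    def place_order(self, order_id: str, order: dict) -> None:
        self.pending[order_id] = order                # serve reads from memory
        self.wal.append(("placed", order_id, order))  # persisted eventually

    def drain_wal(self):
        """What the persister would consume and write to the DB."""
        events = list(self.wal)
        self.wal.clear()
        return events
```

Reads never touch the database on the hot path; durability comes from the queue being drained asynchronously.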
- microservice components
- microservice communication (REST API, RabbitMQ)
- Makefile for automating more complex housekeeping tasks
- fast multi-stage Docker builds with caching enabled
- Postman collection (in Git)
- Helm chart to run on k8s
- Logging
- Log shipping (ELK stack)
- Visibility
  - Prometheus metrics
  - Grafana dashboard
- User registration
- Change password
- Login / Logout
- JWT based authentication / authorization
- Protected routes
- Market order
- Limit order
- Open position
- Sell position
- Cash deposit - CSD
- Cash withdrawal - CSW
- Accounting/balance tracking
- API request based connection to Alpaca
- Stream based connection to Alpaca
- Price trackers for limit order settlement
- write-ahead log persisted in RabbitMQ
- persister component saves data into a relational DB
- reload pending orders from the DB into memory on app start
- keep and serve pending orders from memory
- Minimal frontend
- Login
- Logout
- Responsive web design (Tailwind)
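The cash deposit (CSD) / cash withdrawal (CSW) and accounting/balance-tracking items above could look roughly like this. A simplified sketch under assumed rules (positive-only deposits, no overdraft), not the actual accounting code:

```python
# Simplified sketch of CSD/CSW balance tracking: each cash movement is
# recorded as a ledger entry and the balance is the running sum.
# Names and the rejection rules are illustrative assumptions.

class Account:
    def __init__(self):
        self.ledger = []  # (type, signed amount) entries

    def deposit(self, amount: float) -> None:   # CSD
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.ledger.append(("CSD", amount))

    def withdraw(self, amount: float) -> None:  # CSW
        if amount > self.balance():
            raise ValueError("insufficient funds")
        self.ledger.append(("CSW", -amount))

    def balance(self) -> float:
        return sum(amount for _, amount in self.ledger)
```

Keeping an append-only ledger rather than a single mutable balance makes the account history auditable, which fits the platform's write-ahead-log approach to persistence.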
Quokka Trading Platfo®m has three tiers:
- Frontend
- Backend
- Persistence
All components are dockerized, and docker-compose makes it easy to start the myriad of components.
Frontend is implemented using React.
Quokka Trading Platfo®m's backend follows the microservice architecture. There are 4 backend components separated by their main distinct responsibilities:
- API is responsible for exposing account-facing functionalities through a RESTful API
- Order Management System (OMS) is responsible for tracking all trade-related actions
- Asset cache is responsible for:
  - providing RESTful APIs for asset-related functionalities
  - subscribing to an external streaming data provider
  - notifying OMS when a limit order's price has been met
- Persister is responsible for persisting any message it consumes through its queue
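The asset cache's limit-price check described above can be illustrated roughly as follows. The callback stands in for the RabbitMQ message the asset cache would send to OMS; class and method names are assumptions, not the actual code:

```python
# Rough sketch of the asset cache's price tracker: for each tracked
# limit order it checks every incoming price tick and fires a callback
# when the limit is met. In the real system the callback would publish
# a message to OMS over RabbitMQ; names here are illustrative.

class PriceTracker:
    def __init__(self, notify_oms):
        self.notify_oms = notify_oms  # stand-in for the RabbitMQ publish
        self.watched = {}             # order_id -> (symbol, limit_price)

    def track(self, order_id: str, symbol: str, limit_price: float) -> None:
        self.watched[order_id] = (symbol, limit_price)

    def on_tick(self, symbol: str, price: float) -> None:
        """Called for every price update from the streaming provider."""
        met = [oid for oid, (sym, limit) in self.watched.items()
               if sym == symbol and price <= limit]
        for oid in met:
            del self.watched[oid]          # stop tracking once settled
            self.notify_oms(oid, price)
```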
All of our backend components are written in Java, except the asset cache, which is written in Python, since the external data provider has no Java SDK.
Persistence is implemented using durable RabbitMQ queues and Postgres relational database.
Frontend:
- React
- Bootstrap
Backend:
- Spring Boot & Spring Security
- JPA, Postgres
- Python
- Docker and Docker-compose (maybe Helm when run on k8s if time permits)
- RabbitMQ
- gRPC
There are two Spring profiles available: the default and a dev profile.
If you want to run the application locally in your IDE, please activate the dev
profile (for example as described here: https://www.baeldung.com/spring-profiles#4-jvm-system-parameter, or any other preferred way).
After you have cloned the GitHub repository, you need to do some initial setup. First, you need to create an account to be able to receive data from Alpaca streaming.
Please register here: Create your Trading API account. After registration you will receive the following important data: APCA_API_KEY_ID and APCA_API_SECRET_KEY.
You need to create a ".env" file in the assetcache/src folder with the following data:
```shell
export APCA_API_KEY_ID={ID you received}
export APCA_API_SECRET_KEY={secret key you received}
export APCA_API_BASE_URL="https://paper-api.alpaca.markets/"
export APCA_API_VERSION={actual version}
export APCA_API_STREAM_URL="https://stream.data.alpaca.markets"
export RABBITMQ_URL="amqp://guest:guest@rabbitmq:5672"
export EXCHANGE="oms_exchange"
export ROUTING_KEY="filled_order_routing_key"
export QUEUE="filled_order_queue"
export PROMETHEUS_MULTIPROC_DIR="/tmp/metrics"
```
This setup allows you to connect to Alpaca streaming and use the predefined RabbitMQ queues in the application.
When you freshly clone the repository, you need to create the DB before the first start. Run the make create-database
command to start the Postgres container and create the needed database.
Set spring.jpa.hibernate.ddl-auto = create
in one of the components' application.properties files to auto-create the DB schema. Once the schema is created, change it back to none
or validate.