This project tests event sourced actors whose events are then read by a projection.
It currently supports the following alternative Akka Persistence and Akka Projection setups:
- R2DBC (Postgres) as the event sourcing event store and the projection using R2DBC (Postgres)
- Cassandra as the event sourcing event store and the projection using JDBC (Postgres)
- JDBC (Postgres) as the event sourcing event store and the projection using JDBC (Postgres)
Start a local PostgreSQL server on the default port 5432. Note that this is also needed when testing with Cassandra.
docker compose -f docker/docker-compose-postgres.yml up --wait
For testing with R2DBC you need to create the tables with:
docker exec -i postgres-db psql -U postgres -t < ddl-scripts/create_tables_postgres.sql
For testing with Cassandra, start a local Cassandra in addition to the PostgreSQL:
docker compose -f docker/docker-compose-cassandra.yml up --wait
Adjust the includes in local.conf to choose testing with Cassandra, R2DBC or JDBC.
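For example, local.conf might select a setup with a single include (the file names here are illustrative assumptions; check the config files shipped with the project):

```
# Pick exactly one of the alternatives, e.g. Cassandra:
include "cassandra"
# include "r2dbc"
# include "jdbc"
```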
Start the application:
sbt "run 2551"
Start a test run:
curl -X POST --data '{"name":"","nrActors":1000, "messagesPerActor": 100, "concurrentActors": 100, "bytesPerEvent": 100, "timeout": 60000}' --header "content-type: application/json" http://127.0.0.1:8051/test
The params are:
- nrActors: how many persistent actors to create
- messagesPerActor: how many messages per actor; the total number of messages will be nrActors * messagesPerActor
- concurrentActors: how many actors to have persisting events at the same time; set to the same as nrActors to have them all created at once
- timeout: how long to wait for all the messages to reach the projection in seconds
the response gives back a test name and an expected event total.
the expected event total is the nrActors
* messagesPeractor
* ${event-processor.nr-projections}
.
{"expectedmessages":200000,"testname":"test-1602762703160"}
Multiple projections can be run to increase the load on the tagging infrastructure while not overloading the normal event log. Each projection gets its own tag for the same reason. A real production application would have different projections use the same tag.
The test checks that every message makes it into the projection. These are stored in the events table. Duplicates are detected with a primary key.
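As a rough sketch of how a primary key can catch duplicates (the columns here are assumptions, not the project's actual schema):

```sql
-- Hypothetical schema: the composite primary key makes a second insert of
-- the same event for the same projection fail, exposing duplicate delivery.
CREATE TABLE IF NOT EXISTS events (
  name          VARCHAR(256) NOT NULL, -- test name
  projection_id INT          NOT NULL,
  event         VARCHAR(256) NOT NULL, -- unique per persistence id and sequence nr
  PRIMARY KEY (name, projection_id, event)
);
```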
To inspect the database:
docker exec -it postgres-db psql -U postgres
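Once connected you can, for example, list the tables and count projected events:

```
\dt
select count(*) from events;
```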
Each persistent actor is responsible for persisting, one at a time, the number of events it is instructed to. This means the request to persist the events can be retried, so even with failures writing to the messages table the test will eventually persist the right number of events.
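A minimal sketch of that idea, assuming hypothetical command and event types (this is not the project's actual actor):

```scala
import akka.actor.typed.Behavior
import akka.persistence.typed.PersistenceId
import akka.persistence.typed.scaladsl.{Effect, EventSourcedBehavior}

object TestActor {
  // Hypothetical command: persist events until `total` have been journaled.
  final case class PersistAll(total: Int)
  // Hypothetical event: one persisted message.
  final case class Persisted(seqNr: Int)

  def apply(id: String): Behavior[PersistAll] =
    EventSourcedBehavior[PersistAll, Persisted, Int](
      persistenceId = PersistenceId.ofUniqueId(id),
      emptyState = 0, // events persisted so far
      commandHandler = (persisted, cmd) =>
        // Persist one event at a time; because the state counts what is
        // already in the journal, a retried request never over-persists.
        if (persisted < cmd.total) Effect.persist(Persisted(persisted + 1))
        else Effect.none,
      eventHandler = (persisted, _) => persisted + 1
    )
}
```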
Before every test the messages and tag_views tables are truncated, meaning that when investigating failures the only messages in these tables are from that test.
The projection table events is not cleaned between tests but the table is keyed by a unique test name. To see the events in that table:
select count(*) from events where name = 'test-1602761729929';
To test performance, rather than validation of projection correctness, you can set the config:
event-processor.read-only = on
Note that you can test write throughput by only starting nodes with Akka Cluster role "write-model".
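With standard Akka Cluster configuration that looks like this (how the project actually assigns roles is not shown here):

```
akka.cluster.roles = ["write-model"]
```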
The projection can be configured to randomly fail at a given rate of the messages, resulting in the projection being restarted from the last saved offset. This helps test the "exactly once" guarantee in the event of failures.
This can be enabled with:
event-processor.projection-failure-every = 100
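A rough sketch of how such failure injection could be implemented in an Akka Projection handler (hypothetical code, not the project's implementation):

```scala
import java.util.concurrent.atomic.AtomicLong
import scala.concurrent.Future
import akka.Done
import akka.projection.scaladsl.Handler

// Hypothetical handler: fail every Nth envelope so the projection is
// restarted and replays from the last saved offset.
class FailEveryN[Envelope](n: Int, delegate: Envelope => Future[Done])
    extends Handler[Envelope] {
  private val count = new AtomicLong(0)

  override def process(envelope: Envelope): Future[Done] =
    if (n > 0 && count.incrementAndGet() % n == 0)
      Future.failed(new RuntimeException("simulated projection failure"))
    else
      delegate(envelope)
}
```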
The local docker compose files start:
- Cassandra on port 9042
- Postgres on port 5432 with user and password postgres/postgres (not currently configurable, see Guardian.scala)
sbt "run 2551"
sbt "run 2552"
sbt "run 2553"
Typically, multiple nodes are required to re-create issues, as while one node is failing the other nodes can progress the offset.
The application exposes persistence metrics via Cinnamon and Prometheus. The Cinnamon Prometheus sandbox can be used to view the metrics in Grafana.
A known edge case is that a projection is restarted and delayed events from before the offset are then missed. This should only happen when multiple nodes are writing events, as delayed events from a single node should still be written in offset order.
THIS SECTION MIGHT BE OUTDATED
Configure your AWS client (aws configure), then:
cd terraform
terraform init
terraform plan
Then, to actually execute:
terraform apply
This will:
- Create a VPC
- Create an RDS instance
- Create an EKS cluster
- Install the metrics server into the EKS cluster (required by the operator)
- Configure security groups to allow communication
The outputs printed at the end of terraform apply give all the information needed to configure kubernetes/deployment.yaml:
db_endpoint = "projection-testing.cgrtpi2lqrw8.us-east-2.rds.amazonaws.com:5432"
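That endpoint would then be wired into the deployment, for example as environment variables (the variable names here are assumptions; see kubernetes/deployment.yaml for the actual keys):

```yaml
# Hypothetical mapping of the terraform output into the container environment
env:
  - name: DB_HOST
    value: "projection-testing.cgrtpi2lqrw8.us-east-2.rds.amazonaws.com"
  - name: DB_PORT
    value: "5432"
```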