# Sequence Diagrams
Intent of these diagrams:
- Show how different modules fit together
- Map to specific functions/files in the codebase
- Be easily updatable to limit rot
That third point forced some compromises, namely using GitHub's mermaid integration rather than building the diagrams in another tool and uploading images. Adapting to mermaid's constraints:
- Vertical gray actors are distinct services
- Black "note" boxes reference specific functions in the codebase
- Horizontal arrows represent process-to-process communication or message passing
- `opt` dotted boxes group related functions and messages for understandability
- `loop` dotted boxes are used for loops or repeated processes
VDF computation: ar_nonce_limiter spawns a dedicated worker that computes one VDF step per second and broadcasts each output as a nonce_limiter event.

```mermaid
%%{
init: {
'theme': 'neutral'
}
}%%
sequenceDiagram
participant ar_nonce_limiter
participant worker
participant ETS
participant ar_events
Note over ar_nonce_limiter: init/1
opt Query the last N blocks
Note over ar_nonce_limiter: ar_node:get_current_block/1
ETS->>ar_nonce_limiter: ets:lookup(node_state, current)
ETS->>ar_nonce_limiter: ets:lookup(node_state, {block, H})
Note over ar_nonce_limiter: ar_node:get_blocks/2
ETS->>ar_nonce_limiter: ets:lookup(node_state, {block, H})
end
Note over ar_nonce_limiter: start_worker/1
ar_nonce_limiter->>worker: spawn(worker/0)
loop Repeat every second (VDF delay)
Note over ar_nonce_limiter: schedule_step/1
ar_nonce_limiter->>worker: send(compute)
Note over worker: compute/2
Note over worker: ar_vdf:compute/3
Note over worker: ar_mine_randomx:vdf_sha2_nif/5
Note over worker: vdf.cpp:vdf_sha2(...)
worker->>ar_nonce_limiter: send(computed)
ar_nonce_limiter->>ar_events: send(nonce_limiter, computed_output)
end
```
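
For orientation, the loop above can be reduced to a small, self-contained Erlang sketch. This is not the real ar_nonce_limiter implementation: the actual step is computed by ar_vdf:compute/3 through the vdf_sha2 NIF, and an iterated SHA-256 merely stands in for it here.

```erlang
%% Minimal sketch of the VDF loop, assuming an iterated SHA-256 in
%% place of ar_vdf:compute/3 / ar_mine_randomx:vdf_sha2_nif/5.
-module(vdf_loop_sketch).
-export([start/0]).

start() ->
    Server = self(),
    %% start_worker/1: spawn the long-lived worker process.
    Worker = spawn(fun() -> worker_loop(Server) end),
    server_loop(Worker, crypto:strong_rand_bytes(32)).

server_loop(Worker, PrevOutput) ->
    %% schedule_step/1: ask the worker for the next step about once per second.
    timer:sleep(1000),
    Worker ! {compute, PrevOutput},
    receive
        {computed, Output} ->
            %% The real server broadcasts this as the
            %% nonce_limiter/computed_output event via ar_events.
            io:format("computed_output: ~w~n", [Output]),
            server_loop(Worker, Output)
    end.

worker_loop(Server) ->
    receive
        {compute, PrevOutput} ->
            %% compute/2: the worker does the expensive, sequential hashing.
            Server ! {computed, compute_step(PrevOutput)},
            worker_loop(Server)
    end.

%% Stand-in for the VDF: hash the previous output many times in sequence.
compute_step(PrevOutput) ->
    lists:foldl(fun(_, Acc) -> crypto:hash(sha256, Acc) end,
                PrevOutput, lists:seq(1, 10000)).
```

The design point the diagram captures is that the slow, strictly sequential hashing runs in a separate process, so the server stays responsive while a step is being computed.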
Data syncing: ar_data_sync finds unsynced intervals, discovers which peers hold them, fetches the chunks, repacks them via ar_packing_server, and writes them to disk and rocksdb.

```mermaid
%%{
init: {
'theme': 'neutral'
}
}%%
sequenceDiagram
participant ar_events
participant ar_data_sync
participant ETS
participant ar_packing_server
participant ar_packing_server worker
participant rocksdb
participant filesystem
participant peer
ar_events->>ar_data_sync: send(event, node_state, initialized)
Note over ar_data_sync: run_sync_jobs/0
opt Find intervals of data to sync and the peers to sync from
Note over ar_data_sync: get_unsynced_intervals_from_other_storage_modules/4
Note over ar_data_sync: handle_cast/collect_peer_intervals
Note over ar_data_sync: handle_cast/find_subintervals
Note over ar_data_sync: ar_data_discovery:get_bucket_peers
ETS->>ar_data_sync: ets:member(ar_peers, block_connections)
ETS->>ar_data_sync: ets:next(ar_data_discovery, {Bucket, Key, Peer})
Note over ar_data_sync: ar_data_sync:get_peer_intervals
loop For each peer, query which intervals they have available
ar_data_sync->>peer: GET /data_sync_record
end
end
loop Iteratively sync all intervals, parallel config.sync_jobs threads
opt Get chunks from peers
Note over ar_data_sync: handle_cast/sync_interval
Note over ar_data_sync: handle_cast/sync_chunk
Note over ar_data_sync: ar_tx_blacklist:get_next_not_blacklisted_byte/1
ar_data_sync->>peer: GET /chunk or GET /chunk2
end
Note over ar_data_sync: handle_cast/store_fetched_chunk
Note over ar_data_sync: validate_proof/6
opt Pack each chunk
Note over ar_data_sync: pack_and_store_chunk/2
ar_data_sync->>ar_events: send(repack_request)
ar_events->>ar_packing_server: send(repack_request)
ar_packing_server->>ar_packing_server worker: send(pack)
Note over ar_packing_server worker: pack/6
ar_packing_server worker->>ar_packing_server: send(packed)
ar_packing_server->>ar_events: send(chunk, packed)
ar_events->>ar_data_sync: send(chunk, packed)
end
opt Write each chunk to disk or rocksdb
Note over ar_data_sync: store_chunk/3
Note over ar_data_sync: write_not_blacklisted_chunk/7
Note over ar_data_sync: ar_chunk_storage:put/3
ar_data_sync->>filesystem: file:pwrite/3
Note over ar_data_sync: ar_kv:put/3
ar_data_sync->>rocksdb: rocksdb:put/4
end
end
```
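
Collapsing the diagram to its per-chunk pipeline gives a sketch like the following. fetch_chunk/2, repack/1, and write_chunk/2 are hypothetical stand-ins for the GET /chunk request, the ar_packing_server round trip, and the ar_chunk_storage:put/3 / ar_kv:put/3 writes; proof validation is omitted.

```erlang
%% Hedged sketch of the sync pipeline: fetch, repack, store.
-module(sync_sketch).
-export([sync_intervals/2]).

%% Sync a list of chunk offsets from one peer. The real node runs
%% config.sync_jobs of these loops in parallel, across many peers
%% discovered via ar_data_discovery and GET /data_sync_record.
sync_intervals(Peer, Offsets) ->
    lists:foreach(fun(Offset) -> sync_chunk(Peer, Offset) end, Offsets).

sync_chunk(Peer, Offset) ->
    %% handle_cast/sync_chunk: fetch the chunk from the peer.
    Chunk = fetch_chunk(Peer, Offset),
    %% validate_proof/6 would check the chunk against the merkle root here.
    %% pack_and_store_chunk/2: round trip through ar_packing_server.
    Packed = repack(Chunk),
    %% store_chunk/3: chunk data to disk, metadata to rocksdb.
    write_chunk(Offset, Packed).

%% Hypothetical stand-ins so the sketch compiles and runs.
fetch_chunk(_Peer, Offset) -> <<Offset:256>>.
repack(Chunk) -> crypto:hash(sha256, Chunk).
write_chunk(Offset, Packed) ->
    io:format("stored chunk at offset ~p: ~w~n", [Offset, Packed]).
```

For example, `sync_sketch:sync_intervals(peer1, [0, 262144, 524288]).` walks three offsets through the fetch/repack/store steps.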
Mining: for each VDF output, ar_mining_server computes h0, has the io threads read the recall range(s), computes h1 per chunk, and prepares and validates a solution.

```mermaid
%%{
init: {
'theme': 'neutral'
}
}%%
sequenceDiagram
participant ar_events
participant ar_mining_server
participant io thread
participant hashing thread
participant ETS
participant filesystem
participant rocksdb
opt Initialization
Note over ar_mining_server: init/1
ar_mining_server->>ar_events: subscribe(nonce_limiter)
ar_mining_server->>io thread: spawn(io thread) - 1 per storage_module
ar_mining_server->>hashing thread: spawn(hashing thread)
end
opt ar_mining_server maintains a priority queue of tasks and processes them sequentially via handle_task/2
ar_events->>ar_mining_server: send(nonce_limiter, computed_output)
Note over ar_mining_server: handle_task/computed_output
ar_mining_server->>hashing thread: send(compute_h0)
end
opt Compute "h0" - a cryptographic hash used as a source of entropy
Note over hashing thread: ar_block:compute_h0/4
Note over hashing thread: ar_mine_randomx:hash_fast/5
hashing thread->>ar_mining_server: send(mining_thread_computed_h0)
end
opt Read recall range(s)
Note over ar_mining_server: handle_task/mining_thread_computed_h0
Note over ar_mining_server: ar_block:get_recall_range/3
ar_mining_server->>io thread: send(read_recall_range)
ar_mining_server->>io thread: send(read_recall_range2)
loop For each recall range, load all the synced chunks from disk
Note over io thread: ar_sync_record:get_next_synced_interval/5
ETS->>io thread: ets:lookup(sync_records, {ID, StoreID})
ETS->>io thread: ets:lookup(SyncRecordType, NextOffset)
Note over io thread: ar_chunk_storage:get_range/3
Note over io thread: ar_chunk_storage:read_chunk
filesystem->>io thread: file:pread/3
io thread->>ar_mining_server: send(io_thread_recall_range_chunk)
end
end
ar_mining_server->>hashing thread: send(compute_h1)
opt Compute "h1" - either the hash of a solution (1 chunk) or input to the a solution hash (2 chunk)
Note over hashing thread: ar_block:compute_h1/3
Note over hashing thread: crypto:hash/3
hashing thread->>ar_mining_server: send(mining_thread_computed_h1)
end
Note over ar_mining_server: handle_task/mining_thread_computed_h1
opt Prepare 1 chunk solution
Note over ar_mining_server: prepare_solution/3
Note over ar_mining_server: ar_wallet:load_key/1
Note over ar_mining_server: ar_data_sync:get_chunk/2
alt Read chunk from disk
Note over ar_mining_server: ar_chunk_storage:get/2
Note over ar_mining_server: ar_chunk_storage:read_chunk
filesystem->>ar_mining_server: file:pread/3
else Read chunk from rocksdb
Note over ar_mining_server: ar_data_sync:read_chunk/4
rocksdb->>ar_mining_server: ar_kv:get/2
end
opt Validate chunk
Note over ar_mining_server: ar_data_sync:validate_served_chunk/1
Note over ar_mining_server: ar_data_sync:validate_proof2/1
Note over ar_mining_server: ar_packing_server:unpack/5
Note over ar_mining_server: ar_packing_server:pack/4
end
opt Validate solution
Note over ar_mining_server: ar_nonce_limiter:get_last_step_checkpoints/3
Note over ar_mining_server: validate_solution/1
Note over ar_mining_server: ar_block:compute_h0/4
Note over ar_mining_server: ar_block:compute_h1/3
Note over ar_mining_server: ar_poa:validate/1
end
ar_mining_server->>ar_events: send(found_solution)
end
```
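
The one-chunk path above reduces to roughly the following sketch. The hashing is deliberately simplified (plain SHA-256 instead of ar_block:compute_h0/4 and ar_block:compute_h1/3), and read_recall_range/1 is a hypothetical stand-in for the io threads' chunk reads.

```erlang
%% Hedged sketch of the one-chunk mining path.
-module(mining_sketch).
-export([mine/3]).

mine(NonceLimiterOutput, MiningAddr, Difficulty) ->
    %% compute_h0: entropy derived from the VDF output and mining address.
    H0 = crypto:hash(sha256, [NonceLimiterOutput, MiningAddr]),
    %% get_recall_range/3 maps H0 to a byte range of the weave; the io
    %% threads then stream the synced chunks of that range back.
    Chunks = read_recall_range(H0),
    %% compute_h1 per chunk; a hash below the difficulty target is a
    %% candidate solution to be validated and emitted as found_solution.
    Candidates = [crypto:hash(sha256, [H0, Chunk]) || Chunk <- Chunks],
    [H1 || H1 <- Candidates, binary:decode_unsigned(H1) < Difficulty].

%% Hypothetical stand-in for the io threads' ar_chunk_storage reads:
%% derive a few fake chunks from H0 so the sketch is runnable.
read_recall_range(H0) ->
    [crypto:hash(sha256, <<H0/binary, I>>) || I <- lists:seq(1, 4)].
```

In the two-chunk case, h1 instead feeds a second hash over a chunk from the second recall range; the sketch covers only the one-chunk case.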
VDF push: each ar_nonce_limiter_server_worker forwards computed VDF outputs to its registered VDF client over HTTP.

```mermaid
%%{
init: {
'theme': 'neutral'
}
}%%
sequenceDiagram
participant ar_events
participant ar_nonce_limiter_server_worker
participant VDF Client
loop 1 instance per configured VDF Client
Note over ar_nonce_limiter_server_worker: init/1
ar_nonce_limiter_server_worker->>ar_events: subscribe(nonce_limiter)
end
ar_events->>ar_nonce_limiter_server_worker: send(nonce_limiter, computed_output)
opt POST vdf to a registered client
Note over ar_nonce_limiter_server_worker: push_update/3
Note over ar_nonce_limiter_server_worker: ar_http_iface_client:push_nonce_limiter_update/2
ar_nonce_limiter_server_worker->>VDF Client: POST /vdf
Note over ar_nonce_limiter_server_worker: push_session/2
Note over ar_nonce_limiter_server_worker: ar_http_iface_client:push_nonce_limiter_update/2
ar_nonce_limiter_server_worker->>VDF Client: POST /vdf
end
```
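
Finally, the push loop itself is small enough to sketch directly; post_vdf/2 is a hypothetical stand-in for ar_http_iface_client:push_nonce_limiter_update/2, and the event is delivered straight to the process mailbox rather than through an ar_events subscription.

```erlang
%% Hedged sketch of one VDF-push worker (one per configured VDF client).
-module(vdf_push_sketch).
-export([worker_loop/1]).

worker_loop(Client) ->
    receive
        {event, nonce_limiter, {computed_output, Output}} ->
            %% push_update/3: POST the latest step to the client;
            %% push_session/2 pushes a whole session through the
            %% same endpoint.
            post_vdf(Client, Output),
            worker_loop(Client)
    end.

%% Hypothetical stand-in for ar_http_iface_client:push_nonce_limiter_update/2.
post_vdf({Host, Port}, Output) ->
    io:format("POST http://~s:~p/vdf ~w~n", [Host, Port, Output]).
```

For example, spawn one worker per client with `spawn(vdf_push_sketch, worker_loop, [{"vdf-client.example", 1984}])` and send it a `{event, nonce_limiter, {computed_output, Output}}` message.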