
Od/tile prep #355

Open

0w3n-d wants to merge 3 commits into develop from od/tile_prep

Conversation


@0w3n-d 0w3n-d commented Jan 8, 2026

Picking up from where we left off with the auctioneer work.
This is the first of a number of planned PRs to isolate all IO, async, tokio, and so on away from the rest of the code.
The next step will be to move any remaining non-IO-dependent code out of these async services into regular threads, leaving the async services as very thin wrappers around IO crates like axum, postgres, and so on.
The non-async code can then be restructured into tiles.
The final step will be to replace the small set of remaining async tasks with custom networking code.
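To make the target shape a bit more concrete, here is a minimal sketch of that direction, assuming an axum handler acting as a thin IO wrapper that forwards work over a crossbeam channel to a plain worker thread. The Job type, route, and worker_loop are illustrative names, not code from this PR.

use axum::{extract::State, routing::post, Router};
use crossbeam_channel::{unbounded, Receiver, Sender};
use std::thread;

// Illustrative message type; not from this PR.
enum Job {
    Register(String),
}

// Non-async worker: a plain thread with no tokio types in it.
fn worker_loop(rx: Receiver<Job>) {
    while let Ok(job) = rx.recv() {
        match job {
            Job::Register(payload) => {
                // pure, IO-free processing happens here
                let _ = payload;
            }
        }
    }
}

// Thin async wrapper: the handler only parses the request and forwards it.
async fn register(State(tx): State<Sender<Job>>, body: String) {
    let _ = tx.send(Job::Register(body));
}

#[tokio::main]
async fn main() {
    let (tx, rx) = unbounded::<Job>();
    thread::spawn(move || worker_loop(rx));

    let app = Router::new().route("/register", post(register)).with_state(tx);
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}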

@0w3n-d 0w3n-d requested review from gd-0 and ltitanb January 8, 2026 09:42

@gd-0 gd-0 left a comment


So I don't think this is the actual pattern we want to go for. I don't think we should really have a case where we make a "request" to one of the tiles.

IMO it should look like:

  1. we have a tile that processes something like validator registrations
  2. for every new valid registration, it broadcasts that reg down the spine queues
  3. any process that needs to maintain a mapping of registrations listens to this queue and builds up its own local in-memory cache (see the sketch at the end of this comment).

I'm also not massively opposed to tokio if we can contain it in its own process, e.g. the data API boxes. The Data API server could easily run in tokio on its own box, isolated from anything performance-critical.

When we want to write to that DB from the submission boxes, we have a DB write tile that listens on a channel for all requests to persist. The main processing tiles can then read any new DB updates from this tile, as described above.
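A rough sketch of what that broadcast pattern could look like, using crossbeam channels as stand-ins for the spine queues. The ValidatorRegistration fields, queue sizes, and tile function names are made up for illustration, not taken from this PR.

use crossbeam_channel::{bounded, Receiver, Sender};
use std::collections::HashMap;

// Illustrative registration type; the real one lives elsewhere in the codebase.
#[derive(Clone)]
struct ValidatorRegistration {
    pubkey: [u8; 48],
    fee_recipient: [u8; 20],
}

// Registration tile: validates incoming registrations and broadcasts every
// valid one down the spine queues (modelled here as a set of channels).
fn registration_tile(
    inbound: Receiver<ValidatorRegistration>,
    spine: Vec<Sender<ValidatorRegistration>>,
) {
    while let Ok(reg) = inbound.recv() {
        if is_valid(&reg) {
            for queue in &spine {
                let _ = queue.send(reg.clone());
            }
        }
    }
}

fn is_valid(_reg: &ValidatorRegistration) -> bool {
    // signature / timestamp checks would go here
    true
}

// A consuming tile listens to its spine queue and builds its own local
// in-memory cache instead of making requests to the registration tile.
fn consumer_tile(spine_rx: Receiver<ValidatorRegistration>) {
    let mut cache: HashMap<[u8; 48], ValidatorRegistration> = HashMap::new();
    while let Ok(reg) = spine_rx.recv() {
        cache.insert(reg.pubkey, reg);
        // ... use the cache for this tile's own processing ...
    }
}

fn main() {
    let (reg_tx, reg_rx) = bounded::<ValidatorRegistration>(1024);
    let (spine_tx, spine_rx) = bounded::<ValidatorRegistration>(1024);

    std::thread::spawn(move || registration_tile(reg_rx, vec![spine_tx]));
    std::thread::spawn(move || consumer_tile(spine_rx));

    // reg_tx would be fed by whatever service receives registrations.
    let _ = reg_tx;
}

The DB write tile would follow the same shape: it consumes persist requests from a channel and publishes completed updates that other tiles read from their own queues.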


@gd-0 gd-0 left a comment


looks good. just some minor comments

Comment on lines +25 to +29
#[derive(Clone)]
pub struct DbHandle {
    sender: crossbeam_channel::Sender<DbRequest>,
    batch_sender: crossbeam_channel::Sender<PendingBlockSubmissionValue>,
}


Let's add a comment to this just explaining that this wrapper around channels is temporary as we work on incremental changes towards the flux design.
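One possible wording for that comment, repeating the struct from the diff above; the exact phrasing is only a suggestion.

/// Temporary wrapper around crossbeam channels for DB writes.
/// This indirection is an incremental step towards the flux design and is
/// expected to be replaced once the tile / spine-queue structure lands.
#[derive(Clone)]
pub struct DbHandle {
    sender: crossbeam_channel::Sender<DbRequest>,
    batch_sender: crossbeam_channel::Sender<PendingBlockSubmissionValue>,
}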


tokio::spawn(async move {
    loop {
        match receiver.recv() {


how does this work in tokio btw? IIRC it thread::yields, so it should be ok. Just not sure if it's yielding to the tokio runtime properly.


If this is an issue, we could loop through try_recv until the queue is empty and then tokio::sleep for a short time.
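A minimal sketch of that suggestion, written generically since the actual handling code isn't shown here; drain_loop and the 1 ms interval are illustrative, not from the PR.

use crossbeam_channel::Receiver;
use std::time::Duration;

// Drain the queue with non-blocking try_recv, then hand control back to the
// tokio runtime with an async sleep instead of blocking the thread in recv().
async fn drain_loop<T>(receiver: Receiver<T>, mut handle: impl FnMut(T)) {
    loop {
        // Process everything currently queued without blocking.
        while let Ok(msg) = receiver.try_recv() {
            handle(msg);
        }
        // Yield to the runtime briefly before polling the queue again.
        tokio::time::sleep(Duration::from_millis(1)).await;
    }
}

This could then be spawned with tokio::spawn(drain_loop(receiver, handle_request)), at the cost of up to one sleep interval of added latency per message when the queue is idle.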

