
Sharing model across threads #164

Answered by decahedron1
patcollis34 asked this question in Q&A

There are two main ways to do this:

  1. Wrap Session in an Arc. Session is Send + Sync, so you can wrap it in an Arc and share that across threads.
    Note that if you plan to use GPU acceleration, I've had issues where using a session from multiple threads with CUDA & DirectML on Windows would cause a segfault. This might have been fixed by #160 (which you'll need to add ort as a git dependency to use), but I haven't tested it.
  2. Create a dedicated thread to run inference. The thread receives data over an mpsc channel, runs inference on the session, and sends each result back through a oneshot channel.
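The first approach can be sketched in plain Rust. The `Session` type below is a stand-in for `ort`'s session (whose actual inference API isn't shown here); the point is only the `Arc` + `thread::spawn` sharing pattern, which applies unchanged to any `Send + Sync` type:

```rust
use std::sync::Arc;
use std::thread;

// Stand-in for ort's Session; the real type is Send + Sync, so the same
// Arc-wrapping pattern applies. `run` here is a placeholder for inference.
struct Session;

impl Session {
    fn run(&self, input: i64) -> i64 {
        input * 2
    }
}

// Run the shared session from several threads and collect the results.
fn run_parallel(session: Arc<Session>, jobs: i64) -> Vec<i64> {
    let handles: Vec<_> = (0..jobs)
        .map(|i| {
            let session = Arc::clone(&session); // cheap refcount bump, no copy
            thread::spawn(move || session.run(i))
        })
        .collect();
    handles.into_iter().map(|h| h.join().unwrap()).collect()
}

fn main() {
    let session = Arc::new(Session);
    let results = run_parallel(session, 4);
    println!("{:?}", results); // [0, 2, 4, 6]
}
```

Because the threads are joined in spawn order, the output order is deterministic even though the work runs concurrently.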

I don't think there's a need for a threadpool if you're only ever using one session (shared between threads).
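The second approach (a dedicated inference thread) can also be sketched with the standard library alone. `Session` is again a placeholder for `ort`'s session, and a one-shot `std::sync::mpsc` channel stands in for the oneshot channel (in async code you'd typically use `tokio::sync::oneshot` instead):

```rust
use std::sync::mpsc;
use std::thread;

// Placeholder for ort's Session; the inference call is a stand-in.
struct Session;

impl Session {
    fn run(&self, input: i64) -> i64 {
        input + 1
    }
}

// Each request carries the input plus its own reply channel,
// used exactly once (i.e. as a oneshot).
struct Request {
    input: i64,
    reply: mpsc::Sender<i64>,
}

// Spawn the dedicated thread that owns the session and serves requests.
fn spawn_inference_thread() -> mpsc::Sender<Request> {
    let (tx, rx) = mpsc::channel::<Request>();
    thread::spawn(move || {
        let session = Session; // owned by this thread alone
        for req in rx {
            // Ignore send errors: the requester may have gone away.
            let _ = req.reply.send(session.run(req.input));
        }
    });
    tx
}

// Client side: send a request and block until the result comes back.
fn infer(tx: &mpsc::Sender<Request>, input: i64) -> i64 {
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Request { input, reply: reply_tx }).unwrap();
    reply_rx.recv().unwrap()
}

fn main() {
    let tx = spawn_inference_thread();
    println!("{}", infer(&tx, 41)); // 42
}
```

The request sender can be cloned and handed to as many threads as needed; the session itself never leaves the inference thread, which sidesteps the multi-threaded GPU issues mentioned above.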
