Crates using tokio don't compose with each other #659
One more question: if, as a user, I encounter this collision, can I hack around it? If so, how?
It's hard for me to provide much commentary about the details of the two libs. However, if a library spawns its own runtime (or tokio_core::Core), it is providing a blocking interface; that would be the only reason to do so. Just like any library with a blocking interface, you cannot use it from a non-blocking context. This is why the panic is there.

If libraries wish to provide a non-blocking API, they should not create a runtime directly. Instead, they should just use the miscellaneous tokio resources (TcpStream, Timer, ...) and document that the library needs to be used from a tokio runtime. hyper would be a good example to look at. I hope, in the future, to better document how to author a Tokio-compatible library.
Thanks, that's already helpful. I am considering trying to patch ldap3 so it works with actix-web. ldap3 has an async API that accepts a …. Does this mean I'll have to migrate …?
@masonk it is probable... |
I'm still new to this framework, but would it be possible for a Core constructor to accept an other-than-default runtime? Then I could construct a Core using actix-web's runtime and pass it down to ldap3.
It looks like there's at least one issue against ….
@masonk you must note that since ….
Actix spawns a 'System' for every worker thread. Each system is powered by a tokio ….

I'd thought that if ….

If that's not the case, how should two libraries that don't know about each other in advance work together to run their tasks on the same executor?
@masonk if it only uses spawn then it should be ok. I'm only saying that there would be a problem if it needed blocking functionality like ….
The asynchronous API uses ….

I think the requirement for tokio is that it should be easy for two different crates that have no prior knowledge of each other to spawn tasks on the same executor. When they can do that, they're composing.

After reading a lot of tokio code over the past few weeks, it seems like 'new' tokio uses the DefaultExecutor to get rid of explicit executor passing, and this is good enough for most cases. As long as crates are all using the DefaultExecutor, they're composable: their tasks can be notified by the same reactor core and driven to completion by the same executor.
Is there further action required to close this issue? |
I think just close this one.
On Mon, Oct 15, 2018 at 10:12 PM Carl Lerche wrote:

> Is there further action required to close this issue?
This is a followup on tokio-rs/tokio-core#319, an issue which hasn't gone away in the new branch:
Trying to start a runtime from a thread that already has a runtime will result in a panic.
This bug comes up very naturally when using an asynchronous DB driver and an asynchronous webserver that are both on tokio. For example, anyone trying to combine the ldap3 + actix-web crates will instantly hit this. But so will anyone using any tokio-based webserver with any tokio-based DB driver.

Here's a minimal repro:
https://github.com/masonk/tokio-crash/tree/master/src

Note that this issue might have complications due to the fact that ldap3 is still on tokio-core, but actix-web isn't. However, if I am not mistaken, the same issue could have happened if both crates were on tokio.

This kind of thing seems like a big problem that will get bigger as the number of crates using tokio increases.
Stack trace for the repro (the other 3 threads have similar traces):