
Conversation

@patrick-kidger (Owner) commented Jul 31, 2025

Okay, this is a major refactor of the internals of the loop. It should no longer busy-poll to check whether threads or sleeps have completed.

This is accomplished by (a) promoting tinyio.Event.wait from a regular coroutine to being special-cased by the event loop, (b) having tinyio.Event.set notify a threading.Event so that the loop can unblock, and (c) rewriting our various `while cond_not_satisfied: yield` poll loops into a `yield some_event.wait()` instead.
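
For illustration, here's a minimal sketch of the rewrite in (c); the names (state, done) are hypothetical, not tinyio's actual internals:

# Before: busy-poll by yielding back to the loop until the condition
# holds; the loop re-checks this coroutine on every pass.
def wait_for_result_old(state):
    while state.result is None:
        yield

# After: park the coroutine on an event. The loop special-cases
# Event.wait, and only resumes us once Event.set has been called.
def wait_for_result_new(state):
    yield state.done.wait()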

The extra complexity this implies means that we're now at 262 LOC for the core event loop (as measured by pygount on _core.py). This is just about acceptable for our '~200 LOC' claim. Let's see if we need anything more in the future! 😁

This is in preparation for a non-busy sleep.
Here we just take the simple ('tiny') approach to scheduling the sleep in a thread. A more sophisticated ('not tiny') implementation might be to pass this sleep back to the loop so that the wake event can use it as a timeout.
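
As a rough sketch of that 'tiny' approach (assuming tinyio.Event can be constructed directly; this is an illustration, not the PR's exact code):

import threading
import time

import tinyio

def sleep(seconds):
    # Schedule the actual sleeping on a worker thread...
    event = tinyio.Event()

    def _worker():
        time.sleep(seconds)
        event.set()  # ...whose Event.set unblocks the loop when done.

    threading.Thread(target=_worker, daemon=True).start()
    yield event.wait()
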
@davidbrochart (Contributor)

This looks good to me, but I'm worried about the precision of sleep now that it uses a thread. Also, it means that it won't work in WASM (e.g. using Pyodide), since threads are not supported there.

@patrick-kidger (Owner, Author) commented Aug 1, 2025

Got it! Is it specifically threading.Thread (or all of threading) that is not available on WASM, or can threading.Event still be used? FWIW the docs suggest that it may be all of threading that is unavailable.

If that is the case then I think it might be a hard limiting factor - either we use a busy loop, or we rely on lower-level primitives like threading.Event. (I note that asyncio also lists its availability as 'not WASI'.)

EDIT: FWIW, if there were a desire to support WASM, and if on that platform a polling loop would be considered acceptable - possibly not at 100% CPU, but with e.g. a time.sleep(0.01) in between queries - then it would be pretty easy to parameterise over that option. We'd basically just use shims for e.g. threading.Event whose .wait() method polls its state internally. Good performance is then retained on non-WASM platforms whilst compatibility is preserved for WASM.
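
A minimal sketch of such a shim, assuming only .set/.is_set/.wait are needed (PollingEvent is a hypothetical name):

import time

class PollingEvent:
    """Stand-in for threading.Event whose wait() polls instead of blocking."""

    def __init__(self):
        self._flag = False

    def set(self):
        self._flag = True

    def is_set(self):
        return self._flag

    def wait(self, timeout=None):
        # Poll with a short sleep rather than blocking on an OS primitive,
        # trading a little wake-up latency for WASM compatibility.
        deadline = None if timeout is None else time.monotonic() + timeout
        while not self._flag:
            if deadline is not None and time.monotonic() >= deadline:
                break
            time.sleep(0.01)
        return self._flag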

@davidbrochart (Contributor)

Pyodide allows running asyncio on the browser event loop through webloop.py.
I tried their REPL and it seems that threading.Event().wait() works, although it takes up 100% CPU.

> possibly not at 100% CPU but with e.g. time.sleep(0.01) in between queries

But wouldn't it block the whole loop during the 0.01s sleeps?

@davidbrochart (Contributor)

> I tried their REPL and it seems that threading.Event().wait() works, although it takes up 100% CPU.

BTW I don't understand how you would use threading.Event in an environment where threads are disabled, wouldn't waiting on an event block the event loop?

@patrick-kidger (Owner, Author) commented Aug 1, 2025

> BTW I don't understand how you would use threading.Event in an environment where threads are disabled, wouldn't waiting on an event block the event loop?

Ah you're right, I'm not thinking this through.
So I think in short, tinyio could work on WASM, but it would
(a) not be able to use tinyio.run_in_thread
(b) need a busy loop to implement tinyio.sleep (easy enough)

We'd end up never blocking on our threading.Event; that only ever occurs when threading is involved. We could just make our threading.Event.wait shim an assert False.

(EDIT: actually, the sleeping could probably be an appropriately-calculated time.sleep instead. Detail.)
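
Putting those two points together, a hypothetical WASM-mode shim might look like this (illustrative only, not a committed design):

import time

class WasmEvent:
    def __init__(self):
        self._flag = False

    def set(self):
        self._flag = True

    def wait(self, timeout=None):
        if timeout is not None:
            # The only legitimate 'block' is a sleep with a known
            # duration, so just sleep it out.
            time.sleep(timeout)
            return self._flag
        # With no threads, nothing could ever set the event from
        # elsewhere, so an indefinite block would deadlock.
        assert False, "unreachable without threads"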

@davidbrochart (Contributor)

I don't know if there would be a way to integrate tinyio with the browser's event loop, as Pyodide did? This is very convenient for JavaScript/Python interaction, as JavaScript can launch a Python task and await it, for instance.

@patrick-kidger (Owner, Author)

Actually, this kind of integration is exactly a use-case I had in mind. It's for this reason that the bulk of our implementation lives in a _step method, so that it would be very easy to create an analogue of Loop.run that looks something like:

async def async_run(self, coro):
    ...
    while ...:
        self._step(...)
        # Yield control so the host event loop can run its own tasks.
        await asyncio.sleep(0)

The same pattern should work for any host event loop.
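
For example, usage from asyncio might then look something like this (async_run, Loop(), and the exact coroutine style are all assumptions for illustration):

import asyncio
import tinyio

def task():
    # A trivial tinyio-style coroutine for illustration.
    yield tinyio.sleep(0.1)
    return 42

async def main():
    result = await tinyio.Loop().async_run(task())
    print(result)  # 42

asyncio.run(main())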

That said, implementing this isn't high-priority for me. I'm not a user of any other event loop library (at least I'm not any more!) so it'd be hard for me to do due diligence that it works correctly.


Other than that, you might find #8 interesting, which indeed switches sleeping over to use an event timeout rather than a thread. I think that's probably the 'final form' of a non-busy-loop implementation, as it also minimises the number of threads involved. (E.g. no more threads needed for implementing sleeping.)
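
For reference, the event-timeout idea is roughly the following (a sketch under assumed names, not the actual #8 implementation):

import time

def wait_for_work(wake_event, sleep_deadlines):
    # Rather than one thread per sleep, block on the loop's wake event
    # with a timeout equal to the time until the earliest pending sleep
    # expires. Event.set still wakes the loop early.
    if sleep_deadlines:
        timeout = max(0.0, min(sleep_deadlines) - time.monotonic())
        wake_event.wait(timeout=timeout)
    else:
        wake_event.wait()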

@patrick-kidger force-pushed the tweaks branch 2 times, most recently from 94c4b49 to 0bf90a0, on Aug 2, 2025 21:49
@patrick-kidger merged commit 61c3284 into main on Aug 2, 2025
1 check passed
@patrick-kidger deleted the tweaks branch on Aug 2, 2025 21:55

@patrick-kidger (Owner, Author)

Okay, I think I now have enough confidence in this PR to merge it.
It's definitely more complex than the original design ('300 lines' not '200 lines' now 😅), but I think the performance improvements are worth it.

Rounding out the two discussions above:

  • Threading: in the end, tinyio only uses threads for tinyio.run_in_thread. So other than that function, we should be able to run on WASM. We do still use threading.Event, even when single-threaded, e.g. with a timeout to implement sleeping.
  • Integrating with a host event loop: likely doable without too much difficulty, but not something I'm planning on exploring myself. Tentatively interested in PRs on this if anyone is interested + if it doesn't introduce too much complexity.
