There is some consensus that a PointerEvent-style API (like the one that existed in Glazier) is probably a good way to handle touch and mouse in general. This is a tracking issue for that topic.
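As a rough illustration of what a PointerEvent-style API means, here is a minimal sketch of a unified pointer event type, loosely modeled on the web's PointerEvent. All names and fields here are hypothetical, not Glazier's actual types:

```rust
// Hypothetical sketch of a unified pointer event, loosely modeled on the
// web PointerEvent interface. Names and fields are illustrative only.

#[derive(Debug, Clone, Copy, PartialEq)]
enum PointerType {
    Mouse,
    Touch,
    Pen,
}

#[derive(Debug, Clone, Copy)]
struct PointerEvent {
    /// Stable id distinguishing concurrent pointers (e.g. 0 for the mouse).
    pointer_id: u64,
    pointer_type: PointerType,
    /// Position in window coordinates.
    x: f64,
    y: f64,
    /// Whether this pointer is the primary one of its type.
    is_primary: bool,
}

fn main() {
    let down = PointerEvent {
        pointer_id: 1,
        pointer_type: PointerType::Touch,
        x: 10.0,
        y: 20.0,
        is_primary: true,
    };
    // A widget can treat mouse and touch uniformly, branching on
    // pointer_type only where behavior genuinely differs.
    assert_eq!(down.pointer_type, PointerType::Touch);
    assert!(down.is_primary);
    println!("{:?}", down);
}
```

The appeal of this shape is that one event stream serves mouse, touch, and pen, so most widget code never needs device-specific paths.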
There are a few layers to this. The simplest layer is just to enable mouse-like behavior from touch, so that touch can operate widgets like buttons and sliders, as well as selection (and focus) in text input.
Another, more sophisticated layer is to support multi-touch well. That has potentially far-reaching implications, as multiple widgets can be active at the same time. A careful design is needed.
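To make the "multiple widgets active at the same time" point concrete, here is a minimal sketch of per-pointer capture state, under the assumption that each active pointer captures at most one widget (as in the web pointer-capture model). `WidgetId`, `PointerId`, and all method names are hypothetical:

```rust
use std::collections::HashMap;

// Hypothetical ids; real frameworks would have richer types.
type WidgetId = u32;
type PointerId = u64;

/// Sketch of multi-touch capture state: unlike a mouse-only model with a
/// single "active widget", multi-touch needs one capture slot per pointer.
#[derive(Default)]
struct PointerCapture {
    captures: HashMap<PointerId, WidgetId>,
}

impl PointerCapture {
    fn on_pointer_down(&mut self, pointer: PointerId, target: WidgetId) {
        self.captures.insert(pointer, target);
    }
    fn on_pointer_up(&mut self, pointer: PointerId) -> Option<WidgetId> {
        self.captures.remove(&pointer)
    }
    fn target_of(&self, pointer: PointerId) -> Option<WidgetId> {
        self.captures.get(&pointer).copied()
    }
}

fn main() {
    let mut cap = PointerCapture::default();
    // Two fingers pressing two different widgets concurrently:
    cap.on_pointer_down(1, 10); // finger 1 on, say, a slider
    cap.on_pointer_down(2, 20); // finger 2 on a button
    assert_eq!(cap.target_of(1), Some(10));
    assert_eq!(cap.target_of(2), Some(20));
    cap.on_pointer_up(1);
    assert_eq!(cap.target_of(1), None);
    assert_eq!(cap.target_of(2), Some(20));
}
```

The far-reaching part is everything downstream of this map: hot/active widget state, event routing, and animation all currently tend to assume a single pointer.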
Yet another, still more sophisticated layer is gesture disambiguation and recognition. A touch-drag gesture on a slider widget should move the slider, but on mobile, the same gesture on a widget that is not drag-sensitive should be interpreted as a scroll. There may be platform-specific gesture recognition code we want to call into, or it may be that we want a pure Rust cross-platform implementation of all this, tunable to feel native on the specific platform.
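The slider-vs-scroll disambiguation can be sketched as a small state machine: a drag-sensitive widget claims the gesture immediately, while elsewhere the press stays undecided until movement exceeds a "touch slop" threshold and becomes a scroll. The threshold value and all names below are illustrative assumptions, not any platform's actual values:

```rust
// Sketch of touch-gesture disambiguation. The slop value is illustrative;
// a real implementation would take it from platform conventions.
const TOUCH_SLOP: f64 = 8.0; // pixels of movement before a press means scroll

#[derive(Debug, PartialEq)]
enum Gesture {
    /// Not enough movement yet to decide.
    Undecided,
    /// A drag-sensitive widget (e.g. a slider) owns the gesture.
    WidgetDrag,
    /// Movement past the slop threshold on a non-sensitive widget: scroll.
    Scroll,
}

fn classify(on_drag_widget: bool, dx: f64, dy: f64) -> Gesture {
    if on_drag_widget {
        Gesture::WidgetDrag
    } else if dx.hypot(dy) > TOUCH_SLOP {
        Gesture::Scroll
    } else {
        Gesture::Undecided
    }
}

fn main() {
    // Small movement on a slider: the slider drags immediately.
    assert_eq!(classify(true, 2.0, 0.0), Gesture::WidgetDrag);
    // Small movement elsewhere: still ambiguous (could become a tap).
    assert_eq!(classify(false, 2.0, 0.0), Gesture::Undecided);
    // Large movement elsewhere: reinterpret as scroll.
    assert_eq!(classify(false, 20.0, 5.0), Gesture::Scroll);
}
```

A real recognizer would also need time-based decisions (long-press, double-tap) and a way for an ancestor scroll view to steal an undecided gesture, which is where the careful design comes in.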
There may also be improvements desired on the winit side. I see a bunch of PRs to add pen support (rust-windowing/winit#2647 is the most recent) but they seem to be stuck in review. rust-windowing/winit#99 seems to be the main tracking issue there.
To be clear, I think we should build the simplest layer first, so we can unblock simple UI interactions on mobile, then carefully design what we want to do for the other two.