epic: new gesture engine 🐵 #7324
Conversation
User Test Results
Test specification and instructions
Retesting Template
Test Artifacts
Note #7741 (comment) when it comes time to merge; I originally did some test index reorganization, but there were a few
Related: #5511 It'll be lower-priority to implement than the main gesture types we've already added in Developer (multi-tap, output-key flick), but we can probably implement it in the same release cycle.
If the multitap feature is available in 17.0.36 (0.7324.7695) and I can get the keyboard format, I can test as well.
fix(web): heterogeneous longpress roaming 🐵
…et-context fix(common/models): lm-worker context resets didn't properly manage context cache 🐵
…ure-gestures chore: merge master into feature-gestures 🐵
…review change(web): floating tablet key-previews 🐵
feat(web): multitap key-previews 🐵
fix(web): test-keyboard patchup 🐵
Co-authored-by: Marc Durdin <[email protected]>
…-optimization feat(web): gesture processing - path memory-use optimization 🐵
chore(web): build script cleanup 🐵
@keymanapp-test-bot retest TEST_REGRESSION_ANDROID_KBD #10136 should be fixed by #10185 now, and I believe that was the reason for the previously-failing user test.
iOS build failure from the last commit:
For the FirstVoices subproject, as built by
merge master into your branch
and I see now that this is a separate failure ... opening as a new issue
Documented as #10241. I have reproduced this on master - thus it is not a feature-gestures issue. There's a chance this is related to recent ✨ Android optimizations. At any rate... it's not something to block upon.
…ure-gestures chore: merge master into feature-gestures 🐵
…nature change(web): prep for enhanced flick-resets - alters the contact-path evaluation signature 🐵
…lacement feat(web): enhanced flick resetting via path base-coord replacement 🐵
While I'm the true author, this has long been a "holding" / staging PR for all the accumulated feature changes.
Everything we need to 'release' the feature is now ready.
Changes in this pull request will be available for download in Keyman version 17.0.230-alpha
Implements #5029, enabling the following types of gestures within KeymanWeb, Keyman for Android, and Keyman for iPhone and iPad.
Multitaps
Main PRs:
Small videos (dropdown)
Phones (animated gif dropdown)
Tablets (animated gif dropdown)
When multitapping across layers...
Layer-crossing previews
Note how the key-hints in the background swap as the key continues to be tapped. (The fact that the shift-key doesn't illuminate/highlight was a keyboard-design bug on my part at the time of recording.)
In 16.0, we supported a single type of multitap - the ability to double-tap the SHIFT key to reach a CAPS layer under certain conditions. This restriction is now fully removed.
Flicks
Main PRs:
Small videos (dropdown)
We set the preview to "move" for certain flick directions because of practicalities - consider where your finger would be during the flick and how it would obstruct your ability to see the preview.
Flicks, a new gesture type, let you press a key down and drag it in one of the four cardinal directions (up, right, down, left) or one of the four intercardinal directions to select a subkey. You must move a set distance before the flick fully triggers, with a smaller threshold required near the start of the flick to prevent it from being interpreted as a longpress.
There's a "middle-ground" threshold between the two. Once reached, it "locks" the flick direction in place, allowing for a polished preview animation. Moving the touchpoint back toward the start location will "unlock" the flick and allow a different direction to be selected.
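The three thresholds above can be sketched as follows. This is a minimal illustration, not Keyman's actual implementation; the state names, pixel values, and function are all hypothetical.

```typescript
// Hypothetical sketch of the flick thresholds: a small distance locks the
// direction, a larger one triggers the flick, and returning toward the
// start location unlocks a locked flick. All constants are illustrative.

type FlickState = "undecided" | "locked" | "triggered";

const LOCK_DIST = 10;    // px: direction locks; preview animation may begin
const TRIGGER_DIST = 24; // px: flick fully triggers
const UNLOCK_DIST = 6;   // px: moving back inside this radius unlocks

// Classifies a touchpoint's displacement (dx, dy) from its start location.
function classifyFlick(dx: number, dy: number, state: FlickState): FlickState {
  const dist = Math.hypot(dx, dy);
  if (state === "locked" && dist < UNLOCK_DIST) return "undecided";
  if (dist >= TRIGGER_DIST) return "triggered";
  if (dist >= LOCK_DIST && state === "undecided") return "locked";
  return state;
}
```

A real engine would also track the locked direction and suppress the longpress timer once the early, smaller threshold is crossed; this sketch only shows the distance logic.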
Modifier longpresses - the "modipress"
(Modi-fier long-press.)
Main PRs:
Apologies for the lack of video / animated gifs: it is much harder to clearly record a multi-finger gesture. The approaches I looked at for this don't display circles for each ongoing touch, which is vital for clarity.
Have you ever wanted to type just a key or two from an alternate layer and then automatically return to your original layer, no matter which it was? If you:
1. press and hold a layer-switching key,
2. type a key or two from the layer it activates, and
3. release the held key,
... you'll automatically be returned to your original layer, no matter which it was - all without the need for custom nextlayer definitions on the keys used in step 2. While it may sound a bit technical, it's something that many people naturally try to do and have gotten used to with other modern touch-keyboard solutions. In essence, it allows you to treat a layer-shifting key as a longpress key, with its destination layer as the subkey menu. That said, you can still use regular longpresses during this gesture!
This gesture type may also combine with multitaps if the base key only changes the active layer. (We strongly recommend that all reachable keys for such a multitap only shift layers.) Multitapping, then holding at the end of such a multitap, will lock the nextlayer for the final tap of the multitap in place... until released, as with steps 1-3 listed above.
Custom key-hints
Main PR:
Examples from keyboards seen in the recordings above (dropdown)
(defaultHint: 'flick')
(each key has a custom hint value)
Note that only one hint per key (on its top-right) may be specified at this time. We're strongly considering extending this in a future version of Keyman, but such extensions will not land in 17.0.
This PR also enhances existing gestures:
Longpress
Main PR:
Small videos (dropdown)
Gigantic longpress menu - two rows, second with one less key
Going beneath the longpress's base key (rather than the subkey menu) will cancel selection, allowing the user to abort the longpress if they release the touch in such a location.
Fitts's law - loose selection range
Swift longpresses - working with muscle memory
The 's' longpress is there for comparison on the standard longpress wait timing, while the rest trigger and execute far more quickly. As long as the longpress-shortcut flick is in a generally upward direction - within roughly 67.5 degrees of a purely-upward direction - and the user doesn't shoot extremely far, the shortcut will trigger early.
This shortcut is not enabled on keys that have defined flick gestures in any of the "upward" directions.
Fitts's law for system keyboards
"Roaming touch"
"Roaming touch", the keyboard mode in which the selected key changes as you move your finger from key to key before a longpress timer triggers, was largely broken in recent versions of KeymanWeb, but has been restored.
However, it will be explicitly disabled for keyboards with flicks. It's worth noting that there's not much difference in the motion of a rightward flick and a rightward "roam" - it would be difficult to differentiate between the two on keys with such a flick. We believe consistent treatment of flickless and flick-enabled keys to be vital for clarity and simplicity... and the simplest way forward is to prevent overcomplicating how such a motion is handled by turning off roaming touch.
Oh, and backing all of this:
Abstract gesture engine
In order to implement these gestures, we have implemented a custom engine for gesture-recognition. While research turned up plenty of pre-existing engines for common gesture types - pinch, zoom, drag, etc - we were unable to find anything that clearly supported specifying custom gesture definitions - let alone full sets of them - in a manner that would be compatible with our needs.
Of particular note is that most of our gestures consist of multiple "states" - this gesture engine could be considered a gesture-oriented finite-state machine (FSM). Transitioning along any FSM edge - that is, recognizing a completed, valid gesture state - blocks all other edges and triggers a new 'state' from which different edges (potential follow-up gesture states) are allowed.
This gets further complicated by some 'fun' interactions that are possible among the gesture types we support. Of particular note is that in many cases, starting a new touch should "auto-complete" certain types of previous gestures. The new touch should not be considered part of those gestures, even though it does trigger them. If such a trigger does occur, the pre-existing gesture should fully resolve, if possible, before further processing the triggering touch.
It also gives us some of the benefits described below:
Originally from #6842:
By developing an isolated module for gesture support, we'll be able to demo and test gesture behavior outside of KeymanWeb. One of my design goals with this is actually to enable gesture unit tests by recording tracked input sequence data. This may be achieved by keeping the core logic headless and replicating recorded input sequences in a headless environment.
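The headless-testing idea quoted above can be sketched as follows. The recording format and the toy classifier are hypothetical stand-ins, not the engine's actual recording schema.

```typescript
// Sketch of headless gesture testing: recorded input sequences are replayed
// against gesture logic with no DOM or browser present, so behavior can be
// verified in ordinary unit tests.

interface RecordedSample {
  t: number;                 // ms since sequence start
  x: number;                 // touchpoint coordinates
  y: number;
  type: "down" | "move" | "up";
}

// Replays a recorded sequence through any handler - no browser required.
function replay(sequence: RecordedSample[],
                handler: (s: RecordedSample) => void): void {
  for (const sample of sequence) handler(sample);
}

// Toy recognizer: a touch that barely moves is a tap; otherwise a drag.
function classify(sequence: RecordedSample[]): "tap" | "drag" {
  const first = sequence[0];
  const last = sequence[sequence.length - 1];
  return Math.hypot(last.x - first.x, last.y - first.y) < 5 ? "tap" : "drag";
}
```

Because both the recording and the recognizer are plain data and plain functions, regressions can be caught by replaying previously captured sequences and asserting on the recognized gesture.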
Related Issues
User Testing
TEST_REGRESSION_WEB: As the branch now seems ready to merge down to master for Web, we should run a full regression test suite on it.
TEST_REGRESSION_ANDROID_KBD: We should also run the SUITE_KEYBOARD_FUNCTIONALITY set of tests from the Android regression-test suite.
TEST_REGRESSION_IOS_PREDTEXT: We should also run the SUITE_PREDICTIVE_TEXT_AND_AUTO_CORRECTION set of tests from the iOS regression-test suite.