Transparent interactions

The first set of techniques does not require a model of, or any knowledge about, the underlying surface.

| Technique | Task | Requirements | Alternative | References |
| --- | --- | --- | --- | --- |
| Ruler (sketch below) | Measuring | Direct Input, Pixel Size | Use a physical ruler | [5]: transparent ruler; [12]: comparison of multitouch and ruler tangible; [4]: physical drawing tools on capacitive screens |
| Ink | Tracing | Direct Input | Take picture, then trace | |
| | Filling | | Fill on paper; take picture and fill | |
| Physical/Virtual | Composing | | Draw on paper, draw on computer, take picture and draw | [8]: draw on tablet, AR overlay on paintings, tracked in space; [10]: emphasize/enhance parts of a painting on the tablet |
| Stacking | Content Sharing | Device detection, communication | Other non-stacking sharing mechanisms | [7, 14]: device gestures for sync, e.g. shake |
| Flipping | Fixed app | Side detection | Dual-display device; dedicated button | TangibleViews [3] |
| | Alternative content/view; navigation; mode change | | | [6]: no flipping, but alternate content shown through transparency |
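The Measuring row above reduces to simple arithmetic once the display's pixel density is known. A minimal sketch, assuming the `dpi` value and the two touch points are available as inputs (they are illustrative, not taken from any cited system):

```python
import math

def measure_mm(p1, p2, dpi):
    """Physical distance between two touch points on the display.

    p1, p2: (x, y) touch coordinates in pixels; dpi: the display's pixel
    density in pixels per inch (assumed known from the device).
    """
    pixels = math.dist(p1, p2)   # on-screen distance in pixels
    return pixels / dpi * 25.4   # pixels -> inches -> millimetres

# Example: two touches 300 px apart on a 326 dpi display
print(round(measure_mm((100, 100), (100, 400), 326), 1))  # ~23.4 mm
```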

The second set of techniques requires access (via a camera image) to what the surface the device rests on looks like; no model and no semantics are needed.

| Technique | Task | Requirements | Alternative | References |
| --- | --- | --- | --- | --- |
| Surface Capture | Grabbing | Direct Input, Backside Image | Take picture and see | PACER [1]; [13]: LucidTouch proposes a similar setup for detecting back-of-device interaction |
| | Scale/Rotate/Translate | | Take picture and S/R/T | PACER [1] |
| | Collecting | + Crop | Take picture and crop | PACER [1] |
| | Bookmarking (spatially un/aligned) | + Crop | | |
| | Recognize Text | + Crop, OCR | Google Goggles | |
| Scribble Gestures | App Triggering | Capture | Dual display; dedicated button; app browsing | [9]: scribble gestures, older related work; [11]: scribbling triggers an application |
| | Command Triggering (within-app) | | | |
| Continuous Capture (sketch below) | Scan | Capture, Stitching | Take picture from afar | |
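For the Continuous Capture row, a minimal sketch of scan-by-stitching, assuming OpenCV is available and that `frames` holds overlapping backside-camera images captured while sliding the device over the surface:

```python
import cv2

def scan_by_stitching(frames):
    """Stitch overlapping backside-camera frames into one scan of the surface.

    frames: list of BGR images (numpy arrays) captured while sliding the device.
    Returns the stitched scan, or None if stitching failed.
    """
    stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)  # SCANS mode: planar surface
    status, scan = stitcher.stitch(frames)
    return scan if status == cv2.Stitcher_OK else None
```

`Stitcher_SCANS` mode assumes a roughly planar scene, which fits a flat document better than the default panorama mode.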

The third set of techniques requires a model of the underlying object. The model should contain at least the physical representation (map) of the object, against which registration can be performed. Other properties of the model are anchored content and metadata. In the case of a paper document, the anchored content is the text and its location on the page. The metadata includes all other rich content, such as triggers (e.g. video triggers) and internal and external links.
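One way to make this concrete is a small data structure holding the three parts named above (map, anchored content, metadata); all names in this sketch are illustrative, not taken from any cited system:

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class AnchoredText:
    text: str                        # the anchored content itself, e.g. a word
    bbox: tuple[int, int, int, int]  # its page location: (x, y, w, h) in map coordinates

@dataclass
class Trigger:
    bbox: tuple[int, int, int, int]  # page area that fires the trigger
    action: str                      # e.g. a video URL, or an internal/external link target

@dataclass
class DocumentModel:
    page_map: np.ndarray                                          # physical representation (map) used for registration
    content: list[AnchoredText] = field(default_factory=list)     # anchored content: text plus page locations
    metadata: list[Trigger] = field(default_factory=list)         # triggers and links
```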

| Technique | Application | Requirements | Alternative | References |
| --- | --- | --- | --- | --- |
| Registration (sketch below) | 2D + 1 Absolute-Anchor | Content Capture, Map | | PACER [1], PBAR [2] |
| | Orientation to Content | Coordinate system | | |
| | Virtual Bookmarks | | | |
| | Area Triggers | Metadata | | PACER [1] (the Street View app sounds similar) |
| | Translation | Anchored Content | | |
| Content Selection | Internal Search | Anchored Content | | PACER [1] |
| | External App Invocation | | | PACER [1], PBAR [2] |
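The Registration row above could, for example, be implemented with standard feature matching. This sketch assumes OpenCV and uses ORB features with a RANSAC homography; it is one common approach, not necessarily what PACER [1] or PBAR [2] actually do:

```python
import cv2
import numpy as np

def register(capture, page_map, min_matches=10):
    """Estimate the homography mapping a camera capture onto the model's page map.

    Returns a 3x3 matrix that transforms capture coordinates into map
    coordinates (and thus lets the system look up anchored content and
    area triggers), or None if too few features match.
    """
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(capture, None)
    k2, d2 = orb.detectAndCompute(page_map, None)
    if d1 is None or d2 is None:
        return None
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < min_matches:
        return None
    src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H
```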

References

[1] Liao, C., Liu, Q., Liew, B. and Wilcox, L. PACER: fine-grained interactive paper via camera-touch hybrid gestures on a cell phone. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10), ACM (2010), 2441–2450.

[2] Hull, J.J., Erol, B., Graham, J., Ke, Q., Kishi, H., Moraleda, J. and Van Olst, D.G. Paper-based augmented reality. In Proceedings of the 17th International Conference on Artificial Reality and Telexistence, IEEE (2007), 205–209.

[3] Spindler, M., Tominski, C., Schumann, H. and Dachselt, R. Tangible views for information visualization. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, ACM (2010), 157–166.

[4] Blagojevic, R., Chen, X., Tan, R., Sheehan, R. and Plimmer, B. Using tangible drawing tools on a capacitive multi-touch display. In Proceedings of the 26th Annual BCS Interaction Specialist Group Conference on People and Computers, British Computer Society (2012), 315–320.

[5] Couture, N., Rivière, G. and Reuter, P. GeoTUI: a tangible user interface for geoscience. In Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, ACM (2008), 89–96.

[6] Kim, K. and Elmqvist, N. Embodied lenses for collaborative visual queries on tabletop displays. Information Visualization 11, 4 (2012), 336–355.

[7] Kray, C., Nesbitt, D., Dawson, J. and Rohs, M. User-defined gestures for connecting mobile phones, public displays, and tabletops. In Proceedings of MobileHCI '10, ACM (2010).

[8] Lee, S., Jung, J., Hong, J., Ryu, J.B. and Yang, H.S. AR paint: a fusion system of a paint tool and AR. In Proceedings of the 11th International Conference on Entertainment Computing (ICEC '12), Springer-Verlag (2012), 122–129. http://dx.doi.org/10.1007/978-3-642-33542-6_11

[9] LaViola Jr., J.J. and Zeleznik, R.C. MathPad2: a system for the creation and exploration of mathematical sketches. ACM Trans. Graph. 23, 3 (Aug. 2004), 432–440.

[10] McNamara, A.M. Enhancing art history education through mobile augmented reality. In Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry, ACM (2011), 507–512.

[11] Ouyang, T. and Li, Y. Bootstrapping personal gesture shortcuts with the wisdom of the crowd and handwriting recognition. In Proceedings of CHI '12, ACM (2012), 2895–2904.

[12] Tuddenham, P., Kirk, D. and Izadi, S. Graspables revisited: multi-touch vs. tangible input for tabletop displays in acquisition and manipulation tasks. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '10), ACM (2010), 2223–2232. http://doi.acm.org/10.1145/1753326.1753662

[13] Wigdor, D., Forlines, C., Baudisch, P., Barnwell, J. and Shen, C. Lucid touch: a see-through mobile device. In Proceedings of the 20th annual ACM symposium on User interface software and technology, ACM (2007), 269–278.

[14] Yatani, K., Tamura, K., Hiroki, K., Sugimoto, M. and Hashizume, H. Toss-it: intuitive information transfer techniques for mobile devices. In CHI '05 Extended Abstracts on Human Factors in Computing Systems, ACM (2005).