CHI2013 Reviews
CHI 2013 Papers and Notes (Refereed)
Reviews of submission #1231: "CrashAlert: Enhancing Peripheral Alertness for Eyes-Busy Mobile Interaction while Walking"
------------------------ Submission 1231, Review 4 ------------------------
Reviewer: primary
Your Assessment of this Paper's Contribution to HCI
Overall Rating
4.0 - Possibly Accept: I would argue for accepting this paper
Review
Expertise
The Meta-Review
The Note submission presents a prototype system for alerting users engaged in “eyes-busy” mobile interaction, providing a visualization to alert them to potential obstacles, reports on an experiment to assess use of the system, and concludes with basic guidelines for such a system.
The reviewers generally like the work but also point out concerns. They offer three rather different perspectives on the references and discussion of related work, and somewhat different views of the presentation (“extremely well written and clear” and “repetitive and needs much better structuring, leaves out too many details”). It is not unusual for reviewers to view these things differently; the authors should state their position on these topics and planned modifications, if any, in their rebuttal.
Associate Chairs Additional Comments
The Review
Areas for Improvement
------------------------ Submission 1231, Review 1 ------------------------
Reviewer: external
Your Assessment of this Paper's Contribution to HCI
The paper proposes a system for bringing safety (crash avoidance) to the forefront for mobile device users who are walking while their eyes are occupied with a task on their mobile device. The authors were able to create a visualization that provides important safety information to users without utilizing much screen space. They also provide interesting insights from their user study on future features for systems that integrate user safety while navigating. Their work also increases the users’ perception of safety and awareness of their surroundings when walking and performing simple tasks.
Overall Rating
3.5 - Between neutral and possibly accept
Review
This paper reports on an important aspect of walking user interfaces: providing the user with information of the real-world environment while the eyes are occupied with a task other than observing their physical surroundings. This idea is interesting and valuable, but there are a few issues. While there are a few minor issues that are discussed later, the biggest is the lack of related work. Several research groups have already explored the problem of providing informative safety cues (obstacle avoidance) but with a focus on those with visual impairments.
For example, [r1] aids users in safe pedestrian navigation by providing support for obstacle avoidance and situational awareness. [r2] created a system that notifies the user of obstacle information through an acoustic signal interface.
It would have also been interesting to read more about how the advances in this other field (navigation support for the blind) could help or influence the design of systems that provide informative safety cues to non-handicapped users. [r3] has begun to look at what the differences between these two groups are, and how this can affect the design of assistive devices. It might make sense for the authors to integrate these papers in their related work section. For example, it was unclear why the authors chose to utilize visual cues rather than modifying some previously used techniques (audio or vibro-tactile feedback). It seems especially curious given that screen space is already limited on a mobile device.
While the above-mentioned lack of references is the main issue with the paper, there are also some other minor issues that are not addressed. For one, a whack-a-mole game seems like it would require significantly fewer cognitive resources than, for example, texting, the activity participants mentioned attempting while on the go. Furthermore, it is unclear that counting the number of times a person looks up is an appropriate measure of where the user's focus is. In the Color condition especially, it is not necessary to look up to see what is ahead, as there is a visual slice at the top of the screen. It therefore seems possible that users could substitute looking at the top of the screen for looking up. The user could also focus entirely on that image slice and instead use their peripheral vision to play the whack-a-mole game.
It also appears from Figure 5 that both the Color and Depth conditions had more Nearly Crash events than the control condition. Since a near collision is less desirable than stopping, looking up, or dodging/slowing down, it is odd that this is not addressed within the paper. It is impossible to tell if the difference is statistically significant or if there is some kind of explanation based on observed behavior to mitigate this increase.
It is also unclear how the authors picture this being used in the future. Even if a depth sensor were standard on mobile phones, it seems likely that it would be oriented similar to the camera rather than roughly perpendicular to it. This difference would make the application as it is significantly less useful, as the sensor would be focused more on the user's feet than on the direction of travel. The authors should also consider how the fact that people hold their phones differently could affect the system's functionality.
Additionally, it might have been interesting for the authors to mention how they envision their work functioning in an actual application. Do they see this as a type of service the operating system would provide, automatically resizing all applications for the smaller screen? Or would this be something that application developers would have to integrate themselves, changing the resolution of their layout to fit the addition of the "safety visualization"?
While the paper needs some improvement, the core idea is a good one. Increasing safety while users interact with their cell phones on the go is a very important topic that deserves more attention and does not seem to have been extensively explored in the past. There are also a number of good ideas presented that will make for interesting future investigations. The paper is also extremely well written and clear, and the authors were able to test their prototype in the wild instead of in an artificial lab environment. But the fact that relevant areas of the state of the art were not mentioned or did not seem to influence the design affected the overall grade this paper received.
[r1] J. Wilson, B. N. Walker, J. Lindsay, C. Cambias, and F. Dellaert. "SWAN: System for Wearable Audio Navigation." Presented at the International Symposium on Wearable Computers, Boston, MA, 2007.
[r2] C. Kim and B. Song. "Design of a wearable walking-guide system for the blind." Presented at the 1st International Convention on Rehabilitation Engineering & Assistive Technology (in conjunction with the 1st Tan Tock Seng Hospital Neurorehabilitation Meeting), Singapore, April 2007.
[r3] T. Miura, T. Muraoka, and T. Ifukube. "Comparison of obstacle sense ability between the blind and the sighted: A basic psychophysical study for designs of acoustic assistive devices." Acoust. Sci. Tech., 31(2):137–147, 2010.
Expertise
2 (Passing Knowledge)
Areas for Improvement
There are a few places early on, in the Introduction and Informative Field Observations sections, where it is unclear what is being said solely because the UI has not been described yet (when talking about unexpected usage strategies, and in the first paragraph of CrashAlert Design). The paper would be easier to understand if a brief description of what the user sees were presented earlier on, with terms such as "ambient visual band" explained closer to where they first appear.
------------------------ Submission 1231, Review 2 ------------------------
Reviewer: external
Your Assessment of this Paper's Contribution to HCI
The paper presents work in the field of mobile devices. Walking with a mobile device while using it for an application that involves looking at the display may lead to crashes with objects in the environment, static objects but especially moving ones. The authors propose a visualization technique that uses a small fraction of the upper screen to indicate obstacles coming up in front of a walking user. For their implementation they use a Kinect to detect objects in front of the person. The work is interesting. The authors show the three designs that they are using for their study. It is believable that they observed users in an informative field study to gain more insight into their goal of avoiding collisions for mobile users. The results of the informative study are very interesting, though probably not very surprising. The actual experiment is good, but the discussion of the work is far too short and leaves out too many interesting details. Shortening the introduction and other parts of the paper as suggested below would give more room for the missing details and information.
Overall, the authors contribute the visualization of off-screen objects, mainly the obstacles in front of a walking user, via an in-screen display at the top of the mobile device's screen.
Overall Rating
3.5 - Between neutral and possibly accept
Review
The introduction is far too long. The authors use the introduction to summarize the paper. This is redundant. In a short paper, the abstract should summarize the work. The introduction should motivate the work and maybe give an overview. But in the submission the authors repeat the approach and contribution several times: in the abstract, in the introduction, in the lessons learned section, and in the conclusion. This is too much. The paper needs much better structuring to avoid these repetitions.
The introduction refers to too many references just to motivate and prove that mobile devices can cause accidents. I would say one or two references are enough. The paper would then have room for other references that are missing: references on the visualization of off-screen objects.
The authors present three different designs as the result of a pilot study with 8 participants. The authors show the remaining three, which are OK. What is fully missing is a longer discussion of why the design was chosen as it is. What was the reason to realize it as a stripe at the top of the mobile device's display? Why was there no other option? There is not a single reference to the large body of work on off-screen visualizations (Halo probably being one of the most prominent works).
The motivation is very much targeted at mobile phones, meaning smartphones rather than tablets. When the authors present their prototype, it is a tablet with a huge Kinect underneath it. So the study is very limited. Even though the authors refer to this, I was wondering if this was the only possible setup. Why not use the phone but place the Kinect in front of the user in the form of a belt? This would have led at least to a more realistic phone experience.
The description of the experiment is very nice up to the end of the Task and Procedure paragraph. The section on results and discussion is too condensed. I have problems following the part from the second paragraph on. The relationship to Figure 5 is not always clear.
Was it fair to normalize the collision-handling maneuvers to 100%? I understand that the different trials showed different numbers of maneuvers. It would also have been nice to know how many maneuvers actually took place. How many per time slot? Were there variations depending on the individual path? The empirical results of the study should be elaborated in more detail.
When it comes to the qualitative results, the paper is interesting and reveals interesting insights from the user study. The lessons learned are nice, but as I wrote before, the paper is very repetitive with regard to the contribution and results. Please shorten and give more room for the discussion. In the end, lessons learned, future work, and conclusions are too much for a 4-page note.
Expertise
4 (Expert)
Areas for Improvement
See the comprehensive comments in the review.
------------------------ Submission 1231, Review 3 ------------------------
Reviewer: external
Your Assessment of this Paper's Contribution to HCI
The contribution of this paper is a tool to help mobile phone users avoid collisions when walking while using a mobile device. This paper specifically focused on the problem of avoiding collisions, as opposed to previous work that focused on improving task performance.
Overall Rating
4.0 - Possibly Accept: I would argue for accepting this paper
Review
The authors present a cursory evaluation of the use of various forms of visual feedback to provide situational awareness to users who are using a mobile device while walking. They did this by asking users to play a game while walking through a safe but collision-filled field. A nefarious actor attempted to collide with the participants in a variety of ways, and the study administrator documented participant reactions. The authors had each participant do this for each of 4 conditions of visual feedback. Overall, the authors analyzed a measure of game success (how many moles whacked), collision avoidance strategies, number of errors (how many moles not whacked), and some self-report measures about the participants' experience.
The authors did an excellent job reporting on background and related work. Specifically, they were able to succinctly describe the other work, and how it related to the work they were presenting here.
This work should have had a human subjects review, and it is good that the authors acknowledge that it was reviewed by an IRB (of course, only if it actually was, but assuming the best for now...).
The scope of this work is appropriate for this stage of research. It's more of a feasibility study: does this kind of visual feedback provide any benefit, and if so, what are some of the characteristics and technical challenges to address. It's the right size of work for a note.
The result that there were no significant differences in game success, errors, or completion times between conditions is interesting; even more so that the presence or absence of feedback impacted the number of avoidance maneuvers.
One thing that is very interesting is that the color image and the masked image were more challenging for people to use (via self-report), while the depth images were more helpful.
Expertise
3 (Knowledgeable)
Areas for Improvement