2011 15th Annual International Symposium on Wearable Computers

AirTouch: Synchronizing in-air hand gesture and on-body tactile feedback to augment mobile gesture interaction

Seungyon Claire Lee
HP Labs
Palo Alto, CA, US
claire.sylee@hp.com

Bohao Li and Thad Starner
GVU Center, Georgia Institute of Technology
Atlanta, GA, US
{bhli1989, thad}@gatech.edu

Abstract

We present the design and evaluation of AirTouch, a wristwatch interface that enables mobile gesture interaction through tactile feedback under limited visual attention conditions. Unlike its predecessor, the Gesture Watch, AirTouch is supported by a push-to-gesture mechanism (PTG) in which the user performs a gesture and then confirms it afterward with a trigger gesture. The Gesture Watch, in contrast, requires the user to hold a trigger gesture while performing an interaction, and its PTG method does not allow the user to preview or reverse the action. The effects of the new PTG mechanism and tactile feedback are evaluated through two experiments. The first experiment compares AirTouch's PTG mechanism to that of the Gesture Watch both with and without visual access to the watch. The second experiment examines mobile gesture interaction with the new PTG mechanism in four conditions (with and without tactile feedback and with and without visual restriction). We found that the new PTG mechanism enabled more accurate and faster interaction in the fully visible condition. Additionally, tactile feedback in the limited visual access condition successfully compensated for the lack of visual feedback, enabling performance times and perceived difficulties similar to those in the fully visible condition without tactile feedback.

1 Introduction and motivation

Gesture-based interaction is gaining attention in the consumer electronics market (e.g., Nintendo Wii and Microsoft Kinect) as a viable mode of interaction to control devices in a more natural and intuitive way. Mobile gesture interaction is a logical next step for future investigation. In a mobile interaction, users are often on the go, interacting with their mobile device as they navigate the physical environment. When mobile, users need to split their visual attention between their device and the environment to ensure accurate hand-eye coordination while avoiding obstacles in their path. However, interacting with a mobile device while in motion raises concerns for safety [16, 19, 22].

When developing our first wristwatch user interface (UI) for mobile gesture interaction, the Gesture Watch [9], we observed similar problems. The Gesture Watch captured in-air hand gestures through wrist-mounted IR sensors and sent the gesture patterns to a recognizer. With the Gesture Watch, users could control electronic devices (e.g., an MP3 player) while on the go.

AirTouch pairs each proximity sensor with a vibration motor pressed against the user's wrist. When the proximity sensor detects a hand above it, the corresponding motor buzzes. We designed a new push-to-gesture mechanism (PTG) for AirTouch which takes advantage of this tactile feedback. The mechanism follows two design principles of direct manipulation: representation of the object of interest (in this case, the tactile representation of hand movement in relation to the device) and reversible operations [21]. With AirTouch, a gestural command is implicitly canceled, reversing the action, when the user does not commit the command with a trigger gesture. Unlike the PTG mechanism of the Gesture Watch, the new PTG of AirTouch provides an eyes-free representation of what the sensors are perceiving and allows the user to cancel the interaction implicitly in case the gesture was made in error or the system's perception of the gesture seems likely to be wrong (as judged by the user). We hypothesize that tactile feedback and the new PTG of AirTouch will assist eyes-free mobile gesture interaction when a user's vision is limited.

In this paper we present the results of two experiments that investigate mobile gesture input and vibrotactile feedback in conditions of limited visibility. The rest of the paper reports related work in mobile gesture interaction and tactile perception, introduces the AirTouch system and the pilot test for the PTG design, and describes the two experiments. Our first experiment evaluates the appropriateness of the new PTG technique, while the second experiment evaluates the effect of tactile feedback on participants' use of AirTouch.

1550-4816/11 $26.00 © 2011 IEEE
DOI 10.1109/ISWC.2011.27

2 Related work

As a design principle, direct manipulation has shaped graphical user interfaces (GUIs) and even physical UIs [8, 21]. For example, a text label or tooltip box that changes color or appears upon a mouse-over event in a GUI visually represents the object of interest and enables reversible actions as suggested by direct manipulation. By using capacitive sensing integrated with physical keys, Rekimoto and colleagues created keyboards which would display what action a given button would perform when a user touched the button but had not yet pressed it [15]. In this manner, users could interact with the physical buttons in smaller steps and retreat from an action before committing it.

Touch and sensor-based interactions in mobile and wearable computing often raise new concerns about motor performance. Proprioception [20] (perception of motion and position of body parts in accordance with joint and muscle control) is essential in motor performance [17] and is highly affected by force feedback [18] and vision [14]. Even expert typists suffer performance difficulties when using touch-based soft keyboards due to the lack of force feedback [4]. This difficulty can be reduced by providing tactile or auditory feedback [12]. To reduce visual distraction in mobile interactions, researchers have explored the benefit of using haptics. Users can perceive tactile patterns on the wrist while visually distracted [11], receive tactile directional cues on the torso while driving [6], and navigate the environment helped by navigational cues on the waist [23].

Motor distraction in mobile computing (e.g., walking) can also limit human performance [2, 3]. To compensate for limited dexterity (and improve social appropriateness) while mobile, Whack Gestures [7] utilized gross gestures for inexact and inattentive interaction rather than fine gestures that require precise hand-eye coordination. Ashbrook and colleagues tested the importance of device placement in mobile computing. They observed significantly faster interaction times on wrist-worn mobile devices than on devices stored in pockets [1], suggesting that wrist-worn devices may require less motor distraction than devices placed elsewhere.

3 Apparatus and configuration

As with the Gesture Watch, AirTouch performs gesture recognition using the Gesture and Activity Recognition Toolkit (GART) [13], which utilizes hidden Markov models (HMMs). GART links the patterns of hand gestures to corresponding commands that can be sent to electronic devices.

3.1 Wristwatch interface and GART

The AirTouch consists of two parts, the sensors in a wristwatch UI and a tactile display, which are fastened by an elastic strap and worn on the dorsal and volar sides of the wrist, respectively (Figure 1). Since we observed user difficulty in localizing vibrators on the dorsal side of the wrist in our previous studies [5], we located the tactile display on the volar side, where high perception accuracy has been shown previously [10, 11]. The size of the wristwatch UI is 109 mm x 20.5 mm x 49.5 mm (L x H x W).

Figure 1. AirTouch wristwatch interface

Figure 2. Sensor layout comparison (Gesture Watch, left; AirTouch, right)

We use four SHARP GP2Y0D340K IR proximity sensors to capture in-air hand gestures between 10 and 60 cm above the wrist. Four vibrating motors (Precision Microdrives #301-101, 200 Hz, d = 10 mm, h = 3.4 mm) are arranged in a square with 30 mm center-to-center distances. A rubber housing ensures a constant center-to-center distance between the motors. Unlike the cardinal layout of the Gesture Watch, AirTouch's sensors are arranged in an orthogonal layout (Figure 2) to assist easier perception of the tactile feedback. The orthogonal motor layout, which is synchronous with the sensor layout, enables longer center-to-center distances between motors compared to the cardinal layout. We believe that the longer center-to-center distances and the simpler grid of AirTouch enable easier perception.

The microcontroller synchronizes sensors and motors by turning the vibration motors on and off based on the sensors' input. Users can mentally synchronize the in-air hand gesture with the on-body tactile feedback.

Figure 3. Gestures tested in the user study.

Figure 4. GART GUI: training (left), recognition (right)

Power is supplied by a 3.7 V Lithium-ion battery. A power regulator is used to guarantee a stable 3.3 V power supply. The front sensor is placed at the side of the watch facing toward the user's hand. To avoid false triggering caused by the hand, the front sensor is tilted 20 degrees upward.
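
To make the sensor-motor pairing concrete, below is a minimal sketch of the mirroring loop described above. It is our illustration rather than the authors' firmware: the pin assignments, the clock frequency, and the assumption that the proximity sensors report detection with an active-low output are ours.

/* Sketch of the sensor-to-motor mirroring loop (illustrative, not the authors'
 * firmware). Hypothetical wiring: the four IR proximity sensors are read on
 * PD2-PD5 and the four vibration motors are driven on PB0-PB3 of the ATmega. */
#define F_CPU 8000000UL            /* assumed clock; adjust for the actual board */

#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRD &= (uint8_t)~0x3C;        /* PD2..PD5 as inputs (proximity sensors) */
    DDRB |= 0x0F;                  /* PB0..PB3 as outputs (vibration motors) */

    for (;;) {
        /* Assuming the sensor output goes low when a hand is in range,
         * invert the port reading before mirroring it onto the motors. */
        uint8_t detected = (uint8_t)(~PIND >> 2) & 0x0F;

        /* Hand over sensor i => motor i buzzes; otherwise the motor stays off. */
        PORTB = (uint8_t)((PORTB & 0xF0) | detected);

        _delay_ms(10);             /* ~100 Hz refresh is ample for tactile feedback */
    }
}

In the actual device this mapping has to coexist with the gesture buffering described in Section 3.2; a sketch of that logic follows the push-to-gesture description below.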

A remote computer receives sensor data from AirTouch through Bluetooth and processes the gestures. The GART GUI is implemented in Java; it assists visual training of new gestures and recognizes trained gestures during the experiments (Figure 4).

3.2 Push-to-gesture mechanism (PTG)

Once captured, data from the sensors are passed to an ATmega168 microcontroller. The microcontroller stores sensor data, turns the motors on and off, and waits for user confirmation rather than immediately sending the gesture to GART. This storage function of the microcontroller was added to AirTouch to support our new PTG. The new PTG in AirTouch has a time-out period for user confirmation in the interaction. Within the time-out period, users can make a decision to confirm or abort the gesture (Figure 5). If the tactile sensation of the hand gesture does not match the user's intention ('3-1. Receive tactile feedback' in Figure 5), the user can cancel the incorrect gesture by not triggering the confirmation sensor within the time-out period ('4. Confirm or abort' in Figure 5). The microcontroller will send data to GART only when the user confirms the gesture by quickly tilting the wrist up and down within the time-out period. Stored sensor data is discarded after the time-out. Following the principles of direct manipulation [21], the on-off status of the motors indicates the state change of the object of interest (the sensors), and the time-out period that waits for user confirmation enables reversible user operations.
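
As a concrete reading of this mechanism, the sketch below buffers sensor data while the hand is above the watch, forwards the buffer to the recognizer only if the confirmation trigger arrives within the time-out, and silently discards it otherwise. The data layout, function names, the GART stub, and the use of the front sensor as the confirmation signal (as in the pilot test below) are our assumptions, written as host-side C rather than the actual firmware.

/* Host-side sketch of the push-to-gesture (PTG) time-out logic; names, the
 * sample format, and the GART stub are ours, not the authors' implementation. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define TIMEOUT_MS   2000U          /* confirmation window chosen from the pilot test */
#define MAX_SAMPLES  512

typedef struct {
    uint32_t t_ms;                  /* time stamp of the sample */
    uint8_t  sensors;               /* bit i set = hand detected over motion sensor i */
} Sample;

static Sample   gesture_buf[MAX_SAMPLES];
static size_t   gesture_len    = 0;
static uint32_t last_motion_ms = 0;

/* Stub: forward the buffered gesture to the GART recognizer over Bluetooth. */
static void send_to_gart(const Sample *buf, size_t n)
{
    (void)buf;
    printf("confirmed: %zu samples sent to GART\n", n);
}

/* Feed one time step of readings into the PTG state machine. */
static void ptg_step(uint32_t now_ms, uint8_t motion_sensors, bool front_sensor)
{
    if (motion_sensors) {                       /* hand above the watch: record it */
        if (gesture_len < MAX_SAMPLES)
            gesture_buf[gesture_len++] = (Sample){ now_ms, motion_sensors };
        last_motion_ms = now_ms;
        return;
    }
    if (gesture_len == 0)
        return;                                 /* nothing pending */

    if (front_sensor && now_ms - last_motion_ms <= TIMEOUT_MS) {
        send_to_gart(gesture_buf, gesture_len); /* confirmed within the window */
        gesture_len = 0;
    } else if (now_ms - last_motion_ms > TIMEOUT_MS) {
        gesture_len = 0;                        /* time-out: implicit cancel */
    }
}

int main(void)
{
    /* Toy trial: the hand sweeps over the sensors, then the wrist tilt fires
     * the front sensor one second later, inside the 2 s window. */
    ptg_step(   0, 0x01, false);
    ptg_step( 150, 0x03, false);
    ptg_step( 300, 0x02, false);
    ptg_step(1300, 0x00, true);
    return 0;
}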

Figure 5. AirTouch interactions.

Unlike the PTG of the Gesture Watch, which required a pair of wrist-tilting gestures for segmenting the gesture to be recognized (Figure 6), the time-out period of AirTouch enables automatic segmentation of the gesture by taking advantage of the 'idle period.' With this PTG, the semi-automatic gesture segmentation in AirTouch is simpler than the 'do-while' type interaction of the Gesture Watch (i.e., hold the segmentation gesture while applying the command gesture).

To find an appropriate length for the time-out period, we performed a pilot test with seven participants. During the pilot test, participants were asked to apply four gestures (Figure 3) six times each (4 gestures x 6 times = 24 trials) in random order while wearing the AirTouch system on their non-dominant wrist. Participants listened to voice commands from a computer and applied the gesture with the new PTG (Figure 5). We measured the time lapse between the last data input from the motion sensors and the activation of the front sensor. The result, calculated from 168 data points, showed that the maximum time between the hand gesture and confirmation was 1.3 seconds. Thus, we set the time-out period to two seconds in our formal experiment.
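
The time-out choice can be read as taking the largest confirmation lag observed in the pilot and padding it with some margin. The short sketch below illustrates that computation; the lag values are made up for illustration and are not the 168 measured data points.

/* Illustrative time-out selection from pilot confirmation lags (values are
 * invented for the example, not the study's data). */
#include <stdio.h>

int main(void)
{
    /* Lag (ms) between the last motion-sensor sample and the front-sensor
       activation, one value per pilot trial. */
    double lag_ms[] = { 420, 610, 830, 1040, 1210, 1300, 770, 950 };
    size_t n = sizeof lag_ms / sizeof lag_ms[0];

    double max_lag = 0.0;
    for (size_t i = 0; i < n; i++)
        if (lag_ms[i] > max_lag)
            max_lag = lag_ms[i];

    /* The paper rounds the observed 1.3 s maximum up to a 2 s window. */
    printf("max confirmation lag = %.0f ms -> chosen time-out = 2000 ms\n", max_lag);
    return 0;
}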

4 Study design overview

To investigate the possible benefits of tactile feedback with respect to visual attention in mobile gesture interactions, we conducted two experiments involving limited and full visual access to the interface. The experiments addressed the following research questions: how do the two PTG mechanisms differ, and what is the impact of the new PTG on user performance compared to the previous PTG (experiment 1)? And does tactile feedback during gestural interaction compensate for limited visual access while the user walks (experiment 2)? The study design for each experiment is composed of 2x2 conditions (Table 1). Tasks in all conditions are performed while the participants walk along a designated path (Figure 7-right). The walking path is set in a quiet lab setting. A pair of orange flags visually marks each 'gate' in the walking path to guide the user. All participants were recruited from the Georgia Institute of Technology.

Table 1. Conditions for experiments 1 and 2

                   Experiment 1                   Experiment 2
                   (old PTG vs. new PTG)          (effect of tactile feedback)
                   old PTG       new PTG          no feedback    has feedback
  full vision      Old-full      New-full         nT-full        T-full
  limited vision   Old-limited   New-limited      nT-limited     T-limited

Figure 6. Gesture Watch interactions.

Figure 7. Test setting (left; A: half-blocked goggles, B: laptop computer, C: headphones, D: AirTouch) and walking track (right).

5 Experiment 1

Fourteen participants volunteered for the experiment (mean age = 22.36, eleven male). The mean width and circumference of their wrists were 54.53 mm and 162.93 mm, respectively. All participants were right-handed except one. 50% of the participants wore a wristwatch daily. During the experiment, we measured accuracy and the amount of time required to gesture (4 gestures x 6 times x 2 conditions x 2 groups x 14 participants = 1344 trials). Participants completed a NASA Task Load Index (TLX) for each condition and a survey on their impressions of the interfaces.

5.1 Task and procedure

The experiment was conducted with a mixed between-/within-subjects method in a balanced order. We selected this method because interacting with different PTG mechanisms is likely to cause confusion when they are used consecutively by one person. Seven participants (five male) controlled the watch with the old PTG method (Figure 6), whereas the other seven participants (six male) used the new PTG mechanism (Figure 5). No tactile feedback was provided in this experiment. Participants in each group experienced two conditions, full or limited vision. To simulate limited visual access to the watch, participants wore half-blocked goggles. The goggles blocked the wearer's sight below eye level when interacting with the wristwatch interface but allowed full vision for walking.

The experimental procedure was composed of four sessions: training, walking practice, main test, and survey. During the training session, participants wore the watch UI on the non-dominant wrist, stood in front of the computer, and attempted the four gestures three times each (4 gestures x 3 times = 12 trials) guided by the researcher. The training GUI (Figure 4-left) and a synthetic voice on the computer presented the name and direction of the gesture. Once participants heard the voice command for the gesture, they applied the gesture to the watch UI. This procedure is identical to the full vision condition in the main session except that the training session provided a visual guide (gesture direction and name) and was performed standing rather than walking. After finishing the training session, participants practiced walking twice along the designated path. The researcher led the participant for the first trial and followed the participant for the second trial (Figure 7-right).

In the main test session, participants wore headphones and carried a laptop in a backpack (Figure 7-left). On the laptop computer, the GART test GUI (Figure 4-right) was run to provide tasks for the user. This GUI was also remotely shared with another computer in the lab so that the researchers could monitor the live status of the performance. A voice command from the headphones prompted the study by saying 'Welcome to the user study. Please start walking.' Within 5 seconds, the first task was given by a voice command. The name of the gesture was played twice to ensure clear delivery of the voice command (e.g., "Forward gesture. Forward."). Then the participant applied a hand gesture to the wristwatch UI. For the first group, which used the old PTG mechanism, no feedback on confirmation or failure of the gesture was provided, as the old system was not designed to support it. For the second group, which used the new PTG method, voice feedback was provided to indicate the user's decision (e.g., "Confirmed" or "Aborted, please try again"). The result of the gesture performance (correct or incorrect) was not provided to either group. After a random interval of between 10 and 20 seconds, another voice command was played to guide the next trial.

While participants walked and performed the task, the researchers sat at a desk next to the walking path. One researcher controlled the GART GUI, and another researcher observed the participants to take notes or help them upon request. A subjective rating survey and the NASA-TLX were administered after the main test session.

5.2 Result

The mean accuracy of the training session for the new and old PTG methods was 91.96% and 88.10%, respectively. In the main test, we measured performance accuracy (Figure 8-left) and gesture time (Figure 8-right), which is the time difference between the first and last incoming sensor data captured. The mean accuracy of the new PTG system was higher than that of the old system with statistical significance in the full vision condition (paired t-test with Bonferroni correction, p < .05), but not in the limited vision condition. The effect of visual restriction (full vs. limited vision) was not statistically significant for either PTG type (old or new).

Gesturing with the new PTG method was faster than with the old PTG method with statistical significance in the full vision condition (paired t-test, Bonferroni correction, p < .05), but not in the limited vision condition. The effect of the visual condition (full vs. limited vision) was statistically significant only for the new PTG method (paired t-test, Bonferroni correction, p < .05).

Figure 8. Experiment 1: accuracy and time.

5.3 Subjective rating and workload

The difficulty (Figure 9-left) of all conditions was perceived as easy to neutral. The subjective rating of each condition indicated that performance with full vision was perceived as easy (≈ 1.0) with both the old and new PTG methods. The limited vision condition was rated slightly lower, as neutral to easy (0.0 - 1.0).

Results from the participants' self-reports collected with the NASA-TLX assessment indicated that the design of the gestures was simple and easy, but that using the system required a bit of familiarization time. Although the gestures (command, segmentation, and confirmation gestures) were awkward at first for some participants, they soon felt that the gestures became natural. Additionally, some participants reported needing extra effort when applying the L-shaped gesture. We will discuss the learnability of command gestures and PTG interactions later.

Ten out of fourteen participants mentioned that they could not coordinate hand and sensor correctly in the limited vision condition (which indicates the visual attention needed during the interaction). This difficulty was consistently observed with both the old and the new PTG mechanisms. Increased confidence was frequently mentioned in the full vision conditions. Some old-PTG participants reported that holding the tilted non-dominant wrist for segmentation was obtrusive because they put extra effort into avoiding hitting that wrist during a dominant-hand gesture.

Figure 9. Subjective rating (-2.0 = very difficult, -1.0 = difficult, 0.0 = neutral, 1.0 = easy, 2.0 = very easy).

6 Experiment 2

Sixteen participants volunteered for the experiment (mean age = 24.25, thirteen male). The mean width and circumference of their wrists were 56.26 mm and 165.81 mm, respectively. All participants were right-handed except one. 37.5% of the participants wore a wristwatch daily. During the experiment, we measured accuracy and performance time (4 gestures x 6 times x 4 conditions x 16 participants = 1536 trials) and recorded subjective feedback from the NASA-TLX and our survey.

6.1 Task and procedure

The experiment employed a within-subjects design in a balanced order. All participants experienced four conditions, each composed of 24 trials (4 gestures x 6 times). The 2x2 conditions of this experiment were determined by the presence or absence of tactile feedback (with tactile feedback = T, without tactile feedback = nT) and visual restriction (full vision, limited vision) (Table 1). The equipment and procedure of experiment 2 were identical to those of experiment 1.

6.2 Results

The mean accuracy of the training session was 93.97%. In the main test, we measured performance accuracy (Figure 10-left) and performance time (Figure 10-right). The performance time was broken down into two parts: gesture time and confirmation time. Gesture time indicates the time between the first and last sensor data captured, whereas confirmation time means the time between the last sensor data and the confirmation or rejection of the event.
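
As a compact statement of these two measures, the sketch below computes them from three per-trial time stamps; the struct and field names are ours, not the study's logging format.

/* Illustrative computation of the two timing measures defined above. */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t first_sensor_ms;   /* first incoming sensor sample of the trial */
    uint32_t last_sensor_ms;    /* last incoming sensor sample of the trial */
    uint32_t decision_ms;       /* confirmation or rejection of the trial */
} TrialLog;

/* Gesture time: first to last sensor sample. */
static uint32_t gesture_time_ms(const TrialLog *t)
{
    return t->last_sensor_ms - t->first_sensor_ms;
}

/* Confirmation time: last sensor sample to the confirm/abort decision. */
static uint32_t confirmation_time_ms(const TrialLog *t)
{
    return t->decision_ms - t->last_sensor_ms;
}

int main(void)
{
    TrialLog t = { 1000, 2400, 3100 };   /* illustrative time stamps, in ms */
    printf("gesture = %" PRIu32 " ms, confirmation = %" PRIu32 " ms\n",
           gesture_time_ms(&t), confirmation_time_ms(&t));
    return 0;
}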

The effect of tactile feedback (T vs. nT) on accuracy was not statistically significant (paired t-test, p > .05) regardless of the visual restriction. Similarly, the effect of visual restriction (full vs. limited vision) was not statistically significant regardless of tactile feedback (p > .05). When examining the accuracy of each gesture, the gesture type (e.g., backward, forward, inbound-L, outbound-L) affected the accuracy in all four conditions (one-way ANOVA, p < .05).

In performance time, the effect of tactile feedback differed between the full and limited vision conditions. In the full vision condition, the effect of tactile feedback (T vs. nT) was not statistically significant (p > .05). In the limited vision condition, on the other hand, tactile feedback enabled faster performance times with statistical significance (t-test, Bonferroni correction, p < .05). This result indicates that tactile feedback was more beneficial when the user's visual access to the interface was limited. The effect of visual restriction (full vs. limited vision) was statistically significant only in the nT condition (t-test, Bonferroni correction, p < .05). This result indicates that the effect of visual restriction was significant when tactile feedback was not provided, yet not significant when tactile feedback was provided.

Figure 10. Experiment 2: accuracy and time.

6.3 Subjective rating and workload

The subjective ratings showed that the full visual access condition consistently improved perceived performance regardless of tactile feedback (Figure 9-right). Similarly, participants reported that tactile feedback made the task easier by increasing their confidence regardless of the visual condition. Although the accuracy results were not given to the participants during the test, most participants reported that the tactile feedback clearly represented their hand-sensor coordination (i.e., "The tactile feedback tells where my hand is."). With tactile feedback, the perceived difficulty of the limited vision condition (T-limited) was similar to that of the full vision condition without tactile feedback (nT-full). This result indicates that the presence of tactile feedback compensated for the visual restriction. This compensation was consistently observed in the performance times as well as in the participants' self-reports.

Interestingly, all sixteen participants reported that the tactile feedback was helpful even though none of them clearly felt the one-to-one matching of sensors and motors needed to localize the tactile stimulation. Instead of a localized buzz from each motor, all sixteen participants perceived the tactile cue as a binary alert indicating the presence or absence of the dominant hand on top of the sensors. With respect to the appropriate sensor-motor layout and gesture design, we will revisit the design implication of this finding later in the discussion.

7 Discussion and future work

In experiment 1, a difference in accuracy and time to gesture between the old and new PTG methods was observed only when the interface was fully visible. In the full vision condition, the new PTG method enabled higher accuracy and faster gestures. Participants using the old PTG method reported extra effort in performing the trigger gesture in parallel with the command gesture and in avoiding hitting the tilted non-dominant wrist (trigger gesture) during the interaction. Since we did not observe such extra effort with the new PTG method, we assume that performing the command gesture with the dominant hand and the confirmation gesture with the non-dominant wrist in sequence, as in the new PTG method, required less effort than the older method.

In experiment 2, we found slower performance in the limited vision condition only in the nT condition (no tactile feedback). This finding suggests that tactile feedback provides appropriate support when the user's visual attention is limited in mobile interactions. The similarity between the T-limited condition and the nT-full condition in performance times (Figure 10-right) and subjective ratings (Figure 9-right) suggests that tactile feedback can compensate for limited visual attention in the AirTouch interaction. This result indicates that tactile feedback can successfully assist mobile gesture interaction when the user's visual attention to the interface is limited while walking.

The NASA-TLX results did not show a difference among conditions but revealed general guidelines for further improvement. For some participants, difficulties in matching the gesture name and hand movement (mental demand) and in applying the L-shaped gesture correctly (physical demand) were observed. Since these difficulties are closely related to the appropriateness of the gesture design and its learnability, we believe that a longitudinal study would be necessary to observe the effects of tactile feedback in mobile gesture interaction more accurately.

Participants perceived the tactile feedback as a binary signal rather than four localized sensations. This finding suggests that our effort to provide clearly localized tactile sensations by changing the sensor layout (Figure 2) was not meaningful. Thus we decided to rotate the sensor layout back to the cardinal arrangement (Figure 2-left). In the new arrangement, a single motor would be sufficient. We assume that the 3x3 grid in the cardinal arrangement might enable more precise hand gesture sensing and more accurate recognition by utilizing a higher density of sensing points.

During the experiments, we observed three limitations of the study, caused by the test setting, the hardware, and the gesture design. Since the test was performed in a lab setting, the walking task did not reproduce the chaos of in-the-wild mobile interaction. In the fully visible condition that was also supported by tactile feedback (T-full), some participants reported that they used their vision only for the walking task and not for gesture interaction, whereas other participants shifted their visual attention between the walking path and the wrist. This result indicates that the users' vision was not controlled as successfully as we intended.

In addition, we found that the sensor range needed to be shortened. Occasionally, the system aborted the attempt soon after the voice command, even before the participants applied a hand gesture to the motion sensors. Through real-time observation, we found that false triggering of the motion sensors was subject to body posture (e.g., pulling the forearm too close to the body or tilting the wrist and elbow toward the chest or face), which was less consistent while walking than while standing. When the IR sensors accidentally captured data from the chest or upper forearm of walking participants, the false data was treated as a hand gesture and unintentionally started the clock counting the time-out period. Although the aborted trials did not directly affect performance accuracy, participants reported frustration whenever they heard the abort message from the system. To resolve this problem, we decided to use sensors with a much shorter detection range in our new design. The new wristwatch UI is smaller both in physical dimensions (46.5 mm x 17 mm x 45 mm) and in sensor range (Figure 11, black wristwatch UI). The proper detection range should be explored in future work.

Figure 11. New AirTouch UI (black watch).

Although the new PTG method allows the canceling of a mistaken action upon user perception, none of the participants took advantage of this function. In experiment 2, all trials from the sixteen participants were confirmed without intentional abortion. We assume that this tendency was due to the easy set of gestures that we provided. To learn more about the benefit of the new PTG method in AirTouch with respect to its capability for reversible action across diverse gestures, testing the PTG with a more sophisticated set of gestures would be required.

Compared to the training session (≈ 90%), the accuracy is relatively low in the main sessions (60-80%). Given that the training session eased user performance by providing visual guidelines (gesture name and direction) in a less mobile condition (standing), we assume that the difference was caused by a combination of several factors: the walking task added additional workload; false triggering was caused by inconsistent body orientation while walking; novice users had learnability issues while memorizing and matching the gesture name and hand movement; the design and naming of the gestures might not be appropriate; and the interaction among all of the factors above with respect to complex gestures (e.g., L-shaped). Thus, further investigation is required to iterate on the design of the hardware, gestures, and experiment in order to identify the effect of individual factors.

8 Conclusion

Our new push-to-gesture method, in which the user commits a command gesture after performing it, as opposed to holding a trigger posture while performing a command gesture, enables more accurate and faster mobile gesture interaction with less effort in conditions where the interface is visually accessible. Similar performance times and perceived difficulties were observed in the limited visual access condition with tactile feedback (T-limited) and the full visual access condition without tactile feedback (nT-full). Thus, we conclude that tactile feedback is beneficial for assisting mobile gesture interaction where visual access to the interface is limited.

Acknowledgment

This material is based upon work supported, in part, by the National Science Foundation (NSF) under Grant #0812281 and the Electronics and Telecommunications Research Institute (ETRI). This research was performed when the first author was a student at the Georgia Institute of Technology. We thank the students of the Georgia Institute of Technology for their participation.

References

[1] D. L. Ashbrook, J. R. Clawson, K. Lyons, T. E. Starner, and N. Patel. Quickdraw: the impact of mobility and on-body placement on device access time. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 219-222, 2008.
[2] L. Barnard, J. S. Yi, J. A. Jacko, and A. Sears. An empirical comparison of use-in-motion evaluation scenarios for mobile computing devices. International Journal of Human-Computer Studies, 62(4):487-520.
