An Experimental Evaluation of Transparent Menu Usage

Beverly L. Harrison
Dept. of Industrial Engineering, University of Toronto, Toronto, Ontario, Canada M5S 3G9
and Alias|Wavefront, 110 Richmond Street East, Toronto, Ontario, Canada M5C 1P1
beverly@dgp.utoronto.ca

Kim J. Vicente
Dept. of Industrial Engineering, University of Toronto, Toronto, Ontario, Canada M5S 3G9
benfica@ie.utoronto.ca

ABSTRACT
This paper reports a systematic evaluation of transparent user interfaces. It reflects our progression from theoretically based experiments on focused attention to more representative, application-based experiments on selection response times and error rates. We outline how our previous research relates to both the design and the results reported here. For this study, we used a variably transparent text menu superimposed over different backgrounds: text pages, wire-frame images, and solid images. We compared "standard" text (Motif style, Helvetica, 14 point) with a proposed font enhancement technique ("Anti-Interference" outlining). More generally, this experimental evaluation provides information about the interaction between transparency and text legibility.

KEYWORDS: display design, evaluation, transparency, user interface design, interaction technology, toolglass

Permission to make digital/hard copies of all or part of this material for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copyright is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires specific permission and/or fee. CHI 96 Vancouver, BC Canada © 1996 ACM 0-89791-777-4/96/04..$3.50

INTRODUCTION
This paper describes an empirical evaluation using variably transparent, linear menus superimposed over different background content: text, wire-frame images, and solid images (Figure 1). The menu contains text items presented in either regular Motif-style fonts or our proposed "Anti-Interference" (AI) font. We evaluated the effect of varying transparency levels (from opaque menus to highly transparent menus), the visual interference produced by different types of background content, and the performance of AI fonts. More generally, this evaluation determines the interaction effect between transparency and text legibility.

FIGURE 1. Experimental Sample Images: 50% transparent, regular Motif-style menu over a solid background; 100% transparent, AI-font menu over a wire-frame background. (Image resolution degraded due to scaling. Actual screen images were of much higher quality and resolution.)

The technological problem addressed by transparent interfaces is that of screen size constraints. Limited screen real estate combined with graphical interface design has resulted in systems with a proliferation of overlapping windows, menus, dialog boxes, and tool palettes. It is not
feasible to "tile" computer workspaces; there are too many objects. Overlapping opaque objects obscure information we may need to see and therefore may also be undesirable. Transparent interfaces address these issues, but may also introduce new challenges for designers.

The associated psychological problem we are addressing is that of focused and divided attention. When there are multiple sources of information, we must make choices about what to attend to and when. At times, we need to focus our attention exclusively on a single item without interference from other items. At other times, we may need to time-share or divide our attention between two (or more) items of interest. In this case, we rapidly switch attention back and forth between the items (necessitating minimal "switching costs"). There is a trade-off between these attentional requirements.

Transparency is perhaps most useful in achieving better integration between task space and tool space, between multiple tools, or between multiple views. Many applications are designed with a large work space or data area which is the primary focus of attention, while the tools to manipulate the data appear in windows and palettes over top of the work area. These tools divert or block our attention from our work, which is often providing feedback about the actions we apply; consider, for example, painting or drawing systems and the traditional UI tools for changing paint brushes and colors. However, there are several examples of highly advanced systems which exemplify more seamless task-tool integration through the application of transparent user interfaces. In Heads Up Display (HUD) design, aircraft instrumentation (a graphical computer interface) is superimposed on the external real-world scene, using specially engineered windshields [12]. In the ClearBoard work [7], a large drawing surface is overlaid on a video image of the user's collaborative partner. The TeamWorkStation system [6], predecessor to ClearBoard, created semi-transparent computer work space windows superimposed with video image windows (e.g., a person, an object being discussed). The ToolGlass and MagicLens work [1, 2, 10] reflects a tight coupling between tool function, target object specification, and transparency. Other designs include such things as video overlays like those used in presenting sports scores in broadcast television. Some designs combine transparency and 3-D projected views of the user interface. Several examples are: the work on "3-D silk (volume) cursors" [13]; the work by Knowlton [8], which used graphical overlays projected down onto half-silvered mirrors over blank keyboard keys to dynamically re-label buttons and function keys (e.g., for telephone operators); and the work by Schmandt [9], who built a system to allow users to manually manipulate and interact with objects in a 3-D computer space using a 3-D wand. Again, a half-silvered mirror was used to project the computer space over the user's hand(s) and the input device. Disney has also developed a product called the "ImaginEasel" for animators and artists. ImaginEasel keeps the user's hand and input device in the workspace (using mirrors). In every case transparency provided a more seamless integration between the data or work and the UI tools.

The study described in this paper represents an "applied" experiment to evaluate transparency and text menu item selection, but it is intended to inform us about transparency and text legibility in general. Text labels are either selectable in themselves (e.g., as menu items, hypertext links) or they are important cues in differentiating and identifying graphical window items for subsequent selection (e.g., button labels, data entry fields). We also wish to apply transparency to help systems and on-line documentation. Clearly the effect of transparency on overall text legibility is a critical consideration in these situations.

PROGRESSION OF RESEARCH
We conducted a series of experiments, reflecting the progression from tightly controlled, theoretically based, empirical work [4] to studies which sacrifice some experimental control in order to increase realism (and presumably applicability to everyday tasks) [5, and this paper].

In our previous research, we conducted a theoretically motivated experiment [4], which tested a hypothesized model of focused attention and interference. The stimuli for that experiment were specific to the Stroop Effect [12] and therefore were necessarily simplistic and non-representative of real applications, i.e., a text word seen through a variably transparent colored rectangle. Significant main effects were found for transparency level, word type, and color (all p<.001). Transparency increases resulted in performance improvements in word naming, since the word was more legible (Graph 1). There was a significant interaction between transparency and color, F(12, 163)=6.17, p<.0001, suggesting that word legibility is affected not only by the level of transparency (i.e., visibility) but also by the properties of the color used (i.e., saturation and luminance). Post-hoc analysis showed three significantly different groupings of transparency levels: 5%; 10%; and 100%+50%+20%+0%. Most illegible trials occurred at 5%; above 10%, subjects made virtually no errors.

Graph 1. Mean Response Time Results from the Stroop Experiment - Word Naming (Background) Task. (Invert X axis to reflect the foreground menu task relevant to the text selection experiment.)
We subsequently changed the stimuli to more complex and realistic images. In place of the Stroop color patch, we inserted either an iconic tool palette [5] or a linear text menu [this paper]. Both of these represent highly realistic foreground tasks in most user interface applications. We replaced verbal response times with mouse click selection times, also highly representative of realistic applications. Finally, we replaced our 78-point Helvetica word from the previous Stroop experiments with complex background images taken from product libraries. These images reflect a "snapshot" in time for several task domains we have targeted for later case study evaluation. While the images used as stimuli are not interactive, they do reasonably reflect a static moment in time for our choice of tasks. As in the Stroop experiment series, we ran both foreground focused attention tasks and background focused attention tasks. These tasks have been carefully matched to allow comparison.

The analysis of the Icon Palette Experiment data [5] revealed several points relevant to our Text Legibility Experiment. The type of icon, type of background, and transparency level were all statistically significant (p<.0001), as were the interactions between these factors. Graphs 2 and 3 summarize some of these interactions. Briefly, solid icons and solid image backgrounds are significantly more interference resistant than line art or text, resulting in the best performance. Line art icons and text icons perform equivalently, as do wire-frame backgrounds and text page backgrounds. There is no significant performance degradation between 0% (opaque) and 50% transparent. Levels over 75% are error prone. In total, 18% of the experimental trials were marked illegible. However, most of these errors occurred at 90%, where more than half (55%) of the trials were marked illegible. All icon types over wire-frame backgrounds or text pages were illegible at 90%. Finally, we determined the "threshold of frustration", the point at which subjects give up because the effort seems excessive. At 90%, subjects marked trials illegible after attempting them for an average of 2.6 seconds, roughly twice the average trial time.

GRAPH 2. Mean Response Times - Icon Type (line, solid, text).

GRAPH 3. Mean Response Times - Background Types (wire, solid, text).

EXPERIMENT - TRANSPARENCY & TEXT LEGIBILITY
This experiment explores the issue of focused attention and interference in the context of text legibility and item selection. As in the Icon Palette Experiment, this experiment also represents an extension to the Stroop studies. The text menu replaced the Stroop color patch, while the image files replaced the simple Stroop word (e.g., Figures 1, 2). The transparency level varied randomly from 0% (completely opaque) to 100% transparent, or clear. We used a "regular" Motif-style font (Helvetica, bold, 14 point, italic) and our proposed Anti-Interference (AI) font, which uses luminance values to create a contrasting outline (e.g., Figure 4). All combinations of font styles X background types X transparency levels were run.

FIGURE 2. Sample Trial Screen showing target item.
Applying the Previous Experiments
The Text Selection Experiment represents a foreground focused attention task and, as such, we anticipate a performance curve which resembles those found in the Icon Palette Experiment (i.e., transparency increases should degrade performance) (Graphs 2 and 3). However, unlike the palette selection task, where the entire icon was made transparent, the text label remains opaque - only the surface area around the label is made transparent (e.g., Figure 3). In the Icon Palette Experiment, our icons were solid objects (as many icons typically are). In order to achieve a transparency effect, the icon image itself must be made transparent. In the case of text labels, however, the text occupies only a small percentage of the "selectable region"; therefore we may leave the text opaque and still achieve reasonable transparency using the remainder of the selectable area around it (e.g., Figure 3b). (Both design alternatives are shown in Figure 3. We feel Figure 3b represents the more realistic design choice. This was the method used in our experiment.)

The text selection task itself is a legibility task, suggesting cut-off points similar to those from the Stroop Word Naming Experiment [4] (Graph 1). Best performance is expected to be maintained from 0% (opaque) to 50% (i.e., interference is no longer an issue). Poor performance and high error rates should occur above 75% (i.e., 25% of the foreground shows, 75% of the background shows). We might anticipate that 90% will be difficult to use (as in the Stroop Experiment). However, note that these cut-off point estimates are based on experiments where the entire target object was transparent. The actual cut-off points are more likely to shift right on the predicted curve, given the opaque text labels - implying more resistance to visual interference. This suggests that from 0% to some level greater than 50%, performance will be roughly equivalent. Opaque text labels might remain usable up to 100% transparent (clear menu area).

FIGURE 3. Comparison of design alternatives for transparent text items: (a) labels and the surface around the labels are both transparent; (b) labels are opaque, only the surface around the labels is transparent. (Image resolution degraded due to scaling. Actual screen images were much higher quality.)

When applying the Anti-Interference (AI) fonts, we anticipate more interference-resistant images than regular font text (Figure 4). This would give us a flatter curve which is shifted towards better performance. (This is not unlike the effect shown in Graphs 2 and 3 when the complexity of the image was simplified, for example, from text to solid.)

Hypotheses
H1: As transparency level increases, visual interference will increase. This will result in poorer performance (i.e., slower response times and increased error rates).
H2: Increased complexity or information density of the background will make text legibility decrease. Text backgrounds will have the worst performance, followed by wire-frame, then solid images.
H3: AI fonts will significantly improve performance by creating more interference-resistant text.

FIGURE 4. (a) Regular font, 100% transparent, wire-frame background; (b) AI font, 100% transparent, wire-frame background.
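As a concrete illustration of the Figure 3b alternative, here is a minimal compositing sketch (not part of the original system; it assumes floating-point RGB images and a boolean mask marking label pixels, and all names are illustrative). The menu surface is alpha-blended with the background at the chosen transparency level while label pixels stay fully opaque:

```python
import numpy as np

def composite_menu(background, menu, label_mask, transparency):
    """Blend a menu over a background image, Figure 3b style.

    background, menu : float arrays, shape (H, W, 3), values in [0, 1]
    label_mask       : bool array, shape (H, W); True where text pixels lie
    transparency     : 0.0 (opaque menu surface) ... 1.0 (fully clear surface)
    """
    # Label pixels keep alpha = 1 (opaque text); the surrounding menu
    # surface gets alpha = 1 - transparency.
    alpha = np.where(label_mask[..., None], 1.0, 1.0 - transparency)
    # At transparency = 0.75 the surface region shows 75% background and
    # 25% menu, i.e., a highly transparent menu with still-legible labels.
    return alpha * menu + (1.0 - alpha) * background
```

The Figure 3a alternative corresponds to dropping the mask and using alpha = 1 - transparency for every pixel, including the text itself.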
Experimental Design
A fully randomized, within-subject, repeated measures design was used. There were three independent variables: type of font, type of background, and transparency level. A total of 540 trials were run for each subject. Trials were presented in random order. Each session lasted about 45 minutes. Dependent variables of selection response time (based on a mouse click) and errors were logged. Two error conditions were possible: the subject pressed the "can't see" button, indicating that the item was not legible, or the subject selected the incorrect menu item. In the latter case, the item selected and its location were logged. Error trials were removed from subsequent analysis of response time data. Error data were analyzed separately.

We used two groups of text items within the menu; the items within each group were visually similar to ensure that true legibility was measured. The menu items were: Revolve X, Revolve Y, Revolve Z, and Dup Curve, Comb Curve, Del Curve. Six other menu items were randomly distributed with the target items. (A 12-item menu was felt to be representative of the average menu size used within the actual product.) Items were randomly assigned positions within the menu for each trial. This was done to ensure the experiment was not confounded by subjects learning the positions of items. (We were interested in testing true text legibility rather than menu usability. Randomly ordered menus will produce worst-case data which overestimate performance degradation relative to standard menu usage. This gives us a conservative range of transparency levels.) The target item was presented to the subject throughout the trial as a reminder. This was to prevent memory errors (which were not pertinent to the goals of this study).

We randomly assigned background images of three types: text pages, wire-frame images, and solid images. Three samples of each type were created. Images were 8-bit color rendered images. These backgrounds were aligned such that a major portion of the content was directly under the menu.

We randomly assigned the level of transparency to the menu. These levels were based on our previous experimental experience [4, 5] and pilot test results with this experiment. Levels of 0% (traditional opaque menus), 50%, 75%, 90%, and 100% (clear) were used. The opaque level represented the baseline condition where the fastest performance was anticipated. Transparency levels were produced using alpha blending of the foreground and background images (as opposed to stippling or masking). A level of 75% transparent means that 75% of the background image was combined with 25% of the foreground image, producing the effect of a highly transparent menu.

Finally, we randomly assigned either the regular font style or our AI font style to the items. Regular fonts were matched to the Motif-style menus that appeared in windows on the SGI (Helvetica, 14 point, bold, italic was the best match). We developed Anti-Interference (AI) fonts as a potential interference-resistant font technique (Figure 4b). Since an AI font has two opposing color components, it remains visible against any color background. In AI fonts, the opposing outlines of the text are rendered in a color which has the maximal contrast to the color of the text. For any selected text color vector [R, G, B], our AI font algorithm calculates the luminance value Y according to the YIQ color model used in television broadcasting ([3], page 589). Note that the red, green and blue components are not equally weighted in contributing to luminance:

Y = 0.299 R + 0.587 G + 0.114 B
I = 0.596 R - 0.275 G - 0.321 B
Q = 0.212 R - 0.528 G + 0.311 B

Based on the value of Y, our algorithm then determines the outline color with the maximal luminance contrast. In practice, only two color vectors can be candidates for the solution: [0, 0, 0] (black) when Y > Ymax/2, or [Rmax, Gmax, Bmax] (white) when Y < Ymax/2, where Ymax is the maximum luminance value and Rmax, Gmax, Bmax are the maximum red, green and blue values, respectively.
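A small sketch of this outline-color rule follows, assuming 8-bit color channels (the function name and value range are illustrative, and only the luminance row of the YIQ transform is needed):

```python
def ai_outline_color(r, g, b, max_val=255):
    """Return the outline color with maximal luminance contrast to text color (r, g, b)."""
    # Luminance row of the YIQ transform; R, G and B contribute unequally.
    y = 0.299 * r + 0.587 * g + 0.114 * b
    y_max = max_val  # the luminance weights sum to 1, so Ymax equals the channel maximum
    # Light text (Y > Ymax/2) gets a black outline; dark text gets a white one.
    return (0, 0, 0) if y > y_max / 2 else (max_val, max_val, max_val)
```

For the black menu text used in this experiment, Y = 0 < Ymax/2, so the AI outline is rendered in white (Figure 4b).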
Equipment
The experiments were conducted on an SGI Indy™ using a 20-inch color monitor. Subjects sat at a fixed distance of 60 cm from the screen (the average distance when working normally).

Procedure
Subjects were given 20 practice trials. These trials were randomly selected from the set of 540 possible combinations. For each trial, subjects were shown a target text item to study (lower left corner, Figure 2). When ready, subjects pressed a "next trial" button (not shown), which displayed the menu superimposed over the background at a randomly ordered transparency level. Items were randomly distributed on the menu. Subjects had to locate and click on the target item within the menu. If they could not see the item on the menu (i.e., it was illegible), they could press a "can't see" button. The target item remained on the screen throughout the trial for reference purposes. Subjects could take short rest breaks whenever necessary. Response times and errors were logged. Response selections were made using the mouse. Subjects were debriefed at the end of the experiment. Open-ended comments were recorded.

Subjects
A total of 10 students from the University of Toronto served as subjects. They were pre-screened for color-blindness and for familiarity with the product from which the images and items were taken. Subjects were paid for their participation and could voluntarily withdraw without penalty at any time.
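Putting the design and procedure together, the 540-trial set for one subject can be enumerated as a full factorial crossing. The sketch below reflects one plausible reading of the design (2 font styles x 9 background images x 5 transparency levels x 6 target items); the identifiers are illustrative, not taken from the experimental software:

```python
import itertools
import random

FONTS = ["regular", "ai"]
BACKGROUNDS = [f"{kind}.{i}" for kind in ("page", "wire", "solid") for i in range(1, 4)]
TRANSPARENCY = [0, 50, 75, 90, 100]  # percent
TARGETS = ["Revolve X", "Revolve Y", "Revolve Z",
           "Dup Curve", "Comb Curve", "Del Curve"]

# 2 fonts x 9 backgrounds x 5 levels x 6 targets = 540 fully crossed trials.
trials = list(itertools.product(FONTS, BACKGROUNDS, TRANSPARENCY, TARGETS))
assert len(trials) == 540
random.shuffle(trials)  # trials were presented in random order
```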
RESULTS
We have categorized our results by response time analysis, error analysis, and comments from the interviews with subjects.

Quantitative Statistical Analysis - Response Time
Highly significant main effects were found for all of our major variables: background type, transparency level, and font type (Table 1). However, we are primarily interested in the transparency and font effects and their interactions with background type. Statistically significant interaction effects are reported in Table 1.

condition                 df       F value   p<
background type           8, 72    1.06      .01
transparency level        4, 36    4.12      .0001
font type                 1, 9     3.38      .0001
bkgrnd type X font type   8, 72    1.59      .001
bkgrnd type X transp      32, 288  2.44      .01
bkgrnd X font X transp    32, 287  3.76      .001

TABLE 1. Results for Main Effects and Interactions

The primary results of interest are plotted below (Graphs 4 and 5a, b, c), across all subjects and menu items. Graph 4 depicts the interaction between font style and transparency level. Graphs 5a, b, c show the interaction between background types and transparency level. To determine whether the differences between the individual lines plotted within each of the graphs are significant, a Student-Newman-Keuls (SNK) test was conducted post hoc as a comparison of means. (This determined the clustering of items within font type, background type, and transparency level and indicated which items are not statistically different from each other.)

We conducted subsequent analyses on the font style X background type interactions. Regular menu fonts showed strong interaction effects with the matched text page background and the dense wire-frame backgrounds. The solid images with black components also performed poorly. Somewhat surprisingly, the best (most interference resistant) backgrounds were the non-matched text pages. For the regular font style, there were statistically significant differences between the following transparency levels: 100% (poorest), 90%, 75%, and 50%+0% (which performed equivalently well). (This finding is consistent with our previous experimental results.) AI fonts were relatively insensitive to the type of background; the background images were not significantly different from each other. For AI fonts, there were statistically significant differences between the following transparency levels: 100% (poorest), 50%+75%+90% (not different), and 0% (best).

We also conducted a finer-grained analysis at each transparency level. At 0% and 50%, there were no statistical differences between background types or between font styles. At 75%, 90%, and 100% transparency, the AI font performed significantly faster than the regular font (shown in Graph 4). There are significant differences between backgrounds at these levels, though these differences are not based on the type (text, wire, solid) but rather on the individual image properties. For example, the text pages each used a different font style, one of which was purposely matched to the menu item font style. This page performed significantly slower than the other pages (Graph 5a). The denser wire-frame images (i.e., more complex meshes and therefore darker in color) performed significantly slower than the simpler wire frames (Graph 5b). The solid images with black components (the truck and the camcorder) performed significantly slower than the solid multi-colored motorcycle image (Graph 5c).
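For readers who want to reproduce this style of analysis on similar trial logs, a minimal sketch using statsmodels follows (this is not the software used for the original analysis; the file and column names are illustrative, and since SNK is not available in common Python libraries, a Tukey HSD test is the usual stand-in for the post-hoc grouping):

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# One row per correct trial: subject, font, background, transparency, rt (seconds).
df = pd.read_csv("trial_log.csv")

# Three-factor, fully within-subject repeated measures ANOVA on response time.
# Trials are averaged within each subject x condition cell; this assumes every
# subject has at least one correct trial in every cell.
result = AnovaRM(df, depvar="rt", subject="subject",
                 within=["font", "background", "transparency"],
                 aggregate_func="mean").fit()
print(result)
```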
GRAPH 4. Mean Response Times for Transparency Levels X Font Style (across all background types): regular font vs. AI font.

GRAPH 5a. Mean Response Times for Transparency Levels X Page Background Types (across font types): page.courier, page.times, page.helvetica.
GRAPH 5b. Mean Response Times for Transparency Levels X Wire Background Types (across font types): wire.truck, wire.motorcycle, wire.face.

GRAPH 5c. Mean Response Times for Transparency Levels X Solid Background Types (across font types): solid.truck, solid.motorcycle, solid.camcorder.

Targeting Error Results
In total, <1% of the trials resulted in targeting errors, or misses. The low number of errors indicates that subjects did not tend to guess. There were two types of targeting error possible: accidental selection of an adjacent menu item (45% of the total) and substitution of an incorrect menu item (55%). In the latter case, the user incorrectly identified the target item by replacing it with a similarly named item, such as Revolve X instead of Revolve Y. The adjacent-item errors are most strongly influenced by the width of the target areas. Since this was designed to match standard Motif menu widths, we did not increase the width to reduce these errors. However, we are most interested in the substitution misses, since these are partially attributable to poor visibility of the target item. These errors were surprisingly evenly distributed across transparency levels. AI fonts made little difference in reducing these errors.

Legibility "Error" Results
Only about 1% of the experimental trials were marked illegible. Of these, 90% occurred at the 100% level (clear) and 10% occurred at the 90% level. All of these illegible trials were in the regular font condition (i.e., no AI font trials were marked illegible). At the 90% level, mostly text items over the text pages or wire-frame backgrounds were illegible. At the 100% level, the two solid backgrounds with black color components accounted for 70% of the illegible trials. (The menu font was black; therefore one would expect these trials to be illegible.) Surprisingly, text pages accounted for only 3% of the errors made at the 100% level.

The mean response time for legibility errors was 6.84 seconds (the "threshold of frustration"), almost 3 times the response time for other trials. This implies that subjects exerted substantial effort to respond to each trial before giving up. In effect, this figure represents the "tolerance threshold" beyond which it is too much effort to locate the target.

Subjective Comments
Subjects commented that the wire-frame backgrounds seemed most difficult, the solid backgrounds were easiest, and highly transparent menus over black backgrounds were very hard. Most subjects commented that even a small change in the transparency level (from 100% clear to 90%) made a substantial difference in these black-on-black conditions. This change allowed subjects to see and select items where previously they had marked the trial "can't see". Subjective preference seemed to favor changing the transparency level to improve visibility, as opposed to changing to the AI font. Several subjects commented that they did not like the "outline font" and, if given a choice, preferred the 50% transparency level.

DISCUSSION
Transparency levels significantly affected response time and error rates (independent of font type or background).
We found evidence to support our predictions about the relationship between regular font performance and AI font performance. The AI fonts produced a substantially flatter performance curve, shifted towards better (i.e., faster) performance, implying they are more interference resistant. The real advantage of using AI fonts was only realized at higher transparency levels (i.e., over 50%). In fact, AI fonts at 75% and 90% produce results similar to those of regular fonts at 50%. This might be used as a design trade-off for text-based transparent interfaces.
Font type X background content interactions were most strongly affected at highly transparent levels. Performance differences are small between 0% and 50%. (This is consistent with results from our Stroop Experiment and the Icon Palette Experiment.) Surprisingly, the text backgrounds produced much better performance than expected. The most critical dimension of interference with text menu selection tasks was color conflict. The closer in shade and hue the background is to the text color, the higher the interference and the worse the resulting performance.

CONCLUSIONS
This experiment was designed as a text menu selection task. The results should generalize to text legibility in other UI contexts beyond menus. Performance is actually underestimated here for a menu usage scenario, since items did not appear in the same location each time and hence positional learning could not benefit performance. One variant of the experiment would be to run a fixed-menu condition to address this.

Both this experiment and the Icon Palette Experiment assume priority is given to selecting items from the foreground, and hence they measured this selection criterion only. Clearly, to round out the research we need to measure the level of awareness the subjects preserve of the background content. In particular, how are background focused attention tasks affected by transparency? To this end, we have just completed a study which tests selection accuracy of features from background images while icon palettes and text menus are superimposed, varying the level of transparency of the palettes and menus. We believe that this latest experiment, which uses the same stimuli and methodology, provides us with a comparable background task. This measure of background visibility is particularly relevant for tasks like the ToolGlass work [1, 2, 10] or click-through tools, which require alignment of the palette item with a specific background object or area.

While this paper presented results for text superimposed over a variety of background images, the methodology can be generalized to other types of interfaces by incorporating images or backgrounds from any target application or working product. The idea is to capture realistic screens at a single moment in time. With these captured images, any sort of menu, window, or palette can be superimposed at varying transparency levels and tested. Using this approach, performance can be predicted and the most appropriate settings can be determined for a variety of target applications. These empirical results can be combined with subjective assessments to provide strong insights about the most and least preferred design solutions in a generalized way. Our long-term goal is to provide user interfaces which improve the fluency of work by more seamlessly integrating the tools with the task space.

ACKNOWLEDGMENTS
Primary support for this research is gratefully acknowledged.
