Graphical User
Interface
Programming

48.1 Introduction∗ . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48-1
48.2 Importance of User Interface Tools . . . . . . . . . . . . . . . . . . . . . 48-2
     Overview of User Interface Software Tools • Tools for the World Wide Web
48.3 Models of User Interface Software . . . . . . . . . . . . . . . . . . . . . . 48-20
48.4 Technology Transfer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48-20
48.5 Research Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48-20
     New Programming Languages • Increased Depth • Increased Breadth • End User Programming and Customization • Application and User Interface Separation • Tools for the Tools
48.6 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48-22
`
`Brad A. Myers
`Carnegie Mellon University
`
`48.1 Introduction∗
`
`Almost as long as there have been user interfaces, there have been special software systems and tools to
`help design and implement the user interface software. Many of these tools have demonstrated significant
`productivity gains for programmers and have become important commercial products. Others have proved
`less successful at supporting the kinds of user interfaces people want to build. Virtually all applications
`today are built using some form of user interface tool [Myers 2000].
`User interface (UI) software is often large, complex, and difficult to implement, debug, and modify. As
`interfaces become easier to use, they become harder to create [Myers 1994]. Today, direct-manipulation
`interfaces (also called GUIs for graphical user interfaces) are almost universal. These interfaces require
`that the programmer deal with elaborate graphics, multiple ways of giving the same command, multiple
`asynchronous input devices (usually a keyboard and a pointing device such as a mouse), a mode-free in-
`terface where the user can give any command at virtually any time, and rapid “semantic feedback” where
`determining the appropriate response to user actions requires specialized information about the objects
`in the program. Interfaces on handheld devices, such as a Palm organizer or a Microsoft PocketPC device,
`use similar metaphors and implementation strategies. Tomorrow’s user interfaces will provide speech
`
∗ This chapter is revised from an earlier version: Brad A. Myers. 1995. “User Interface Software Tools,” ACM Transactions on Computer–Human Interaction 2(1): 64–103.
`
`1-58488-360-X/$0.00+$1.50
`© 2004 by CRC Press, LLC
`
`48-1
`
`
`
`
`48-2
`
`Computer Science Handbook
`
`recognition, vision from cameras, 3-D, intelligent agents, and integrated multimedia, and will probably
`be even more difficult to create. Furthermore, because user interface design is so difficult, the only reliable
`way to get good interfaces is to iteratively redesign (and therefore reimplement) the interfaces after user
`testing, which makes the implementation task even harder.
`Fortunately, there has been significant progress in software tools to help with creating user interfaces.
`Today, virtually all user interface software is created using tools that make the implementation easier.
`For example, the MacApp system from Apple, one of the first GUI frameworks, was reported to reduce
`development time by a factor of four or five [Wilson 1990]. A study commissioned by NeXT claimed that
applications programmed using the NeXTStep environment required 83% fewer lines of code and
took one-half the time, compared to applications written using less advanced tools, and some applications
`were completed in one-tenth the time. Over three million programmers use Microsoft’s Visual Basic tool
`because it allows them to create GUIs for Windows significantly more quickly.
`This chapter surveys UI software tools and explains the different types and classifications. However, it is
`now impossible to discuss all UI tools, because there are so many, and new research tools are reported every
`year at conferences such as the annual ACM User Interface Software and Technology Symposium (UIST)
`(see http://www.acm.org/uist/) and the ACM SIGCHI conference (see http://www.acm.org/sigchi/). There
`are also about three Ph.D. theses on UI tools every year. This article provides an overview of the most
`popular approaches, rather than an exhaustive survey. It has been updated from previous versions (e.g.,
`[Myers 1995]).
`
`48.2 Importance of User Interface Tools
`
`There are many advantages to using user interface software tools. These can be classified into two main
`groups. First, the quality of the resulting user interfaces might be higher, for the following reasons:
`
• Designs can be rapidly prototyped and implemented, possibly even before the application code is written. This, in turn, enables more rapid prototyping and therefore more iterations of iterative design, which is a crucial component of achieving high-quality user interfaces [Nielsen 1993b].
• The reliability of the user interface will be higher, because the code for the user interface is created automatically from a higher-level specification.
• Different applications are more likely to have consistent user interfaces if they are created using the same UI tool.
• It will be easier for a variety of specialists to be involved in designing the user interface, rather than having the user interface created entirely by programmers. Graphic artists, cognitive psychologists, and usability specialists may all be involved. In particular, professional user interface designers, who may not be programmers, can be in charge of the overall design.
• More effort can be expended on the tool than may be practical on any single user interface, because the tool will be used with many different applications.
• Undo, Help, and other features are more likely to be available because the tools might support them.
`
`Second, the UI code might be easier and more economical to create and maintain. This is because of the
`following:
`
• Interface specifications can be represented, validated, and evaluated more easily.
• There will be less code to write, because much is supplied by the tools.
• There will be better modularization, due to the separation of the UI component from the application. This should allow the user interface to change without affecting the application, and a large class of changes to the application (such as changing the internal algorithms) should be possible without affecting the user interface.
• The level of programming expertise of the interface designers and implementers can be lower, because the tools hide much of the complexity of the underlying system.
• It will be easier to port an application to different hardware and software environments because the device dependencies are isolated in the UI tool.
`
`
`
`
FIGURE 48.1 The components of user interface software. [Figure: a layered stack, from top to bottom — Application; Higher Level Tools; Toolkit; Windowing System; Operating System.]
`
`48.2.1 Overview of User Interface Software Tools
`
`Because user interface software is so difficult to create, it is not surprising that people have been working
`for a long time to create tools to help with it. Today, many of these tools and ideas have progressed from
`research into commercial systems, and their effectiveness has been amply demonstrated. Research systems
`also continue to evolve quickly, and the models that were popular five years ago have been made obsolete
`by more effective tools, changes in the computer market, and the emergence of new styles of user interfaces,
`such as handheld computing and multimedia.
`
`48.2.1.1 Components of User Interface Software
`As shown in Figure 48.1, UI software may be divided into various layers: the windowing system, the
`toolkit, and higher-level tools. Of course, many practical systems span multiple layers.
`The windowing system supports the separation of the screen into different (usually rectangular) regions,
`called windows. The X system [Scheifler 1986] divides window functionality into two layers: the window
`system, which is the functional or programming interface, and the window manager, which is the user
`interface. Thus, the window system provides procedures that allow the application to draw pictures on the
`screen and get input from the user; the window manager allows the end user to move windows around
`and is responsible for displaying the title lines, borders, and icons around the windows. However, many
`people and systems use the name “window manager” to refer to both layers, because systems such as the
`Macintosh and Microsoft Windows do not separate them. This article will use the X terminology, and use
`the term windowing system to refer to both layers.
`Note that Microsoft confusingly calls its entire system Windows (for example, Windows 98 or Windows
`XP). This includes many different functions that here are differentiated into the operating system part
`(which supports memory management, file access, networking, etc.), the windowing system, and higher-
`level tools.
`On top of the windowing system is the toolkit, which contains many commonly used widgets (also
`called controls) such as menus, buttons, scroll bars, and text input fields. On top of the toolkit might be
`higher-level tools, which help the designer to use the toolkit widgets. The following sections discuss each
`of these components in more detail.
`
`48.2.1.2 Windowing Systems
`A windowing system is a software package that helps the user monitor and control different contexts by
`separating them physically onto different parts of one or more display screens [Myers 1988b]. Although
`most of today’s systems provide toolkits on top of the windowing systems, as will be explained later, toolkits
`generally only address the drawing of widgets such as buttons, menus, and scroll bars. Thus, when the
`programmer wants to draw application-specific parts of the interface and allow the user to manipulate
`these, the window system interface must be used directly. Therefore, the windowing system’s programming
`interface has significant impact on most user interface programmers.
`The first windowing systems were implemented as part of a single program or system. For example,
the EMACS text editor [Stallman 1979] and the Smalltalk [Tesler 1981] and DLISP [Teitelman 1979]
`programming environments had their own windowing systems. Later systems implemented the windowing
`
`
`
`
FIGURE 48.2 The windowing system can be divided into two layers, called the base (or window system) layer and the user interface (or window manager) layer. Each of these can be divided into parts that handle output and input. [Figure: the user interface layer (window manager) contains the presentation and the commands; the base layer (window system) contains the output model and the input model.]
`
`system as an integral part of the operating system, such as Sapphire for PERQs [Myers 1984], SunView
`for Suns, and the Macintosh and Microsoft Windows systems. In order to allow different windowing
`systems to operate on the same operating system, some windowing systems, such as X and Sun’s NeWS
`[Gosling 1986], operate as a separate process and use the operating system’s interprocess communication
`mechanism to connect to application programs.
`
`48.2.1.2.1 Structure of Windowing Systems
`A windowing system can be logically divided into two layers, each of which has two parts (see Figure 48.2).
`The window system, or base layer, implements the basic functionality of the windowing system. The two
`parts of this layer handle the display of graphics in windows (the output model) and the access to the
`various input devices (the input model), which usually includes a keyboard and a pointing device such
`as a mouse. The primary interface of the base layer is procedural and is called the windowing system’s
`application programmer interface (API).
The other layer of the windowing system is the window manager or user interface. This includes all aspects
`that are visible to the user. The two parts of the user interface layer are the presentation, which comprises
`the pictures that the window manager displays, and the commands, which are how the user manipulates
`the windows and their contents.
`
`48.2.1.2.2 Base Layer
`The base layer is the procedural interface to the windowing system. In the 1970s and early 1980s, there were
`a large number of different windowing systems, each with a different procedural interface (at least one for
`each hardware platform). People writing software found this to be unacceptable because they wanted to
`be able to run their software on different platforms, but they would have to rewrite significant amounts
`of code to convert from one window system to another. The X windowing system [Scheifler 1986] was
`created to solve this problem by providing a hardware-independent interface to windowing. X has been
`quite successful at this, and it drove all other windowing systems out of the workstation hardware market.
`X continues to be popular as the windowing system for Linux and all other UNIX implementations. In
`the rest of the computer market, most machines use some version of Microsoft Windows, with the Apple
`Macintosh computers having their own windowing system.
`
`48.2.1.2.3 Output Model
`The output model is the set of procedures that an application can use to draw pictures on the screen. It
`is important that all output be directed through the window system so that the graphics primitives can
`be clipped to the window’s borders. For example, if a program draws a line that would extend beyond
`a window’s borders, it must be clipped so that the contents of other, independent, windows are not
`overwritten. Most computers provide graphics hardware that is optimized to work efficiently with the
`window system.
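To make the clipping requirement concrete, here is a minimal sketch in Python, assuming a window is just an axis-aligned rectangle given as (x, y, width, height); the function name and representation are illustrative, not taken from any real window system. A rectangular drawing request is intersected with the window's bounds so it cannot overwrite neighboring windows:

```python
def clip_to_window(request, window):
    """Intersect a rectangular drawing request with a window's bounds.

    Both arguments are (x, y, width, height) tuples.
    Returns the visible portion, or None if nothing is visible.
    """
    rx, ry, rw, rh = request
    wx, wy, ww, wh = window
    left = max(rx, wx)
    top = max(ry, wy)
    right = min(rx + rw, wx + ww)
    bottom = min(ry + rh, wy + wh)
    if left >= right or top >= bottom:
        return None  # the request lies entirely outside the window
    return (left, top, right - left, bottom - top)
```

A real window system performs the same intersection (generalized to lines, text, and regions) on every primitive before touching the frame buffer.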
`In early windowing systems, such as Smalltalk [Tesler 1981] and Sapphire [Myers 1986], the primary
`output operation was BitBlt (also called RasterOp, and now sometimes CopyArea or CopyRectangle). These
`early systems primarily supported monochrome screens (each pixel is either black or white). BitBlt takes
`
`
`
`
`a rectangle of pixels from one part of the screen and copies it to another part. Various Boolean operations
`can be specified for combining the pixel values of the source and destination rectangles. For example, the
`source rectangle can simply replace the destination, or it might be XORed with the destination. BitBlt
`can be used to draw solid rectangles in either black or white, display text, scroll windows, and perform
`many other effects [Tesler 1981]. The only additional drawing operation typically supported by these early
`systems was drawing straight lines.
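The following toy BitBlt, written for a monochrome frame buffer represented as a Python list of lists of 0/1 pixels, illustrates the combining rules described above; real implementations copy whole machine words at a time rather than individual pixels, and the function names here are invented for the sketch:

```python
def op_copy(src, dst):
    return src          # source simply replaces the destination

def op_xor(src, dst):
    return src ^ dst    # source is XORed with the destination

def bitblt(screen, src, dst, size, op=op_copy):
    """Copy a size=(w, h) rectangle of pixels from src=(x, y) to dst=(x, y),
    combining source and destination pixels with the Boolean operator op."""
    sx, sy = src
    dx, dy = dst
    w, h = size
    # Read the source rectangle first so overlapping copies behave correctly.
    pixels = [[screen[sy + j][sx + i] for i in range(w)] for j in range(h)]
    for j in range(h):
        for i in range(w):
            screen[dy + j][dx + i] = op(pixels[j][i], screen[dy + j][dx + i])

screen = [[0] * 8 for _ in range(8)]
screen[0][0] = screen[0][1] = 1                              # a 2-pixel pattern
bitblt(screen, src=(0, 0), dst=(4, 4), size=(2, 1))          # plain copy
bitblt(screen, src=(0, 0), dst=(4, 4), size=(2, 1), op=op_xor)  # XOR erases it
```

The XOR rule is what made cheap rubber-band feedback possible on these early systems: drawing the same pattern twice with XOR restores the original pixels.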
`Later windowing systems, such as the Macintosh and X, added a full set of drawing operations, such as
`filled and unfilled polygons, text, lines, arcs, etc. These cannot be implemented using the BitBlt operator.
`With the growing popularity of color screens and nonrectangular primitives (such as rounded rectangles),
`the use of BitBlt has significantly decreased. Now, it is primarily used for scrolling and copying off-screen
`pictures onto the screen (e.g., to implement double-buffering).
`A few windowing systems allowed the full PostScript imaging model [Adobe Systems Inc. 1985] to
`be used to create images on the screen. PostScript provides device-independent coordinate systems and
`arbitrary rotations and scaling for all objects, including text. Another advantage of using PostScript for the
`screen is that the same language can be used to print the windows on paper (because many printers accept
`PostScript). Sun created a version used in the NeWS windowing system, and then Adobe (the creator
`of PostScript) came out with an official version called Display PostScript, which was used in the NeXT
`windowing system. A similar imaging model is provided by Java 2D [Sun Microsystems 2002], which
`works on top of (and hides) the underlying windowing system’s output model.
All of the standard output models only contain drawing operations for two-dimensional objects. Extensions to support 3-D objects include PEX, OpenGL, and Direct3D. PEX [Gaskins 1992] is an extension to the X windowing system that incorporates much of the PHIGS graphics standard. OpenGL [Silicon Graphics Inc. 1993] is based on the GL programming interface that has been used for many years on Silicon Graphics machines. OpenGL provides some machine independence for 3-D because it is available for various X and Windows platforms. Microsoft supplies its own 3-D graphics model, called Direct3D, as part of Windows.
`As shown in Figure 48.3, the earlier windowing systems assumed that a graphics package would be
`implemented using the windowing system. See Figure 48.3a. For example, the CORE graphics package was
`implemented on top of the SunView windowing system. Next, systems such as the Macintosh, X, NeWS,
`NeXT, and Microsoft Windows implemented a sophisticated graphics system as part of the windowing
system. See Figure 48.3b and Figure 48.3c. Now, with Java 2D and Java 3D, as well as Web-based graphics
systems such as VRML for 3-D programming on the Web [Web3D Consortium 1997], we are seeing a
return to a model similar to the one shown in Figure 48.3a, with the graphics on top of the windowing
system. See Figure 48.3d.
`
`48.2.1.2.4 Input Model
`The early graphics standards, such as CORE and PHIGS, provided an input model that does not support
`the modern, direct-manipulation style of interfaces. In those standards, the programmer calls a routine to
`request the value of a virtual device, such as a locator (pointing device position), string (edited text string),
`choice (selection from a menu), or pick (selection of a graphical object). The program would then pause,
`waiting for the user to take action. This is clearly at odds with the direct-manipulation mode-free style, in
`which the user can decide whether to make a menu choice, select an object, or type something.
`With the advent of modern windowing systems, a new model was provided: a stream of event records
`is sent to the window that is currently accepting input. The user can select which window is getting events
`using various commands, described subsequently. Each event record typically contains the type and value
`of the event (e.g., which key was pressed), the window to which the event was directed, a timestamp, and the
`x and y coordinates of the mouse. The windowing system queues keyboard events, mouse button events,
`and mouse movement events together (along with other special events), and programs must dequeue the
`events and process them. It is somewhat surprising that, although there has been substantial progress in
`the output model for windowing systems (from BitBlt to complex 2-D primitives to 3-D), input is still
`
`
`
`
FIGURE 48.3 Various organizations that have been used by windowing systems. Boxes with extra borders represent
systems that can be replaced by users. Early systems (a) tightly coupled the window manager and the window system,
and assumed that sophisticated graphics and toolkits would be built on top. The next step in designs (b) was to
incorporate into the windowing system the graphics and toolkits, so that the window manager itself could have a more
sophisticated look and feel, and so applications would be more consistent. Other systems (c) allow different window
managers and different toolkits, while still embedding sophisticated graphics packages. Newer systems (d) hark back
to the original design (a) and implement the graphics and toolkit on top of the window system. [Figure: four layer diagrams — (a) Sapphire, SunWindows; (b) Macintosh, MS Windows; (c) NeWS, X; (d) Java, VRML.]
`
`handled in essentially the same way today as it was in the original windowing systems, even though there
are some well-known, unsolved problems with this model:
`
• There is no provision for special stop-output (Ctrl+S) or abort (Ctrl+C, command-dot) events, so these will be queued with the other input events.
• The same event mechanism is used to pass special messages from the windowing system to the application. When a window gets larger or becomes uncovered, the application must usually be notified so it can adjust or redraw the picture in the window. Most window systems communicate this by queuing special events into the event stream, which the program must then handle.
• The application must always be willing to accept events in order to process aborts and redrawing requests. If not, then long operations cannot be aborted, and the screen may have blank areas while they are being processed.
• The model is device-dependent, because the event record has fixed fields for the expected incoming events. If a 3-D pointing device or one with more than the standard number of buttons is used instead of a mouse, then the standard event mechanism cannot handle it.
• Because the events are handled asynchronously, there are many race conditions that can cause programs to get out of synchronization with the window system. For example, in the X windowing system, if you press inside a window and release outside, under certain conditions the program will think that the mouse button is still depressed. Another example is that refresh requests from the windowing system specify a rectangle for the window that needs to be redrawn, but if the program is changing the contents of the window, the wrong area may be redrawn by the time the event is processed. This problem can occur when the window is scrolled.
`
`Although these problems have been known for a long time, there has been little research on new input
`models (an exception is the Garnet Interactors model [Myers 1990b]).
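The event-record model above can be sketched in a few lines of Python; the field names and event kinds are illustrative rather than those of any particular window system, but the shape is the same everywhere: the window system queues typed records, and the application's main loop dequeues and dispatches them.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Event:
    kind: str        # e.g., "key", "button-down", "motion", "expose"
    value: object    # which key or button, or the rectangle to redraw
    window: str      # the window the event was directed to
    timestamp: int
    x: int = 0       # mouse position when the event occurred
    y: int = 0


# Events of all kinds are queued together, exactly as described above.
queue = deque([
    Event("button-down", 1, "canvas", 100, x=40, y=12),
    Event("key", "a", "canvas", 105),
    Event("expose", (0, 0, 200, 100), "canvas", 110),
])

handled = []
while queue:                      # the application's main event loop
    ev = queue.popleft()
    if ev.kind == "expose":
        handled.append(("redraw", ev.value))   # redraw the damaged rectangle
    elif ev.kind == "key":
        handled.append(("typed", ev.value))
    elif ev.kind == "button-down":
        handled.append(("clicked", (ev.x, ev.y)))
```

Note how the "expose" record arrives in the same stream as user input: this is precisely why an application that stops reading events also stops redrawing.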
`
`48.2.1.2.5 Communication
`In the X windowing system and NeWS, all communication between applications and the window system
`uses interprocess communication through a network protocol. This means that the application program
`can be on a different computer from its windows. In all other windowing systems, operations are imple-
`mented by directly calling the window manager procedures or through special traps into the operating
`system. The primary advantage of the X mechanism is that it makes it easier for a person to utilize multiple
`machines with all their windows appearing on a single machine. Another advantage is that it is easier to
`provide interfaces for different programming languages: for example, the C interface (called xlib) and the
`Lisp interface (called CLX) send the appropriate messages through the network protocol. The primary
`disadvantage is efficiency, because each window request will typically be encoded, passed to the transport
`layer, and then decoded, even when the computation and windows are on the same machine.
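The encode/transport/decode overhead can be made concrete with a small sketch. The wire format below is invented for illustration (it is not the real X protocol), but it shows the pattern: every request, even a single line-draw, is serialized into bytes on the client side and parsed again on the server side.

```python
import struct

DRAW_LINE = 1  # hypothetical opcode for this sketch


def encode_draw_line(window_id, x1, y1, x2, y2):
    # One opcode byte, a 32-bit window id, and four 16-bit coordinates,
    # big-endian with no padding.
    return struct.pack(">BIhhhh", DRAW_LINE, window_id, x1, y1, x2, y2)


def decode_request(data):
    # What the window server would do on receipt.
    opcode, window_id, x1, y1, x2, y2 = struct.unpack(">BIhhhh", data)
    return {"opcode": opcode, "window": window_id,
            "from": (x1, y1), "to": (x2, y2)}


msg = encode_draw_line(0x2A, 0, 0, 100, 50)
request = decode_request(msg)
```

Even on a single machine, every request pays for this round trip through the serializer and the transport layer, which is the efficiency cost mentioned above.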
`
`48.2.1.2.6 User Interface Layer
`The user interface of the windowing system allows the user to control the windows. In X, the user can
`easily switch user interfaces, by killing one window manager and starting another. Some of the original
window managers under X included uwm (with no title lines and borders), twm, mwm (the Motif window manager), and olwm (the OpenLook window manager). Newer choices include complete desktop
`environments that combine a window manager with a file browser and other GUI utilities (to better
`match the capabilities found in Windows and the Macintosh). Two popular desktop environments are
`KDE (K Desktop Environment — http://www.kde.org) with its window manager KWin, and Gnome
`(http://www.gnome.org), which provides a variety of window manager choices. X provides a standard
`protocol through which programs and the base layer communicate to the window manager, so that all
`programs continue to run without change when the window manager is switched. It is possible, for ex-
`ample, to run applications that use Motif widgets inside the windows controlled by the KWin window
`manager.
`A discussion of the options for the user interfaces of window managers was previously published [Myers
`1988b]. Also, the video All the Widgets [Myers 1990a] has a 30-minute segment showing many different
`forms of window manager user interfaces.
`Some parts of the user interface of a windowing system, which is sometimes called its look and feel, can
`apparently be copyrighted and patented. Which parts is a highly complex issue, and the status changes
`with decisions in various court cases [Samuelson 1993].
`
`
`
`
`FIGURE 48.4 A screen from the original Macintosh showing three windows covering each other and some icons
`along the right margin.
`
`48.2.1.2.7 Presentation
`The presentation of the windows defines how the screen looks. One very important aspect of the presen-
`tation of windows is whether or not they can overlap. Overlapping windows, sometimes called covered
`windows, allow one window to be partially or totally on top of another window, as shown in Figure 48.4.
`This is also sometimes called the desktop metaphor, because windows can cover each other as pieces of
`paper can cover each other on a desk. There are usually other aspects to the desktop metaphor, how-
`ever, such as presenting file operations in a way that mimics office operations, as originated in the Star
`office workstation [Smith 1982]. The alternative is tiled windows, which means that windows are not
`allowed to cover each other. Obviously, a window manager that supports covered windows can also
`allow them to be side by side, but not vice versa. Therefore, a window manager is classified as “cov-
`ered” if it allows windows to overlap. The tiled style was popular for a while and was used by Cedar
`[Swinehart 1986] and by early versions of Star [Smith 1982], Andrew [Palay 1988], and even Microsoft
`Windows. A study even suggested that using tiled windows was more efficient for users [Bly 1986]. How-
`ever, today tiled windows are rarely seen on conventional window systems, because users generally prefer
`overlapping.
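One operational consequence of covered windows is hit-testing: the window manager must decide which window owns a given screen point. A minimal sketch, assuming windows are plain rectangles kept in a front-to-back stacking order (the representation is invented for illustration):

```python
def window_at(point, stacking_order):
    """Return the name of the topmost window containing point.

    stacking_order: list of (name, (x, y, w, h)) tuples, front-most first.
    """
    px, py = point
    for name, (x, y, w, h) in stacking_order:
        if x <= px < x + w and y <= py < y + h:
            return name
    return None  # the point is on the desktop background


stack = [
    ("note", (50, 50, 100, 80)),    # on top
    ("editor", (0, 0, 300, 200)),   # underneath, partially covered
]
```

With tiled windows no such search is needed, since at most one window can occupy any point; supporting overlap is what forces the window manager to maintain and consult a stacking order.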
Modern browsers for the World Wide Web, such as Netscape and Microsoft’s Internet Explorer, provide
`a windowing environment inside the computer’s main windowing system. Newer versions of browsers
`support frames containing multiple scrollable panes, which are a form of tiled window. In addition, if
`an application written in Java is downloaded (see Section 48.2.1.3.4), it can create multiple, overlapping
`windows like conventional GUI applications.
`Another important aspect of the presentation of windows is the use of icons. These are small pictures
`that represent windows (or sometimes files). They are used because there would otherwise be too many
`windows to fit conveniently on the screen and to manage. Sapphire was the first window manager to group
`the icons into a window [Myers 1984], a format which was picked up by the Motif window manager.
`Now, the taskbar provides the icons and names of running and available processes in Windows and other
`modern window managers. Other aspects of the presentation include whether or not the window has a
`
`
`
`
`title line, what the background (where there are no windows) looks like, and whether the title and borders
`have control areas for performing window operations.
`
`48.2.1.2.8 Commands
`Because computers typically have multiple windows and only one mouse and keyboard, there must be a
`way for the user to control which window is getting keyboard input. This window is called the input (or
keyboard) focus. Another term is the listener, because it is listening to the user’s typing. Some systems call
the focus the active window or current window, but these are poor terms because, in a multiprocessing
`system, many windows can be actively outputting information at the same time. Window managers
`provide various ways to specify and show which window is the listener. The most important options are
`the following:
`
`Click-to-type — This means that the user must click the mouse button in a window before typing to
`it. This is used by the Macintosh and Microsoft Windows.
`Move-to-type — This means that the mouse only has to move over a window to allow typing to it.
`This is usually faster for the user, but it may cause input to go to the wrong window if the user
`accidentally knocks the mouse.
`
`Some X window managers (including the Motif window manager, mwm) allow the user to choose the
`desired method. However, the choice can have significant impact on the user interface of applications. For
`example, because the Macintosh requires click-to-type, it can provide a single menubar at the top, and
`the commands can always operate on the focused window. With move-to-type, the user might have to
`pass through various windows (thus giving them the focus) on the way to the top of the screen. Therefore,
`Motif applications must have a menubar in each window so the commands will know which window to
`operate on.
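The two policies differ only in which events are allowed to move the listener, which a short sketch makes explicit (the function and policy names here are illustrative, not any window manager's API):

```python
def update_focus(policy, event_kind, window_under_mouse, current_focus):
    """Return the new keyboard focus (listener) after an input event."""
    if policy == "click-to-type":
        # Only a button press in a window changes the listener.
        if event_kind == "button-down" and window_under_mouse is not None:
            return window_under_mouse
        return current_focus          # typing keeps going to the old focus
    elif policy == "move-to-type":
        # The listener simply follows the mouse.
        return window_under_mouse
    raise ValueError("unknown policy: " + policy)
```

Under click-to-type, mouse motion over other windows leaves the focus untouched, which is why a single shared menubar at the top of the screen remains usable; under move-to-type, every window crossed on the way to a menubar would briefly become the listener.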
`All covered window systems allow the user to bring a window to the top (not covered by other windows),
`and some allow sending a window to the bottom (covered by all other windows). Other commands allow
`windows to be changed in size, moved, shrunk to an icon, made full-size, and destroyed.
`
`48.2.1.3 Toolkits
`A toolkit is a library of widgets that can be called by application programs. As mentioned previously, a
`widget (also called a control) is a way of using a physical input device to input a certain type of value.
`Typically, widgets in toolkits include menus, buttons, scroll bars, text type-in fields, etc. Figure 48.5 shows
`some examples of widgets. Creating an interface using a toolkit can only be done by programmers, because
`toolkits only have a procedural interface.
`Using a toolkit has the advantage that the final UI will look and act similarly to other UIs created
`using the same toolkit, and each application does not have to rewrite the standard functions, such as
`menus. A problem with toolkits is that the styles of interaction are limited to those provided. For example,
`it is difficult to create a single slider that contains two indicators, which might be useful to input the
`upper and lower bounds of a range. In addition, the toolkits themselves are often expensive to create:
`“The primitives never seem complex in principle, but the programs that implement them are surprisingly
`intricate” [Cardelli 1985, p. 199]. Another problem with toolkits is that they are often difficult to use: they
`may contain hundreds of procedures, and it is often not clear how to use the procedures to create a desired
`interface.
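The "procedural interface" of a toolkit amounts to creating widget objects and attaching application callbacks. The toy button below mirrors that shape (compare Tk's button with its command option) without depending on a display; the class and method names are invented for the sketch:

```python
class Button:
    """A toy widget: the application supplies a callback, the toolkit
    decides when to invoke it."""

    def __init__(self, label, command):
        self.label = label
        self.command = command   # called when the user activates the button

    def click(self):
        # A real toolkit would call this from its event loop on mouse-up
        # inside the button's bounds.
        self.command()


pressed = []
ok = Button("OK", command=lambda: pressed.append("OK"))
ok.click()   # simulate the user clicking the button
```

This inversion of control, where the toolkit owns the event loop and calls back into the application, is exactly why toolkits can guarantee a consistent look and feel across applications.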
`As with the graphics package, the toolkit can be implemented either using or being used by the windowing
`system (see Figure 48.3). Early systems provided only minimal widgets (e.g., just a menu) and expected
`applications to provide others, as shown in Figure 48.3a. In the Macintosh and in Microsoft Windows,
`the toolkit is at a low level, and the window manager user interface is built using it. The advantage
`of this is that the window manager can then use the same sophisticated toolkit routines for its user
`interface. See Figure 48.3b. When the X system was being developed, the developers could not agree on
`a single toolkit, so they left the toolkit to be on top of the windowing system. In X, programmers can
`use a variety of toolkits (for example, the Motif, InterViews [Linton 1989], Amulet [Myers 1997], tcl/tk
`
`
`
`
`FIGURE 48.5 Some of the widgets with a Motif look and feel provided by the Garnet toolkit.
`
FIGURE 48.6 (a) At least three different widget sets that have different looks and feels were implemented on top of
the Xt intrinsics. (b) The Motif look and feel has been implemented on many different intrinsics. [Figure: (a) the Athena, Motif, and OpenLook widget sets each sit on the Xtk intrinsics; (b) the Motif widget set sits on the Xtk, InterViews, and Amulet intrinsics.]
`
`[Ousterhout 1991], and Gnome GTK+ [GNOME 2002] toolkits can be used on top of X), but the window
`manager must usually implement its user interface without using the toolkit, as in Figure 48.3c. The Java
Swing toolkit is implemented on top of the Java 2D graphics package, which in turn is on top of the
windowing system. See Figure 48.3d.
`Because the designers of X could not agree on a single look and feel, they created an intrinsics layer
`on which to build different widget sets, which they called Xt [McCormack 1988]. This layer provides the
`common services, such as techniques for object-oriented programming and layout control. The widget
`set layer is the collection of widgets implemented using the intrinsics. Multiple