Future Computing Systems, Volume 2, Number 1, 1987

Interacting with Future Computers

DAVID R. HILL

Man-Machine Systems Laboratory,
Department of Computer Science,
The University of Calgary,
Calgary, Alberta, Canada T2N 1N4
Abstract

Many problems that have to be solved in present-day human-computer interfaces arise from technology limitations, quite apart from those arising from lack of appropriate knowledge. Some of the progress we see in the most recently developed interfaces has occurred simply because bit-mapped screens, large memories, colour, compute power appropriate to local intelligence, and the like, have all become inexpensive at the same time as rising human costs have finally been appreciated, and deprecated, by those who pay the bills. The new technical possibilities, and the now obvious economic advantages of providing good interactive computer support to enhance human productivity in all areas of endeavour, have created tremendous pressure to improve the human-computer interface. This pressure, in turn, has dramatically highlighted our lack of fundamental knowledge and methodologies concerning interactive systems design, human problem solving, interaction techniques, dialogue prototyping and management, and system evaluation. The design of human-computer interfaces is still more of an art than a science. Furthermore, the knowledge and methodologies that do exist often turn out to fall short of what is needed to match computer methods or to serve as a basis for detailed algorithm design.

The paper is addressed to a mixed audience, with the purpose of reviewing the background and current state of human-computer interaction, touching on the social and ethical responsibility of the designer, and picking out some of the central ideas that seem likely to shape the development of interaction and interface design in future computer systems. Areas are suggested in which advances in fundamental knowledge and in our understanding of how to apply that knowledge seem to be needed to support interaction in future computer systems. Such systems are seen as having their roots in the visionary work of Sutherland (1963), Englebart and English (1968), Kay (1969), Winograd (1970), Hansen (1971), Papert (1973), Foley and Wallace (1974), and D. C. Smith (1975). Their emphasis on natural dialogue, ease of use for the task, creativity, problem solving, appropriate division of labour, and powerful machine help available in the user's terms will still be crucial in the future. However, the ability to form, communicate, manipulate and use models effectively
will come to dominate interaction with future computer systems as the focus of interactive systems shifts to knowledge-based performance. Human-computer interaction must be regarded as the amplification of an individual's intellectual productivity by graceful determination and satisfaction of every need that is amenable to algorithmic solution, without any disturbance of the overall problem-solving process.

© 1987 Oxford University Press and Maruzen Company Limited

SKYHAWKE Ex. 1028, page 1

1. Introduction

1.1. A prospectus
A non-technical vision of the future possibilities for human interaction with computers has been provided in a variety of media, including several recent movies. The story that really centered on this interaction and interplay was that involving HAL, the shipboard control computer for a voyage to Jupiter, following the summons of an alien intelligence (2001: a Space Odyssey, by Arthur C. Clarke). More technical views have been provided, at least in part, by developments in the field, documented in the technical literature, but on a piecemeal, scattered basis. Two recent surveys of directions in human-computer interaction concentrate on the application of Artificial Intelligence (AI) to interactive interfaces (Rissland 1984, Vickery 1984) and highlight the increasingly important role seen for AI in future human-computer interaction. The Architecture Machine Group (AMG) project, which has been underway at MIT since 1976, provides one of the more ambitious non-fictional views of future interaction. It is based on the exploitation of spatiality and other normal properties of evolved human perceptual-motor performance in a computer-simulated 'Dataland', and is intended to complement more conventional forms of interaction (Bolt 1979, 1980, 1982, 1984). However, HAL serves as an important different view of possible integrated interfaces of the future, all the more powerful because the view is set in the context of a real task, but forms the background and plausible context for action, rather than being the focus. As in the past (with submarines, space flight, and the weapons of war) art suggests and defines the future goals of our technology.

1.2. Why better interfaces?
In the last year or two, there has been an upsurge of interest in providing better ways for people to interact with information processing systems. There are at least two reasons for this. First, it has become apparent that poor interfaces make it more difficult for users of computer systems (including computer science experts) to do their job. Better interfaces improve productivity, reduce errors, and allow higher-quality results. They give a competitive edge to their suppliers and, incidentally, make the users more comfortable in their work. With falling hardware costs and rising labour costs, the emphasis has changed from utilizing machines to their maximum capacity to utilizing their human users and operators to best
effect. For once, this is a trend that also benefits these people directly. Secondly, computers are becoming very widely used, even in areas and in equipment that have previously not been associated with computers. The users of computers, in these circumstances, frequently have little or no computer training and, collectively, may exhibit the whole gamut of educational and career achievement in their various specialities. For such people, the computer should appear as a tool, interfaced in such a way that the user can think about the task goals for which the system is used, rather than the characteristics of the computer tool used to achieve these goals. Some systems must carry the computer power so deeply embedded that it is effectively hidden, just as the electric motor in a dishwasher or clock is hidden. The interface seen by the user is completely task-oriented, and the internal logic of the system (programmed, even in the case of non-computer equipment these days) translates the user's needs into the control and/or power signals required to employ the technology as a subsystem. Of course, the user may well be aware that a computer (or motor) is in there doing essential things, but does not have to be concerned with its characteristics.1

1.3. The economic imperative
Thus, so-called user-friendly interfaces have become the touchstone for the more widespread and effective use of computer power. Such interfaces have a direct economic and social impact, to the extent they succeed or fail. They allow the computer industry generally to expand markets, hence creating new jobs within the computer industry. Good interfaces also allow other companies that use the new computer power to be more productive and competitive, which may not only expand their existing market shares but also lead to new markets for information technology in previously untouched application areas. There is a warning here for those societies that feel they can remain as mere users of the new technology. Future markets will increasingly deal in the products of the new information technology industry, with employment in traditional areas declining as the new machines make the remaining employees more productive. Balance-of-payments problems will explode for those countries that face the need to import the new technology to remain competitive, through failure to develop it themselves.
1.4. The basis for progress

A few years ago the graphics area in computer science expanded dramatically as the need, the methodology, and the technology appeared or were generated. Advertising, film-making, and design have provided much of the finance and incentive for the graphics expansion. Now that costs have fallen (as research has been amortized, as mass-market software has been developed, and as mass-produced hardware tailored to the specific needs of computer graphics has started to appear), computer graphics is providing part of the base for better human-computer interface design. Other technologies are starting to mature: expert systems; low-cost, very powerful desktop computers with high-resolution colour displays; dialogue prototyping and management systems; databases and database access methods (especially limited natural-language-based access); new kinds of input-output devices that are also inexpensive (speech input-output devices, innovative direct-manipulation media, etc.); and so on. It is now commonplace to do things that were not possible even as recently as two years ago. Not only does this allow new approaches to human-computer interfacing, but it also allows sophisticated interfaces to be created quickly and at low cost. This, in turn, facilitates better and more diverse experimentation related to human-computer interaction, as part of the research needed to expand the body of knowledge concerning the methods and goals of human-computer interface practice.

1 The analogy to embedded motors was first suggested by Weizenbaum (1975).

1.5. The promise and the problem
The Apple Macintosh, developed from the Lisa (Williams 1983, Morgan et al. 1983), is an example of a current popular application of both new technology and new knowledge. The technology and experience that made this approach to computing possible has its roots in the visionary work of Sutherland (1963), who invented the first 'graphics-land', with elegant graphical interaction techniques, employing unobtrusive machine assistance, to amplify the drawing skills of the draughtsperson unconcerned with the technicalities of computers; of Englebart (1968), who originated the mouse and computer-augmented human reasoning at SRI; of Kay (1969, 1972), who developed the first higher-level personal computer, object-oriented programming with windows and multiple views, systems based on message-passing primitives, and simple personal programming systems of great power; of Papert (1973, 1980) who, following in the traditions of Piaget and Montessori, used computers to show how complex ideas could be taught easily when translated into concrete terms in an environment in which it was easy and enjoyable to experiment, catering to the growth of the child rather than mere provision of information; of Foley and Wallace (1974), who made a notable early statement of rules for natural graphical 'conversation'; and of D. C. Smith (1975), who developed direct manipulation and the 'icon' as the basis for computer-aided thought using 'visualization', inspired by the
visual simulations and animations of Smalltalk, Kay's system. But the Macintosh would not have been possible as a popular personal computer without technological advances in microchip design and fabrication, allowing cheap memory and processing power as a basis for bit-mapped graphics, speed, and powerful interactive software. Now we have the Atari 1040 ST, which offers similar facilities not for US$2500 but for US$900, and the Commodore Amiga at US$1200, both with higher resolution and excellent expansion capabilities.

In the face of this technological cornucopia, coupled with an abundance of relevant ideas, it is becoming increasingly clear that interface design is still an art, and that art is being severely taxed as the purely technological limitations disappear and as an increasingly large number of would-be users are able to afford the hardware to support their activities. The remainder of this paper leads up to a discussion of themes and ideas that will be important in interacting with future computer systems (in Section 6). In preparation for this, three important issues are addressed: (a) the ethical and practical constraints on the application of future computers, since these form the context and rationale for interaction; (b) the distinction between programmers and users, and the nature of the programming task, since programming is an important form of interaction with computers; and (c) the game element in human-computer interaction, because evidence suggests it may be possible to improve interfaces by exploiting some features of games. In Section 5, a futuristic database access system (Rabbit, Williams 1984) is described, because it begins to incorporate ideas that seem crucial in future computer system interfaces. Finally, there is the discussion. The central theme in future human-computer interaction will be the formation, representation, communication, manipulation and use of models. Other important themes comprise redundant, multi-modal interaction techniques, and the specification and management of interaction. These are addressed.

2. A context for future interactive systems

2.1. Introduction: the 'do it', or abdication model of interaction
The easiest way to get something done is to ask a competent, loyal assistant or colleague to do it for you or, if your involvement is necessary, to assist you in doing it. Given appropriate talent, this may be even more effective than doing it yourself. The metaphor has been used before in the context of a programmer's assistant (Teitelman 1972, 1977), and tends towards one extreme in the continuum of views of the user interface. This extreme looks for an active, intelligent, reasoning mediator that lies between the user and what is to be done. The other regards the interface as a simple passive 'gateway' or membrane between a user and the application (Rissland 1984) that can be tailored to particular needs, perhaps, but is simply
a personalizable tool, not even a good servant, let alone an assistant or colleague. The issues involved are: where is control located, and how much expertise can be built into the interface management? However, these questions are bound up with questions about the structure of User Interface Management Systems (UIMSs) and about task allocation in human-computer systems: why is it necessary for humans and computers to co-operate?
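The two poles of this continuum can be caricatured in a few lines of code. The sketch below is purely illustrative, and everything in it (the class names, the toy application, and the 'synonym table' standing in for built-in expertise) is invented for the example: a passive gateway forwards the user's command unchanged, whilst an active mediator interposes its own, here trivial, reasoning before the command reaches the application.

```python
# Illustrative sketch only: all names and behaviour are invented.
# A passive gateway forwards commands unchanged; an active mediator
# may reinterpret them before they reach the application.

class Gateway:
    """Passive membrane: passes the user's command straight through."""
    def __init__(self, application):
        self.application = application

    def submit(self, command):
        return self.application(command)

class Mediator:
    """Active assistant: applies its own (here trivial) expertise first."""
    def __init__(self, application, synonyms):
        self.application = application
        self.synonyms = synonyms  # crude stand-in for built-in expertise

    def submit(self, command):
        # Rewrite the command using what the mediator 'knows' the user means.
        words = [self.synonyms.get(w, w) for w in command.split()]
        return self.application(" ".join(words))

def application(command):
    return f"executing: {command}"

gateway = Gateway(application)
mediator = Mediator(application, {"del": "delete"})

print(gateway.submit("del report.txt"))   # executing: del report.txt
print(mediator.submit("del report.txt"))  # executing: delete report.txt
```

The question of where control is located shows up directly: the gateway leaves every decision to the user and the application, whilst the mediator has already begun to make decisions of its own.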
If we naively assume that the interface for future computer systems will comprise a voice-input, natural-language 'Check this task definition and do what I mean' (DWIM) command system, perhaps with graphical aids to help in task definition, we are overlooking certain fundamental facts concerning the reasons for using computers, as well as both current and absolute limitations on their use. We are also underestimating the problems involved in communicating with colleagues and assistants. True, there will undoubtedly be an increasing number of tasks for which the relevant experience and applicability criteria can be defined to allow something approaching this type of interaction. However, even these systems, which already exist in embryonic form, must place a certain emphasis on the metacommentary aspects of the dialogue involved, and respond to questions or comments related to their internal workings and dialogue construction, as well as to specific task-goal elements. Control ultimately resides with the human, and human goals must be satisfied. Issues of communication and metacommentary, as well as the effect of conflict between the goals of the communicants, are nicely summarized in the work of Thomas and Carroll (Thomas 1978, Thomas and Carroll 1981).
The problems of task expression and refinement, of knowledge acquisition and retrieval, of reasoning and planning, and of goal resolution make the DWIM type of interface a remote dream as a general-purpose form of interaction with computers. It was this kind of interface that was portrayed in the movie 2001: a Space Odyssey, and the system ultimately broke down due to the conflict of goals at several levels. Moreover, in the end, the computer was obliged to lie in answer to questions at the metacommentary level, and finally to attempt to take over absolute control, a strategy only thwarted by the creative problem-solving performance of the remaining crew member. Rasmussen (1983) refers to this kind of performance as knowledge-based performance, as opposed to rule-based performance, in which situation-action rules are remembered from previous experience, selected as appropriate, and applied. Weizenbaum (1975) has argued forcefully against the belief that, in principle, computers may be able to take over the running of society completely, and his arguments bear on the
topic raised above concerning absolute limits on what computers should do. Since his view requires human involvement in certain kinds of activity, it also requires human-computer interaction, no matter how sophisticated our computer systems become. There is, of course, the question: if HAL-like interaction is feasible, is it desirable, or even preferable to more traditional forms?

2.2. Ethical and practical constraints on abdication: the knowledge interface
Society's reaction to the modest progress in applying computers alarms Weizenbaum, who sees the potential for dehumanization, inflexibility, control, and oversimplification inherent in the unwise and over-hasty application of computers in areas we either do not understand well enough, or from which we should exclude computers for ethical reasons.
Much of the force of Weizenbaum's case derives from arguments about the level of understanding required to model situations or systems as a basis for solving problems, and from arguments about our ability (or, more likely, lack of ability) to implement such models as computer programs. Both sets of arguments centre on problems created by the complexity involved, as well as the character of the entities being modeled. These, in turn, affect the questions we ask, can ask, or should ask in order to formulate the model in the first place. From this ground, Weizenbaum argues that computers are being applied in harmful ways for a variety of derivative reasons. First, inflexible solutions to problems are created because complex programs, especially those written by a team, are themselves not understood well enough to permit changes to them, even to correct known errors. Secondly, solutions are based on incomplete models and data, due to our lack of understanding, and our lack of ability to formulate adequate questions to illuminate even those aspects we are aware of, let alone all the questions that we should ask if we had God-like insight. There is also the question as to whether all relevant matters could be covered by such a factual approach. As riders to this, Weizenbaum points out that (a) data may be ignored simply because 'it is not in the right form', and (b) oversimplified solutions will be produced based only upon those aspects of the problems that we can formalize. A third harmful effect of computers, he argues, is that they act as a conservative force in society, partly by providing the means of sustaining outdated methods of running an increasingly complex society, and partly because, once programs are written, they are so resistant to change, for practical as well as economic reasons. Finally, he argues that computers have made society more vulnerable. With continued centralization of control (such centralization itself outdated), errors and disturbances have far-flung and unpredictable consequences as they propagate through a
homogeneous system, optimized for economy rather than stability. The scheduling of airline flights is an example of such a system, in which unplanned hijacking incidents have propagated their dislocating effects on a world-wide scale, by domino action in a system with inadequate flexibility. Equally, the recent mini-crash of the Wall Street stock market (September 1986), which rippled around the world, is attributed by experts to slavish adherence to predictions and recommendations generated by computer models of stock market performance that were inflexible and incomplete. It is also increasingly obvious that, as in all human activity, economic considerations tend to act in such a way as to simplify solutions and to inhibit improvements that cannot be proved to bring directly measurable financial or political benefits. Such attitudes are much harder to attack when entombed in the amber of computer software.
Alongside this technical theme to Weizenbaum's book, there runs a strong philosophical argument against the dehumanization of life and society. The most important point is this: by insisting that logical2 solutions to problems are equivalent to rational2 solutions to problems, one is defining out of existence the possibility of conflicting human values, and hence the human values themselves. Here can be seen the basis of conflict with many researchers in Artificial Intelligence, for the whole philosophical thrust of the book is against the view that the human being is just a computer, with mechanisms and rules that can be understood and transferred to a machine. In Rasmussen's terms, computers may assume a major share of the performance burden at the rule-based level, minimizing and simplifying the interaction required in the process, but the real challenge for future computer systems will be to facilitate human-computer interaction in a knowledge-based performance mode. If Weizenbaum is right, and I believe he is, knowledge-based performance can never be completely taken over by the computer, because it is neither possible nor ethical. The computer must remain a smart tool in the search for formalizations of useful new knowledge, or of new insights into old knowledge, conditioned by the goals and needs of humans. However, real progress is possible, in terms of the acquisition and application of knowledge, if we can solve the problems associated with the formation, representation, communication, manipulation and use of models in interactive problem-solving and task execution. In this way, the ethical and practical objections can be overcome, whilst still maximizing the
2 Webster defines rational as having reason or understanding; being reasonable; whilst logical means formally true. Logic is, ultimately, tautologous, and denies conflict. By denying conflicting human values, logic, in essence, denies the reality of the values themselves.
support to the human. This is why human-computer interaction will be so important in future computer systems, and why models will feature so prominently and importantly. Shared models will form the knowledge interface between computers and people.
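Rasmussen's rule-based mode of performance is mechanical enough to admit a direct sketch: situation-action rules remembered from previous experience are selected as appropriate and applied. The rules and situations below are invented for the illustration; the point is that knowledge-based performance is precisely what escapes such a table.

```python
# Sketch of rule-based performance (after Rasmussen 1983): stored
# situation-action rules are matched against the current situation
# and applied. The specific rules here are invented for illustration.

rules = [
    ({"printer": "offline"}, "switch printer online"),
    ({"disk": "full"},       "delete temporary files"),
    ({"file": "missing"},    "ask user for file location"),
]

def rule_based_response(situation):
    """Select the first remembered rule whose conditions all match."""
    for conditions, action in rules:
        if all(situation.get(k) == v for k, v in conditions.items()):
            return action
    return None  # no stored rule: knowledge-based performance is needed

print(rule_based_response({"disk": "full"}))        # delete temporary files
print(rule_based_response({"reactor": "on fire"}))  # None
```

A computer can carry this burden easily; the unmatched situation in the last line is where, on the argument above, the human must remain in the loop.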

2.3. In conclusion
There are really two kinds of question raised by Weizenbaum's book. One kind is technically oriented: questions about the best division of labour in a system involving both humans and computers; questions about the practicality, validity and utility of partial solutions to problems we do not fully understand; and questions about the state of our knowledge concerning how to implement certain kinds of solutions adequately. There is also the question as to whether some kinds of problems are amenable to programmed solution at all. These are all valid research questions that cannot be ignored as we design increasingly complex systems. We should not get carried away by the modest success in improving knowledge access that has been achieved on the basis of rule-based 'expert systems'.

The other kind of question begs the reader to step outside the conventional framework of disinterested science and ask questions about the value and ethics of what is being done with computers in terms of replacing people and running society. The underlying, but unstated, message here seems to be that, if we are approaching God-like powers with our technology, we need God-like wisdom and restraint in the exercise of these powers. The implication is that the only viable basis for restraint and wisdom, on the scale required, is for each individual in the technological and scientific areas concerned to take some personal responsibility for the consequences of exercising his or her professional skills. This is the context within which we should contemplate the creation of future computer systems, and the context which constrains the character of our interaction with them. This is why the human-computer interface will grow more complex and demanding, rather than less, as our knowledge increases. Understanding such interfaces becomes tantamount to understanding ourselves, yet considerable understanding is required as a basis for design.

If the user must continue to be an active participant in increasingly sophisticated future computer systems, which is the logical and ethical conclusion from the foregoing discussion, then the human-computer interface is not only here to stay, but must develop appropriately. Furthermore, whatever the status of the user as a computer specialist, the user must have some task-relevant knowledge. It is the unification of the two sources of knowledge, human and computer, in the
problem solution, that is the ultimate goal of human-computer interaction. Williams (1984) points out that the knowledge brought to the task by humans very likely differs from that brought by machines. That brought by the human is high-level, generic knowledge, whilst that brought by the machine is the lower-level, physical-particulars kind of knowledge. Again in Rasmussen's terms, the human tends to have a model at an intentional level of ends, whilst the machine is able to provide models at the physical level of means. The mapping between them is many-to-one in both directions. Interaction applies the means to the ends by forming or invoking particular functional models that connect the two. For this to work, mechanisms must be available to allow the participants to explain themselves to each other and form the connections. Furthermore, any such process should result in the creation or accomplishment of something relatively perfect and formally correct (the solution to the original problem) from an error-prone, sketchy interaction. Interaction must be regarded as amplifying an individual's intellectual productivity by graceful determination and satisfaction of every need that is amenable to algorithmic solution, without disruption of the overall, usually knowledge-based, performance of the human. The key to this is the effortless sharing of the models that embody the various kinds of knowledge involved: their formation, representation, communication, manipulation and use. Where those models are partially or completely inaccessible behind the human cognitive veil, for whatever reason, then the interface must support the elicitation and communication of incomplete constructs and informal descriptions based on the results of using those models covertly.
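The many-to-one mapping between intentional ends and physical means can be made concrete with a toy functional model. All of the goals, operations and table entries below are invented for the illustration; they merely show how a functional model connects the two levels, and can be read in both directions.

```python
# Toy functional model connecting user-level ends to machine-level
# means. The mapping is many-to-one in both directions: one goal may
# need several operations, and one operation may serve several goals.
# All entries are invented for this illustration.

functional_model = {
    "archive the report": ["copy file", "compress file", "verify checksum"],
    "share the report":   ["copy file", "send over network"],
}

def means_for(end):
    """Map an intentional end to the physical means that realize it."""
    return functional_model.get(end, [])

def ends_served_by(operation):
    """Inverse view: which goals does a given operation help realize?"""
    return [end for end, ops in functional_model.items() if operation in ops]

print(means_for("archive the report"))
print(ends_served_by("copy file"))  # both goals share this operation
```

The interesting cases are, of course, the ones the table cannot hold: ends the user cannot yet articulate, which is exactly where the elicitation of incomplete constructs becomes necessary.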

3. The programmer as a user

3.1. Current support for programmers
Experience has shown that good interfaces make it easier for computer users to do their job. Even computer experts show increased productivity, reduced errors, and higher-quality work when they are provided with a better programming environment and more powerful tools that are easy to apply to their work. Furthermore, falling hardware costs and rising labour costs are shifting the emphasis from machine utilization to human productivity, in terms of increased throughput, reduced errors, shorter training periods, and lower staff turnover, whilst still maintaining or, preferably, improving the quality of work produced. With the increasingly widespread use of computers by non-experts for a variety of economic and practical reasons, this situation (as already noted) has led to a dramatic surge in the attention given to the human-computer interface in application areas. (Unfortunately, when advertising products, all too often the attention is mere lip service.) However, the corresponding rise in our knowledge of how to
design good interfaces, even for well-defined application tasks, has been far less than dramatic, again as noted.

Surprisingly little has been achieved in terms of providing good supportive interfaces for programmers (programming environments), despite the fact that, in a very real sense, one of the most important applications of computers is to programming. Not that the problem has been ignored by researchers. There have been studies and experiments concerned with various aspects of the psychology of programming, and much written about the value of structured program design and the relative merits of various kinds and levels of languages. Problems of specification, program comprehension and debugging have been considered. Curtis (1981) provides a useful selection of papers up to 1981 but, for example, in the classification system for human-computer interaction literature appearing in the special issue of Ergonomics Abstracts devoted to human-computer interaction (Megaw and Lloyd 1984), the word programming (or anything like it) appears only twice. Even then it is only in connection with languages, and with 'aspects' which turn out to be mostly the psychology of programming. There has apparently been little success in integrating some of the available knowledge into programmer interfaces (programming environments, systems- or applications-oriented) comparable to those available for end users. The sum total seems to be a collection of fourth-generation tools to assist in screen management, and Unix. Programmers are still largely left to look after themselves, which may boost their egos, but hardly boosts their productivity or the quality of their products.

3.2. The programmer's needs
A more comprehensive approach to meeting the programmer's needs seems reasonable. Thus, a future programmer's environment should allow different parts of programs to be implemented in whatever languages are appropriate, and run on arbitrary machines in a distributed system according to the best match between algorithm and machine, the latter without requiring any intervention by the programmer or (if there is one) the end user. This requires smooth, language-independent module interfaces. Debugging tools should understand a lot about program structure and behaviour, as well as about data structures and how they are used, providing a higher-level interface together with expert help for the programmer looking for faults. Structure editors (Donzeau-Gouge et al. 1975, Neal 1980) embodying the syntax and character of any programming language or document in use should be available. File systems present a particular problem and probably require progress in expert file management to help the user (programmer) manage and retrieve files. The
spectacle of a productive programmer searching an extensive hierarchical file structure for a lost file of uncertain appellation is sad to see. An integrated applications programming environment would, in addition, place the computational tools needed to support an application at the same level as the interaction tools needed to support the user, with control residing at a task-management level integrated within the operating system that allowed the programmer to concentrate on goals, functions and solution strategies rather than mechanisms and housekeeping.
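The lost file of uncertain appellation is one place where even a little expert file management would pay for itself immediately. As a minimal sketch (the sample filenames and the similarity cutoff are invented for the example), such an aid might fuzzily match a half-remembered name against every filename in the hierarchy:

```python
# Minimal sketch of help for the 'lost file' problem: fuzzy-match a
# half-remembered name against all filenames in a hierarchy. The
# sample names and the cutoff are invented for this illustration.

import difflib

def find_lost_file(half_remembered, filenames, cutoff=0.6):
    """Return candidate filenames ranked by similarity to the guess."""
    return difflib.get_close_matches(half_remembered, filenames,
                                     n=3, cutoff=cutoff)

# In a real system the list would come from walking the file
# hierarchy (e.g. with os.walk()); here it is given directly.
files = ["budget_1987.txt", "draft_paper.txt", "bugdet_notes.txt",
         "shopping_list.txt"]

print(find_lost_file("budget_1987", files))  # ['budget_1987.txt']
```

A genuinely expert file manager would go further, using file contents, dates and usage patterns, but even this trivial matching removes the worst of the manual search.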

A computer user (including a programmer) thinks and/or learns about the solution to a problem with computer a