TRADING TECH EXHIBIT 2183
IBG ET AL. v. TRADING TECH
CBM2016-00032
`
Graphics and Image Processing
J.D. Foley, Editor

The Keystroke-Level Model for User Performance Time with Interactive Systems

Stuart K. Card and Thomas P. Moran
Xerox Palo Alto Research Center

Allen Newell
Carnegie-Mellon University
`
`
`
There are several aspects of user-computer
performance that system designers should
systematically consider. This article proposes a simple
model, the Keystroke-Level Model, for predicting one
aspect of performance: the time it takes an expert user
to perform a given task on a given computer system.
The model is based on counting keystrokes and other
low-level operations, including the user's mental
preparations and the system's responses. Performance
is coded in terms of these operations and operator
times summed to give predictions. Heuristic rules are
given for predicting where mental preparations occur.
When tested against data on 10 different systems, the
model's prediction error is 21 percent for individual
tasks. An example is given to illustrate how the model
can be used to produce parametric predictions and how
sensitivity analysis can be used to redeem conclusions
in the face of uncertain assumptions. Finally, the model
is compared to several simpler versions. The potential
role for the Keystroke-Level Model in system design is
discussed.
`
Key Words and Phrases: user interface,
human-computer interaction, user model, user
performance, cognitive psychology, ergonomics, human
factors, systems design
CR Categories: 3.36, 4.6, 8.1
`
`
Permission to copy without fee all or part of this material is
granted provided that the copies are not made or distributed for direct
commercial advantage, the ACM copyright notice and the title of the
publication and its date appear, and notice is given that copying is by
permission of the Association for Computing Machinery. To copy
otherwise, or to republish, requires a fee and/or specific permission.
Authors' present addresses: S.K. Card and T.P. Moran, Xerox
Corporation, Palo Alto Research Center, 3333 Coyote Hill Road, Palo
Alto, CA 94304; A. Newell, Department of Computer Science, Carnegie-
Mellon University, Pittsburgh, PA 15213.
© 1980 ACM 0001-0782/80/0700-0396 $00.75.
`
`
1. Introduction
`
The design and evaluation of interactive computer
systems should take into account the total performance
of the combined user-computer system. Such an account
would reflect the psychological characteristics of users
and their interaction with the task and the computer.
This rarely occurs in any systematic and explicit way.
The causes of this failure may be partly in attitudes
toward the possibility of dealing successfully with psy-
chological factors, such as the belief that intuition, sub-
jective experience, and anecdote form the only possible
bases for dealing with them. Whatever may be true of
these more global issues, one major cause is the absence
of good analysis tools for assessing combined user-com-
puter performance.
There exists quite a bit of research relevant to the
area of user-computer performance, but most of it is
preliminary in nature. Pew et al. [14], in a review of 40
potentially relevant human-system performance models,
conclude "that integrative models of human performance
compatible with the requirements for representing com-
mand and control system performance do not exist at the
present time." Ramsey and Atwood [15], after reviewing
the human factors literature pertinent to computer sys-
tems, conclude that while there exists enough material to
develop a qualitative "human factors design guide,"
there is insufficient material for a "quantitative reference
handbook."
`
This paper presents one specific quantitative analysis
tool: a simple model for the time it takes a user to
perform a task with a given method on an interactive
computer system. This model appears to us to be simple
enough, accurate enough, and flexible enough to be
applied in practical design and evaluation situations.
The model addresses only a single aspect of perform-
ance. To put this aspect into perspective, note that there
are many different dimensions to the performance of a
user-computer system:
`
—Time. How long does it take a user to accomplish a
given set of tasks using the system?
—Errors. How many errors does a user make and how
serious are they?
—Learning. How long does it take a novice user to
learn how to use the system to do a given set of
tasks?
—Functionality. What range of tasks can a user do in
practice with the system?
—Recall. How easy is it for a user to recall how to use
the system on a task that he has not done for some
time?
`
The authors of this report are listed in alphabetical order. A.
Newell is a consultant to Xerox PARC. This paper is a revised version
of [3]. For a view of the larger research program of which the study
described in this paper is a part, see [5].
`
Communications of the ACM, July 1980, Volume 23, Number 7
`
`
`
—Concentration. How many things does a user have
to keep in mind while using the system?
—Fatigue. How tired do users get when they use the
system for extended periods?
—Acceptability. How do users subjectively evaluate
the system?
`
Next, note that there is no single kind of user. Users
vary along many dimensions:

—Their extent of knowledge of the different tasks.
—Their knowledge of other systems, which may have
positive or negative effects on the performance in
the system of interest.
—Their motor skills on various input devices (e.g.,
typing speed).
—Their general technical ability in using systems (e.g.,
programmers vs. nonprogrammers).
—Their experience with the system, i.e., whether they
are novice users, who know little about the system;
casual users, who know a moderate amount about
the system and use it at irregular intervals; or expert
users, who know the system intimately and use it
frequently.
`
Finally, note that there is no single kind of task. This
is especially true in interactive systems, which are ex-
pressly built around a command language to permit a
wide diversity of tasks to be accomplished. The number
of qualitatively different tasks performable by a modern
text editor, for instance, runs to the hundreds.
`
All aspects of performance, all types of users, and all
kinds of tasks are important. However, no uniform ap-
proach to modeling the entire range of factors in a simple
way appears possible at this time. Thus, of necessity, the
model to be presented is specific to one aspect of the
total user-computer system: How long it takes expert
users to perform routine tasks.
`
The model we present here is simple, yet effective.
The central idea behind the model is that the time for an
expert to do a task on an interactive system is determined
by the time it takes to do the keystrokes. Therefore, just
write down the method for the task, count the number
of keystrokes required, and multiply by the time per
keystroke to get the total time. This idea is a little too
simplistic. Operations other than keystrokes must be
added to the model. Since these other operations are at
about the same level (time grain) as keystrokes, we dub
it the "Keystroke-Level Model." (The only other similar
proposal we know of is that of Embley et al. [9], which
we discuss in Section 6.1.)
`
`The structure of this paper is as follows: Section 2
`formulates the time prediction problem more precisely.
Section 3 lays out the Keystroke-Level Model. Section 4
`provides some empirical validation for the model. Sec-
`tion 5 illustrates how the model can be applied in prac-
`tice. And Section 6 analyzes some simpler versions of the
`model.
`
`
2. The Time Prediction Problem
`
The prediction problem that we will address is as
follows:

Given: A task (possibly involving several subtasks);
the command language of a system; the motor skill
parameters of the user; the response time parameters of
the system; the method used for the task.
Predict: The time an expert user will take to execute
the task using the system, providing he uses the method
without error.
`
`Several aspects of this formulation need explication,
`especially the stipulations about execution, methods, and
`the absence of error.
`
2.1 Unit Tasks and Execution Time
`
Given a large task, such as editing a large document,
a user will break it into a series of small, cognitively
manageable, quasi-independent tasks, which we call unit
tasks [4; 5, ch. 11]. The task and the interactive system
influence the structure of these unit tasks, but unit tasks
appear to owe their existence primarily to the memory
limits on human cognition. The importance of unit tasks
for our analysis is that they permit the time to do a large
task to be decomposed into the sum of the times to do its
constituent unit tasks. Note that not all tasks have a unit-
task substructure. For example, inputting an entire man-
uscript by typing permits a continuous throughput or-
ganization.
For our purposes here, a unit task has two parts: (1)
acquisition of the task and (2) execution of the task
acquired. During acquisition the user builds a mental
representation of the task, and during execution the user
calls on the system facilities to accomplish the task. The
total time to do a unit task is the sum of the time for
these two parts:

T_task = T_acquire + T_execute.
`
The acquisition time for a unit task depends on the
characteristics of the larger task situation in which it
occurs. In a manuscript interpretation situation, in which
unit tasks are read from a marked-up page or from
written instructions, it takes about 2 to 3 seconds to
acquire each unit task. In a routine design situation, in
which unit tasks are generated in the user's mind, it takes
about 5 to 30 seconds to acquire each unit task. In a
creative composition situation, it can take even longer.
The execution of a unit task involves calling the
appropriate system commands. This rarely takes over 20
seconds (assuming the system has a reasonably efficient
command syntax). If a task requires a longer execution
time, the user will likely break it into smaller unit tasks.
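The decomposition above can be written as a small calculation. The following sketch sums T_acquire + T_execute over a task's constituent unit tasks; the situation names, the midpoint acquisition estimates, and the helper function itself are illustrative assumptions built from the ranges quoted in the text, not part of the model proper.

```python
# Sketch of the unit-task decomposition T_task = T_acquire + T_execute.
# The acquisition figures are midpoints of the ranges given in the text
# (about 2-3 sec for manuscript interpretation, 5-30 sec for routine
# design); the dictionary keys and function name are invented here.

ACQUIRE_ESTIMATES = {          # midpoint estimates, in seconds
    "manuscript": 2.5,         # unit tasks read from a marked-up page
    "routine-design": 17.5,    # unit tasks generated in the user's mind
}

def task_time(execute_times, situation="manuscript"):
    """Total time for a large task: the sum of T_acquire + T_execute
    over its constituent unit tasks."""
    t_acquire = ACQUIRE_ESTIMATES[situation]
    return sum(t_acquire + t_exec for t_exec in execute_times)

# Three unit tasks, each taking 8 sec to execute, read from a
# marked-up manuscript:
print(task_time([8.0, 8.0, 8.0]))   # 3 * (2.5 + 8.0) = 31.5 sec
```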
We have formulated the prediction problem to pre-
dict only the execution time of unit tasks, not the acqui-
sition time. This is the part of the task over which the
system designer has most direct control (i.e., by manip-
ulating the system's command language), so its predic-
tion suffices for many practical purposes. Task acquisi-
tion times are highly variable, except in special situations
(such as the manuscript interpretation situation), and we
can say little yet about predicting them.
Two important assumptions underlie our treatment
of execution time. First, execution time is the same no
matter how a task is acquired. Second, acquisition time
and execution time are independent (e.g., reducing exe-
cution time by making the command language more
efficient does not affect acquisition time). These assump-
tions are no doubt false at a fine level of detail, but the
error they produce is probably well below the threshold
of concern in practical work.
`
2.2 Methods
`
A method is a sequence of system commands for
executing a unit task that forms a well-integrated ("com-
piled") segment of a user's behavior. It is characteristic
of an expert user that he has one or more methods for
each type of unit task that he encounters and that he can
quickly (in about a second) choose the appropriate
method in any instance. This is what makes expert user
behavior routine, as opposed to novice user behavior,
which is distinctly nonroutine.
Methods can be specified at several levels. A user
actually knows a method at all its levels, from a general
system-independent functional specification, down
through the commands in the language of the computer
system, to the keystrokes and device manipulations that
actually communicate the method to the system. Models
can deal with methods defined at any of these levels [4,
11]. The Keystroke-Level Model adopts one specific
level—the keystroke level—to formalize the notion of a
method, leaving all the other levels to be treated infor-
mally.
Many methods that achieve a given task can exist. In
general, such methods bear no systematic relationship to
each other (except that of attaining the same end). Each
can take a different amount of time to execute, and the
differences can be large. Thus, in general, if the method
is unknown, reasonable predictions of execution time are
not possible. For this reason, the proper prediction prob-
lem is the one posed at the beginning of the section:
Predict the time given the method.
`
`2.3 Error-Free Execution
`
The Keystroke-Level Model assumes that the user
faithfully executes the given method. The user deviates
from a postulated method when he makes an error. Up
to a fourth of an expert's time can be spent correcting
errors, though users vary in their trade-off between speed
and errors. We are simply ignoring the tasks containing
errors and only predicting the error-free tasks, for we do
not know how to predict where and how often errors
occur. But, if the method for correcting an error is given,
the model can be used to predict how long it will take to
make the correction. Indeed, experts handle most errors
in routine ways, i.e., according to fixed, available meth-
ods.
`
`
3. The Keystroke-Level Model

We lay out the primitive operators for the Keystroke-
Level Model and give a set of heuristics for coding
methods in terms of these operators. Then we present a
few examples of method encoding.
`
3.1 Operators

The Keystroke-Level Model asserts that the execu-
tion part of a task can be described in terms of four
different physical-motor operators: K (keystroking), P
(pointing), H (homing), and D (drawing), and one mental
operator, M, by the user, plus a response operator, R, by
the system. These operators are listed in Figure 1. Exe-
cution time is simply the sum of the time for each of the
operators:

T_execute = T_K + T_P + T_H + T_D + T_M + T_R.

Most operators are assumed to take a constant time for
each occurrence, e.g., T_K = n_K * t_K, where n_K is the number
of keystrokes and t_K is the time per keystroke. (Operators
D and R are treated somewhat differently.)
The most frequently used operator is K, which rep-
resents a keystroke or a button push (on a typewriter
keyboard or any other button device). K refers to keys,
not characters (e.g., hitting the SHIFT key counts as a
separate K). The average time for K, t_K, will be taken to
be the standard typing rate, as determined by standard
one-minute typing tests. This is an approximation in two
respects. First, keying time is different for different keys
and key devices. Second, the time for immediately caught
typing errors (involving backspacing and rekeying) should
be folded into t_K. Thus, the preferred way to calculate t_K
from a typing test is to divide the total time taken in the
test by the total number of nonerror keystrokes, which
gives the effective keying time. We accept both these
approximations in the interest of simplicity.
Users can differ in their typing rates by as much as
a factor of 15. The range of typing speeds is given in
Figure 1. Given a population of users, an appropriate t_K
can be selected from this range. If a user population has
users with large t_K differences, then the population
should be partitioned and analyzed separately, since the
different classes of users will be likely to use different
methods.
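The effective keying time described above is a straightforward ratio. The sketch below computes it from a one-minute test; the test figures themselves are invented for illustration.

```python
# Effective keying time t_K from a typing test: divide the total test
# time by the number of non-error keystrokes, so that the cost of
# immediately caught errors (backspacing and rekeying) is folded into
# t_K. The sample figures below are made up for illustration.

def effective_t_k(total_test_seconds, nonerror_keystrokes):
    return total_test_seconds / nonerror_keystrokes

# A typist who produces 275 non-error keystrokes in a one-minute test:
t_k = effective_t_k(60.0, 275)
print(round(t_k, 2))   # about .22 sec per keystroke
```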
`
The operator P represents pointing to a target on a
display with a "mouse," a wheeled device that is rolled
around on a table to guide the display's cursor. Pointing
time for the mouse varies as a function of the distance to
the target, d, and the size of the target, s, according to
Fitts's Law [2]:

t_P = .8 + .1 log2(d/s + .5) sec.

The fastest time according to this equation is .8 sec, and
the longest likely time (d/s = 128) is 1.5 sec. Again, to
keep the model simple, we will use a constant time of 1.1
sec for t_P. Often, pointing with the mouse is followed by
pressing one of the buttons on the mouse. This key press
is not part of P; it is represented by a K following the P.
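The Fitts's Law fit quoted above can be checked numerically; this sketch just evaluates the formula at its two extremes.

```python
import math

# Pointing time for the mouse per the Fitts's Law fit quoted in the
# text: t_P = .8 + .1 * log2(d/s + .5) sec, where d is the distance to
# the target and s is the target size. The model itself replaces this
# with a constant t_P = 1.1 sec for simplicity.

def pointing_time(d, s):
    return 0.8 + 0.1 * math.log2(d / s + 0.5)

print(round(pointing_time(1, 2), 2))     # fastest case (d/s = .5): 0.8
print(round(pointing_time(128, 1), 2))   # longest likely case (d/s = 128): 1.5
```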
`
`
`
`
Fig. 1. The Operators of the Keystroke-Level Model.

Operator      Description and Remarks                               Time (sec)

K             Keystroke or button press.
              Pressing the SHIFT or CONTROL key counts as a
              separate K operation. Time varies with the typing
              skill of the user; the following shows the range
              of typical values:
                Best typist (135 wpm)                                 .08 [a]
                Good typist (90 wpm)                                  .12 [a]
                Average skilled typist (55 wpm)                       .20 [a]
                Average nonsecretary typist (40 wpm)                  .28 [b]
                Typing random letters                                 .50 [a]
                Typing complex codes                                  .75 [a]
                Worst typist (unfamiliar with keyboard)              1.20 [a]

P             Pointing to a target on a display with a mouse.       1.10 [c]
              The time to point varies with distance and target
              size according to Fitts's Law. The time ranges
              from .8 to 1.5 sec, with 1.1 being an average
              time. This operator does not include the button
              press that often follows (.2 sec).

H             Homing the hand(s) on the keyboard or other           .40 [d]
              device.

D(n_D, l_D)   Drawing (manually) n_D straight-line       .9n_D + .16l_D [e]
              segments having a total length of l_D cm.
              This is a very restricted operator; it assumes
              that drawing is done with the mouse on a system
              that constrains all lines to fall on a square
              .56 cm grid. Users vary in their drawing skill;
              the time given is an average value.

M             Mentally preparing for executing physical            1.35 [f]
              actions.

R(t)          Response of t sec by the system.                         t
              This takes different times for different commands
              in the system. These times must be input to the
              model. The response time counts only if it causes
              the user to wait.

[a] See [13].
[b] This is the average typing rate of the nonsecretary subjects in the
    experiment described in Section 4.1.
[c] See [2].
[d] See [1, 4].
[e] The drawing time function and the coefficients were derived from
    least squares fits on the drawing test data from the four MARKUP
    subjects. See Sections 3.1 and 4.1.
[f] The time for M was estimated from the data from the experiment
    described in Section 4.1. See Section 4.2.1.
`
The mouse is an optimal pointing device as far as time
is concerned; but the t_P is about the same for other
analog pointing devices, such as lightpens and some
joysticks [2].
When there are different physical devices for the user
to operate, he will move his hands between them as
needed. This hand movement, including the fine posi-
tioning adjustment of the hand on the device, is repre-
sented by the H ("homing") operator. From previous
studies [2, 4], we assume a constant t_H of .4 sec for
movement between any two devices.
The D operator represents manually drawing a set of
straight-line segments using the mouse. D takes two
parameters, the number of segments (n_D) and the total
length of all segments (l_D). t_D(n_D, l_D) is a linear function
of these two parameters. The coefficients of this function
are different for different users; Figure 1 gives an average
value for them. Note that this is a very specialized
operator. Not only is it restricted to the mouse, but also
it assumes that the drawing system constrains the cursor
to lie on a .56 cm grid. This allows the user to draw
straight lines fairly easily, but we would expect t_D to be
different for different grid sizes. We make no claim for
the generality of these times or for the form of the
drawing time function. However, inclusion of one in-
stance of a drawing operator serves to indicate the wide
scope of the model.
The user spends some time "mentally preparing" to
execute many of the physical operators just described;
e.g., he decides which command to call or whether to
terminate an argument string. These mental preparations
are represented by the M operator, which we estimate to
take 1.35 sec on the average (see Section 4.2.1). The use
of a single mental operator is, again, a deliberate simpli-
fication.
`
Finally, the Keystroke-Level Model represents the
system response time by the R operator. This operator
has one parameter, t, which is just the response time in
seconds. Response times are different for different sys-
tems, for different commands within a system, and for
different contexts of a given command. The Keystroke-
Level Model does not embody a theory of system re-
sponse time. The response times must be input to the
model by giving specific values for the parameter t,
which is a placeholder for these input times.
The R times are counted only when they require the
user to wait for the system. For example, a system re-
sponse counts as an R when it is followed by a K and the
system does not allow type-ahead, and the user must
wait until the response is complete. However, when an
M operation follows a response, the response time is not
counted unless it is over 1.35 sec, since the expert user
can completely overlap the M operation with the re-
sponse time. Response times can also overlap with task
acquisition. When a response is counted as an R, only
the nonoverlapping portion of the response time is given
as the parameter to R.
`
3.2 Encoding Methods
Methods are represented as sequences of Keystroke-
Level operations. We will introduce the notation with
examples. Suppose that there is a command named PUT
in some system and that the method for calling it is to
type its name followed by the RETURN key. This method
is coded by simply listing the operations in sequence:
MK[P] K[U] K[T] K[RETURN], which we abbreviate as M
4K[P U T RETURN]. In this notation we allow descriptive
notes (such as key names) in square brackets. If, on the
other hand, the method to call the PUT command is to
point to its name in a menu and press the RED mouse
button, we have: H[mouse] MP[PUT] K[RED] H[keyboard].
As another example, consider the text editing task
(called T1) of replacing a 5-letter word with another 5-
letter word, where this replacement takes place one line
below the previous modification. The method for exe-
cuting task T1 in a line-oriented editor called POET (see
Section 4) can be described as follows:

Method for Task T1-POET:
  Jump to next line             MK[LINEFEED]
  Call Substitute command       MK[S]
  Specify new 5-digit word      5K[word]
  Terminate argument            MK[RETURN]
  Specify old 5-digit word      5K[word]
  Terminate argument            MK[RETURN]
  Terminate command             K[RETURN]

Using the operator times from Figure 1 and assuming
the user is an average skilled typist (i.e., t_K = .2 sec), we
can predict the time it will take to execute this method:

T_execute = 4t_M + 15t_K = 8.4 sec.
`
This method can be compared to the method for
executing task T1 on another editor, a display-based
system called DISPED (see Section 4):

Method for Task T1-DISPED:
  Reach for mouse               H[mouse]
  Point to word                 P[word]
  Select word                   K[YELLOW]
  Home on keyboard              H[keyboard]
  Call Replace command          MK[R]
  Type new 5-digit word         5K[word]
  Terminate type-in             MK[ESC]

T_execute = 2t_M + 8t_K + 2t_H + t_P = 6.2 sec.
`
`400
`
`Page 6 of 16
`
Fig. 2. Heuristic rules for placing the M operations.

Begin with a method encoding that includes all physical operations and
response operations. Use Rule 0 to place candidate Ms, and then cycle
through Rules 1 to 4 for each M to see whether it should be deleted.

Rule 0. Insert Ms in front of all Ks that are not part of argument
        strings proper (e.g., text strings or numbers). Place Ms in
        front of all Ps that select commands (not arguments).

Rule 1. If an operator following an M is fully anticipated in the
        operator just previous to M, then delete the M (e.g.,
        PMK → PK).

Rule 2. If a string of MKs belongs to a cognitive unit (e.g., the name
        of a command), then delete all Ms but the first.

Rule 3. If a K is a redundant terminator (e.g., the terminator of a
        command immediately following the terminator of its argument),
        then delete the M in front of the K.

Rule 4. If a K terminates a constant string (e.g., a command name),
        then delete the M in front of the K; but if the K terminates a
        variable string (e.g., an argument string), then keep the M.
`
Thus, we predict that the task will take about two seconds
longer on POET than on DISPED. The accuracy of such
predictions is discussed in Section 4.
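The two predictions above can be tallied mechanically from the operator counts and the unit times of Figure 1. The sketch below is just that arithmetic written out; the function and variable names are ours, not part of the model's notation.

```python
# Tallying the two method encodings for task T1 from Section 3.2,
# using the operator times of Figure 1 with t_K = .2 sec (average
# skilled typist). Each method is reduced to its operator counts.

TIMES = {"K": 0.2, "P": 1.1, "H": 0.4, "M": 1.35}  # unit times, sec

def execute_time(counts):
    """T_execute = sum over operators of (count * unit time)."""
    return sum(n * TIMES[op] for op, n in counts.items())

# T1-POET:   MK[LINEFEED] MK[S] 5K[word] MK[RETURN] 5K[word] MK[RETURN] K[RETURN]
poet = {"K": 15, "M": 4}
# T1-DISPED: H[mouse] P[word] K[YELLOW] H[keyboard] MK[R] 5K[word] MK[ESC]
disped = {"K": 8, "P": 1, "H": 2, "M": 2}

print(round(execute_time(poet), 1))    # 8.4 sec
print(round(execute_time(disped), 1))  # 6.2 sec
```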
The methods above are simple unconditional se-
quences. More complex or more general tasks are likely
to have multiple methods and/or conditionalities within
methods for accomplishing different versions of the task.
For example, in a DISPED-like system the user often has
to "scroll" the text on the display before being able to
point to the desired target. We can represent this method
as follows:
`
.4(MP[SCROLL-ICON] K[RED] R(.5)) P[word] K[YELLOW].

Here we assume a specific situation where the average
number of scroll jumps per selection is .4 and that the
average system response time for a scroll jump is .5 sec.
From this we can predict the average selection time:

T_select = .4t_M + 1.4t_K + 1.4t_P + .4(.5) = 2.6 sec.
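The same tallying idea extends to frequency-weighted encodings like the one above: operators inside the scrolling loop are simply counted fractionally. This sketch reproduces the arithmetic of the selection-time prediction; the variable names are ours.

```python
# Frequency-weighted operator counts for the scrolling method
# .4(MP[SCROLL-ICON] K[RED] R(.5)) P[word] K[YELLOW]. Operators
# inside the scroll loop are weighted by the .4 average scroll jumps
# per selection; R contributes its .5 sec response time at that weight.

TIMES = {"K": 0.2, "P": 1.1, "M": 1.35}  # unit times, sec (Figure 1)

counts = {"M": 0.4, "P": 0.4 + 1, "K": 0.4 + 1}  # fractional counts
t_response = 0.4 * 0.5   # .4 scroll jumps * .5 sec system response

t_select = sum(n * TIMES[op] for op, n in counts.items()) + t_response
print(round(t_select, 1))   # 2.6 sec
```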
`
`For more complex contingencies, we can put the opera-
`tions on a flowchart and label
`the paths with their
`frequencies.
`When there are alternative methods for doing a
`specific task in a given system, we have found [4] that
`expert users will,
`in general, use the most efficient
`method, i.e., the method taking the least time. Thus, in
`making predictions we can use the model to compute the
`times for the alternative methods and predict that the
`fastest method will be used. (If the alternatives take
`about the same time, it does not matter which method
`we predict.) The optimality assumption holds, of course,
`only if the users are familiar with the alternatives, which
`is usually true of experts (excepting the more esoteric
`alternatives). This assumption is helped by the tendency
`of optimal methods to be the simplest.
`
3.3 Heuristics for the M Operator
M operations represent acts of mental preparation
for applying subsequent physical operations. Their oc-
currence does not follow directly from the method as
defined by the command language of the system, but
from the specific knowledge and skill of the user. The
Keystroke-Level Model provides a set of rules (Figure 2)
for placing Ms in the method encodings. These rules
embody psychological assumptions about the user and
are necessarily heuristic, especially given the simplicity
of the model. They should be viewed simply as guide-
lines.
`
The rules in Figure 2 define a procedure. The pro-
cedure begins with an encoding that contains only the
physical operations (K, P, H, and D). First, all candidate
Ms are inserted into the encoding according to Rule 0,
which is a heuristic for identifying all possible decision
points in the method. Rules 1 to 4 are then applied to
each candidate M to see if it should be deleted.
There is a single psychological principle behind all
the deletion heuristics. Methods are composed of highly
integrated submethods ("subroutines") that show up
over and over again in different methods. We will call
them method chunks or just chunks, a term common in
cognitive psychology [17]. The user cognitively organizes
his methods according to chunks, which usually reflect
syntactic constituents of the system's command language.
Hence, the user mentally prepares for the next chunk,
not just the next operation. It follows that in executing
methods the user is more likely to pause between chunks
than within chunks. The rules attempt to identify method
chunks.
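The mechanical part of the placement procedure of Figure 2 can be sketched in code, though only roughly: deciding what is "fully anticipated" or a "cognitive unit" requires judgment about the user. The sketch below implements Rule 0 and only the PMK → PK case of Rule 1 mentioned in the text; the list-of-pairs encoding and the role labels are our own invention, and Rules 2 to 4 are omitted.

```python
# A rough sketch of Rules 0 and 1 of Figure 2, operating on a method
# written as a list of (operator, role) pairs. The "role" labels and
# the encoding itself are illustrative; Rules 2-4 and the real
# judgments about chunks are left out.

def place_ms(ops):
    # Rule 0: insert M before every K that is not part of an argument
    # string, and before every P that selects a command.
    with_ms = []
    for op, role in ops:
        if (op == "K" and role != "argument") or \
           (op == "P" and role == "command"):
            with_ms.append(("M", ""))
        with_ms.append((op, role))
    # Rule 1: delete an M whose following operator is fully anticipated
    # by the previous one -- here, only the P M K -> P K case from the
    # text (the button press is anticipated during pointing).
    result = []
    for i, (op, role) in enumerate(with_ms):
        if (op == "M" and result and result[-1][0] == "P"
                and i + 1 < len(with_ms) and with_ms[i + 1][0] == "K"):
            continue
        result.append((op, role))
    return [op for op, _ in result]

# Point with the mouse, then press a button: P M K collapses to P K.
print(place_ms([("P", "argument"), ("K", "select")]))   # ['P', 'K']
```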
`
Rule 1 asserts that when an operation is fully antici-
pated in another operation, they belong in a chunk. A
common example is pointing with the mouse and then
pressing the mouse button to indicate a selection. The
button press is fully anticipated during the pointing
operation, and there is no pause between them (i.e.,
PMK becomes PK according to Rule 1). This anticipa-
tion holds even if the selection indication is done on
another device (e.g., the keyboard or a foot pedal). Rule
2 asserts that an obvious syntactic unit, such as a com-
mand name, constitutes a chunk when it must be typed
out in full.
The last two heuristics deal with syntactic termina-
tors. Rule 3 asserts that the user will bundle up redundant
terminators into a single chunk. For example, in the
POET example in Section 3.2, a RETURN is required to
terminate the second argument and then another RETURN
to terminate the command; but any user will quickly
learn to simply hit a double RETURN after the second
argument (i.e., MKMK becomes MKK according to Rule
3). Rule 4 asserts that a terminator of a constant-string
chunk will be assimilated to that chunk. The most com-
mon example of this is in systems that require a termi-
nator, such as RETURN, after each command name; the
user learns to immediately follow the command name
with RETURN.
`
It is clear that these heuristics do not capture the
notion of method chunks precisely, but are only rough
approximations. Further, their application is ambiguous
in many situations, e.g., whether something is "fully
anticipated" or is a "cognitive unit." What can we do
about this ambiguity? Better general heuristics will help
in reducing this ambiguity. However, some of the vari-
ability in what are chunks stems from a corresponding
variability in expertness. Individuals differ widely in
their behavior; their categorization into novice, casual,
and expert users provides only a crude separation and
leaves wide variation within each category. One way that
experts differ is in what chunks they have (see [6] for
related evidence). Thus, some of the difficulties in plac-
ing Ms is unavoidable because not enough is known (or
can be known in practical work) about the experts
involved. Part of the variability in expertness can be
represented by the Keystroke-Level Model as encodings
with different placements of M operations.
`
4. Empirical Validation of the Keystroke-Level Model

To determine how well the Keystroke-Level Model
actually predicts performance times, we ran an experi-
ment in which calculations from the model were com-
pared against measured times for a number of different
tasks, systems, and users.
`
4.1 Description of the Experiment

A total of 1,280 user-system-task interactions were
observed, comprised of various combinations of 28 users,
10 systems, and 14 tasks.
`
Systems. The systems were all typical application
programs available locally (at Xerox PARC) and widely
used by both technical and nontechnical users. Some of
the systems are also widely used nationally. Three of the
systems were text editors, three were graphics editors,
and five were executive subsystems. The systems are
briefly described in Figure 3.
Together, these systems display a considerable diver-
sity of user interface techniques. For example, POET, one
of the text editors, is a typical line-oriented system, which
uses first-letter mnemonics to specify commands and
search strings to locate lines. In contrast, DRAW, one of
the graphics systems, displays a menu of graphic icons
on the CRT display to represent the commands, which
the user selects by pointing with the mouse.
`
`Tasks. The 14 tasks performed by the users (see
`Figure 4) were also diverse, but typical. Users of the
`editing systems were given tasks ranging from a simple
`word substitution to the more difficult task of moving a
`sentence from the middle to the end of a paragraph.
`Users of the graphics systems were given tasks such as
`adding a box to a diagram or deleting a box (but keeping
`a line which overlapped it). Users of the executive sub-
`systems were given tasks such as transferring a file
`between computers or examining part of a file directory.
Task-system methods. In all there were 32 task-system
combinations: 4×3 = 12 for the text editors, 5×3 = 15
for the graphics systems, and one task each for the five
`
`
`
`
executive subsystems. For each task-system combination,
the most efficient "natural" method was determined (by
consulting experts) and then coded in Keystroke-Level
Model operations. For example, the methods for T1-
POET and T1-DISPED are given in Section 3.2. (A complete
listing of all the methods can be found in [3].)
Experimental design. The basic design of the experi-
ment was to have ten versions of each task on each
system done by four different users, giving 40 observed
instances per task-system. No user was observed on more
than one system to avoid transfer effects. Four tasks were
observed