`Theoretical and Empirical Studies of Program Testing
`
`WILLIAM E. HOWDEN
`
Abstract—Two approaches to the study of program testing are described. One approach is theoretical and the other empirical. In the theoretical approach situations are characterized in which it is possible to use testing to formally prove the correctness of programs or the correctness of properties of programs. In the empirical approach statistics are collected which record the frequency with which different testing strategies reveal the errors in a collection of programs. A summary of the results of two research projects which investigated these approaches is presented. The differences between the two approaches are discussed and their relative advantages and disadvantages are compared.

Index Terms—Algebraic, graph theory, program equivalence, proofs of correctness, reliability, statistics, testing, traces.

I. INTRODUCTION

Reliability of Program Testing

A COMMON problem in program testing is the lack of a well-defined, reliable method for selecting tests. This paper discusses a theoretical and an empirical approach to the problem.

In order to use testing to validate a program it is necessary to assume the existence of a test oracle which can be used to check the correctness of test output. The most common kind of test oracle is one which can be used to check the correctness of output values for a given set of input values. Other kinds of test oracles can be used to check the correctness of value traces of selected program variables. Formally defined oracles may consist of tables of values, algorithms for hand computation, or formulas in the predicate calculus. Informally defined oracles are often simply the assumed ability of the programmer to recognize correct output.

Theoretical and Empirical Approaches

The theoretical approach involves the characterization of situations in which testing can be used to prove the correctness of programs or the correctness of selected computational substructures. In the approach described below, a program P is proved correct by proving that it is equivalent to a hypothetical correct program P*. Computational substructures of P are proved correct by proving that they are equivalent to corresponding substructures in P*. A test oracle for a program P can be thought of as a source of information about P*.

The empirical approach involves the compilation of statistics on the relative frequency with which selected testing strategies are reliable in discovering the errors in a program. The use of the approach requires the availability of a collection of incorrect programs whose bugs are known in advance. The approach is more convincing if it is applied to programs with naturally occurring errors rather than seeded errors.

Manuscript received October 25, 1977; revised February 16, 1978. The research on the theoretical approach was funded by the National Science Foundation. The empirical research was funded by the National Bureau of Standards under the supervision of Dr. D. Fife.
The author is with the Department of Mathematics, University of Victoria, Victoria, B.C., Canada.

II. THEORETICAL APPROACH

Mathematical Basis
`
`If a computer program has a small finite domain, and a test
`oracle is available which can be used to check the correctness
`
`of output values, then the use of testing to prove correctness
`is straightforward. When the domain is large it is necessary to
`rely on mathematical theory. The following examples illus-
`trate the use of two kinds of mathematical theories.
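For a small finite domain, proof by testing amounts to exhaustive comparison against the oracle. The sketch below (our own illustration; the function names and the parity example are not from the paper) makes the idea concrete, assuming the oracle is itself executable:

```python
def exhaustively_verified(program, oracle, domain):
    # Proof by exhaustion: the program is correct if it agrees
    # with the test oracle on every point of its (finite) domain.
    return all(program(v) == oracle(v) for v in domain)

# Hypothetical example: two ways of computing the parity of a byte,
# checked over the complete 256-value domain.
fast_parity = lambda b: bin(b).count("1") % 2
oracle_parity = lambda b: sum((b >> k) & 1 for k in range(8)) % 2

print(exhaustively_verified(fast_parity, oracle_parity, range(256)))  # True
```

When the domain is large this enumeration is no longer feasible, which is where the theorems below come in.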
`
Graph-Theoretic Methods
`
Symbolic traces are generated by traversing paths through the source code of a program. The simplest kind of symbolic trace consists of the source statements which occur along a path through the program. It is often desirable to omit certain kinds of statements from a symbolic trace or to print information derived from a statement rather than the statement itself. Fig. 2 contains a symbolic trace of a path through the program fragment in Fig. 1. go to's, begin's, and end's are omitted from the trace. Each time a branch from a conditional statement is followed, the branch predicate is included in the trace. The trace element "until ilfound" is used to trace the loop exit branch from the repeat until loop.
The program fragment in Fig. 1 is derived from the polynomial interpolation routine INTERP in [1]. Input to INTERP includes two vectors x and y which contain a set of npts pairs of values (xi, yi). The pairs are part of a function f for which f(xi) = yi. The routine uses nterms of the pairs to define a polynomial which is used to compute the value of yout at the point xin. The program fragment finds the set of nterms pairs which "straddles" xin as evenly as possible. The nterms pairs are selected by computing values for i1 and i2. The set of pairs that is used is defined to be

{(x(i1), y(i1)), (x(i1+1), y(i1+1)), ..., (x(i2), y(i2))}.

nterms is equal to i2 - i1 + 1.
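The i1/i2 selection just described can be re-expressed in Python. The following sketch is our own reconstruction of the Fig. 1 logic (the exact-match branch that sets yout is omitted, and 1-based indexing is emulated by leaving x[0] unused):

```python
def select_pairs(x, npts, nterms, xin):
    # Reconstruction of the Fig. 1 selection logic; x is 1-indexed
    # (x[0] is unused) and the exact-match case xin = x(i) is omitted.
    i, i1 = 1, None
    while i1 is None:
        if xin < x[i]:
            i1 = max(i - nterms // 2, 1)   # straddle xin as evenly as possible
        elif i == npts:
            i1 = npts - nterms + 1         # xin lies beyond the last point
        else:
            i += 1
    i2 = i1 + nterms - 1
    if i2 > npts:                          # clamp the window into 1..npts
        i2 = npts
        i1 = max(i2 - nterms + 1, 1)
    return i1, i2

# A window of nterms = 2 points straddling xin = 2.5 in x = 1..5:
print(select_pairs([None, 1, 2, 3, 4, 5], 5, 2, 2.5))  # (2, 3)
```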
`Symbolic traces like that in Fig. 2 are often useful in analyz-
`
`0098-5589/78/0700-0293$00.75 © 1978 IEEE
`
`
`IEEE TRANSACTIONS ON SOFTWARE ENGINEERING, VOL. SE-4, NO. 4, JULY 1978
`
i ← 1;
ilfound ← false; youtfound ← false;
repeat until ilfound;
    if xin = x(i)
        then begin yout ← y(i);
                   ilfound ← true;
                   youtfound ← true;
             end;
        else if xin < x(i)
                 then begin i1 ← i - nterms/2;
                            if i1 ≤ 0 then i1 ← 1;
                            ilfound ← true;
                      end;
                 else if i = npts
                          then begin i1 ← npts - nterms + 1;
                                     ilfound ← true;
                               end;
                          else i ← i + 1;
    end;
if not youtfound
    then begin i2 ← i1 + nterms - 1;
               if npts < i2
                   then begin i2 ← npts;
                              i1 ← i2 - nterms + 1;
                              if i1 ≤ 0 then begin i1 ← 1;
                                                   nterms ← i2 - i1 + 1;
                                             end;
                        end;
         end;
    else return;

Fig. 1. Fragment of INTERP program.

subroutine pdiv(idimx, x, idimy, y, idimw, w);
dimension x(idimx), y(idimy), w(idimw);
idimw ← idimx - idimy + 1;
idimx ← idimy - 1;
repeat for i ← idimw by -1 to 1;
    i1 ← i + idimx;
    w(i) ← x(i1)/y(idimy);
    repeat for k ← 1 to idimy;
        j ← k - 1 + i;
        x(j) ← x(j) - w(i)*y(k);
        end;
    end;

Fig. 3. PDIV program.
`
i ← 1;
ilfound ← false; youtfound ← false;
repeat until ilfound;
xin ≠ x(i);
xin < x(i);
i1 ← i - nterms/2;
i1 > 0;
ilfound ← true;
until ilfound;
not youtfound;
i2 ← i1 + nterms - 1;
i2 ≤ npts;

Fig. 2. Elementary trace of INTERP.
`
`ing the different sequences of operations that can be carried
`out by a program. The following theorem proves that it is pos-
`sible to deduce the correctness of the complete set of symbolic
`traces of a program from the correctness of a well-defined
`finite subset. A program is correct if all of its symbolic traces
`are correct.
`
`Definition: Let P be a program and let T be the set of all
`symbolic traces of P. Let E be the set of all traces in T which
`are generated by paths through P that traverse loops zero or
`one times. E is the set of elementary traces of P.
`A path traverses a loop zero times if it follows the loop exit
`branch from the loop without traversing the code inside the
`loop. Repeat-until loops are always traversed at least once be-
`cause the loop exit branch occurs at the end of the loop.
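The elementary paths that generate these traces can be enumerated mechanically from a control flow graph. The sketch below is our own illustration (the graph encoding, names, and the single-loop example are hypothetical): each back edge, i.e., each loop, is traversed at most once.

```python
def elementary_paths(succ, entry, exit_node, back_edges, limit=1):
    # Enumerate entry-to-exit paths that traverse each loop (back edge)
    # at most `limit` times -- the paths generating the elementary traces.
    paths = []

    def walk(node, path, counts):
        if node == exit_node:
            paths.append(path)
            return
        for nxt in succ[node]:
            edge = (node, nxt)
            taken = counts.get(edge, 0)
            if edge in back_edges and taken >= limit:
                continue                     # loop already traversed `limit` times
            new_counts = dict(counts)
            if edge in back_edges:
                new_counts[edge] = taken + 1
            walk(nxt, path + [nxt], new_counts)

    walk(entry, [entry], {})
    return paths

# A single loop: test -> body -> test, with an exit branch from test.
succ = {"entry": ["test"], "test": ["body", "exit"], "body": ["test"], "exit": []}
paths = elementary_paths(succ, "entry", "exit", {("body", "test")})
print(len(paths))  # 2: the loop traversed zero times and one time
```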
Theorem 1: Suppose that P and P* are two programs with sets of symbolic traces T and T* and sets of elementary traces E and E*. If E ⊆ E* then T = T*.
`
`The proof of Theorem 1 is contained in [2]. The proof is
graph-theoretic in the sense that it involves proving that the paths through two labeled graphlike structures are identical.
`The proof of the theorem depends on the type of information
`recorded in the traces and on certain properties of the pro-
`gramming language in which the traced programs are written.
`It is assumed that the information recorded in the traces is like
`
`that in Fig. 2 and that the traced programs are written in a
`
idimx = 3, idimy = 2

i = 2
w(2) ← x(3)/y(2)
k = 1
x(2) ← x(2) - w(2)*y(1)
k = 2
x(3) ← x(3) - w(2)*y(2)

i = 1
w(1) ← x(2)/y(2)
k = 1
x(1) ← x(1) - w(1)*y(1)
k = 2
x(2) ← x(2) - w(1)*y(2)

Fig. 4. Value trace of PDIV.
`
structured Algol-like language in which there are restrictions on the use of go to's.
`
`Algebraic Methods
`
A value trace is a trace of the values which are assigned to a selected set of variables during an execution of a program. Value traces usually contain both values and other information which can be used to identify the values. Fig. 4 contains a value trace of the program in Fig. 3. The trace contains a record of the values used to index the arrays in the program. In addition to the array indices, the trace also contains a record of the values of the variables used to compute the array indices. The array indexing values are identified in the trace by reproducing the statements in which they occur.
The PDIV program in Fig. 3 is from the IBM scientific subroutine package. The program can be used to carry out polynomial division. The array variables x and y, of length idimx and idimy, are used to store the coefficients of the polynomials. The coefficients of the quotient are returned in the variable w of dimension idimw. The coefficients of the remainder are returned in array x. It is assumed that idimx ≥ idimy > 0. The division process is carried out by subtracting successive multiples of the divisor from the dividend.
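Read this way, Fig. 3 can be sketched in executable form. The Python below is our own reconstruction (the names pdiv and off are ours; coefficients are assumed to be stored in ascending order of powers, with 1-based indexing emulated by padding index 0):

```python
def pdiv(x, y):
    # Sketch of PDIV as reconstructed from Fig. 3.  x and y hold the
    # dividend and divisor coefficients in ascending order of powers.
    idimx, idimy = len(x), len(y)
    x = [0.0] + [float(v) for v in x]      # emulate 1-based indexing
    y = [0.0] + [float(v) for v in y]
    idimw = idimx - idimy + 1              # number of quotient coefficients
    off = idimy - 1                        # PDIV reuses idimx for this offset
    w = [0.0] * (idimw + 1)
    for i in range(idimw, 0, -1):          # repeat for i <- idimw by -1 to 1
        i1 = i + off
        w[i] = x[i1] / y[idimy]            # peel off the leading coefficient
        for k in range(1, idimy + 1):      # subtract w(i) times the divisor
            j = k - 1 + i
            x[j] = x[j] - w[i] * y[k]
    return w[1:], x[1:idimy]               # quotient, remainder

# (x^2 + 3x + 2) / (x + 1) = x + 2, remainder 0:
print(pdiv([2, 3, 1], [1, 1]))  # ([2.0, 1.0], [0.0])
```

Running this with idimx = 3, idimy = 2 produces exactly the sequence of array references recorded in the value trace of Fig. 4.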
`Value traces like that in Fig. 4 can be used to check the va-
`lidity of simple algebraic computations. The following theo-
`rem proves that it is possible to deduce the correctness of
`simple computations from the correctness of a finite set of
`value traces.
`
Theorem 2: Suppose that f and g are linear functions of the variables x1, x2, ..., xs-1. Let xi,j, 1 ≤ i ≤ s, 1 ≤ j ≤ s - 1, be s sets of values for the variables xj, 1 ≤ j ≤ s - 1, and let T be the matrix in Fig. 5. Suppose that f = g over each test Ti, 1 ≤ i ≤ s, where
`
`
`
`
Ti = (xi,1, xi,2, ..., xi,s-1).

Then, if T is nonsingular, f and g are identical polynomials.

        | 1  x1,1  x1,2  ...  x1,s-1 |
    T = | 1  x2,1  x2,2  ...  x2,s-1 |
        | .  .     .          .      |
        | 1  xs,1  xs,2  ...  xs,s-1 |

Fig. 5. Test values matrix.

Theorem 2 is a special case of a more general theorem which is described in a more detailed discussion of the theoretical approach [2]. The more general theorem is not limited to linear functions. Additional general theorems, similar to Theorem 2, are proved in [3].
In order to use Theorem 2 to prove the correctness of a linear computation f in a program P it is necessary to assume that P has certain properties in common with the correct program P*. In particular, it is necessary to assume that there is a "corresponding" linear function f* in P* which is defined over the same domain as f.
Two assumptions are sufficient to allow the application of Theorem 2 to the PDIV example. The first is that the array indexing computations in PDIV*, like those in PDIV, can be written as linear functions of the input variables idimx and idimy and of the loop indices of the loops containing the array-indexing computations. The second assumption is that PDIV and PDIV* are identical except for possibly the array indexing computations. This second assumption makes it possible to match each array index in PDIV with a corresponding array index in PDIV*.
The trace in Fig. 4 contains examples of input-output values for each of the array-indexing computations in PDIV. A value of 3, for example, is generated for the index in the array reference x(j) in the inner loop of PDIV when the values of idimx, idimy, i, and k are 3, 2, 2, and 2. In order to apply Theorem 2 to the indexing computations in the inner loop of PDIV, it is necessary to generate five examples of input-output pairs. The required pairs can be generated by testing PDIV for (idimx, idimy) = (1,1), (3,2), and (2,1). The test values matrix T which can be constructed from those tests is nonsingular. The same set of tests also generates sufficient input-output pairs for the verification of the array indices in the outer loop.

III. EMPIRICAL APPROACH

Program Testing and Program Analysis Techniques

A program testing or analysis technique is reliable for an error in a program if its use is guaranteed to result in the discovery of the error. A study was carried out in which the reliability of the following techniques was analyzed.

Testing Techniques

Path Testing [4]: This technique requires that each executable path through a program be tested at least once. The technique is not practical since a program may have an infinite number of paths. The reliability of the technique is an upper bound on the reliability of techniques that require the testing of a special subset of the set of all paths.
Branch Testing [5]-[7]: This technique requires that each branch be tested at least once.
Structured Testing [8], [9]: In order to use this technique it is necessary to assume that the program consists of a hierarchical structure of small functional modules. Each path through a functional module which executes loops less than k times is tested at least once. Usually k = 2.
Special Values Testing [10]: Experience indicates that it is important to test programs on certain kinds of special values. The "distinct-values" rule, for example, requires that different elementary items in an input data structure be assigned different values. The "zero-values" rule requires that tests be carried out in which zero values are assigned to or computed for variables that appear in arithmetic expressions. The "input-classes" rule is useful for programs for which the input values divide into several different classes of data. A program will typically carry out different kinds of processing for each class. The rule requires that the program be tested for each class or combination of classes of input data.
Symbolic Testing [11]-[17]: The techniques just described are usually associated with the testing of programs over "actual data." It is also possible to apply the techniques using symbolic data. "Symbolic structured testing" is like structured testing, except that paths are executed over symbolic data rather than actual data.

Program Analysis Techniques

A number of program analysis techniques which do not involve the execution of a program can be used to examine a program for errors.
Interface Consistency [18]: This term is used to refer to rules which require an analysis of the interface between routines or between routines and an external data set. An example is a rule for checking the consistency of the format of structured input records with the format of the variables used to read in the records.
Anomaly Analysis [19]-[21]: Anomaly analysis involves the examination of a program for suspicious looking constructs. The use of an uninitialized variable is one example of an anomaly. Another is the use of a fixed-point (PL-1) variable in a situation where it can assume arbitrarily large values.
Specification Requirements: Some errors can be avoided if certain restrictions are imposed on program specifications. It may be required, for example, that the specifications contain a description of the type and range of each variable.

Effectiveness of Testing Strategies

The reliability of testing and program analysis methods can be measured by applying them to programs containing known errors. The methods described above were applied to a collection of six programs containing 28 errors. Two of the programs were written in Algol, and one each in Cobol, PL1, Fortran, and PL360. Descriptions of the programs and the types of errors that were analyzed can be found in [10].
The table in Fig. 6 describes the results of the analysis. Two of the references [10], [22] contain a more detailed account
`
Path                    18        Interface               N
Branch                   6        Anomaly                 A
Structured              12        Specifications          7
Special Values          17        Combined Non-Symbolic  25
Symbolic structured     17

Fig. 6. Reliability statistics.
`
word:=emptystring;
repeat
    if input = emptystring then input:=read+' ';
    letter:=first(input);
    input:=rest(input);
    word:=word+letter;
until letter = ' ';

Fig. 7. Fragment of Algol string processing program.
`
`of the results as well as results of the analysis of additional
`methods. The results in Fig. 6 indicate that path testing is re-
`liable for 18 of the 28 errors. Branch testing was reliable for
`only 6 of the 28 errors. This low figure indicates that many of
`the errors that are reliably discovered by path testing depend
`on the traversal of combinations of branches rather than single
`branches. Structured testing is reliable for 12 of the 28 errors.
`Structured testing increases the reliability of branch testing
`by forcing the testing of important combinations of branches.
`Structured testing reveals all of the errors in three of the six
`programs. Branch testing is completely reliable for only one
`program. The combined use of the structured, special values,
`interface, anomaly, and specifications techniques is reliable for
`25 of the 28 errors.
`
`Structured testing was unreliable for a significant number of
`the errors in one of the larger programs, a PL-l program called
`TEST. The most reliable method for TEST was the special
`values method. The reliability experiments with TEST indi-
`cate the importance of program testing methods which, unlike
`structured testing, are not defined in terms of the control
`structure of a program.
`On the sample programs, symbolic structured testing was
`found to be 10—20 percent more reliable than structured test-
`ing over actual data. The five errors that are reliably dis-
`covered by symbolic structured testing but not structured
`testing over actual data all involve the use of an incorrect vari-
`able.
`In all five errors it is possible for the incorrect variable
`to take on the values of the correct variable during testing on
`actual data, thus hiding the presence of the error.
`If the nor-
`mal practice of using variable names for symbolic values is
`followed, then the use of the incorrect variables is clearly re-
`vealed by symbolic testing.
`Symbolic testing revealed one error which was not revealed
`by any of the other techniques. The combined use of all of
`the listed techniques results in the guaranteed discovery
`of 26 of the 28 errors.
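How an incorrect variable can be masked by actual data yet exposed by symbolic data can be seen in a small Python sketch (entirely our own illustration, not taken from the study; the Sym class is a toy symbolic value):

```python
class Sym:
    # Minimal symbolic value: multiplication builds an expression string.
    def __init__(self, name):
        self.name = name
    def __mul__(self, other):
        return Sym(f"({self.name}*{getattr(other, 'name', other)})")

def area(w, h):
    return w * w          # seeded error: wrong variable, should be w * h

# On actual data the wrong variable can take on the value of the right
# one (here w == h), so the test passes and the error stays hidden.
print(area(3, 3))                       # 9 -- looks correct

# On symbolic data the expression itself is traced, and the use of the
# wrong variable is immediately visible.
print(area(Sym("w"), Sym("h")).name)    # (w*w) -- expected (w*h)
```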
`
`The following example illustrates a typical situation in
`which one technique is reliable and another unreliable.
`The program fragment in Fig. 7 is taken from the telegram
`program described in [23]. The program incorrectly fails to
`delete leading blanks whenever a new buffer of text is read
with the "input := read+' '" statement. This can result in the generation of a word consisting of a single blank. In order to force the error in the fragment, it is necessary to follow the true branch from the conditional statement and then exit from the loop on the first iteration of the loop. Structured testing, but not branch testing, will force the testing of this combination of circumstances.
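A compact Python analog of such a branch-combination error (our own example, simpler than the telegram program): each branch below can be covered in both directions without ever executing the failing combination.

```python
def scaled_ratio(reset, divide):
    # Seeded error: x is only made nonzero on the `reset` branch, so the
    # combination reset == False, divide == True divides by zero.
    x = 0
    if reset:
        x = 2
    if divide:
        return 10 // x
    return x

# A two-test suite that covers every branch in both directions ...
assert scaled_ratio(True, True) == 5    # both if-branches taken
assert scaled_ratio(False, False) == 0  # both if-branches not taken

# ... yet the error appears only for a particular combination of branches:
try:
    scaled_ratio(False, True)
except ZeroDivisionError:
    print("error forced by branch combination")
```

Branch testing is satisfied by the two passing tests; a strategy that forces branch combinations (such as structured testing) must also run the failing case.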
`
`IV. CONCLUSIONS
`
Theoretical Approach—Advantages and Disadvantages
`
The advantage of the theoretical approach is obvious. If a particular testing strategy has a sound theoretical basis, then the user of the strategy knows what has been proved and what has not been proved.
`There are several disadvantages to the theoretical approach
`that limit its applicability. The proofs of the theorems that
`are used in the theoretical approach depend on the availabil-
`ity of a test oracle which can recognize the correctness of
`specific kinds of test output. There are many programs for
`which it is reasonable to assume the existence of a test oracle,
`but it may not be the same kind of test oracle as that which is
`required in the program testing theorems. The theorems may
`also require the use of certain assumptions about the correct
version P* of a program P that are difficult to justify.
`In order to apply Theorem 1 to a program, it is necessary to
`assume the existence of an oracle which can recognize the
`correctness of elementary symbolic traces. This is a stronger
`assumption than it may at first appear. Stated differently,
`it is necessary to assume that there is an external mechanism
`which can determine if the elementary symbolic traces of a
`program P are also symbolic traces of a correct program P*.
`In order to apply theorems like Theorem 2 to an algebraic
`computation in a program P, it is necessary to assume that
`there is a “matching” computation in the correct program P*.
`It is also necessary to assume that the matching computation
`has the same input variables and that there is a known upper
`bound on its degree.
The theoretical approach to the study of program testing has resulted in additional insight into the program testing process. In certain isolated cases, such as the testing of simple algebraic computations or the symbolic tracing of complex control logic, theoretical results can be used to guide the selection of tests. The study has not resulted in any general, useful method for testing programs.
`
Empirical Approach—Advantages and Disadvantages
`
`The advantage of the empirical approach to the study of
`testing is that it can be applied to the study of any testing
`methodology. The theoretical approach can only be applied
`to those methods for which it is possible to prove theorems.
The major disadvantage of the empirical approach is its lack of a sound mathematical basis. The fact that a particular strategy is reliable for discovering 35 percent of the known errors in a specific collection of programs is no guarantee that it will be that reliable on some other, previously unanalyzed program. A particular combination of different testing and program analysis methods may be the most effective for discovering the errors in one program but not in another.
`
`Future Work
`
It is the author's opinion that the greatest practical benefits are to be gained from continued empirical rather than theoretical studies. Ideally, patterns will emerge from the empirical study which can be used as the basis of future theoretical work.
`
`V. RELATED WORK
`
The use of testing in proofs of correctness has been proposed and studied by several people [2], [3], [24]-[26]. Reference [4] describes a theoretical approach to the reliability of a particular testing strategy. Goodenough and Gerhart published a paper on what might be called a "theory of testing" [26]. The use of algebra in proving the equivalence of simple computations is described, but not developed, by Geller [25]. The use of symbolic evaluation in proving correctness of programs and program paths is discussed in [11], [13], [27]-[29]. The relationship between the study of formal grammars and proofs of equivalence of symbolically evaluated programs is explored in [30]. The material on symbolic traces in Section II is related to research on symbolic testing and automatic test data generation systems [10]-[17].
The theoretical results in Section II are closely related to work that has been described by the author in several technical reports [3], [24]. Reference [2] contains a more detailed description of the examples in Section II and contains the proofs of the theorems. Reference [24] contains a program equivalence theorem which involves the combined use of symbolic and value traces [6].
Few empirical studies of program testing have been reported in the literature. Reference [12] describes the results of a smaller, more limited study that used the same research methodology as the study reported in this paper. An alternative approach is described by Hetzel in his dissertation [31]. Hetzel carried out a number of statistical experiments in which different groups of programmers used three different testing and program analysis techniques to debug three programs containing known errors. Hetzel's approach is suitable for the analysis of ill-defined techniques such as "code-reading" and "specifications testing." The approach described in this paper is suitable for well-defined techniques such as structured testing and interface analysis. The reliability of these techniques can be analyzed without the need for statistical experiments.
The empirical research described in Section III is described in more detail in [22]. Complete details of the empirical studies can be found in [10].
`
`ACKNOWLEDGMENT
`
`P. Eichhorst, of the University of California at San Diego,
`participated in the theoretical research for this paper.
`
REFERENCES

[1] P. R. Bevington, Data Reduction and Error Analysis for the Physical Sciences. New York: McGraw-Hill, 1969.
[2] W. E. Howden and P. Eichhorst, "Proving properties of programs from program traces," Univ. California, San Diego, Computer Science Tech. Rep. 18, 1977.
[3] W. E. Howden, "Elementary algebraic program testing techniques," Univ. California, San Diego, Computer Science Tech. Rep. 12, 1976.
[4] —, "Reliability of the path analysis testing strategy," IEEE Trans. Software Eng., vol. SE-2, pp. 208-214, 1976.
[5] K. W. Krause, R. W. Smith, and M. A. Goodwin, "Optimal software test planning through automated network analysis," in Proc. 1973 IEEE Symp. Computer Software Reliability, IEEE Computer Society, Long Beach, CA, pp. 18-22.
[6] L. G. Stucki, "Automatic generation of self-metric software," in Proc. 1973 IEEE Symp. Computer Software Reliability, IEEE Computer Society, Long Beach, CA, pp. 94-100.
[7] R. E. Fairley, "An experimental program testing facility," IEEE Trans. Software Eng., vol. SE-1, pp. 350-357, 1975.
[8] H. D. Mills, "Top down programming in large systems," in Debugging Techniques in Large Systems, R. Rustin, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1971, pp. 41-55.
[9] C. V. Ramamoorthy, K. H. Kim, and W. T. Chen, "Optimal placement of software monitors aiding systematic testing," IEEE Trans. Software Eng., vol. SE-1, pp. 46-58, 1975.
[10] W. E. Howden, "Symbolic testing—Design techniques, costs and effectiveness," U.S. National Bureau of Standards GCR77-89, National Technical Information Service PB268517, Springfield, VA, 1977.
[11] R. S. Boyer, B. Elspas, and K. N. Levitt, "SELECT—A formal system for testing and debugging programs by symbolic execution," in Proc. 1975 Int. Conf. Reliable Software, IEEE Computer Society, Long Beach, CA, pp. 234-245.
[12] W. E. Howden, "Symbolic testing and the DISSECT symbolic evaluation system," IEEE Trans. Software Eng., vol. SE-3, pp. 266-278, 1977.
[13] J. C. King, "Symbolic execution and program testing," Commun. Ass. Comput. Mach., vol. 19, pp. 385-394, 1976.
[14] L. Clarke, "A system to generate test data and symbolically execute programs," IEEE Trans. Software Eng., vol. SE-2, pp. 215-222, 1976.
[15] E. F. Miller and R. A. Melton, "Automated generation of test case data sets," in Proc. 1975 Int. Conf. Reliable Software, IEEE Computer Society, Long Beach, CA.
[16] J. C. Huang, "An approach to program testing," Comput. Surveys, vol. 7, pp. 113-128, 1975.
[17] C. V. Ramamoorthy, S. F. Ho, and W. T. Chen, "On the automated generation of program test data," IEEE Trans. Software Eng., vol. SE-2, pp. 293-300, 1976.
[18] B. C. Hodges and J. P. Ryan, "A system for automatic software evaluation," in Proc. 2nd Int. Conf. Software Engineering, 1976.
[19] C. V. Ramamoorthy and S. F. Ho, "Testing large software with automated software evaluation systems," IEEE Trans. Software Eng., vol. SE-1, pp. 46-58, 1975.
[20] L. J. Osterweil and L. D. Fosdick, "DAVE—A validation, error detection and documentation system for Fortran programs," Software—Practice and Experience, vol. 6, pp. 473-486, 1976.
[21] L. D. Fosdick and L. J. Osterweil, "The detection of anomalous interprocedural data flow," in Proc. 2nd Int. Conf. Software Engineering, 1976.
[22] W. E. Howden, "An evaluation of the effectiveness of symbolic testing," Univ. California, San Diego, Computer Science Tech. Rep. 16, 1977.
[23] P. Henderson and R. Snowden, "An experiment in structured programming," BIT, vol. 12, pp. 38-53, 1972.
[24] W. E. Howden, "Algebraic program testing," Univ. California, San Diego, Computer Science Tech. Rep. 14, 1976.
[25] M. Geller, "Test data as an aid in proving program correctness," in Proc. 2nd Symp. Principles of Programming Languages, 1976, pp. 209-218.
[26] J. B. Goodenough and S. L. Gerhart, "Toward a theory of test data selection," IEEE Trans. Software Eng., vol. SE-1, pp. 156-173, 1975.
[27] W. E. Howden, "Methodology for the generation of program test data," IEEE Trans. Comput., vol. C-24, pp. 554-559, 1975.
[28] R. M. Burstall, "Proving correctness as hand simulation with a little induction," in Proc. Int. Federation of Information Processing Societies 74. Amsterdam, The Netherlands: North-Holland, 1974, pp. 308-312.
[29] L. P. Deutsch, "An interactive program verifier," Ph.D. dissertation, Univ. California, Berkeley, 1973.
[30] W. E. Howden, "Lindenmayer grammars and symbolic testing," Inform. Process. Lett., to be published.
[31] W. Hetzel, "An experimental analysis of program verification methods," Ph.D. dissertation, Univ. North Carolina, 1976.

William E. Howden, for a photograph and biography, see p. 73 of the January 1978 issue of this TRANSACTIONS.
`
`Dynamic Restructuring in an Experimental
`Operating System
`
`HANNES GOULLON, RAINER ISLE, AND KLAUS-PETER LOHR
`
Abstract—A well-structured system can easily be understood and modified. Moreover, it may lend itself even to dynamic modification: under special conditions, the possibility of changing system parts while the system is running can be provided at little additional cost. Our approach to the design of dynamically modifiable systems is based on the principle of data abstraction applied to types and modules. It allows for dynamic replacement or restructuring of a module's implementation if this does not affect its specification (or if it leads to some kind of compatible specification). The fundamental principles of such "replugging" are exhibited, and the implementation of a replugging facility for an experimental operating system on a PDP-11/40E is described.
`
`Index Terms—Data abstraction, domain architecture, dynamic address
`spaces, dynamic modification, modules, replugging.
`
I. INTRODUCTION
`
ONE of the fundamental facts in software engineering is that large systems, once put into operation, are usually subject to frequent modification. A system is modified for several possible reasons, e.g., error correction, efficiency improvement, changes in or amplification of functional capabilities. Thus, to produce high-quality software, such modifications must be anticipated. The notion of modifiability comprises structural properties of software that facilitate fast and secure modification.

Manuscript received November 15, 1977.
The authors are with the Technische Universität Berlin, Berlin, Germany.
`
While requirements for software modifiability have been studied to some extent (cf. Parnas [11], Bauer [1]), the problem of how to modify a long-running system "smoothly," i.e., without stopping it and replacing it by a new version, has attracted only little attention. Shaw has mentioned the problem from the viewpoint of data abstraction [15]. Fabry was motivated to attack the problem by considering data base systems; he uses the term "changing modules on the fly" [2].
Throughout this paper, program modification during run-time will be called dynamic modification (as opposed to "static modification") in order to stress the analogies to similar uses of "dynamic" versus "static," e.g., static/dynamic memory management, linking. We were led to study, in a real system, dynamic modification and the design of a dynamic modification facility by a project to construct an experimental operating system. The system is to be used in operating systems research and education at a university. It is called a DAS (dynamically alterable system), and is being implemented on a PDP-11/40E. The system is designed in such a way that dynamic modifications are possible in all
`0098-5589/78/0700-0298$00.75 © 1978 IEEE
`
`