An Intelligent Approach to Verification and Testing of the Configurator

Pei-Lei Tu†, Jen-Yao Chung‡, and Christos N. Nikolaou‡

†IBM Enterprise Systems
Poughkeepsie, NY 12602

‡IBM Thomas J. Watson Research Center, P.O. Box 704
Yorktown Heights, NY 10598
`
Abstract
`
The configurator verification requires three basic processes: generating the test data, analyzing the actual and expected outputs, and fixing the deviations. The typical verification processes of today's configurators are to manually provide the test case and expected output and then analyze the differences between the expected and actual test outputs. This manual approach not only constrains the testing and verification of the configurator but also is not feasible for a large computer configurator program such as the IBM ES/9000™. An intelligent approach to verification and testing of large computer configurators is introduced where test data are generated automatically and test results are analyzed intelligently with little human intervention. This approach utilizes a generic configurator model where the a priori computer configurator knowledge is captured and applied to generate a large number of potential test data and to analyze the test results automatically. With this intelligent approach, human experts' knowledge and skills can be better utilized to initialize the configurator model and to review the potential faults of the configurator program as revealed by the testing results.
`
1. Introduction
`
Verification and testing of a computer configurator is different from typical hardware and software testing. Hardware testing is complicated due to numerous testing conditions for a large number of electronic components, but there usually exist some "gold units" that can be used as a basis to determine if the test results are accurate. The testing of a software program is complicated since the implementation and the logic flow of each software program is very flexible. But for software programs such as arithmetic comparisons and functional subroutines, the expected outputs are predictable, so the accuracy of the testing outputs can be checked automatically. Traditionally, verifying the accuracy of a computer configurator program requires a product expert providing the test case and its expected output, comparing the expected and actual outputs, and analyzing the discrepancy. This configurator verification process is as complicated as the typical hardware and software testing processes since a large computer usually has numerous components, multiple options for each component, and enormous combinations of components. The expected output of a configurator should be a valid customer configuration which is projected based on the individual customer's requirements. As opposed to hardware and software testing where the expected outputs might be already known, the expected test outputs of the configurator are not easily predictable but are projected for each test input. Therefore, the verification and testing of the configurator are usually done manually.
`
Currently, as far as we know, most computer configurators operate with a manual verification process with limited testing capabilities. Some configurator verification might add structured design to the manual testing process [3], but the entire manual testing process is still very time-consuming. In general, this manual configurator verification not only constrains the testing scope of the configurator but also requires a lot of manpower and development cycle time for the
`
™ Trademark or registered trademark of the International Business Machines Corporation.
`
0-8186-2620-8/92 $3.00 © 1992 IEEE
`
implementation of a good quality configurator. For larger computers such as the IBM ES/9000™ family, where the product has more components and more combinations of configurations than smaller computers, such a manual verification and testing approach is usually not feasible. The issue of how to provide automation or intelligence to test data generation and analysis for the large computer configurator has become increasingly important for the business competitive edge, since the production cost of the configurator can be reduced and the quality of the configurator can be improved. In this paper, we will present an intelligent approach to verify a large computer configurator for the IBM ES/9000™, with a large number of test cases generated automatically and the test results analyzed intelligently with little human intervention.
`
The rest of the paper is organized as follows. Section 2 states the various testing strategies of configurator verification. The intelligent approach of automating configurator verification is outlined in Section 3. Sections 4, 5 and 6 discuss the details of this approach for test data generation, output analysis and structured expert review. Conclusions are made in Section 7.
`
2. Verification and Testing of Configurators
`
There are several verification and testing strategies for a large software program using varying amounts of test data: exhaustive testing, selective testing, formal proofs, and partial verification with automated evaluation systems [2]. We will analyze each testing strategy and suggest the best one for verification and testing of large configurator programs.
`
It is not always possible to have exhaustive testing of any software program because of the size of all permuted testing cases. In the configurator domain, all possible combinations of customer configuration increase exponentially as the number of features increases from a minicomputer to a mainframe computer. For an ES/9000™ machine, the number of all configuration permutations could be millions. Even if all configuration permutations are used as test data, it will be extremely human-intensive and usually impossible to verify and analyze the test outputs.
`
Selective testing focuses on a selected part of the configurator program with specified testing inputs. This is the most common and easiest testing strategy, in which the product expert determines specifically what should be tested. Since the selected test data are limited, the test results usually can be analyzed manually. A verification tool will only need to provide the user with panels for specifications of the test case and the expected output, and the tool can generate discrepancies between the expected and actual configuration outputs for users' review. With its limited testing capability, selective testing should not be the sole testing strategy but should be used in combination with other testing approaches for the purpose of flexibly selecting a small portion of the configurator for regression test or repetitive test. In practice, selective testing has been commonly used as the only testing strategy for configurator verification. Besides being insufficient for testing a large scale software program such as the ES/9000™ configurator, this testing strategy can be easily biased by the user.
`
Using formal proofs to verify and test a large software program is usually costly, and the required advanced mathematical skills are not widely applicable. The software technologies for the implementation of large configurator programs might not be the same for every computer product, and they are usually not algorithmic. Based on these observations, this testing strategy, formal proof, should not be considered for the verification and testing of configurators.
`
Partial verification with automated verification tools provides automated tools for software verification to an acceptable degree of reliability and performance. Studies [2] indicated that existing automated verification tools have reduced software program errors drastically with several basic functions: (1) predicting the ripple effects of program modifications; (2) generating and evaluating test cases for a thorough and systematic program testing; (3) generating a database for future maintenance. This testing strategy has outperformed the above three testing strategies in many software applications. We will call this testing strategy comprehensive testing, since a comprehensive, instead of exhaustive or selected, amount of test data is used. Even though this testing strategy has proved to be a better testing strategy than the others for a large software program, it has never been used in any configurator verification. Two major requirements are that (1) the verification tool needs to generate sufficient test data to disclose every potential fault of the configurator program, but without the exhaustive test data; (2) the verification tool should be capable of predicting and analyzing the test results.
`
`
The best verification tool for large configurator programs would need to satisfy these two requirements and the above basic functions of comprehensive testing. The objective of this paper is to present an intelligent approach to the design and implementation of such a verification tool. We will also demonstrate that the size of the exhaustive test data can be drastically reduced while its testing scope stays the same. In addition, the tool should also accept user-selected test data, and predict and analyze the test results for the user. This is to provide the selective testing of ripple effects of "what-if" conditions (e.g., adding or deleting a feature), and for divide-and-conquer to focus on a particular component of the large computer.
`
3. Design of Intelligent Configurator Verification
`
The configurator verification requires three basic processes: generating the test data, analyzing the actual and expected outputs, and fixing the deviations. The typical verification processes of today's configurators are to manually provide the test case and expected output, and then analyze the differences between the expected and actual test outputs. These manual processes, as shown in Figure 1, require a lot of human intervention and are error-prone. The test data are usually limited and are not sufficient for a large configurator verification.
`
To provide sufficient and comprehensive testing of large configurator programs, the automated verification tool needs to either automate or structure the above processes to facilitate large scale testing. The intelligent
`
[Figure 1 flowchart: manually specify the test case and input the expected output; run the test case through the configurator; generate the differences between the expected and configured outputs; if differences exist, an expert analyzes and fixes them.]

Figure 1. Typical Configurator Verification Processes

configurator verification we present consists of automatic test data generation, intelligent output analysis and structured expert review, as shown in Figure 2. Two types of test data selection are provided in this intelligent verification: comprehensive testing and selective testing. If the comprehensive testing is selected (e.g., testing several models of the product), all required test data will be created automatically based on a comprehensive test data generation algorithm. The expected test outputs will be generated by utilizing a configurator model where the a priori domain knowledge is captured. The discrepancies between expected and actual test results will be analyzed intelligently, and further analyses will be provided for domain experts' review. Test data will be classified as GOOD or BAD, and will be stored in the test database for future repetitive or regression tests. Each of the three processes is essential to the configurator verification and cannot exist without the others. Even if there is an automatic test case generator to generate as many test cases as possible, the entire verification process will be very human-intensive, and it is impractical to analyze enormous outputs manually. Without a structured expert review process, there won't be effective and efficient verifications for later regression testing or "what-if" analyses where slight variations of previous test data are used. Each of these three processes for automated configurator verification is elaborated in the next three sections, where the characteristics of configurator knowledge as defined below are used.
`
3.1. Characteristics of Configurator Knowledge
`
[Figure 2 diagram: automatic test case generation, intelligent output analysis and structured expert review, all driven by a configurator model; test cases are identified as GOOD or BAD, and the process stops when all test data are either GOOD or BAD.]

Figure 2. The Architecture of the Intelligent Configurator Verification

In general, selection of a computer product requires specification of the model (e.g., ES/9000™ model ###) and a collection of marketable features (e.g., central storage, expanded storage, channels, etc.), with each feature characterized by its options, i.e., the choices of quantity for a feature. For example, the central storage might have several options: 256 megabytes, 512 megabytes or 1024 megabytes, but only one of these options can be chosen for the central storage of a configuration. A feature's options could vary depending on the model selected. Usually high-end models will offer more central storage than low-end models.
`
A computer configuration is characterized by: (1) the selected model and features; (2) valid combinations of features and the model. The former are the individual selections of models and features. The latter come from the affiliations between the model and features. Conceptually, these configuration characteristics can be represented in a model/feature association diagram, as shown in Figure 3, and this diagram should be generic enough to represent most of the IBM configurators. The diagram exhibits a configuration with the selected model and features, represented as circles, and also the associations existing among features as affected by the model, represented as rectangles.
`
Suppose m, f, o and r denote a model, a feature, an option and a relation, respectively. A configuration config consists of the selected model and a component list cl, i.e., config = (m, cl), where cl = {(f(i), o(m,i)(j)) | i = 1, 2, ..., t} and each of the t components is described by the product feature and its selected option. Relations of features can be categorized exclusively into four distinct classes: features are pre-requisite or co-requisite, features are mutually exclusive, and features have a total quantity constraint, i.e., R, a set of r, = (R(pre-req), R(co-exist), R(mutu-exclu), R(total-qty)). For simplicity, let's assume only binary relations exist between features; these four types of relations can then be represented as follows:

R(m, pre-req) = {((f_i, o_j), (f_k, o_l)) | if (f_i, o_j) ∈ cl then (f_k, o_l) ∈ cl}

where feature f_k is a pre-requisite of feature f_i;

R(m, co-exist) = {((f_i, o_j), (f_k, o_l)), ((f_k, o_l), (f_i, o_j)) | (f_i, o_j) ∈ cl if and only if (f_k, o_l) ∈ cl}

where features f_i and f_k should co-exist;
`
[Figure 3 diagram: a model node connected through relation nodes to feature nodes, with each feature carrying its options.]

Figure 3. Model/feature Association Diagram

R(m, mutu-exclu) = {((f_i, o_j), (f_k, o_l)) | if (f_i, o_j) ∈ cl then (f_k, o_l) ∉ cl, or if (f_k, o_l) ∈ cl then (f_i, o_j) ∉ cl}

where features f_i and f_k are mutually exclusive;

R(m, total-qty) = {(((f_1, o_1), ..., (f_t, o_t)), value) | if {(f_1, o_1), ..., (f_t, o_t)} ⊆ cl then op(o_1, ..., o_t) ≤ value}

where features f_1, ..., f_t have a total-quantity constraint and op is the total-quantity operator.
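These four relation classes can be made concrete with a small sketch in Python (our own illustrative encoding, not part of the paper's tooling; all function and variable names are assumptions). Here a component list cl is represented as a dictionary mapping each selected feature to its chosen option, and each predicate returns True when its relation is satisfied:

```python
# Illustrative encoding of the four relation classes over a component list cl,
# where cl maps each selected feature to its chosen option.

def check_pre_req(cl, fi, oj, fk, ol):
    """If (fi, oj) is in cl, then its pre-requisite (fk, ol) must also be in cl."""
    if cl.get(fi) == oj:
        return cl.get(fk) == ol
    return True

def check_co_exist(cl, fi, oj, fk, ol):
    """(fi, oj) is in cl if and only if (fk, ol) is in cl."""
    return (cl.get(fi) == oj) == (cl.get(fk) == ol)

def check_mutu_exclu(cl, fi, oj, fk, ol):
    """(fi, oj) and (fk, ol) must never appear together in cl."""
    return not (cl.get(fi) == oj and cl.get(fk) == ol)

def check_total_qty(cl, features, op, value):
    """If all listed features are selected, op over their options must not exceed value."""
    if all(f in cl for f in features):
        return op(cl[f] for f in features) <= value
    return True
```

For example, a component list containing both (Vector-A, 1) and (ICRF-A, 1) fails check_mutu_exclu, which mirrors the violation analyzed in Section 5; for the total-quantity relation, the built-in sum can play the role of the operator op.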
The above relations of features in a model could exhibit two extreme cases:

1. The simplest case, with R(m) = {((f_i, o_j))}, where f_i could be any feature. Under this circumstance, each feature is totally independent of the other features. The configurator will only need to examine the selection of each individual feature or model without concern for connections or constraints existing between features.

2. The most complicated case, with R(m) = {((f_1, o_1), ..., (f_t, o_t))}. This is the case where every feature is related to every other feature. Whenever a feature is selected, the configurator has to check if this selection satisfies this feature's relationship to every previously selected feature.

In practice, most computer configurators fall in between these two extreme cases. Most of the feature relations in the IBM ES/9000™ are binary, with a few exceptions of ternary relations.
`
The above representation can be used to represent most of the configurator knowledge. The completeness of this representation depends on the completeness of its elements: models, features, features' options and relations.
`
4. Automatic Test Data Generation

It's not difficult to design an automatic test case generator to simulate all possible combinations of customer orders. In fact, the issue of an automatic test data generator is how to generate a reasonable size of test cases which is sufficient for an acceptable and reliable verification of the configurator. What's also important is how the test results can be analyzed automatically, so the entire test data generation and verification process can be automated with little human cooperation. We will address the automatic test data generation in this section and discuss how the test results can be analyzed intelligently in the next section.

Exhaustive testing is a brute-force approach where the search space for test data is extremely large but the test data generation is straightforward. A comprehensive testing with a reduced search space for test data is not straightforward and usually requires heuristics. The essence of the heuristic algorithm we use comes from the previously defined configurator knowledge. Test data generated from this heuristic algorithm will be able to disclose any potential fault of the configurator program. Closely examining the configurator knowledge, we know that if features have no relations at all, test data of their exhaustive combinations are irrelevant, since these test data are always valid but cannot expose any error of the configurator. These test data can be eliminated and the size of the test data reduced drastically, but the results of configurator verification are not affected. Details of this heuristic algorithm are described below.
`
4.1. Comprehensive Test Data Generation Algorithm

1. Identify all the models in the set M.

2. Collect all the product features and conceptual features to be included in the configuration.

3. Identify all the relations R between features.

4. Decompose features into subfeatures, if necessary, to reflect every relation (dependent, mutually exclusive or total quantity constraint). This collection of features or subfeatures will be the feature set F = {f_i | i = 1, ..., n} used to generate test data.

5. Identify the option set for each feature, i.e., O(f) = {O(f_i) | f_i ∈ F}. The set O(f_i) collects all the feasible options for the feature f_i independent of the model. The purpose is to uncover the common error of choosing an unallowable option for any feature f_i in any model.

6. To generate the component list of each test case, do the following for each model m in M:

   a. Starting with one relation in R of features f_i and f_k, use the features' option sets O(f_i) and O(f_k) to generate the exhaustive combinations of the component list ((f_i, o_j), ..., (f_k, o_l)), and randomly choose a fixed option for each of the rest of the features in F (e.g., the first option).

   b. Similarly, for each of the rest of the relations, generate the exhaustive combinations of options for its features while the options for the rest of the features stay the same.

   c. Eliminate any duplicate test data generated from Steps a and b.

   d. For every feature not in any relation, generate its exhaustive options while the options for the rest of the features stay the same.
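Steps 6a through 6d can be sketched in Python as follows (an illustrative rendering under assumed data structures, not the authors' implementation; Steps 1 through 5 are taken as already-prepared inputs, and the "randomly chosen" fixed option of Step 6a is pinned to the first option for reproducibility):

```python
from itertools import product

def generate_test_data(model, features, options, relations):
    """Heuristic comprehensive test data generation (Steps 6a-6d).

    features:  ordered list of feature names (the set F)
    options:   dict mapping each feature to its list of options (the sets O(f_i))
    relations: list of feature pairs that participate in some relation
    """
    # Fixed baseline: the first option of every feature.
    baseline = {f: options[f][0] for f in features}
    cases = []

    # Steps 6a-6b: exhaustive option combinations for each related pair,
    # with all other features held at their baseline option.
    for fi, fk in relations:
        for oi, ok in product(options[fi], options[fk]):
            case = dict(baseline)
            case[fi], case[fk] = oi, ok
            cases.append((model, tuple(sorted(case.items()))))

    # Step 6d: exhaustive options for every feature not in any relation.
    related = {f for pair in relations for f in pair}
    for f in features:
        if f not in related:
            for o in options[f]:
                case = dict(baseline)
                case[f] = o
                cases.append((model, tuple(sorted(case.items()))))

    # Step 6c: eliminate duplicates.
    return sorted(set(cases))
```

Applied to the five-feature example below (features f1 through f5, with relations between f1 and f2 and between f2 and f5), this routine produces only the reduced comprehensive set rather than all exhaustive option combinations.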
`
Using the above heuristic algorithm, we can detect every potential fault in the specifications or implementations of the configurator program. We can also eliminate a huge number of test data from the exhaustive testing, since they are irrelevant to the configurator verification. With this heuristic algorithm, we can automatically generate test cases for comprehensive testing with a drastically reduced size of test data but acceptable verification. The completeness of test data following this heuristic approach depends on the completeness of models, features, options and relations.
`
Details of this heuristic algorithm are illustrated in the following example. For simplicity, assume there is only one model but five features f1, f2, f3, f4 and f5, with options (o11, o12), (o21, o22), (o31), (o41, o42, o43) and (o51, o52), respectively. Features f1 and f2, and features f2 and f5, are related, either pre-requisite, co-requisite or mutually exclusive to each other. The automatically generated comprehensive test data following the above heuristic algorithm are shown in Table 1. The heuristic algorithm won't generate test data for other exhaustive combinations of features, such as f1 and f3, f1 and f4, f1 and f5, f2 and f3, and f2 and f4, since they are irrelevant to the configurator and cannot detect any error of the configurator. A total of 22 test data can be eliminated, but the testing scope of the configurator is as comprehensive and sufficient as the exhaustive testing.
`
4.2. Automatic Test Data Generation for an IBM ES/9000™ Configurator

For example, each model of an IBM ES/9000™ processor offers the following basic features [1]:

• Central Storage (CS)
• Expanded Storage (ES)
• Channels, Parallel or Serial
• Vectors
`
Table 1. Examples of automatically generated comprehensive test data

[Table 1 lists the generated test cases as rows of options (e.g., o11 o22 o31 o41 o52) with comments: first the exhaustive combinations of f1 and f2, then the combinations of f2 and f5, with the options of the remaining features held fixed.]
`
• Integrated Cryptographic™ (ICRF™)
• High Performance Parallel Interface™ (HiPPi™)
• Machine Colors
• Console Color
• Sysplex Timer Attachment™
• Transaction Processing Facility Enabler™ (TPF™)
• Power Unit
• ESCON™ Analyzer
• Processor Controller
• Special Handling Tools
`
Besides the above product features, one conceptual feature, the customer location, will be considered, since the customer's location can affect the customer's selection of the power unit and the special handling tool. The entire collection of product features or conceptual features of an ES/9000™ processor is shown in Figure 4.
`
Based on all the identified relationships between the above features, we need to decompose several features, since they can be installed on both sides of a multiple-processor machine and are distinguished in the machine configuration. These include the features CS, ES, Vectors, ICRF, HiPPi, Sys-Timer-Attach, and TPF. They are decomposed into subfeatures CS-A and CS-B, ES-A and ES-B, Vector-A and Vector-B, ICRF-A and ICRF-B, HiPPi-A and HiPPi-B, Sys-Timer-Attach-A and Sys-Timer-Attach-B, and TPF-A and TPF-B, respectively. One other feature which needs further decomposition is Channel, since there are two types of channels, parallel
`
[Figure 4 diagram: an ES/9000 central processor model ## with its features (central storage, expanded storage, channels, vectors, ICRF, HiPPi, machine colors, console color, Sys. Timer Attach., TPF Enabler, ESCON Analyzer, processor controller, power unit, special handling tools) and the conceptual feature customer locations.]

Figure 4. ES/9000™ Processors

and ESCON, and they are distinguished in the configuration. The feature decomposition and feature relationships are shown in Figure 5.
`
Once all the models, features, options and relations are identified, the comprehensive test data generation algorithm can be used to generate test data automatically. This example also demonstrates a comparative result between an exhaustive testing and the heuristic comprehensive testing. The total number of test cases from the exhaustive testing for this example is 2,381,847,700,000, while the heuristic algorithm generates a total of 3,126 test cases. Even though the size of the test data has been drastically reduced, the configurator verification is not affected.
`
5. Intelligent Output Analysis
`
We have already shown a comprehensive testing algorithm for automatic generation of test data. If the test results of these data require human analyses, it is not only error-prone but also infeasible for mainframe computer configurators such as the IBM ES/9000™, where even the reduced size of comprehensive test data is numerous. In order to provide total automation of the verification and testing of the configurator, we need to automate both processes: test data generation and output analysis.
`
The automated output analysis requires the verification tool to be able to analyze the test results in the same way as a human expert does. The tool will need to predict the expected configuration output and compare this output with the actual configurator output. If both

[Figure 5 diagram: the decomposed features of an ES/9000 central processor model ## (Central Storage A/B, Expanded Storage A/B, parallel and ESCON channels, Sys. Timer Attach. A/B, TPF Enabler A/B, and the remaining features such as machine colors, ESCON Analyzer, processor controller, power unit, special handling tools and customer locations) together with their relationships.]

Figure 5. Associated Features in ES/9000™ Processors

outputs are the same, i.e., the test case is a valid configuration or it is invalid with the same diagnosed violations, the configurator program has been tested successfully on this particular test case. However, if there are discrepancies between the expected and actual outputs, further investigations are needed to determine the causes. We have utilized the a priori configurator knowledge to generate test data automatically. Here we will demonstrate the intelligent verification tool for automatic output analysis by using the previously defined configurator knowledge.
`
Given a test case of configuration config = (m, cl), where cl = {(f(i), o(m,i)(j)) | i = 1, 2, ..., t}, the validity of this test case is determined by the accurate selections of the individual model and features (e.g., the feature is offered in the model) and the correct combinations of features (e.g., two features are not mutually exclusive). First, the accuracy of the selection of the individual model and features is equivalent to the validity of each component (f(i), o(m,i)(j)) in the configuration. Based on our previously proposed philosophy of test data generation, test data should exhibit any incorrect selection of a feature's option, such as a low-end model ordered with too much central storage or a high-end model ordered with too few channels. This is the reason we use the set O(f), where O(f) = {O(f_i) | f_i ∈ F}, in the test data generation algorithm. But to verify the accuracy of the components in the test data, we need to use the set O(m,f), where O(m,f) = {O(m,f_i) | f_i ∈ F}. The set O(f_i) includes all the possible options for feature f_i disregarding which model is selected, while the set O(m,f_i) includes all the possible options for feature f_i when the model m is selected. Second, the test data is examined to see if it has the right combinations of features. This can be done by checking if the test data satisfies every relation in the set R, including every co-existence, mutual-exclusiveness and total-quantity relation of features. In summary, the test output analysis requires the configurator model of the elements:

1. the set M, the collection of all models;

2. the set F, the collection of all features;

3. the sets O(m,f), the collections of options for every feature f_i in every model m;

4. the set R, including every relation among all the features.
This configurator model can be initialized by the product expert and loaded into the verification tool to predict the expected output of every test data.
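This prediction step can be sketched in Python (our own illustrative code; the element names M, O(m,f) and R follow the summary above, while the function and parameter names are assumptions). The routine returns the list of diagnosed violations for one test case; an empty list means the expected output is a valid configuration:

```python
def predict_expected_output(config, M, O_mf, relations):
    """Predict the expected configurator verdict for one test case.

    config:    (model, cl) with cl a dict mapping each feature to its selected option
    M:         set of valid models
    O_mf:      dict mapping (model, feature) to the set of options allowed in that model
    relations: list of (kind, payload) entries from the relation set R
    """
    model, cl = config
    violations = []

    if model not in M:
        violations.append(f"unknown model {model}")

    # First check: each component's option must be allowed for this model
    # (the Min/Max check of Table 2).
    for f, o in cl.items():
        if o not in O_mf.get((model, f), set()):
            violations.append(f"option {o} not allowed for {f} on {model}")

    # Second check: every relation in R must be satisfied.
    for kind, payload in relations:
        if kind == "pre-req":
            (fi, oj), (fk, ol) = payload
            if cl.get(fi) == oj and cl.get(fk) != ol:
                violations.append(f"{fi} requires {fk}")
        elif kind == "mutu-exclu":
            (fi, oj), (fk, ol) = payload
            if cl.get(fi) == oj and cl.get(fk) == ol:
                violations.append(f"{fi} mutually exclusive with {fk}")
        # co-exist and total-qty relations would be handled analogously.

    return violations
```

Comparing this predicted verdict with the actual configurator output then yields either agreement (a successful test) or a discrepancy to be investigated.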
`
Following the example in Section 4.2, assume one test case is (model ###, {(CS-A, 128MB), (CS-B, 256MB), (ES-A, 0MB), (ES-B, 0MB), (Parallel-Channels, 32), (ESCON-Channels, 32), (Vector-A, 1), (Vector-B, 1), (ICRF-A, 1), (ICRF-B, 1), etc.}). First, each component is examined to see if the feature has the right option, since the chosen option could be offered by the feature but not allowed in the selected model. Second, each component is checked against every relation in the relation set R. A summary of this process is shown in Table 2. The first analysis, Min/Max satisfied?, checks the validity of the selected option for each feature. The following analyses, Pre-req correct?, Co-exist correct?, Mutually exclusive correct?, and Total quantity satisfied?, examine each component against each relation in R. This test case is found to violate one mutually exclusive relation between the features Vector and ICRF. If the configurator indicates that this is a valid configuration, it means the configurator program is inaccurate and does not implement the mutually exclusive relation between the features Vector and ICRF. But if the test result from the configurator is consistent with the findings from the verification tool, it means the configurator program is accurate on this particular test case.
`
This intelligent output analysis can be repeated for a large number of test data in a batch process. Since the test data are generated by navigating through the configurator model and the model/feature association diagram, test data can be presented in a structured sequence and grouped based on their variance, such as different options for two features in a relation. If there is a missing implementation of the mutually exclusive relation between the features Vector and ICRF in the configurator program, this error will be a common error to a group of test data grouped based on this relation. Once this error is identified, it only needs to be corrected once. The user can then choose to re-run only this group of test data to ensure that a common error existing in a group of test data is fixed.
`
6. Structured Expert Review
`
If the automated output analysis shows the same result as the actual result from the configurator program, a satisfactory test case will be classified and collected into the GOOD test database, and an unsatisfactory test
Table 2. An Example of Automated Test Analysis of ES/9000™ Model ###

Component                   Min/Max     Pre-req   Co-exist  Mutually exclusive correct?               Total quantity
                            satisfied?  correct?  correct?                                            satisfied?
(CS-A, 128MB)               Yes         Yes       Yes       Yes                                       Yes
(Vector-A, 1)               Yes         Yes       Yes       No: Vector-A should be mutu. exclus.      Yes
                                                            with ICRF-A
(Vector-B, 1)               Yes         Yes       Yes       No: Vector-B should be mutu. exclus.      Yes
                                                            with ICRF-B
(ICRF-A, 1)                 Yes         Yes       Yes       No: ICRF-A should be mutu. exclus.        Yes
                                                            with Vector-A
(ICRF-B, 1)                 Yes         Yes       Yes       No: ICRF-B should be mutu. exclus.        Yes
                                                            with Vector-B
(HiPPi-A, Yes)              Yes         Yes       Yes       Yes                                       Yes
(HiPPi-B, Yes)              Yes         Yes       Yes       Yes                                       Yes
(Machine-Color, BLUE)       Yes         Yes       Yes       Yes                                       Yes
(Console-Color, BLUE)       Yes         Yes       Yes       Yes                                       Yes
(Sys-Timer-Attach-A, Yes)   Yes         Yes       Yes       Yes                                       Yes
(Sys-Timer-Attach-B, Yes)   Yes         Yes       Yes       Yes                                       Yes
(TPF-Enabler-A, Yes)        Yes         Yes       Yes       Yes                                       Yes
(TPF-Enabler-B, Yes)        Yes         Yes       Yes       Yes                                       Yes
etc.
`
case case will be classified and collected into the BAD test database. If there is any discrepancy between the expected and actual results, a human product expert will need to determine the fix to the configurator program and re-run the test data after the configurator program is fixed. In every software verification and testing, it is very important to keep both GOOD and BAD test cases in each test iteration for the following purposes: (1) it serves as the base for later testing, and (2) it provides continuity or tracking for later testing, since a bad test case might become a good one and a good test case could become a bad one if the product specifications change.
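The classification step itself can be sketched as follows (illustrative only; configurator stands for the program under test and predict for the model-based prediction of Section 5, and both names are assumptions):

```python
def review_test_results(test_cases, configurator, predict):
    """Split test cases into GOOD (the actual verdict matches the predicted one)
    and BAD (a discrepancy that a product expert must review)."""
    good, bad = [], []
    for case in test_cases:
        actual = configurator(case)    # verdict of the program under test
        expected = predict(case)       # verdict predicted from the configurator model
        (good if actual == expected else bad).append(case)
    return good, bad
```

Keeping both resulting databases allows later regression runs to track cases whose status flips when the product specifications change.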
`
With the automated process for test data generation and output analysis, a human product expert doesn't have to create test data and manually analyze the results. Instead, the expert's knowledge and skills can be more effectively and efficiently utilized to review the analysis results generated automatically. This approach not only enables a large scale verification and testing of configurator programs but also facilitates the repetitive regression tests.
`
7. Conclusions
`
In this article, we address the issue of automating the verification and testing of a large computer configurator program. In contrast to