A Comparison of Commercial and Military Computer Security Policies

David D. Clark* - David R. Wilson**

* Senior Research Scientist, MIT Laboratory for Computer Science, 545 Technology Square, Cambridge, MA 02139
** Director, Information Security Services, Ernst & Whinney, 2000 National City Center, Cleveland, OH 44114

ABSTRACT

Most discussions of computer security focus on control of disclosure. In particular, the U.S. Department of Defense has developed a set of criteria for computer mechanisms to provide control of classified information. However, for that core of data processing concerned with business operation and control of assets, the primary security concern is data integrity. This paper presents a policy for data integrity based on commercial data processing practices, and compares the mechanisms needed for this policy with the mechanisms needed to enforce the lattice model for information security. We argue that a lattice model is not sufficient to characterize integrity policies, and that distinct mechanisms are needed to control disclosure and to provide integrity.

INTRODUCTION

Any discussion of mechanisms to enforce computer security must involve a particular security policy that specifies the security goals the system must meet and the threats it must resist.

For example, the high-level security goals most often specified are that the system should prevent unauthorized disclosure or theft of information, should prevent unauthorized modification of information, and should prevent denial of service. Traditional threats that must be countered are system penetration by unauthorized persons, unauthorized actions by authorized persons, and abuse of special privileges by systems programmers and facility operators. These threats may be intentional or accidental.

Imprecise or conflicting assumptions about desired policies often confuse discussions of computer security mechanisms. In particular, in comparing commercial and military systems, a misunderstanding about the underlying policies the two are trying to enforce often leads to difficulty in understanding the motivation for certain mechanisms that have been developed and espoused by one group or the other.

This paper discusses the military security policy, presents a security policy valid in many commercial situations, and then compares the two policies to reveal important differences between them.

The military security policy we are referring to is a set of policies that regulate the control of classified information within the government. This well-understood, high-level information security policy is that all classified information shall be protected from unauthorized disclosure or declassification. Mechanisms to enforce this policy include the mandatory labeling of all documents with their classification level, and the assigning of user access categories based on the investigation (or "clearing") of all persons permitted to use this information.

During the last 15 to 20 years, considerable effort has gone into determining which mechanisms should be used to enforce this policy within a computer. Mechanisms such as identification and authorization of users, generation of audit information, and association of access control labels with all information objects are well understood.

This policy is defined in the Department of Defense Trusted Computer System Evaluation Criteria [DOD], often called the "Orange Book" from the color of its cover. It articulates a standard for maintaining confidentiality of information and is, for the purposes of our paper, the military information security policy. The term military is perhaps not the most descriptive characterization of this policy; it is relevant to any situation in which access rules for sensitive material must be enforced. We use the term military as a concise tag which at least captures the origin of the policy.

In the commercial environment, preventing disclosure is often important, but preventing unauthorized data modification is usually paramount.

In particular, for that core of commercial data processing that relates to management and accounting for assets, preventing fraud and error is the primary goal. This goal is addressed by enforcing the integrity rather than the privacy of the information. For this reason, the policy we will concern ourselves with is one that addresses integrity rather than disclosure. We will call this a commercial policy, in contrast to the military information security policy.

We are not suggesting that integrity plays no role in military concerns. However, to the extent that the Orange Book is the articulation of the military information security policy, there is a clear difference of emphasis in the military and commercial worlds.

While the accounting principles that are the basis of fraud and error control are well known, there is as yet no Orange Book for the commercial sector that articulates how these policies are to be implemented in the context of a computer system. This makes it difficult to answer the question of whether the mechanisms designed to enforce military information security policies also apply to enforcing commercial integrity policies.

It would be very nice if the same mechanisms could meet both goals, thus enabling the commercial and military worlds to share the development costs of the necessary mechanisms. However, we will argue that two distinct classes of mechanism will be required, because some of the mechanisms needed to enforce disclosure controls and integrity controls are very different.

Therefore, the goal of this paper is to defend two conclusions. First, there is a distinct set of security policies, related to integrity rather than disclosure, which are often of highest priority in the commercial data processing environment. Second, some separate mechanisms are required for enforcement of these policies, disjoint from those of the Orange Book.

MILITARY SECURITY POLICY

The policies associated with the management of classified information, and the mechanisms used to enforce these policies, are carefully defined and well understood within the military. However, these mechanisms are not necessarily well understood in the commercial world, which normally does not have such a complex requirement for control of unauthorized disclosure. Because the military security model provides a good starting point, we begin with a brief summary of computer security in the context of classified information control.

The top-level goal for the control of classified information is very simple: classified information must not be disclosed to unauthorized individuals. At first glance, it appears the correct mechanism to enforce this policy is a control over which individuals can read which data items. This mechanism, while certainly needed, is much too simplistic to solve the entire problem of unauthorized information release.

In particular, enforcing this policy requires a mechanism to control writing of data as well as reading it. Because the control of writing data is superficially associated with ensuring integrity rather than preventing theft, and the classification policy concerns the control of theft, confusion has arisen about the fact that the military mechanism includes strong controls over who can write which data.

Informally, the line of reasoning that leads to this mechanism is as follows. To enforce this policy, the system must protect itself from the authorized user as well as the unauthorized user. There are a number of ways for the authorized user to declassify information. He can do so as a result of a mistake, as a deliberate illegal action, or because he invokes a program on his behalf that, without his knowledge, declassifies data as a malicious side effect of its execution. This class of program, sometimes called a "Trojan Horse" program, has received much attention within the military.

To understand how to control this class of problem in the computer, consider how a document can be declassified in a noncomputerized context. The simple technique involves copying the document, removing the classification labels from the document with a pair of scissors, and then making another copy that does not have the classification labels. This second copy, which physically appears to be unclassified, can then be carried past security guards who are responsible for controlling the theft of classified documents. Declassification occurs by copying.

To prevent this in a computer system, it is necessary to control the ability of an authorized user to copy a data item. In particular, once a computation has read a data item of a certain security level, the system must ensure that any data items written by that computation have a security label at least as restrictive as the label of the item previously read. It is this mandatory check of the security level of all data items whenever they are written that enforces the high-level security policy.

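As an illustration (ours, not a description of any particular evaluated system), the following Python sketch models this mandatory check under the simplifying assumption that labels are totally ordered levels; a real system would use a full lattice of levels and category sets.

    # Sketch of the mandatory write check, assuming totally ordered
    # labels (0 = Unclassified ... 3 = Top Secret).
    class Computation:
        def __init__(self):
            self.high_water = 0    # most restrictive level read so far

        def read(self, label, item):
            # Reading can only raise the computation's level, never lower it.
            self.high_water = max(self.high_water, label)
            return item

        def write(self, label, item):
            # Mandatory check: anything written must carry a label at least
            # as restrictive as anything previously read, so a Trojan Horse
            # cannot declassify data by copying it.
            if label < self.high_water:
                raise PermissionError("write would declassify data")
            # ... perform the actual write ...
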
An important component of this mechanism is that checking the security level on all reads and writes is mandatory and enforced by the system, as opposed to being at the discretion of the individual user or application.

In a typical time sharing system not intended for multilevel secure operation, the individual responsible for a piece of data determines who may read or write that data. Such discretionary controls are not sufficient to enforce the military security rules because, as suggested above, the authorized user (or programs running on his behalf) cannot be trusted to enforce the rules properly. The mandatory controls of the system so constrain the individual user that any action he takes is guaranteed to conform to the security policy.

Most systems intended for military security provide traditional discretionary control in addition to the mandatory classification checking to support what is informally called "need to know." By this mechanism, it is possible for the user to further restrict the accessibility of his data, but it is not possible to increase the scope in a manner inconsistent with the classification levels.

In 1983, the U.S. Department of Defense produced the Orange Book, which attempts to organize and document mechanisms that should be found in a computer system designed to enforce the military security policies. This document stresses the importance of mandatory controls if effective enforcement of a policy is to be achieved within a system. To enforce the particular policy of the Orange Book, the mandatory controls relate to data labels and user access categories. Systems in division C have no requirement for mandatory controls, while systems in divisions A and B specifically have these mandatory maintenance and checking controls for labels and user rights. (Systems in division A are distinguished from those in B, not by additional function, but by having been designed to permit formal verification of the security principles of the system.)

Several security systems used in the commercial environment, specifically RACF, ACF/2, and CA-Topsecret, were recently evaluated using the Orange Book criteria. The C ratings that these security packages received would indicate that they did not meet the mandatory requirements of the security model as described in the Orange Book. Yet, these packages are used commonly in industry and viewed as being rather effective in meeting industry requirements. This would suggest that industry views security requirements somewhat differently than the security policy described in the Orange Book. The next section of the paper begins a discussion of this industry view.

COMMERCIAL SECURITY POLICY FOR INTEGRITY

Clearly, control of confidential information is important in both the commercial and military environments. However, a major goal of commercial data processing, often the most important goal, is to ensure integrity of data to prevent fraud and errors. No user of the system, even if authorized, may be permitted to modify data items in such a way that assets or accounting records of the company are lost or corrupted. Some mechanisms in the system, such as user authentication, are an integral part of enforcing both the commercial and military policies. However, other mechanisms are very different.

The high-level mechanisms used to enforce commercial security policies related to data integrity were derived long before computer systems came into existence. Essentially, there are two mechanisms at the heart of fraud and error control: the well-formed transaction, and separation of duty among employees.

The concept of the well-formed transaction is that a user should not manipulate data arbitrarily, but only in constrained ways that preserve or ensure the integrity of the data.

A very common mechanism in well-formed transactions is to record all data modifications in a log so that actions can be audited later. (Before the computer, bookkeepers were instructed to write in ink, and to make correcting entries rather than erase in case of error. In this way the books themselves, being write-only, became the log, and any evidence of erasure was an indication of fraud.)

Perhaps the most formally structured example of well-formed transactions occurs in accounting systems, which model their transactions on the principles of double entry bookkeeping. Double entry bookkeeping ensures the internal consistency of the system's data items by requiring that any modification of the books comprises two parts, which account for or balance each other. For example, if a check is to be written (which implies an entry in the cash account) there must be a matching entry on the accounts payable account. If an entry is not performed properly, so that the parts do not match, this can be detected by an independent test (balancing the books). It is thus possible to detect such frauds as the simple issuing of unauthorized checks.

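As a concrete illustration (ours; the account names are hypothetical), the following sketch accepts a transaction only if its parts balance, so a lone unauthorized check is rejected before it reaches the books.

    # Sketch of the double entry rule: the parts of a transaction must
    # account for (balance) each other before it is accepted.
    from collections import defaultdict

    class Ledger:
        def __init__(self):
            self.balances = defaultdict(int)    # account name -> cents

        def post(self, entries):
            # entries: (account, amount) pairs; debits positive, credits
            # negative, and together they must sum to zero.
            if sum(amount for _, amount in entries) != 0:
                raise ValueError("unbalanced transaction rejected")
            for account, amount in entries:
                self.balances[account] += amount

    ledger = Ledger()
    # Writing a check: the cash entry is matched in accounts payable.
    ledger.post([("accounts payable", 5000), ("cash", -5000)])
    # ledger.post([("cash", -5000)])    # lone check: raises ValueError
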
The second mechanism to control fraud and error, separation of duty, attempts to ensure the external consistency of the data objects: the correspondence between the data object and the real world object it represents. Because computers do not normally have direct sensors to monitor the real world, computers cannot verify external consistency directly. Rather, the correspondence is ensured indirectly by separating all operations into several subparts and requiring that each subpart be executed by a different person.

For example, the process of purchasing some item and paying for it might involve subparts: authorizing the purchase order, recording the arrival of the item, recording the arrival of the invoice, and authorizing payment. The last subpart, or step, should not be executed unless the previous three are properly done. If each step is performed by a different person, the external and internal representation should correspond unless some of these people conspire. If one person can execute all of these steps, then a simple form of fraud is possible, in which an order is placed and payment made to a fictitious company without any actual delivery of items. In this case, the books appear to balance; the error is in the correspondence between real and recorded inventory.

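A minimal sketch of this purchasing example (ours; the step and user names are hypothetical) shows how a system can refuse to let one person perform more than one subpart, and refuse payment before the earlier steps are recorded.

    # Sketch of separation of duty for the purchasing example.
    STEPS = ["order", "arrival", "invoice", "payment"]

    class Purchase:
        def __init__(self):
            self.done_by = {}    # step name -> user who performed it

        def perform(self, step, user):
            # A step may run only after all earlier steps are done.
            if any(s not in self.done_by for s in STEPS[:STEPS.index(step)]):
                raise PermissionError("earlier steps not complete")
            # Each subpart must be executed by a different person.
            if user in self.done_by.values():
                raise PermissionError("same person may not do two subparts")
            self.done_by[step] = user

    p = Purchase()
    p.perform("order", "alice")
    p.perform("arrival", "bob")
    p.perform("invoice", "carol")
    p.perform("payment", "dave")    # allowed: four different people
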
Perhaps the most basic separation of duty rule is that any person permitted to create or certify a well-formed transaction may not be permitted to execute it (at least against production data). This rule ensures that at least two people are required to cause a change in the set of well-formed transactions.

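This rule is directly mechanizable. In the sketch below (ours), the system records who certified each transaction and refuses to let that same person execute it:

    # Sketch of the certify/execute separation rule.
    class TPRegistry:
        def __init__(self):
            self.certifier = {}    # TP name -> user who certified it

        def certify(self, tp, user):
            self.certifier[tp] = user

        def execute(self, tp, user):
            if tp not in self.certifier:
                raise PermissionError("uncertified transaction")
            if self.certifier[tp] == user:
                raise PermissionError("certifier may not execute this TP")
            # ... run the transaction against production data ...

    reg = TPRegistry()
    reg.certify("post_payment", "alice")
    reg.execute("post_payment", "bob")      # allowed
    # reg.execute("post_payment", "alice")  # rejected: same person
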
The method of separation of duty is effective except in the case of collusion among employees. For this reason, a standard auditing disclaimer is that the system is certified correct under the assumption that there has been no collusion. While this might seem a risky assumption, the method has proved very effective in practical control of fraud.

Separation of duty can be made very powerful by thoughtful application of the technique, such as random selection of the sets of people to perform some operation, so that any proposed collusion is only safe by chance. Separation of duty is thus a fundamental principle of commercial data integrity control.

Therefore, for a computer system to be used for commercial data processing, specific mechanisms are needed to enforce these two rules. To ensure that data items are manipulated only by means of well-formed transactions, it is first necessary to ensure that a data item can be manipulated only by a specific set of programs. These programs must be inspected for proper construction, and controls must be provided on the ability to install and modify these programs, so that their continued validity is ensured. To ensure separation of duties, each user must be permitted to use only certain sets of programs. The assignment of people to programs must again be inspected to ensure that the desired controls are actually met.

These integrity mechanisms differ in a number of important ways from the mandatory controls for military security as described in the Orange Book. First, with these integrity controls, a data item is not necessarily associated with a particular security level, but rather with a set of programs permitted to manipulate it. Second, a user is not given authority to read or write certain data items, but to execute certain programs on certain data items. The distinction between these two mechanisms is fundamental. With the Orange Book controls, a user is constrained by what data items he can read and write. If he is authorized to write a particular data item he may do so in any way he chooses. With commercial integrity controls, the user is constrained by what programs he can execute, and the manner in which he can read or write data items is implicit in the actions of those programs.

Because of separation of duties, it will almost always be the case that a user, even though he is authorized to write a data item, can do so only by using some of the transactions defined for that data item. Other users, with different duties, will have access to different sets of transactions related to that data.

MANDATORY COMMERCIAL CONTROLS

The concept of mandatory control is central to the mechanisms for military security, but the term is not usually applied to commercial systems. That is, commercial systems have not reflected the idea that certain functions, central to the enforcement of policy, are designed as a fundamental characteristic of the system. However, it is important to understand that the mechanisms described in the previous section, in some respects, are mandatory controls.

They are mandatory in that the user of the system should not, by any sequence of operations, be able to modify the list of programs permitted to manipulate a particular data item or to modify the list of users permitted to execute a given program. If the individual user could do so, then there would be no control over the ability of an untrustworthy user to alter the system for fraudulent ends.

In the commercial integrity environment, the owner of an application and the general controls implemented by the data processing organization are responsible for ensuring that all programs are well-formed transactions. As in the military environment, there is usually a designated separate staff responsible for assuring that users can execute transactions only in such a way that the separation of duty rule is enforced. The system ensures that the user cannot circumvent these controls. This is a mandatory rather than a discretionary control.

The two mandatory controls, military and commercial, are very different mechanisms. They do not enforce the same policy. The military mandatory control enforces the correct setting of classification levels. The commercial mandatory control enforces the rules that implement the well-formed transaction and separation of duty model. When constructing a computer system to support these mechanisms, very different low-level tools are implemented.

An interesting example of these two sets of mechanisms can be found in the Multics operating system, marketed by Honeywell Information Systems and evaluated by the Department of Defense in Class B2 of its evaluation criteria. A certification in Division B implies that Multics has mandatory mechanisms to enforce security levels, and indeed those mechanisms were specifically implemented to make the system usable in a military multilevel secure environment [WHITMORE]. However, those mechanisms do not provide a sufficient basis for enforcing a commercial integrity model. In fact, Multics has an entirely different set of mechanisms, called protection rings, that were developed specifically for this purpose [SCHROEDER]. Protection rings provide a means for ensuring that data bases can be manipulated only by programs authorized to use them. Multics thus has two complete sets of security mechanisms, one oriented toward the military and designed specifically for multilevel operation, and the other designed for the commercial model of integrity.

The analogy between the two forms of mandatory control is not perfect. In the integrity control model, there must be more discretion left to the administrator of the system, because the determination of what constitutes proper separation of duty can be done only by a comparison with application-specific criteria. The separation of duty determination can be rather complex, because the decisions for all the transactions interact. This greater discretion means that there is also greater scope for error by the security officer or system owner, and that the system is less able to prevent the security officer, as opposed to the user, from misusing the system. To the system user, however, the behavior of the two mandatory controls is similar. The rules are seen as a fundamental part of the system, and may not be circumvented, only further restricted, by any other discretionary control that exists.

COMMERCIAL EVALUATION CRITERIA

As discussed earlier, RACF, ACF/2, and CA-Topsecret were all reviewed using the Department of Defense evaluation criteria described in the Orange Book. Under these criteria, these systems did not provide any mandatory controls. However, these systems, especially when executed in the context of a telecommunications monitor system such as CICS or IMS, constitute the closest approximation the commercial world has to the enforcement of a mandatory integrity policy. There is thus a strong need for a commercial equivalent of the military evaluation criteria to provide a means of categorizing systems that are useful for integrity control.

Extensive study is needed to develop a document with the depth of detail associated with the Department of Defense evaluation criteria. But, as a starting point, we propose the following criteria, which we compare to the fundamental computer security requirements from the "Introduction" to the Orange Book. First, the system must separately authenticate and identify every user, so that his actions can be controlled and audited. (This is similar to the Orange Book requirement for identification.) Second, the system must ensure that specified data items can be manipulated only by a restricted set of programs, and the data center controls must ensure that these programs meet the well-formed transaction rule. Third, the system must associate with each user a valid set of programs to be run, and the data center controls must ensure that these sets meet the separation of duty rule. Fourth, the system must maintain an auditing log that records every program executed and the name of the authorizing user. (This is superficially similar to the Orange Book requirement for accountability, but the events to be audited are quite different.)

In addition to these criteria, the military and commercial environments share two requirements. First, the computer system must contain mechanisms to ensure that the system enforces its requirements. And second, the mechanisms in the system must be protected against tampering or unauthorized change. These two requirements, which ensure that the system actually does what it asserts it does, are clearly an integral part of any security policy. These are generally referred to as the "general" or "administrative" controls in a commercial data center.

A FORMAL MODEL OF INTEGRITY

In this section, we introduce a more formal model for data integrity within computer systems, and compare our work with other efforts in this area. We use as examples the specific integrity policies associated with accounting practices, but we believe our model is applicable to a wide range of integrity policies.

To begin, we must identify and label those data items within the system to which the integrity model must be applied. We call these "Constrained Data Items," or CDIs. The particular integrity policy desired is defined by two classes of procedures: Integrity Verification Procedures, or IVPs, and Transformation Procedures, or TPs. The purpose of an IVP is to confirm that all of the CDIs in the system conform to the integrity specification at the time the IVP is executed. In the accounting example, this corresponds to the audit function, in which the books are balanced and reconciled to the external environment. The TP corresponds to our concept of the well-formed transaction. The purpose of the TPs is to change the set of CDIs from one valid state to another. In the accounting example, a TP would correspond to a double entry transaction.

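To make these definitions concrete, here is a small sketch (ours; the zero-balance specification stands in for a real integrity policy) in which the CDIs are account balances, the IVP checks that the books balance, and a TP is a double entry transfer.

    # Sketch of the model's vocabulary: CDIs, an IVP, and a TP.
    cdis = {"cash": -5000, "accounts payable": 5000}    # Constrained Data Items

    def ivp(cdis):
        # Integrity Verification Procedure: confirm that the CDIs conform
        # to the integrity specification (here, the books balance).
        return sum(cdis.values()) == 0

    def tp_transfer(cdis, src, dst, amount):
        # Transformation Procedure: takes the CDIs from one valid state
        # to another.
        cdis[src] -= amount
        cdis[dst] += amount

    assert ivp(cdis)    # a valid state, established by running the IVP
    tp_transfer(cdis, "cash", "accounts payable", 100)
    assert ivp(cdis)    # a certified TP preserves validity
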
To maintain the integrity of the CDIs, the system must ensure that only a TP can manipulate the CDIs. It is this constraint that motivated the term Constrained Data Item. Given this constraint, we can argue that, at any given time, the CDIs meet the integrity requirements. (We call this condition a "valid state.") We can assume that at some time in the past the system was in a valid state, because an IVP was executed to verify this. Reasoning forward from this point, we can examine the sequence of TPs that have been executed. For the first TP executed, we can assert that it left the system in a valid state as follows. By definition it will take the CDIs into a valid state if they were in a valid state before execution of the TP. But this precondition was ensured by execution of the IVP. For each TP in turn, we can repeat this necessary step to ensure that, at any point after a sequence of TPs, the system is still valid. This proof method resembles the mathematical method of induction, and is valid provided the system ensures that only TPs can manipulate the CDIs.[1]

While the system can ensure that only TPs manipulate CDIs, it cannot ensure that the TP performs a well-formed transformation. The validity of a TP (or an IVP) can be determined only by certifying it with respect to a specific integrity policy. In the case of the bookkeeping example, each TP would be certified to implement transactions that lead to properly segregated double entry accounting. The certification function is usually a manual operation, although some automated aids may be available.

Integrity assurance is thus a two-part process: certification, which is done by the security officer, system owner, and system custodian with respect to an integrity policy; and enforcement, which is done by the system. Our model to this point can be summarized in the following three rules:

C1: (Certification) All IVPs must properly ensure that all CDIs are in a valid state at the time the IVP is run.

C2: All TPs must be certified to be valid. That is, they must take a CDI to a valid final state, given that it is in a valid state to begin with. For each TP, and each set of CDIs that it may manipulate, the security officer must specify a "relation," which defines that execution. A relation is thus of the form: (TPi, (CDIa, CDIb, CDIc, ...)), where the list of CDIs defines a particular set of arguments for which the TP has been certified.

[1] There is an additional detail which the system must enforce, which is to ensure that TPs are executed serially, rather than several at once. During the mid-point of the execution of a TP, there is no requirement that the system be in a valid state. If another TP begins execution at this point, there is no assurance that the final state will be valid. To address this problem, most modern data base systems have mechanisms to ensure that TPs appear to have executed in a strictly serial fashion, even if they were actually executed concurrently for efficiency reasons.

E1: (Enforcement) The system must maintain the list of relations specified in rule C2, and must ensure that the only manipulation of any CDI is by a TP, where the TP is operating on the CDI as specified in some relation.

The above rules provide the basic framework to ensure internal consistency of the CDIs. To provide a mechanism for external consistency, the separation of duty mechanism, we need additional rules to control which persons can execute which programs on specified CDIs:

E2: The system must maintain a list of relations of the form: (UserID, TPi, (CDIa, CDIb, CDIc, ...)), which relates a user, a TP, and the data objects that TP may reference on behalf of that user. It must ensure that only executions described in one of the relations are performed.

C3: The list of relations in E2 must be certified to meet the separation of duty requirement.

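Rules E1 and E2 together describe a small reference monitor. The sketch below (ours; the table contents are hypothetical) keeps the certified relations and permits an execution only when a matching relation exists; certifying that the table itself satisfies separation of duty is the human task named in C3.

    # Sketch of rules E1/E2: CDIs may be touched only through a TP, by a
    # user, as some certified relation permits.
    relations = {
        # (UserID, TP name) -> CDI names the TP may reference
        ("alice", "post_payment"): {"cash", "accounts payable"},
        ("bob", "audit_books"): {"cash", "accounts payable", "log"},
    }

    def run_tp(user, tp, cdi_names):
        allowed = relations.get((user, tp))
        if allowed is None or not set(cdi_names) <= allowed:
            raise PermissionError("no certified relation permits this")
        # ... dispatch to the TP itself ...

    run_tp("alice", "post_payment", {"cash", "accounts payable"})    # permitted
    # run_tp("alice", "audit_books", {"log"})    # rejected: no relation
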
Formally, the relations specified for rule E2 are more powerful than those of rule E1, so E1 is unnecessary. However, for both philosophical and practical reasons, it is helpful to have both sorts of relations. Philosophically, keeping E1 and E2 separate helps to indicate that there are two basic problems to be solved: internal and external consistency. As a practical matter, the existence of both forms together permits complex relations to be expressed with shorter lists, by use of identifiers within the relations that use "wild card" characters to match classes of TPs or CDIs.

The above relation made use of UserID, an identifier for a user of the system. This implies the need for a rule to define these:

E3: The system must authenticate the identity of each user attempting to execute a TP.

Rule E3 is relevant to both commercial and military systems. However, those two classes of systems use the identity of the user to enforce very different policies. The relevant policy in the military context, as described in the Orange Book, is based on level and category of clearance, while the commercial policy is likely to be based on separation of responsibility among two or more users.

There may be other restrictions on the validity of a TP. In each case, this restriction will be manifested as a certification rule and an enforcement rule. For example, if a TP is valid only during certain hours of the day, then the system must provide a trustworthy clock (an enforcement rule) and the TP must be certified to read the clock properly.

Almost all integrity enforcement systems require that all TP execution be logged to provide an audit trail. However, no special enforcement rule is needed to implement this facility; the log can be modeled as another CDI, with an associated TP that only appends to the existing CDI value. The only rule required is:

C4: All TPs must be certified to write to an append-only CDI (the log) all information necessary to permit the nature of the operation to be reconstructed.

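Modeled this way, the log needs no machinery beyond the rules already given. A sketch (ours) of the single append-only TP:

    # Sketch of the audit log as a CDI whose only TP appends to it,
    # recording enough to reconstruct the nature of each operation.
    import json, time

    log_cdi = []    # the log is itself a CDI

    def tp_append_log(user, tp, cdi_names):
        log_cdi.append(json.dumps({
            "time": time.time(),
            "user": user,
            "tp": tp,
            "cdis": sorted(cdi_names),
        }))

    tp_append_log("alice", "post_payment", ["cash", "accounts payable"])
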
There is only one more critical component to this integrity model. Not all data is constrained data. In addition to CDIs, most systems contain data items not covered by the integrity policy that may be manipulated arbitrarily, subject only to discretionary controls. These Unconstrained Data Items, or UDIs, are relevant because they represent the way new information is fed into the system. For example, information typed by a user at the keyboard is a UDI; it may have been entered or modified arbitrarily. To deal with this class of data, it is necessary to recognize that certain TPs may take UDIs as input values, and may modify or create CDIs based on this information. This implies a certification rule:

C5: Any TP that takes a UDI as an input value must be certified to perform only valid transformations, or else no transformations, for any possible value of the UDI. The transformation should take the input from a UDI to a CDI, or the UDI is rejected. Typically, this is an edit program.

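A sketch of such an edit program (ours; the particular validation is hypothetical): for any possible value of the UDI it either performs the valid transformation to CDI data or rejects the input outright.

    # Sketch of rule C5: a TP that upgrades a UDI (raw keyboard input)
    # to CDI data, or else performs no transformation at all.
    def tp_edit(udi):
        text = udi.strip()
        if not text.removeprefix("-").isdigit():
            raise ValueError("UDI rejected: not a whole number of cents")
        amount = int(text)
        if amount == 0:
            raise ValueError("UDI rejected: zero amount")
        return amount    # now safe to store in a CDI

    tp_edit(" 5000 ")     # accepted: becomes CDI data
    # tp_edit("50.00")    # rejected: raises ValueError
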
For this model to be effective, the various certification rules must not be bypassed. For example, if a user can create and run a new TP without having