`IMPLEMENTATION AND EXPERIENCES
`
`Joy Foglesong, George Richmond, Loellyn Cassell,
`Carole Hogan, John Kordas, Michael Nemanic
`
`Lawrence Livermore National Laboratory
`Livermore, California
`
`ABSTRACT
`
`The LINCS Storage System (LSS) has been in production
`at Lawrence Livermore National Laboratory (LLNL) since
`January 1988. In the development of the LSS, the design-
`ers made key architectural decisions that included the
`separation of data and control messages, a network-wide
`locking mechanism, and the separation of the naming
`mechanism from other object managers. Other important
issues of the system deal with space management, descriptor
management, and system administration. This paper
outlines the LSS software system and then focuses on
these key design decisions and issues with the intent of
`providing some insight into the types of difficulties
`designers face in developing a distributed storage system.
`
`OVERVIEW OF THE SOFTWARE SYSTEM
`
The LINCS* Storage System is based on the IEEE Mass
Storage Reference Model and integrates host, central, and
archival storage systems into one transparent, logical
system (see Figure 1). The host system storage media
consist of solid-state and rotating disks on local machines.
The central system medium is a collection of magnetic
disks on the storage machine. The archival system
medium is cartridge tape kept either in on-line robotic
devices or in off-line vaults.
`
Figure 1. LSS Hierarchy. (Host A and Host B each run a bitfile server and a name server; the arrows to the central bitfile and name servers represent the bitfile movers.)
`
`
`The software components of the LSS were designed to
operate as servers on a distributed network. Their design
extends the classic client-server model through the use of
`multitasking within servers and clients to support concur-
`rent access to objects. The two main types of LSS servers
`are name servers, which translate human-oriented names
`to machine-oriented object identifiers, and bitfile servers,
`which manage access to bitfiles on disk or on archival
`media (currently IBM model 3480 tape cartridges). Host
`machines and the storage computer have name servers that
`manage name/object identifier pairs, bitfile servers that
`manage bitfiles on the various storage media, and bitfile
`movers that move bitfiles between archival tape and
`central disk and between host bitfile servers and the central
`server.**
`
`In the remainder of this section, we briefly outline the
`abstract objects managed by the storage system and the
`major components of the system. The two main abstract
`objects of the LSS are the bitfile, implemented by the
`bitfile servers, and the directory, implemented by the name
`servers. A bitfile is viewed as a descriptor and a body.
`The descriptor contains attributes of the bitfile, such as the
`time the bit segment was last written and the bitfile size.
`The body is a sequence of uninterpreted, logically contigu-
`ous bits. The operations that can be performed on the
`body include functions such as create, destroy, read, and
`write. Fields of the bitfile descriptor can be read with the
`interrogate operation and, when allowed, written with the
`change operation. A directory is viewed as a descriptor
`and a list of entries. The descriptor contains attributes of
`the directory such as the time an entry was last inserted or
`deleted and the number of entries contained in the direc-
`tory. An entry is a human-name/object-identifier pair.
`Directories may be created, destroyed, or listed and entries
`may be created, deleted, fetched, or renamed. As with the
`bitfile descriptor, fields in the directory descriptor may be
`read with the interrogate operation and written with the
`change operation.
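To make the two abstract interfaces concrete, the sketch below models a bitfile and a directory as small Python classes. The class and field names (Bitfile, Directory, interrogate, change, and so on) follow the operations named above but are otherwise illustrative; actual LSS descriptors carry many more attributes.

    import time

    class Bitfile:
        """Illustrative model of the LSS bitfile: a descriptor plus an uninterpreted body."""
        def __init__(self):
            self.descriptor = {"size": 0, "last_written": None}   # descriptor attributes
            self.body = bytearray()                               # sequence of uninterpreted bits

        def write(self, offset, data):
            self.body[offset:offset + len(data)] = data
            self.descriptor["size"] = len(self.body)
            self.descriptor["last_written"] = time.time()

        def read(self, offset, length):
            return bytes(self.body[offset:offset + length])

        def interrogate(self, field):
            return self.descriptor[field]        # read a descriptor field

        def change(self, field, value):
            self.descriptor[field] = value       # write a descriptor field, when allowed

    class Directory:
        """Illustrative model of the LSS directory: a descriptor plus name/identifier entries."""
        def __init__(self):
            self.descriptor = {"entry_count": 0, "last_change": None}
            self.entries = {}                    # human name -> object identifier

        def insert(self, name, object_id):
            self.entries[name] = object_id
            self.descriptor["entry_count"] = len(self.entries)
            self.descriptor["last_change"] = time.time()

        def fetch(self, name):
            return self.entries[name]

        def list(self):
            return list(self.entries.items())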
`
* LINCS is an acronym for Livermore Integrated Network
Communication System and refers to a set of standard communi-
cation protocols and service interface primitives that define the
software basis for a distributed environment.
`** In Figure 1, the bitfile movers are represented by the arrows
`between the servers.
`
`
`The name servers provide a transparent naming service
`based on the use of unique, network-wide object identifiers
`for all resources. Object identifiers in the system have the
`same structure regardless of the type of object they
`identify. This allows objects of varying types to be
`cataloged in the same directory, providing a uniform
`naming mechanism across all objects. Since the directory
`object identifiers are also globally unique and can be
`stored in directories, a logically single, hierarchical,
`directed-graph directory structure can be built. This
`structure supports name transparency; translation of a
`pathname initiated from any site in the network always
`references the same object. Furthermore, there is no
`necessary relationship between the directory location and
`the location of the objects cataloged in the directory. To
`help reduce network delay when accessing a directory, it is
planned that the host name servers will cooperate with the
`central name server to cache and migrate directories.
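As an illustration of how uniform object identifiers permit a cross-machine directory graph, the sketch below resolves a pathname one component at a time. The identifier layout (a type field, a server address, and a local serial number) and the name_server_for helper are assumptions of the example, not the actual LSS encoding.

    from collections import namedtuple

    # Hypothetical identifier layout: every object, directory or not, uses the same structure.
    ObjectId = namedtuple("ObjectId", ["obj_type", "server_addr", "serial"])

    def resolve(root_dir_id, pathname, name_server_for):
        """Translate a pathname to an object identifier, crossing servers transparently.

        name_server_for(object_id) is an assumed helper returning a client handle for
        whichever name server manages that directory, local or remote.
        """
        current = root_dir_id
        for component in pathname.strip("/").split("/"):
            server = name_server_for(current)           # directory may live on any machine
            current = server.fetch(current, component)  # entry: human name -> object identifier
        return current                                  # the same object from any site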
`
The bitfile servers provide network-wide random access to
`bitfiles. The design model for the integrated network
`bitfile system is to have fully integrated host, central, and
`archival bitfile servers and movers that support automatic
`migration throughout the entire storage hierarchy. The
`connection between the central and host bitfile servers is
`not yet fully implemented. However, the current bitfile
`system hierarchy is integrated from the central bitfile
`server through the archive’s off-line vault volumes. When
`the bitfile system is fully integrated, a client on a host will
`request a bitfile access from his local server. If the local
`server has no knowledge of the bitfile being requested, it
`will contact the central server. The central server will find
`the bitfile, which may reside on another host machine or
`on central media or in the archive, and send a copy to the
`requesting host bitfile server. Until the hierarchy is fully
connected, a client on a host machine may access a bitfile
`managed by either the central or the archival bitfile server
`through direct communication with the central server or by
`invoking a utility that copies the bitfile to a local bitfile
`server.
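When the hierarchy is fully connected, a host bitfile server would satisfy requests roughly as sketched below. The routine names (local_lookup, find, send_copy_to) are hypothetical stand-ins for the messages exchanged between the host server, the central server, and the movers.

    def open_bitfile(bitfile_id, local_server, central_server):
        """Sketch of the planned fully integrated lookup path (assumed interfaces)."""
        bitfile = local_server.local_lookup(bitfile_id)        # 1. try the host's own disk
        if bitfile is not None:
            return bitfile

        # 2. The local server has no knowledge of the bitfile; ask the central server,
        #    which locates it on another host, on central disk, or in the archive.
        location = central_server.find(bitfile_id)

        # 3. A bitfile mover copies the body to the requesting host bitfile server.
        central_server.send_copy_to(bitfile_id, location, dest=local_server)
        return local_server.local_lookup(bitfile_id)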
`
`The central and archival levels of the storage hierarchy are
`integrated using a 200-gigabyte disk cache as a front-end
`to tape. As new bitfiles accumulate, they are automatically
`written to tape. The descriptors for all bitfiles on tape are
`maintained on disk for easy access. On a read request, a
`bitfile is automatically cached from tape to the disk cache.
`Bitfiles are purged from the disk cache using a bitfile-
`size-time-of-last-access algorithm.
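A minimal sketch of the purge policy follows, assuming the bitfile-size-time-of-last-access criterion is a simple product of size and idle time; the exact weighting used by the LSS is not spelled out here.

    import time

    def purge_candidates(cached_bitfiles, bytes_needed):
        """Rank disk-cache bitfiles by size * idle time and purge until enough space is freed.

        cached_bitfiles: iterable of objects with .size (bytes) and .last_access (epoch seconds).
        Purging only drops the cached copy; the bitfile itself remains on tape.
        """
        now = time.time()
        ranked = sorted(cached_bitfiles,
                        key=lambda b: b.size * (now - b.last_access),
                        reverse=True)               # largest, least recently used first
        freed, victims = 0, []
        for bitfile in ranked:
            if freed >= bytes_needed:
                break
            victims.append(bitfile)
            freed += bitfile.size
        return victims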
`
`Using this overview as background, we now focus on
`several key design decisions and issues.
`
`SEPARATION OF CONTROL AND DATA
`
`The separation of control and data messages in the LSS
`improves transfer rates and allows third-party copies
`
`during which data does not pass through bitfile server
`buffers. Data associated with reads or writes is received or
`sent on a data communication association (source-
`destination address pair) separate from the control commu-
`nication associations (see Figure 2a). The third-party copy
`model is illustrated in Figure 2b. This design can be used
to create a pipeline of services as shown in Figure 2c.
`
Figure 2a. Separation of Control and Data.

Figure 2b. Third-party Copy.

Figure 2c. Pipeline.
`
`The optimization of third-party copies is supported by
`receive-any and send-any mechanisms in the LINCS
`message-system interface. These mechanisms use
`association capabilities called stream numbers to make
`communications secure; associations are defined by the
`triple (source address, destination address, stream num-
`ber). The receive-any and send-any mechanisms allow a
`process to declare that it is willing to receive or send
messages with one or more members of the triple unspecified,
i.e., any source, or any destination, or any stream
`number. For example, the third-party copy protocol uses
`19
`
`Oracle Exhibit 1010, page 2
`
`
`
`the receive-any by indicating a specific destination address
`and a specific stream number but an unspecified source
`address: a rendezvous occurs when a message arrives
`matching the receive-any’s specific destination address
`and specific stream number. The third-party copy protocol
`uses the send-any by indicating a specific source address
`and a specific stream number but an unspecified destina-
tion address. The message is flow-blocked at the source
machine until an ack with the specific source address and
stream number is received by the source machine, completing
the rendezvous by matching the acknowledging
destination address with the any-destination address.
`
The third-party copy utilizes these mechanisms in the
following protocol. When a controlling process requires
data to be moved between second and third processes, it
first sends a message to the second, over a control association,
that includes: an indication that the process is to use
the send- or receive-any mechanism in the data transfer, an
indication whether the process will send or receive data,
and a stream number. In response, the second process
will: 1) invoke a receive-any, if receiving data, or a send-any,
if sending data, using the stream number received
from the controlling process and 2) send a message back
to the controlling process containing its local data address
used in invoking the receive- or send-any. The controlling
process then sends a message to the third process that
includes an indication that the process is to use a send- or
receive-specific in the data transfer; an indication whether
the process will send or receive data; the data address
received from the second process; and the stream number.
In response, the third process invokes a receive-specific, if
receiving data, or a send-specific, if sending data, using
the data address and stream number received from the
controlling process. The any message of the second
process will rendezvous with the specific data message or
specific ack of the third process and the data will flow
directly between these processes. When the data transfer
is complete, both source and sink processes send a message
to the controlling process acknowledging completion of
the transfer (see Figure 3).
`
Figure 3. Third-party Copy Protocol. (The controlling process, Process 1, sends (any, receive, X) to Process 2 and (spec, send, Daddr, X) to Process 3; the data then flows over the data association between Processes 2 and 3.)
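The message exchange described above can be summarized in code form. The sketch below follows the case of Figure 3 (process 2 receives, process 3 sends); the calls used here (send, recv, and the message fields) are illustrative stand-ins for the LINCS message-system interface, not its actual signatures.

    def third_party_copy(control, proc2, proc3, stream):
        """Controlling process arranges a transfer that flows directly from proc3 to proc2.

        control, proc2, proc3 are assumed control-association handles; stream is the
        stream number chosen by the controlling process.
        """
        # 1. Tell the second process to post a receive-any on the chosen stream number.
        control.send(proc2, {"role": "receive", "mode": "any", "stream": stream})
        daddr = control.recv(proc2)["data_address"]      # local address used in the receive-any

        # 2. Tell the third process to send to that specific data address and stream number.
        control.send(proc3, {"role": "send", "mode": "specific",
                             "data_address": daddr, "stream": stream})

        # 3. The receive-any of process 2 rendezvouses with the specific send of process 3;
        #    the data flows over the data association without passing through the controller.
        assert control.recv(proc2)["status"] == "done"   # sink acknowledges completion
        assert control.recv(proc3)["status"] == "done"   # source acknowledges completion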
`
`20
`
`The LSS utilizes bitfile movers to actually transfer the
`data. A bitfile mover performs data transfer directly
`between channels such as memory or networks or between
`devices such as disk or tape. In our UNIX implementa-
`tions, the bitfile movers have been implemented as either
normal user processes or as kernel entities, the former for
`portability and the latter for performance. All bitfile
`movers use the same communication and lightweight
tasking libraries, whether in kernel or user space. On a
`single machine, data is transferred by the mover between a
`device and application memory; over a network, data is
`actually moved directly between bitfile movers. For a full
discussion of the bitfile movers, see the references.
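The inner loop of a mover can be pictured as below: a fixed-size buffer is filled from one channel or device and drained into another. The read_into/write_from interfaces are assumptions made for the sketch; real movers also overlap I/O using the lightweight tasking library.

    def move(source, sink, buffer_size=256 * 1024):
        """Sketch of a bitfile mover copying data between two channels or devices.

        source and sink are assumed to expose read_into(buf) -> bytes_read and
        write_from(buf, n); either end may be a disk, a tape, memory, or a network.
        """
        buf = bytearray(buffer_size)
        total = 0
        while True:
            n = source.read_into(buf)       # fill the buffer from the source end
            if n == 0:
                break                       # end of data
            sink.write_from(buf, n)         # drain it to the sink end
            total += n
        return total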
`
`Because the LSS separates data and control messages and
`utilizes the mover processes, the bitfile servers never touch
`the sequence of bits in the bitfile body. This not only
`avoids data copying when only one machine is involved in
`a transfer, but it is also important when a program on
`machine A invokes a transfer of all or part of a bitfile from
`machine B to machine C. The program on machine A
`controls the movement of the data but the data flows
`directly from machine B to machine C. This architecture
`gives designers the flexibility to place data movement
`control on lower-performance machines without impacting
`the performance of the system.
`
`The LSS experience with separation of data and control
`messages has shown this mechanism to be valuable for
`both performance and modularity.
`
`ARCHIVAL SPACE MANAGEMENT
`
`Storage technology is failing to keep pace with the rapid
`growth of supercomputer memory capacities. It has been
`the experience at LLNL that a small number of large
`bitfiles can occupy a large part of the system’s storage
`resources. These bitfiles are the results of scientific
`simulations and their sizes are proportional to supercom-
`puter memory size. The capacity of new robotic libraries,
`such as the STC 4400, once thought to be adequate, will
`soon be insufficient to handle the data produced at a large
`scientific laboratory.
`
`Due to these technology constraints, it is important to
`maximize the use of current storage media. To fully
`utilize available on-line robotic devices, the LSS com-
`pletely fills each tape cartridge by writing data until an
`end-of-tape error is received. A tape may hold many files
`or a file may span many tapes. The LSS also manages the
`on-line archive as a level in the storage hierarchy. Inactive
`bitfiles migrate to lower-level, off-line vault volumes
`when on-line archival space is needed; bitfiles in the vault
`are moved up to the on-line robotic devices as they are
`accessed.
`
Algorithms used to manage space on magnetic disks
`cannot be used to manage archival media that do not
`provide random write access. To reuse free space on a
`magnetic tape the active data on the tape must first be
`copied to a new volume, an operation called repacking.
`Since copying is an expensive operation, only volumes
`with a considerable amount of free space should be
`repacked.
`
`The necessity of repacking to reclaim unreferenced space
`affects the criteria by which LSS bitfiles are chosen to
`migrate to the vault. While it is desirable to use a space-
`time product algorithm that allows smaller bitfiles to
`remain higher in the hierarchy longer than larger bitfiles, it
`is not desirable to migrate a few large bitfiles from many
`robotic tape cartridges to vault cartridges if the space
`released by these bitfiles does not make any of the robotic
`tapes candidates for repacking. Only bitfiles that together
`account for enough free space on their volumes should be
`migrated to the vault.
`
In the LSS, the archival server maintains a table of the
`current number of referenced blocks on each archival
`volume. The server maintains a minimum number of free
`volumes by repacking volumes on which less than half of
`the data blocks are referenced. If there are not enough free
`blocks on candidate volumes, it migrates bitfiles to the
`vault according to a space-time algorithm, except that
`bitfiles are migrated only if they result in volumes becom-
`ing candidates for repacking. Migrating bitfiles rather than
`volumes to the vault minimizes the number of active
`bitfiles in the vault and minimizes the number of cartridges
`handled manually (particularly convenient since LLNL’s
`vault and on-line facilities are located in different build-
`ings).
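The sketch below captures the two rules just described, assuming a per-volume table of referenced and total blocks: volumes less than half referenced are repacked, and bitfiles are migrated to the vault (in space-time order) only when migrating them pushes their volumes over the repacking threshold. The data shapes and attribute names are illustrative.

    REPACK_THRESHOLD = 0.5   # repack when fewer than half of a volume's blocks are referenced

    def choose_repack_volumes(volumes):
        """volumes: dict of volume_id -> (referenced_blocks, total_blocks)."""
        return [vid for vid, (ref, total) in volumes.items()
                if ref / total < REPACK_THRESHOLD]

    def choose_vault_migrations(bitfiles, volumes):
        """Pick bitfiles, by space-time score, whose removal makes volumes repack candidates.

        bitfiles: objects with .blocks_by_volume (volume_id -> block count) and
        .space_time (size * idle-time score); volumes as above.
        """
        pending = {vid: ref for vid, (ref, _) in volumes.items()}
        migrations = []
        for bf in sorted(bitfiles, key=lambda b: b.space_time, reverse=True):
            # Would migrating this bitfile make any of its volumes a repacking candidate?
            helps = any((pending[vid] - blocks) / volumes[vid][1] < REPACK_THRESHOLD
                        for vid, blocks in bf.blocks_by_volume.items())
            if helps:
                migrations.append(bf)
                for vid, blocks in bf.blocks_by_volume.items():
                    pending[vid] -= blocks
        return migrations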
`
`ARCHIVAL FILE DESCRIPTOR MANAGEMENT
`
`In the LSS, bitfile descriptors for bitfiles managed by the
`archival server are kept on dedicated, redundant disk for
`fast queries and updates and to add flexibility in assigning
`the physical location of the bitfile body. Because the
`bitfile descriptors are vital to the system, great care is
`taken to ensure their integrity and recovery.
`
`To ensure the integrity of the bitfile descriptors, atomic
transactions are used to perform updates to the duplicated
`disks. To protect against multiple disk failures, a log of all
`descriptor insertions, deletions, and modifications is
`maintained. Sixteen descriptor updates (contents of one
`physical disk sector) are collected in an internal buffer
`before being atomically written to disk. The buffer is also
`flushed to disk every two minutes if not enough updates
`have been collected. The log is written to tape periodically
`to guard against the loss of the log disks. Since recovery
`after many months of updates from an incremental log
`would be slow, a complete tape backup of all the archival
server disks is performed weekly. To recover from a
catastrophic failure, the incremental log from the last week
would be applied to the latest full backup.

File descriptor management in the LSS ensures fast access
and data integrity at a moderate cost. In twenty-five years
of operation of the LSS and the predecessor system, no
failures occurred that required accessing the backup tapes.
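A minimal sketch of the log buffering described above follows: descriptor updates are appended to an in-memory buffer that is written atomically when sixteen updates (one physical disk sector) have accumulated or two minutes have elapsed. The write_sector primitive and the periodic tick call are assumptions of the example.

    import time

    SECTOR_UPDATES = 16        # descriptor updates that fit in one physical disk sector
    FLUSH_INTERVAL = 120.0     # seconds between forced flushes

    class DescriptorLog:
        """Sketch of the archival server's descriptor-update log (assumed interfaces)."""
        def __init__(self, write_sector):
            self.write_sector = write_sector      # assumed atomic sector-write primitive
            self.buffer = []
            self.last_flush = time.time()

        def record(self, update):
            """Log one descriptor insertion, deletion, or modification."""
            self.buffer.append(update)
            if len(self.buffer) >= SECTOR_UPDATES:
                self.flush()

        def tick(self):
            """Called periodically; flushes a partially full buffer after two minutes."""
            if self.buffer and time.time() - self.last_flush >= FLUSH_INTERVAL:
                self.flush()

        def flush(self):
            self.write_sector(self.buffer)        # one atomic write to the log disk
            self.buffer = []
            self.last_flush = time.time()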
`
`FILE SYNCHRONIZATION
`
`To preserve a bitfile’s consistency when it is accessed
`concurrently by multiple applications, the LSS uses a
distributed locking mechanism with notification, instead of
a lifetime timeout; an application maintains a lock until it is
finished accessing the bitfile or it is notified that another
`application is interested in accessing the bitfile. When an
`application receives a notification, it may release the lock
`and allow access to the bitfile by the other application or it
`may refuse to give up the lock. An application might
`reasonably keep bitfiles locked for months if no other
`application wishes to access them. If an application
`holding a lock does not respond to a request to release the
`lock, a lock-breaking mechanism can be used by the
`requesting application to recover the lock.
`
The LSS locking mechanism is based on classic read/write
locks, with extensions for notification and lock breaking.
`A lock held by a user application may span many read and
`write accesses, whereas the bitfile servers lock objects
`only for individual accesses. There are three types of
`locks: no-lock, read-lock, and write-lock. A write-lock
`allows both reading and writing by the client holding the
`lock. There may be multiple concurrent read-locks on a
`bitfile allowing multiple readers, but a write-lock excludes
`all other readers and writers. Locks may be categorized
`into levels, with the highest level being a write-lock and
`the lowest level being a no-lock. Each bitfile server
`manages locks for its own clients.
`
The lock operations on a bitfile include set-lock, reduce-lock,
and break-lock. Set-lock is used by clients either to
`set a lock or to lower the level of a lock on a bitfile.
`Reduce-lock is used to ask a client to lower or release its
`lock on a bitfile. A client sends a break-lock request to a
`bitfile server to break a lock held by a client that is down.
`The granularity of locking is a bitfile. A lock request
`includes a bitfile identifier, a lock type, and a notification
`address, which is the requesting client’s address in the case
`of a set-lock and the lock holder’s address in the case of a
`reduce- or break-lock. The notification address in a lock is
`used to identify the lock holder.
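A compressed sketch of the lock table kept by a bitfile server appears below. It implements the three lock levels and the set-lock and break-lock operations described above; the in-memory table and the notify callback used to deliver reduce-lock requests are assumptions of the example.

    NO_LOCK, READ_LOCK, WRITE_LOCK = 0, 1, 2   # lock levels, lowest to highest

    class LockManager:
        """Sketch of per-bitfile locking with notification (assumed notify callback)."""
        def __init__(self, notify):
            self.notify = notify              # notify(holder_addr, bitfile_id): reduce-lock request
            self.locks = {}                   # bitfile_id -> list of (holder_addr, level)

        def set_lock(self, bitfile_id, level, requester):
            """Set, raise, lower, or (with NO_LOCK) release the requester's lock on a bitfile."""
            holders = [(h, l) for h, l in self.locks.get(bitfile_id, []) if h != requester]
            wants_access = level != NO_LOCK
            blocked = wants_access and (
                (level == WRITE_LOCK and len(holders) > 0) or   # a write-lock excludes all others
                any(l == WRITE_LOCK for _, l in holders))       # any lock is excluded by a write-lock
            if blocked:
                for holder, _ in holders:
                    self.notify(holder, bitfile_id)   # ask current holders to lower or release
                return False                          # requester retries after holders respond
            if wants_access:
                holders.append((requester, level))    # multiple read-locks may coexist
            self.locks[bitfile_id] = holders
            return True

        def break_lock(self, bitfile_id, holder):
            """Forcibly remove a lock held by a client that is down; its updates are invalidated."""
            self.locks[bitfile_id] = [(h, l) for h, l in self.locks.get(bitfile_id, [])
                                      if h != holder]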
`
`Crash recovery considerations for locking include: 1) how
`to proceed when a bitfile server is unavailable and 2) how
`to restore lock state when a bitfile server reinitializes after
`a crash. If a bitfile server fails to respond to a reduce lock
request, causing the requesting bitfile server to refuse an
`application access to a bitfile, then the application may
send a break-lock request. In the LSS, breaking a lock
invalidates changes that were made while the lock was in
effect. A client would choose this action only when losing
updates is preferable to waiting for the bitfile server to
`return to service.
`
`ADMINISTRATIVE REQUIREMENTS OF A
`DISTRIBUTED FILE SYSTEM
`
We have identified six administrative requirements for the
LSS: 1) to ensure fair use of storage, 2) to ensure
efficient use of storage, 3) to achieve high performance,
4) to minimize cost, 5) to provide for accountability of
storage resources, and 6) to permit cost recovery. The
proposed method for meeting these requirements is a
combination of charging and allocation. Charging refers
to a mechanism by which users are charged for the storage
resources (space and transfers) they consume. Allocation
refers to a mechanism for ensuring that each user or group
of users has an appropriate share of the available storage
resources.5 In this section, we will focus on the difficulty
`of designing mechanisms that are consistent with the
`administrative requirements, with the design goal of
`location transparency, and with user expectations.
`
The predecessor system to the LSS, which controlled only
central and archival storage, lacked adequate administrative
measures. It allowed unrestricted free use of storage,
and provided no incentive for users to restrict the amount
of data they generated or to delete unwanted data. On the
host disks, storage was also free, but an attempt was made
to restrict the amount of data on disks by purging idle
bitfiles. This measure proved to be ineffective. To
prevent their bitfiles from being purged, users simply ran
applications that periodically touched their bitfiles to keep
them active.
`
`To dispel the notion of free storage, a new charging policy
was implemented in the LSS. The motivation behind this
initial policy was to recover the cost of the system and to
discourage users from maintaining all of their bitfiles on
expensive fast-access storage. The algorithm for computing
charges was [bitfile-length times bitfile-age times
charge-rate], where the charge rate varied from one storage
device to another. The bitfile age was the smaller of the
time since last charged or the time since creation. The
information needed to compute the charge was collected
weekly as each bitfile server traversed its descriptors. A
`separate utility computed charges based on the reports
`from each LSS server, decremented user bank accounts
`accordingly, and produced billing reports.
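For concreteness, the initial charging algorithm amounts to the small computation below; the field names and the per-device rate table are illustrative.

    def weekly_charge(bitfile, now, charge_rate_for_device):
        """Charge = bitfile-length * bitfile-age * charge-rate (rate varies by storage device).

        bitfile is assumed to carry .length, .device, .created, and .last_charged
        (epoch seconds); charge_rate_for_device is an assumed per-device rate table.
        """
        age = min(now - bitfile.last_charged, now - bitfile.created)   # smaller of the two
        return bitfile.length * age * charge_rate_for_device[bitfile.device]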
`
`Problems with the new charging policy soon became
apparent. The primary problem was that the same bitfile
`was multiply charged if it happened to reside on more than
one bitfile server. That was the case, for example, when a
`central bitfile was cached on a host disk. The appearance
`of multiple charges for the same bitfile, and of charges that
`reflected the location of bitfiles in the system, clearly
`defeated the system design goal of transparency. Mecha-
`nisms to remedy the multiple-charge problem are being
`considered. One such mechanism allows users to classify
`bitfiles as active, archival, etc., to help them control their
`costs and to improve the effectiveness of migration
`algorithms. We are also considering a policy based on a
`fixed rate that is independent of the storage medium. This
`policy would be consistent with the system transparency
`goal. As an interim fix, the system has been changed to
`charge only for bitfiles stored on archival volumes.
`
`Charging alone does not meet all of the administrative
`requirements. Because there are physical limits to a
`storage system and because purely economic factors do
not seem to be adequate, we are considering allocation
`limits to ensure fair sharing of storage resources and to
`ensure good performance of the system. The proposal we
`have adopted, though not yet implemented, includes two
`types of allocation, global and disk. Global allocation
`applies to the total space acquired at all levels of storage,
`including host disk, central disk, and archival tape. Each
`user has a global allocation, which is an upper limit on the
total amount of data a user may store. Once this limit is
`reached, the user may not create more bitfiles, or extend
`existing bitfiles, until he generates free space by destroy-
`ing bitfiles.
`
`Disk allocations apply only to host and central disk. Users
`have maximum disk allocations but are allowed to exceed
`their allocations if space allocated to other users is not in
`use. When space is needed, bitfiles belonging to users
`who have exceeded their allocations are migrated first.
`For host disks, users also have minimum disk allocations.
`Users’ bitfiles will not migrate if their space utilization
`falls below this amount.
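The proposed allocation rules can be expressed as two small checks, sketched below under assumed field names: a hard global limit on total stored data, and soft disk limits that only influence which bitfiles migrate first (with a host-disk minimum below which bitfiles are not migrated).

    def may_allocate(user, additional_bytes):
        """Global allocation: a hard upper limit on the total data a user may store."""
        return user.total_stored + additional_bytes <= user.global_allocation

    def migration_order(users_on_disk):
        """Disk allocation: users over their maximum disk allocation are migrated first;
        host-disk users below their minimum allocation are not migrated at all."""
        over = [u for u in users_on_disk if u.disk_usage > u.max_disk_allocation]
        normal = [u for u in users_on_disk
                  if u.min_disk_allocation <= u.disk_usage <= u.max_disk_allocation]
        return over + normal        # users below their minimum are excluded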
`
`Designing adequate charging and allocation mechanisms
`for a transparent system poses a difficult challenge.
`Charging policies that strive to do accurate cost recovery
`are inherently inconsistent with the goal of transparency.
`There is much to learn about selecting and implementing
`effective administrative policies in these areas.
`
`NETWORK-WIDE NAMING MECHANISM AND
`USER EXPECTATIONS
`
`Before we implemented a network-wide naming mecha-
`nism, the LSS was a distributed system but not a transpar-
`ent one. Users were aware of object location and had to
`explicitly invoke transfer routines to move bitfiles between
`host machines and the central server. Inactive host bitfiles
`were automatically destroyed and access to archival
`bitfiles was relatively slow. The central name server and
each host maintained separate unconnected directory
`structures which did not migrate between machines. As a
`result of this environment, users wrote programs to keep
`bitfiles on host disks by periodically accessing bitfiles
`stored on the hosts. Users resisted the idea of location
`transparency because it meant they would lose control of
`access time.
`
`The LSS location transparency design implementation
`consists of two phases. The first includes creation of a
`network-wide directory structure, for name transparency;
`and the second, which is yet to be implemented, caches
`and migrates directories for performance. In the first
`phase, we connected the directory structures stored on the
`host machines with the directory structure in central.
`Central and archival bitfiles and central directories could
`then be cataloged in directories on the host machines and
`vice versa, creating a transparent, cross-machine naming
`structure. Because bitfiles and directories have globally
`unique identifiers, the storage system could locate the
`resources regardless of the directory in which they were
`cataloged and regardless of which server managed the
`resource.
`
`In this strange new storage world, users became frustrated
`when they unknowingly crossed machine boundaries in
`perusing through their directory structures because
`performance suddenly declined. Yet, many resisted the
`idea of caching and migration of bitfiles and directories to
`improve performance; these users desired to control
`location of their bitfiles and directories to ensure fast
`access. We believe, however, that the system can best
`manage object location, much as demand paging systems
`manage virtual memory.
`
`There are two lessons to be learned from our experience.
`First, when creating a transparent storage system it is
`important to achieve transparency in all aspects before
`putting the system on-line. Second, in presenting a new
`storage system to users, system designers need to carefully
`consider what the users expect and educate them about the
`differences in the new system.
`
`SEPARATION OF HUMAN NAMING FROM
`OTHER OBJECT SERVERS
`
`When considering a human-oriented naming mechanism
`for a distributed storage system, designers must choose
`between one that is integral to the object managers, such as
`UNIX and CFS, and one that is isolated in separate name
`managers, such as XDFS and ALPINE.6 We chose the
`latter design, recognizing that it has both advantages and
`disadvantages. The advantages of a separate naming
mechanism include: 1) providing a uniform mechanism
for naming many different types of objects, not just
bitfiles; 2) providing all other servers independence from
human-oriented naming conventions, allowing them to
function in a variety of user environments; 3) permitting
the other servers to be optimized to manage their own
objects without the need to deal with human-oriented
naming issues; 4) permitting new types of objects to be
named in the same way as existing objects, facilitating
extensibility; 5) permitting several forms of application-dependent
and general-purpose higher-level name services
to be provided in addition to a particular directory service;
and 6) permitting applications to create, access, and
destroy objects without storing the object identifiers in any
name service.
`
`The disadvantages of a separate naming mechanism,
`discussed below, include: 1) complicating certain aspects
`of storage management; 2) requiring additional security
`mechanisms; and 3) causing performance degradation in a
`particular class of applications.
`
`We believe that the advantages of a separate naming
`mechanism outweigh the disadvantages and that the LSS
`has achieved these advantages. In particular, a separate
`naming mechanism allows the bitfile servers of the LSS to
`be integrated with the naming mechanism of any existing
`operating system. We are now working on minimizing
`the effects of its disadvantages, as described below.
`
`Storage Management
`
`The LSS servers will perform functions given a valid
`object identifier containing the appropriate access rights.
`The general availability of certain server functions, in
`combination with the separation of the name servers from
`the other object servers, has consequences for storage
`management. Specifically, clients have the potential of
`creating lost objects, objects for which there are no extant
`identifiers, and of creating dangling pointers, identifiers no
`longer pointing to valid objects. Lost objects are created
`when a client deletes all references to the object without
`destroying the object. Dangling pointers are created when
`a client destroys an object but does not delete all the
`references to it.
`
`The problems of lost objects and dangling pointers can be
`solved by correctly managing reference counts and by
prohibiting explicit destroys of objects. A count of all
references to an object is kept with the object as part of its
`storage management state. When the count becomes zero,
`meaning the object is no longer referenced, the space it
`occupies can be reclaimed for further use. All applications
`in the LSS environment that store identifiers to objects
`need to increment and decrement reference counts
`correctly to protect against lost objects and delete object
`references before destroying objects to protect against
`dangling pointers. For example, name servers should send
`increment and decrement messages to the object servers
`when objects are inserted and deleted from directories.
`When the reference count goes to zero, the object can be
`implicitly destroyed.
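The reference-count discipline described above is sketched below. The server-side counter and the name-server hooks are illustrative; as the following paragraphs note, nothing in the current LSS forces clients to follow this discipline.

    class ObjectServer:
        """Sketch of an object server that reclaims objects when their reference count reaches zero."""
        def __init__(self):
            self.refcounts = {}              # object_id -> number of stored references

        def create(self, object_id):
            self.refcounts[object_id] = 0    # newly created, not yet cataloged anywhere

        def increment(self, object_id):
            self.refcounts[object_id] += 1   # a name server inserted the id into a directory

        def decrement(self, object_id):
            self.refcounts[object_id] -= 1   # a name server deleted a directory entry
            if self.refcounts[object_id] == 0:
                self.destroy(object_id)      # implicit destroy: no references remain

        def destroy(self, object_id):
            del self.refcounts[object_id]    # reclaim descriptor and body space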
`
`23
`
`Oracle Exhibit 1010, page 6
`
`
`
However, the increment- and decrement-reference-count
functions are not controlled by the current LSS access
control mechanism and therefore may be invoked by any
client possessing a valid machine-oriented identifier.
There is no way to ensure that clients will invoke the
reference-count functions correctly. Furthermore, there is
no way to guarantee that all naming services or applications
that store object identifiers will correctly increment
and decrement the reference counts as objects are inserted
and deleted from their databases. Similarly, there is no
way to guarantee that all object references are deleted
before the object is explicitly destroyed. In these circumstances,
reference counts cannot be relied upon to reclaim
storage space.

One solution to these problems is to allow only trusted
naming applications to invoke the increment- and decrement-reference-count
and destroy functions and to require
clients to store machine-oriented identifiers only in these
applications. The additional access control necessary to
implement trusted applications could be obtained through
several mechanisms including rights amplification
capabilities, rights verification servers, or access lists. The
LSS name servers would be the initial set of trusted
applications, but it would be reasonable to include any
naming service that could establish its correct use of
reference-count and destroy functions. Of course, a
requirement to use a restricted set of applications to
catalog object identifiers would restrict clients’ ability to
use their own naming services.
`
`Security
`
`Security is affected in two ways by the separate naming
`mechanism. First, it is hampered when a client, possessing
a valid object identifier, maliciously or inadvertently
destroys an object still being referenced by other clients by
sending repeated decrement-reference-count functions to
`the