Using Secure Coprocessors

Bennet Yee
May 1994
CMU-CS-94-149

School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213

Submitted in partial fulfillment of the requirements
for the degree of Doctor of Philosophy

Thesis Committee:
Doug Tygar, Chair
Rick Rashid
M. Satyanarayanan
Steve White, IBM Research

Copyright © 1994 Bennet Yee

This research was sponsored in part by the Advanced Research Projects Agency under contract number F19628-93-C-0193; the Avionics Laboratories, Wright Research and Development Center, Aeronautical Systems Division (AFSC), U.S. Air Force, Wright-Patterson AFB, OH 45433-6543 under Contract F33615-90-C-1465, ARPA Order No. 7597; IBM; Motorola; the National Science Foundation under Presidential Young Investigator Grant CCR-8858087; TRW; and the U.S. Postal Service.

The views and conclusions in this document are those of the authors and do not necessarily represent the official policies or endorsements of any of the research sponsors.
SAMSUNG EX. 1006 - 1/104
Keywords: authentication, coprocessor, cryptography, integrity, privacy, security
Abstract

How do we build distributed systems that are secure? Cryptographic techniques can be used to secure the communications between physically separated systems, but this is not enough: we must be able to guarantee the privacy of the cryptographic keys and the integrity of the cryptographic functions, in addition to the integrity of the security kernel and access control databases we have on the machines. Physical security is a central assumption upon which secure distributed systems are built; without this foundation even the best cryptosystem or the most secure kernel will crumble. In this thesis, I address the distributed security problem by proposing the addition of a small, physically secure hardware module, a secure coprocessor, to standard workstations and PCs. My central axiom is that secure coprocessors are able to maintain the privacy of the data they process.

This thesis attacks the distributed security problem from multiple sides. First, I analyze the security properties of existing system components, at both the hardware and software level. Second, I demonstrate how physical security requirements may be isolated to the secure coprocessor, and show how security properties may be bootstrapped using cryptographic techniques from this central nucleus of security within a combined hardware/software architecture. Such isolation has practical advantages: the nucleus of security-relevant modules provides additional separation of concerns between functional requirements and security requirements, and the security modules are more centralized and their properties more easily scrutinized. Third, I demonstrate the feasibility of the secure coprocessor approach, and report on my implementation of this combined architecture on top of prototype hardware. Fourth, I design, analyze, implement, and measure the performance of cryptographic protocols with super-exponential security for zero-knowledge authentication and key exchange. These protocols are suitable for use in security-critical environments. Last, I show how secure coprocessors may be used in a fault-tolerant manner while still maintaining their strong privacy guarantees.
Contents

1  Introduction and Motivation                                      1

2  Secure Coprocessor Model                                         5
   2.1  Physical Assumptions for Security                           5
   2.2  Limitations of Model                                        6
   2.3  Potential Platforms                                         7
   2.4  Security Partitions                                         8
   2.5  Machine-User Authentication                                10
   2.6  Previous Work                                              11

3  Applications                                                    13
   3.1  Host Integrity Check                                       13
        3.1.1  Host Integrity with Secure Coprocessors             13
        3.1.2  Absolute Limits                                     15
        3.1.3  Previous Work                                       16
   3.2  Audit Trails                                               19
   3.3  Copy Protection                                            19
        3.3.1  Copy Protection with Secure Coprocessors            20
        3.3.2  Previous Work                                       22
   3.4  Electronic Currency                                        22
        3.4.1  Electronic Money Models                             22
        3.4.2  Previous Work                                       26
   3.5  Secure Postage                                             28
        3.5.1  Cryptographic Stamps                                29
        3.5.2  Software Postage Meters                             31

4  System Architecture                                             35
   4.1  Abstract System Architecture                               35
        4.1.1  Operational Requirements                            35
        4.1.2  Secure Coprocessor Architecture                     36
        4.1.3  Crypto-paging and Sealing                           37
        4.1.4  Secure Coprocessor Software                         37
        4.1.5  Key Management                                      38
   4.2  Concrete System Architecture                               39
        4.2.1  System Hardware                                     39
        4.2.2  Host Kernel                                         43
        4.2.3  Coprocessor Kernel                                  47

5  Cryptographic Algorithms/Protocols                              53
   5.1  Description of Algorithms                                  53
        5.1.1  Key Exchange                                        54
        5.1.2  Authentication                                      56
        5.1.3  Merged Authentication and Secret Agreement          58
        5.1.4  Practical Authentication and Secret Agreement       60
        5.1.5  Fingerprints                                        61
   5.2  Analysis of Algorithms                                     62
        5.2.1  Key Exchange                                        62
        5.2.2  Authentication                                      62
        5.2.3  Merged Authentication and Secret Agreement          64
        5.2.4  Practical Authentication and Secret Agreement       65
        5.2.5  Fingerprints                                        66

6  Bootstrap and Maintenance                                       71
   6.1  Simple Secure Bootstrap                                    72
   6.2  Flexible Secure Bootstrap and Maintenance                  72
   6.3  Hardware-level Maintenance                                 73
   6.4  Tolerating Hardware Faults                                 74

7  Verification and Potential Failures                             77
   7.1  Hardware Verification                                      77
   7.2  System Software Verification                               78
   7.3  Failure Modes                                              79
   7.4  Previous Work                                              80

8  Performance                                                     81
   8.1  Cryptographic Algorithms                                   81
   8.2  Crypto-Paging                                              83

9  Conclusion and Future Work                                      85
List of Figures

3.1  Copy-Protected Software Distribution                          21
3.2  Postage Meter Indicia                                         28
3.3  PDF417 encoding of Abraham Lincoln's Gettysburg Address       30
4.1  Dyad Prototype Hardware                                       40
4.2  DES Engine Data Paths                                         41
4.3  Host Software Architecture                                    44
5.1  Fingerprint residue calculation                               67
5.2  Fingerprint calculation (C code)                              68
List of Tables

2.1  Subsystem Vulnerabilities Without Cryptographic Techniques     9
2.2  Subsystem Vulnerabilities With Cryptographic Techniques        9
8.1  Cryptographic Algorithms Run Time                             82
Acknowledgements

I would like to thank Doug Tygar, without whom this thesis would not have been possible. I would also like to thank my parents, without whom I would not have been possible.

I was fortunate to have a conscientious and supportive thesis committee: thanks to Rick Rashid for his helpful advice (and his colorful metaphors); thanks to Steve White and his crew at IBM Research for their insights into secure coprocessors (and for their generous hardware grant); thanks to Satya for systems advice.

Special thanks go to Michael Rabin, whose ideas inspired my protocol work. I am also indebted to Alfred Spector, who helped Doug and me with Strongbox, the predecessor to Dyad. Steve Guattery was generous with his time in helping with the proofreading. (All errors remaining are mine, of course.)

Thanks to Wayne Wilkerson and his staff at the U.S. Postal Service for many discussions related to cryptographic stamps.

Thanks to Symbol Technologies Inc. for figure 3.3.
Chapter 1

Introduction and Motivation

Is privacy the first roadkill on the Information Superhighway?1 Will superhighwaymen waylay new settlers to this electronic frontier?

While these questions may be too steeped in metaphor, they raise very real concerns. The National Information Infrastructure (NII) [32] grand vision would have remote computers working harmoniously together, communicating via an "electronic superhighway," providing new informational goods and services for all.

Unfortunately, many promising NII applications demand difficult-to-achieve distributed security properties. Electronic commerce applications such as electronic stock brokerage, pay-per-use, and metered services have strict requirements for authorization and confidentiality: providing trustworthy authorization requires user authentication; providing confidentiality and privacy of communications requires end-to-end encryption. As a result of the need for encryption and authentication, our systems must be able to maintain the secrecy of the keys used for encrypting communications, the secrecy of the user-supplied authentication data (e.g., passwords), and the integrity of the authentication database against which the user-supplied authentication data is checked. Furthermore, hand in hand with the need for privacy is the need for system integrity: without the integrity of the system software that mediates access to protected objects or the integrity of the access control database, no system can provide any sort of privacy guarantee.

Can strong privacy and integrity properties be achieved on real, distributed systems? The most common computing environments today on college campuses and in workplaces are open computer clusters and workstations in offices, all connected by networks. Physical security is rarely realizable in these environments: neither computer clusters nor offices are secure against casual intruders,2 let alone the determined expert. Even if office locks were safe, the physical media for our local networks are often but a ceiling tile away; any hacker who knows her raw bits can figure out how to tap into a local network using a PC. To make matters worse, for many security applications we must be able to protect our systems against the occasional untrustworthy user as well as intruders from the outside.

1 The source of this quote is unclear; one paraphrased version appeared in print, as "If privacy isn't already the first roadkill along the information superhighway, then it's about to be" [55], and other variants of this have appeared in diverse locations.
2 The knowledge of how to pick locks is widespread; many well-trained engineers can pick office locks [96].
Standard textbook treatments of computer security assert that physical security is a necessary precondition to achieving overall system security. While this may have been a requirement that was readily realizable for yesterday's computer centers with their large mainframes, it is clearly not a realistic expectation for today's PCs and workstations: their physical hardware is easily accessible to authorized users and malicious attackers alike. With complete physical access, adversaries can mount various attacks: they can copy the hard disk's contents for offline analysis; replace critical system programs with trojan horse versions; replace various hardware components to bypass logical safeguards; and so on.

By making the processing power of workstations widely and easily available, we have made the entire system hardware accessible to interlopers. Without a foundation of physical security to build on, logical security guarantees crumble. How can we remedy this?

Researchers have realized the vulnerability of network wires and other communication media. They have brought tools from cryptography to bear on the problem of insecure communication networks, leading to a variety of key exchange and authentication protocols [25, 27, 30, 59, 67, 78, 80, 93, 98] for use with end-to-end encryption, providing privacy for network communications. Others have noted the vulnerability of workstations and their disk storage to physical attacks, and have developed a variety of secret sharing algorithms for protecting data from isolated attacks [39, 75, 86]. Tools from the field of consensus protocols can be applied as well. Unfortunately, all of these techniques, while powerful, still assume some measure of physical security, a property unavailable on conventional workstations and PCs. The gap between reality and the physical security assumption must be closed before these techniques can be implemented in a believable fashion.

Can we provide the necessary physical security to PCs and workstations without crippling their accessibility? Can real, secure electronic commerce applications be built in a networked, distributed computing environment? I argue that the answer to these questions is yes, and I have built a software/hardware system called Dyad that demonstrates my ideas.

In this thesis, I analyze the distributed security problem not just from the traditional cryptographic protocol viewpoint but also from the viewpoint of a hardware/software system designer. I address the need for physical security and show how we can obtain overall system security by bootstrapping from a limited amount of physical security that is achievable for workstation/PC platforms, by incorporating a secure coprocessor in a tamper-resistant module. This secure coprocessor may be realized as a circuit board on the system bus, a PCMCIA3 card, or an integrated chip; in my Dyad system, it is realized by the Citadel prototype from IBM, a board-level secure coprocessor system.

I analyze the natural security properties inherent in secure coprocessor enhanced computers, and demonstrate how security guarantees can be strengthened by bootstrapping security using cryptographic techniques. Building on this analysis, I develop a combined software/hardware system architecture, providing a firm foundation upon which applications with stringent security requirements can be built. I describe the design of the Citadel prototype secure coprocessor hardware, the Mach [2] kernel port running on top of it, the resultant system integration with the host platform, the security applications running on top of the secure coprocessor, and new, highly secure cryptographic protocols for key exchange and zero-knowledge authentication.4

By attacking the distributed security problem from all sides, I show that it is eminently feasible to build highly secure distributed systems, with bootstrapped security properties derived from physical security.

The next chapter discusses in detail what is meant by the term secure coprocessor and the basic security properties that secure coprocessors must possess. Chapter 3 outlines five applications that are impossible without the security properties provided by secure coprocessors. Chapter 4 describes the combined hardware/software system architecture of a secure coprocessor-enhanced host. I consider the basic operational requirements induced by the demands of security applications and then describe the actual system architecture as implemented in the Dyad secure coprocessor system prototype. Chapter 5 describes my new cryptographic protocols, and gives an in-depth analysis of their cryptographic strength. Chapter 6 addresses the security issues present when initializing a secure coprocessor, and presents techniques to make a secure coprocessor system fault tolerant. Additionally, I demonstrate techniques whereby proactive fault diagnostics may allow some classes of hardware faults to be detected and permit the replacement of a malfunctioning secure coprocessor. Chapter 7 shows how both the secure coprocessor hardware and system software may be verified, and examines the consequences of system privacy breaches. Chapter 8 gives performance figures for the cryptographic algorithms, the overhead incurred by crypto-paging, and the raw DMA transfer times for our prototype system. In chapter 9, I propose challenges for future developers of secure coprocessors.

3 Personal Computer Memory Card International Association
4 Some of this research was joint work: the design of Dyad, the secure applications, and the new protocols was done with Doug Tygar of CMU. The basic secure coprocessor model was developed with White, Palmer, and Tygar. The Citadel system was designed by Steve Weingart, Steve White, and Elaine Palmer of IBM; I debugged Citadel and redesigned parts of it.
Chapter 2

Secure Coprocessor Model

A secure coprocessor is a hardware module containing (1) a CPU, (2) bootstrap ROM, and (3) secure non-volatile memory. This hardware module is physically shielded from penetration, and the I/O interface to the module is the only way to access its internal state. (Examples of packaging technology are discussed later in section 2.3.) This hardware module can store cryptographic keys without risk of release. More generally, the CPU can perform arbitrary computations (under control of the operating system); thus the hardware module, when added to a computer, becomes a true coprocessor. Often, the secure coprocessor will contain special-purpose hardware in addition to the CPU and memory; for example, high speed encryption/decryption hardware may be used.

Secure coprocessors must be packaged so that physical attempts to gain access to the internal state of the coprocessor will result in resetting the state of the secure coprocessor (i.e., erasure of the secure non-volatile memory contents and CPU registers). An intruder might be able to break into a secure coprocessor and see how it is constructed; the intruder cannot, however, learn or change the internal state of the secure coprocessor except through normal I/O channels or by forcibly resetting the entire secure coprocessor. The guarantees about the privacy and integrity of the secure non-volatile memory provide the foundations needed to build distributed security systems.

With a firm security foundation available in the form of a secure coprocessor, greater security can be achieved for the host computer.
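A minimal sketch makes these semantics precise: internal state is reachable only through the I/O interface, and any detected tamper event zeroizes the non-volatile memory rather than disclosing it. (The class and method names below are illustrative only; they are not part of Dyad or Citadel.)

```python
class SecureCoprocessor:
    """Toy model of the secure coprocessor state machine: internal state
    is reachable only via the I/O interface, and any detected tampering
    resets (zeroizes) that state instead of revealing it."""

    def __init__(self):
        self._nvram = {}        # secure non-volatile memory (keys, secrets)
        self._tampered = False

    # --- the only legitimate access path: the I/O interface ---
    def store_key(self, name, key_bytes):
        if self._tampered:
            raise RuntimeError("coprocessor was reset; no secrets remain")
        self._nvram[name] = key_bytes

    def has_key(self, name):
        return not self._tampered and name in self._nvram

    # --- what the tamper-detection circuitry guarantees ---
    def tamper_detected(self):
        self._nvram.clear()     # erase secure NVRAM and, conceptually, CPU registers
        self._tampered = True


cop = SecureCoprocessor()
cop.store_key("comm-key", b"\x13\x37")
assert cop.has_key("comm-key")
cop.tamper_detected()                 # physical intrusion attempt
assert not cop.has_key("comm-key")    # secrets are erased, never disclosed
```

The essential point the model captures is that intrusion costs the attacker the very state being attacked: breaking in yields a reset device, not its secrets.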
2.1 Physical Assumptions for Security

All security systems rely on a nucleus of assumptions. For example, it is often assumed that encryption systems are resistant to cryptanalysis. Similarly, I take as axiomatic that secure coprocessors provide private and tamper-proof memory and processing. These assumptions may be falsified: for example, attackers may exhaustively search cryptographic key spaces. Similarly, it may be possible to falsify my physical security axiom by expending enormous resources (possibly feasible for very large corporations or government agencies). I rely on a physical work-factor argument to justify my axiom, similar in spirit to the intractability assumptions of cryptography. My secure coprocessor model does not depend on the particular technology used to satisfy the work-factor assumption. Just as cryptographic schemes may be scaled or changed to increase the resources required to penetrate a cryptographic system, current security packaging techniques may be scaled or changed to increase the work-factor necessary to successfully bypass the secure coprocessor protections.

Chapter 3 shows how to build secure subsystems running partially on a secure coprocessor.
2.2 Limitations of Model

Confining all computation within secure coprocessors would ideally suit our security needs, but in reality we cannot, and should not, convert all of our processors into secure coprocessors. There are two main reasons: first, the inherent limitations of physical security techniques for packaging circuits; and second, the need to keep the system maintainable. Fortunately, as we shall see in chapter 3, we do not need to physically shield the entire computer. It suffices to physically protect only a portion of it.

If the secure coprocessor is sealed in epoxy or a similar material, heat dissipation requirements limit us to one or two printed circuit boards. Future developments may eventually relax this and allow us to make more of the solid-state components of a multiprocessor workstation physically secure, perhaps an entire card cage; however, the security problems of external mass storage and networks will in all likelihood remain constant.

While it may be possible to securely package an entire multiprocessor, doing so is likely to be impractical, and it is unnecessary besides. If we can obtain similar functionality by placing the security concerns within a single coprocessor, we can avoid the cost and maintenance problems of making multiple processors and all memory secure.

Easy maintenance requires modular design. Once a hardware module is encapsulated in a physically secure package, disassembling the module to fix or replace some component will probably be impossible. Wholesale board swapping is a standard maintenance and hardware-debugging technique, but defective boards are normally returned for repairs; with physical encapsulation, this will no longer be possible, thus driving up costs. Moreover, packaging considerations and the extra hardware development time imply that the secure coprocessor's technology may lag behind the host system's technology, perhaps by one generation. The right balance between physically shielded and unshielded components depends on the class of intended applications. For many applications, only a small portion of the system must be protected.

What about system-level recovery after a hardware fault? If secrets are kept only within a single secure coprocessor, replacing a faulty unit with a different one will lead to data loss. After we replace a broken coprocessor with a good one, will we be able to continue running our applications? Section 6.4 gives techniques for periodic checkup testing and fault-tolerant operation of secure coprocessors.
2.3 Potential Platforms

Several physically secure processors exist. This section describes some of these platforms, the types of attacks these systems resist, and the system limitations arising from packaging technology.

The µABYSS [103] and Citadel [105] systems employ board-level protection. The systems include a standard microprocessor (Citadel uses an Intel 80386), some non-volatile (battery-backed) RAM, and special sensing circuitry to detect intrusion into a protective casing around the circuit board. Additionally, Citadel includes fast (approximately 30 MBytes/sec) DES encryption hardware. The security circuitry erases non-volatile memory before attackers can penetrate far enough to disable the sensors or read memory contents.

Physical security mechanisms must protect against many types of physical attacks. In the µABYSS and Citadel systems, it is assumed that intruders must probe through a straight hole of at least one millimeter in diameter to penetrate the system (probe pin voltages, destroy sensing circuitry, etc.). To prevent direct intrusion, these systems incorporate sensors consisting of fine (40 gauge) nichrome wire and low-power sensing circuits powered by a long-lived battery. The wires are loosely but densely wrapped in many layers around the circuit board, and the entire assembly is then dipped in epoxy. The loose and dense wrapping makes the exact position of the wires in the epoxy unpredictable to an adversary. The sensing electronics detect open circuits or short circuits in the wires and erase non-volatile memory if intrusion is attempted. Physical intrusion by mechanical means (e.g., drilling) cannot penetrate the epoxy without breaking one of these wires.

Another attack is to dissolve the epoxy with solvents to expose the sensor wires. To block this attack, the epoxy is designed to be chemically "harder" than the sensor wires. Solvents will destroy at least one of the wires, and thus create an open circuit, before the intruder can bypass the potting material and access the circuit board.

Yet another attack uses low temperatures. Semiconductor memories retain state at very low temperatures even without power, so an attacker could freeze the secure coprocessor to disable the battery and then extract memory contents. The systems contain temperature sensors which trigger erasure of secrets before the temperature drops below the critical level. (The system must have enough thermal mass to prevent rapid freezing, by being dipped into liquid nitrogen or helium, for example, and this places some limitations on the minimum size of the system. This has important implications for secure smartcard designers.)

The next step in sophistication is the high-powered laser attack. The idea is to use a high-powered (ultraviolet) laser to cut through the epoxy and disable the sensing circuitry before it has a chance to react. To protect against such an attack, alumina or silica is added, causing the epoxy to absorb ultraviolet light. The generated heat creates mechanical stress, causing the sensing wires to break.

Instead of the board-level approach, physical security can be provided for smaller, chip-level packages. Clipper and Capstone, the NSA's proposed DES replacements [4, 99, 100], are special-purpose encryption chips. These integrated circuit chips are reportedly designed to destroy key information (and perhaps other important encryption parameters; the encryption algorithm, Skipjack, is supposed to be secret as well) when attempts are made to open the integrated circuit chips' packaging. Similarly, the iPower [58] encryption chip by National Semiconductor has tamper detection machinery which causes chemicals to be released to erase secure data. The quality of protection and the types of attacks which these systems can withstand have not been published.

Smartcards are another approach to physically secure coprocessing [54]. A smartcard is a portable, super-small microcomputer. Sensing circuitry is less critical for many applications (e.g., authentication, storage of the user's cryptographic keys), since physical security is maintained by virtue of the card's portability. Users carry their smartcards with them at all times and provide the necessary physical security. Authentication techniques for smartcards have been widely studied [1, 54]. Additionally, newer smartcard designs such as some GEMPlus or Mondex cards [35] feature limited physical security protection, providing a true (simple) secure coprocessor.

The technology envelope defined by these platforms and their implementation parameters constrains the limits of secure coprocessor algorithms. As the computation power and physical protection mechanisms for mobile computers and smartcards evolve, this envelope will grow.
2.4 Security Partitions

System components of networked hosts may be classified by their vulnerabilities to various attacks and placed within "native" security partitions. These natural security partitions contain system components that provide common security guarantees. Secure coprocessors add a new system component with fewer inherent vulnerabilities and create a new security partition; cryptographic techniques reduce some of these vulnerabilities and enhance security. For example, using a secure coprocessor to boot a system and ensure that the correct operating system is running provides privacy and integrity guarantees on memory not otherwise possible. Public workstations can employ secure coprocessors and cryptography to guarantee the privacy of disk storage and provide integrity checks.
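The boot-time check mentioned above (treated in detail in section 3.1) reduces to comparing a cryptographic hash of the software the host is about to run against a known-good value held where the host cannot alter it. A minimal sketch, with SHA-256 standing in for the fingerprint functions developed in chapter 5 and all names hypothetical:

```python
import hashlib

# Known-good hash of the OS image, held in the coprocessor's secure
# non-volatile memory (assumption: the host cannot modify this value).
trusted_os_hash = hashlib.sha256(b"pristine kernel image").hexdigest()

def integrity_check(os_image):
    """Boot-time check: hash what the host is about to run and compare
    it against the value sealed inside the secure coprocessor."""
    return hashlib.sha256(os_image).hexdigest() == trusted_os_hash

assert integrity_check(b"pristine kernel image")       # unmodified kernel passes
assert not integrity_check(b"trojaned kernel image")   # tampered kernel is rejected
```

The security of the scheme rests entirely on where the reference hash lives: on an unprotected host the attacker could rewrite it, which is why the comparison value must sit inside the physically protected partition.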
Table 2.1 shows the vulnerabilities of various types of memory when no cryptographic techniques are used. Memory within a secure coprocessor is protected against physical access. With the proper protection mechanisms, data stored within a secure coprocessor can be neither read nor tampered with. A working secure coprocessor can ensure that the operating system was booted correctly (see section 3.1) and that the host RAM is protected against unauthorized logical access.5 It is not, however, well protected against physical access: we can connect logic analyzers to the memory bus and listen passively to memory traffic, or use an in-circuit emulator to replace the host processor and force the host to periodically disclose the host system's RAM contents. Furthermore, it is possible to use multi-ported memory to remotely monitor RAM. (While it may be impractical to do this in a way invisible to users, this line of attack cannot be entirely ruled out.) Secondary storage may be more easily attacked than RAM since the data can be modified offline; to do this, however, an attacker must gain physical access to the disk. Network communication is completely vulnerable to online eavesdropping and offline analysis, as well as online message tampering. Since networks are used for remote communication, it is clear that these attacks may be performed remotely.

                         Vulnerabilities
Subsystem                Integrity/Privacy          Availability
Secure Coprocessor       None                       None
Host RAM                 Online Physical Access     Online Physical Access
Secondary Store          Offline Physical Access    Offline Physical Access
Network (communication)  Online Remote Access,      Online Remote Access
                         Offline Analysis

Table 2.1 Subsystem Vulnerabilities Without Cryptographic Techniques

5 I assume that the operating system provides protected address spaces. Paging is performed on either a remote disk via encrypted network communication (see section 4.1.3 below) or a local disk which is immune to all but physical attacks. To protect against physical attacks in the latter case, we may need to encrypt the data anyway or ensure that we can erase the paging data from the disk before shutting down.
                         Vulnerabilities
Subsystem                Integrity/Privacy      Availability
Secure Coprocessor       None                   None
Host RAM                 Host Processor Data    Online Physical Access
Secondary Store          None                   Offline Physical Access
Network (communication)  None                   Online Remote Access

Table 2.2 Subsystem Vulnerabilities With Cryptographic Techniques
As table 2.2 illustrates, encryption can strengthen privacy guarantees. Data modification vulnerabilities still exist; however, tampering can be detected by using cryptographic checksums, as long as the checksum values are stored in tamper-proof memory. Note that the privacy level is a function of the subsystem component using the data. If host RAM data is processed by the host CPU, moving the data to the secure coprocessor for encryption is either useless or prohibitively expensive [29, 61]: the data must appear in plaintext form to the host CPU and is vulnerable to online attacks. However, if the host RAM data is serving as backing store for secure coprocessor data pages (see section 4.1.3), encryption is appropriate. Similarly, encrypting the secondary store via the host CPU protects that data against offline privacy loss but not online attacks, whereas encrypting that data within the secure coprocessor protects it against online privacy attacks as well, as long as the data need never appear in plaintext form in the host memory.

For example, if we wish to send and read encrypted electronic mail, encryption and decryption can be performed by the host processor, since the data must reside within both hosts for the sender to compose it and for the receiver to read it. But the exchange of the encryption key used for the message should involve secure coprocessor computation: key exchange should use secrets that must remain within the secure coprocessor.6
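The checksum idea can be made concrete with a short sketch. HMAC-SHA256 is used here as a modern stand-in for the keyed checksums the text describes (the thesis itself develops fingerprint functions in chapter 5), and all names are illustrative rather than Dyad's:

```python
import hmac
import hashlib

# Tamper-proof memory inside the secure coprocessor (assumption: the
# host can never read or modify these values directly).
secure_key = b"key-held-only-inside-the-coprocessor"
trusted_checksums = {}

def seal(name, data):
    """Checksum data that will live on untrusted storage (host RAM or disk)."""
    trusted_checksums[name] = hmac.new(secure_key, data, hashlib.sha256).digest()

def verify(name, data):
    """Detect tampering: recompute the checksum and compare it
    against the value sealed in tamper-proof memory."""
    recomputed = hmac.new(secure_key, data, hashlib.sha256).digest()
    return hmac.compare_digest(trusted_checksums[name], recomputed)

page = b"contents of a page on untrusted secondary store"
seal("page-42", page)
assert verify("page-42", page)                     # untouched data passes
assert not verify("page-42", page + b"tampered")   # any modification is detected
```

Note that this detects tampering but does not prevent it: an attacker with physical access can still destroy or replace the stored data, which is exactly the residual availability vulnerability shown in table 2.2.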
2.5 Machine-User Authentication

How can we authenticate users to machines and vice versa? One solution is smartcards (see section 2.3) with zero-knowledge protocols (see section 5.1.2).

Another way to verify the presence of a secure coprocessor is to ask a third-party entity, such as a physically sealed third-party computer, to check the machine's identity for the user. This service can also be provided by normal network server machines such as file servers. Remote services must be difficult for attackers to emulate. Users can then detect that something is amiss by noticing the absence of these services. This necessarily implies that these remote services must be available before the users authenticate to the system. The secure coprocessor must be present for the remote services to work correctly.
Evidence that these serv
