Policy-Directed Code Safety

by

David E. Evans

S.B. Massachusetts Institute of Technology (1994)
S.M. Massachusetts Institute of Technology (1994)

Submitted to the Department of Electrical Engineering and Computer Science
in partial fulfillment of the requirements for the degree of

Doctor of Philosophy

at the

Massachusetts Institute of Technology

February 2000

© Massachusetts Institute of Technology 1999. All rights reserved.

Author: David Evans
Department of Electrical Engineering and Computer Science
October 19, 1999

Certified by: John V. Guttag
Professor, Computer Science
Thesis Supervisor

Accepted by: Arthur C. Smith
Chairman, Departmental Committee on Graduate Students

Policy-Directed Code Safety

by

David E. Evans

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy

Abstract

Executing code can be dangerous. This thesis describes a scheme for protecting the user by constraining the behavior of an executing program. We introduce Naccio, a general architecture for constraining the behavior of program executions. Naccio consists of languages for defining safety policies in a platform-independent way and a system architecture for enforcing those policies on executions by transforming programs. Prototype implementations of Naccio have been built that enforce policies on JavaVM classes and Win32 executables.

Naccio addresses two weaknesses of current code safety systems. One problem is that current systems cannot enforce policies with sufficient precision. For example, a system such as the Java sandbox cannot enforce a policy that limits the rate at which data is sent over the network without denying network use altogether, since there are no safety checks associated with sending data. The problem is more fundamental than simply the choices about which safety checks to provide. The system designers were hamstrung into providing only a limited number of checks by a design that incurs the cost of a safety check regardless of whether it matters to the policy in effect. Because Naccio statically analyzes and compiles a policy, it can support safety checks associated with any resource manipulation, yet the costs of a safety check are incurred only when the check is relevant.

Another problem with current code safety systems is that policies are defined in ad hoc and platform-specific ways. The author of a safety policy needs to know low-level details about a particular platform, and once a safety policy has been developed and tested, it cannot easily be transferred to a different platform. Naccio provides a platform-independent way of defining safety policies in terms of abstract resources. Safety policies are described by writing code fragments that account for and constrain resource manipulations. Resources are described using abstract objects with operations that correspond to manipulations of the corresponding system resource. A platform interface provides an operational specification of how system calls affect resources. This enables safety policies to be described in a platform-independent way and isolates most of the complexity of the system.

This thesis motivates and describes the design of Naccio, demonstrates how a large class of safety policies can be defined, and evaluates results from our experience with the prototype implementations.

Thesis Supervisor: John V. Guttag
Title: Professor, Computer Science

Acknowledgements

John Guttag is that rare advisor who has the ability to direct you to see the big picture when you are mired in details and to get you to focus when you are distracted by irrelevancies. John has been my mentor throughout my graduate career, and there is no doubt that I wouldn't be finishing this thesis this millennium without his guidance.

As my readers, John Chapin and Daniel Jackson were helpful from the early proposal stages until the final revisions. Both clarified important technical issues, gave me ideas about how to improve the presentation, and provided copious comments on drafts of this thesis.

Andrew Twyman designed and implemented Naccio/Win32. His experience building Naccio/Win32 helped clarify and develop many of the ideas in this thesis, and his insights were a significant contribution.

During my time at MIT, I've had the good fortune to work with many interesting and creative people. The MIT Laboratory for Computer Science and the Software Devices and Systems group provided a pleasant and dynamic research environment. Much of what I learned as a grad student was through spontaneous discussions with William Adjie-Winoto, John Ankcorn, Anna Chefter, Dorothy Curtis, Stephen Garland, Angelika Leeb, Ulana Legedza, Li-wei Lehman, Victor Luchangco, Andrew Myers, Anna Pogosyants, Bodhi Priyantha, Hariharan Rahul, Michael Saginaw, Raymie Stata, Yang Meng Tan, Van Van, David Wetherall, and Charles Yang. This work has also benefited from discussions with Úlfar Erlingsson and Fred Schneider from Cornell, Raju Pandey from UC Davis, Dan Wallach from Rice University, Mike Reiter from Lucent Bell Laboratories, and David Bantz from IBM Research.

Geoff Cohen wrote the JOIE toolkit used as Naccio/JavaVM's transformation engine and made its source code available to the research community. He provided quick answers to all my questions about using and modifying JOIE.

Finally, I thank my parents for their constant encouragement and support. I couldn't ask for two better role models.

Table of Contents

1 Introduction
  1.1 Threats and Countermeasures
  1.2 Background
  1.3 Design Goals
    1.3.1 Security
    1.3.2 Versatility
    1.3.3 Ease of Use
    1.3.4 Ease of Implementation
    1.3.5 Efficiency
  1.4 Contributions
  1.5 Overview of Thesis

2 Naccio Architecture
  2.1 Overview
  2.2 Policy Compiler
  2.3 Program Transformer
  2.4 Walkthrough Example

3 Defining Safety Policies
  3.1 Resource Descriptions
    3.1.1 Resource Operations
    3.1.2 Resource Groups
  3.2 Safety Properties
    3.2.1 Adding State
    3.2.2 Use Limits
    3.2.3 Composing Properties
  3.3 Standard Resource Library
  3.4 Policy Expressiveness

4 Describing Platforms
  4.1 Platform Interfaces
  4.2 Java API Platform Interface
    4.2.1 Platform Interface Level
    4.2.2 File Classes
    4.2.3 Network Classes
    4.2.4 Extended Safety Policies
  4.3 Win32 Platform Interface
    4.3.1 Platform Interface Level
    4.3.2 Prototype Platform Interface
  4.4 Expressiveness

5 Compiling Policies
  5.1 Processing the Resource Use Policy
  5.2 Processing the Platform Interface
  5.3 Generating Resource Implementations
    5.3.1 Naccio/JavaVM
    5.3.2 Naccio/Win32
  5.4 Generating Platform Interface Wrappers
    5.4.1 Naccio/JavaVM
    5.4.2 Naccio/Win32
  5.5 Integrated Optimizations
  5.6 Policy Description File

6 Transforming Programs
  6.1 Replacing System Calls
    6.1.1 Naccio/JavaVM
    6.1.2 Naccio/Win32
    6.1.3 Other Platforms
  6.2 Guaranteeing Integrity
    6.2.1 Naccio/JavaVM
    6.2.2 Naccio/Win32

7 Related Work
  7.1 Low-Level Code Safety
  7.2 Language-Based Code Safety Systems
  7.3 Reference Monitors
    7.3.1 Java Security Manager
    7.3.2 Interposition Systems
    7.3.3 Transformation-based Systems
  7.4 Code Transformation Engines
    7.4.1 Java Transformation Tools
    7.4.2 Win32 Transformation Tools

8 Evaluation
  8.1 Security
  8.2 Versatility
    8.2.1 Theoretical Limitations
    8.2.2 Policy Expressiveness
  8.3 Ease of Use
  8.4 Ease of Implementation
  8.5 Efficiency
    8.5.1 Test Policies
    8.5.2 Policy Compilation
    8.5.3 Application Transformation
    8.5.4 Execution

9 Future Work
  9.1 Improving Implementations
    9.1.1 Assurance
    9.1.2 Complete Implementations
    9.1.3 Performance Improvements
  9.2 Extensions
  9.3 Deployment
  9.4 Other Applications

10 Summary and Conclusion
  10.1 Summary
  10.2 Conclusion

References

List of Figures

Figure 1. Naccio Architecture.
Figure 2. Wrapped system call sequence.
Figure 3. Interaction diagram for enforcing LimitWrite.
Figure 4. File System Resources.
Figure 5. NoBashingFiles property.
Figure 6. LimitBytesWritten Safety Property.
Figure 7. LimitWrite resource use policy.
Figure 8. Network Resources.
Figure 9. Platform interface wrapper for java.io.File class.
Figure 10. RFileMap helper class.
Figure 11. Platform interface wrapper for java.io.FileOutputStream class.
Figure 12. Platform interface for java.net.Socket.
Figure 13. NCheckedNetworkOutputStream helper class.
Figure 14. Policy that limits network send rate by delaying transmissions.
Figure 15. Policy that limits bandwidth by splitting up and delaying network sends.
Figure 16. RegulatedSendSocket wrapper modification code.
Figure 17. NRegulatedOutputStream helper class (excerpted).
Figure 18. Naccio/Win32 platform interface wrapper for DeleteFileA.
Figure 19. Resource class generated by Naccio/JavaVM.
Figure 20. Resource headers file generated by Naccio/Win32.
Figure 21. Implementation resource.c generated by Naccio/Win32 for LimitWrite.
Figure 22. Pass-through semantics.
Figure 23. Generated policy-enforcing library class for java.io.FileOutputStream.
Figure 24. Results for jlex benchmark.
Figure 25. Results for tar execution benchmark.
Figure 26. Results for ftpmirror execution benchmark.


List of Tables

Table 1. Policy compilation costs.
Table 2. Program transformer results.
Table 3. Micro-benchmark performance.
Table 4. Benchmark checking.

Chapter 1

Introduction

Traditional computer security has focused on assuring confidentiality, integrity, and availability. Confidentiality means hiding information from unauthorized users; integrity means preventing unauthorized modifications of data; and availability means preventing an attacker from making a resource unavailable to legitimate users. Military and large commercial systems operators are (or at least should be) willing to spend large amounts of effort and money, as well as to risk inconveniencing their users, in order to provide satisfactory confidentiality, integrity, and availability assurances.

The security concerns of typical home and non-critical business users are very different. In the past, these users had limited security concerns. Since they were typically not connected to a network, their primary concern was viruses on software distributed on floppy disks. Although viruses could be a considerable annoyance, users who stuck to shrink-wrapped software were unlikely to encounter viruses, and the damage was limited to destroying files (or occasionally hardware) on a single machine.

Today, nearly all computers are connected to the public Internet much of the time. Although the benefits of connectivity are unquestioned, being on a network introduces significant new security risks. The damage a program can do is no longer limited to damaging local data or hardware: it can send personal information through the global Internet, damaging the operator's reputation or finances. Furthermore, the likelihood of executing an untrustworthy program is dramatically increased. The ease of distributing code on the Internet means users often have little or no knowledge about the origin of the code they choose to run. In addition, it is becoming hard to distinguish the "programs" from the "data": Java applets embedded in web pages can run unbeknownst to the user; documents can contain macros that access the file system and network; and email messages can contain attachments that are arbitrary executables.

The solution in high-security environments is to turn off all mobile code and only run validated programs from trusted sources. This can be done by configuring browsers and other applications to disallow active content such as Java applets and macros, or by installing a firewall that monitors all network traffic and drops packets that may contain untrustworthy code. This solution sacrifices the convenience and utility of the network, and would be unacceptable in many environments. Instead, solutions should allow possibly untrustworthy programs to run, but allow the user to place precise limits on what they may do. In such an environment, security mechanisms must be inexpensive and unobtrusive. Anecdotal evidence suggests that any code safety system that places a burden on its users will be quickly disabled, since its benefits are only apparent in the extraordinary cases in which a program is behaving dangerously.

A code safety system provides confidence that a program execution will not do certain undesirable things. Although much progress has been made toward this goal in the last few years, current systems are still unsatisfactory. This work seeks to address two important weaknesses of existing code safety systems:

1. They cannot enforce sufficiently precise policies. This means either a program is allowed to do harmful things, or users are unable to run some useful programs. For example, a system like the Java sandbox cannot enforce a policy that limits the number of bytes that may be written to the file system without preventing writing completely. This is a result of the limited locations where safety checking can be done. The designers were forced to select a small number of security-relevant operations at which safety checks occur, since the overhead of a safety check is always incurred even if the policy in effect places no constraints on that operation.

2. The mechanisms they provide for defining safety policies are ad hoc and platform-specific. Ad hoc policy definition mechanisms limit the policies that can be defined to the class of policies considered by the system designers. It is impossible to anticipate all possible attacks or security requirements, so ad hoc policy definition mechanisms are inevitably vulnerable to new attacks. Tying policy definition to a particular execution platform means that policy authors need to know intimate details about that platform, and there is no opportunity to reuse policies on different execution platforms. This is a problem for policy authors, but it also limits what policies are available to users. Further, it increases the gap between the people capable of writing and understanding policies and those who must trust a provided definition.

This thesis demonstrates that it is possible to produce a code safety system that does not suffer from these weaknesses without sacrificing convenience or efficiency. We describe Naccio¹, an architecture for code safety, and report on two prototype implementations: Naccio/JavaVM, which enforces policies on JavaVM classes, and Naccio/Win32, which enforces policies on Win32 executables. Naccio defines policies by associating checking code with abstract resource manipulations. A Naccio implementation includes an operational specification of an execution platform in terms of those abstract resource manipulations. Naccio enforces policies by transforming programs to interpose checking code around security-critical operations.
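
To make this concrete, the following Java sketch illustrates the interposition idea. It is a sketch only, not Naccio's policy language or its generated code; the names PolicyChecks and CheckedFileOutputStream are invented for illustration.

    import java.io.FileOutputStream;
    import java.io.IOException;

    // Stub standing in for checking code that a policy compiler would
    // generate from a policy definition (illustrative only).
    class PolicyChecks {
        static void preWrite(int nbytes) {
            if (nbytes < 0) {
                throw new SecurityException("policy violation on write");
            }
        }
    }

    // Wrapper interposed around a security-critical operation. A program
    // transformer would redirect the program's uses of FileOutputStream to
    // this class, so the check cannot be skipped.
    public class CheckedFileOutputStream extends FileOutputStream {
        public CheckedFileOutputStream(String name) throws IOException {
            super(name);
        }

        @Override
        public void write(byte[] b) throws IOException {
            PolicyChecks.preWrite(b.length); // run checking code first
            super.write(b);                  // then the real operation
        }
    }

Chapters 5 and 6 describe how Naccio actually generates policy-enforcing library code and transforms programs to use it.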

1.1 Threats and Countermeasures

No security system can prevent all types of threats. Our focus is on threats stemming from executing programs. We ignore threats that do not result from a legitimate user running a program, including compromised authentication and physical security breaches.

Different kinds of threats call for different countermeasures. Countermeasures for threats related to program executions come in two basic forms: restrictions on which programs may run, and constraints on what executions may do. Restrictions on which programs may run can be based on trust and cryptography (only run programs that are cryptographically signed by someone I trust), or based on static analysis that proves a program does not have certain undesired properties (only run programs that a virus detector checks do not contain instruction sequences matching known viruses). Constraints on what executions may do can be expressed as a policy.² The policy that should be enforced on an execution depends on how much trust the user has in the program and how much knowledge is available about its expected behavior. Ideally, all executions would run with a policy that limits them to exactly the behavior deemed acceptable for that program. This is not possible, however, since users cannot be expected to research and encode the limits of expected behavior for every program before running it. Instead, we should use different policies as countermeasures to different types of threats. Threats where code safety is an important countermeasure include viruses, Trojan horses, faulty programs, and user mistakes.

¹ The name Naccio is derived from catenaccio, a style of soccer defense popularized by Inter Milan in the 1960s. Catenaccio sought to protect the Inter net from attacks by wrapping potential threats with a marker that monitors their activity and aggressively removing potentially dangerous parts (that is, the ball) from attackers as soon as they cross the domain protection boundary (also known as the midfield line).

² Not to be confused with an organizational security policy that specifies what policy to enforce on different types of programs.

Viruses

Viruses are code fragments that propagate themselves automatically. The damage they cause ranges from causing a minor annoyance to destroying hard drives and distributing confidential information. Every few weeks a new virus attack is reported widely in the mainstream media [NYTimes99a, NYTimes99b, NYTimes99c].

Although early computer viruses spread by attaching themselves to programs, extensibility features in modern email programs and web browsers make creating and spreading viruses much easier. A recent example is the Melissa Word macro virus [Pethia99]. It propagates using an infected Word document contained in an email message. When a user opens the infected document, the macro executes automatically (unless Word macros are disabled). The macro then lowers the macro security settings to permit all macros to run when future documents are opened, and propagates itself by sending infected email messages to addresses found in the user's Microsoft Outlook address books. The macro also infects the standard document template file that is loaded by default by all Word documents. If the user opens another Word document, that document will be mailed along with the virus to addresses in the user's address books.

The most common virus countermeasures are virus detection programs such as McAfee VirusScan [McAfee99] and Symantec Norton AntiVirus [Symantec98]. Nearly every new PC comes with virus detection software installed. Most virus detectors scan files for signatures matching a database of known viruses. Commercial products for detecting viruses recognize tens of thousands of known viruses, and their vendors employ large staffs to identify new viruses.

The problem with this approach is that it depends on recognizing a known virus, so it offers no protection against new viruses. Because viruses like the Melissa macro virus can spread remarkably quickly over the Internet, they can do considerable damage before they are identified and virus detection databases can be updated. The damage inflicted by Melissa was limited to propagating itself and sending possibly confidential files to known addresses. A terrorist motivated to cause as much damage as possible could fairly easily create a variant of Melissa that inflicts far more harm.

Detecting or preventing damage from previously unidentified viruses requires an approach that does not depend on recognizing a known sequence of instructions. Some commercial virus detection products include heuristics for identifying likely viruses based on static properties of the code or dynamic properties of an execution [Symantec99]. These approaches lead to an arms race between virus creators and virus detectors, as virus creators go to greater lengths to make their viruses hard to detect. Although heuristic detection techniques show some promise, it is unlikely that they will ever be able to correctly distinguish all viruses from legitimate programs.

A different approach is to limit the damage viruses can cause and their ability to propagate by observing and constraining program behavior. For example, the damage done by macro viruses could be limited by enforcing a policy on Microsoft Word executions. We would want to enforce different policies on Word executions depending on whether they were started to read a document embedded in an email message or web page, or started to edit a trusted document. When Word is used to edit a local document, perhaps a policy that prohibits any network transmission would be adequate. For documents from untrustworthy sources, a reasonable policy would require explicit permission from the user before Word transmits anything over the Internet, reads sensitive files, alters the registry, or modifies the standard document templates.

Trojan horses

A Trojan horse is an apparently useful program that also does some things the user considers undesirable. There have been many instances where an attacker has distributed a deliberately malicious program in the guise of a useful one. For example, someone distributed a version of util-linux that contained a login program that would allow unauthorized users to execute arbitrary commands [CERT99b].

In addition, there are programs a user may consider malicious even if the author did not intend to produce a malicious attack. For example, an early version of the Microsoft Network client would read and transmit the user's directory structure [Risks95]. While most users would be unaware that this is occurring, and would not be overtly damaged by it (other than losing bandwidth that could have been used for transmitting useful data), many would consider it a privacy violation.

Countermeasures for Trojan horses are similar to those for viruses, except that more precise policies may be needed. Although it would be difficult to monitor the information sent over the network by the Microsoft Network client, it would be possible to detect suspicious transmissions and alert the user. A more reasonable policy would ignore the actual transmitted data but place restrictions on which files, directories, and registry entries could be examined, thereby limiting the information available to the program.

Faulty programs

Program bugs pose two different kinds of security threats: an attacker may deliberately exploit them, or they may accidentally cause harm directly. The security advisories recorded by CERT [CERT99a] are rife with examples of buggy programs leading to exploitable security vulnerabilities. Of the 71 advisories posted between January 1996 and May 1999, 60 are directly attributable to specific program bugs (of these, 13 are the direct result of buffer overflows). A particularly vulnerable program is sendmail. Attackers have exploited various bugs in sendmail to gain root access [CERT96a, CERT96b], execute programs with the group permissions of another user [CERT96c], and execute arbitrary commands with root privileges [CERT97].

Other program bugs cause harm unintentionally. One notorious example is the Therac-25, a device for administering radiation to cancer patients [Leveson93]. Because of software bugs, it would occasionally administer a lethal dose of radiation, and several patients died as a result. Although the system software had ad hoc safety checks, they were obviously not sufficient.³ Because they were ad hoc, operators and doctors could not examine them and decide if the device was trustworthy.

The best way to obtain protection from exploitable or harmful program bugs would be to produce bug-free programs. Despite progress in software development and validation techniques, it is inconceivable that this will be accomplished in the foreseeable future. Since programs will inevitably contain bugs, code safety systems should be used to limit the damage resulting from buggy programs.

³ The Therac-25 disaster was the result of numerous factors ranging from flawed hardware design to poor regulation procedures. Although code safety mechanisms could be part of the solution, designing safety-critical systems involves far more than just code safety.

As with Trojan horses, the expected behavior of the program is known, so it is reasonable to enforce a precise policy that limits what it can do. The difference is that the software vendor should be an ally in protecting the user from bugs, unlike the author of a malicious attack. Security-conscious software vendors could include policies with their software distributions or even distribute their software with an integrated safety policy enforced. Reputable vendors should be motivated to protect their users from damaging bugs and might be expected to devote some effort towards producing a suitable policy. By separating the policy enforcement mechanisms from the application, they can have more confidence that the policy is enforced correctly. In addition, publishing an application's safety policy in a standard, easily understood format would give potential customers a chance to decide if the application is trustworthy.

User mistakes

Perhaps the most common way programs cause harm is through unintentional mistakes by users. Because of poor interfaces or ignorance, users may inadvertently destroy valuable data or unknowingly transmit private information. One example is when an unsuspecting user issues the command tar cf * to create a new directory archive. This command will replace the contents of the first file in the directory with an archive of all the other files, destroying whatever happened to be the first file. Although the program is behaving correctly according to its documentation, this is probably not the behavior the user intended. A well-designed interface lessens the risk of harmful user mistakes, but combining this with a user-selected and independently enforced policy is a more robust solution.

1.2 Background

Researchers have been working on limiting what programs can do since the early days of computing. Early work on computer security focused on multi-user operating systems built around a privileged kernel. The kernel is the only part of the system that manipulates resources directly. User programs must call functions in the operating system kernel to manipulate resources. The operating system limits what user programs can do to system resources by exposing a narrow interface and putting checks in the system calls to disallow unsafe resource use. Each application process runs in a separate address space, enforced by hardware support for virtual memory. A process cannot see or modify memory used by another process, since that memory is not part of its virtual address space.

The problem with using separate processes to protect memory is that the cost of creating and maintaining a process is high, as is the cost of communicating and sharing data between processes. Switching between different processes involves a context switch, which is usually expensive. Several systems have attempted to provide the isolation offered by separate processes within a single process by using software mechanisms. We use low-level code safety to refer to security designed to isolate programs and require that all resource manipulations go through well-defined interfaces. It includes the control flow safety, memory safety, and stack safety needed to prevent programs from accessing arbitrary memory segments [Kozen98]. There are several ways to provide low-level code safety. Approaches such as the Java byte code verifier and proof-carrying code techniques statically verify that the necessary properties are satisfied. Software fault isolation provides the necessary guarantees by inserting masking or checking instructions to limit the targets of jumps and memory instructions. Section 7.1 describes work in low-level code safety.
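
As a rough illustration of the masking technique: software fault isolation rewrites machine instructions, but the arithmetic can be sketched in Java. The segment size and class name below are arbitrary assumptions, not from the thesis.

    // Conceptual sketch of SFI-style address masking. Real systems insert
    // masking instructions into machine code; this only shows the idea.
    final class FaultIsolatedMemory {
        private static final int SEGMENT_BITS = 20;                // 1 MB segment
        private static final int SEGMENT_MASK = (1 << SEGMENT_BITS) - 1;
        private final int[] segment = new int[1 << SEGMENT_BITS];

        // Every rewritten store masks its target address first, so no
        // sequence of operations can reach memory outside the segment.
        void store(int addr, int value) {
            segment[addr & SEGMENT_MASK] = value;
        }

        int load(int addr) {
            return segment[addr & SEGMENT_MASK];
        }
    }

Because the mask is applied unconditionally, this approach trades a few extra instructions per memory access for isolation without a context switch.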

Although Naccio depends on low-level code safety for the integrity of its policy enforcement mechanisms, the focus of this thesis is on policy-directed code safety. Policy-directed code safety seeks to enforce different policies on different executions. This can be done either by statically verifying that the desired properties always hold, or by enforcing properties using run-time checking. Since it is infeasible to verify most interesting properties on arbitrary programs, most work has focused on run-time enforcement.

Most run-time constraint mechanisms, including Naccio, can be viewed as reference monitors [Lampson71, Anderson72]. A reference monitor is a system component that enforces constraints on access to and manipulation of a resource. It should be invoked whenever the monitored resource manipulation occurs, and it should be protected from program code in a way that prevents bypassing or tampering. Reference monitor systems differ in how the monitors are invoked. They could be called explicitly by the operating system kernel, called by a separate watchdog process, or integrated directly into program code. Naccio integrates reference monitors directly into code, but takes advantage of system library interfaces to limit the code that must be altered.

Reference monitors also differ in how checking code is defined. Some possibilities include access matrices, finite automata, or general code. In a reference monitor security system, policies are limited by where reference monitor calls can be placed and what system state they may observe. There is usually a tradeoff between supporting a large class of policies and the performance and complexity of the system. Naccio security is based on reference monitors that can be flexibly introduced into programs at different points. This allows a large class of policies to be enforced, but avoids the overhead necessary to support many reference monitors when a simple policy is enforced.
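
For illustration, checking code in the general-code style can maintain state across invocations, as in this sketch of a monitor that caps the total bytes sent. It is a hypothetical example, not Naccio syntax; the class name and limit are invented.

    // A reference monitor whose checking code is general code with state,
    // rather than an access matrix or finite automaton (illustrative).
    final class SendMonitor {
        private long bytesSent = 0;
        private final long limit;

        SendMonitor(long limit) {
            this.limit = limit;
        }

        // Must be invoked on every monitored send, and must be protected
        // from bypassing or tampering by the program being monitored.
        synchronized void onSend(int nbytes) {
            if (bytesSent + nbytes > limit) {
                throw new SecurityException("policy violation: send limit exceeded");
            }
            bytesSent += nbytes;
        }
    }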

One example of a reference monitor is the SecurityManager used for high-level code safety in the Java virtual machine. API functions limit what programs can do by using the SecurityManager class. It acts as a reference monitor, enforcing a particular security policy by controlling access to system calls. The Java approach limits the poli
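
Concretely, the classic Java pattern works as follows; this is a minimal sketch, and the policy it enforces (writes permitted only under /tmp) is an invented example rather than one from the thesis.

    // A SecurityManager subclass acts as the reference monitor: API classes
    // consult the installed manager before security-relevant operations, and
    // a check method vetoes an operation by throwing SecurityException.
    public class TmpOnlySecurityManager extends SecurityManager {
        @Override
        public void checkWrite(String file) {
            if (!file.startsWith("/tmp/")) {
                throw new SecurityException("write denied: " + file);
            }
        }
    }

An application installs such a manager once with System.setSecurityManager; thereafter, library classes such as java.io.FileOutputStream call System.getSecurityManager and invoke the corresponding check method before performing the operation.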
