Automated Assistance for Detecting Malicious Code *

R. Crawford, P. Kerchen, K. Levitt, R. Olsson, M. Archer, M. Casillas

Department of Computer Science
University of California, Davis
Davis, CA 95616

Email: virus@cs.ucdavis.edu

Abstract

This paper gives an update on our continuing work on the Malicious Code Testbed
(MCT). The MCT is a semi-automated tool, operating in a simulated, cleanroom
environment, that is capable of detecting many types of malicious code, such as
viruses, Trojan horses, and time/logic bombs. The MCT allows security analysts to
check a program before installation, thereby avoiding any damage a malicious program
might inflict.

Keywords: Detection of Malicious Code, Static Analysis, Dynamic Analysis.

1 Introduction

The Malicious Code Testbed (MCT) was originally designed to use both static and
dynamic analysis tools developed at the University of California, Davis, that have
been shown to be effective against certain types of malicious code. One goal of the
testbed is to enhance the power of similar tools by using them in a complementary
fashion to detect more general cases of malicious code.

In our report to this conference last year [1], we presented a design overview of the
MCT. In the present paper, we report on our progress towards upgrading the MCT
environment for dynamic analysis.

Although, in principle, the notion of a Malicious Code Testbed is independent of any
particular operating system or architectural platform, our initial implementation efforts
have focused on simulating a DOS operating system running on PC architectures. This
design decision was made primarily because the PC/DOS environment is so widespread
and accessible to intrusions; thus this environment is the one that has engendered the
most real-world malicious code we can use to challenge our detection techniques.

Sections 2 and 3 provide background material on malicious code and current detection
methods. Section 4 reviews the use of events in dynamic analysis techniques, and
Section 5 describes the architecture of the MCT. Section 6 presents some results from
our experience using the MCT on malicious code.

* SPONSORS: Lawrence Livermore National Laboratory, U.S. Department of Energy.
Work performed under the auspices of the U.S. Department of Energy by Lawrence
Livermore National Laboratory under Contract W-7405-Eng-48.

2 Malicious Code — A Brief Overview

In recent years, various forms of malicious code have appeared on virtually all major
families of computer platform. The prevalence of malicious code — Trojan horses, time
bombs, worms, and viruses — threatens the traditional "open systems" approach that
has evolved in the academic realm, as well as in much of the commercial sector.

The current situation in the personal computer arena may be indicative of future
trends in workstation and mainframe environments. On PC systems — where literally
hundreds of computer viruses, time bombs, and Trojan horses have proliferated —
the problem is caused by rogue programs that unwittingly are invited into the system.
Thus malicious code may be inserted into almost any type of computer system via these
same avenues — "shareware" may be installed, or malicious code might be produced
in-house by a disgruntled employee, or a program containing malicious code might even
be purchased from a legitimate vendor of commercial software.

Our definition of what constitutes "malicious" code shall address only the probable
effects of executing such code; we shall not concern ourselves with the "original intent"
of the (possibly unknown) writer. Although the intentions of the writer may be crucial
in determining legal culpability — e.g., whether malice and forethought were present —
to include such considerations within the scope of our "working definition" for malicious
code would clearly render the problem incomputable.

Yet even using our restricted, operational definition of "malicious code", the problem
of malicious code detection — in the most general case — is not decidable by
purely formal methods. This follows not merely from the results of [4], [2], [3], but
rather because the inherent semantics of the problem statement demand that a value
judgement regarding the nature of the code's probable effects be rendered. But because
doing so would require that the intent of the program's potential users be considered,
no article of faith akin to Church's Thesis can serve to bridge the gap between our
intuitive sense of "malicious effects" and algorithmic solutions. It would seem that, in
all but the most severely restricted programming environments, the problem statement
must remain a fuzzy one.

Thus, although no algorithm that identifies malicious code in all environments and
in all guises can exist, a number of techniques already exist for coping with certain
restricted forms of malicious code. Since the problem cannot with certainty be prevented
in current programming environments, it must be managed instead.

This idea forms the basis of the Malicious Code Testbed — an automated assistant
whose mission it is to perform the "grunt work" necessary to aid a human analyst
in detecting not only currently known forms of malicious code, but also mutated or
entirely novel forms. Given the absence of a decision procedure for malicious code,
such a testbed would allow one to examine a program to ascertain whether or not it is
suspicious.

We first discuss the most prevalent methods of coping with malicious code, and
then describe some of our previous work aimed at providing defenses against malicious
code. Then we explore in greater detail the Malicious Code Testbed.

3 A Sample of Current Methods for Coping with Malicious Code

Presently, the majority of malicious code defenses are concerned with computer viruses.
However, some are more broadly applicable to malicious code in general. These methods
may be divided into two distinct classes depending on when they are applied: as
a pre-execution check or at run time. Pre-execution techniques are applied to a suspicious
program before it can be executed by a user. In contrast, run time methods are
actually applied to the program as it executes, in hopes of stopping the program before
it can cause damage or allow a virus to propagate. Another taxonomy of malicious code
defenses divides all methods into the categories of static or dynamic analysis. Although
most static analysis techniques are applied as pre-execution checks, certain static analysis
techniques can be applied at run time. Similarly, although most dynamic analysis
techniques are applied as run time checks, certain dynamic analysis techniques (such
as our own Malicious Code Testbed) can be applied as pre-execution checks.

Many of the more sophisticated pre-execution methods rely on the prior existence of
a copy of the program that is assumed to be "clean", perhaps because it was originally
written by a trusted programmer and then translated into an executable file by a trusted
compiler on a secure system. One such method computes cryptographic checksums that
are characteristic of that trusted executable file, and embeds them in that file [6]. The
file is then copied to an insecure environment, whose operating system will not allow a
user to execute any program until it has recomputed what those checksums should be
and compared those values with the ones actually embedded in the program. In this
way, most alterations made to a trusted executable file after it leaves the secure system
can be detected before the program is executed in the insecure environment.
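
To make that flow concrete, the following Common Lisp sketch illustrates such a pre-execution
check. It is illustrative only: the toy 32-bit hash stands in for the cryptographic checksum of
[6], and the convention of appending the checksum as the final four bytes of the image is an
assumption made purely for the example.

;; Illustrative sketch only: a toy 32-bit FNV-1a hash stands in for the
;; cryptographic checksum of [6], and we assume (for illustration) that the
;; trusted system appends the checksum as the final four bytes of the
;; executable image, least-significant byte first.
(defun toy-checksum (bytes)
  (let ((h #x811C9DC5))
    (loop for b across bytes
          do (setf h (logand #xFFFFFFFF (* (logxor h b) #x01000193))))
    h))

(defun embedded-checksum (image)
  (loop for i from (- (length image) 4) below (length image)
        for shift from 0 by 8
        sum (ash (aref image i) shift)))

(defun ok-to-execute-p (image)
  "Recompute the checksum over the program body and compare it with the
   value embedded by the trusted system; any mismatch blocks execution."
  (= (toy-checksum (subseq image 0 (- (length image) 4)))
     (embedded-checksum image)))

An operating system enforcing this policy would simply refuse to load any image for which
the check fails.
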
It is important to note that this technique shares one important characteristic
with most other sophisticated pre-execution methods — ultimately, they
depend on the prior application of detection (or formal verification) techniques in order
to certify an executable file as "clean" in the first place.

Keeping Ken Thompson's admonition "on trusting trust" firmly in mind [5], how
should a security administrator proceed when faced with programs so large or complex
that "trust, but verify" is not a feasible option? We suggest that — in the middle
ground between the two extremes of exhaustively provable correctness and trust based
on nothing more substantial than personal familiarity with, or a background security
check on, a program's writer — the MCT (acting to assist a human analyst) can provide
a practical alternative basis for trust.

3.1 Simple Scanners and Monitors

Simple scanners such as McAfee's Scanv or Norstad's Disinfectant are by and large
the most common pre-execution method in use today. Typically, the user will invoke
a scanner to search the static text of a binary program for fixed patterns (bitstrings)
that match those of known malicious programs. If none of those bitstrings are found,
the user then proceeds to execute the program. Thus these scanners boast a very good
record in defending against known malicious programs, such as polymorphic viruses
that use a known "Mutation Engine", but they cannot be applied in general to finding
new malicious code, or even to finding familiar malicious code protected by a "Mutation
Engine" that is, itself, slightly mutated. Another popular approach uses simple
monitors to observe program execution and detect potentially malicious behavior at
run time. Such monitors usually sit astride the system call interface, e.g., to watch
all disk accesses and ensure that no unauthorized writes are performed. Unfortunately,
such techniques incur a substantial speed penalty during execution of normal programs,
and typically become quite a nuisance to the user.
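
The core of such a scanner is nothing more than a search of the program's static bytes for
known signature strings. The Common Lisp sketch below shows the idea; the signature entries
are invented placeholders for illustration, not real virus signatures.

;; Minimal sketch of fixed-pattern scanning. The signature bytes below are
;; invented placeholders, not real virus signatures.
(defparameter *signatures*
  '(("EXAMPLE-VIRUS-A" . #(#xEB #xFE #x90 #x90))
    ("EXAMPLE-VIRUS-B" . #(#xCD #x21 #xB4 #x4C))))

(defun match-at-p (image pos pattern)
  "Does PATTERN occur in IMAGE starting at offset POS?"
  (and (<= (+ pos (length pattern)) (length image))
       (loop for i from 0 below (length pattern)
             always (= (aref image (+ pos i)) (aref pattern i)))))

(defun scan-image (image)
  "Return the names of every known signature found in the static text of IMAGE."
  (loop for (name . pattern) in *signatures*
        when (loop for pos from 0 to (- (length image) (length pattern))
                   thereis (match-at-p image pos pattern))
          collect name))

A scanner of this kind clearly depends entirely on its signature list, which is why even a
slightly mutated "Mutation Engine" defeats it.
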
To be effective, these programs must also err on the conservative side, resulting in
many false alarms which require user interaction. But in these interactions, current
techniques require the user to make relatively immediate (and usually uninformed)
decisions regarding whether the program should be allowed to proceed. Such decisions
would benefit immensely from the opportunity to explore a trace of the program's
history, as well as its then-current execution state.

3.2 Encryption & Watchdog Processors

Encryption is another method of coping with the threat of malicious code. Lapid,
Ahituv, and Neumann [7] use encryption to defend against Trojan horses and trapdoors.
When correctly implemented, encryption techniques are quite effective against many
types of malicious code, but the cost of such a system is high due to the required
hardware. Similarly, watchdog processors [8] also require additional hardware. Such
processors are capable of detecting invalid reads/writes from/to memory, but they
require additional support to effectively combat viruses. Also, both of these methods
are dependent on the prior existence of a "clean" version of every program that is to
be executed. As mentioned, to certify such copies as "clean" in the first place requires
either formal verification or a malicious code detection capability, which is the subject
of the present paper.

4 Review of Dynamic Analysis using Events

Over the last few years, we have developed a powerful, state-of-the-art debugger called
Dalek [9]. Dalek incorporates two significant advances over traditional debuggers: it
features a fully-programmable language for manipulating the debugging environment,
and it provides extensive support for user-definable events.

The MCT user's environment was designed in accordance with the philosophy underlying
the Dalek debugger, and features analogous to those in Dalek have been incorporated
into the MCT. But we have also customized the MCT environment, in light
of its specific mission to help ferret out malicious code. We believe that "dynamic
analysis" (and the development of appropriate methodologies for it) should be seen
as representing an extremely promising avenue of inquiry, rather than as being just a
fancy word for the sorts of things people have always done with traditional debuggers.

By fully programmable, we mean the MCT is an extendible environment, in a similar
sense that the Emacs text editor is extendible. But due to the nature of the
MCT's mission, these general-purpose language constructs have been fully integrated
with traditional application-specific debugging features such as breakpoints and
single-stepping.

Like the Dalek debugger, the MCT also provides automated support for detecting
hierarchical events — occurrences of interesting activities during the execution of the
suspicious program. This capability allows the MCT to represent the suspicious
program's behavior in terms of whatever higher-level abstractions have been defined by
the security analyst.

In some ways, an event is conceptually similar to a tuple in a relational database
— once the structure of a particular database table has been defined by the user, every
occurrence of an event of that type that is detected by the MCT will have its attributes
recorded permanently, as fields in a newly inserted tuple. That is, when the MCT
detects an event occurrence, it causes a corresponding tuple (or record) to appear in
the appropriate database table. The attributes associated with an event should contain
information sufficient to characterize a particular occurrence of that event, allowing it
to be distinguished from other instances of the same event. The code written by
a security analyst for an event's definition can cause it, upon activation, to assign
values to these attributes from variables in the suspicious program, from variables in
the "outer" MCT environment, or from computation based on a combination of such
variables.

In addition to defining an event as a template for passive data, the security analyst
also needs to define an active, procedural aspect for that event. This is accomplished
by writing a body of code in the MCT's language, and associating it with that event.
The purpose of this code, when activated, is to recognize exactly those conditions
in the suspicious program's execution state that the security analyst has specified as
constituting a valid occurrence of this particular type of event.

This event-recognition code can be executed manually by the security analyst as
s/he single-steps the suspicious program, or it can be executed automatically by the
MCT, if the analyst has bound that event's code to a breakpoint, or to a range of
breakpoint addresses. Events whose code is activated in this manner are called primitive
events.
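
The MCT's own event-definition language is not reproduced here; the following Common Lisp
sketch merely illustrates the shape of the mechanism just described, with hypothetical names:
a primitive event is a recognizer bound to a breakpoint address, and each successful
recognition inserts an attribute tuple into the execution history.

;; Hypothetical illustration (not the MCT's actual syntax): a primitive
;; event binds a recognizer to a breakpoint address; each recognition
;; records an attribute tuple in the execution-history database.
(defstruct event-occurrence name attributes)

(defvar *history* '())                          ; execution-history "table"
(defvar *breakpoint-events* (make-hash-table))  ; address -> ((name . recognizer) ...)

(defun define-primitive-event (name address recognizer)
  "RECOGNIZER maps the simulated machine state to an attribute plist,
   or NIL when this breakpoint hit does not constitute an occurrence."
  (push (cons name recognizer) (gethash address *breakpoint-events*)))

(defun breakpoint-hit (address state)
  "Called by the simulator each time ADDRESS is about to be executed."
  (dolist (entry (gethash address *breakpoint-events*))
    (let ((attrs (funcall (cdr entry) state)))
      (when attrs
        (push (make-event-occurrence :name (car entry) :attributes attrs)
              *history*)))))

;; Example: record every write performed by the instruction at #x121B.
(define-primitive-event 'write-from-121b #x121B
  (lambda (state)
    (list :ip (getf state :ip) :target (getf state :write-address))))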

The MCT also supports high-level events. When defining a high-level event, one
must specify the names of all lower-level events on which it depends. A high-level event
is not explicitly raised; instead, the MCT can automatically trigger a high-level event's
code into executing whenever an occurrence of a primitive event on which that high-level
event depends is successfully recognized. The high-level event's code will have
access to all the attributes of its lower-level constituent events, as well as access to the
"raw" state of the suspicious program and to variables defined in the "outer" MCT
environment.

Note that the security analyst can define a high-level event whose recognition may
depend on lower-level constituent events whose occurrences are widely separated in
time. For a concrete example of a network of events used to detect self-propagating
code, see [1].
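
Continuing the hypothetical sketch above, a high-level event can be modeled as a predicate
over the execution history that is re-evaluated whenever one of its named constituents is
recognized. The self-propagation example is our own simplification; the actual network of
events appears in [1], and all names here remain illustrative.

;; Continuing the sketch: a high-level event names its constituent events
;; and supplies a predicate over the history; it is re-evaluated whenever
;; one of those constituents is recognized (BREAKPOINT-HIT above would call
;; NOTIFY-HIGH-LEVEL after each insertion into *HISTORY*).
(defvar *high-level-events* '())

(defun define-high-level-event (name constituents predicate)
  (push (list name constituents predicate) *high-level-events*))

(defun notify-high-level (new-occurrence)
  (dolist (ev *high-level-events*)
    (destructuring-bind (name constituents predicate) ev
      (when (and (member (event-occurrence-name new-occurrence) constituents)
                 (funcall predicate *history*))
        (format t "~&High-level event ~A recognized.~%" name)))))

;; Example: flag suspected self-propagation once a read of the program's own
;; image and a write to another executable have both been seen, in any order
;; and however widely separated in time.
(define-high-level-event 'self-propagation
  '(read-own-image write-other-executable)
  (lambda (history)
    (and (find 'read-own-image history :key #'event-occurrence-name)
         (find 'write-other-executable history :key #'event-occurrence-name))))
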
Viewed from the perspective of a relational database, a high-level event is conceptually
akin to an ongoing query: in defining a high-level event, the security analyst poses
a query. The MCT then provides incremental answers to that activated query, as the
behavior of the suspicious program causes new occurrences of primitive event/attribute
tuples automatically to be inserted in the database.

The "execution history database" maintains a record of all recognized event occurrences
and their attributes. It may be browsed selectively by the security analyst
in interactive mode, or accessed programmatically via access functions written in the
MCT's language.
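
In the same illustrative style, programmatic access to the history can be modeled as a small
selection function over the recorded tuples; again, the function and attribute names are our
own, not the MCT's.

;; Continuing the sketch: a simple programmatic query over the
;; execution-history database built up by the code above.
(defun select-occurrences (event-name &key (where (constantly t)))
  "Return every recorded occurrence of EVENT-NAME whose attribute plist
   satisfies the WHERE predicate."
  (remove-if-not
   (lambda (occ)
     (and (eq (event-occurrence-name occ) event-name)
          (funcall where (event-occurrence-attributes occ))))
   *history*))

;; Example query: every recorded write from #x121B whose target fell inside
;; an (illustrative) range of code-segment addresses.
(select-occurrences 'write-from-121b
                    :where (lambda (attrs)
                             (<= #x1100 (getf attrs :target) #x1200)))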

5 Architecture of the MCT

One design goal for the MCT is that it be as universal as possible. That is, the testbed
should in principle be capable of analyzing both source code and executable files from
different processors and different operating systems. However, to achieve such broad
applicability, we would have to develop various front-ends and back-ends for the MCT.
In addition, because of the radically different "security architectures" (or lack thereof)
on different platforms, that portion of the MCT between the front-ends and back-ends
— that portion common to all platforms — could turn out to be the null intersection.
Nevertheless, we feel that using a common machine-independent internal form language
may illuminate aspects common to many security architectures.

5.1 Initial Program Loading

If started, for example, with an "executable" file, a front-end will need to understand
any loading (and possibly some dynamic linking) conventions of the target operating
system. The front-end will also need to know the processor type of the machine code in
order to properly translate it into the Lisp-based internal form. A back-end will need
to emulate any dynamic linking operations of the target operating system, as well as its
system call interface. Thus, for example, if a program running under the MCT "writes"
a file and then "reads" it back again, it should not be apparent to that program that
it is not, in fact, running directly on the target CPU and operating system.

Either before or after translation of an executable program into an internal form,
the MCT might also search it to identify any known standard system-library routines.
Assuming those routines are "clean", this step could significantly reduce the size of the
problem. In addition, since the type of every parameter required by a system-library
routine is known, this information permits subsequent phases of the analysis to infer
the types of any variables in the suspicious program that are passed as arguments to
those system-library routines.
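
As a rough illustration of that last step, the sketch below propagates parameter types from a
table of known library signatures onto the variables observed at a recognized call site. The
routine names and type keywords are invented for the example.

;; Hypothetical sketch of the type-inference step: parameter types of
;; recognized system-library routines are propagated to the suspicious
;; program's own variables at each identified call site. All names and
;; type keywords here are illustrative.
(defparameter *library-signatures*
  '((open-file  :string :word)            ; e.g. (name, mode)
    (write-file :handle :pointer :word)))

(defvar *inferred-types* (make-hash-table :test #'equal))

(defun infer-argument-types (routine argument-names)
  "Record a type for each variable passed to ROUTINE, pairing the call
   site's arguments with the routine's known parameter types."
  (loop for arg in argument-names
        for ty  in (rest (assoc routine *library-signatures*))
        do (setf (gethash arg *inferred-types*) ty)))

;; Example: a call to OPEN-FILE with two program variables.
(infer-argument-types 'open-file '("var_0012" "var_0014"))
(gethash "var_0012" *inferred-types*)     ; => :STRING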

5.2 Program Representation — Internal Form Language

In order for the MCT user profitably to apply various analysis tools, those tools must
share a common representation of the suspicious program that is the subject of their
analysis.

To analyze the behavior of an executable machine code program, we must first
translate its code into our internal form language. We have designed a set of procedures
that, given a 2-tuple (Memory_Address, Memory_Contents), will translate its
Memory_Contents into our internal form language. Because not all assembler instructions
have the same length, it behooves us to explicitly represent the original
Memory_Address as another field in the internal form representation. This will allow the
translation tool easily to access other 2-tuples representing the next few adjacent
Memory_Addresses, should the need arise, in order to complete the job of disassembling a
single long instruction.

The internal form language was deliberately designed to include only a small number
of basic operators, thus simplifying the analysis. These operators are closely related
to the hardware operations on a microprocessor, allowing convenient translation from
machine code into the internal form. Typical basic operations of the internal form
language include READ or WRITE to a Memory_Address or Register. As an example
of the syntax of the internal form, an indirect write of 0 through register CX might
look like:

( WRITE.BYTE 0, (ADD (READ.WORD CX) #x01AE) )

The internal form is a Lisp-like language, whose order of evaluation is the same
as that of Lisp. By defining the internal form's operators (e.g., "READ.WORD") as
functions in Lisp, a program written in the internal form language can be interpreted
by any standard Lisp system. Thus, by using Lisp as the "native language" of the
MCT, dynamic analysis can readily simulate the execution of a program that has been
translated from machine code into the internal form.
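
To make that idea concrete, here is a deliberately tiny sketch of what such Lisp definitions
might look like. These are not the MCT's actual definitions: the operators are written as
macros (so that bare register names, as they appear in the traces of Section 6, need no
quoting), the memory model is a bare hash table rather than the cell structure of Section
5.3, and segmentation, flags semantics, and word-sized operators are omitted.

;; Illustrative only -- not the MCT's actual operator definitions.
(defvar *registers* (make-hash-table))          ; AH, AX, DF, flags, ...
(defvar *memory*    (make-hash-table))          ; address -> byte

(defmacro read.b (src)
  "Byte read: a bare symbol names a register, (CONST n) is an immediate,
   anything else is treated as a memory address expression."
  (cond ((symbolp src) `(gethash ',src *registers* 0))
        ((and (consp src) (eq (first src) 'const)) (second src))
        (t `(gethash ,src *memory* 0))))

(defmacro write.b (dst value)
  "Byte write to a register (named by a bare symbol) or a memory address."
  (if (symbolp dst)
      `(setf (gethash ',dst *registers*) (logand ,value #xFF))
      `(setf (gethash ,dst *memory*) (logand ,value #xFF))))

(defmacro write.f (flag value)
  "Write a CPU flag, kept here as just another named cell."
  `(setf (gethash ',flag *registers*) ,value))

;; The internal form shown later for "MOV AH, 0xE0" and "CLD" can now be
;; evaluated directly by any Common Lisp system:
(write.b AH (read.b (const #xE0)))
(write.f DF #x0)
(gethash 'AH *registers*)                       ; => 224, i.e. #xE0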

5.3 Memory Model for the Code Segment

Although the translator will produce a string of code in our internal form language, the
MCT must store much more than just that string of syntax. To adequately represent
even a 1-byte instruction of the original machine code, the MCT uses an elaborate data
structure that also stores the original (Memory_Address, Memory_Contents) 2-tuple,
along with various auxiliary fields to record other information that may be computed by
dynamic analysis techniques. Representations of every Memory_Address in a suspicious
program's code segment are stored in a table in the MCT.

5.4 Memory Model for the Data Segment

Cells in a suspicious program's data memory can be represented by the same structures
as are used for its code, although at first it might appear that only the
(Memory_Address, Memory_Contents) fields are needed. These representations of its data
can be stored in the same table as the representations of its code. Named registers on
the target CPU are treated as a special case of data memory. The MCT interpreter
"allocates" data memory only as required by the dynamic behavior of the program (i.e.,
for the run time stack and local variables, and for memory that is explicitly allocated
dynamically via calls to malloc). We must also load the MCT interpreter with any
initialized data in the original executable machine code file, as well as any sections of
DOS we think the program might attempt to access directly (e.g., the interrupt vector
table).
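
A minimal sketch of such a per-address cell, in the same illustrative Common Lisp style as
above, might look as follows; the field names are ours, chosen to mirror the information
Sections 5.3 and 5.4 say must be kept.

;; Hypothetical per-address cell: the original 2-tuple plus auxiliary
;; fields filled in by dynamic analysis. Field names are illustrative.
(defstruct mem-cell
  address          ; original Memory_Address
  contents         ; original Memory_Contents (one byte)
  internal-form    ; translation, when this byte begins an instruction
  access-modes     ; modes seen so far, e.g. (:read :write :execute)
  last-writer)     ; address of the instruction that last modified it

;; One table holds code cells, data cells, and (by convention) the named
;; registers of the target CPU.
(defvar *address-space* (make-hash-table))

(defun cell-at (address)
  "Allocate a cell lazily, only when the program's dynamic behavior
   actually touches ADDRESS."
  (or (gethash address *address-space*)
      (setf (gethash address *address-space*)
            (make-mem-cell :address address :contents 0 :access-modes '()))))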

6 Experience Using the Malicious Code Testbed

The MCT is written in Common Lisp, and its execution of internal form code in a
simulated PC/DOS environment on a Unix workstation is several orders of magnitude
slower than genuine execution of the original machine code on a PC platform. Nevertheless,
because the security analyst can define events, and then leave the MCT to run
unassisted for long periods to watch for occurrences of those events, this time penalty
is acceptable.

In order to detect self-modifying code, we have included several predefined events in
a standard library for the MCT. These events record every memory access — attributes
such as the memory access mode (i.e., Read, Write, or eXecute), and the memory
address and contents. Thus, we can perform a relational join within this table, e.g.,
if a particular location has been modified by some instruction, we can determine the
address of the responsible instruction, and the contents of that instruction, even if they
have been modified subsequently. The overhead incurred by this recordkeeping is one
reason for the MCT's slow execution.
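
Sketched in the same illustrative Common Lisp as before, the essence of that check is an
access log joined against itself: an eXecute access to a location that some earlier tuple
records as written is reported together with the writer's address. The tuple layout and
report wording are ours, not the MCT's.

;; Illustrative access log: one tuple (mode address contents ip) per
;; simulated memory access, newest first.
(defvar *access-log* '())

(defun writer-of (address)
  "The most recent Write tuple touching ADDRESS, if any."
  (find-if (lambda (row)
             (and (eq (first row) :write) (= (second row) address)))
           *access-log*))

(defun record-access (mode address contents ip)
  "Append one access tuple; on an eXecute access to a previously written
   location, report which instruction was responsible for the change."
  (push (list mode address contents ip) *access-log*)
  (when (eq mode :execute)
    (let ((w (writer-of address)))
      (when w
        (format t "~&SELF-MOD-CODE: location ~4,'0X modified by address ~4,'0X~%"
                address (fourth w))))))
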
In implementing the MCT, we are extending the boundaries of our simulated DOS
environment incrementally, as necessitated by the demands of our test programs for
DOS/BIOS system services. Currently, the DOS system call interface is still somewhat
skeletal. Our simulation of the PC hardware is also fairly rudimentary; e.g., we do not
currently simulate the periodic clock tick interrupts, and thus we avoid their associated
processing time.

As mentioned, the MCT is an extendible, customizable environment. Thus, the
exact nature of the "display" it presents to the user is a matter of personal choice. In the
sample displays that follow, we utilize a highly verbose mode that, in most situations,
would present the security analyst with far more undigested, low-level information
than desired. Nevertheless, on occasion this level of detail is desirable, and is certainly
justified in this case on expository grounds.

The Malicious Code Testbed Displays a "Trace" of Program Execution

*** Initializing Low-DOS *** for Compaq 386, DOS 3.31 : 0x0000 - 0x1000

MCT will simulate — JMP 0x1195 at IP 0x1100 ::
— ((JUMP (T #x1195)))

MCT will simulate — CLD at IP 0x1195 ::
— ((WRITE.F DF #x0))

MCT will simulate — MOV AH, 0xE0 at IP 0x1196 ::
— ((WRITE.B AH (READ.B (CONST #xE0))))

MCT will simulate — INT 0x21 at IP 0x1198 ::
— ((LIB INT #x21))

MCT will simulate — CMP AH, 0xE0 at IP 0x119A ::
— ((WRITE.F AF (AUX (- (READ.B AH) (READ.B (CONST #xE0)))))
(WRITE.F OF (OVERFLOW (- (READ.B AH) (READ.B (CONST #xE0)))))
(WRITE.F PF (PARITY (- (READ.B AH) (READ.B (CONST #xE0)))))
(WRITE.F SF (SIGN (- (READ.B AH) (READ.B (CONST #xE0)))))
(WRITE.F ZF (ZERO (- (READ.B AH) (READ.B (CONST #xE0)))))
(WRITE.F CF (CARRY (- (READ.B AH) (READ.B (CONST #xE0))))))

MCT will simulate — JNC 0x11B5 at IP 0x119D ::
— ((JUMP ((=0 (READ.F CF)) #x11B5) (T #x119F)))

MCT will simulate — MOV AX, CS at IP 0x11B5 ::
— ((WRITE.W AX (READ.W CS)))

MCT will simulate — ADD AX, #x10 at IP 0x11B7 ::
— ((WRITE.W AX (+ (READ.W AX) (READ.W (CONST #x10)))))

The first example MCT display above provides the conceptual equivalent of a program
"trace", such as might be provided by a debugger. For each newly executing
instruction, the MCT displays the 8086 assembler mnemonic, the address of that
instruction, and its translation into our internal form language (which is then executed
by the Lisp interpreter).

In the next example, the security analyst has programmed the MCT to display
a more high-level message immediately upon detecting an occurrence of the event
named SELF-MOD-CODE. In this particular case, a memory location that was initially
accessed in modes Read, then Write, is subsequently accessed in eXecute mode. The
event code, written by the security analyst, notifies him after each subsequent eXecute
access, and also provides some higher-level information it has computed — namely,
which instruction was responsible for modifying the instruction the MCT just executed.

The Malicious Code Testbed Detects Self-Modifying Code

MCT will simulate — JMP-INTER-SEG 0x3FC 0x000 at IP 0x123D ::
— ((JUMP-ABS #x3FC #x0))

MCT will simulate — REP at IP 0x3FC ::
— ((PREFIX 'REP #x1))
SELF-MOD-CODE EVENT — Location 0x3FC modified by address 0x121B

MCT will simulate — REP MOVS at IP 0x3FD ::
— ((WRITE.W (ES-SHIFT (READ.W DI)) (READ.W (DS-SHIFT (READ.W SI))))
(WRITE.W SI (+ (READ.W SI) #x2)) (WRITE.W DI (+ (READ.W DI) #x2)))
SELF-MOD-CODE EVENT — Location 0x3FD modified by address 0x121B

MCT will simulate — REP MOVS at IP 0x3FD ::
— ((WRITE.W (ES-SHIFT (READ.W DI)) (READ.W (DS-SHIFT (READ.W SI))))
(WRITE.W SI (+ (READ.W SI) #x2)) (WRITE.W DI (+ (READ.W DI) #x2)))
SELF-MOD-CODE EVENT — Location 0x3FD modified by address 0x121B

In the case above, the MOVS instruction that was modified is being repeated
because of its prefix, REP. Thus, every time it repeats, the MCT displays the fact that
it detected another occurrence of the event, SELF-MOD-CODE. The security analyst
might decide to redefine this event so that when not in "verbose" mode (as determined
by examining a variable in the "outer" MCT environment), it will quietly record all
eXecute accesses to this location after the first Read, Write, eXecute sequence, but will
not announce those subsequent occurrences of the event immediately.

In the next example, after a section of malicious code has decrypted itself (not
shown), the decrypted code proceeds to read the realtime clock twice in rapid succession
to check whether it is being single-stepped under a debugger. If less than 1 second has
elapsed, it assumes it is not being watched, and then attempts a reboot.
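
An analyst who wanted the MCT to call out this anti-debugging idiom directly could define
an event for it. In the same hypothetical sketch style as Section 4, such an event might
look roughly like the following; the "rapid succession" threshold is chosen arbitrarily for
illustration.

;; Hypothetical analyst-defined event: two realtime-clock reads (INT 0x1A)
;; close together, the pattern used by the decrypted code below. The
;; threshold of 50 simulated instructions is arbitrary.
(defvar *clock-reads* '())   ; list of (instruction-count . ip), newest first

(defun note-clock-read (instruction-count ip)
  (push (cons instruction-count ip) *clock-reads*)
  (let ((previous (second *clock-reads*)))
    (when (and previous
               (< (- instruction-count (car previous)) 50))
      (format t "~&RAPID-CLOCK-CHECK: INT 1Ah at ~4,'0X and ~4,'0X within 50 instructions~%"
              (cdr previous) ip))))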

Decrypted Code Reads Clock to Check if Being Single-Stepped under Debugger;
If Not Being Watched, It Attempts a Reboot

MCT will simulate — MOV at IP 0x1035 ::
— ((WRITE.B AH (READ.B (CONST 0x2))))
SELF-MOD-CODE EVENT — Location 0x1035 modified by address 0x105C

MCT will simulate — INT at IP 0x1037 ::
— ((LIB INT 0x1A))
Program desires to read 24-hr realtime clock (base 10):
Enter hours: 11
Enter minutes: 15
Enter seconds: 03
Enter hundredths: 00
SELF-MOD-CODE EVENT — Location 0x1037 modified by address 0x105C

MCT will simulate — PUSH at IP 0x1039 ::
— ((WRITE.W SP (- (READ.W SP) 0x2))
(WRITE.W (SS-SHIFT (READ.W SP)) (READ.W DX)))
SELF-MOD-CODE EVENT — Location 0x1039 modified by address 0x105C

MCT will simulate — MOV at IP 0x103A ::
— ((WRITE.B AH (READ.B (CONST 0x2))))
SELF-MOD-CODE EVENT — Location 0x103A modified by address 0x105C

MCT will simulate — INT at IP 0x103C ::
— ((LIB INT 0x1A))
Program desires to read 24-hr realtime clock (base 10):
Enter hours: 11
Enter minutes: 15
Enter seconds: 03
Enter hundredths: 00
SELF-MOD-CODE EVENT — Location 0x103C modified by address 0x105C

MCT will simulate — POP at IP 0x103E ::
— ((WRITE.W AX (READ.W (SS-SHIFT (READ.W SP))))
(WRITE.W SP (+ (READ.W SP) 0x2)))
SELF-MOD-CODE EVENT — Location 0x103E modified by address 0x105C

MCT will simulate — CMP at IP 0x103F ::
— ((WRITE.F AF (AUX (- (READ.B DH) (READ.B AH))))
(WRITE.F CF (CARRY (- (READ.B DH) (READ.B AH))))
(WRITE.F OF (OVERFLOW (- (READ.B DH) (READ.B AH))))
(WRITE.F PF (PARITY (- (READ.B DH) (READ.B AH))))
(WRITE.F SF (SIGN (- (READ.B DH) (READ.B AH))))
(WRITE.F ZF (ZERO (- (READ.B DH) (READ.B AH)))))
SELF-MOD-CODE EVENT — Location 0x103F modified by address 0x105C

MCT will simulate — JNZ at IP 0x1041 ::
— ((JUMP ((=0 (READ.F ZF)) 0x1063) (T 0x1043)))
SELF-MOD-CODE EVENT — Location 0x1041 modified by address 0x105C

MCT will simulate — AND at IP 0x1043 ::
— ((WRITE.F AF UNDEF) (WRITE.F CF 0x0) (WRITE.F OF 0x0)
(WRITE.B CL (LOGAND (READ.B CL) (READ.B (CONST 0x1))))
(WRITE.F PF (PARITY (READ.B CL))) (WRITE.F SF (SIGN (READ.B CL)))
(WRITE.F ZF (ZERO (READ.B CL))))
SELF-MOD-CODE EVENT — Location 0x1043 modified by address 0x105C

MCT will simulate — CMP at IP 0x1046 ::
— ((WRITE.F AF (AUX (- (READ.B CL) (READ.B (CONST 0x1)))))
(WRITE.F CF (CARRY (- (READ.B CL) (READ.B (CONST 0x1)))))
(WRITE.F OF (OVERFLOW (- (READ.B CL) (READ.B (CONST 0x1)))))
(WRITE.F PF (PARITY (- (READ.B CL) (READ.B (CONST 0x1)))))
(WRITE.F SF (SIGN (- (READ.B CL) (READ.B (CONST 0x1)))))
(WRITE.F ZF (ZERO (- (READ.B CL) (READ.B (CONST 0x1))))))
SELF-MOD-CODE EVENT — Location 0x1046 modified by address 0x105C

MCT will simulate — JNZ at IP 0x1049 ::
— ((JUMP ((=0 (READ.F ZF)) 0x1063) (T 0x104B)))
SELF-MOD-CODE EVENT — Location 0x1049 modified by address 0x105C

MCT will simulate — INT at IP 0x104B ::
— ((LIB INT 0x19))
Program attempting a Reboot — OK to proceed?
SELF-MOD-CODE EVENT — Location 0x104B modified by address 0x105C

7 Future Research

One direction to pursue is to focus on the so-called "polymorphic" viruses that, by
using unknown "Mutation Engines", can easily evade static scanners. Our event-based
dynamic analysis techniques should be able to handle all polymorphic "mutants", since
all polymorphic variants of a given virus should share a common behavioral profile —
a common dynamic canonical form.

We also plan to return to implementing one of our original design goals for the
MCT, namely integrating static analysis tools [1] into the common MCT environment.
We feel there is great potential for the complementary use of these two families of
analysis techniques, leading ultimately to the development of a more rigorous detection
methodology.

It may be that a two-tier detection scheme will be warranted — an efficient-running,
coarse-grain event filter that could quickly screen large sections of code, and a slower,
more thorough, fine-grain event mesh for especially critical systems or software.

Acknowledgements

We thank Doug Mansur for his valuable insights.

References

[1] R. Crawford, R. Lo, J. Crossley, G. Fink, P. Kerchen, W. Ho, K. Levitt, R. Olsson,
M. Archer. "A Testbed for Malicious Code Detection: A Synthesis of Static and Dynamic
Analysis Techniques", Secure Networks — Proceedings: Fifth International
Computer Virus & Security Conference, March 1992, pp. 225-236.

[2] F. Cohen. "Computer Viruses — Theory and Experiments", Computers & Security,
Vol. 6, 1987, pp. 22-35.

[3] L. Adleman. "An Abstract Theory of Computer Viruses" (abstract), CRYPTO '88.

[4] M. Harrison, W. Ruzzo, and J. Ullman. "Protection in Operating Systems", CACM,
Vol. 19, No. 8, Aug. 1976, pp. 461-471.

[5] K. Thompson. "Reflections on Trusting Trust", Comm. ACM, Vol. 27, No. 8, 1984,
pp. 761-763.

[6] F. Cohen. "A Cryptographic Checksum for Integrity Protection", Computers &
Security, Vol. 6, 1987, pp. 505-510.

[7] Y. Lapid, N. Ahituv, and S. Neumann. "Approaches to Handling 'Trojan Horse'
Threats", Computers & Security, Vol. 5, 1986, pp. 251-256.

[8] A. Mahmood and E. J. McCluskey. "Concurrent Error Detection Using Watchdog
Processors — A Survey", IEEE Transactions on Computers, Vol. 37, No. 2, 1988,
pp. 160-174.