(12) Patent Application Publication (10) Pub. No.: US 2011/0023115 A1
Wright (43) Pub. Date: Jan. 27, 2011

(54) HOST INTRUSION PREVENTION SYSTEM USING SOFTWARE AND USER BEHAVIOR ANALYSIS

(76) Inventor: Clifford C. Wright, Oxfordshire (GB)

Correspondence Address:
STRATEGIC PATENTS P.C.
C/O PORTFOLIOIP, P.O. BOX 52050
MINNEAPOLIS, MN 55402 (US)

(21) Appl. No.: 12/750,840

(22) Filed: Mar. 31, 2010

Related U.S. Application Data

(63) Continuation-in-part of application No. 12/506,749, filed on Jul. 21, 2009.

Publication Classification

(51) Int. Cl.
     G06F 21/00 (2006.01)
(52) U.S. Cl. .......................................................... 726/22
`
(57) ABSTRACT

In embodiments of the present invention, improved capabilities are described for threat detection using a behavioral-based host-intrusion prevention method and system for monitoring a user interaction with a computer, software application, operating system, graphic user interface, or some other component or client of a computer network, and performing an action to protect the computer network based at least in part on the user interaction and a computer code process executing during or in association with a computer usage session.
`
[FIG. 1 (representative drawing): threat management facility 100, including security management 122, policy management 112, definitions 114, network access rules 124, detection techniques 130, threat research 132, testing 118, remedial actions 128, and updates 120; website 158, email 160, IM 162, VoIP 164, and apps 168; network threats 104; secondary location threats 108 with firewall 138B, appliance 140B, and clients 144C, 144D; physical proximity threats 110; client 144F.]
`
`
`
`
`
`
[FIG. 2 (Sheet 2 of 5): block diagram of a security management facility, including behavior monitor, gene ID, action, store code, and content analysis elements.]
`
`
`
`
`
`
[FIG. 4 (Sheet 4 of 5): block diagram including code gene, gene collection, store code, and remedial action elements.]
`
`
`
`
HOST INTRUSION PREVENTION SYSTEM USING SOFTWARE AND USER BEHAVIOR ANALYSIS
`
CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application is a continuation-in-part of the following commonly owned U.S. patent application: Ser. No. 12/506,749, filed on Jul. 21, 2009 and entitled "BEHAVIORAL-BASED HOST INTRUSION PREVENTION SYSTEM," which is incorporated herein by reference in its entirety.
`
`BACKGROUND
`
[0002] 1. Field
[0003] The present invention is related to behavioral-based threat detection.
[0004] 2. Description of the Related Art
[0005] There exist within the art of anti-malware many methods for inspection of the run-time behaviors of applications, executables, and processes. Based upon the behaviors of said objects, policy decisions can be made about the legitimacy of the application, and execution halted and changes reversed to prevent damage, unauthorized access, and so on. Behavioral inspection is used as a means of malware detection due to the increasing propensity for malware authors to obfuscate and randomize content, making conventional deterministic content-based analysis mechanisms increasingly ineffective. Existing behavioral monitoring systems have been provided with a database of actions and resources that are blacklisted and indicate malicious intent. During run time, if a given process, application, or executable (perhaps being manipulated while the content of the executable itself is legitimate, or where interpreting malicious data causes unintended and malicious behavior in a legitimate entity) performs any one of the actions in the list of negative actions, the process may be identified as malicious and terminated, or the administrator alerted. To avoid false positives, these existing implementations may also have exception lists for rules based on known processes and behaviors.
[0006] Behavioral inspection is already a cornerstone of modern malware protection; however, these implementations are limited in their ability to remediate and to differentiate between minor events and definitive malicious activity exhibited by a known class of malware. This makes driving simple remediation challenging, as the process can be killed, but any other potential files and the like remain untouched (and unassociated with the event).
`
`SUMMARY
[0007] In embodiments of the present invention, a behavioral-based host-intrusion prevention method and system may be used for monitoring a user interaction with a computer, software application, operating system, graphic user interface, or some other component of a computer (e.g. a client) within a computer network, and performing an action to protect the computer and/or computer network based at least in part on the user interaction. Such protection may be provided based at least in part by monitoring a user interaction with a computer, and/or computer network client device, during a usage session for an indication of a user behavior and monitoring a computer code process executing during the usage session for an indication of a code operation. The indication of the user behavior may be a result of comparing the user interaction, or plurality of user interactions, with a predetermined behavior (hereinafter, a "behavioral gene"). For example, markers of user behaviors with a computer or client device of a computer network may be monitored, recorded, stored, classified, and/or analyzed based at least in part on the indicia of behavior, and data relating to the behavior, that are created by the user behavior. For example, a user clicking on a link may be detected, recorded, stored, classified, and/or analyzed. Other sample behaviors may include, but are not limited to, a user scrolling down a page, opening or closing a window, clicking on a link on a webpage, downloading a file, saving a file, running a file, typing a keyword, or some other user interaction. For each of the user behaviors occurring within the computer network, or derived from behaviors recorded from interaction with another computer network, a behavioral gene may be stored in a database that is associated with the computer network in which the data relating to the user behavior is recorded and stored. For example, a behavioral gene called "Internet download" may be created and stored in association with the computer network. This behavioral gene may summarize the user behaviors of 1) executing a hyperlink on the Internet, and 2) performing a "save as" action in which a file is saved to a location in the computer network, such as a hard drive. A single user behavior (e.g., closing an application) may be recorded and stored as a behavioral gene. Alternatively, a plurality of user behaviors may be recorded within a single behavioral gene. The user behaviors embodied in a behavioral gene may be derived from a user's single usage session, for example, one instance of running an Internet browser, or may be derived from multiple usage sessions that are temporally distinct, such as a first usage session in which an Internet browser is used, and a second usage session in which a file is opened that was downloaded to the user's hard drive during the first usage session.
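By way of a non-limiting illustration that is not part of the original disclosure, the following Python sketch shows one way user interactions recorded during a usage session might be rolled up into a named behavioral gene such as "Internet download"; the class names, interaction kinds, and gene definition are hypothetical.

# Illustrative sketch only: hypothetical structures for recording user
# interactions and deriving a behavioral gene such as "Internet download".
from dataclasses import dataclass
from typing import List

@dataclass
class UserInteraction:
    session_id: str   # usage session in which the interaction occurred
    kind: str         # e.g. "click_link", "save_as", "close_app"
    detail: str = ""  # e.g. the URL clicked or the file path saved

@dataclass
class BehavioralGene:
    name: str                  # e.g. "Internet download"
    required_kinds: List[str]  # interactions that together make up the gene

    def matches(self, interactions: List[UserInteraction]) -> bool:
        observed = {i.kind for i in interactions}
        return all(k in observed for k in self.required_kinds)

# A gene summarizing 1) executing a hyperlink and 2) performing a "save as".
INTERNET_DOWNLOAD = BehavioralGene("Internet download", ["click_link", "save_as"])

session = [
    UserInteraction("s1", "click_link", "http://example.com/file.exe"),
    UserInteraction("s1", "save_as", "C:/Users/alice/file.exe"),
]
print(INTERNET_DOWNLOAD.matches(session))  # True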
[0008] In embodiments, the indication of the code operation may be a result of comparing an operation with a predetermined code behavior (hereinafter, a "code gene"). For example, a code gene may be a virus definition, a malware definition, an adware definition, or some other code behavior. The behavioral gene and the code gene may be stored for reference in a database. The monitoring of the computer code process executing during the usage session may be performed one or more times to collect a plurality of code genes. The steps of identifying a behavioral gene and a code gene may be independent steps. A combination of one or more behavioral genes may be combined with one or more code genes and the combination compared to a predetermined collection of user behavior-code operation indications ("phenotypes"). The phenotypes may comprise a grouping of specific behavioral and code gene combinations that are typically present in a type of malicious usage session or operation with a computer. The identification of a phenotype that is indicative of and/or associated with a malicious computer operation may cause an action, such as an intrusion preventive action, to occur within the computer network. A phenotype may comprise one behavioral gene and a plurality of code genes, a plurality of behavioral genes and one code gene, or some other combination of behavioral genes and code genes. In some embodiments, a phenotype may comprise a combination of one or more behavioral genes and one or more code genes, including embodiments in which there is a temporal disparity between the identification of the behavioral gene and the identification of the code gene.
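As a further non-limiting illustration outside the original disclosure, the sketch below shows how collected behavioral and code genes might be compared against a predetermined phenotype; the phenotype definition and gene names are hypothetical.

# Illustrative sketch only: matching observed behavioral and code genes against
# a predetermined phenotype (a gene combination typical of malicious activity).
from typing import Set, Dict

def phenotype_detected(behavioral_genes: Set[str],
                       code_genes: Set[str],
                       phenotype: Dict[str, Set[str]]) -> bool:
    """Return True when every gene named by the phenotype has been observed."""
    return (phenotype["behavioral"] <= behavioral_genes and
            phenotype["code"] <= code_genes)

# Hypothetical phenotype: an Internet download followed by code that sets a
# run key and disables the firewall in the registry.
DOWNLOADED_BACKDOOR = {
    "behavioral": {"Internet download"},
    "code": {"sets run key in registry", "disables firewall in registry"},
}

observed_behavior = {"Internet download"}
observed_code = {"sets run key in registry", "disables firewall in registry"}
if phenotype_detected(observed_behavior, observed_code, DOWNLOADED_BACKDOOR):
    print("intrusion preventive action triggered")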
`
`
[0009] In embodiments, the step of causing an action, such as an intrusion prevention action, may include creating an audit record. The creation of an audit record may include updating a log file. In some embodiments, the step of causing an action may include, at least in part, forcing the usage session to occur within a restricted operating environment (e.g. in a sandbox, etc.). In embodiments, the step of causing an action may include, at least in part, preventing a process that would cause a failure, including, but not limited to, a computer system failure, a device failure, a network state failure, or some other failure. The step of causing an action may include, at least in part, isolating, via quarantine, the computer on which the user interaction occurs and limiting network access to or from the computer. The step of causing an action may include, at least in part, reverting a computer software state back to a prior state that is known to be acceptable. The step of causing an action may also include, at least in part, disconnecting a device, such as a peripheral, a thumb drive, a Zip drive, a disk drive, or some other device.
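For illustration only, and not as part of the original disclosure, the following sketch maps a detected phenotype to the kinds of intrusion prevention actions listed above; the action names and messages are hypothetical placeholders.

# Illustrative sketch only: a hypothetical dispatcher for the intrusion
# prevention actions described above (audit record, sandbox, quarantine,
# revert to a known-acceptable state, disconnect a device).
import logging

logging.basicConfig(filename="audit.log", level=logging.INFO)

def take_action(action: str, host: str) -> None:
    # Creating an audit record may include updating a log file.
    logging.info("phenotype detected on %s; action=%s", host, action)
    if action == "sandbox":
        print(f"forcing the usage session on {host} into a restricted environment")
    elif action == "quarantine":
        print(f"limiting network access to and from {host}")
    elif action == "revert":
        print(f"reverting {host} to a prior state known to be acceptable")
    elif action == "disconnect_device":
        print(f"disconnecting removable devices attached to {host}")

take_action("quarantine", "client-144c")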
[0010] In embodiments, the user interaction with a computer may include, but is not limited to, executing a link, executing an Internet link, executing an intranet link, downloading a content item from a server, forwarding a URL, forwarding an email, opening an application, closing an application, updating an application, patching an application, bookmarking a website, placing a VoIP call, adding an item to a shopping cart, entering a financial datum in a field, making a purchase, using a webcam, creating an application macro, creating an application shortcut, entering a novel keystroke combination, remotely accessing a computer, submitting a password, attaching a peripheral device, a temporal measure (e.g., time spent on a page, or time between a first and second user action), a change in a network behavior, a change in a firewall setting, a bridging of two previously unconnected networks, or some other type of user interaction.
[0011] In embodiments, a user interaction with a computer may include an indicator of inactivity, such as activation of a screen saver or hibernation process.
[0012] In embodiments, a user interaction may occur on a keyboard, a mouse, a display device, a biometric reader, a camera, a scanning device, a printer, a telecommunication component, or some other device or client of a computer network. The device or client of the computer network may be remotely associated with the computer network, for example a cell phone that is used to remotely access and interact with the computer network. The methods and systems of the present invention may distinguish between behaviors occurring on remote devices and clients and those behaviors that are occurring on devices local to the computer network, such as wired devices and clients.
[0013] In embodiments, an indication of a code operation may include, but is not limited to, an automatic code behavior, such as a system startup, login, scheduled job process, system service, or some other type of code operation.
[0014] The gene may be at least one of: disables OS tools in a registry, disables a firewall in a registry, adds itself to a firewall authorized applications list in a registry, injects code into an application, copies itself to a system folder, sets a run key to itself in a registry, sets a second run key to itself in a registry in a different location, and opens a hidden file with write access. The gene may be at least one of a system modification and a behavior of a process. The phenotype may be a combination of more than one gene.
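Purely as an illustrative, non-limiting sketch not taken from the original disclosure, the genes enumerated above might be represented as follows; the enumeration and the example phenotype are hypothetical.

# Illustrative sketch only: the code genes listed above as a hypothetical
# enumeration, with a phenotype formed from a combination of more than one gene.
from enum import Enum, auto

class CodeGene(Enum):
    DISABLES_OS_TOOLS_IN_REGISTRY = auto()
    DISABLES_FIREWALL_IN_REGISTRY = auto()
    ADDS_SELF_TO_FIREWALL_AUTHORIZED_APPS = auto()
    INJECTS_CODE_INTO_APPLICATION = auto()
    COPIES_SELF_TO_SYSTEM_FOLDER = auto()
    SETS_RUN_KEY_TO_SELF = auto()
    SETS_SECOND_RUN_KEY_IN_DIFFERENT_LOCATION = auto()
    OPENS_HIDDEN_FILE_WITH_WRITE_ACCESS = auto()

# A phenotype may be a combination of more than one gene.
SELF_INSTALLING_MALWARE = {
    CodeGene.COPIES_SELF_TO_SYSTEM_FOLDER,
    CodeGene.SETS_RUN_KEY_TO_SELF,
}

observed = {CodeGene.COPIES_SELF_TO_SYSTEM_FOLDER,
            CodeGene.SETS_RUN_KEY_TO_SELF,
            CodeGene.OPENS_HIDDEN_FILE_WITH_WRITE_ACCESS}
print(SELF_INSTALLING_MALWARE <= observed)  # True: phenotype matched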
`
[0015] These and other systems, methods, objects, features, and advantages of the present invention will be apparent to those skilled in the art from the following detailed description of the preferred embodiment and the drawings. All documents mentioned herein are hereby incorporated in their entirety by reference.
`
`BRIEF DESCRIPTION OF THE FIGURES
[0016] The invention and the following detailed description of certain embodiments thereof may be understood by reference to the following figures:
[0017] FIG. 1 depicts a block diagram of a threat management facility providing protection to an enterprise against a plurality of threats.
[0018] FIG. 2 depicts a block diagram of a security management facility.
[0019] FIG. 3 depicts a flowchart for behavioral-based threat detection.
[0020] FIG. 4 depicts a block diagram of a security management facility based at least in part on monitoring of behavioral and code genes and associated phenotypes.
[0021] FIG. 5 depicts a flowchart summarizing an embodiment of monitoring user interactions with a computer as part of a security management facility.
[0022] While the invention has been described in connection with certain preferred embodiments, other embodiments would be understood by one of ordinary skill in the art and are encompassed herein.
[0023] All documents referenced herein are hereby incorporated by reference.
`
`DETAILED DESCRIPTION
[0024] FIG. 1 depicts a block diagram of a threat management facility providing protection to an enterprise against a plurality of threats. An aspect of the present invention relates to corporate policy management and implementation through a unified threat management facility 100. As will be explained in more detail below, a threat management facility 100 is used to protect computer assets from many threats, both computer-generated threats and user-generated threats. The threat management facility 100 is multi-dimensional in that it is designed to protect corporate assets from a variety of threats, and it is adapted to learn about threats in one dimension (e.g. worm detection) and apply the knowledge in another dimension (e.g. spam detection). Corporate policy management is one of the dimensions that the threat management facility can control. The corporation may institute a policy that prevents certain people (e.g. employees, groups of employees, types of employees, guests of the corporation, etc.) from accessing certain types of computer programs. For example, the corporation may elect to prevent its accounting department from using a particular version of an instant messaging service or all such services. In this example, the policy management facility 112 may be used to update the policies of all corporate computing assets with a proper policy control facility, or it may update a select few. By using the threat management facility 100 to facilitate the setting, updating, and control of such policies, the corporation only needs to be concerned with keeping the threat management facility 100 up to date on such policies. The threat management facility 100 can take care of updating all of the other corporate computing assets.
[0025] It should be understood that the threat management facility 100 may provide multiple services, and policy management may be offered as one of the services. We will now turn to a description of the threat management system 100.
[0026] Over recent years, malware has become a major problem across the internet 154. From both technical and user perspectives, the categorization of a specific threat type, whether as virus, worm, spam, phishing exploration, spyware, adware, or the like, is becoming reduced in significance. The threat, no matter how it is categorized, may need to be stopped at all points of the enterprise facility 102, including laptop, desktop, server facility 142, gateway, and the like. Similarly, there may be less and less benefit to the user in having different solutions for known and unknown threats. As such, a consolidated threat management facility 100 may need to apply the same set of technologies and capabilities to all threats. The threat management facility 100 may provide a single agent on the desktop, and a single scan of any suspect file. This approach may eliminate the inevitable overlaps and gaps in protection caused by treating viruses and spyware as separate problems, while simultaneously simplifying administration and minimizing desktop load. As the number and range of types of threats has increased, so may have the level of connectivity available to all IT users. This may have led to a rapid increase in the speed at which threats may move. Today, an unprotected PC connected to the internet 154 may be infected quickly (perhaps within 10 minutes), which may require acceleration of the delivery of threat protection. Where once monthly updates may have been sufficient, the threat management facility 100 may automatically and seamlessly update its product set against spam and virus threats quickly, for instance, every five minutes, every minute, continuously, or the like. Analysis and testing may be increasingly automated, and also may be performed more frequently; for instance, it may be completed in 15 minutes, and may do so without compromising quality. The threat management facility 100 may also extend techniques that may have been developed for virus and malware protection, and provide them to enterprise facility 102 network administrators to better control their environments. In addition to stopping malicious code, the threat management facility 100 may provide policy management that may be able to control legitimate applications, such as VoIP, instant messaging, peer-to-peer file-sharing, and the like, that may undermine productivity and network performance within the enterprise facility 102.
[0027] The threat management facility 100 may provide an enterprise facility 102 protection from computer-based malware, including viruses, spyware, adware, Trojans, intrusion, spam, policy abuse, uncontrolled access, and the like, where the enterprise facility 102 may be any entity with a networked computer-based infrastructure. In an embodiment, FIG. 1 may depict a block diagram of the threat management facility providing protection to an enterprise against a plurality of threats. The enterprise facility 102 may be corporate, commercial, educational, governmental, or the like, and the enterprise facility's 102 computer network may be distributed amongst a plurality of facilities, and in a plurality of geographical locations. The threat management facility 100 may include a plurality of functions, such as security management facility 122, policy management facility 112, update facility 120, definitions facility 114, network access rules facility 124, remedial action facility 128, detection techniques facility 130, testing facility 118, threat research facility 132, and the like. In embodiments, the threat protection provided by the threat management facility 100 may extend beyond the network boundaries of the enterprise facility 102 to include client facilities 144 that have moved into network connectivity not directly associated with or controlled by the enterprise facility 102. Threats to enterprise facility 102 client facilities 144 may come from a plurality of sources, such as from network threats 104, physical proximity threats 110, secondary location threats 108, and the like. In embodiments, the threat management facility 100 may provide an enterprise facility 102 protection from a plurality of threats to multiplatform computer resources in a plurality of locations and network configurations, with an integrated system approach.
[0028] In embodiments, the threat management facility 100 may be provided as a stand-alone solution. In other embodiments, the threat management facility 100 may be integrated into a third-party product. An application programming interface (e.g. a source code interface) may be provided such that the threat management facility 100 may be integrated. For instance, the threat management facility 100 may be stand-alone in that it provides direct threat protection to an enterprise or computer resource, where protection is subscribed to directly from the threat management facility 100. Alternatively, the threat management facility 100 may offer protection indirectly, through a third-party product, where an enterprise may subscribe to services through the third-party product, and threat protection to the enterprise may be provided by the threat management facility 100 through the third-party product.
[0029] The security management facility 122 may include a plurality of elements that provide protection from malware to enterprise facility 102 computer resources, including endpoint security and control, email security and control, web security and control, reputation-based filtering, control of unauthorized users, control of guest and non-compliant computers, and the like. The security management facility 122 may be a software application that may provide malicious code and malicious application protection to a client facility 144 computing resource. The security management facility 122 may have the ability to scan the client facility 144 files for malicious code, remove or quarantine certain applications and files, prevent certain actions, perform remedial actions, and perform other security measures. In embodiments, scanning the client facility 144 may include scanning some or all of the files stored to the client facility 144 on a periodic basis, may include scanning applications once the application has been requested to execute, may include scanning files as the files are transmitted to or from the client facility 144, or the like. The scanning of the applications and files may be to detect known malicious code or known unwanted applications. In an embodiment, new malicious code and unwanted applications may be continually developed and distributed, and updates to the known code database may be provided on a periodic basis, on a demand basis, on an alert basis, or the like.
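As a non-limiting sketch that is not part of the original disclosure, the fragment below illustrates periodic and on-execution scanning of client files against a known-code database; the hash set and function names are hypothetical.

# Illustrative sketch only: hypothetical triggers for scanning client files on
# a periodic basis or when an application is requested to execute.
import hashlib
from pathlib import Path

KNOWN_MALICIOUS_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # hypothetical entry

def scan_file(path: Path) -> bool:
    """Return True when the file matches known malicious code."""
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    return digest in KNOWN_MALICIOUS_HASHES

def on_execute_request(path: Path) -> bool:
    # Scan an application once it has been requested to execute.
    if scan_file(path):
        print(f"blocking execution of {path}")
        return False
    return True

def periodic_scan(folder: Path) -> None:
    # Scan some or all of the stored files on a periodic basis.
    for f in folder.rglob("*"):
        if f.is_file() and scan_file(f):
            print(f"quarantining {f}")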
[0030] In an embodiment, the security management facility 122 may provide for email security and control, where security management may help to eliminate spam, viruses, spyware, and phishing, control email content, and the like. The security management facility's 122 email security and control may protect against inbound and outbound threats, protect email infrastructure, prevent data leakage, provide spam filtering, and the like. In an embodiment, the security management facility 122 may provide for web security and control, where security management may help to detect or block viruses, spyware, malware, and unwanted applications, help control web browsing, and the like, which may provide comprehensive web access control enabling safe, productive web browsing. Web security and control may provide internet use policies, reporting on suspect devices, security and content filtering, active monitoring of network traffic, URI filtering, and the like. In an embodiment, the security management facility 122 may provide for network access control, which may provide control over network connections. Network control may stop unauthorized, guest, or non-compliant systems from accessing networks, and may control network traffic that may not be bypassed from the client level. In addition, network access control may control access to virtual private networks (VPN), where a VPN may be a communications network tunneled through another network, establishing a logical connection acting as a virtual network. In embodiments, a VPN may be treated in the same manner as a physical network.
[0031] In an embodiment, the security management facility 122 may provide for host intrusion prevention through behavioral-based protection, which may guard against unknown threats by analyzing behavior before software code executes. Behavioral-based protection may monitor code when it runs and intervene if the code is deemed to be suspicious or malicious. Advantages of behavioral-based protection over runtime protection may include code being prevented from running, whereas runtime protection may only interrupt code that has already partly executed; behavioral protection may identify malicious code at the gateway or on the file servers and delete it before it reaches end-point computers, and the like.
[0032] In an embodiment, the security management facility 122 may provide for reputation filtering, which may target or identify sources of known malware. For instance, reputation filtering may include lists of URIs of known sources of malware, or known suspicious IP addresses or domains, say for spam, that when detected may invoke an action by the threat management facility 100, such as dropping them immediately. By dropping the source before any interaction can initiate, potential threat sources may be thwarted before any exchange of data can be made.
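By way of a non-limiting illustration outside the original disclosure, a reputation filter of the kind described above might be sketched as follows; the domain, IP address, and list contents are hypothetical examples.

# Illustrative sketch only: a hypothetical reputation filter that drops a
# connection from a known suspicious domain or IP address before any data
# is exchanged.
BAD_DOMAINS = {"malware.example", "spam.example"}  # hypothetical block list
BAD_IPS = {"203.0.113.7"}                          # documentation-range address

def allow_connection(remote_host: str, remote_ip: str) -> bool:
    """Return False (drop immediately) when the source has a bad reputation."""
    return remote_host not in BAD_DOMAINS and remote_ip not in BAD_IPS

print(allow_connection("malware.example", "198.51.100.20"))  # False: dropped
print(allow_connection("news.example", "198.51.100.20"))     # True: allowed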
[0033] In embodiments, information may be sent from the enterprise back to a third party, a vendor, or the like, which may lead to improved performance of the threat management facility 100. For example, the types, times, and number of virus interactions that a client experiences may provide useful information for the prevention of future virus threats. This type of feedback may be useful for any aspect of threat detection. Feedback of information may also be associated with behaviors of individuals within the enterprise, such as being associated with the most common violations of policy, network access, unauthorized application loading, unauthorized external device use, and the like. In embodiments, this type of information feedback may enable the evaluation or profiling of client actions that are violations of policy, which may provide a predictive model for the improvement of enterprise policies.
[0034] In an embodiment, the security management facility 122 may provide for the overall security of the enterprise facility 102 network or set of enterprise facility 102 networks, and may provide updates of malicious code information to the enterprise facility 102 network and associated client facilities 144. The updates may be a planned update, an update in reaction to a threat notice, an update in reaction to a request for an update, an update based on a search of known malicious code information, or the like. The administration facility 134 may provide control over the security management facility 122 when updates are performed. The updates may be automatically transmitted without the administration facility's 134 direct control, manually transmitted by the administration facility 134, or the like. The security management facility 122 may include the management of receiving malicious code descriptions from a provider, distribution of malicious code descriptions to enterprise facility 102 networks, distribution of malicious code descriptions to client facilities 144, or the like. In an embodiment, the management of malicious code information may be provided to the enterprise facility's 102 network, where the enterprise facility's 102 network may provide the malicious code information through the enterprise facility's 102 network distribution system.
[0035] The threat management facility 100 may provide a policy management facility 112 that may be able to block non-malicious applications, such as VoIP 164, instant messaging 162, peer-to-peer file-sharing, and the like, that may undermine productivity and network performance within the enterprise facility 102. The policy management facility 112 may be a set of rules or policies that may indicate enterprise facility 102 access permissions for the client facility 144, such as access permissions associated with the network, applications, external computer devices, and the like. The policy management facility 112 may include a database, a text file, a combination of databases and text files, or the like. In an embodiment, a policy database may be a block list, a black list, an allowed list, a white list, or the like, that may provide a list of enterprise facility 102 external network locations/applications that may or may not be accessed by the client facility 144. The policy management facility 112 may include rules that may be interpreted with respect to an enterprise facility 102 network access request to determine if the request should be allowed. The rules may provide a generic rule for the type of access that may be granted; the rules may be related to the policies of an enterprise facility 102 for access rights for the enterprise facility's 102 client facility 144. For example, there may be a rule that does not permit access to sporting websites. When a website is requested by the client facility 144, a security facility may access the rules within a policy facility to determine if the requested access is related to a sporting website. In an embodiment, the security facility may analyze the requested website to determine if the website matches any of the policy facility rules.
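As a final non-limiting illustration not drawn from the original disclosure, the sketch below shows a policy facility rule check of the kind described above, using a category block list; the categories and the site-to-category lookup are hypothetical.

# Illustrative sketch only: a hypothetical policy rule check deciding whether
# a client facility's network access request should be allowed.
BLOCKED_CATEGORIES = {"sporting", "instant_messaging"}  # hypothetical policy

SITE_CATEGORIES = {                                     # hypothetical lookup
    "scores.example": "sporting",
    "mail.example": "email",
}

def access_allowed(requested_site: str) -> bool:
    """Return False when the requested website matches a blocked policy rule."""
    category = SITE_CATEGORIES.get(requested_site, "uncategorized")
    return category not in BLOCKED_CATEGORIES

print(access_allowed("scores.example"))  # False: sporting websites not permitted
print(access_allowed("mail.example"))    # True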
[0036] The policy management facility 112 may be similar to the security management facility 122, but with the addition of enterprise facility 102 wide access rules and policies that may be distributed to maintain control of client facility 144 access to enterprise facility 102 network resources. The policies may be defined for application type, subset of application capabilities, organization hierarchy, computer facility type, user type, network location, time of day, connection type, or the like. Policies may be maintained by the administration facility 134, through the threat management facility 100, in association with a third party, or the like. For example, a policy may restrict IM 162 activity to only support personnel for communicating with customers. This may allow communication for departments requiring access, but may maintain the network bandwidth for other activities by restricting the use of IM 162 to only the personnel that need access to IM 162 in support of the enterprise facility 102. In an embodiment, the policy management facility 112 may be a stand-alone application, may be part of the network server facility 142, may be part of the enterprise facility 102 network, may be part of the client facility 144, or the like.
[0037] In embodiments, the threat management facility 100 may provide configuration management, which may be similar to policy management, but may specifically examine