Couper et al.

(10) Patent No.:     US 9,135,797 B2
(45) Date of Patent: Sep. 15, 2015
`
`(54) AUDIO DETECTION USING DISTRIBUTED
`MOBILE COMPUTING
`
`(75) Inventors: Christopher C. Couper, Shingle
`Springs, CA (US); Neil A. Katz,
`Parkland, FL (US); Victor S. Moore,
`Lake City, FL (US)
`
`(73) Assignee: INTERNATIONAL BUSINESS
`MACHINES CORPORATION,
`Armonk, NY (US)
(*) Notice:  Subject to any disclaimer, the term of this
             patent is extended or adjusted under 35
             U.S.C. 154(b) by 1046 days.

(21) Appl. No.: 11/616,973

(22) Filed:     Dec. 28, 2006
`
`(65)
`
`Prior Publication Data
`US 2008/O162133 A1
`Jul. 3, 2008
`
`(2013.01)
`(2013.01)
`3.08:
`(2006.01)
`(2006.01)
`(2013.01)
`(2006.01)
`(2006.01)
`(2006.01)
`(2006.01)
`
`(51) Int. Cl.
`GOL 5/00
`GIOL 2L/00
`E. So
`G08B I3/16
`GOL 5/20
`GIOL 2L/0208
`GSB 25/08
`GSB 2.5/10
`HO4R5/027
`HO4R5/04
`(52) U.S. Cl.
     CPC ........ G08B 13/1672 (2013.01); G08B 25/08
          (2013.01); G08B 25/10 (2013.01); G10L 15/00
          (2013.01); G10L 15/20 (2013.01); G10L 21/00
          (2013.01); G10L 21/0208 (2013.01); H04R 5/00
          (2013.01); H04R 5/027 (2013.01); H04R 5/04
          (2013.01)
`
(58) Field of Classification Search
     USPC ........ 704/231, 236–240, 243–250, 270,
          704/273–274, E17.001–E17.016,
          704/E15.001–E15.05, E11.001–E11.007;
          381/1, 17–20, 300–307, 56–60, 71.1–71.14
     See application file for complete search history.
`
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`
`5,787,399 A * 7/1998 Lee et al. ...................... 704/27O
`6,012,030 A *
`1/2000 French-St. George
`et al. ............................. 704/275
`6,028,514 A
`2/2000 Lemelson et al.
`(Continued)
        FOREIGN PATENT DOCUMENTS

WO       02/075688 A2     9/2002
WO     2004/079395 A2     9/2004
WO     2005/093680 A     10/2005
`
`OTHER PUBLICATIONS
`
Patent Cooperation Treaty, International Search Report from corresponding patent application (Aug. 1, 2008).
`
`Primary Examiner — Pierre-Louis Desir
`Assistant Examiner — David Kovacek
(74) Attorney, Agent, or Firm — Cuenot, Forsythe & Kim, LLC
`
(57)                    ABSTRACT

A method of identifying incidents using mobile devices can include receiving a communication from each of a plurality of mobile devices. Each communication can specify information about a detected sound. Spatial and temporal information can be identified from each communication as well as an indication of a sound signature matching the detected sound. The communications can be compared with a policy specifying spatial and temporal requirements relating to the sound signature indicated by the communications. A notification can be selectively sent according to the comparison.
`
`17 Claims, 2 Drawing Sheets
`
`200
`
`Mobile area sound
`
`203
`
`Mobile device me that sound
`matches sound signature
`20
`
`Mobile device sends communication to
`
`H
`Event processor analyzes communications
`and deterinines whethera valid incidentis
`indicated
`
`220
`
`Event processor optionally obtains audio of
`detected sound(s)
`
`225
`
`Event processor notifies dispatch center if
`incident is validated
`
`230
`
`Begin
`
`Exhibit 1008
`Page 01 of 11
`
`
`
`
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`
`5/2000 Barnes et al. ..................... 345/7
`6,069,594. A *
`7/2000 Heed et al.
`6,091,327 A
`6, 2001 Monroe
`6,246,320 B1
`8/2001 Lerg et al.
`6,281,792 B1
`9, 2001 Casais
`6,288,641 B1
`6,459,371 B1 * 10/2002 Pike ........................... 340,539.1
`6,538,623 B1
`3/2003 Parnian et al.
`6.857,312 B2
`2/2005 Choe et al.
`7,929,720 B2 * 4/2011 Ishibashi et al. .............. 381,300
`2002fOOO3470 A1
`1/2002 Auerbach
`2002/0023020 A1* 2/2002 Kenyonet al. .................. 705/26
`2002/0026311 A1
`2/2002 Okitsu .......................... TO4,201
`
`2002/0107694 A1* 8/2002 Lerg.............................. 704/273
`2003/004O903 A1* 2, 2003 GerSon ....
`... 704/211
`2003, OO69002 A1* 4, 2003 Hunter et al.
`... 455,404
`2003/0069727 A1* 4/2003 Krasny et al. ................. TO4,228
`2003/01 19523 A1* 6/2003 Bulthuis ....................... 455,456
`2003/0194350 A1 10, 2003 Stamatelos et al.
`2004/0070515 A1
`4/2004 Burkley et al.
`2004/0107104 A1* 6/2004 Schaphorst ................... 704/27O
`2004/01 19591 A1
`6/2004 Peeters
`2004/0192384 A1* 9, 2004 Anastasakos et al. ........ 455/557
`2006/0210101 A1* 9, 2006 Ishibashi et al. ....
`381,300
`2007, 01836O4 A1* 8, 2007 Araki et al. .....
`... 381.58
`2008/0162133 A1* 7/2008 Couper et al.
`TO4,239
`2009/0055170 A1* 2/2009 Nagahama .................... TO4/226
`* cited by examiner
`
`
`
`
`
`
U.S. Patent        Sep. 15, 2015        Sheet 1 of 2        US 9,135,797 B2
`
[FIG. 1: block diagram of a system 100. Mobile devices 105 and 110, each containing a signature detector 115 and sound signatures 120, are linked over wireless links (145) to a communication network 135. An event processor 125 exchanges an SMS message 155, a request 160, and a recording 165 with mobile device 105, and communicates with a dispatch center 130.]

FIG. 1
`
`
`
`
U.S. Patent        Sep. 15, 2015        Sheet 2 of 2        US 9,135,797 B2
`
`
`
205  Mobile device detects sound

210  Mobile device determines that sound matches sound signature

215  Mobile device sends communication to event processor

220  Event processor analyzes communications and determines whether a valid incident is indicated

225  Event processor optionally obtains audio of detected sound(s)

230  Event processor notifies dispatch center if incident is validated

FIG. 2
`
`
`
`
`1.
`AUDO DETECTION USING DISTRIBUTED
`MOBILE COMPUTING
`
`2
`FIG. 2 is a flow chart illustrating a method in accordance
`with another aspect of the present invention.
`
`US 9,135,797 B2
`
`BACKGROUND OF THE INVENTION
`
`DETAILED DESCRIPTION OF THE INVENTION
`
`Some municipalities have come to rely upon Sound detec
`tion as a tool for crime prevention. Fixed-location audio sen
`sors are distributed throughout a geographic area, such as a
`neighborhood, a town, or a city. The audio sensors are net
`worked with a central processing system. The central pro
`cessing system continually monitors the sounds provided by
`the various sensors to determine whether any detected Sound
`is indicative of a potential crime.
`Audio provided from the sensors to the central processing
`system is compared with signatures of various sounds. For
`example, audio from the sensors can be compared with sig
`natures for gunshots, breaking glass, or the like. If a portion of
`audio matches the signature of one, or more, of the Sounds,
`the central processing system can determine that the event,
`e.g., a gunshot, a window being broken, likely happened in
`the vicinity of the sensor that sent the audio.
`When Such a sound is detected, a response team can be
`dispatched to the location at which the sound was detected.
`While this sort of system has been successfully used to reduce
`crime, it can be costly to deploy. The system requires the
`installation of specialized audio sensors and networking
`equipment throughout a geographic area. The cost of install
`ing the audio sensors alone can be significant even before the
`other components of the system are considered.
`
`10
`
`15
`
`25
`
`30
`
`BRIEF SUMMARY OF THE INVENTION
`
`40
`
`45
`
`The present invention relates to a method of identifying
`35
`incidents using mobile devices. A communication from each
`of a plurality of mobile devices can be received. Each com
`munication can specify information about a detected Sound.
`Spatial and temporal information can be identified from each
`communication as well as an indication of a Sound signature
`matching the detected Sound. The spatial and temporal infor
`mation from the communications can be compared with a
`policy specifying a spatial constraint and a temporal con
`straint, each of the constraints relating to the Sound signature
`indicated by the communications and varying according to
`the Sound signature. A notification can be selectively sent
`according to the comparison.
`The present invention also relates to a method of identify
`ing incidents using mobile devices including receiving a com
`munication from each of a plurality of mobile devices. The
`50
`communications can be compared with a validation policy
`that specifies a spatial constraint and a temporal constraint,
`each of which relates to a sound signature matching the
`detected Sound and varies according to the Sound signature. A
`notification can be selectively sent according to the compari
`55
`SO.
`The present invention also relates to a computer program
`product including a computer-usable medium having com
`puter-usable program code that, when executed by an infor
`mation processing system, performs the various steps and/or
`functions disclosed herein.
`
`60
`
`BRIEF DESCRIPTION OF THE SEVERAL
`VIEWS OF THE DRAWINGS
`
`FIG. 1 is a block diagram illustrating a system in accor
`dance with one aspect of the present invention.
`
`65
`
As will be appreciated by one skilled in the art, the present invention may be embodied as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, including firmware, resident software, micro-code, etc., or an embodiment combining software and hardware aspects that may all generally be referred to herein as a "circuit," "module," or "system."

Furthermore, the invention may take the form of a computer program product accessible from a computer-usable or computer-readable medium providing program code for use by, or in connection with, a computer or any instruction execution system. For the purposes of this description, a computer-usable or computer-readable medium can be any apparatus that can contain, store, communicate, propagate, or transport the program for use by, or in connection with, the instruction execution system, apparatus, or device.

Any suitable computer-usable or computer-readable medium may be utilized. For example, the medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system (or apparatus or device), or a propagation medium. A non-exhaustive list of exemplary computer-readable media can include an electrical connection having one or more wires, an optical fiber, magnetic storage devices such as magnetic tape, a removable computer diskette, a portable computer diskette, a hard disk, a rigid magnetic disk, an optical storage medium, such as an optical disk including a compact disk-read only memory (CD-ROM), a compact disk-read/write (CD-R/W), or a DVD, or a semiconductor or solid state memory including, but not limited to, a random access memory (RAM), a read-only memory (ROM), or an erasable programmable read-only memory (EPROM or Flash memory).

A computer-usable or computer-readable medium further can include transmission media such as those supporting the Internet or an intranet. Further, the computer-usable medium may include a propagated data signal with the computer-usable program code embodied therewith, either in baseband or as part of a carrier wave. The computer-usable program code may be transmitted using any appropriate medium, including but not limited to the Internet, wireline, optical fiber, cable, RF, etc.

In another aspect, the computer-usable or computer-readable medium can be paper or another suitable medium upon which the program is printed, as the program can be electronically captured, via, for instance, optical scanning of the paper or other medium, then compiled, interpreted, or otherwise processed in a suitable manner, if necessary, and then stored in a computer memory.

Computer program code for carrying out operations of the present invention may be written in an object oriented programming language such as Java, Smalltalk, C++ or the like. However, the computer program code for carrying out operations of the present invention may also be written in conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through a local area network (LAN) or a
`
wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).

A data processing system suitable for storing and/or executing program code will include at least one processor coupled directly or indirectly to memory elements through a system bus. The memory elements can include local memory employed during actual execution of the program code, bulk storage, and cache memories which provide temporary storage of at least some program code in order to reduce the number of times code must be retrieved from bulk storage during execution.

Input/output or I/O devices (including but not limited to keyboards, displays, pointing devices, etc.) can be coupled to the system either directly or through intervening I/O controllers. Network adapters may also be coupled to the system to enable the data processing system to become coupled to other data processing systems or remote printers or storage devices through intervening private or public networks. Modems, cable modems, and Ethernet cards are just a few of the currently available types of network adapters.
The present invention is described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.

These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function/act specified in the flowchart and/or block diagram block or blocks.

The computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The present invention relates to detecting sounds that are indicative of incidents including, but not limited to, crime, safety hazards, terrorist threats, or any other event for which a response team may be dispatched or needed. Mobile devices can be loaded with audio analysis software that can recognize particular sounds. This allows the mobile devices to be leveraged throughout a geographic area as sound sensors using the built-in audio detection capabilities of the mobile devices. Upon detecting a selected sound, the mobile device can send a communication to an event processor. The communication can specify information relating to the detected sound.

The event processor can evaluate communications received from one or more mobile devices to determine or validate whether an incident associated with the detected
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`60
`
`65
`
`US 9,135,797 B2
`
`10
`
`15
`
`4
sounds has occurred or is occurring. If the information identified from the communications conforms to predetermined criteria, for example, as specified within an incident validation policy, the event processor can take further action. For example, the event processor can provide a notification to an emergency services dispatch center indicating a given sound or incident has been detected.
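In outline, the pipeline just described (devices report signature matches; the event processor validates them before notifying dispatch) can be sketched as follows. Python is used purely for illustration; the `Detection` record, its field names, and the two-report threshold are invented for this sketch and do not come from the patent:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One mobile-device report: which signature matched, where, and when."""
    signature_id: str
    lat: float
    lon: float
    timestamp: float  # seconds since epoch

def validate(detections, min_reports=2):
    """Treat a signature as a valid incident only when at least
    `min_reports` independent reports indicate it."""
    counts = {}
    for d in detections:
        counts[d.signature_id] = counts.get(d.signature_id, 0) + 1
    return {sig for sig, n in counts.items() if n >= min_reports}

# Two devices report a gunshot; a third reports breaking glass elsewhere.
reports = [
    Detection("gunshot", 40.000, -74.000, 1000.0),
    Detection("gunshot", 40.001, -74.001, 1002.0),
    Detection("glass_break", 40.200, -74.200, 1500.0),
]
validated = validate(reports)  # only "gunshot" clears the threshold
```

A real event processor would, as the specification goes on to describe, also weigh spatial and temporal proximity rather than counting reports alone.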
FIG. 1 is a block diagram illustrating a system 100 in accordance with one aspect of the present invention. The system 100 can include a plurality of mobile devices 105 and 110, an event processor 125, as well as a dispatch center 130. The mobile devices 105 and 110, the event processor 125, and the dispatch center 130 can be communicatively linked via the communication network 135.

The communication network 135 can be implemented as, or include, without limitation, a WAN, a LAN, the Public Switched Telephone Network (PSTN), the Web, the Internet, and one or more intranets. The communication network 135 further can include one or more wireless networks, whether short or long range. For example, in terms of short range wireless networks, the communication network 135 can include a local wireless network built using Bluetooth or one of the IEEE 802 wireless communication protocols, e.g., 802.11a/b/g/i, 802.15, 802.16, 802.20, Wi-Fi Protected Access (WPA), or WPA2. In terms of long range wireless networks, the communication network 135 can include a mobile, cellular, and/or satellite-based wireless network and support voice, video, text, and/or any combination thereof, e.g., a GSM, TDMA, CDMA, and/or WCDMA network.
The mobile devices 105 and 110 can be implemented as mobile phones, personal digital assistants, or any other device capable of sending and receiving data over wireless communication links 140 and 145 via the communication network 135. For example, each of the mobile devices 105 and 110 can include a microphone, a processor, and memory. The mobile devices 105 and 110 further can include a wireless transceiver capable of establishing the wireless communication links 140 and 145 with the communication network 135. The wireless transceiver can support one or more of the various communication protocols noted herein, though the present invention is not intended to be limited by the type of communication scheme or channel used.
The mobile devices 105 and 110 can include a signature detector 115 and one or more sound signatures 120. The signature detector 115 can be a computer program that is executed by each respective mobile device 105 and 110. The signature detector 115 can cause each mobile device 105 and 110 to perform the various functions to be described herein. In one embodiment, the mobile devices 105 and 110 can be shipped with the signature detector 115 and the sound signatures 120. In another embodiment, the signature detector 115 and/or the sound signatures 120 can be installed on the mobile devices 105 and 110 at some other point in time, for example, after purchasing the device by downloading the computer programs and data via a wireless connection. In this regard, it should be appreciated that additional sound signatures 120 can be downloaded over time and that existing sound signatures 120 can be updated and/or deleted. The signature detector 115 also can be updated in this manner.
In general, the signature detector 115 can compare audio that is received by the internal microphone of the mobile device 105 with the sound signatures 120. The sound signatures 120 are audio profiles of sounds that have been determined to be indicative of an incident. For example, the sound signatures 120 can specify audio profiles for sounds including, but not limited to, gunshots, explosions, sirens, alarms, breaking glass, auto accidents, yelling or screaming, calls for
`
help, etc. The potential range of audio events for which a sound signature 120 can be included in a mobile device is limited only by the available memory or storage capacity of the respective mobile device.
The signature detector 115 can extract information from the detected sound and compare that information to the sound signatures 120. Examples of the sort of data that can be extracted or determined from the detected sound can include, but are not limited to, volume or sound-pressure-level information, the frequency range of the detected sound, the amount of energy detected in different frequency bands of interest, a spectrum analysis, transient characteristics, the actual waveform of the detected sound, a Fourier Transform or FFT information, formant information, or the like. These parameters further can be measured over time. The various parameters listed herein are intended as examples only and, as such, are not intended to limit the present invention in any way.
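The kinds of features listed above (per-band energies, spectra, and so on) suggest one plausible realization of the comparison: reduce a detected sound to normalized band energies and pick the nearest stored profile. This is a sketch under invented assumptions; the band edges, the brute-force DFT, and the example signatures are illustrative only, not the patent's method:

```python
import math

def band_energies(samples, rate, bands=((0, 500), (500, 2000), (2000, 8000))):
    """Crude per-band energy via a direct DFT over one short frame.
    (A real detector would use an FFT library and richer features.)"""
    n = len(samples)
    energies = []
    for lo, hi in bands:
        e = 0.0
        for k in range(n // 2):
            freq = k * rate / n
            if lo <= freq < hi:
                re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
                im = sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
                e += re * re + im * im
        energies.append(e)
    total = sum(energies) or 1.0
    return [e / total for e in energies]  # normalize so overall volume cancels

def closest_signature(features, signatures):
    """Return (signature_id, squared distance) for the nearest stored profile."""
    best, best_d = None, float("inf")
    for sig_id, profile in signatures.items():
        d = sum((a - b) ** 2 for a, b in zip(features, profile))
        if d < best_d:
            best, best_d = sig_id, d
    return best, best_d

# A 250 Hz test tone concentrates its energy in the lowest band.
tone = [math.sin(2 * math.pi * 250 * i / 8000) for i in range(256)]
feats = band_energies(tone, rate=8000)
match, dist = closest_signature(feats, {"low_rumble": [1.0, 0.0, 0.0],
                                        "hiss": [0.0, 0.0, 1.0]})
```

Whether the match is "close enough" to send a communication would be a thresholding decision on `dist`, which is exactly the kind of measure the specification later describes being reported to the event processor.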
In addition to comparing received, or detected, sounds with the sound signatures 120, the signature detector 115 can control one or more other functions of the mobile devices 105 and 110. For example, the signature detector 115 can cause the mobile devices 105 and 110 to keep the microphone active so that sounds are continually monitored, recorded, and compared with the stored sound signatures 120. The mobile devices 105 and 110 further can communicate with the event processor 125 under the control of the signature detector 115, interpret communications received from the event processor 125, as well as respond to requests from the event processor 125, such as providing recorded audio.
The event processor 125 can be implemented as an information processing system executing suitable operational software, e.g., a server. The event processor 125 can receive information from the mobile devices 105 and 110, analyze that information, and based upon the analysis, contact the dispatch center 130. The event processor 125 further can query the mobile devices 105 and 110 for additional information as described herein.
The dispatch center 130 can include any of a variety of communication systems capable of receiving information from the event processor 125. The dispatch center 130, for example, can be a police dispatch center, an emergency services dispatch center, a 911 call center, or the like. The dispatch center 130 can be any sort of facility that is linked with the event processor 125 that, for example, can dispatch resources to address an incident detected using the mobile devices 105 and 110.
In operation, the mobile devices 105 and 110 can detect an audio event, or sound, 150. The signature detector 115 within each respective mobile device 105 and 110 can process the detected sound 150 and compare the sound 150 against the sound signatures 120 stored within the mobile devices 105 and 110. It should be appreciated that as the location of each mobile device 105 and 110 will be different, the characteristics of the detected sound 150, e.g., volume, frequency range, and the like, may differ as well, particularly if one or both of the mobile devices 105 and 110 is in motion.

In any case, each mobile device 105 and 110 can detect the sound 150 independently of the other and perform its own independent analysis. If the signature detector 115 within either one or both of mobile devices 105 and 110 determines that the sound 150 matches a sound signature 120, a communication can be sent to the event processor 125 from that mobile device. It should be appreciated that since each mobile device 105 and 110 performs its own analysis, mobile device 105 may determine that the sound 150 matches a sound signature 120, while mobile device 110 determines that
`
`30
`
`45
`
`50
`
`60
`
`65
`
`US 9,135,797 B2
`
`10
`
`15
`
`35
`
`6
the sound 150 does not match any sound signatures 120, or possibly matches a different sound signature 120. Such can be the case, as noted, due to distance, motion, or the possibility that one mobile device is located in a noisy environment, while the other is not.

While mobile device 105 is depicted as being in communication with the event processor 125, it should be appreciated that mobile device 110 also can conduct the same sort of information exchange with the event processor 125 as is described herein with respect to mobile device 105. Further, though only two mobile devices are shown, it should be appreciated that many more mobile devices can detect the sound 150 and perform the various processing functions disclosed herein. Such mobile devices also can communicate with the event processor 125. The present invention lends itself to having many mobile devices, dispersed throughout an area and potentially in motion, continually detecting sounds.
Upon determining that the sound 150 matches a sound signature 120, the mobile device 105 can send a communication to the event processor 125. In one embodiment, the communication can be a Short Message Service (SMS) message 155. The communication can provide one or more items of information relating to the detected sound 150. For example, one or more parameters of the detected sound 150 that may be determined or extracted for comparison with the sound signatures 120 can be sent. Other data generated by the signature detector 115 also can be sent, such as, for instance, a measure of how closely the sound 150 matches the sound signature 120. The information sent within the communication, i.e., SMS message 155, can include any information determined by the signature detector 115 and, in one embodiment, initially exclude any audio that has been recorded or collected by the mobile device 105.
Spatial information, such as the location of the mobile device 105 when the sound 150 is detected, can be included in the communication. Such information can be determined through conventional mobile triangulation techniques, using a Global Positioning System (GPS) receiver included in the mobile device 105, or the like. Regardless of how location information is ascertained, spatial information can be determined by the mobile device 105 and inserted into the communication. Temporal information, e.g., a timestamp specifying the time of day and the date when the sound 150 is detected, also can be specified within the communication. Further, an identifier capable of uniquely specifying the particular sound signature 120 that was matched to the sound 150 can be included in the communication.
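Taken together, the items above (signature identifier, match score, spatial information, timestamp) form a small structured payload that could serve as the body of the SMS message 155. The field names and the JSON encoding below are assumptions made for illustration only; the patent does not prescribe a wire format:

```python
import json, time

def build_report(signature_id, match_score, lat, lon, when=None):
    """Assemble the detection report a mobile device might send as,
    e.g., the text of an SMS message. Recorded audio is deliberately
    excluded; the event processor can request it separately later."""
    return json.dumps({
        "sig": signature_id,             # identifier of the matched sound signature
        "score": round(match_score, 3),  # how closely the sound matched
        "lat": lat, "lon": lon,          # spatial info (GPS or triangulation)
        "ts": when if when is not None else time.time(),  # temporal info
    }, sort_keys=True)

msg = build_report("glass_break", 0.87, 26.31, -80.24, when=1167350400)
```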
The event processor 125 can receive communications, such as SMS 155, from any mobile device that detects the sound 150, or any other sound matching a sound signature 120, for that matter. The event processor 125 can analyze the received information from the various mobile devices and compare that information with an incident validation policy to determine when a valid incident has taken place or is taking place.

The event processor 125 can be programmed to identify communications corresponding to a same detected sound. In some cases, the sound signatures indicated by such messages will be the same. Further, the temporal and spatial information indicated by the communications will be close. That is, the mobile devices that detected the sound 150 will have been located within a predetermined distance of one another as determined from the spatial information in the communications from each respective mobile device. Further, the time at which each mobile device detected the sound 150 will have
`
been within a predetermined amount of time of one another as determined from the temporal information specified within the communications.

It should be appreciated that the interpretation of spatial and temporal information can vary according to the particular sound signatures that are detected. The temporal and/or spatial constraints used to interpret data received from mobile devices can vary according to the particular sound signatures detected. This can be specified as part of the validation policy, for example.
In illustration, if two mobile devices detect glass breaking, but are located more than a mile apart when the sound is detected, the event processor can determine that the mobile devices detected two separate instances of glass breaking. The same can be determined from temporal information, e.g., if two detections of glass breaking occur more than 5 or 10 seconds apart, the two detections can be interpreted as separate incidents of glass breaking. If, however, an explosion is detected by two mobile devices located approximately one mile apart, the event processor can apply different spatial constraints and determine that one explosion likely occurred, but was detected by two mobile devices.
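The signature-dependent interpretation in this example can be captured as a policy table keyed by sound signature. The threshold values below are invented to mirror the example (glass heard a mile apart is two incidents, while an explosion heard a mile apart is one) and are not figures from the patent:

```python
# Per-signature constraints: maximum separation (meters) and maximum time
# gap (seconds) within which two detections count as the same audio event.
POLICY = {
    "glass_break": {"max_dist_m": 200.0,  "max_gap_s": 10.0},
    "explosion":   {"max_dist_m": 3000.0, "max_gap_s": 15.0},
}

def same_event(sig, dist_m, gap_s, policy=POLICY):
    """Do two detections of `sig`, this far apart in space and time,
    plausibly describe a single event?"""
    rule = policy[sig]
    return dist_m <= rule["max_dist_m"] and gap_s <= rule["max_gap_s"]

# One mile is roughly 1609 m: separate glass incidents, one shared explosion.
```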
In some cases, however, the two mobile devices 105 and 110 may detect the same audio event, but interpret the audio event differently. One mobile device, for example, can match the audio event with an incorrect sound signature 120. In that case, the event processor 125 can determine that the audio events are either the same audio event or are at least related, for example, if the audio events were detected by each respective mobile device within a predetermined amount of time of one another and the mobile devices that detected the audio event were within a predetermined distance of one another. In such cases, the event processor 125 can determine that one of the devices interpreted the detected sound incorrectly. Again, the rules applied can vary according to the particular sound signatures detected.
In other cases, the event processor 125 can determine that the sounds are actually different, but likely relate to a same incident. For example, mobile device 105 detects glass breaking and mobile device 110 detects a siren. If both devices detect the sounds within a predetermined amount of time of one another and are located within a predetermined distance of one another, the event processor 125 can determine that although two different audio events are detected, the audio events are indicative of a single incident, e.g., a burglary.
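Correlating different sounds into one incident, as in the glass-plus-siren example, reduces to a proximity test that ignores the signature identifier. The `related` helper, its coordinate representation, and its default thresholds are hypothetical:

```python
def related(det_a, det_b, max_dist_m=500.0, max_gap_s=10.0):
    """Two detections (possibly of different signatures) are treated as
    evidence of one incident when close in both space and time. Each
    detection is a dict with planar coordinates (meters) and a timestamp."""
    dx = det_a["x"] - det_b["x"]
    dy = det_a["y"] - det_b["y"]
    dist = (dx * dx + dy * dy) ** 0.5
    gap = abs(det_a["ts"] - det_b["ts"])
    return dist <= max_dist_m and gap <= max_gap_s

glass = {"sig": "glass_break", "x": 0.0,   "y": 0.0,  "ts": 100.0}
siren = {"sig": "siren",       "x": 120.0, "y": 50.0, "ts": 104.0}
# Close in space and time: plausibly one incident (e.g., a burglary).
```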
The communications can be compared with a policy that determines when, and under what circumstances, a notification 170 can be provided to the dispatch center or that a valid incident is occurring or has occurred. For example, the validation policy can specify a minimal number of communications (mobile devices) that must detect a particular sound signature before a valid incident is determined to have occurred and a notification 170 is sent. The validation policy further can specify a minimal confidence score that the event processor 125 must calculate from information specified within the communications to determine that a valid incident has occurred. The policy further can specify spatial and/or temporal proximity guidelines for sound detections by multiple mobile devices that are indicative of a valid incident or indicate that multiple sounds relate to a same incident.
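The two quantitative criteria just described, a minimum number of reporting devices and a minimum confidence score, can be checked together. The function below is a sketch: averaging the per-device match scores is one invented way to form the aggregate confidence, not necessarily how the event processor 125 would compute it:

```python
def incident_validated(reports, min_reports, min_confidence):
    """Apply both validation criteria: enough distinct devices must report
    the signature, and the aggregate confidence must clear a floor.
    `reports` is a list of (device_id, match_score) pairs for one signature."""
    devices = {dev for dev, _ in reports}
    if len(devices) < min_reports:
        return False
    avg_score = sum(score for _, score in reports) / len(reports)
    return avg_score >= min_confidence
```

Only when this returns True would the notification 170 be sent to the dispatch center.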
In one embodiment, the event processor can directly send the notification 170 to the dispatch center. In another embodiment, the event processor 125 can send a request 160 to the mobile device 105 asking for recorded audio of the detected sound 150. The request 160 can be formatted and/or sent as any of a variety of different communications, e.g., an SMS
`
`45
`
`50
`
`55
`
`60
`
`65
`
`8
message. In one aspect, the validation policy can specify when further information is to be requested from a mobile device 105 or 110.

The mobile device 105 can send a recording 165 of the detected sound over the voice channel of the mobile network. This embodiment presumes that the mobile devices 105 and 110 can be configured to continually record audio. In one embodiment, for example, the mobile device can record a continuous loop and, if a detected sound matches a sound signature 120, the signature detector 115 can prevent that audio from being recorded over until the mobile device either provides the audio to the event processor 125 or determines that the audio is not needed by the event processor 125.
`In either case, an
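The continuous loop with protected audio described above behaves like a fixed-size ring buffer whose contents can be pinned when a match occurs. A minimal sketch follows; the `LoopRecorder` class, its capacity, and its method names are invented for illustration:

```python
from collections import deque

class LoopRecorder:
    """Continuously records into a fixed-size loop; on a signature match
    the current contents are pinned so they are not recorded over until
    released (sent to, or declined by, the event processor)."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)
        self.pinned = None

    def write(self, chunk):
        self.buf.append(chunk)  # the oldest chunk falls off automatically

    def pin(self):
        """Freeze a snapshot of the loop at the moment of a match."""
        self.pinned = list(self.buf)

    def release(self):
        """Hand over the snapshot once uploaded, or discard it if unneeded."""
        audio, self.pinned = self.pinned, None
        return audio

rec = LoopRecorder(capacity=3)
for chunk in ["a", "b", "c", "d"]:
    rec.write(chunk)          # loop now holds ["b", "c", "d"]
rec.pin()                     # match detected: snapshot preserved
rec.write("e")                # live recording continues over the loop
snapshot = rec.release()      # snapshot is still ["b", "c", "d"]
```

The key property is that pinning decouples the matched audio from the live loop, so continuous monitoring never stops while the event processor decides whether it wants the recording.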