UTILITY PATENT APPLICATION TRANSMITTAL
PTO/AIA/15 (03-13)
Approved for use through 01/31/2014. OMB 0651-0032
U.S. Patent and Trademark Office; U.S. DEPARTMENT OF COMMERCE
Under the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid OMB control number.
(Only for new nonprovisional applications under 37 CFR 1.53(b))

Attorney Docket No.: ALI-050ACON1
First Named Inventor: Gregory C. Burnett
Title: FORMING VIRTUAL MICROPHONE ARRAYS USING DUAL OMNIDIRECTIONAL MICROPHONE ARRAY (DOMA)

APPLICATION ELEMENTS (see MPEP chapter 600 concerning utility patent application contents):
Fee Transmittal Form (PTO/SB/17 or equivalent)
[X] Applicant asserts small entity status. See 37 CFR 1.27.
[ ] Applicant certifies micro entity status. See 37 CFR 1.29. Applicant must attach form PTO/SB/15A or B or equivalent.
Specification [Total Pages: 54] (both the claims and abstract must start on a new page; see MPEP § 608.01(a) for information on the preferred arrangement)
Drawing(s) (35 U.S.C. 113) [Total Sheets: __]
Inventor's Oath or Declaration [Total Pages: __] (including substitute statements under 37 CFR 1.64 and assignments serving as an oath or declaration under 37 CFR 1.63(e)): newly executed (original or copy), or a copy from a prior application (37 CFR 1.63(d))
Application Data Sheet (see 37 CFR 1.76; PTO/AIA/14 or equivalent). See note below.
CD-ROM or CD-R in duplicate, large table, or Computer Program (Appendix); Landscape Table on CD
9. Nucleotide and/or Amino Acid Sequence Submission (if applicable, items a.-c. are required): a. Computer Readable Form (CRF); b. Specification Sequence Listing on CD-ROM or CD-R (2 copies); c. Statements verifying identity of above copies

ACCOMPANYING APPLICATION PAPERS:
10. Assignment Papers (cover sheet & document(s)) [Name of Assignee: __]
37 CFR 3.73(c) Statement (when there is an assignee)
Power of Attorney
English Translation Document (if applicable)
Information Disclosure Statement (PTO/SB/08 or PTO-1449); copies of citations attached
Preliminary Amendment
Return Receipt Postcard (MPEP § 503) (should be specifically itemized)
Certified Copy of Priority Document(s) (if foreign priority is claimed)
Nonpublication Request under 35 U.S.C. 122(b); applicant must attach form PTO/SB/35 or equivalent.

*Note: (1) Benefit claims under 37 CFR 1.78 and foreign priority claims under 1.55 must be included in an Application Data Sheet (ADS). (2) For applications filed under 35 U.S.C. 111, the application must contain an ADS specifying the applicant if the applicant is an assignee, person to whom the inventor is under an obligation to assign, or person who otherwise shows sufficient proprietary interest in the matter. See 37 CFR 1.46(b).

ADDRESS TO:
Commissioner for Patents
P.O. Box 1450
Alexandria, VA 22313-1450

18. CORRESPONDENCE ADDRESS
[X] The address associated with Customer Number: 15516    OR    [ ] Correspondence address below

Signature: Scott S. Kokka (Attorney/Agent), Registration No. 51,893

This collection of information is required to obtain or retain a benefit by the public which is to file (and by the USPTO to process) an application. Confidentiality is governed by 35 U.S.C. 122 and 37 CFR 1.11 and 1.14. This collection is estimated to take 12 minutes to complete, including gathering, preparing, and submitting the completed application form to the USPTO. Time will vary depending upon the individual case. Any comments on the amount of time you require to complete this form and/or suggestions for reducing this burden should be sent to the Chief Information Officer, U.S. Patent and Trademark Office, U.S. Department of Commerce, P.O. Box 1450, Alexandria, VA 22313-1450. DO NOT SEND FEES OR COMPLETED FORMS TO THIS ADDRESS. SEND TO: Commissioner for Patents, P.O. Box 1450, Alexandria, VA 22313-1450. If you need assistance in completing the form, call 1-800-PTO-9199 and select option 2.

Exhibit 1002 - Samsung v. Jawbone, IPR2022-01321
GOOGLE EXHIBIT 1002
Application Data Sheet
Cross-Reference to Related Applications

This application is a continuation of U.S. Nonprovisional Patent Application No. 12/139,333, filed June 13, 2008, entitled "Forming Virtual Microphone Arrays Using Dual Omnidirectional Microphone Array (DOMA)," which claims the benefit of U.S. Provisional Patent Application No. 60/934,551, filed June 13, 2007, U.S. Provisional Patent Application No. 60/953,444, filed August 1, 2007, U.S. Provisional Patent Application No. 60/954,712, filed August 8, 2007, and U.S. Provisional Patent Application No. 61/045,377, filed April 16, 2008, all of which are incorporated by reference herein in their entirety for all purposes.
Application Information

Filing Date:: August 5, 2013
Application Type:: Continuation
Subject Matter:: Utility
Suggested Group Art Unit:: None
CD-ROM or CD-R?:: None
Title:: FORMING VIRTUAL MICROPHONE ARRAYS USING DUAL OMNIDIRECTIONAL MICROPHONE ARRAY (DOMA)
Attorney Docket Number:: ALI-050ACON1
Request for Early Publication?:: No
Request for Non-Publication?:: No
Suggested Drawing Figure:: FIG. 4
Total Drawing Sheets:: 17
Small Entity:: Yes
Petition included?:: No
Secrecy Order in Parent Appl.?:: No
Applicant Information

Applicant Authority Type:: Inventor
Primary Citizenship Country:: United States of America
Status:: Full Capacity
Given Name:: Gregory C.
Family Name:: Burnett
City of Residence:: Dodge Center
State or Province of Residence:: MN
Country of Residence:: United States of America
Street of mailing address:: 10550 First Timberlane Drive
City of mailing address:: Dodge Center
Country of mailing address:: United States of America
State or Province of mailing address:: MN
Postal or Zip Code of mailing address:: 55057

Correspondence Information

Customer Number:: 15516
Name:: Kokka & Backus, PC
Street of mailing address:: 703 High Street
City of mailing address:: Palo Alto
Country of mailing address:: USA
State or Province of mailing address:: CA
Postal or Zip Code of mailing address:: 94301
Telephone:: (650) 566-9921
Fax:: (650) 566-9922
Representative Information

Representative Customer Number:: 15516
Representative Designation:: Primary
Registration Number:: 51,893
Name:: Scott S. Kokka

Scott S. Kokka
Reg. No. 51,893
Date
ALI-050ACON1

APPLICATION FOR UNITED STATES PATENT

FORMING VIRTUAL MICROPHONE ARRAYS USING DUAL OMNIDIRECTIONAL MICROPHONE ARRAY (DOMA)

By Inventor:

Gregory Burnett
10546 First Timberlane Drive
Dodge Center, MN 53057
A Citizen of the United States of America

Assignee:

AliphCom

KOKKA & BACKUS, PC
703 High Street
Palo Alto, CA 94301-2447
Tel: (650) 566-9912
Fax: (650) 566-9922
FORMING VIRTUAL MICROPHONE ARRAYS USING DUAL OMNIDIRECTIONAL MICROPHONE ARRAY (DOMA)

[0001]    This application is a continuation of U.S. Nonprovisional Patent Application No. 12/139,333, filed June 13, 2008, entitled "Forming Virtual Microphone Arrays Using Dual Omnidirectional Microphone Array (DOMA)," which claims the benefit of U.S. Provisional Patent Application No. 60/934,551, filed June 13, 2007, U.S. Provisional Patent Application No. 60/953,444, filed August 1, 2007, U.S. Provisional Patent Application No. 60/954,712, filed August 8, 2007, and U.S. Provisional Patent Application No. 61/045,377, filed April 16, 2008, all of which are incorporated by reference herein in their entirety for all purposes.
TECHNICAL FIELD

[0002]    The disclosure herein relates generally to noise suppression. In particular, this disclosure relates to noise suppression systems, devices, and methods for use in acoustic applications.
BACKGROUND

[0003]    Conventional adaptive noise suppression algorithms have been around for some time. These conventional algorithms have used two or more microphones to sample both an (unwanted) acoustic noise field and the (desired) speech of a user. The noise relationship between the microphones is then determined using an adaptive filter (such as Least-Mean-Squares as described in Haykin & Widrow, ISBN# 0471215708, Wiley, 2002, but any adaptive or stationary system identification algorithm may be used), and that relationship is used to filter the noise from the desired signal.
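To make the adaptive identification step above concrete, the following is a minimal normalized-LMS (NLMS) sketch of a two-microphone noise canceller. It is an illustration only and not the specific algorithm of this application; the filter length, step size, and synthetic test signals are assumed values.

```python
import numpy as np

def nlms_cancel(primary, reference, taps=32, mu=0.5, eps=1e-8):
    """Estimate the noise path from the reference microphone to the
    primary microphone with NLMS and return the noise-reduced residual.
    In a deployed system the weight update would be gated by a VAD so
    that user speech does not corrupt the noise estimate."""
    w = np.zeros(taps)                      # adaptive filter weights
    out = np.zeros(len(primary))
    for n in range(taps, len(primary)):
        x = reference[n - taps:n][::-1]     # most recent reference samples
        e = primary[n] - w @ x              # residual after noise estimate
        w += mu * e * x / (x @ x + eps)     # normalized LMS update
        out[n] = e
    return out

# Synthetic check: primary = speech + filtered noise, reference = noise only.
rng = np.random.default_rng(0)
fs = 8000
noise = rng.standard_normal(2 * fs)
speech = 0.5 * np.sin(2 * np.pi * 300 * np.arange(2 * fs) / fs)
primary = speech + np.convolve(noise, [0.6, 0.3, 0.1])[: 2 * fs]
cleaned = nlms_cancel(primary, noise)
```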
[0004]    Most conventional noise suppression systems currently in use for speech communication systems are based on a single-microphone spectral subtraction technique first developed in the 1970's and described, for example, by S. F. Boll in "Suppression of Acoustic Noise in Speech using Spectral Subtraction," IEEE Trans. on ASSP, pp. 113-120, 1979. These techniques have been refined over the years, but the basic principles of operation have remained the same. See, for example, US Patent Number 5,687,243 of McLaughlin, et al., and US Patent Number 4,811,404 of Vilmur, et al. There have also been several attempts at multi-microphone noise suppression systems, such as those outlined in US Patent Number 5,406,622 of Silverberg et al. and US Patent Number 5,463,694 of Bradley et al. Multi-microphone systems have not been very successful for a variety of reasons, the most compelling being poor noise cancellation performance and/or significant speech distortion. Primarily, conventional multi-microphone systems attempt to increase the SNR of the user's speech by "steering" the nulls of the system to the strongest noise sources. This approach is limited in the number of noise sources removed by the number of available nulls.
[0005]    The Jawbone earpiece (referred to as the "Jawbone"), introduced in December 2006 by AliphCom of San Francisco, California, was the first known commercial product to use a pair of physical directional microphones (instead of omnidirectional microphones) to reduce environmental acoustic noise. The technology supporting the Jawbone is currently described under one or more of US Patent Number 7,246,058 by Burnett and/or US Patent Application Numbers 10/400,282, 10/667,207, and/or 10/769,302. Generally, multi-microphone techniques make use of an acoustic-based Voice Activity Detector (VAD) to determine the background noise characteristics, where "voice" is generally understood to include human voiced speech, unvoiced speech, or a combination of voiced and unvoiced speech. The Jawbone improved on this by using a microphone-based sensor to construct a VAD signal using directly detected speech vibrations in the user's cheek. This allowed the Jawbone to aggressively remove noise when the user was not producing speech. However, the Jawbone uses a directional microphone array.
[0006]    Each patent, patent application, and/or publication mentioned in this specification is herein incorporated by reference in its entirety to the same extent as if each individual patent, patent application, and/or publication was specifically and individually indicated to be incorporated by reference.
BRIEF DESCRIPTION OF THE DRAWINGS

[0007]    Figure 1 is a two-microphone adaptive noise suppression system, under an embodiment.

[0008]    Figure 2 is an array and speech source (S) configuration, under an embodiment. The microphones are separated by a distance approximately equal to 2d0, and the speech source is located a distance ds away from the midpoint of the array at an angle θ. The system is axially symmetric so only ds and θ need be specified.

[0009]    Figure 3 is a block diagram for a first order gradient microphone using two omnidirectional elements O1 and O2, under an embodiment.

[0010]    Figure 4 is a block diagram for a DOMA including two physical microphones configured to form two virtual microphones V1 and V2, under an embodiment.

[0011]    Figure 5 is a block diagram for a DOMA including two physical microphones configured to form N virtual microphones V1 through VN, where N is any number greater than one, under an embodiment.

[0012]    Figure 6 is an example of a headset or head-worn device that includes the DOMA, as described herein, under an embodiment.

[0013]    Figure 7 is a flow diagram for denoising acoustic signals using the DOMA, under an embodiment.

[0014]    Figure 8 is a flow diagram for forming the DOMA, under an embodiment.

[0015]    Figure 9 is a plot of linear response of virtual microphone V2 to a 1 kHz speech source at a distance of 0.1 m, under an embodiment. The null is at 0 degrees, where the speech is normally located.

[0016]    Figure 10 is a plot of linear response of virtual microphone V2 to a 1 kHz noise source at a distance of 1.0 m, under an embodiment. There is no null and all noise sources are detected.

[0017]    Figure 11 is a plot of linear response of virtual microphone V1 to a 1 kHz speech source at a distance of 0.1 m, under an embodiment. There is no null and the response for speech is greater than that shown in Figure 9.

[0018]    Figure 12 is a plot of linear response of virtual microphone V1 to a 1 kHz noise source at a distance of 1.0 m, under an embodiment. There is no null and the response is very similar to V2 shown in Figure 10.

[0019]    Figure 13 is a plot of linear response of virtual microphone V1 to a speech source at a distance of 0.1 m for frequencies of 100, 500, 1000, 2000, 3000, and 4000 Hz, under an embodiment.

[0020]    Figure 14 is a plot showing comparison of frequency responses for speech for the array of an embodiment and for a conventional cardioid microphone.

[0021]    Figure 15 is a plot showing speech response for V1 (top, dashed) and V2 (bottom, solid) versus B with ds assumed to be 0.1 m, under an embodiment. The spatial null in V2 is relatively broad.

[0022]    Figure 16 is a plot showing a ratio of V1/V2 speech responses shown in Figure 10 versus B, under an embodiment. The ratio is above 10 dB for all 0.8 < B < 1.1. This means that the physical β of the system need not be exactly modeled for good performance.

[0023]    Figure 17 is a plot of B versus actual ds assuming that ds = 10 cm and theta = 0, under an embodiment.

[0024]    Figure 18 is a plot of B versus theta with ds = 10 cm and assuming ds = 10 cm, under an embodiment.

[0025]    Figure 19 is a plot of amplitude (top) and phase (bottom) response of N(s) with B = 1 and D = -7.2 µsec, under an embodiment. The resulting phase difference clearly affects high frequencies more than low.

[0026]    Figure 20 is a plot of amplitude (top) and phase (bottom) response of N(s) with B = 1.2 and D = -7.2 µsec, under an embodiment. Non-unity B affects the entire frequency range.

[0027]    Figure 21 is a plot of amplitude (top) and phase (bottom) response of the effect on the speech cancellation in V2 due to a mistake in the location of the speech source with q1 = 0 degrees and q2 = 30 degrees, under an embodiment. The cancellation remains below -10 dB for frequencies below 6 kHz.

[0028]    Figure 22 is a plot of amplitude (top) and phase (bottom) response of the effect on the speech cancellation in V2 due to a mistake in the location of the speech source with q1 = 0 degrees and q2 = 45 degrees, under an embodiment. The cancellation is below -10 dB only for frequencies below about 2.8 kHz and a reduction in performance is expected.

[0029]    Figure 23 shows experimental results for a 2d0 = 19 mm array using a linear β of 0.83 on a Bruel and Kjaer Head and Torso Simulator (HATS) in very loud (~85 dBA) music/speech noise environment, under an embodiment. The noise has been reduced by about 25 dB and the speech hardly affected, with no noticeable distortion.
SUMMARY OF THE INVENTION

[0030]    The present invention provides for dual omnidirectional microphone array devices, systems, and methods.

[0031]    In accordance with an embodiment, a microphone array is formed with a first virtual microphone that includes a first combination of a first microphone signal and a second microphone signal, wherein the first microphone signal is generated by a first physical microphone and the second microphone signal is generated by a second physical microphone; and a second virtual microphone that includes a second combination of the first microphone signal and the second microphone signal, wherein the second combination is different from the first combination. The first virtual microphone and the second virtual microphone are distinct virtual directional microphones with substantially similar responses to noise and substantially dissimilar responses to speech.

[0032]    In accordance with another embodiment, a microphone array is formed with a first virtual microphone formed from a first combination of a first microphone signal and a second microphone signal, wherein the first microphone signal is generated by a first omnidirectional microphone and the second microphone signal is generated by a second omnidirectional microphone; and a second virtual microphone formed from a second combination of the first microphone signal and the second microphone signal, wherein the second combination is different from the first combination. The first virtual microphone has a first linear response to speech that has a single null oriented in a direction toward a source of the speech, wherein the speech is human speech.

[0033]    In accordance with another embodiment, a device includes a first microphone outputting a first microphone signal and a second microphone outputting a second microphone signal; and a processing component coupled to the first microphone signal and the second microphone signal, the processing component generating a virtual microphone array comprising a first virtual microphone and a second virtual microphone, wherein the first virtual microphone comprises a first combination of the first microphone signal and the second microphone signal, and wherein the second virtual microphone comprises a second combination of the first microphone signal and the second microphone signal. The first virtual microphone and the second virtual microphone have substantially similar responses to noise and substantially dissimilar responses to speech.

[0034]    In accordance with another embodiment, a device includes a first microphone outputting a first microphone signal and a second microphone outputting a second microphone signal, wherein the first microphone and the second microphone are omnidirectional microphones; and a virtual microphone array comprising a first virtual microphone and a second virtual microphone, wherein the first virtual microphone comprises a first combination of the first microphone signal and the second microphone signal, and the second virtual microphone comprises a second combination of the first microphone signal and the second microphone signal. The second combination is different from the first combination, and the first virtual microphone and the second virtual microphone are distinct virtual directional microphones.

[0035]    In accordance with another embodiment, a device includes a first physical microphone generating a first microphone signal; a second physical microphone generating a second microphone signal; and a processing component coupled to the first microphone signal and the second microphone signal, the processing component generating a virtual microphone array comprising a first virtual microphone and a second virtual microphone. The first virtual microphone comprises the second microphone signal subtracted from a delayed version of the first microphone signal, and the second virtual microphone comprises a delayed version of the first microphone signal subtracted from the second microphone signal.
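A rough sketch of the delay-and-subtract construction recited in paragraph [0035] follows. The sample rate, delay length, and the gain `beta` are assumptions added only for illustration; as recited at this level of generality the two combinations differ only in sign, and the specific gains and delays that make V1 and V2 behave differently are developed later in the specification.

```python
import numpy as np

def form_virtual_mics(m1, m2, delay_samples=4, beta=1.0):
    """Form the two virtual microphones as recited in paragraph [0035]:
    V1 = delayed(M1) - beta*M2 and V2 = beta*M2 - delayed(M1).
    `delay_samples` and `beta` are illustrative assumptions only."""
    m1_delayed = np.pad(m1, (delay_samples, 0))[:len(m1)]  # zero-padded delay
    v1 = m1_delayed - beta * m2
    v2 = beta * m2 - m1_delayed
    return v1, v2
```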
[0036]    In accordance with another embodiment, a sensor includes a physical microphone array including a first physical microphone and a second physical microphone, the first physical microphone outputting a first microphone signal and the second physical microphone outputting a second microphone signal; and a virtual microphone array comprising a first virtual microphone and a second virtual microphone, the first virtual microphone comprising a first combination of the first microphone signal and the second microphone signal, the second virtual microphone comprising a second combination of the first microphone signal and the second microphone signal. The second combination is different from the first combination, and the virtual microphone array includes a single null oriented in a direction toward a source of speech of a human speaker.
[0037]    A dual omnidirectional microphone array (DOMA) that provides improved noise suppression is described herein. Compared to conventional arrays and algorithms, which seek to reduce noise by nulling out noise sources, the array of an embodiment is used to form two distinct virtual directional microphones which are configured to have very similar noise responses and very dissimilar speech responses. The only null formed by the DOMA is one used to remove the speech of the user from V2. The two virtual microphones of an embodiment can be paired with an adaptive filter algorithm and/or VAD algorithm to significantly reduce the noise without distorting the speech, significantly improving the SNR of the desired speech over conventional noise suppression systems. The embodiments described herein are stable in operation, flexible with respect to virtual microphone pattern choice, and have proven to be robust with respect to speech source-to-array distance and orientation as well as temperature and calibration techniques. In the following description, numerous specific details are introduced to provide a thorough understanding of, and enabling description for, embodiments of the DOMA. One skilled in the relevant art, however, will recognize that these embodiments can be practiced without one or more of the specific details, or with other components, systems, etc. In other instances, well-known structures or operations are not shown, or are not described in detail, to avoid obscuring aspects of the disclosed embodiments.
[0038]    Unless otherwise specified, the following terms have the corresponding meanings in addition to any meaning or understanding they may convey to one skilled in the art.

[0039]    The term "bleedthrough" means the undesired presence of noise during speech.

[0040]    The term "denoising" means removing unwanted noise from Mic1, and also refers to the amount of reduction of noise energy in a signal in decibels (dB).

[0041]    The term "devoicing" means removing/distorting the desired speech from Mic1.

[0042]    The term "directional microphone (DM)" means a physical directional microphone that is vented on both sides of the sensing diaphragm.

[0043]    The term "Mic1 (M1)" means a general designation for an adaptive noise suppression system microphone that usually contains more speech than noise.

[0044]    The term "Mic2 (M2)" means a general designation for an adaptive noise suppression system microphone that usually contains more noise than speech.

[0045]    The term "noise" means unwanted environmental acoustic noise.

[0046]    The term "null" means a zero or minima in the spatial response of a physical or virtual directional microphone.

[0047]    The term "O1" means a first physical omnidirectional microphone used to form a microphone array.

[0048]    The term "O2" means a second physical omnidirectional microphone used to form a microphone array.

[0049]    The term "speech" means desired speech of the user.

[0050]    The term "Skin Surface Microphone (SSM)" is a microphone used in an earpiece (e.g., the Jawbone earpiece available from Aliph of San Francisco, California) to detect speech vibrations on the user's skin.

[0051]    The term "V1" means the virtual directional "speech" microphone, which has no nulls.

[0052]    The term "V2" means the virtual directional "noise" microphone, which has a null for the user's speech.

[0053]    The term "Voice Activity Detection (VAD) signal" means a signal indicating when user speech is detected.

[0054]    The term "virtual microphones (VM)" or "virtual directional microphones" means a microphone constructed using two or more omnidirectional microphones and associated signal processing.
[0055]    Figure 1 is a two-microphone adaptive noise suppression system 100, under an embodiment. The two-microphone system 100 including the combination of physical microphones MIC 1 and MIC 2 along with the processing or circuitry components to which the microphones couple (described in detail below, but not shown in this figure) is referred to herein as the dual omnidirectional microphone array (DOMA) 110, but the embodiment is not so limited. Referring to Figure 1, in analyzing the single noise source 101 and the direct path to the microphones, the total acoustic information coming into MIC 1 (102, which can be a physical or virtual microphone) is denoted by m1(n). The total acoustic information coming into MIC 2 (103, which can also be a physical or virtual microphone) is similarly labeled m2(n). In the z (digital frequency) domain, these are represented as M1(z) and M2(z). Then,

        M1(z) = S(z) + N2(z)
        M2(z) = N(z) + S2(z),

with

        N2(z) = N(z)H1(z)
        S2(z) = S(z)H2(z),

so that

        M1(z) = S(z) + N(z)H1(z)
        M2(z) = N(z) + S(z)H2(z).                                    Eq. 1

This is the general case for all two microphone systems. Equation 1 has four unknowns and only two known relationships and therefore cannot be solved explicitly.
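The signal model of Equation 1 can be pictured with a short numerical sketch. Here H1(z) and H2(z) are represented as short FIR filters applied in the time domain; the filter taps and test signals are arbitrary assumptions used only to generate example data.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 8000
n = 2 * fs
s = 0.5 * np.sin(2 * np.pi * 300 * np.arange(n) / fs)   # stand-in for speech S
noise = rng.standard_normal(n)                           # environmental noise N

h1 = np.array([0.7, 0.2, 0.1])     # assumed noise path N -> Mic1 (H1)
h2 = np.array([0.05, 0.02])        # assumed speech leakage S -> Mic2 (H2)

# Equation 1: M1 = S + N*H1,  M2 = N + S*H2 (convolution in the time domain)
m1 = s + np.convolve(noise, h1)[:n]
m2 = noise + np.convolve(s, h2)[:n]
```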
[0056]    However, there is another way to solve for some of the unknowns in Equation 1. The analysis starts with an examination of the case where the speech is not being generated, that is, where a signal from the VAD subsystem 104 (optional) equals zero. In this case, s(n) = S(z) = 0, and Equation 1 reduces to

        M1N(z) = N(z)H1(z)
        M2N(z) = N(z),

where the N subscript on the M variables indicates that only noise is being received. This leads to

        M1N(z) = M2N(z)H1(z)

        H1(z) = M1N(z)/M2N(z).                                       Eq. 2

The function H1(z) can be calculated using any of the available system identification algorithms and the microphone outputs when the system is certain that only noise is being received. The calculation can be done adaptively, so that the system can react to changes in the noise.
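One way to picture the Equation 2 calculation is sketched below: H1(z) is estimated per frequency bin by accumulating cross- and auto-spectra of the two microphones over frames that a VAD has marked as noise-only, which yields a least-squares estimate of the per-bin ratio M1N(z)/M2N(z). The frame length, the averaging scheme, and the externally supplied VAD are assumptions; any system identification algorithm could be used instead, and the estimate can be updated adaptively as noted above.

```python
import numpy as np

def estimate_h1(m1, m2, vad, frame=256, eps=1e-12):
    """Estimate H1(z) = M1N(z)/M2N(z) from noise-only frames (vad == 0).
    `vad` is a per-sample 0/1 array (assumed to be supplied externally)."""
    num = np.zeros(frame // 2 + 1, dtype=complex)   # accumulates M1N * conj(M2N)
    den = np.zeros(frame // 2 + 1)                  # accumulates |M2N|^2
    for start in range(0, len(m1) - frame, frame):
        if vad[start:start + frame].any():          # skip frames containing speech
            continue
        M1 = np.fft.rfft(m1[start:start + frame])
        M2 = np.fft.rfft(m2[start:start + frame])
        num += M1 * np.conj(M2)
        den += np.abs(M2) ** 2
    return num / (den + eps)                        # per-bin H1 estimate
```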
[0057]    A solution is now available for H1(z), one of the unknowns in Equation 1. The final unknown, H2(z), can be determined by using the instances where speech is being produced and the VAD equals one. When this is occurring, but the recent (perhaps less than 1 second) history of the microphones indicates low levels of noise, it can be assumed that n(s) = N(z) ≈ 0. Then Equation 1 reduces to

        M1S(z) = S(z)
        M2S(z) = S(z)H2(z),

which in turn leads to

        M2S(z) = M1S(z)H2(z)

        H2(z) = M2S(z)/M1S(z),

which is the inverse of the H1(z) calculation. However, it is noted that different inputs are being used (now only the speech is occurring whereas before only the noise was occurring). While calculating H2(z), the values calculated for H1(z) are held constant (and vice versa) and it is assumed that the noise level is not high enough to cause errors in the H2(z) calculation.
[0058]    After calculating H1(z) and H2(z), they are used to remove the noise from the signal. If Equation 1 is rewritten as

        S(z) = M1(z) - N(z)H1(z)
        N(z) = M2(z) - S(z)H2(z)
        S(z) = M1(z) - [M2(z) - S(z)H2(z)]H1(z)
        S(z)[1 - H2(z)H1(z)] = M1(z) - M2(z)H1(z),

then N(z) may be substituted as shown to solve for S(z) as

        S(z) = [M1(z) - M2(z)H1(z)] / [1 - H2(z)H1(z)].              Eq. 3

[0059]    If the transfer functions H1(z) and H2(z) can be described with sufficient accuracy, then the noise can be completely removed and the original signal recovered. This remains true without respect to the amplitude or spectral characteristics of the noise. If there is very little or no leakage from the speech source into M2, then H2(z) ≈ 0 and Equation 3 reduces to

        S(z) = M1(z) - M2(z)H1(z).                                   Eq. 4
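A minimal sketch of applying Equation 4 frame by frame is shown below; a per-bin estimate of H1(z) (for example, from the sketch after Equation 2) multiplies the Mic2 spectrum, and the product is subtracted from the Mic1 spectrum. The Hann window and 50% overlap-add framing are assumptions not specified by the equation itself.

```python
import numpy as np

def denoise_eq4(m1, m2, h1, frame=256):
    """Apply S(z) = M1(z) - M2(z)H1(z) per frame with 50% overlap-add.
    `h1` is a per-bin estimate of H1(z) with length frame//2 + 1."""
    hop = frame // 2
    win = np.hanning(frame)
    out = np.zeros(len(m1))
    for start in range(0, len(m1) - frame, hop):
        M1 = np.fft.rfft(win * m1[start:start + frame])
        M2 = np.fft.rfft(win * m2[start:start + frame])
        S = M1 - M2 * h1                       # Equation 4, per frequency bin
        out[start:start + frame] += np.fft.irfft(S, frame)
    return out
```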
[0060]    Equation 4 is much simpler to implement and is very stable, assuming H1(z) is stable. However, if significant speech energy is in M2(z), devoicing can occur. In order to construct a well-performing system and use Equation 4, consideration is given to the following conditions:

R1. Availability of a perfect (or at least very good) VAD in noisy conditions
R2. Sufficiently accurate H1(z)
R3. Very small (ideally zero) H2(z)
R4. During speech production, H1(z) cannot change substantially
R5. During noise, H2(z) cannot change substantially.
[0061]    Condition R1 is easy to satisfy if the SNR of the desired speech to the unwanted noise is high enough. "Enough" means different things depending on the method of VAD generation. If a VAD vibration sensor is used, as in Burnett 7,256,048, accurate VAD in very low SNRs (-10 dB or less) is possible. Acoustic-only methods using information from O1 and O2 can also return accurate VADs, but are limited to SNRs of ~3 dB or greater for adequate performance.

[0062]    Condition R5 is normally simple to satisfy because for most applications the microphones will not change position with respect to the user's mouth very often or rapidly. In those applications where it may happen (such as hands-free conferencing systems) it can be satisfied by configuring Mic2 so that H2(z) ≈ 0.

[0063]    Satisfying conditions R2, R3, and R4 are more difficult but are possible given the right combination of V1 and V2. Methods are examined below that have proven to be effective in satisfying the above, resulting in excellent noise suppression performance and minimal speech removal and distortion in an embodiment.
[0064]    The DOMA, in various embodiments, can be used with the Pathfinder system as the adaptive filter system or noise removal. The Pathfinder system, available from AliphCom, San Francisco, CA, is described in detail in other patents and patent applications referenced herein. Alternatively, any adaptive filter or noise removal algorithm can be used with the DOMA in one or more various alternative embodiments or configurations.

[0065]    When the DOMA is used with the Pathfinder system, the Pathfinder system generally provides adaptive noise cancellation by combining the two microphone signals (e.g., Mic1, Mic2) by filtering and summing in the time domain. The adaptive filter generally uses the signal received from a first microphone of the DOMA to remove noise from the speech received from at least one other microphone of the DOMA, which relies on a slowly varying linear transfer function between the two microphones for sources of noise. Following processing of the two channels of the DOMA, an output signal is generated in which the noise content is attenuated with respect to the speech content, as described in detail below.
[0066]    Figure 2 is a generalized two-microphone array (DOMA) including an array 201/202 and speech source S configuration, under an embodiment. Figure 3 is a system 300 for generating or producing a first order gradient microphone V using two omnidirectional elements O1 and O2, under an embodiment. The array of an embodiment includes two physical microphones 201 and 202 (e.g., omnidirectional microphones) placed a distance 2d0 apart and a speech source 200 is located a distance ds away at an angle of θ. This array is axially symmetric (at least in free space), so no other angle is needed. The output from each microphone 201 and 202 can be delayed (z1 and z2), multiplied by a gain (A1 and A2), and then summed with the other as demonstrated in Figure 3. The output of the array is or forms at least one virtual microphone, as described in detail below. This operation can be over any frequency range desired. By varying the magnitude and sign of the delays and gains, a wide variety of virtual microphones (VMs), also referred to herein as virtual directional microphones, can be realized. There are other methods known to those skilled in the art for constructing VMs but this is a common one and will be used in the enablement below.
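A loose sketch of the delay, gain, and sum construction of Figure 3 follows; it forms one virtual microphone from the two omnidirectional outputs. The integer delays, gains, and default values are arbitrary assumptions, and varying their magnitudes and signs yields different virtual microphone patterns, as described above.

```python
import numpy as np

def delay(x, n):
    """Delay signal x by n samples (zero-padded, non-circular)."""
    return np.pad(x, (n, 0))[:len(x)]

def virtual_mic(o1, o2, z1=0, z2=3, a1=1.0, a2=-1.0):
    """First-order gradient virtual microphone from two omni elements:
    V = A1*delay(O1, z1) + A2*delay(O2, z2). The delays z1, z2 (in samples)
    and gains A1, A2 are assumed values chosen only for illustration."""
    return a1 * delay(o1, z1) + a2 * delay(o2, z2)
```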
[0067]    As an example, Figure 4 is a block diagram for a DOMA 400 including two physical microphones configured to form two virtual microphones V1 and V2, under an embodiment. The DOMA includes two first order gradient microphones V1 and V2 formed using the outputs