US 20070165875A1

(19) United States
(12) Patent Application Publication          (10) Pub. No.: US 2007/0165875 A1
     Rezvani et al.                           (43) Pub. Date: Jul. 19, 2007

(54) HIGH FIDELITY MULTIMEDIA WIRELESS HEADSET

(76) Inventors: Behrooz Rezvani, San Ramon, CA (US); Andrea Goldsmith, Menlo Park, CA (US)

    Correspondence Address:
    PERKINS COIE LLP
    P.O. BOX 2168
    MENLO PARK, CA 94026 (US)

(21) Appl. No.: 11/607,431

(22) Filed: Dec. 1, 2006

Related U.S. Application Data

(60) Provisional application No. 60/741,672, filed on Dec. 1, 2005.

Publication Classification

(51) Int. Cl.
     H04R 1/10 (2006.01)
(52) U.S. Cl. .......................... 381/74; 381/367

(57) ABSTRACT
`
The invention provides a multiple-antenna wireless multimedia headset with high fidelity sound, peer-to-peer networking capability, seamless handoff between multiple wireless interfaces, multimedia storage with advanced search capability, and ultra low power such that the device is capable of operation without recharging. The headset supports multiple wireless systems such as Wifi (802.11a/b/g/n), Wimax, 3G cellular, 2G cellular, GSM-EDGE, radio (e.g. AM/FM/XM), 802.15 (Bluetooth, UWB, and Zigbee) and GPS. The headset also provides a platform such that applications can access the high fidelity sound system, the speech recognition engine, the microprocessor, and the wireless systems on the device.
`
[Representative drawing (corresponding to FIG. 4): microphone array for high fidelity sound — mic array, receiver, echo cancellers, ambient noise microphone, background noise canceller, frequency domain processing, speech quality enhancement, speech parameter extraction, distorted voice parameter extraction, piezo or ear canal element.]

`

[Drawings, Sheets 1-10 of 10 (images not reproduced; recoverable captions and labels listed below).]

FIG. 1 (Sheet 1): Function block diagram of headset 100 — audio subsystem, microphone array processing, noise cancellation, voice recognition, user interface, peer-to-peer networking, applications, power management (enhanced battery life), antenna algorithms, GPS, AM/FM/XM radio, GSM/EDGE/3G and/or Wimax, Wifi (802.11a/b/g/n), 802.15 (Bluetooth, Zigbee, UWB).

FIG. 2 (Sheet 2): Headset subsystems 200 — audio interface, microphone array, control buttons, USB interface, baseband processor, battery/charger, power supplies.

FIG. 3 (Sheet 3): Model 300 for user interface via speech recognition — language model (HMM) on device storage; one-time per-language training for 500 common words; occasional user input to augment the language model for user-specific vocabulary (for example, proper names); speech recognizer; dictionary/translation assistance on another device (e.g., a peer-to-peer laptop); sources of indexed data (automated metadata extraction for music, video, and contact info; user-entered metadata for personal photos); search results; user interface; actions (dial a number, play a song matching search results).

FIG. 4 (Sheet 4): Microphone array for high fidelity sound — mic array, receiver, echo cancellers, ambient noise microphone, background noise canceller, frequency domain noise enhancer, speech quality enhancement, speech parameter extraction, distorted voice parameter extraction, piezo or ear canal element.

FIG. 5 (Sheet 5): Microphone array beamsteering 500 in the direction of the user speaking, with ambient noise microphone.

FIG. 6 (Sheet 6): Flowchart 600 for the search engine locating a music file or files — user inputs a request to initiate a search; the search engine queries the user for search term(s), scans the library for files matching the search terms, checks whether one or more files match, whether more than one file matches, and whether the user requested more than one file, offers to change the search term(s), and sends the matching file or files to the user.

FIG. 7 (Sheet 7): Power management algorithm 700 — advanced power management algorithms for DSP and analog/RF, multi-antenna power optimization, shutting down most functions, shutting down most circuits.

FIG. 8 (Sheet 8): Simultaneous operation and seamless handoff 800, including VoIP call handoff.

FIG. 9 (Sheet 9): Peer-to-peer networking protocol (see paragraph [0042]).

FIG. 10 (Sheet 10): Flowchart 1000 for a new node joining the network — the new node broadcasts a request to join; if no neighboring node hears the request, it tries a different interface and/or waits a random time; otherwise a neighboring node establishes a connection with the new node, exchanges information about the existing network (e.g., its routing table), informs other nodes on the network about the new node, and the new node becomes part of the established network and enables routing to other nodes.
`

`

`HIGH FIDELITY MULTIMEDIA WIRELESS
`HEADSET
`
`BACKGROUND
`
[0001] 1. Field of the Invention
`
`[0002] The present invention generally relates to wireless
`multimedia headsets.
`
[0003] 2. Description of the State-of-the-Art
`
[0004] Wireless headsets are common devices used for hands-free operation in conjunction with cell phones and VoIP phones, as well as with portable music players such as digital MP3 players. Such headsets typically include radio technology to access a given wireless system. For example, cell phone headsets use wireless technology to communicate with the cell phone handset such that the voice signals received by the handset over the cell phone system can be transferred to the headset. Similarly, wireless headsets for MP3 players use wireless technology to transfer music files from the player to the headset.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`[0005] FIG. 1 provides a functional block diagram show-
`ing the features of the headset according to some embodi-
`ments.
`
`[0006] FIG. 2 illustrates the subsystems that support the
`various functionalities according to some embodiments.
`
[0007] FIG. 3 illustrates a model for the user interface via speech recognition according to some embodiments.
`
`[0008] FIG. 4 illustrates a microphone array for a high
`fidelity sound system according to some embodiments.
`
`[0009] FIG. 5 illustrates an exemplary situation in which
`a human speaker (user) is making sounds or an utterance
`toward an array of microphones including a plurality of
`individual microphones or microphone sets according to
`some embodiments.
`
`[0010] FIG. 6 illustrates a flowchart of the multistep
`process by which a user locates a desired music file or set of
`files (or other type of multimedia file) according to some
`embodiments.
`
[0011] FIG. 7 illustrates details of the power management algorithm according to some embodiments.
`
`[0012] FIG. 8 illustrates simultaneous operation over a
`cellular system and a Wifi system according to some
`embodiments.
`
[0013] FIG. 9 illustrates a peer-to-peer networking protocol used to establish direct or multihop connections with other wireless devices for real-time interaction and file exchange according to some embodiments.
`
`[0014] FIG. 10 illustrates a flow chart describing this
`process according to some embodiments.
`
`SUMMARY
`
[0015] In one aspect the invention provides a High-Fidelity Multimedia Wireless Headset. In another aspect, the invention provides a wireless multimedia headset that can include multiple features such as multimedia storage with advanced search capability, a high fidelity sound system, peer-to-peer networking capability, and ultra low power such that the device is capable of operation without recharging.
`
[0016] In another aspect, the invention provides a multimedia headset and method for designing and operating a headset comprising: a plurality of multiple wireless interfaces; an advanced search engine with media search capability; a high fidelity sound processor; power management means for ultra low power operation; and network connectivity for peer-to-peer networking.
`
`DETAILED DESCRIPTION
`
[0017] The present disclosure is generally directed to a wireless multimedia headset that can include multiple features and support multiple wireless systems. These features can include any combination of a multimedia storage with advanced search capability; a high fidelity sound system; peer-to-peer networking capability; and an ultra low power consumption, such that the device is capable of operation without recharging. The headset can also provide a platform for both existing and new headset applications (such as “push-to-talk” between headsets) to enable access to the device features.
`
`[0018] FIG. 1 provides a functional block diagram show-
`ing the features of the headset according to some embodi-
`ments. In these embodiments, the headset 100 comprises a
`user interface 105 having query 110 and command 115
`functionality for voice recognition. The headset 100
`includes a peer-to-peer networking 120 functionality that
`will allow any headset within range of other wireless devices
`to self-configure with them into a multihop network.
`
[0019] The headset 100 is capable of several applications 125, in addition to power management 130 to enhance battery life. The headset 100 supports Voice over IP (VoIP) 135 directly through any of the interfaces that allow it to connect to the Internet, as well as an audio subsystem 140 that includes several functionalities such as, for example, noise cancellation 145 (and beamforming) through microphone array processing 150, in addition to voice recognition 155 and MP3 support 160. Multiple wireless systems may be integrated into the headset 100, including, but not limited to, GPS and different radio systems (AM/FM/XM) 165, various cellular phone standards (3G/2G/GSM/Edge and/or Wimax) 170, different Wifi standards (802.11a/b/g/n) 175, and 802.15 (Bluetooth, Zigbee, and/or UWB) 180. In most embodiments, an antenna, or array of antennas, having antenna algorithms 185 is used as part of the wireless system or subsystems disclosed herein.
`
[0020] FIG. 2 illustrates the subsystems that support the various functionalities according to some embodiments. In these embodiments, the signals 205 received from the antenna 210, or array of antennas 215, through antenna interface 217 are processed by a MIMO RF system 220 and a baseband processor 225. The subsystems include an audio interface 227 having a microphone array 227 having an input 228 and an output 229. The subsystems include control buttons 230 for the user interface 105 as well as voice recognition 155. And, a microprocessor 235 having a USB interface 237 is present to perform the arithmetic, logic, and control operations for the various functionalities through the assistance of an internal memory 240.
`
[0021] The device also includes a SIM card 245 that, for example, identifies the user account with a network, handles authentication, provides data storage for basic user data and network information, and may also contain applications. The power subsystems 250 include advanced power management 255 functionality to control energy use through power supplies 260. Solar cells 265 are also available to assist in sustaining the supply of power. The solar cells 265 can charge the battery 270 from ambient light as well as solar light. A battery charger 275 is included and can charge the battery, for example, through the input of a DC current 280.
`
[0022] FIG. 3 illustrates a model for the user interface via speech recognition according to some embodiments. In these embodiments, there is a set of language models stored on the device called Hidden Markov Models (HMMs) 305 for the speech data, and these models may be enhanced through some amount of initial user training 310. In addition, occasional user input 315, either as commands or through users speaking for a voice application, can be used to augment the HMMs 305. The HMMs 305 are included in the speech recognizer 320 within a speech recognition algorithm.
`
`[0023] The speech recognizer 320 receives information
`from a digital signal processor (DSP) 322, which collects,
`processes, compresses, transmits and displays analog and
`digital data for feature extraction from an acoustic signal
`324. The speech recognizer 320 is designed such that the
`wireless interfaces 325 and/or peer-to-peer network 330 can
`be used to provide an additional
`input to the algorithm.
`Specifically, the algorithm will have the ability to use any of
`the available wireless interfaces 325 and/or peer-to-peer
`network 330 to connect to another device 335 such as, for
`example, a laptop, computer, or handset to include other
`capabilities including, but not
`limited to, expanding the
`vocabulary base or providing translation assistance to the
`engine in the speech recognizer 320.
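The paragraph above leaves open how the on-device recognizer and the remote device share the work. The minimal Python sketch below shows one way a local HMM pass could defer to a paired laptop or handset over whatever interface is currently reachable; the object names, the confidence threshold, and the request_transcription call are illustrative assumptions, not part of the disclosure.

    # Sketch: local-first recognition with an optional remote assist over any
    # available interface (e.g., Wifi or a peer-to-peer link). All names are
    # hypothetical; the disclosure does not specify an API.

    def recognize_utterance(features, local_hmm, remote_links, threshold=0.6):
        """Return (text, source) for one utterance's feature vectors."""
        text, confidence = local_hmm.decode(features)    # on-device HMM pass
        if confidence >= threshold:
            return text, "local"
        # Low confidence: ask a paired device for help, if one is reachable.
        for link in remote_links:                        # ordered by preference
            if link.is_up():
                return link.request_transcription(features), link.name
        return text, "local-low-confidence"              # no remote device found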
`
[0024] The algorithm will take the user's speech as a query 110 or command 115 input and initiate an indexing function 340. The sources of indexed data 342 include, but are not limited to, automated metadata extraction 343 and user entered metadata 344. Automated metadata 343 includes, for example, music, video, and contact information. User entered metadata 344 includes, for example, personal photographs. For commands, the indexing function 340 will take the appropriate action 345 to satisfy the command 115. For queries 110, the indexing function 340 will enable the search engine to locate the desired file and provide search results 350 to the user interface 355 and then take the appropriate action 345, such as dialing a number 357 or playing a desired song 360.
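As a rough illustration of the query/command dispatch just described, the sketch below routes recognized speech either to an action handler or to a lookup over the indexed metadata. The index layout and the handler are assumptions made only for this example.

    # Sketch of the query/command dispatch in paragraph [0024].
    # The index structure and both handlers are hypothetical.

    def handle_speech(kind, text, index):
        if kind == "command":                  # e.g. "dial 555-0100"
            return execute_command(text)       # hypothetical action handler
        # Query: look up indexed metadata (automated or user-entered tags).
        results = [item for item in index
                   if text.lower() in item["tags"]]
        return results                         # handed to the user interface

    def execute_command(text):
        # Placeholder: a real implementation would map verbs to device
        # actions (dial a number, play a song, and so on).
        return {"action": text}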
`
[0025] The algorithms for noise cancellation (and beamforming) 145 based on the microphone array input 228 speech can be designed relative to the speech recognition algorithm, such that the feature extraction of the input 228 speech is optimized. One of skill will appreciate that noise cancellation/beamforming algorithms designed independent of the speech recognition algorithms can degrade speech recognition performance by introducing undesired speech artifacts. The speech recognition will categorize recognized speech as either a query 110 (e.g. look for a particular song) or a command 115 (e.g. dial a specific number).
`
[0026] FIG. 4 illustrates a microphone array for a high fidelity sound system according to some embodiments. The microphone array 405 is coupled with a noise cancellation algorithm to pick up sound. The microphone array 405 includes an ambient noise microphone 410 located on a part of the headset optimized to pick up background ambient noise and cancel it through a background noise canceller 412, as well as additional microphone elements 415, 420, 430 in different locations on the headset. The plurality of acoustic microphone signals are transduced into corresponding electrical microphone output signals by the microphones and communicated to the beam forming block 444.
`
[0027] An additional antenna element may be placed inside the ear canal with signal processing through a distorted voice parameter extraction component 425 to invert the distortion of the ear canal transmission and enhance the voice parameters. The antenna elements 435, 440, 445, 450 in the microphone array will have weights assigned to each antenna input. Different algorithms can be used to determine the weights, depending on the performance criteria, the number of antenna elements available and their nature, and the algorithm complexity. For example, the weights may be used to minimize ambient noise, to make the antenna array gain independent of frequency, to minimize the expected mean square distortion or error of the signal, or to steer the direction of the microphone array 227 towards the speaker as shown in FIG. 5. Other functions of the microphone array include a frequency domain noise enhancer 455, a speech quality enhancer 460, and speech parameter extraction 465.
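One concrete, and assumed, instance of the weight choices listed above is simple delay-and-sum steering of a uniform linear array toward the talker. The element spacing, narrowband processing, and function names in the sketch are illustrative only; the disclosure does not commit to a particular weighting algorithm.

    import numpy as np

    # Sketch of one possible weight choice for the array of paragraph [0027]:
    # delay-and-sum steering toward an assumed direction of arrival.

    def delay_and_sum_weights(num_elems, spacing_m, angle_rad, freq_hz, c=343.0):
        """Return complex weights that steer a uniform linear array."""
        k = 2 * np.pi * freq_hz / c                      # wavenumber
        positions = np.arange(num_elems) * spacing_m     # element positions
        phase = k * positions * np.sin(angle_rad)        # per-element delay term
        return np.exp(-1j * phase) / num_elems           # unit gain toward angle

    def beamform(snapshots, weights):
        """Combine per-element frequency-domain snapshots (elems x frames)."""
        return weights.conj() @ snapshots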
`
[0028] FIG. 5 illustrates an exemplary situation in which a human speaker (user) is making sounds or an utterance toward an array of microphones including a plurality of individual microphones or microphone sets according to some embodiments. The plurality of microphones 410, 415, 420 in the array 515 receive a somewhat different acoustic signal from the human speaker (or user) 505 due to their different relative positions or distances from the human speaker (or user) 505. The different acoustical signals may be due, for example, to a different distance or angle of incidence of the acoustic wave generated by the human speaker's utterance, and they may also include or be affected by reflective surfaces in the room or other environment in which the speech, sound, or utterance 510 picked up by the microphone array 515 takes place. The time of arrival of the signal may also differ and be used alone or in conjunction with signal magnitude information to assist in beam steering.
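The arrival-time differences mentioned above can be estimated, for example, with generalized cross-correlation; the sketch below uses the PHAT weighting, which is an assumption rather than something specified in the disclosure. The sign of the returned offset depends on which channel leads.

    import numpy as np

    # Sketch: estimating the inter-microphone arrival-time offset of [0028]
    # using generalized cross-correlation with phase transform (GCC-PHAT).

    def tdoa_gcc_phat(x, y, fs):
        """Return the estimated arrival-time offset (seconds) between x and y."""
        n = len(x) + len(y)
        X = np.fft.rfft(x, n=n)
        Y = np.fft.rfft(y, n=n)
        cross = X * np.conj(Y)
        cross /= np.maximum(np.abs(cross), 1e-12)        # phase transform
        corr = np.fft.irfft(cross, n=n)
        shift = int(np.argmax(np.abs(corr)))
        if shift > n // 2:                               # wrap negative lags
            shift -= n
        return shift / fs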
`
[0029] The beam forming block 444 may include analog circuits, digital circuits, a combination of analog and digital circuits, hardwired or programmable processing, or other means for processing the input signals and altering the individual microphones and the microphone array and/or the processing of the individual microphone 410, 415, 420 output signals to achieve the desired beam steering. The beam steering has the effect of focusing the sensitivity of the microphone array 515 as a whole toward a desired sound source, such as the human speaker 505. It may alternatively be used to steer the sensitivity away from an objectionable sound source.
`
`[0030] Advantageously, the beam steering will be used to
`increase the human speaker 505 (or other sound source)
`signal to background noise ratio or to otherwise achieve a
`desired result. The output 545 of the beam forming block
`444 is combined with an output 560 from a background
`
noise cancellation block 565. The background noise canceller 412 receives a background noise input signal 570 as the output electrical signal of an ambient noise microphone 410. This ambient noise microphone 410 is primarily responsible for sensing or detecting an acoustic ambient noise signal and transducing or otherwise converting it into an electrical ambient noise signal 570, which it communicates to a background noise canceller 412. Since the microphone array 515 may advantageously be steered toward the user 505 and may advantageously include a directional characteristic such that most of the sensitivity of the microphone array 515 is in the direction of the user 505, the amount or signal strength of the steerable microphone array 515 relative to the user will be higher for the user signal and lower for the ambient noise.
`
[0031] The amount or signal strength of the ambient noise microphone 410 relative to the user 505 will be lower for the user signal and higher for the ambient noise because of the non-steerable and typically non-directional character of the ambient noise microphone 410. In at least one non-limiting embodiment, the use of a plurality of microphones for sensing the user's 505 or speaker's sounds may provide added sensitivity over the sensitivity of a single ambient noise microphone. It should however be appreciated that multiple microphones may be used for the ambient noise sensing.
`
[0032] The output signal 545 from the beam forming block 444 is combined with the output signal 560 from the background noise canceller 412 to generate a signal 585 that is communicated to other processing circuitry, such as for example to the frequency domain noise enhancer in the embodiment of FIG. 4.
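A common way to realize this kind of combination, offered here purely as an assumed example, is an adaptive least-mean-squares (LMS) canceller that filters the ambient-noise reference and subtracts the estimate from the beamformer output. The filter length and step size below are arbitrary illustrative values.

    import numpy as np

    # Sketch of one way the combination in [0032] could be realized: an LMS
    # canceller driven by the ambient-noise microphone as reference input.

    def lms_noise_cancel(beam_out, ambient_ref, taps=32, mu=0.01):
        """Return a noise-reduced signal the same length as beam_out."""
        w = np.zeros(taps)                        # adaptive filter weights
        out = np.zeros_like(beam_out)
        for n in range(taps, len(beam_out)):
            x = ambient_ref[n - taps:n][::-1]     # most recent reference samples
            noise_est = w @ x                     # filtered noise estimate
            e = beam_out[n] - noise_est           # error = enhanced speech sample
            w += 2 * mu * e * x                   # LMS update (mu kept small)
            out[n] = e
        return out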
`
[0033] The headset will have nonvolatile storage for multimedia data files, typically music files, for example through a Flash RAM. There are many methods by which the multimedia data files may be loaded into the headset memory, for example via a wireless connection to the Internet, via a cellular telephone connection, via a satellite (e.g. XM or Sirius) or AM/FM radio receiver, via a USB high-speed data port, or via a wired or wireless connection to another device (e.g. a wireless connection to a computer, music server, handset, PDA, or other wireless device). The library may be partitioned by media type; for example, there may be one partition of the memory for music, one for phone numbers, and the like.
`
[0034] File storage will include the capability to add “tags” to files. The tagging is done to facilitate searching based on tags that the user selects for each media type. For example, a music file might have a tag or tags such as file title, song title, artist, keywords, genre, album name, music sample or clip, and the like. The headset will contain intelligent software for searching multimedia files stored on the headset based on multiple search criteria and by the type of file of interest. Alternatively, a user can set up certain tags for all files downloaded under the given tagging criterion. The user need only enter this tag or set of tags once, and then change the tag or tags when a change is desired so that, for example, all music downloaded at a given time will have the same tag. This is particularly useful for a headset since it is very hard to do manual entry for each new file.
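The sketch below illustrates the default-tag idea in this paragraph with an assumed in-memory data layout; the tag names and the class interface are made up for the example.

    # Sketch of the tagging scheme in [0034]: per-file tags plus a user-set
    # default tag list applied automatically to every new download.

    class MediaLibrary:
        def __init__(self):
            self.files = {}            # file name -> set of tags
            self.default_tags = set()  # applied to every new download

        def set_default_tags(self, *tags):
            self.default_tags = set(tags)

        def add_download(self, name, *tags):
            self.files[name] = set(tags) | self.default_tags

        def search(self, *terms):
            terms = set(terms)
            return [n for n, t in self.files.items() if terms <= t]

    # Example: set the batch tag once, then search by tag later.
    lib = MediaLibrary()
    lib.set_default_tags("week-42", "music")
    lib.add_download("song1.mp3", "jazz")
    print(lib.search("week-42", "jazz"))   # ['song1.mp3']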
`
[0035] The search engine (SE) will implement a search algorithm consisting of a multistep process to locate a file or set of files of interest. This generalized search engine will re-use a number of similar functions for different kinds of searches, such as speech recognition and name recognition. The search engine (SE) interacts with the user through the user interface, which for example can be control buttons or via speech. In the case of speech commands, the headset synthesizes a speech signal to query the user, and the user's speech commands are processed by a speech recognition engine and then sent to the SE. The noise cancellation (and beamforming) 145 capabilities of the microphone array, described above, can be combined with the speech recognition engine to improve its performance.
`
[0036] FIG. 6 illustrates a flowchart of the multistep process by which a user locates a desired music file or set of files (or other type of multimedia file) according to some embodiments. More particularly, in the non-limiting embodiment of the process 600 in FIG. 6, a user inputs a request to initiate a search for one or more files (step 605). The search engine (SE) then queries the user for search term(s) or other search criteria or logic (step 610). The search engine scans a library (or other database, source, or storage) for files or content matching the search terms (step 615). If the search engine determines (step 620) that one or more files match type and search term(s) or other specified search criteria (yes), then the process proceeds to make a second determination (step 625) as to whether more than one file or content matches the search term(s) or other search criteria. If the determination is that they do match (yes), the process continues to determine if the user has requested more than one file or content (step 630). If the user has requested more than one file or content (yes), the file or files or other content are sent to the user making the search (step 635).
`
[0037] Returning to the determination (step 625) as to whether more than one file or content matches the search term(s) or other search criteria: if the determination is that only one file or content matches (no), that file or content is sent to the user (step 635). If either the step of determining whether one or more files match type and search term(s) (step 620) or the step of determining whether the user has requested more than one file (step 630) is negative, then the search engine queries the user to determine if the user wishes to change the search term(s) or other search criteria (step 640). If the answer is yes, then the step of the search engine scanning the library or other database, storage, or other potential file or content source (step 615) is repeated. If the determination (step 640) is no, then the search terminates (step 645). The user may of course repeat the search at any time with different search terms. It may be appreciated that this search engine logic is exemplary and non-limiting and that other search engine logic or procedures may be implemented. Furthermore, although the search may be directed to files or content such as music, it may alternatively be directed to other types of content such as audio books, podcasts, or other content.
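For concreteness, the branch structure of FIG. 6 can be written out as the short loop below. The helper callbacks for prompting the user and the tag-based matching are assumptions; only the yes/no branches follow the flowchart.

    # Sketch of the FIG. 6 search loop described in [0036]-[0037].

    def run_search(library, ask_terms, ask_change_terms, requested_many):
        while True:
            terms = ask_terms()                                # steps 610/640
            matches = [f for f in library
                       if all(t in f["tags"] for t in terms)]  # step 615
            if matches:                                        # step 620
                if len(matches) == 1 or requested_many:        # steps 625/630
                    return matches                             # step 635: send files
            if not ask_change_terms():                         # step 640
                return []                                      # step 645: terminate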
`
[0038] As shown in FIG. 2, the headset may have an optional power management algorithm that minimizes power consumption based on the usage of the handset. FIG. 7 illustrates details of the power management algorithm according to some embodiments. As shown in FIG. 7, components of the power management algorithm and procedure 700 may advantageously include managing power consumption associated with audio, memory, DSP, and/or processors to be minimized while supporting the applications in use.
For example, these may be accomplished by utilizing multiple antennas (MIMO) in the most efficient way to minimize the power consumption required for wireless transmission, shutting down certain nonessential device functionality, and turning off nonessential device circuitry.
`
[0039] The headset may be designed such that a certain application or set of applications that require relatively low power can be maintained for an indefinite time period under solar power alone; for example, using solar cells embedded in the device and aggressive power management will allow the device to support the given application(s) indefinitely without recharging by shutting down all nonessential functions except those associated with the specific application or applications. For example, the device may operate indefinitely without recharging in Bluetooth-only or Zigbee-only mode by shutting down all functions not associated with maintaining a low-rate wireless connection to the handset through Bluetooth or Zigbee; in voice-only mode the device may operate indefinitely without recharging by shutting down all functionality of the device not associated with making a voice call (e.g. certain memory access, audio processing, noise cancellation, and search algorithms) through one or more interfaces that support such calls (e.g., 2G, 3G, GSM, VoIP over Wifi), and the like. Exemplary strategies and processes are illustrated in the embodiment of FIG. 7, and are provided by way of example but not of limitation.
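The mode-based shutdown strategy can be pictured as a table mapping an operating mode to the subsystems it needs, as in the sketch below. The subsystem names, the mode table, and the power_control interface are assumptions; the disclosure only states that nonessential functions and circuits are shut down.

    # Sketch of the mode-based shutdown in [0039]. All names are illustrative.

    ALL_SUBSYSTEMS = {"wifi", "cellular", "bluetooth", "gps", "mp3_storage",
                      "search_engine", "noise_cancel", "audio_codec"}

    MODE_REQUIREMENTS = {
        "bluetooth_only": {"bluetooth", "audio_codec"},
        "voice_only":     {"cellular", "audio_codec"},
        "full":           ALL_SUBSYSTEMS,
    }

    def apply_power_mode(mode, power_control):
        keep = MODE_REQUIREMENTS[mode]
        for subsystem in ALL_SUBSYSTEMS - keep:
            power_control.disable(subsystem)   # gate clocks / power rails
        for subsystem in keep:
            power_control.enable(subsystem)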
`
[0040] The headset may advantageously support simultaneous operation on the different wireless interfaces, such as for example simultaneous operation on at least two systems that may include Wifi (802.11a/b/g/n), Wimax, 3G cellular, 2G cellular, GSM-EDGE, radio (e.g. AM/FM/XM), 802.15 (Bluetooth, UWB, and Zigbee) and GPS. These systems often operate at different frequencies and may require different antenna characteristics. The simultaneous operation over different frequencies can be done, for example, by using some set of antennas for one system and using another set of antennas for another system.
`
[0041] FIG. 8 illustrates simultaneous operation over a cellular system and a Wifi system according to some embodiments. In these embodiments, a headset 805 having a plurality of antennas 810-1, 810-2, 810-3, and 810-4 is able to connect to a wi-fi access point 820 via its one or more antennas 830, 835 and to a cellular base station 840 via one or more base station antennas 850, 855. A voice over IP call handoff between a wi-fi and cellular connection may advantageously be implemented. Another mechanism to support this simultaneous multifrequency operation is time division. In addition to simultaneous operation, the handset can support seamless handoff between two systems. For example, the handset could switch a VoIP call from a wide-area wireless network such as Wimax or 3G to a local area network such as Wifi. FIG. 8 also illustrates the seamless handoff of a VoIP call between a cellular and Wifi system.

[0042] FIG. 9 illustrates a peer-to-peer networking protocol used to establish direct or multihop connections with other wireless devices for real-time interaction and file exchange according to some embodiments. As shown in the exemplary embodiment 900 of FIG. 9, peer-to-peer connectivity may be accomplished between a plurality of headsets, handsets, or other network elements. This protocol will make use of all wireless interfaces that can establish a direct connection with other wireless devices. For example, it could use an 802.11a/b/g/n interface operating in peer-to-peer mode, an 802.15 interface, a proprietary peer-to-peer radio interface, and/or an infrared communication link. The user may select to establish peer-to-peer networks on all available interfaces simultaneously, on a subset of interfaces, or on a single interface based on a prioritized list of possible interfaces. Alternatively, the peer-to-peer network may be established based on a list or set of lists of specific devices or user IDs that the user wishes to interact with.
`
[0043] There are two main components to the peer-to-peer networking protocol: neighbor discovery and routing. In neighbor discovery a handset determines which other devices it can establish a direct connection with. This may be done, for example, by setting aside a given control channel for neighbor discovery, where nodes that are already in the peer-to-peer network listen on the control channel for new nodes beginning the process of neighbor discovery. When a node first begins the process of neighbor discovery, it broadcasts a beacon identifying itself over a control channel set up for this purpose. Established nodes on the network periodically listen on the control channel for new nodes. If an established node on the network hears a broadcast beacon, it will establish a connection with the broadcasting node. The existing node will exchange information with the new node about the existing network to which it belongs, e.g. it may exchange the routing table it has for other nodes in the network with the new node. The neighboring node will also inform other nodes on the network about the existence of the new node, and that it can be reached via the neighboring node, e.g. by exchanging updated routing tables with the other nodes. At that point the new node becomes part of the network and activates the routing protocol to communicate with all nodes in the network. FIG. 10 illustrates a flow chart describing this process according to some embodiments.
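The beacon/response exchange and the FIG. 10 flow can be summarized in the sketch below. The node and interface objects and their method names are hypothetical; only the ordering of steps follows the text.

    # Sketch of the neighbor-discovery exchange in [0043] and FIG. 10.
    # Message formats and the node/link objects are assumptions.

    def join_network(new_node, interfaces):
        for iface in interfaces:                      # try each wireless interface
            new_node.broadcast_beacon(iface)          # request to join (1010)
            neighbor = new_node.wait_for_reply(iface)
            if neighbor is None:
                continue                              # try another interface / wait
            neighbor.establish_connection(new_node)
            new_node.routing_table.update(neighbor.routing_table)  # exchange info
            neighbor.announce(new_node)               # tell other nodes about it
            return True                               # now part of the network
        return False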
`
[0044] The routing protocol will take advantage of link layer flexibility in establishing and utilizing single and multihop routes between nodes with the best possible end-to-end performance. The routing protocol will typically be based on least-cost end-to-end routing by assigning costs for each link used in an end-to-end route and computing the total cost based on these link costs. The cost function is designed to optimize end-to-end performance. For example, it may take into account the data rates, throughput, and/or delay associated with a given link in coming up with a cost of using that link. It may also adjust link layer parameters such as constellation size, code rate, transmit power, use of multiple antennas, etc., to reduce the cost of a link and thereby the cost of an end-to-end route.

[0045] In addition, for nodes with multiple antennas, multiple independent paths can be established between these nodes, and these independent paths can comprise separate links over which a link cost is computed. The routing protocol can also include multiple priorities associated with routing of each data packet depending on data priority, delay constraints, user priority, and the like.
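The least-cost routing described in [0044] amounts to a shortest-path computation over per-link costs. The sketch below uses a simple rate-and-delay cost and Dijkstra's algorithm; the specific cost formula and graph representation are assumptions made for illustration.

    import heapq

    # Sketch of least-cost end-to-end routing per [0044]: assign each link a
    # cost from its rate and delay, then pick the cheapest route.

    def link_cost(rate_bps, delay_s):
        return delay_s + 1e6 / max(rate_bps, 1)   # favor fast, low-delay links

    def best_route(graph, src, dst):
        """graph: node -> list of (neighbor, rate_bps, delay_s)."""
        heap, seen = [(0.0, src, [src])], set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == dst:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nbr, rate, delay in graph.get(node, []):
                if nbr not in seen:
                    heapq.heappush(heap, (cost + link_cost(rate, delay),
                                          nbr, path + [nbr]))
        return float("inf"), []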
