US 20070165875A1

(12) Patent Application Publication          (10) Pub. No.: US 2007/0165875 A1
     Rezvani et al.                          (43) Pub. Date: Jul. 19, 2007
`
(54) HIGH FIDELITY MULTIMEDIA WIRELESS HEADSET

(76) Inventors: Behrooz Rezvani, San Ramon, CA (US); Andrea Goldsmith, Menlo Park, CA (US)

Correspondence Address:
PERKINS COIE LLP
P.O. BOX 2168
MENLO PARK, CA 94026 (US)

(21) Appl. No.: 11/607,431

(22) Filed: Dec. 1, 2006

Related U.S. Application Data

(60) Provisional application No. 60/741,672, filed on Dec. 1, 2005.

Publication Classification

(51) Int. Cl.
     H04R 1/10          (2006.01)
(52) U.S. Cl. ........................................ 381/74; 381/367
`
(57) ABSTRACT

The invention provides a multiple-antenna wireless multimedia headset with high fidelity sound, peer-to-peer networking capability, seamless handoff between multiple wireless interfaces, multimedia storage with advanced search capability, and ultra low power such that the device is capable of operation without recharging. The headset supports multiple wireless systems such as Wifi (802.11a/b/g/n), Wimax, 3G cellular, 2G cellular, GSM-EDGE, radio (e.g. AM/FM/XM), 802.15 (Bluetooth, UWB, and Zigbee) and GPS. The headset also provides a platform such that applications can access the high fidelity sound system, the speech recognition engine, the microprocessor, and the wireless systems on the device.
`
[Representative drawing (FIG. 4): microphone array for high fidelity sound — microphone array 405, receiver, echo cancellers, ambient noise microphone, background noise canceller 412, frequency domain noise enhancer, speech quality enhancement 460, speech parameter extraction, distorted voice parameter extraction 425, ear canal element 430]
`
[Sheet 1 of 10 — FIG. 1: Function Block Diagram of Headset 100 — peer-to-peer networking; audio subsystem with microphone array processing, voice recognition, and noise cancellation; GPS and AM/FM/XM radio; power management (enhanced battery life); user interface (commands); GSM/EDGE/3G and/or Wimax; Wifi (802.11 a/b/g/n); 802.15 (Bluetooth, Zigbee, UWB) 135; antenna algorithms; applications]
`
`
`
[Sheet 2 of 10 — FIG. 2: Headset subsystems 200 — audio interface 227, microphone, control buttons 226, baseband processor, USB interface 237, battery/charger 270/275, power supplies 260, solar cells 265]
`
`
`
[Sheet 3 of 10 — FIG. 3: Model 300 for user interface via speech recognition — language model (HMM) on device storage 305; one-time per-language training for 500 common words 310; occasional user input 315 to augment the language model with user-specific vocabulary (for example proper names); speech recognizer (HMM) 320; feature extraction; dictionary/translation assistance on another device (e.g. laptop) via peer-to-peer network 330; indexing 340; sources of indexed data 342 (automated metadata 343: music, video, contact info; user-entered metadata: photos); search results 350; search results user interface 355; actions 345 (dial a number, play song matching search results) 360]
`
`
`
[Sheet 4 of 10 — FIG. 4: Microphone array for high fidelity sound — microphone array, ambient noise microphone, background noise canceller, echo cancellers, frequency domain noise enhancer, speech quality enhancement, speech parameter extraction, distorted voice parameter extraction]
`
`
`
`
[Sheet 5 of 10 — FIG. 5: Microphone array beamsteering 500 in the direction of the user speaking — utterance 510, ambient noise microphone]
`
`
`
[Sheet 6 of 10 — FIG. 6: Flowchart 600 for the search engine locating a music file or files — user inputs a request to initiate a search for one or more files 605; SE queries user for search term(s); SE scans library for files matching search terms; one or more files match type and term(s)?; more than one file matches search term(s)?; user has requested more than 1 file?; file/files sent to user; SE queries user if he wishes to change search term(s); end 645]
`
`
`
[Sheet 7 of 10 — FIG. 7: Power management algorithm 700 — DSP & analog/RF power optimization; advanced power management algorithms; multi-antenna; shutting down most functions; shutting down most circuits (conserve mode)]
`
`
`
[Sheet 8 of 10 — FIG. 8: Simultaneous operation and seamless handoff 800]
`
`
`
[Sheet 9 of 10 — FIG. 9: Peer-to-peer networking capability — 915]
`
`
`
[Sheet 10 of 10 — FIG. 10: Flowchart 1000 for a new node joining the network — a new node wishing to join the network broadcasts a request to join 1010; does a neighboring node hear the request to join? If not, the new node tries a different interface and/or waits a random time; neighboring node establishes connection with new node; neighboring node exchanges information about the existing network with the new node (e.g. its routing table); neighboring node informs other nodes on the network about the new node; new node becomes part of the established network and enables routing to other nodes]
`
`
`
`
`HIGH FIDELITY MULTIMEDIA WIRELESS
`HEADSET
`
`BACKGROUND
`
`[0001]
`
`1. Field of the Invention
`
`[0002] The present invention generally relates to wireless
`multimedia headsets.
`
`[0003]
`
`2. Description of the State-of-the-Art
`
[0004] Wireless headsets are common devices used for hands-free operation in conjunction with cell phones and VoIP phones, as well as with portable music players such as digital MP3 players. Such headsets typically include radio technology to access a given wireless system. For example, cell phone headsets use wireless technology to communicate with the cell phone handset such that the voice signals received by the handset over the cell phone system can be transferred to the headset. Similarly, wireless headsets for MP3 players use wireless technology to transfer music files from the player to the headset.
`
BRIEF DESCRIPTION OF THE DRAWINGS

[0005] FIG. 1 provides a functional block diagram showing the features of the headset according to some embodiments.

[0006] FIG. 2 illustrates the subsystems that support the various functionalities according to some embodiments.

[0007] FIG. 3 illustrates a model for the user interface via speech recognition according to some embodiments.

[0008] FIG. 4 illustrates a microphone array for a high fidelity sound system according to some embodiments.

[0009] FIG. 5 illustrates an exemplary situation in which a human speaker (user) is making sounds or an utterance toward an array of microphones including a plurality of individual microphones or microphone sets according to some embodiments.

[0010] FIG. 6 illustrates a flowchart of the multistep process by which a user locates a desired music file or set of files (or other type of multimedia file) according to some embodiments.

[0011] FIG. 7 illustrates details of the power management algorithm according to some embodiments.

[0012] FIG. 8 illustrates simultaneous operation over a cellular system and a Wifi system according to some embodiments.

[0013] FIG. 9 illustrates a peer-to-peer networking protocol used to establish direct or multihop connections with other wireless devices for real-time interaction and file exchange according to some embodiments.

[0014] FIG. 10 illustrates a flow chart describing this process according to some embodiments.

SUMMARY

[0015] In one aspect the invention provides a High-Fidelity Multimedia Wireless Headset. In another aspect, the invention provides a wireless multimedia headset that can include multiple features such as multimedia storage with advanced search capability, a high fidelity sound system, peer-to-peer networking capability, and ultra low power such that the device is capable of operation without recharging.

[0016] In another aspect, the invention provides a multimedia headset and method for designing and operating a headset comprising: a plurality of multiple wireless interfaces; an advanced search engine with media search capability; a high fidelity sound processor; power management means for ultra low power operation; and network connectivity for peer-to-peer networking.

DETAILED DESCRIPTION

[0017] The present disclosure is generally directed to a wireless multimedia headset that can include multiple features and support multiple wireless systems. These features can include any combination of multimedia storage with advanced search capability; a high fidelity sound system; peer-to-peer networking capability; and ultra low power consumption, such that the device is capable of operation without recharging. The headset can also provide a platform for both existing and new headset applications (such as "push-to-talk" between headsets) to enable access to the device features.

[0018] FIG. 1 provides a functional block diagram showing the features of the headset according to some embodiments. In these embodiments, the headset 100 comprises a user interface 105 having query 110 and command 115 functionality for voice recognition. The headset 100 includes a peer-to-peer networking 120 functionality that will allow any headset within range of other wireless devices to self-configure with them into a multihop network.

[0019] The headset 100 is capable of several applications 125, in addition to power management 130 to enhance battery life. The headset 100 supports Voice over IP (VoIP) 135 directly through any of the interfaces that allow it to connect to the Internet, as well as an audio subsystem 140 that includes several functionalities such as, for example, noise cancellation 145 (and beamforming) through microphone array processing 150, in addition to voice recognition 155 and MP3 support 160. Multiple wireless systems may be integrated into the headset 100, including, but not limited to, GPS and different radio systems (AM/FM/XM) 165, various cellular phone standards (3G/2G/GSM/Edge and/or Wimax) 170, different Wifi standards (802.11a/b/g/n) 175, and 802.15 (Bluetooth, Zigbee, and/or UWB) 180. In most embodiments, an antenna, or array of antennas, having antenna algorithms 185 is used as part of the wireless system or subsystems disclosed herein.

[0020] FIG. 2 illustrates the subsystems that support the various functionalities according to some embodiments. In these embodiments, the signals 205 received from the antenna 210, or array of antennas 215, through antenna interface 217 are processed by a MIMO RF system 220 and a baseband processor 225. The subsystems include an audio interface 227 having a microphone array 227 having an input 228 and an output 229. The subsystems include control buttons 230 for the user interface 105 as well as voice recognition 155. And, a microprocessor 235 having a USB interface 237 is present to perform the arithmetic, logic, and control operations for the various functionalities through the assistance of an internal memory 240.
`
`
[0021] The device also includes a SIM card 245 that, for example, identifies the user account with a network, handles authentication, provides data storage for basic user data and network information, and may also contain applications. The power subsystems 250 include advanced power management 255 functionality to control energy use through power supplies 260. Solar cells 265 are also available to assist in sustaining the supply of power. The solar cells 265 can charge the battery 270 from ambient light as well as solar light. A battery charger 275 is included and can charge the battery, for example, through the input of a DC current 280.
`
[0022] FIG. 3 illustrates a model for the user interface via speech recognition according to some embodiments. In these embodiments, there is a set of language models stored on the device called Hidden Markov Models (HMMs) 305 for the speech data, and these models may be enhanced through some amount of initial user training 310. In addition, occasional user input 315, either as commands or through users speaking for a voice application, can be used to augment the HMMs 305. The HMMs 305 are included in the speech recognizer 320 within a speech recognition algorithm.
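By way of illustration only (this sketch is not part of the original disclosure), the on-device vocabulary behind such a model could be augmented with user-specific entries such as proper names roughly as follows; the class and method names are hypothetical and the HMM machinery itself is abstracted away.

    # Hypothetical sketch: an on-device vocabulary backing the speech recognizer,
    # seeded once per language and augmented by occasional user input.
    class OnDeviceLanguageModel:
        def __init__(self, common_words):
            # Seed with a small common-word set (e.g. the ~500 words noted in FIG. 3).
            self.vocabulary = {w.lower() for w in common_words}
            self.user_entries = set()          # user-specific terms, e.g. proper names

        def augment(self, user_terms):
            # Occasional user input (typed or spoken) extends the vocabulary.
            self.user_entries.update(t.lower() for t in user_terms)
            self.vocabulary |= self.user_entries

        def knows(self, word):
            return word.lower() in self.vocabulary

    lm = OnDeviceLanguageModel(["call", "play", "song", "dial", "number"])
    lm.augment(["Behrooz", "Goldsmith"])       # user-specific proper names
    print(lm.knows("behrooz"))                 # True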
`
`[0023] The speech recognizer 320 receives information
`from a digital signal processor (DSP) 322, which collects,
`processes, compresses, transmits and displays analog and
`digital data for feature extraction from an acoustic signal
`324. The speech recognizer 320 is designed such that the
`wireless interfaces 325 and/or peer-to-peer network 330 can
`be used to provide an additional
`input to the algorithm.
`Specifically, the algorithm will have the ability to use any of
`the available wireless interfaces 325 and/or peer-to-peer
`network 330 to connect to another device 335 such as, for
`example, a laptop, computer, or handset to include other
`capabilities including, but not
`limited to, expanding the
`vocabulary base or providing translation assistance to the
`engine in the speech recognizer 320.
`
[0024] The algorithm will take the user's speech as a query 110 or command 115 input and initiate an indexing function 340. The sources of indexed data 342 include, but are not limited to, automated metadata extraction 343 and user entered metadata 344. Automated metadata 343 includes, for example, music, video, and contact information. User entered metadata 344 includes, for example, personal photographs. For commands, the indexing function 340 will take the appropriate action 345 to satisfy the command 115. For queries 110, the indexing function 340 will enable the search engine to locate the desired file and provide search results 350 to the user interface 355 and then take the appropriate action 345, such as dialing a number 357 or playing a desired song 360.
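A minimal, purely illustrative sketch of this query/command dispatch follows; the function names and the in-memory metadata index are hypothetical stand-ins for the indexing function 340 and the sources of indexed data 342.

    # Hypothetical sketch: recognized speech is categorized as a query or a
    # command, the index of metadata is consulted, and an action is taken.
    INDEX = {
        "jazz": "music/take_five.mp3",      # automated metadata (music)
        "mom": "contacts/mom.vcf",          # contact info
    }

    def handle_utterance(kind, text):
        if kind == "command":               # e.g. "dial a specific number"
            return ("action", "execute: " + text)
        if kind == "query":                 # e.g. "look for a particular song"
            hits = [v for k, v in INDEX.items() if k in text.lower()]
            return ("search_results", hits) # handed to the user interface
        raise ValueError("unrecognized utterance kind")

    print(handle_utterance("query", "play some jazz"))
    print(handle_utterance("command", "dial 555-0100"))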
`
[0025] The algorithms for noise cancellation (and beamforming) 145 based on the microphone array input 228 speech can be designed relative to the speech recognition algorithm, such that the feature extraction of the input 228 speech is optimized. One of skill will appreciate that noise cancellation/beamforming algorithms designed independently of the speech recognition algorithms can degrade speech recognition performance by introducing undesired speech artifacts. The speech recognition will categorize recognized speech as either a query 110 (e.g. look for a particular song) or a command 115 (e.g. dial a specific number).
`
[0026] FIG. 4 illustrates a microphone array for a high fidelity sound system according to some embodiments. The microphone array 405 is coupled with a noise cancellation algorithm to pick up sound. The microphone array 405 includes an ambient noise microphone 410 located on a part of the headset optimized to pick up background ambient noise and cancel it through a background noise canceller 412, as well as additional microphone elements 415, 420, 430 in different locations on the headset. The plurality of acoustic microphone signals are transduced into corresponding electrical microphone output signals by the microphones and communicated to the beam forming block 444.
`
[0027] An additional antenna element may be placed inside the ear canal with signal processing through a distorted voice parameter extraction component 425 to invert the distortion of the ear canal transmission and enhance the voice parameters. The antenna elements 435, 440, 445, 450 in the microphone array will have weights assigned to each antenna input. Different algorithms can be used to determine the weights, depending on the performance criteria, the number of antenna elements available and their nature, and the algorithm complexity. For example, the weights may be used to minimize ambient noise, to make the antenna array gain independent of frequency, to minimize the expected mean square distortion or error of the signal, or to steer the direction of the microphone array 227 towards the speaker as shown in FIG. 5. Other functions of the microphone array include a frequency domain noise enhancer 455, a speech quality enhancer 460, and speech parameter extraction 465.
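As one non-limiting illustration of weighted array combining, the sketch below applies per-element delays and weights (a simple delay-and-sum choice) to steer the array toward a talker; the disclosure leaves the actual weight-selection algorithm open, so this particular choice is an assumption made only for the example.

    # Hypothetical sketch: combine array element signals with per-element weights
    # and delays so the array is "steered" toward the desired sound source.
    import numpy as np

    def steer_and_combine(signals, delays_samples, weights):
        """signals: one 1-D array per element; delays/weights: one value per element."""
        n = min(len(s) for s in signals)
        out = np.zeros(n)
        for sig, d, w in zip(signals, delays_samples, weights):
            aligned = np.roll(sig[:n], -d)    # compensate each element's time of arrival
            out += w * aligned                # apply the element weight
        return out / sum(weights)             # normalize the array gain

    rng = np.random.default_rng(0)
    clean = np.sin(2 * np.pi * 0.01 * np.arange(400))
    mics = [np.roll(clean, d) + 0.3 * rng.standard_normal(400) for d in (0, 3, 6)]
    enhanced = steer_and_combine(mics, delays_samples=[0, 3, 6], weights=[1.0, 1.0, 1.0])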
`
[0028] FIG. 5 illustrates an exemplary situation in which a human speaker (user) is making sounds or an utterance toward an array of microphones including a plurality of individual microphones or microphone sets according to some embodiments. The plurality of microphones 410, 415, 420 in the array 515 receive a somewhat different acoustic signal from the human speaker (or user) 505 due to their different relative positions or distances from the human speaker (or user) 505. The different acoustical signals may be due, for example, to a different distance or angle of incidence of the acoustic wave generated by the human speaker's utterance, and they may also include or be affected by reflective surfaces in the room or other environment in which the speech, sound, or utterance 510 picked up by the microphone array 515 takes place. The time of arrival of the signal may also differ and be used alone or in conjunction with signal magnitude information to assist in beam steering.
`
[0029] The beam forming block 444 may include analog circuits, digital circuits, a combination of analog and digital circuits, hardwired or programmable processing, or other means for processing the input signals and altering the individual microphones and the microphone array and/or the processing of the individual microphone 410, 415, 420 output signals to achieve the desired beam steering. The beam steering has the effect of focusing the sensitivity of the microphone array 515 as a whole toward a desired sound source, such as the human speaker 505. It may alternatively be used to steer the sensitivity away from an objectionable sound source.
`
[0030] Advantageously, the beam steering will be used to increase the human speaker 505 (or other sound source) signal to background noise ratio or to otherwise achieve a desired result. The output 545 of the beam forming block 444 is combined with an output 560 from a background noise cancellation block 565. The background noise canceller 412 receives a background noise input signal 570 as the output electrical signal of an ambient noise microphone 410. This ambient noise microphone 410 is primarily responsible for sensing or detecting an acoustic ambient noise signal and transducing or otherwise converting it into an electrical ambient noise signal 570 which it communicates to a background noise canceller 412. Since the microphone array 515 may advantageously be steered toward the user 505 and may advantageously include a directional characteristic such that most of the sensitivity of the microphone array 515 is in the direction of the user 505, the amount or signal strength of the steerable microphone array 515 relative to the user will be higher for the user signal and lower for the ambient noise.
`
[0031] The amount or signal strength of the ambient noise microphone 410 relative to the user 505 will be lower for the user signal and higher for the ambient noise because of the non-steerable and typically non-directional character of the ambient noise microphone 410. In at least one non-limiting embodiment, the use of a plurality of microphones for sensing the user's 505 or speaker's sounds may provide added sensitivity over the sensitivity of a single ambient noise microphone. It should however be appreciated that multiple microphones may be used for the ambient noise sensing.
`
[0032] The output signal 545 from the beam forming block 444 is combined with the output signal 560 from the background noise canceller 412 to generate a signal 585 that is communicated to other processing circuitry, such as for example to the frequency domain noise enhancer in the embodiment of FIG. 4.
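The disclosure does not specify how the two signals are combined; purely as one hypothetical reading, the ambient-noise reference could drive an adaptive canceller whose estimate is subtracted from the beamformer output, as in the basic LMS sketch below.

    # Hypothetical sketch: subtract an adaptively filtered ambient-noise reference
    # (cf. signal 570) from the beamformer output (cf. signal 545) to yield a
    # cleaned signal (cf. signal 585).
    import numpy as np

    def lms_noise_canceller(primary, noise_ref, taps=16, mu=0.01):
        w = np.zeros(taps)                        # adaptive filter weights
        out = np.zeros(len(primary))
        for n in range(taps, len(primary)):
            x = noise_ref[n - taps:n][::-1]       # recent reference samples
            noise_estimate = w @ x
            e = primary[n] - noise_estimate       # error doubles as the cleaned sample
            w += mu * e * x                       # LMS weight update
            out[n] = e
        return out

    rng = np.random.default_rng(1)
    noise = rng.standard_normal(2000)
    speech = np.sin(2 * np.pi * 0.02 * np.arange(2000))
    primary = speech + 0.5 * np.convolve(noise, [0.6, 0.3], mode="same")
    cleaned = lms_noise_canceller(primary, noise)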
`
[0033] The headset will have nonvolatile storage for multimedia data files, typically music files, for example through a Flash RAM. There are many methods by which the multimedia data files may be loaded into the headset memory, for example via a wireless connection to the Internet, via a cellular telephone connection, via a satellite (e.g. XM or Sirius) or AM/FM radio receiver, via a USB high-speed data port, or via a wired or wireless connection to another device (e.g. a wireless connection to a computer, music server, handset, PDA, or other wireless device). The library may be partitioned by media type; for example, there may be one partition of the memory for music, one for phone numbers, and the like.
`
[0034] File storage will include the capability to add "tags" to files. The tagging is done to facilitate searching based on tags that the user selects for each media type. For example, a music file might have a tag or tags such as file title, song title, artist, keywords, genre, album name, music sample or clip, and the like. The headset will contain intelligent software for searching multimedia files stored on the headset based on multiple search criteria and by the type of file of interest. Alternatively, a user can set up certain tags for all files downloaded under the given tagging criterion. The user need only enter this tag or set of tags once, and then change the tag or tags when a change is desired so that, for example, all music downloaded at a given time will have the same tag. This is particularly useful for a headset since it is very hard to do manual entry for each new file.
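A short sketch of such tag handling (illustrative only; all names are hypothetical) might keep a per-media-type default tag set that is stamped onto every downloaded file until the user changes it:

    # Hypothetical sketch: a library partitioned by media type, where each
    # downloaded file receives the currently active default tags for its type.
    library = {"music": [], "phone_numbers": []}
    default_tags = {"music": {"genre": "jazz", "source": "radio"}}

    def download(media_type, name, **extra_tags):
        entry = {"file": name,
                 "tags": {**default_tags.get(media_type, {}), **extra_tags}}
        library[media_type].append(entry)
        return entry

    def search(media_type, **criteria):
        return [e for e in library[media_type]
                if all(e["tags"].get(k) == v for k, v in criteria.items())]

    download("music", "take_five.mp3", artist="Dave Brubeck")
    download("music", "so_what.mp3")
    print(search("music", genre="jazz"))    # both files carry the default tag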
`
[0035] The search engine (SE) will implement a search algorithm consisting of a multistep process to locate a file or set of files of interest. This generalized search engine will re-use a number of similar functions for different kinds of searches such as speech recognition and name recognition. The search engine (SE) interacts with the user through the user interface, which for example can be control buttons or via speech. In the case of speech commands, the headset synthesizes a speech signal to query the user, and the user's speech commands are processed by a speech recognition engine and then sent to the SE. The noise cancellation (and beamforming) 145 capabilities of the microphone array, described above, can be combined with the speech recognition engine to improve its performance.
`
[0036] FIG. 6 illustrates a flowchart of the multistep process by which a user locates a desired music file or set of files (or other type of multimedia file) according to some embodiments. More particularly, in the non-limiting embodiment of the process 600 in FIG. 6, a user inputs a request to initiate a search for one or more files (step 605). The search engine (SE) then queries the user for search term(s) or other search criteria or logic (step 610). The search engine scans a library (or other database, source, or storage) for files or content matching the search terms (step 615). If the search engine determines (step 620) that one or more files match type and search term(s) or other specified search criteria (yes), then the process proceeds to make a second determination (step 625) as to whether more than one file or content matches the search term(s) or other search criteria. If the determination is that they do match (yes), the process continues to determine if the user has requested more than one file or content (step 630). If the user has requested more than one file or content (yes), the file or files or other content are sent to the user making the search (step 635).
`
[0037] Returning to the determination (step 625) as to whether more than one file or content matches the search term(s) or other search criteria: if the determination is that only one file or content matches (no), that file or content is sent to the user (step 635). If either the step of determining if one or more files match type and search term(s) (step 620), or the step of determining if the user has requested more than 1 file (step 630), is negative, then the search engine queries the user to determine if the user wishes to change the search term(s) or other search criteria (step 640). If the answer is yes, then the step of the search engine scanning the library or other database, storage, or other potential file or content source (step 615) is repeated. If the determination (step 640) is no, then the search terminates (step 645). The user may of course repeat the search at any time with different search terms. It may be appreciated that this search engine logic is exemplary and non-limiting and that other search engine logic or procedures may be implemented. Furthermore, although the search may be directed to files or content such as music, it may alternatively be directed to other types of content such as audio books, podcasts, or other content.
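The flow of FIG. 6 can be rendered compactly in code; the sketch below is a non-limiting paraphrase of steps 605-645, with placeholder callables standing in for the user interface and the library scan.

    # Hypothetical sketch of the FIG. 6 search loop: query for terms, scan the
    # library, and either return matches or offer to change the search terms.
    def run_search(library, ask_terms, ask_change_terms, wants_multiple):
        while True:
            terms = ask_terms()                                          # step 610
            matches = [f for f in library if terms.lower() in f.lower()] # step 615
            if matches:                                                  # step 620
                if len(matches) == 1 or wants_multiple():                # steps 625/630
                    return matches                                       # step 635
            if not ask_change_terms():                                   # step 640
                return []                                                # step 645

    songs = ["blue_in_green.mp3", "blue_monk.mp3", "giant_steps.mp3"]
    print(run_search(songs,
                     ask_terms=lambda: "blue",
                     ask_change_terms=lambda: False,
                     wants_multiple=lambda: True))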
`
[0038] As shown in FIG. 2, the headset may have an optional power management algorithm that minimizes power consumption based on the usage of the handset. FIG. 7 illustrates details of the power management algorithm according to some embodiments. As shown in FIG. 7, components of the power management algorithm and procedure 700 may advantageously include managing the power consumption associated with audio, memory, DSP, and/or processors so that it is minimized while supporting the applications in use. For example, this may be accomplished by utilizing multiple antennas (MIMO) in the most efficient way to minimize the power consumption required for wireless transmission; shutting down certain nonessential device functionality; and turning off nonessential device circuitry.
`
[0039] The headset may be designed such that a certain application or set of applications that require relatively low power can be maintained for an indefinite time period under solar power alone; for example, solar cells embedded in the device and aggressive power management will allow the device to support the given application(s) indefinitely without recharging by shutting down all nonessential functions except those associated with the specific application or applications. For example, the device may operate indefinitely without recharging in Bluetooth-only or Zigbee-only mode by shutting down all functions not associated with maintaining a low-rate wireless connection to the handset through Bluetooth or Zigbee; in voice-only mode the device may operate indefinitely without recharging by shutting down all functionality of the device not associated with making a voice call (e.g. certain memory access, audio processing, noise cancellation, and search algorithms) through one or more interfaces that support such calls (e.g., 2G, 3G, GSM, VoIP over Wifi), and the like. Exemplary strategies and processes are illustrated in the embodiment of FIG. 7, and are provided by way of example but not of limitation.
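A schematic and purely illustrative rendering of this mode-based shutdown strategy is given below; the subsystem and mode names are hypothetical.

    # Hypothetical sketch: power only the subsystems required by the active mode,
    # so a low-power mode (e.g. Bluetooth-only) can be sustained indefinitely.
    SUBSYSTEMS = {"audio", "memory", "dsp", "processor", "wifi", "cellular",
                  "bluetooth", "search", "noise_cancellation"}

    MODE_REQUIREMENTS = {
        "bluetooth_only": {"bluetooth", "processor"},
        "voice_only":     {"cellular", "audio", "dsp", "processor"},
        "full":           set(SUBSYSTEMS),
    }

    def apply_power_mode(mode):
        keep = MODE_REQUIREMENTS[mode]
        # Everything not needed by the mode is shut down.
        return {subsystem: (subsystem in keep) for subsystem in SUBSYSTEMS}

    print(apply_power_mode("bluetooth_only"))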
`
`[0040] The headset may advantageously support simulta-
`neous operation on the different wireless interfaces, such as
`for example simultaneous operation on at least two systems
`that may include Wifi (802.11a/b/g/n), Wimax, 3G cellular,
`2G cellular, GSM-EDGE,radio (e.g. AM/FM/XM), 802.15
`(Bluetooth, UWB, and Zigbee) and GPS. These systems
`often operate at different frequencies and may require dif-
`ferent antenna characteristics. The simultaneous operation
`over different frequencies can be done, for example, by
using some set of antennas for one system and using another
`set of antennas for another system.
`
[0041] FIG. 8 illustrates simultaneous operation over a cellular system and a Wifi system according to some embodiments. In these embodiments, a headset 805 having a plurality of antennas 810-1, 810-2, 810-3, and 810-4 is able to connect to a Wifi access point 820 via its one or more antennas 830, 835 and to a cellular base station 840 via one or more base station antennas 850, 855. A voice over IP call handoff between a Wifi and cellular connection may advantageously be implemented. Another mechanism to support this simultaneous multifrequency operation is time division. In addition to simultaneous operation, the handset can support seamless handoff between two systems. For example, the handset could switch a VoIP call from a wide-area wireless network such as Wimax or 3G to a local area network such as Wifi. FIG. 8 also illustrates the seamless handoff of a VoIP call between a cellular and Wifi system.
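One possible (hypothetical) realization of such a handoff decision, which re-anchors an ongoing VoIP call on whichever available interface currently offers the better link, is sketched below; the thresholds and interface names are assumptions made only for the example.

    # Hypothetical sketch: choose the interface for a VoIP call and hand the call
    # over when a preferred interface (e.g. Wifi) becomes usable.
    def choose_interface(links):
        """links: interface name -> link quality in [0, 1]."""
        if links.get("wifi", 0.0) >= 0.5:      # local-area link is good enough
            return "wifi"
        if links.get("wimax", 0.0) >= 0.3:
            return "wimax"
        return "3g"                            # fall back to wide-area cellular

    def maybe_handoff(call, links):
        best = choose_interface(links)
        if best != call["interface"]:
            call["interface"] = best           # seamless re-anchoring of the call
        return call

    call = {"id": 42, "interface": "3g"}
    print(maybe_handoff(call, {"wifi": 0.8, "3g": 0.6}))   # call moves to Wifi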
`
[0042] FIG. 9 illustrates a peer-to-peer networking protocol used to establish direct or multihop connections with other wireless devices for real-time interaction and file exchange according to some embodiments. As shown in the exemplary embodiment 900 of FIG. 9, peer-to-peer connectivity may be accomplished between a plurality of headsets, handsets, or other ne . . . use of all wireless interfaces that can establish a direct connection with other wireless devices. For example, it could use an 802.11a/b/g/n interface operating in peer-to-peer mode, an 802.15 interface, a proprietary peer-to-peer radio interface, and/or an infrared communication link. The user may select to establish peer-to-peer networks on all available interfaces simultaneously, on a subset of interfaces, or on a single interface based on a prioritized list of possible interfaces. Alternatively, the peer-to-peer network may be established based on a list or set of lists of specific devices or user IDs that the user wishes to interact with.
`
[0043] There are two main components to the peer-to-peer networking protocol: neighbor discovery and routing. In neighbor discovery a handset determines which other devices it can establish a direct connection with. This may be done, for example, by setting aside a given control channel for neighbor discovery, where nodes that are already in the peer-to-peer network listen on the control channel for new nodes beginning the process of neighbor discovery. When a node first begins the process of neighbor discovery, it broadcasts a beacon identifying itself over a control channel set up for this purpose. Established nodes on the network periodically listen on the control channel for new nodes. If an established node on the network hears a broadcast beacon, it will establish a connection with the broadcasting node. The existing node will exchange information with the new node about the existing network to which it belongs, e.g. it may exchange the routing table it has for other nodes in the network with the new node. The neighboring node will also inform other nodes on the network about the existence of the new node, and that it can be reached via the neighboring node, e.g. by exchanging updated routing tables with the other nodes. At that point the new node becomes part of the network and activates the routing protocol to communicate with all nodes in the network. FIG. 10 illustrates a flow chart describing this process according to some embodiments.
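A compact sketch of this neighbor-discovery exchange follows; it is illustrative only, and the class, message, and table representations are hypothetical.

    # Hypothetical sketch: a new node beacons on a control channel; an established
    # node that hears it connects, exchanges routing tables, and informs its peers.
    class Node:
        def __init__(self, name):
            self.name = name
            self.routing_table = {name: name}   # destination -> next hop
            self.neighbors = []

        def hears_beacon(self, new_node):
            # Establish a direct connection with the broadcasting node.
            self.neighbors.append(new_node)
            new_node.neighbors.append(self)
            # Exchange routing information with the new node.
            new_node.routing_table.update({dst: self.name for dst in self.routing_table})
            self.routing_table[new_node.name] = new_node.name
            # Inform other nodes that the new node is reachable via this node.
            for peer in self.neighbors:
                if peer is not new_node:
                    peer.routing_table[new_node.name] = self.name

    a, b = Node("A"), Node("B")
    a.hears_beacon(b)            # B joins the network through A
    c = Node("C")
    a.hears_beacon(c)            # C joins; B learns C is reachable via A
    print(b.routing_table)       # {'B': 'B', 'A': 'A', 'C': 'A'}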
`
`[0044] The routing protocol will take advantage of link
`layer flexibility in establishing and utilizing single and
`multihop routes between nodes with the best possible end-
`to-end performance. The routing protocol will typically be
`based on least-cost end-to-end routing by assigning costs for
`each link used in an end-to-end route and computing the
`total cost based on these link costs. The cost function is
`
`designed to optimize end-to-end performance. For example,
`it may take into account the data rates, throughput, and/or
`delay associated with a given link in coming up with a cost
`of using that link. It may also adjust link layer parameters
`such as constellation size, code rate, transmit power, use of
`multiple antennas, etc., to reduce the cost of a link and
`thereby the cost of an end-to-end route.
`
[0045] In addition, for nodes with multiple antennas, multiple independent paths can be established between these nodes, and these independent paths can comprise separate links over which a link cost is computed. The routing protocol can also include multiple priorities associated with routing of each data packet depending on data priority, delay constraints, user priority, and the like.
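As a non-limiting illustration of least-cost end-to-end routing, the sketch below computes a route with Dijkstra's algorithm over per-link costs; the cost function shown (a simple function of data rate and delay) is an assumption, since the disclosure leaves the cost function to the designer.

    # Hypothetical sketch: least-cost routing over a graph of wireless links,
    # where each link's cost reflects its data rate and delay.
    import heapq

    def link_cost(rate_mbps, delay_ms):
        # Example cost function only: faster, lower-delay links are cheaper.
        return 1.0 / rate_mbps + 0.01 * delay_ms

    def least_cost_route(graph, src, dst):
        """graph: node -> list of (neighbor, cost). Returns (total_cost, path)."""
        pq, visited = [(0.0, src, [src])], set()
        while pq:
            cost, node, path = heapq.heappop(pq)
            if node == dst:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for nbr, c in graph.get(node, []):
                if nbr not in visited:
                    heapq.heappush(pq, (cost + c, nbr, path + [nbr]))
        return float("inf"), []

    graph = {
        "headset": [("handset", link_cost(3, 5)), ("laptop", link_cost(54, 2))],
        "handset": [("gateway", link_cost(10, 20))],
        "laptop":  [("gateway", link_cost(54, 2))],
    }
    print(least_cost_route(graph, "headset", "gateway"))   # routes via the laptop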
`