(19) United States
(12) Patent Application Publication    GOLDSTEIN
(10) Pub. No.: US 2008/0031475 A1
(43) Pub. Date: Feb. 7, 2008

(54) PERSONAL AUDIO ASSISTANT DEVICE AND METHOD

(75) Inventor: Steven Wayne GOLDSTEIN, Delray Beach, FL (US)

Correspondence Address:
GREENBERG TRAURIG, LLP
1750 TYSONS BOULEVARD, 12TH FLOOR
MCLEAN, VA 22102 (US)

(73) Assignee: Personics Holdings Inc., Boca Raton, FL

(21) Appl. No.: 11/774,965

(22) Filed: Jul. 9, 2007

Related U.S. Application Data

(60) Provisional application No. 60/806,769, filed on Jul. 8, 2006.

Publication Classification

(51) Int. Cl. H04M 1/05 (2006.01)
(52) U.S. Cl. ............................................ 455/41.2; 381/151

(57) ABSTRACT

At least one exemplary embodiment is directed to an earpiece comprising: an ambient microphone; an ear canal microphone; an ear canal receiver; a sealing section; a logic circuit; a communication module; a memory storage unit; and a user interaction element, where the user interaction element is configured to send a play command to the logic circuit when activated by a user, where the logic circuit reads registration parameters stored on the memory storage unit and sends audio content to the ear canal receiver according to the registration parameters.
`
Patent Application Publication    Feb. 7, 2008  Sheet 1 of 5    US 2008/0031475 A1

[Drawing sheet 1 of 5; the figure itself is not recoverable from this OCR. Legible labels include "Satellite Radio".]
`
Patent Application Publication    Feb. 7, 2008  Sheet 2 of 5    US 2008/0031475 A1

[Drawing sheet 2 of 5; the figure itself is not recoverable from this OCR. Legible labels include "Synthesis &" and reference numeral 318.]
`
Patent Application Publication    Feb. 7, 2008  Sheet 3 of 5    US 2008/0031475 A1

[Drawing sheet 3 of 5; the figure itself is not recoverable from this OCR.]
`
Patent Application Publication    Feb. 7, 2008  Sheet 4 of 5    US 2008/0031475 A1

[Drawing sheet 4 of 5; the figure itself is not recoverable from this OCR. Apparent labels include "Ear Canal Microphone" and "Ear Canal Receiver".]
`
Patent Application Publication    Feb. 7, 2008  Sheet 5 of 5    US 2008/0031475 A1

[Drawing sheet 5 of 5; the figure itself is not recoverable from this OCR.]
`
PERSONAL AUDIO ASSISTANT DEVICE AND METHOD

CROSS-REFERENCE TO RELATED APPLICATIONS

[0001] This application claims the priority benefit under 35 U.S.C. §119(e) of U.S. Provisional Application No. 60/806,769, filed 8 Jul. 2006, which is incorporated herein by reference in its entirety.

FIELD OF THE INVENTION

[0002] The invention relates in general to methods and devices for the storage and recall of audio content via an earpiece, and in particular, though not exclusively, for the storage and playing of music or verbal content on a system that is built into a headphone.

BACKGROUND OF THE INVENTION

[0003] Present audio content playing devices are separated from the headphone system that normally contains the speakers (also referred to as receivers). The reason for this has typically been that audio content has been stored on disks that require a separate playing system. However, even with the advent of storing audio content on non-disk RAM (Random Access Memory) storage systems, the audio content player has been separated from the earpiece system (e.g., plug-in headphones or earbuds). Combining the capacity for audio download and playing in an earpiece system is not obvious over related art, since the user interaction system (e.g., play button, keyboard system) does not readily appear compatible with the size of an earpiece device and the difficulty of user interaction.

[0004] Additionally, no system currently exists for registration and download of audio content into an earpiece.

BRIEF DESCRIPTION OF THE DRAWINGS

[0005] Embodiments of the present invention will become apparent from the following detailed description, taken in conjunction with the drawings in which:

[0006] FIG. 1 illustrates the connection between an earpiece device (103 and 104) and a communication network;

[0007] FIG. 2 illustrates at least one exemplary embodiment where earpiece devices share information with other earpiece devices within range (e.g., GPS location and identity);

[0008] FIG. 3 illustrates an example of various elements that can be part of an earpiece device in accordance with at least one exemplary embodiment;

[0009] FIG. 4 illustrates an example of a communication system in accordance with at least one exemplary embodiment that a user can use to register via his/her computer;

[0010] FIG. 5A illustrates an earpiece that can store and download audio content in accordance with at least one exemplary embodiment;

[0011] FIG. 5B illustrates a block diagram of the earpiece of FIG. 5A; and

[0012] FIG. 6 illustrates a user interface for setting the parameters of the Personal Audio Assistant.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS OF THE PRESENT INVENTION

[0013] The following description of exemplary embodiment(s) is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.

[0014] Processes, methods, materials and devices known by one of ordinary skill in the relevant arts may not be discussed in detail but are intended to be part of the enabling discussion where appropriate, for example the generation and use of transfer functions.

[0015] Notice that similar reference numerals and letters refer to similar items in the following figures; thus, once an item is defined in one figure, it may not be discussed again for following figures.

[0016] Note that herein, when referring to correcting or corrections of an error (e.g., noise), a reduction of the error and/or a correction of the error is intended.

SUMMARY OF EXEMPLARY EMBODIMENTS

[0017] At least one exemplary embodiment is directed to a system for Personalized Services delivered to a Personal Audio Assistant incorporated within an earpiece (e.g., earbuds, headphones). Personalized Services include content such as music files (for preview or purchase) related to a user's preferences, reminders from personal scheduling software, delivery and text-to-speech and speech-to-text processing of email, marketing messages, delivery and text-to-speech of stock market information, medication reminders, foreign language instruction, academic instruction, time and date information, speech-to-speech delivery, instructions from a GPS system and others. A Personal Audio Assistant can be an audio playback platform for providing the user with Personalized Services.

[0018] At least one exemplary embodiment is directed to a Personal Audio Assistant system that is included as part of an earpiece (e.g., headphone system). The Personal Audio Assistant is capable of digital audio playback, mitigating the need to carry a personal music player. Furthermore, a subscription-based service provides audio content to the user through the Personal Audio Assistant. The type of audio content, which is automatically provided to the user, is based on the user's preferences, which are obtained through a registration process.

[0019] The audio content, which is seamlessly downloaded to the Personal Audio Assistant in the background, is managed from a Server system and is only available on the Personal Audio Assistant for a predetermined period of time or for a fixed number of playback counts. However, the user can purchase any music file or electronic book directly from the Personal Audio Assistant with a simple one-click control interface, storing the purchased audio content on the Personal Audio Assistant as well as storing the content permanently in a user storage lock-box location on the Server system.
`
[0020] The system provides for audio content to be new and "fresh" each time the user auditions the content. As such, the content is typically auditioned in a first-in, first-out scenario. In one such example, the user has turned on the Personal Audio Assistant at 8:00 am and by 10:00 am has
auditioned 2 hours of content that were created for the user as a manifestation of the user's choices of their preferences of genre, artist, their demographics, day of the week, time of day and purchase history. The system also provides for the elimination of a particular song or playlist in situ.

[0021] As the user's Listening History Envelope is updated based on experience, subsequent downloads will only contain content incorporating these revised preferences. The Personal Audio Assistant provides for ample memory, thus permitting hours of uninterrupted playback without the need to download additional content from the server. When in need, the Personal Audio Assistant automatically interrogates various communication platforms as it searches for connections. Once a connection is made, the Listening History Envelope file is uploaded to the server, and a new set of personalized playlist content is downloaded to the Personal Audio Assistant. Accordingly, as the Personal Audio Assistant content is auditioned and thus depleted, the communications system provides for constant replenishment.
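The replenishment cycle described above (find a connection, upload the Listening History Envelope, download a fresh personalized set, audition first-in, first-out) can be sketched as follows. The link object and its upload/download methods are hypothetical stand-ins for whatever communication platform is found; none of this is the patent's protocol.

```python
from collections import deque


def replenish(playlist: deque, history: list, connect):
    """On any available connection: upload the Listening History Envelope,
    then append freshly personalized content (hypothetical server protocol)."""
    link = connect()                  # returns None when no platform is in range
    if link is None:
        return playlist
    link.upload(history)              # server revises preferences from the envelope
    history.clear()
    playlist.extend(link.download())  # new personalized set, auditioned first-in, first-out
    return playlist


class FakeLink:
    """Stand-in for a real communication platform, for illustration only."""
    def __init__(self):
        self.received = None

    def upload(self, envelope):
        self.received = list(envelope)

    def download(self):
        return ["track-3", "track-4"]


pl, hist, link = deque(["track-1"]), ["skipped track-0"], FakeLink()
replenish(pl, hist, lambda: link)
assert list(pl) == ["track-1", "track-3", "track-4"] and hist == []
```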
`
[0022] In another embodiment, the Personal Audio Assistant also provides for a new set of business solutions to be offered to the music industry. As the personalized audio content is only available for audition for a limited period of time, and cannot be sent to the user again for weeks to months, the user's purchasing behavior can be demonstrated as spontaneous. The basic model of "try before you buy" is the expected outcome. In another iteration, the distributor of the music can choose to offer discounts, which can be time-sensitive or quantity-sensitive in nature, in effect promoting greater purchase activity from the user.

[0023] In another iteration, while in audition a user may wish to place the desired content in a hold status. The hold status forms the basis of a "wish list," thus allowing the user to hold audio content for future consideration while it is being auditioned. This content resides in the memory of the Personal Audio Assistant for a defined period and is then automatically erased, or the user can do so manually. The selected content will also appear on the user's computer via a URL address; there it resides on the server ready for audition or purchase and download.

[0024] The system is designed to operate as simply as possible. Using a single button, which has multiple contacts, the interface allows the user to purchase, delete, skip to next, add to a wish list, and even control listening level.
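One way to read the single multi-contact button described above is as a dispatch table from contact patterns to commands. The pattern names and command set below are assumptions for illustration; the patent does not specify them.

```python
# Hypothetical mapping from contact patterns on the single button to commands.
ACTIONS = {
    "click": "purchase",
    "double": "delete",
    "triple": "skip_next",
    "hold": "add_to_wishlist",
    "roll_up": "level_up",      # listening-level control
    "roll_down": "level_down",
}


def dispatch(pattern: str) -> str:
    """Resolve a contact pattern to a command; unknown patterns are ignored."""
    return ACTIONS.get(pattern, "noop")


assert dispatch("double") == "delete"
assert dispatch("unknown") == "noop"
```

A table like this keeps the entire user interface within one physical control, which is the constraint the background section raises about earpiece-sized devices.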
[0025] In another iteration, the user can download their own music to the Personal Audio Assistant for audition. The Personal Audio Assistant system is capable of text-to-speech processing and can interface with personal scheduling software to provide auditory schedule reminders for the user. Auditory reminders relating to the user's medication schedule are also generated by the system.

[0026] At least one exemplary embodiment includes input Acoustic Transducers (microphones) for capturing the user's speech as well as Environmental Audio. In further embodiments, stereo input Acoustic Transducers capture Environmental Audio and, mixing it with the audio signal path, present the ambient sound field to the user, mitigating the need to remove the Headphone apparatus for normal conversation.

[0027] Additional exemplary embodiments are directed to various scenarios for the delivery and consumption of audio content. The Personal Audio Assistant can store and play back audio content in compressed digital audio formats. In one embodiment, the storage memory of the Personal Audio Assistant is completely closed to the end-user and controlled from the Server. This allows for audio content to be distributed on a temporary basis, as part of a subscription service. In another iteration of the present invention, the storage memory of the Personal Audio Assistant is not completely closed to the end-user, allowing the user to transfer audio content to the Personal Audio Assistant from any capable device such as a Personal Computer or a Personal Music Player.

`
[0028] In at least one exemplary embodiment, the Personal Audio Assistant automatically scans for other Bluetooth-enabled audio playback systems and notifies the user that additional devices are available. These additional devices can include a Bluetooth video system, television system, personal video player, video camera, cell phone, another Personal Audio Assistant and others.

`
[0029] In another iteration, the Personal Audio Assistant can be directly connected to a Terrestrial Radio receiver, or have such a receiver built in to the system.

`
[0030] In another exemplary embodiment, a technique known as Sonification can be used to convey statistical or other numerical information to a headphone. For example, the user would be able to receive information about the growth or decline of a particular stock, groups of stocks or even sectors of the markets through the Personal Audio Assistant. Many different components can be altered to change the user's perception of the sound, and in turn, their perception of the underlying information being portrayed. An increase or decrease in some level of share price or trading levels can be presented to the user. A stock market price can be portrayed by an increase in the frequency of a sine tone as the stock price rises, and a decline in frequency as it falls. To allow the user to determine that more than one stock is being portrayed, different timbres and spatial locations might be used for the different stocks, or they can be played to the user from different points in space, for example, through different sides of their headphones. The user can act upon this auditory information and use the controls built in to the headphone to either purchase or sell a particular stock position.
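As a concrete instance of the sine-tone mapping described above (rising price, rising pitch; one stock per side of the headphones), here is a sketch; the frequency range, sample rate, and linear mapping are illustrative assumptions, not parameters from the patent.

```python
import math


def price_to_freq(price, lo_price, hi_price, lo_hz=220.0, hi_hz=880.0):
    """Linear map of share price onto sine-tone frequency: rising price -> rising pitch."""
    x = (price - lo_price) / (hi_price - lo_price)
    return lo_hz + max(0.0, min(1.0, x)) * (hi_hz - lo_hz)


def stereo_tone(freq_left, freq_right, n=4, sr=8000):
    """n stereo sample frames; each stock is placed on its own side of the headphones."""
    return [(math.sin(2 * math.pi * freq_left * t / sr),
             math.sin(2 * math.pi * freq_right * t / sr))
            for t in range(n)]


assert price_to_freq(50, 0, 100) == 550.0   # midpoint of the 220-880 Hz range
assert price_to_freq(200, 0, 100) == 880.0  # clipped at the top of the range
frames = stereo_tone(price_to_freq(50, 0, 100), price_to_freq(75, 0, 100))
assert len(frames) == 4 and all(len(f) == 2 for f in frames)
```

Different timbres would replace the plain sine in `stereo_tone`; the per-ear split is the "different points in space" case from the paragraph above.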
`
[0031] Furthermore, specific sonification techniques and preferences can be presented to the user as "themes" from which the user can select. For example, one theme might auralize the current trading price of one stock with an ambient sine tone in the left ear, the price of another stock in the right ear, their respective trade volumes as perceived elevation using personalized head-related transfer function binauralization, and the current global index or other market indicator as the combined perceptual loudness of both tones. Such a scheme affords ambient auditory display, in this example, of five dimensions of financial data without compromising the user's ability to converse or work on other tasks. In another embodiment, the system affords users the ability to customize themes to their liking and to rapidly switch among them using simple speech commands. Additionally, the user can search the web from voice commands and receive results via a text-to-speech synthesizer.

`
[0032] In yet another exemplary embodiment, the PAA functions as a dictation device for medical professionals, for dictating clinical information to a patient's medical record, or to write prescriptions for medication or devices. Conversely, the PAA can function as text-to-speech, allowing the clinician to audition information from a medical record rather than reading it. [BF thought: can save considerable time preparing clinician interaction with patient.]

`
[0033] In another iteration, the Personal Audio Assistant can function as a tool to locate other users of Personal Audio Assistants who share common interests, or who are searching for particular attributes of other users. Whereas the 1st user has stored specific personal information in the Public Data memory of the Personal Audio Assistant (an example of which might be related to schools attended, marital status, profession, etc.), or the 1st user can be in search of another user with these attributes, and whereas a 2nd user of a Personal Audio Assistant comes within communication range of the 1st user, the individual Personal Audio Assistants communicate with each other and access the personal information stored in each of their respective Public Data memories to ascertain if these users have common interests. If a match occurs, each unit can contain both audible and visual indicators announcing that a match has been made, and thus each user can start a dialog either physically or electronically via the environmental microphones.
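The Public Data comparison described above amounts to intersecting the fields two users chose to share and signaling when enough of them agree. A sketch, with hypothetical field names and match threshold:

```python
def common_interests(public_a: dict, public_b: dict, min_matches: int = 1):
    """Compare the Public Data each user chose to share; a match would trigger
    the audible/visual indicator on both units (field names are illustrative)."""
    shared = {k for k in public_a.keys() & public_b.keys()
              if public_a[k] == public_b[k]}
    return len(shared) >= min_matches, shared


a = {"school": "State U", "profession": "audiologist", "marital_status": "single"}
b = {"school": "State U", "profession": "engineer"}
matched, fields = common_interests(a, b)
assert matched and fields == {"school"}
```

Because each user controls which Registration Process items enter their Public Data subset, only the intersection of the two shared subsets is ever compared.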
`
Examples of Terminology

[0034] Note that the following non-limiting examples of terminology are solely intended to aid in understanding various exemplary embodiments and are not intended to be restrictive of the meaning of terms nor all-inclusive.

[0035] Acoustic Isolation Cushion: An "Acoustic Isolation Cushion" shall be defined as a circum-aural or intra-aural device that provides acoustic isolation from Environmental Noise. Acoustic Isolation Cushions can be included as part of a Headphones system, allowing the output of the acoustical transducers to reach the ear unimpeded, but still providing acoustic isolation from Environmental Noise.

[0036] Acoustic Transducer: An "Acoustic Transducer" shall be defined as a device that converts sound pressure level variations into electronic voltages or vice versa. Acoustic Transducers include microphones, loudspeakers, Headphones, and other devices.

[0037] Audio Playback: "Audio Playback" shall be defined as the auditory stimuli generated when Playback Hardware reproduces audio content (music, speech, etc.) for a listener or a group of listeners listening to Headphones.

`
[0038] Audition: "Audition" shall be defined as the process of detecting sound stimulus using the human auditory system. This includes the physical, psychophysical, psychoacoustic, and cognitive processes associated with the perception of acoustic stimuli.

`
[0039] Client: A "Client" shall be defined as a system that communicates with a Server, usually over a communications network, and directly interfaces with a user. Examples of Client systems include personal computers and mobile phones.

`
[0040] Communications Port: A "Communications Port" shall be defined as an interface port supporting bidirectional transmission protocols (TCP/IP, USB, IEEE 1394, IEEE 802.11, Bluetooth, A2DP, GSM, CDMA, or others) via a communications network (e.g., the Internet, cellular networks).

`
[0041] Control Data: "Control Data" shall be defined as information that dictates the operating parameters for a system or a set of systems.

[0042] Earcon: An "Earcon" shall be defined as a Personalized Audio signal that informs the User of a pending event, typically inserted in advance of the upcoming audio content.

`
[0043] Ear Mold Style: "Ear Mold Style" shall be defined as a description of the form factor for an intra-aural device (e.g., hearing aids). Ear Mold Styles include completely in the canal (CIC), in the canal (ITC), in the ear (ITE), and behind the ear (BTE).

`
[0044] Environmental Audio: "Environmental Audio" shall be defined as auditory stimuli of interest to the user in the environment where the user is present. Environmental Audio includes speech and music in the environment.

`
[0045] Environmental Noise: "Environmental Noise" shall be defined as the auditory stimuli inherent to a particular environment where the user is present and which the user does not wish to audition. The drone of highway traffic is a common example of Environmental Noise. Note that Environmental Noise and Audio Playback are two distinct types of auditory stimuli. Environmental Noise does not typically include Music or other audio content.

`
[0046] E-Tailing System: An "E-tailing System" shall be defined as a web-based solution through which a user can search, preview and acquire some available product or service. Short for "electronic retailing," E-tailing is the offering of retail goods or services on the Internet. Used in Internet discussions as early as 1995, the term E-tailing seems an almost inevitable addition to e-mail, e-business, and e-commerce. E-tailing is synonymous with business-to-consumer (B2C) transactions. Accordingly, the user can be required to register by submitting personal information, and the user can be required to provide payment in the form of currency or other consideration in exchange for the product or service. Optionally, a sponsor can bear the cost of compensating the E-tailer, while the user would receive the product or service.

`
[0047] Generic HRTF: A "Generic HRTF" shall be defined as a set of HRTF data that is intended for use by any Member. A Generic HRTF can provide a generalized model of the parts of the human anatomy relevant to audition and localization, or simply a model of the anatomy of an individual other than the Member. The application of Generic HRTF data to Audio Content provides the least convincing Spatial Image for the Member, relative to Semi-Personalized and Personalized HRTF data. Generic HRTF data is generally retrieved from publicly available databases such as the CIPIC HRTF database.

`
[0048] Headphones: "Headphones" (also known as earphones, earbuds, stereophones, headsets, Canalphones, or the slang term "cans") are a pair of transducers that receive an electrical signal from a media player or from communication receivers and transceivers, and use speakers placed in close proximity to the ears (hence the name earphone) to convert the signal into audible sound waves. Headphones are intended as personal listening devices that are placed either circum-aurally or intra-aurally according to one of the Ear Mold Styles, as well as other devices that meet the above definition, such as advanced eyewear that includes Acoustical Transducers (i.e., Dataview). Headphones can also include
`
stereo input Acoustic Transducers (microphones) included as part of the Ear Mold Style form factor.

`
[0049] HRTF: "HRTF" is an acronym for head-related transfer function, a set of data that describes the acoustical reflection characteristics of an individual's anatomy relevant to audition. Although in practice they are distinct (but directly related), this definition of HRTF encompasses the head-related impulse response (HRIR) or any other set of data that describes some aspects of an individual's anatomy relevant to audition.

`
[0050] Informed Consent: "Informed Consent" shall be defined as a legal condition whereby a person can be said to have given formal consent based upon an appreciation and understanding of the facts and implications associated with a specific action. For minors or individuals without complete possession of their faculties, Informed Consent includes the formal consent of a parent or guardian.

`
[0051] Listening History Envelope: "Listening History Envelope" shall be defined as a record of a user's listening habits over time. The envelope includes system data, time the system was turned off, time the system is presenting [BF thought: system doesn't audition; system transducers, and user auditions] content, time stamp of content being auditioned, content which is skipped, deleted, played multiple times, or saved in the Wish List, and time between listening sessions.

`
[0052] Music: "Music" shall be defined as a form of expression in the medium of time using the structures of tones and silence to create complex forms in time through construction of patterns and combinations of natural stimuli, principally sound. Music can also be referred to as audio media or audio content.

`
[0053] Playback Hardware: Any device used to play previously recorded or live streaming audio. Playback Hardware includes Headphones, loudspeakers, personal music players, mobile phones, and other devices.

`
[0054] Personal Audio Assistant: A "Personal Audio Assistant" shall be defined as a portable system capable of interfacing with a communications network, directly or through an intermediate, to transmit and receive audio signals and other data.

`
[0055] Personal Computer: "Personal Computer" shall be defined as any piece of hardware that is an open system capable of compiling, linking, and executing a programming language (such as C/C++, Java, etc.).

`
[0056] Personal Music Player: "Personal Music Player" shall be defined as any portable device that implements perceptual audio decoder technology but is a closed system in that users are not generally allowed or able to write software for the device.

`
[0057] Personalized HRTF: A "Personalized HRTF" shall be defined as a set of HRTF data that is measured for a specific Member and unique to that Member. The application of Personalized HRTF data to Audio Content creates, by far, the most convincing Spatial Image for the Member (Begault et al. 2001; D. Zotkin, R. Duraiswami, and L. Davis 2002).

`
[0058] Personalized Services: "Personalized Services" shall be defined as services customized to better meet the needs of an individual. Personalized Services include media content (for preview or purchase) related to a user's preferences, reminders from personal scheduling software, delivery and text-to-speech processing of email, marketing messages, delivery and text-to-speech of stock market information, medication reminders, foreign language instruction [real-time foreign language translation?], academic instruction, time and date information, and others.

`
[0059] Public Data: "Public Data" shall be defined as data which contains specific and personal information about the registered user of the Personal Audio Assistant. The registered user chooses which portions of their complete Registration Process data they wish to include in this subset. This data becomes distributed to other users who have compliant devices, thus allowing other users to know specific details of the registered user.

`
[0060] Registration Process: "Registration Process" includes the acquisition of the user's preferences via a web page. Typically, the process would include the items to be captured: age, demographics, email, gender, Relative Audiogram, Personal Preferences, banking information, credit card information, wake-up and sleep times, music preferences by genre and artist, preferences for writers and authors, desire to receive advertising, turn-on listening level, equalization, email preferences, parental control setup, as well as other user-controlled settings.
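The captured items could be carried on the device as a simple record. The sketch below picks a few of the listed items, with assumed types and defaults; it is not the patent's schema.

```python
from dataclasses import dataclass, field


@dataclass
class RegistrationParameters:
    """Subset of Registration Process items (names and types are illustrative)."""
    age: int
    email: str
    genres: list = field(default_factory=list)   # music preferences by genre
    turn_on_level_db: float = 65.0               # assumed turn-on listening level
    receive_advertising: bool = False
    parental_control: bool = False


r = RegistrationParameters(age=30, email="user@example.com", genres=["jazz"])
assert r.turn_on_level_db == 65.0 and not r.receive_advertising
```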
`
[0061] Relative Audiogram: A "Relative Audiogram" shall be defined as a measured set of data describing a specific individual's hearing threshold level as a function of frequency. A Relative Audiogram is only an approximate Audiogram, leaving more complete Audiogram analysis to qualified audiologists.

`
[0062] Semi-Personalized HRTF: A "Semi-Personalized HRTF" shall be defined as a set of HRTF data that is selected from a database of known HRTF data as the "best fit" for a specific user. Semi-Personalized HRTF data is not necessarily unique to one user; however, interpolation and matching algorithms can be employed to modify HRTF data from the database to improve the accuracy of a Semi-Personalized HRTF. The application of Semi-Personalized HRTF data to Audio Content provides a Spatial Image that is improved compared to that of Generic HRTF data, but less effective than that of Personalized HRTF data. The embodiments within speak to a variety of methods for determining the best-fit HRTF data for a particular Member, including anthropometrical measurements extracted from photographs and deduction.
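One plausible reading of "best fit" over anthropometrical measurements is a nearest-neighbour search against a database of measured subjects. The distance rule and database layout below are assumptions for illustration, not the patent's matching algorithm.

```python
def best_fit_hrtf(member_anthro, database):
    """Pick the database entry whose anthropometric measurements (e.g., extracted
    from photographs) lie closest to the Member's, as one plausible best-fit rule."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(database, key=lambda entry: dist(entry["anthro"], member_anthro))


# Hypothetical database of measured subjects (units arbitrary for the sketch).
db = [{"id": "subj_003", "anthro": (5.1, 3.0)},
      {"id": "subj_021", "anthro": (5.6, 3.3)}]
assert best_fit_hrtf((5.5, 3.2), db)["id"] == "subj_021"
```

Interpolation between the nearest entries, as the definition mentions, would be a refinement on top of this selection step.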
`
[0063] Server: A "Server" shall be defined as a system that controls centrally held data and communicates with Clients.

`
[0064] Sonification: "Sonification" shall be defined as the use of non-speech audio to convey information or to aurally perceptualize non-acoustic data (auralize). Due to a variety of phenomena involving human cognition, certain types of information can be better or more efficiently conveyed using auditory means than, for example, visual means.

`
Exemplary Embodiments

[0065] FIG. 1 illustrates the connection between an earpiece device (103 and 104) and a communication network (101), which can be operatively connected (via wire or wirelessly) to a server system (100) and/or an e-mail server (105). Additionally, a radio signal (e.g., satellite radio) can be input into the earpiece 500 via a communication module (e.g., Bluetooth wireless module 515).

`
[0066] FIG. 2 illustrates at least one exemplary embodiment where earpiece devices share information with other earpiece devices within range (e.g., GPS location and identity). For example, multiple users (e.g., 202, 203, 204, and 206) can send signals to each individual earpiece (e.g., 500) when in range (e.g., via a wireless connection 205) or to a mobile audio communications device 200 via a wireless connection (201) with each earpiece (500). Additionally, information (e.g., audio content, a software download) can be sent via a client's computer 207 to each earpiece, either directly (e.g., 205) or via 200. For example, audio content can be retrieved on a user's computer and sent to the earpieces that have authorization to use it.

`
[0067] FIG. 3 illustrates an example of various elements that can be part of an earpiece device in accordance with at least one exemplary embodiment. The earpiece can include all or some of the elements illustrated in FIG. 3. For example, the logic circuit 570, or the operatively connected memory storage device 585, can include spatial enhancement software 329, DSP code 330, a speech synthesis and recognition system 311, and a digital timer 312. Additional elements can be connected to the logic circuit 570 as needed, for example a software communication interface 307 (e.g., wireless module 515), a data port interface 306, audio input buffers 300 connected to digital audio input 302 and/or analog audio input converted to digital via an ADC 301, environmental audio input acoustic transducer(s) 321 converted to digital via an ADC 316, user control 324, digital audio output 328, output acoustic transducers 319, display systems 318, and communication buffers 325, as well as other electronic devices as known by one of ordinary skill in the relevant arts.

`
[0068] FIG. 4 illustrates an example of a communication system in accordance with at least one exemplary embodiment that a user can use to register via his/her computer 419, via a communication network 400 (e.g., an Internet connection) connected to the various database and registration systems illustrated and labeled in FIG. 4.

`
[0069] FIG. 5A illustrates an earpiece that can store and download audio content in accordance with at least one exemplary embodiment. The earpiece 500 can include a first user interaction element 530 (e.g., a button) that can be used to turn the earpiece 500 on or, if on, to activate an audio play command to start playing saved audio content. The earpiece 500 can also include a second user interaction element 550 (e.g., a slide control) that can be used, for example, to control the volume. The earpiece can also include recharge ports that can accept two wires of varying voltage that can be inserted into the recharge ports 570 to recharge any batteries in the earpiece 500. The earpiece can include an ambient microphone 520 and an optional communication antenna 510 that, if needed, can aid in the communication between the earpiece 500 and a communication network.

`
[0070] FIG. 5B illustrates a block diagram of the earpiece of FIG. 5A, illustrating the first user interaction element 530; the ambient microphone (AM) 520, which can be used to pick up ambient audio content; an ear canal microphone (ECM) 570 that can pick up audio in the ear canal region; and an ear canal receiver (ECR) 580 that can direct audio content to the ear drum, all of which can be connected operatively to a logic circuit 570. A memory storage device can be operatively connected to the logic circuit (LC) 570 and can store data such as registration, preference, and audio content data. The optional communication antenna 510 can be connected to a communication module (e.g., wireless module 515), and can receive or transmit information 560 to a communication network.
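The control flow from the abstract (the user interaction element sends a play command; the logic circuit reads registration parameters from storage and sends audio to the ear canal receiver accordingly) can be sketched as below. All structures here are hypothetical stand-ins, not firmware.

```python
def on_play_pressed(memory, ear_canal_receiver):
    """User interaction element 530 -> logic circuit 570: read registration
    parameters from the memory storage device and drive the ECR 580 accordingly
    (a sketch of the abstract's control flow, with assumed data shapes)."""
    params = memory["registration"]
    track = memory["audio_content"][0]            # next item, first-in first-out
    level = params.get("turn_on_listening_level", 1.0)
    ear_canal_receiver.append((track, level))     # stand-in for driving the ECR
    return track, level


mem = {"registration": {"turn_on_listening_level": 0.5},
       "audio_content": ["welcome.mp3"]}
ecr_out = []
assert on_play_pressed(mem, ecr_out) == ("welcome.mp3", 0.5)
assert ecr_out == [("welcome.mp3", 0.5)]
```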
`
[0071] FIG. 6 illustrates a user interface for se