US005513129A

United States Patent [19]                     [11] Patent Number:  5,513,129
Bolas et al.                                  [45] Date of Patent: Apr. 30, 1996
`
[54] METHOD AND SYSTEM FOR CONTROLLING COMPUTER-GENERATED
     VIRTUAL ENVIRONMENT IN RESPONSE TO AUDIO SIGNALS

[75] Inventors: Mark Bolas; Ian E. McDowall

[73] Assignee: Fakespace, Inc., Menlo Park, Calif.
`
[21] Appl. No.: 91,650

[22] Filed: Jul. 14, 1993

[51] Int. Cl.6 .................................................. G06F 17/00
[52] U.S. Cl. ..................................................... 364/578
[58] Field of Search ......................... 364/578, 514, DIG. 1, DIG. 2;
         360/14.2; 346/712; 358/81, 82; 345/156, 184; 84/645, 610, 464 R,
         601, 602, 609, 611, 634, 635, 641, 642, DIG. 1, DIG. 2, DIG. 29;
         395/152, 153, 154, 155, 156, 157, 159, 161, 206
[56]                      References Cited

               U.S. PATENT DOCUMENTS

3,490,328    1/1970   King ............................... 84/464
3,609,019    9/1971   ................................... 369/135
3,617,647   11/1971   Maier et al. ...................... 369/135
3,900,886    8/1975   Coyle .............................. 353/32
4,081,829    3/1978   Brown .............................. 358/82
4,182,214    1/1980   Wakeman ........................ 84/DIG. 29
4,257,062    3/1981   Meredith ........................... 358/81
4,267,561    5/1981   Karpinsky .......................... 358/82
4,768,086    8/1988   Paist .............................. 358/81
4,988,981    1/1991   Zimmerman et al. .................. 340/709
5,148,154    9/1992   MacKay et al. ..................... 395/155
5,307,456    4/1994   MacKay ............................ 395/154
5,319,452    6/1994   Funahashi ....................... 84/464 R

             FOREIGN PATENT DOCUMENTS

2142461      1/1985   United Kingdom ..................... 84/610
WO92/09948   6/1992   WIPO ............................ G06F 3/03
OTHER PUBLICATIONS

Jacobson, et al., “Time for Technojuju,” NewMedia, p. 18, Jan. 1993.

Primary Examiner—Ellis B. Ramirez
Attorney, Agent, or Firm—Limbach & Limbach; Alfred A. Equitz
`
[57]                     ABSTRACT

A method and apparatus for the control and manipulation of a virtual
environment (such as virtual objects therein) in response to a music
signal. The music is either interpreted directly to effect the control and
manipulation, or a control track corresponding to an audio signal (such as
a music signal) is prerecorded, played back with the audio signal, and
processed to control and manipulate the virtual world (or to control some
other process of a computer system) as the audio signal plays. In
preferred embodiments, a computer creating a virtual world interprets the
music, the control track, or both, and uses the resulting information to
modify, create, and/or control objects in the virtual environment.
Preferred embodiments of the inventive system include apparatus for
delaying input music to compensate for lag introduced by the system
components, such as delay required to implement processing of control
tracks corresponding to the input music.
`
`23 Claims, 7 Drawing Sheets
`
[Front-page figure: FIG. 1 block diagram. A music source feeds an Acoustic
Etch unit (music analyzer and interface), which drives a VR processor; the
VR processor connects to head-tracking means, a display (8), an input
device (9), and headphones (10). The audio signal, optionally with control
track(s) (or the control track(s) alone), passes to the audio output.]
`Page 1 of 23
`
`Harmonix Exhibit 1001
`
`
`
`
U.S. Patent          Apr. 30, 1996          Sheet 1 of 7          5,513,129

[Drawing sheet: FIG. 1 block diagram (music source, Acoustic Etch with
digitizer, music analyzer, and interface; VR processor; display; input
device; headphones; head-tracking means). The sheet's labels are rotated
in the scan and not further recoverable.]
`
U.S. Patent          Apr. 30, 1996          Sheet 2 of 7          5,513,129

[Drawing sheet containing FIG. 3 and FIG. 4; the labels are rotated in the
scan and not further recoverable.]
`
U.S. Patent          Apr. 30, 1996          Sheet 3 of 7          5,513,129

[FIG. 5: system for creating an audio tape with control tracks.
Components: multitrack tape player; signal conditioning blocks 120A and
120B; SMPTE unit 190; microprocessors 130X and 130Y (A-to-D plus digital
I/O) with tape IF converters 140X and 140Y; switches 150; pot 160, 165;
disk drive (or other data media); 4-track tape recorder 170; 2-track tape
player.]
`
`
`
U.S. Patent          Apr. 30, 1996          Sheet 4 of 7          5,513,129

[FIG. 6: system for playback of the tape produced by the FIG. 5 system.
Components: 4-track tape player 205; tape IF converter 220X; audio
amplifier 210; multichannel audio into a digital microprocessor with
serial output 240; VR system 250; VR display (with area-of-interest
tracker).]
`
`
`
U.S. Patent          Apr. 30, 1996          Sheet 5 of 7          5,513,129

[FIG. 7: schematic of a signal-conditioning circuit (analog input from
tape, +5 V supply, 0.1 uF capacitors, 15K resistor, output to the
microprocessor).]
`
`
`
U.S. Patent          Apr. 30, 1996          Sheet 6 of 7          5,513,129
`
FIG. 10 (software flow):

    OPEN AND INITIALIZE DEVICES (400)
    INITIALIZE VARIABLES (410)
    DEFINE AND CREATE STANDARD OBJECTS (420)
    READ HEAD POSITION (430)
    READ CONTROL TRACK INFORMATION (440)
    READ DIGITIZED AUDIO AND INPUT INFORMATION (450)
    CREATE, DESTROY, MOVE AND MODIFY OBJECTS (460)
    CALL DRAWING ROUTINE (470)
    IS ESCAPE KEY DEPRESSED? (480)
    END (490)
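As an illustrative sketch only (not part of the patent text), the FIG. 10 flow can be written as a conventional per-frame loop. All device and helper names below are hypothetical placeholders.

```python
# Illustrative sketch of the FIG. 10 software flow (reference numerals 400-490).
# Every name here is a hypothetical placeholder, not taken from the patent.

class DemoDevices:
    """Minimal stand-in for the VR system's devices, for demonstration only."""
    def open_and_initialize(self): pass                       # step 400
    def create_standard_objects(self): return []              # step 420
    def read_head_position(self): return (0.0, 0.0, 0.0)      # step 430
    def read_control_track(self): return {}                   # step 440
    def read_audio_and_input(self): return (b"", None)        # step 450
    def update_objects(self, world, *inputs): return world    # step 460
    def draw(self, world): pass                               # step 470
    def escape_pressed(self): return False                    # step 480

def run_vr_loop(dev, max_frames):
    dev.open_and_initialize()                    # 400: open and initialize devices
    frame = 0                                    # 410: initialize variables
    world = dev.create_standard_objects()        # 420: define and create standard objects
    while True:
        head = dev.read_head_position()          # 430: read head position
        track = dev.read_control_track()         # 440: read control track information
        audio, user = dev.read_audio_and_input() # 450: read digitized audio and input
        world = dev.update_objects(world, head, track, audio, user)  # 460
        dev.draw(world)                          # 470: call drawing routine
        frame += 1
        if dev.escape_pressed() or frame >= max_frames:  # 480: escape ends the loop
            return frame                         # 490: end

frames_run = run_vr_loop(DemoDevices(), max_frames=3)
```

The `max_frames` guard is added only so the demonstration terminates; the flowchart itself loops until the escape key is depressed.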
`
`
`
`
`
`
`
`
`
`
U.S. Patent          Apr. 30, 1996          Sheet 7 of 7          5,513,129

[FIG. 11: representation of a typical single-eye image as displayed on
display 260.]
`
`
`
`METHOD AND SYSTEM FOR
`CONTROLLING COMPUTER-GENERATED
`VIRTUAL ENVIRONMENT IN RESPONSE
`TO AUDIO SIGNALS
`
`Field of the Invention
`
`The invention pertains to methods and apparatus for
`controlling a computer system in response to music signals,
`or in response to prerecorded control tracks corresponding to
audio signals (such as music signals). In preferred embodiments, the
invention pertains to methods and apparatus for
`creating and modifying, or otherwise controlling, computer-
`generated virtual environments (or displayed virtual objects
`in virtual environments) in response to music signals or in
`response to prerecorded control
`tracks corresponding to
`audio signals.
`
`BACKGROUND OF THE INVENTION
`
`The terms “virtual environment,” “virtual world,” and
`“virtual reality” are used interchangeably to describe a
`computer-simulated environment (intended to be immer-
`sive) which includes a graphic display (from a user’s first
`person perspective, in a form intended to be immersive to
the user), and optionally also sounds which simulate environmental
sounds. The abbreviation “VR” will sometimes
`be used herein to denote “virtual reality,” “virtual environ-
`ment,” or “virtual world”. A computer system programmed
`with software, and including peripheral devices, for produc-
`ing a virtual environment will sometimes be referred to
`herein as a VR system or VR processor.
`The graphic display generated by a VR system can be a
`two-dimensional (2D) or a three-dimensional (3D) display.
`Typically, a VR system includes an input device and user
`interface software which enable a user to interact with the
`scene being displayed, typically to simulate motion in the
`virtual environment or manipulation of displayed represen-
`tations of objects (“virtual objects”) in the virtual environ-
`ment. Typically, the illusion of immersion in a virtual reality
`system is strengthened by the use of head-tracking or some
`other such system which directs the computer to generate
`images along the area of viewing interest of the user.
`The present invention is a method and apparatus particu-
`larly useful for creating and/or controlling virtual environ-
`ments. A VR system which embodies the invention can
`rapidly and inexpensively create, animate, or otherwise
`control a wide variety of entertaining virtual environments
`and virtual objects in response to music or in response to
`prerecorded “control
`tracks” which correspond to audio
`signals (such as music).
`While currently being used in the research and scientific
`communities, VR systems are becoming less expensive and
`are poised to reach the consumer electronics market as
`entertainment devices.
`
`VR systems must generate a much greater amount of
`content data (image data and audio data simulating envi-
`ronmental appearance and sounds) than must be generated in
`most other electronic media. Whereas video game systems
`require complex scenes to be generated and themes to be
`programmed, such systems can easily limit the scope of the
`game content because they can easily constrain the player to
`move in a few simple directions (e.g., left and right) and
`need only produce images to be presented on flat screen
`monitors or on simple 3D field-sequential type monitors.
`
`
`In contrast, by their very nature, VR systems allow the
`user to look around and fly around in many different
`directions and positions. Even where the user is constrained
`to look only toward the left or the right, VR systems must
construct complete representations of 3D worlds. This complexity has made
it very difficult to generate virtual worlds
`for the consumer entertainment market in a quick fashion.
`In addition to the complexity of creating static 3D models
`for virtual worlds, it has also been difficult to control the
`dynamics of virtual worlds. VR systems to date are notori-
`ous for providing only very boring and nearly static envi-
`ronments. The few VR systems that
`include dynamic
`motions of the virtual world either base such motions on
`physical laws (such as gravity) or base the motions on
`corresponding motions produced by human users (such as
`motion of the fingers of a user wearing a conventional
`“glove” input device).
The present invention overcomes the limitations of conventional VR
systems by providing an efficient way to generate content data (i.e.,
animated image data and audio
`data) to fill or populate a virtual environment in a choreo-
`graphed response to input music signals.
`There has long been an interest in the virtual reality field
`with respect to the possibility of virtual musical instruments
`and the creation of new and novel instruments within a
`virtual world. The present invention is a radical shift from
`previous attempts to combine music and virtual environ-
`ments.
`
`Conventional efforts to integrate music with virtual envi-
`ronments have, to date, all been directed toward creation of
`music from a virtual environment. The musical expression
`of the user has been treated as an urge seeking to be brought
`forth, and virtual environments have been seen as vehicles
`for the user to perform music or dance without having to
learn special physical skills. Much effort has been made to
`make sounds appear to be coming from virtual objects in the
`virtual environment. This has been done by running audio
`into the VR system and then convolving the audio in such a
`way as to make it appear to come from a certain place in the
`virtual environment.
`
`For example, at the NASA Ames View Lab, Scott Fisher,
`Rick Jacoby, and others explored virtual environments. One
`aspect of the research was the integration of audio into the
`virtual experience. This included the use of audio cues for
`such purposes as telling one if one bumped into a virtual
`object, but there was no tactile feedback for such events. The
`research pushed into the more artistic realm of creation of
`music in the context of a virtual world.
`Mark Bolas and Phil Stone created the Virtual Theremin
`and virtual drum kit. In this system, the user wore a glove
`and a hand tracker and moved the gloved hand to manipulate
`virtual objects which were in turn linked to various synthe-
`sizer parameters. Thus, by manipulating virtual objects (as
taught, for example, by U.S. Pat. No. 4,988,981, issued Jan. 29,
`1991), sounds of different qualities could be created. A
`skilled user could create modern sounding musical inter-
`ludes. These ideas have been carried forth by people such as
Jaron Lanier who has given a number of public performances in which he
manipulates virtual objects to create a
`musical performance. Research and exploration along these
`lines is expected to continue (the virtual “air guitar” and the
`like will probably be developed). In all VR systems of this
`type, manipulation of a virtual object causes the sound or
`music to change.
`Currently, virtual worlds are created by describing a
`simulation and a number of objects. The interaction of the
`
`
`objects is described in some form of simulation language or
`graphical description. Traditionally, the control and creation
`of the objects is driven by “world building” software. Once
`a virtual world has been created, a limited number of its
`parameters may be manipulated by the user from “inside”
`the virtual world. One example of how these databases are
`created is described in PCT International Patent Application
WO 92/09948, by VPL Research Inc. As is evident from WO 92/09948, it has
been difficult to define animation for all or even some of the virtual
objects in a virtual world. Until the present invention,
`it had not been proposed to interface to nodes in a database
`defining a virtual environment, and to manipulate such
`nodes, on the basis of music.
Conventional VR systems and music have thus far been
`used together in ways which have the following disadvan-
`tages:
`(a) a VR system has been used as a virtual musical
`instrument, so that the user must “play” the virtual
`instrument (by manipulating an input device) to hear
`anything. This means that the system creates music,
`and that the system’s musical output is limited by the
`user’s ability to “play” the “instrument;”
`(b) VR systems that have given sounds to virtual objects
`(e.g., the system displays a virtual kitchen sink and
`produces a “drip-drip” sound which seems to come
from the sink’s location) have required that the sounds be generated by
signals produced within the VR system in response to user manipulation of
an input device
`or internal programs, which signals are then interpreted
`by a synthesizer. The sounds produced by the synthe-
`sizer are thus cued from the VR system in response to
`manipulation of an input device (which manipulation
may, for example, cause a user to “move” into a
`position to view or otherwise interact with a virtual
`kitchen sink from which sounds will then seem to
`emanate). Thus, these VR systems have depended on
`user manipulation of an input device to control the
`appearance or activities of objects in a virtual environ-
`ment, to cause the VR system to cue production of
`sound events; and
`
`(c) VR systems have played musical scores as background
`music for the virtual environment.
`
`Basically, the paradigm to date has been to create systems
`that have (virtual) object-driven sounds. This invention
`reverses the paradigm to create a system which has musi-
`cally-driven objects.
`One VR system has been developed in which a VR
`processor is programmed to perform simple operations to
modify a virtual environment in response to voice commands. This VR
system, developed at the NASA Ames View Lab during the years 1988-1989,
was capable of displaying
`a virtual object, or terminating the display of a virtual object,
`in response to a voice command from a human user. How-
`ever,
`the system did not produce, modify, or otherwise
`control a virtual environment in response to music, or in
`response to a prerecorded control track corresponding to an
`audio signal.
`Outside the VR field, many attempts have been made to
`produce devices which provide users with visual light effects
`based on an audio signal, such as music. However, these
`systems have been disappointing to watch (principally
`because the light shows are two-dimensional and are not
`obviously correlated with the audio input), and have typi-
`cally met with disappointment when marketed.
`An example of a conventional apparatus for producing
`visual light effects based on audio signals is described in
U.S. Pat. No. 4,081,829 (issued Mar. 28, 1978). This apparatus controls
the display of two-dimensional rings or solid
`shapes on the screen of a television receiver, in response to
`audio input signals. However, only a limited set of two-
`dimensional shapes can be displayed and only limited
`changes in their shape or color can be accomplished in
`response to the audio input.
Another example of a conventional apparatus for producing visual light
effects in response to audio signals is described in U.S. Pat. No.
4,257,062 (issued Mar. 17, 1981).
`This apparatus controls a set of lamps which are mounted in
`eyewear to be worn by the user, by switching individual ones
`of the lamps on and off in response to music. Peak levels of
`specific frequency bands of the music are detected and
employed to switch on or off different ones of the lamps.
Another system for producing visual effects in response to
`audio signals has been described in the Jan. 1993 issue of
`NewMedia magazine (at page 18) as a system which
`includes a Silicon Graphics Iris Indigo workstation, and
`which alters the appearance of colored visual representations
`of sound waves (displayed on a large screen in a concert
`hall) in response to crowd noise (picked up by a microphone
`during a concert) and live music in MIDI format (generated
by musicians during the concert) supplied to the workstation.
`
It is believed that prerecorded control tracks (which correspond to
prerecorded audio such as music) have not
`been employed to control operation of a computer system,
`such as to control generation of a virtual environment by a
`VR computer system. It is also believed that control signals
`have not been extracted from music for use in controlling
`generation of a virtual environment by a VR system (e.g., by
`populating the virtual environment with animated virtual
`objects which move in response to the music).
`
`SUMMARY OF THE INVENTION
`
`In a preferred embodiment, the invention is a computer
`system and computer-implemented method for the creation
`and control of a virtual world in response to music signals
and/or prerecorded control
`tracks corresponding to the
`music signals. The system includes means for interfacing
`between the computer software which controls production of
`the virtual world, and live or prerecorded music (and/or
`prerecorded control tracks). The invention transcends tradi-
`tional use of VR as a musical instrument, and enables a VR
`system to be employed as a virtual stage driven by music.
`In another class of embodiments, the invention controls
`operation of a computer system (which need not be a VR
`system) in response to one or more prerecorded control
`tracks corresponding to audio signals, or in response to both
`music signals and one or more such prerecorded control
`tracks.
`
`The component of the inventive system which generates
control signals from input music (and/or prerecorded control
`tracks and/or human generated input signals), or which
`sends prerecorded control tracks in appropriate format to a
`VR system or other processor, will sometimes be referred to
`herein as an “Acoustic Etch” system or an “Acoustic Etch.”
`In preferred embodiments, the invention employs music
`to manipulate or control a virtual environment. This can be
`accomplished in several ways. Since music cannot directly
`interact with the virtual environment, the Acoustic Etch
`receives music (in some electronic, acoustic, or optical
`form) and generates control signals therefrom which are
`used by a VR system to influence activity in the virtual
`world.
`
`
`The control signals derived from the music may be
`extracted from the music directly. For example, the Acoustic
`Etch can employ a simple algorithm (of the same type used
`by well known graphic equalizers) to extract a rhythm signal
`indicative of the beat of some frequency band of the music
`(e.g. a band representing drums), or of some other parameter
`of a frequency band of the music. The rhythm signal is sent
`to the VR system which in turn generates control signals
`therefrom to control the rhythm of a virtual dancer (or some
`other moving virtual object).
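The graphic-equalizer-style extraction described above can be sketched as follows. This is an illustrative sketch only, not the patent's disclosed implementation: the band-pass filtering stage is omitted, a windowed-energy threshold stands in for beat detection, and all names and values are invented.

```python
# Illustrative sketch of deriving a rhythm signal from a music signal,
# in the spirit of the graphic-equalizer-style analysis described in the
# text. A real analyzer would first band-pass the signal (e.g., to isolate
# drums); that stage is omitted here. All names are hypothetical.

def band_energy(samples, window):
    """Mean squared amplitude over consecutive windows of `window` samples."""
    energies = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        energies.append(sum(x * x for x in chunk) / window)
    return energies

def beat_pulses(energies, threshold):
    """Emit 1 where windowed energy first crosses above the threshold
    (a crude beat/rhythm signal usable to drive a virtual dancer)."""
    pulses = []
    above = False
    for e in energies:
        pulses.append(1 if (e > threshold and not above) else 0)
        above = e > threshold
    return pulses

# Toy input: two loud bursts separated by silence.
signal = [0.0] * 8 + [1.0] * 8 + [0.0] * 8 + [1.0] * 8
pulses = beat_pulses(band_energy(signal, window=4), threshold=0.5)
```

Each pulse could then be forwarded to the VR system as a control signal, e.g. to advance the pose of a virtual dancer.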
`As an alternative (or in addition) to extracting signals
`from music itself for processing by a VR system,
`the
`invention can supply to the VR system one or more prere-
`corded control tracks corresponding to the music, or can
`generate control signals from prerecorded control tracks and
`then supply such control signals to the VR system for
`processing. For example, control tracks can be prerecorded
`along with left and right tracks of a stereo music signal. The
`prerecorded control tracks, left stereo track, and right stereo
`track, can then be played back (simultaneously or with
`selected delays between them) and received in parallel by
`the VR system. The control tracks can be generated auto-
`matically (e.g., by electronic signal processing circuitry) in
`response to a music signal and then recorded, or can be
`generated in response to manually asserted commands from
`a person (while the person listens to such music signal) and
`then recorded.
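The parallel playback of prerecorded control tracks with the left and right stereo tracks can be sketched as a simple multiplexing scheme. The per-frame layout (left sample, right sample, control event) is invented for illustration and is not specified by the patent.

```python
# Illustrative sketch of delivering a prerecorded control track alongside
# stereo audio: the three tracks are carried frame by frame in parallel
# and separated again by the consumer. The frame layout is hypothetical.

def mux(left, right, control):
    """Interleave per-frame samples as (left, right, control-event-or-None)."""
    return list(zip(left, right, control))

def demux(frames):
    """Split interleaved frames back into the three parallel tracks."""
    left = [f[0] for f in frames]
    right = [f[1] for f in frames]
    control = [f[2] for f in frames]
    return left, right, control

left = [0.1, 0.2, 0.3]
right = [0.1, 0.0, -0.1]
control = [None, "dancer_step", None]   # hypothetical control events
l2, r2, c2 = demux(mux(left, right, control))
```

As the text notes, the control track need not share the audio's medium; the same demultiplexing idea applies whenever the streams are resynchronized at playback.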
Prerecorded control tracks can be indicative of more sophisticated
analysis of a corresponding music signal than
`could be conveniently performed by some contemplated
`(e.g., inexpensive) VR system embodiments of the inven-
`tion. The placement and rhythm of dancers could be encoded
`in prerecorded control tracks, for example.
The use of prerecorded control tracks has several advantages and
features, including the following:
`(a) an entire song can be choreographed and prerecorded
with a control track (for example, indicative of placement and rhythm of
dancers), so that the control track
`forms part of the prerecorded choreographed musical
`work;
`
`(b) the control track can include higher level information,
`such as pictures of a dancer or other performer, which
`can be used as source data by the VR system to display
`images of the performer in the virtual environment;
(c) the medium for the control track need not be the same as
`that of the music. For example,
`the music may be
`recorded on a compact disk (CD) while the control
`track is recorded on a computer game cartridge or other
`medium;
`(d) synchronization of the control track and the music can
`be accomplished under control of the VR system,
`which could use the control track to synchronize with
`the music, or vice versa;
`
`(e) the control track can be encoded (or processed) in a
`way which accounts for the “delay time” required for
`the VR system to use the information coming from the
`control track. This will improve the apparent synchro-
`nization between the music and the graphics data
`output from the VR system, even when the VR system
`requires a long time to “draw” a particular frame of an
`animated virtual world; and
`(f) a prerecorded control track can eliminate the need for
`some embodiments of the invention to include means
`for automatically decoding musical expression (the
`automatic decoding of musical expression is poorly
`understood).
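Item (e)'s delay compensation can be sketched as shifting each prerecorded control event earlier by the expected rendering lag, so the drawn result lands in step with the music. The event format and the lag value are illustrative assumptions, not part of the disclosure.

```python
# Illustrative sketch of item (e): a control track encoded with a lead
# time so that the VR system's "delay time" (e.g., the time to draw a
# frame) does not break audio/visual synchronization. The event format
# (time_seconds, name) and the lag value are hypothetical.

def advance_events(events, system_lag):
    """Shift each control event earlier by the expected rendering lag,
    clamping at time zero (an event cannot precede the recording)."""
    return [(max(0.0, t - system_lag), name) for t, name in events]

events = [(1.0, "spawn_dancer"), (2.5, "change_color")]
compensated = advance_events(events, system_lag=0.25)
```

This is the same phase shift depicted in FIG. 3, where the control track leads the music signal by the expected processing delay.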
`
`
`For example, an operator can record a control track which
`is emotionally linked with a song. The VR system could then
`easily convert the control track into a variety of control
`signals, and can produce more repeatable and interesting
`results than could be achieved by processing the music
`directly (in the absence of the control track).
`The major disadvantage of using a prerecorded control
`track is that the control track must be generated and recorded
`in advance, and then played back in some way. It must be
`delivered in conjunction with the music, and the easiest way
`to do this is on the same physical recording medium.
`An advantage of embodiments of the invention which
`directly process music (rather than processing a prerecorded
`control track) is that the music and the VR control signals
`generated therefrom are more independent than are a control
`track and the VR control signals generated therefrom (and
`can be related in any of a variety of ways). In embodiments
`which directly process music,
`the visual experience and
`emotional coupling between the VR and the music is looser,
`since the interpretation is generically related to musical
`signals and their processing. However, specific processing
algorithms can be used by the VR system for specific songs, thus
tailoring the algorithm to the music.
`In essence, preferred embodiments of the invention use
music to create a “track” of distilled music which is in a form
`
`usable by a VR system. The interpretation of the information
`is still dependent on the VR system, or the particular VR
`software being run by a computer system. The same “raw”
`music or control track can be interpreted differently by
`different VR systems (or VR software programs) in the sense
`that different VR systems (or programs) can generate dif-
`ferent sets of control signals in response to a single raw input
`signal. Alternatively, the same VR system (or program) can
`interpret the same “raw” music or control track differently at
`different times. The control track can be used to program the
`VR system’s response and thus tailor the system to a specific
`song.
`
`OBJECTS AND ADVANTAGES
`
`Accordingly, several objects and advantages of various
`embodiments of the present invention are:
`to provide an apparatus which extracts information from
`music (or other audio) for the control and manipulation
`of objects within a virtual environment;
`to provide an apparatus which uses a control track pre-
`recorded along with audio (music, in preferred embodi-
`ments) for the control and manipulation of objects
`within a virtual environment;
`
`to provide a VR system which delays audio (in response
`to which control signals are generated) in order to
`compensate for the lag introduced by other components
`of the VR system;
`to provide a virtual experience in which music effectively
`drives the display of an animated graphical scene;
`to provide a mechanism by which music is used to control
`and influence a virtual environment in such a way as to
relieve the database which describes the virtual environment from having
to define all the motions of the
`objects in the virtual environment;
`to provide a control track for the influence and control of
`a virtual environment in which the control track is
`
`created during or following the music recording and
production process when individual tracks (of a multitrack musical work)
that are used for a particular mix
`are available before being mixed down; and
`
`
`to provide a control track which can contain information
(such as images of a performer’s face, for example)
`other than information extracted from corresponding
music.
`Further objects and advantages are to provide for the rapid
`creation and animation of a virtual environment from music
`
`which already has a high level of production quality.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`FIG. 1 is a diagram of a preferred embodiment of the
`inventive system, in which a music source is interfaced to a
`VR system by an Acoustic Etch system. The blocks may or
`may not represent physically distinct objects (several of the
`blocks could be implemented in a single device).
FIG. 2 is a diagram of a variation on the FIG. 1 embodiment, in which the
Acoustic Etch system receives or contains prerecorded control tracks, and
music corresponding to
`the control tracks is used to cue output of the stored control
`tracks to the VR processor.
`FIG. 3 is a graph of a control track and a corresponding
`music signal, where the control track is phase shifted relative
`to the music signal by a degree adequate to compensate for
`delays that are expected to be introduced, in other parts of
`the system, during processing initiated in response to the
`control track.
`
`FIG. 4 is a block diagram of a variation on the Acoustic
`Etch apparatus employed in the FIG. 1 system.
`FIG. 5 is a block diagram of a system for creating an audio
`tape with control tracks for playback by the system shown
`in FIG. 6.
`
`FIG. 6 is a block diagram of a system for playback of the
`audio tape produced by the FIG. 5 system.
`FIG. 7 is a schematic diagram of a circuit suitable for
`implementing any of signal conditioning blocks 120A and
`120B.
`
`FIG. 8 is a schematic diagram of a circuit suitable for
implementing either of tape IF converters 140X or 140Y (of
`FIG. 5).
`FIG. 9 is a schematic diagram of a circuit suitable for
implementing either of tape IF converters 220X or 220Y (of
`FIG. 6).
`FIG. 10 is a block level description of the software which
`is preferably run on VR system 250 of FIG. 6.
`FIG. 11 is a representation of a typical single eye image
as displayed on display 260 of FIG. 6.
`
`DETAILED DESCRIPTION OF THE
`PREFERRED EMBODIMENTS
`
`The term “audio signal” is used herein in a broad sense to
`include not only sound waves but also electrical, optical, or
`other signals representing sound waves (such as the electri-
`cal output of a transducer in response to sound waves). The
`terms “music signal” and “music” are used interchangeably
`herein in a broad sense to include not only sound waves that
`are recognizable by a human listener as music, but also
`electrical, optical, or other signals representing such sound
`waves (such as the electrical output of a transducer in
`response to the sound waves). Typically, a system embody-
`ing the invention will receive and process music signals in
`the form of digitized electrical signals.
`FIG. 1 is a diagram of a preferred embodiment of the
`inventive system. In FIG. 1, music source 1 is interfaced to
VR processor 7 by Acoustic Etch system 3. VR processor 7
`
`is a computer programmed with software for implementing
`a virtual environment. Specifically, VR processor 7 can
`cause image data representing a virtual environment to be
`displayed on display device 8 and can cause left and right
`channels of audio signals (simulating sounds in the virtual
`environment) to be played back to a user wearing head-
`phones 10 (which include left and right speakers). Display
`device 8 can be any of a variety of devices, such as a device
`which mounts on the head of a human user (preferably
`including left and right monitors for providing a stereo-
`scopic display to the user), or a single flat screen display
`which outputs either a non-stereoscopic display or a stereo—
`scopic display. Head-tracking means 11 (included in both the
`FIG. 1 and FIG. 2 embodiments) is provided for optionally
`providing input (to processor 7) indicative of the position of
`the head of a human user wearing a head-mounted embodi-
`ment of display device 8.
`Processor 7 is a computer programmed with software
`enabling a human user to interact with the virtual environ-
`ment by manipulating input device 9, whose output
`is
`supplied to processor 7. In one embodiment, input device 9
`includes a glove and sensors mounted to the glove for
`detecting movements of a user’s hand within the glove. In
`another embodiment, input device 9 includes a frame and
`sensors for producing output signals indicative of forces or
`torques exerted on the frame by a user. The frame is
preferably mounted to display device 8 (or to a base supporting the
display device) symmetrically with respect to an axis of symmetry of the
display device, with limited freedom
`to move relative thereto, and the sensors are preferably
`mounted at the ends of the limited range of motion of the
`frame.
`
An analog-to-digital conversion circuit 4 within Acoustic Etch unit 3
receives and digitizes a music signal from source
`1. The music signal is optionally accompanied by one or
`more prerecorded control tracks corresponding to the music
`signal, which control tracks are played back with the music
`signal. Analyzer 5 within Acoustic Etch unit 3 receives the
`digitized output of circuit 4, and generates control signals by
`processing the music signal (or both the music signal and the
`control tracks). The control signals output from analyzer 5
`are supplied through interface 6 to VR processor 7, for use
`within processor 7 for controlling generation of the virtual
`environment. One or more of the control tracks (or both the
`music signal and one or more control tracks, or the music
`signal alone) can be supplied directly to VR processor 7, to
`enable processor 7 to cause headphones 10 to play the music
`signals, and to control generation of the virtual environment
`in response to the control tracks or music, such as if the
`functions of the Acoustic Etch unit are embodied in the VR
`processor.
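The handoff described for FIG. 1, from analyzer 5 through interface 6 to VR processor 7, can be sketched as a mapping from control values to virtual-object properties. The particular mappings below (beat advances a dancer's pose, loudness scales an object) are invented for illustration and are not the patent's disclosed mappings.

```python
# Illustrative sketch of the analyzer-to-VR-processor handoff for FIG. 1:
# analyzer 5 turns digitized music into control values, and the VR
# processor maps them onto properties of virtual objects. All property
# names and mappings here are hypothetical.

def apply_controls(objects, controls):
    """Return an updated copy of virtual-object properties, driven by a
    dict of control values received over the interface."""
    updated = dict(objects)
    if controls.get("beat"):
        # A beat pulse advances the dancer through a 4-pose cycle.
        updated["dancer_pose"] = (objects.get("dancer_pose", 0) + 1) % 4
    if "loudness" in controls:
        # Loudness scales a displayed object around its base size.
        updated["sphere_scale"] = 1.0 + controls["loudness"]
    return updated

world = {"dancer_pose": 0, "sphere_scale": 1.0}
world = apply_controls(world, {"beat": 1, "loudness": 0.5})
```

The same mapping function could consume values decoded from a prerecorded control track instead of values derived from the music directly, which is precisely the substitution the specification contemplates.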
`
In the FIG. 1 system, the control track is optionally prerecorded on the
same medium as the mu