Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 1 of 34

EXHIBIT A
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 2 of 34

US009691429B2

(12) United States Patent
     Leiberman et al.

(10) Patent No.:     US 9,691,429 B2
(45) Date of Patent: Jun. 27, 2017
(54) SYSTEMS AND METHODS FOR CREATING MUSIC VIDEOS SYNCHRONIZED WITH AN AUDIO TRACK

(71) Applicant: Mibblio, Inc., Brooklyn, NY (US)

(72) Inventors: David Leiberman, Brooklyn, NY (US); Samuel Rubin, Brooklyn, NY (US)

(73) Assignee: MIBBLIO, INC., Brooklyn, NY (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 43 days.

(21) Appl. No.: 14/708,805

(22) Filed: May 11, 2015
(65) Prior Publication Data
     US 2016/0336039 A1     Nov. 17, 2016

(51) Int. Cl.
     H04N 5/93      (2006.01)
     G11B 27/031    (2006.01)
     H04N 9/82      (2006.01)
     G11B 27/34     (2006.01)
     H04N 21/43     (2011.01)
     H04N 21/8547   (2011.01)

(52) U.S. Cl.
     CPC ............ G11B 27/031 (2013.01); G11B 27/34 (2013.01); H04N 9/8211 (2013.01); H04N 21/4307 (2013.01); H04N 21/8547 (2013.01)

(58) Field of Classification Search
     CPC ... G11B 2220/90; G11B 27/034; G11B 27/34; G11B 27/024; G11B 27/036
     USPC ........................................ 386/285
     See application file for complete search history.
(56)            References Cited

        U.S. PATENT DOCUMENTS

5,265,248 A     11/1993  Moulios et al.
7,026,536 B2     4/2006  Lu et al.
7,027,124 B2     4/2006  Foote et al.
[entry illegible in scan]  Thomson et al.
8,046,688 B2    10/2011  Adams et al.
8,244,103 B1*    8/2012  Shore ............ [illegible]
[entry illegible in scan] ............ G10H 1/40
                                           84/612
8,896,609 B2    11/2014  Xu et al.
        (Continued)
        OTHER PUBLICATIONS

MusicStory: A Personalized Music Video Creator; David A. Shamma, Bryan Pardo, Kristian J. Hammond; Proceedings of the 13th Annual ACM International Conference on Multimedia; ACM, 2005.

        (Continued)

Primary Examiner — William Tran
(74) Attorney, Agent, or Firm — Robert W. Morris; Eckert Seamans Cherin & Mellott, LLC
(57)            ABSTRACT

Systems and methods for creating music videos synchronized with an audio track are provided. In some embodiments, an audio track may be selected and one or more video takes may be captured while the selected audio track plays. The video takes may be analyzed while they are captured to determine, for example, a video intensity level and/or a number of faces recognized within each take. By capturing the video takes with the audio track, the video takes may be synchronized to the audio track so that they are in time with one another. Portions or subsets of the video takes may be paired or matched with certain sections of the audio track based on, for example, the audio characteristics of a particular section and video characteristics of a particular take.

        19 Claims, 17 Drawing Sheets
[Front-page drawing: timeline 700 of an audio track divided into sections (710-790), each section labeled with a paired video take (TAKE: 1, 2, or 3) and a time range running from 00:00 to 03:36; portion numerals 712-794.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 3 of 34

US 9,691,429 B2
Page 2
(56)            References Cited

        U.S. PATENT DOCUMENTS

2002/0035475 A1*   3/2002  Yoda ................. G10L 15/24
                                                      704/270
2004/0060070 A1    3/2004  Mizushima
2005/0143915 A1*   6/2005  Odagawa ............ G08G 1/0962
                                                      701/443
2005/0190199 A1    9/2005  Brown et al.
2006/0288849 A1*  12/2006  Peeters ............ G10H 1/0008
                                                       84/616
2008/0037953 A1*   2/2008  Kawamura ............ H04N 5/783
                                                      386/343
2008/0055469 A1*   3/2008  Miyasaka ............ G10H 1/368
                                                      348/521
2009/0164034 A1    6/2009  Cohen et al.
2010/0290538 A1   11/2010  Xu et al.
2012/0316660 A1   12/2012  Luo et al.
2013/0330062 A1*  12/2013  Meikle ............... H04N 9/87
                                                      386/285
2014/0160250 A1*   6/2014  Pomerantz ......... H04N 5/23229
                                                       348/47
2014/0317480 A1   10/2014  Chau et al.
2014/0320697 A1   10/2014  Lammers et al.
2015/0050009 A1*   2/2015  Svendsen ........... G11B 27/036
                                                      386/280

        OTHER PUBLICATIONS

Creating Music Videos Using Automatic Media Analysis; Jonathan Foote, Matthew Cooper, and Andreas Girgensohn; Proceedings of the 10th International Conference on Multimedia; ACM, 2002.
MuViSync: Realtime Music Video Alignment; R. Macrae, X. Anguera, N. Oliver; 2010 IEEE International Conference on Multimedia and Expo (ICME), 534-9, 2010; ISBN-13: 978-1-4244-7491-2; DOI: 10.1109/ICME.2010.5583863; Conference: 2010 IEEE International Conference on Multimedia and Expo (ICME), Jul. 19-23, 2010, Suntec City, Singapore; Publisher: IEEE, Piscataway, NJ, US.
Cati Dance: Self-Edited, Self-Synchronized Music Video; Tristan Jehan, Michael Lew, and Cati Vaucelle; ACM SIGGRAPH 2003 Sketches & Applications; ACM, 2003.
Dubsmash by Mobile Motion GmbH; https://itunes.apple.com/app/dubsmash/id918820076 retrieved on May 11, 2015.
Video Star by Frontier Design Group; https://itunes.apple.com/us/app/video-star/id438596432?mt=8 retrieved on May 11, 2015.

* cited by examiner
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 4 of 34

U.S. Patent     Jun. 27, 2017     Sheet 1 of 17     US 9,691,429 B2

[FIG. 1: drawing sheet; reference numerals 116 and 114 visible, figure otherwise illegible in scan.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 5 of 34

U.S. Patent     Jun. 27, 2017     Sheet 2 of 17     US 9,691,429 B2

[FIG. 2: user interface 200 on a user device listing songs ("SONG 1" ... "SONG 7") as selectable entries 210.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 6 of 34

U.S. Patent     Jun. 27, 2017     Sheet 3 of 17     US 9,691,429 B2

[Drawing: reference numerals 302d, 302e, 304, 306 visible; figure otherwise illegible in scan.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 7 of 34

U.S. Patent     Jun. 27, 2017     Sheet 4 of 17     US 9,691,429 B2

[FIG. 3B: "SELECT SONG DURATION" interface 360; reference numerals 362 and 364.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 8 of 34

U.S. Patent     Jun. 27, 2017     Sheet 5 of 17     US 9,691,429 B2

[Drawing: "Projects" screen; reference numeral 406 visible, figure otherwise illegible in scan.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 9 of 34

U.S. Patent     Jun. 27, 2017     Sheet 6 of 17     US 9,691,429 B2

[FIG. 5B: drawing sheet; figure content otherwise illegible in scan.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 10 of 34

U.S. Patent     Jun. 27, 2017     Sheet 7 of 17     US 9,691,429 B2

[Drawing: figure content illegible in scan.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 11 of 34

U.S. Patent     Jun. 27, 2017     Sheet 8 of 17     US 9,691,429 B2

[Drawing: timeline of video takes paired with audio-track times (rotated labels such as "TAKE 1 ... TIME: 01:00"); largely illegible in scan.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 12 of 34

U.S. Patent     Jun. 27, 2017     Sheet 9 of 17     US 9,691,429 B2

[FIG. 7B: drawing showing "Take 3" and a time marking of 3:30:00; figure otherwise illegible in scan.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 13 of 34

U.S. Patent     Jun. 27, 2017     Sheet 10 of 17     US 9,691,429 B2

[FIG. 8A flowchart (reference numeral 802 visible): select an audio track; select a duration for the audio track; capture video takes; when finished capturing video takes, create a music video featuring the audio track and at least a subset of the video takes.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 14 of 34

U.S. Patent     Jun. 27, 2017     Sheet 11 of 17     US 9,691,429 B2

[FIG. 8B flowchart (process 850; step numerals 852, 856, 858 visible): analyze the audio track and determine its audio intensity levels; analyze the video takes and determine their video intensity levels; pair sections of the audio track with portions of the video takes based on the determined audio intensity levels and determined video intensity levels.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 15 of 34

U.S. Patent     Jun. 27, 2017     Sheet 12 of 17     US 9,691,429 B2

[Drawing: "CAPTURE VIDEO" interface 900; reference numerals 904 and 906 visible.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 16 of 34

U.S. Patent     Jun. 27, 2017     Sheet 13 of 17     US 9,691,429 B2

[FIG. 9D: audio waveform (960, 962) with portions of video takes 970a-970h paired to sections; reference numerals 944 and 946 visible.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 17 of 34

U.S. Patent     Jun. 27, 2017     Sheet 14 of 17     US 9,691,429 B2

[FIG. 9E flowchart (process 980): select an audio track (982); capture a plurality of video takes; determine a number of faces within each video take while capturing the video takes (986); pair at least a subset of the plurality of captured video takes to the selected audio track based on the determined number of faces within the video takes (988); create a music video including the subset of video takes synchronized to the selected audio track (990).]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 18 of 34

U.S. Patent     Jun. 27, 2017     Sheet 15 of 17     US 9,691,429 B2

[FIGS. 10A and 10D: vocal/melodic indicator diagrams; reference numerals 1002 and 1062 visible, figures otherwise illegible in scan.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 19 of 34

U.S. Patent     Jun. 27, 2017     Sheet 16 of 17     US 9,691,429 B2

[FIG. 10E flowchart (process 1080): receive audio input via an audio input interface (1082); record the received audio on a user device (1084); determine that the recorded audio includes at least one of a vocal phrase and a melodic phrase (1086); pair at least a portion of at least one of a plurality of captured video takes to the recorded audio, based on the determined vocal and/or melodic phrase, while the video takes are captured; generate a music video including the recorded audio track and at least the portion of the at least one of the plurality of captured video takes synchronized to the recorded audio track.]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 20 of 34

U.S. Patent     Jun. 27, 2017     Sheet 17 of 17     US 9,691,429 B2

[FIG. 11: "RECORD AN AUDIO TRACK" interface showing "TEMPO: 90 bpm" and "Key: A minor".]
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 21 of 34
US 9,691,429 B2

1

SYSTEMS AND METHODS FOR CREATING MUSIC VIDEOS SYNCHRONIZED WITH AN AUDIO TRACK

FIELD OF THE INVENTION

Various embodiments described herein generally relate to systems and methods for creating music videos. In particular, music videos may be created including portions of one or more video takes that are automatically synchronized to an audio track.
BACKGROUND OF THE INVENTION

Music videos, whether they involve famous musicians or independent artists, are fun and creative mediums for sharing one's music and musical style with the world. While most music videos, at least historically, were created in support of established musicians for marketing purposes, the enhanced capabilities of mobile devices allow almost any individual to record and edit music, as well as capture video, all using one device. Individuals, whether an experienced musician or a novice, young or old, now have the ability to create their own music videos using such devices.
Although music videos often include an individual's own music, it is also possible for music videos to be created based on an individual's favorite or a popular song. While there are presently some applications of this concept, most of these applications have several inherent drawbacks.
In one instance, music videos have been created where an audio track plays in the background while a single video is captured or recorded. This, however, creates an extremely poor quality music video, as there is no visual transition between various parts of the audio track. For example, a single video take may be used for the audio track's verse and chorus. This leads to extremely unprofessional looking music videos that, while potentially entertaining, are aesthetically inferior to professional quality music videos, which may use multiple video takes captured at one or more locations.
Another instance of music videos being created focuses on a linear application of a video and audio track. For example, a single video take may be captured and multiple end points may be applied to that video. However, this application is extremely limited in that it does not allow a user to use multiple videos, and, as such, does not allow the user to apply multiple end points to the multiple videos. Furthermore, in order to have different locations in the music video, a user would need to visit multiple locations in chronological order and capture video at each location. The editing of the video takes captured at each location would then only present the locations in the order that they were visited. This, as mentioned previously, creates a music video that is unprofessional in appearance, as professional music videos may have varying locations throughout the music video.
In another, somewhat similar, instance, music videos have been created where a user is required to capture video of another individual within a silhouette. A generic silhouette is overlaid on the user's display and, while the user records video, the individual being recorded must stay generally within the bounds of the silhouette. While this may expand the manipulative aspects of the video, the user is unfortunately constrained to be within one particular area, and does not have a free range of motion to fully capture any desired action. Furthermore, as the user is bound by the silhouette,
2

the ability to transition to different video takes for different portions of an audio track is limited, if at all possible.
In yet another instance, music videos have been created that include only a small portion or snippet of the audio track, with an individual capturing a single video for that portion of the audio track. For example, a user may select a song to form a music video for, and create a music video based on the song's verse or chorus. This may lead to short, dull, and unprofessional music videos, as the music video may become nothing more than a video clip for a small tidbit of a song.
Thus, in light of some of the aforementioned problems, it would be beneficial for there to be systems, methods, and non-transitory computer readable mediums that allow a user to create a professional style music video using portions of multiple video takes taken at different times and at different locations that are automatically synchronized to a selected audio track. Furthermore, it would be beneficial for there to be systems, methods, and non-transitory computer readable mediums that allow video takes to be paired with an audio track such that sections of the audio track having various intensity levels or dynamics are matched with suitable portions of the video takes.
SUMMARY OF THE INVENTION

This generally relates to systems, methods, and non-transitory computer readable mediums for creating music videos that are synchronized to an audio track.
In one exemplary embodiment, a method for creating a music video where an audio track is synchronized with a plurality of video takes is described. An audio track, such as a song, may be selected. For example, a user may select a song stored on their user device, in a music library on an external device, or on a music server. A plurality of video takes may also be captured using the user device. While the plurality of video takes are being captured, they may also be synchronized with the selected audio track. The synchronization allows for the captured video takes to be aesthetically and/or musically synchronized with an appropriate section or sections of the audio track. A music video may then be created including the audio track and at least a subset of the plurality of video takes that are already synchronized to the selected audio track. For example, portions of one or more captured video takes may be matched to certain sections of the audio track based on the audio track's audio intensity levels and/or a particular video take's video intensity.
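The patent discloses no source code, so the following is only a rough sketch of the intensity-matching idea described above. The `Section` and `TakePortion` structures, the normalized intensity fields, and the greedy closest-match rule are all hypothetical choices for illustration, not the patent's disclosed method:

```python
from dataclasses import dataclass

@dataclass
class Section:
    """A section of the selected audio track (hypothetical structure)."""
    start: float       # seconds into the track
    end: float
    intensity: float   # e.g. normalized RMS energy in [0, 1]

@dataclass
class TakePortion:
    """A portion of one captured video take (hypothetical structure)."""
    take_id: int
    intensity: float   # e.g. normalized frame-to-frame motion in [0, 1]

def pair_sections_to_takes(sections, portions):
    """Greedily pair each audio section with the unused take portion
    whose video intensity is closest to the section's audio intensity."""
    pairing = {}
    available = list(portions)
    for section in sections:
        best = min(available,
                   key=lambda p: abs(p.intensity - section.intensity))
        pairing[(section.start, section.end)] = best.take_id
        available.remove(best)
        if not available:  # more sections than portions: allow reuse
            available = list(portions)
    return pairing
```

Under this sketch, a high-energy chorus section would be paired with the take portion showing the most on-screen motion, and a quiet verse with a calmer take.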
In another exemplary embodiment, a user device including at least one audio input component, at least one image capturing component, memory, and at least one processor is described. The memory may store an audio track recorded using the at least one audio input component and a plurality of video takes captured by the at least one image capturing component. For example, a user may record an audio track using their user device's microphone as well as record video takes using one or more cameras resident on the user device. The at least one processor of the user device may then determine a vocal and/or melodic phrase within the recorded audio track, and synchronize at least a portion of one or more captured video takes to the recorded audio track based on the determined vocal and/or melodic phrase while the at least one of the plurality of video takes is captured.
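The patent does not specify how a vocal or melodic phrase is detected. As one naive stand-in, a phrase onset could be approximated by a jump in short-term energy over the recorded samples; the window length, threshold, and the energy heuristic itself are all assumptions made here for illustration:

```python
import math

def find_phrase_boundaries(samples, rate, window=0.5, threshold=2.0):
    """Mark a boundary wherever short-term energy jumps well above the
    running average, a crude proxy for the start of a vocal or melodic
    phrase. `window` is the analysis frame length in seconds."""
    hop = int(window * rate)
    # Root-mean-square energy of each non-overlapping frame.
    energies = [
        math.sqrt(sum(s * s for s in samples[i:i + hop]) / hop)
        for i in range(0, len(samples) - hop + 1, hop)
    ]
    boundaries = []
    for i in range(1, len(energies)):
        running_avg = max(sum(energies[:i]) / i, 1e-9)  # guard against silence
        if energies[i] > threshold * running_avg:
            boundaries.append(i * window)  # boundary time in seconds
    return boundaries
```

Boundaries found this way could then serve as candidate transition points between video takes.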
In yet another exemplary embodiment, another method for creating a music video is described. An audio track may be selected and a plurality of video takes may be captured. A number of faces within each video take of the captured video takes may be determined while the plurality of video takes are being captured. Also while the plurality of video
`

`

Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 22 of 34
US 9,691,429 B2

3

takes are being captured, at least a subset of the plurality of captured video takes may be synchronized to the selected audio track based on the number of faces determined to be within each video take. A music video may then be created including the selected audio track and at least the subset of the plurality of captured video takes synchronized to the selected audio track.
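The patent leaves the face detector itself unspecified (on a mobile device it might be any vision API). Assuming detection has already produced a face count per take, the pairing rule could be sketched as a nearest-count match; the function name, the per-section "target" face counts, and the tie-breaking behavior are all hypothetical:

```python
def pair_by_face_count(section_face_targets, take_face_counts):
    """For each audio section's desired on-screen face count, choose the
    take whose detected face count is closest (earliest take wins ties).

    section_face_targets: list of desired face counts, one per section.
    take_face_counts: dict mapping take_id -> faces detected in that take.
    Returns a list of take_ids, one per section.
    """
    pairing = []
    for target in section_face_targets:
        take_id = min(take_face_counts,
                      key=lambda t: abs(take_face_counts[t] - target))
        pairing.append(take_id)
    return pairing
```

For instance, a section intended as a group shot (high target count) would be matched to the take in which the most faces were recognized.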
In still yet another exemplary embodiment, a user device including memory for storing a plurality of video takes, at least one image capturing component, and at least one processor is described. The at least one processor is operable to receive a selection of an audio track from the plurality of audio tracks stored in memory. The audio track may then play and, while playing, at least one video take may be captured using the at least one image capturing component. The at least one captured video take may be synchronized to the selected audio track while the selected audio track plays. A music video may then be generated that includes the selected audio track and at least a subset of the at least one video take that is already synchronized to the selected audio track.
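A key point in the embodiments above is that takes captured while the track plays are time-aligned to the track by construction. A minimal sketch of that bookkeeping (the class and method names are invented for illustration; the patent describes no such API) might record each take's start offset into the playing track:

```python
import time

class TakeRecorder:
    """Tag each video take with its start offset into the playing audio
    track, so takes are time-aligned to the track by construction."""

    def __init__(self):
        self.track_start = None
        self.takes = []  # list of (take_id, offset_seconds)

    def start_track(self, now=None):
        # Called when audio playback begins; `now` is injectable for testing.
        self.track_start = time.monotonic() if now is None else now

    def start_take(self, take_id, now=None):
        # Called when a video take begins capturing; records how far
        # into the audio track the take starts.
        t = time.monotonic() if now is None else now
        self.takes.append((take_id, t - self.track_start))
```

A monotonic clock is used rather than wall-clock time so the offsets are unaffected by system clock adjustments.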
BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features of the present invention, its nature and various advantages will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings in which:
FIG. 1 is an illustrative block diagram of a user device in accordance with various embodiments;
FIG. 2 is an illustrative diagram of a user interface displayed on a user device in accordance with various embodiments;
FIGS. 3A and 3B are illustrative diagrams of user interfaces displayed on a user device for selecting an audio track in accordance with various embodiments;
FIG. 4 is an illustrative diagram of a user interface displayed on a user device presenting a selected audio track for a music video to be created in accordance with various embodiments;
FIGS. 5A and 5B are illustrative diagrams of various user interfaces displayed on a user device for capturing video takes for a music video to be created in accordance with various embodiments;
FIGS. 6A-C are illustrative diagrams of various user interfaces displaying video takes being captured by a user device for a music video in accordance with various embodiments;
FIGS. 7A and 7B are illustrative diagrams of a created music video including a plurality of captured video takes synchronized to a selected audio track in accordance with various embodiments;
FIGS. 8A and 8B are an illustrative flowchart of a process for creating music videos in accordance with various embodiments;
FIGS. 9A-C are illustrative diagrams of user interfaces including various video takes having a number of faces or images determined to be within the video takes in accordance with various embodiments;
FIG. 9D is an illustrative diagram of various sections of a selected audio track's waveform synchronized with portions of video takes based on the number of faces determined to be within each video take in accordance with various embodiments;
FIG. 9E is an illustrative flowchart of a process for synchronizing video takes to an audio track based on a
4

number of faces determined to be within the video takes in accordance with various embodiments;
FIGS. 10A-D are illustrative diagrams of various vocal and melodic indicators and phrases for use as transition points between video takes for a music video synchronized to an audio track in accordance with various embodiments;
FIG. 10E is an illustrative flowchart of a process for creating a music video including various video takes synchronized to a recorded audio track based on a determined vocal and/or melodic phrase within the audio track in accordance with various embodiments; and
FIG. 11 is an illustrative diagram of a user interface for recording an audio track to be used for creating a music video in accordance with various embodiments.
DETAILED DESCRIPTION OF THE INVENTION

The present invention may take form in various components and arrangements of components, and in various techniques, methods, or procedures and arrangements of steps. The referenced drawings are only for the purpose of illustrating embodiments, and are not to be construed as limiting the present invention. Various inventive features are described below that can each be used independently of one another or in combination with other features. Furthermore, in at least some embodiments, like reference numerals refer to like parts throughout.
FIG. 1 is an illustrative block diagram of a user device in accordance with various embodiments. User device 100, in some embodiments, may correspond to any electronic device or system. Various types of user devices include, but are not limited to, portable media players, cellular telephones or smart phones, pocket-sized personal computers, personal digital assistants ("PDAs"), desktop computers, laptop computers, tablet computers, and/or electronic accessory devices such as smart watches and bracelets. User device 100 may communicate with one or more additional user devices, networks, and/or servers. For example, user device 100 may send text messages to other user devices across a network, or user device 100 may access one or more websites located on a server.
User device 100, in some embodiments, may include one or more processors 102, memory 104, storage 106, communications circuitry 108, an input interface 110, and an output interface 118. In some embodiments, input interface 110 may include one or more cameras 112 or other image capturing components, one or more microphones 114 or other audio capturing components, and one or more external device inputs 116. Further, in some embodiments, output interface 118 may include display 120 and one or more speakers 122 or other audio output components. Persons of ordinary skill in the art will recognize that user device 100 may include any number of components, and one or more additional components or modules may be added or omitted without deviating from the scope of the present disclosure. Additionally, one or more components may be combined or separated, and multiple instances of various components are also possible; however, only one of each component is shown within user device 100 for simplicity.
Processor(s) 102 may include any suitable processing circuitry, such as one or more processors, capable of controlling the operations and functionality of user device 100. In some embodiments, processor(s) 102 may facilitate communications between various components within user device 100. For example, processor(s) 102 may cause output interface 118 to perform an associated output in response to
Case 4:20-cv-07572-JSW Document 54-1 Filed 08/25/21 Page 23 of 34
US 9,691,429 B2

5

one or more inputs being detected by input interface 110. Processor(s) 102 may run an operating system for user device 100, applications resident on user device 100, firmware applications, media applications, and/or any other type of application, or any combination thereof, functioning on, or in conjunction with, user device 100.
Memory 104 may include any suitable form of memory, such as cache memory, semi-permanent memory (e.g., RAM), or any other memory type, or any combination thereof. In some embodiments, memory 104 may be used in place of and/or in addition to an external memory or storage unit or device for storing data on user device 100.
Storage 106 may include one or more storage mediums. Various types of storage mediums include, but are not limited to, hard drives, solid state drives, flash memory, permanent memory (e.g., ROM), or any other storage type, or any combination thereof. Any form of data or content may be stored within storage 106, such as photographs, music files, videos, contact information, applications, documents, or any other file type, or any combination thereof.
In some embodiments, memory 104 and storage 106 may be combined into a single component. For example, a single memory component may include memory and storage functions. In other embodiments, multiple instances of memory 104 and/or storage 106 may be present; however, it is also possible for memory 104 and/or storage 106 to be external to user device 100. For example, one or more files may be stored remotely on an external hard drive or on a cloud storage provider. However, persons of ordinary skill in the art will recognize that the aforementioned scenarios are merely examples.
Communications circuitry 108 may include any circuitry capable of connecting user device 100 to one or more additional devices (e.g., laptop computers, smartphones, etc.), one or more networks (e.g., local area networks ("LAN"), wide area networks ("WAN"), point-to-point networks, etc.), and/or one or more servers (e.g., file management systems, music directories, etc.). Communications circuitry may support any suitable communications protocol including, but not limited to, Wi-Fi (e.g., 802.11 protocol), Bluetooth®, radio frequency systems (e.g., 900 MHz, 1.4 GHz, and 5.6 GHz communications systems), infrared, GSM, GSM plus EDGE, CDMA, quadband, LTE, VoIP, or any other communications protocol, or any combination thereof.
Input interface 110 may include any suitable mechanism and/or component for receiving inputs from a user operating user device 100. For example, input interface 110, in one embodiment, includes one or more cameras 112. Cameras 112 may correspond to any suitable image capturing component capable of capturing images and/or video. For example, camera 112 may capture photographs, sequences of photographs, rapid shots, videos, or any other type of image, or any combination thereof. In some embodiments, cameras 112 may be capable of capturing high-definition ("HD"), 3-D, and/or panoramic images and/or videos. In some embodiments, cameras 112 may include one or more filters or settings for images and/or video that may be captured by cameras 112 (e.g., black and white, monochromatic, fades, slow-motion, etc.). In some embodiments, user device 100 may include multiple instances of camera 112. For example, user device 100 may include a front-facing camera and a rear-facing camera. In some embodiments, one or more additional image capturing components, such as a zoom or add-on filter, may be used in connection with, or instead of, camera 112 to aid in capturing images and/or videos.
`6
`Microphone(s) 114 may be any component capable of
`detecting and/or receiving audio signals. For example,
`microphone(s) 114 may include one or more sensors for
`generating electrical signals and circuitry capable of pro-
`cessing the generated electrical signals. In some embodi-
`ments, user device 100 may include multiple instances of
`microphone 114, such as a first microphone and a second
`microphone. In some embodiments, user device 100 may
`include multiple microphones capable of detecting various
`frequency levels (e.g., high/low-frequency microphones).
`Furthermore, in some embodiments, one or more external
`microphones may be connected to user device 100 and may
`be used in conjunction with, or instead of, microphone(s)
`114.
`
`External device input 116 may correspond to any input
`interface or set of input interfaces capable of receiving
`inputs from an external device. For example, one or more
`external microphones, as described above, may be coupled
`to user device 100 through external device input 116. As
`another example, a user may couple an electric guitar,
`drums, and/or keyboard to user device 100 via external
`device input 116. However, it is also possible for a user to
`couple one or more external devices, such as a guitar or
`keyboard, to an external musical interface (e.g., a mixing
`board or computer), which in turn may couple to user device
`100 via external device input 116.
Output interface 118 may include any suitable mechanism or component for generating outputs to a user operating user device 100. For example, display 120 may, in some embodiments, present content to a user on user device 100. Display 120 may be any size or shape, and may be located on one or more regions/sides of user device 100. For example, display 120 may fully occupy a first side of user device 100, or display 120 may only occupy a portion of a first side of user device 100. Various display types include, but are not limited to, liquid crystal displays (“LCD”), monochrome displays, color graphics adapter (“CGA”) displays, enhanced graphics adapter (“EGA”) displays, video graphics array (“VGA”) displays, 3-D displays, high-definition (“HD”) displays, or any other display type, or any combination thereof.
In some embodiments, display 120 may be a touch screen and/or an interactive touch-sensitive display screen. For example, display 120 may be a multi-touch panel coupled to processor(s) 102, and may include one or more capacitive sensing panels. In some embodiments, display 120 may also correspond to a component, or portion, of input interface 110, as it may recognize one or more touch inputs. For example, in response to detecting certain touch inputs on display 120, processor(s) 102 may execute one or more functions for user device 100 and/or may display certain content on display 120.
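The touch-input behavior described above, in which a detected touch causes processor(s) 102 to execute a corresponding function, resembles an event-dispatch pattern. The sketch below is illustrative only; the screen-region names and handler functions are hypothetical and do not appear in the patent.

```python
# Hypothetical event-dispatch sketch: a touch input on a named screen
# region is mapped to a function, mirroring the "touch input causes the
# processor to execute a function" behavior described for display 120.

def make_dispatcher(handlers):
    """Return a dispatcher that routes a touch region to its handler."""
    def dispatch(region):
        handler = handlers.get(region)
        if handler is None:
            return None  # unrecognized touch input: take no action
        return handler()
    return dispatch

dispatch = make_dispatcher({
    "record_button": lambda: "start_recording",
    "play_button": lambda: "start_playback",
})
```

A touch reported on an unmapped region simply falls through with no action, matching the permissive "may execute one or more functions" phrasing.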
Speakers 122 may correspond to any suitable mechanism for outputting audio signals. For example, speakers 122 may include one or more speaker units, transducers, or arrays of speakers and/or transducers capable of broadcasting audio signals and/or audio content to an area where user device 100, or a user, may be located. In some embodiments, speakers 122 may correspond to headphones or ear buds capable of broadcasting audio directly to a user. In yet another embodiment, one or more external speakers may be connected to user device 100 (e.g., via external device input 116), and may serve to provide audio content to a user associated with user device 100.
FIG. 2 is an illustrative diagram of a user interface displayed on a user device in acco

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket