`Eloi Batlle, Jaume Masip, Enric Guaus
`Audiovisual Institute and Dept. of Technology
`Pompeu Fabra University
Pg. Circumval·lació, 8
`E-08003 Barcelona. Catalunya-Spain
`
email: {eloi,jmasip,eguaus}@iua.upf.es
`
`ABSTRACT
Automatic identification of music titles and copyright enforcement of audio material have become topics of great interest. One of the main problems with broadcast audio is that the received audio suffers several transformations before reaching the listener (equalization, noise, a speaker talking over the audio, parts of the songs changed or removed, etc.) and, therefore, the original and the broadcast songs are very different from the signal point of view. In this paper, we present a new method to minimize the effects of audio manipulation (i.e. radio edits) and distortions due
`to broadcast transmissions. With this method, the identi-
`fication system is able to correctly recognize small frag-
`ments of music embedded in continuous audio streams (ra-
`dio broadcast as well as Internet radio) and therefore gen-
`erate full play-lists. Since the main goal of this system is
`copyright enforcement, the system has been designed to
`give almost no false positives and achieve very high ac-
`curacy.
`
`KEY WORDS
`Broadcast audio identification, mel-cepstrum coefficients,
hidden Markov models, Viterbi, music information retrieval.
`
`1 Introduction
`
Systems that are able to automatically identify songs have received a great deal of recent attention. The main goal of these systems is to associate a singer and a title to an audio file. That is, given an audio file (WAV, MP3, etc.), the system analyzes its content and matches it to a database of originals (previous observations and analysis of these files or very similar ones).
Among all proposals, content-based identification techniques have proved to be more flexible and robust than file comparison, metadata or watermarking proposals, which depend on the integrity of non-audible data. Content-based identification techniques are based on the acoustic qualities of audio. Different system implementations for this approach have been proposed, which contain different audio feature extraction mechanisms and database matching algorithms [1, 2]. Nevertheless, none of these system proposals explicitly faces the challenges derived from broadcast audio identification.
`
`As a matter of fact, commercial radio stations modify
`the songs before broadcast to increase their impact on ca-
`sual listeners and, therefore, it is very common to find some
`parts of the song changed (or repeated or deleted). Another
`common situation is the broadcasting of only a few seconds
`of the song. From a copyright point of view, it is very im-
`portant to detect these situations because copyrights should
`be taken into account not only for the whole song but also
`for small parts of it.
`Another problem in broadcast environments is the fact
`that the system has no access to isolated songs, but to a con-
`tinuous stream of unlabeled audio that contains not only
`songs but also news, commercials and other unknown ma-
terial. All these audio events mix together, often with fuzzy transitions between them.
`In the next sections, we present an audio identification
`system that is able to correctly identify songs in a continu-
`ous stream of unknown audio material (song spotting) and
`to generate a play-list finding the beginning and end points
`of each song.
`This paper is structured as follows. It starts with an
`overview of the global system and how it works. Section
`3 introduces the feature extraction front-end that discrim-
`inates relevant information from the whole audio signal.
`Then, section 4 presents the channel estimation technique
`used to counterbalance the effects of signal editing and
`broadcasting. Since the identification system is based on
`a stochastic approach, section 5 sketches the training algo-
`rithm for the system, while Section 6 describes the whole
`matching process. Finally, section 7 shows the system per-
`formance results under different identification conditions.
`
`2 System overview
`
The identification system is built on a well known stochas-
`tic pattern matching technique known as Hidden Markov
`Models (HMM). HMMs have proven to be a very powerful
`tool to statistically model a process that varies in time [3].
`The idea behind them is very simple. Consider a stochastic
`process from an unknown source and consider also that we
`only have access to its output in time. Then, HMMs are
well suited to model this kind of event. From this point of
`view, HMMs can be seen as a doubly embedded stochastic
process with a process that is not observable (hidden process) and can only be observed through another stochastic process (observable process) that produces the time set of observations.
`We can see music as a sequence of audio events. The
simplest way to show an example of this is in a monophonic piece of music. Each note can be seen as an acoustic event and, therefore, from this point of view the piece is a sequence of events. However, polyphonic music is much more complicated since several events occur simultaneously. In this case we can define a set of abstract events that do not have any physical meaning but that mathematically describe the sequence of complex music. In section 5, we describe how our system deals with this kind of complex music. With this approach, we can build a database with the sequences of audio events of all the music we want to identify.
`To identify a fragment of a piece of music in a stream
`of audio, the system continuously finds the probability that
`the events of the pieces of music stored in the database are
`the generators of this unknown broadcast audio. This is
done by using the HMMs as generators of observations
`instead of decoding the audio into a sequence of HMMs
`(see section 6).
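
As an illustration of this generative view (not part of the original system), the following minimal Python sketch samples observations from a doubly embedded stochastic process: a hidden state sequence that is never observed directly drives the observable output. The transition matrix, emission means and initial distribution are illustrative values only.

```python
import numpy as np

# Minimal sketch of an HMM used as a *generator* of observations: a hidden
# state sequence (not observable) drives an observable emission process.
# All numbers here are illustrative, not taken from the paper.
rng = np.random.default_rng(0)

A = np.array([[0.9, 0.1],        # hidden state transition probabilities
              [0.2, 0.8]])
means = np.array([0.0, 3.0])     # emission mean of each hidden state
pi = np.array([0.5, 0.5])        # initial state distribution

def generate(n_frames):
    """Sample an observation sequence from the hidden process."""
    state = rng.choice(2, p=pi)
    observations = []
    for _ in range(n_frames):
        observations.append(rng.normal(means[state], 1.0))  # observable part
        state = rng.choice(2, p=A[state])                    # hidden step
    return np.array(observations)

print(generate(10))
```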
`
Since we only have access to the distorted data and, due to the nature of the problem, we cannot know what the distortion was, we need a method to recover the original audio characteristics from the distorted signal without having access to the manipulations this audio has suffered. Here we define the channel as the combination of all possible distortions such as equalizations, noise sources and DJ manipulations.
`
`
`3 Feature extraction
`
`
`The first step in a pattern matching system is the extraction
`of some features from the raw audio samples. We choose
`the parameter extraction method depending on the nature
`of the audio signal as well as the application. Since the aim
`of our system is to identify music behaving as close as pos-
`sible to a human being, it is sensible to approximate the hu-
`man inner ear in the parametrization stage. Therefore, we
`use a filter-bank based analysis procedure. In speech recog-
`nition technology, mel-cepstrum coefficients (MFCC) are
`well known and their behavior leads to high performance
of the systems [4]. It can also be shown that MFCCs are well suited for music analysis [5].
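
As a rough illustration of this front-end, the sketch below computes MFCC vectors with the third-party librosa library. The file name, frame size, hop size and number of coefficients are assumptions for illustration; the paper does not specify them.

```python
import librosa  # third-party library implementing the MFCC analysis chain

# Sketch of the filter-bank based front-end (parameters are assumed).
y, sr = librosa.load("song.wav", sr=22050, mono=True)  # placeholder file name
mfcc = librosa.feature.mfcc(
    y=y, sr=sr,
    n_mfcc=13,       # cepstral coefficients per frame (assumed)
    n_fft=1024,      # analysis window length in samples (assumed)
    hop_length=512,  # frame step in samples (assumed)
)
# mfcc has shape (n_mfcc, n_frames): one feature vector per audio frame.
```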
`
`4 Channel estimation
`
Techniques for dealing with known distortions are straightforward. However, in real radio broadcasts, the distortion that affects the audio signal is unknown. To remove some effects of these distortions, we can assume that they are caused by a linear time-invariant (or slowly varying) channel. With this approach we assume that all the distortion can be approximated by a linear filter that slowly changes in time. Thus, if we define x(t) as the original signal, h(t) as the channel impulse response, y(t) as the received signal, and X(ω), H(ω) and Y(ω) as their Fourier transforms, we can write

Y(ω) = X(ω) H(ω)    (1)

and in the logarithmic space

log |Y(ω)| = log |X(ω)| + log |H(ω)|    (2)

If the distorting channel is slowly varying, we can design a filter that, applied to the time sequence of parameters, is able to remove the effects of the channel. The filter we designed for our system is a first-order high-pass filter of the form

H_f(z) = (1 - z^-1) / (1 - λ z^-1)    (3)

By filtering the parameters of the distorted audio with this filter, they are converted, as close as possible, to the clean version. By removing this channel effect from the received signal, the identification performance is greatly improved because the distortions caused by equalization and transmission are removed [6]. Therefore the system is able to deal not only with clean CD audio but also with noisy broadcast audio.
`
5 Training

In our approach, HMMs represent generic acoustic genera-
`tors. Each HMM models one generic source of audio. For
`example, if the audio we model has a piano and a trumpet,
`we will have one HMM to model the piano and another
`one to model the trumpet. However, commercial pop mu-
`sic has a very complex variety and mixture of sounds and
`so it is almost impossible to assign a defined sound source
`to each HMM. Therefore, each HMM in the system mod-
els abstract audio generators; that is, each HMM is calculated to maximize the probability that, if it were really a sound generator, it would generate that sound (complex or not). Thus, the HMMs are calculated so that the probability that a given sequence of them generates a particular song is maximized and, given all possible songs, we can find a sequence of HMMs for each of them that generates it reasonably well.
To derive the formulas to calculate the parameters of each HMM we used a modification of the Expectation-Maximization algorithm where the incomplete data (as defined in [7]) are not only the parameters of the HMMs but also their correct sequences for each song. If we suppose that a probability density function f(x|Φ) exists that is related to the probability density function g(y|Φ) of the incomplete data, then we can relate them with

g(y|Φ) = ∫_{X(y)} f(x|Φ) dx    (4)

where y are the samples from the incomplete sample space and x are the samples from the complete sample space. We also suppose that there is at least one transformation from the space of complete samples to the space of incomplete samples.
Therefore, the training stage in our system is done in an iterative way similar to the Baum-Welch algorithm [8] widely used in speech recognition systems. Speech systems use HMMs to model phonemes (or phonetically derived units) but, unfortunately, in music identification systems we do not have any clear kind of unit to use. That is why at each iteration a new set of units is estimated as part of the incomplete data, in order to jointly find the sequence probabilities and the set of abstract units that best describe complex music. After some experimental results, we found that a good set of units is completely estimated after 25-30 iterations.
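
The following sketch conveys the flavor of this iterative estimation with a hard-assignment simplification: units are reduced to single Gaussian means and frames are re-assigned at each iteration. It is a stand-in for intuition only; the actual system re-estimates full HMMs with the EM variant described above.

```python
import numpy as np

def train_units(songs, n_units=4, n_iter=25, seed=0):
    """Jointly estimate abstract units and their frame assignments.

    songs: list of (n_frames, n_coeffs) feature arrays. Units are single
    Gaussian means here for brevity (a hard-EM caricature of the real HMMs).
    """
    rng = np.random.default_rng(seed)
    frames = np.concatenate(songs, axis=0)
    means = frames[rng.choice(len(frames), n_units, replace=False)]
    for _ in range(n_iter):  # the paper reports 25-30 iterations suffice
        # E-like step: assign every frame to its closest unit
        dist = ((frames[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        labels = dist.argmin(axis=1)
        # M-like step: re-estimate each unit from its assigned frames
        for u in range(n_units):
            if np.any(labels == u):
                means[u] = frames[labels == u].mean(axis=0)
    return means

units = train_units([np.random.randn(200, 13) for _ in range(3)])
```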
`
6 Audio Identification

HMM training described in the previous section was aimed at obtaining the maximum distance between all possible song models in order to increase speed and reliability during the audio identification phase. Once the HMMs are trained, the next steps toward building the entire system consist in obtaining the song models and matching them against streaming audio signals.

6.1 Signature generation

Signature generation consists in obtaining a sequence of HMMs for each song that uniquely identifies it among the others. The song signatures are generated using the Viterbi algorithm [9]. The Viterbi algorithm computes the highest probability path between HMMs on a complete HMM graph model, as shown in Figure 1.a. This figure is followed by an example of Viterbi signature generation in Figure 1.b. All the song signatures are stored in a signature database.

Figure 1. The Viterbi algorithm computes the optimum path on a complete HMM graph model: (a) HMM model; (b) Viterbi output (e.g. S1 = EBCDBBAAE).

The time complexity of the Viterbi algorithm that computes the signature of a song with P frames on a complete graph with Q HMMs is O(P·Q²), while the space complexity required for backtracking the optimal sequence is O(P·Q). Therefore, the implementation of the signature generator is feasible as far as Q is kept under small orders of magnitude.
`0002
`
`
`
`6.2 Identification algorithm
`
The identification algorithm is in charge of matching all the signatures against the input streaming audio signals to determine whether a song section has been detected. The Viterbi algorithm is used again, with the purpose of exploiting the observation capabilities of the HMM models contained in the signature sequences. Nevertheless, this time the graph model is not a complete graph but a cyclic HMM model, as shown in Figure 2. This model is built by linking all song HMM sequences from the identity signature database in a ring structure where each HMM only has two links, one to itself and one toward its immediate neighbor. Nevertheless, the Viterbi algorithm is periodically allowed to use internal ring links in order to allow jumps between different song sections. Combining the Viterbi algorithm with the HMM ring model proposal, the identification phase can perform all the following key features:
`
- Normal operation: Identify the song signature and perform continuous time tracking between song start and song end. The optimal path corresponds to consecutive HMM matching where only external links are used.

- Song mixing: Identify internal jumps between songs. The optimal path corresponds to consecutive HMM matching using external links and only one internal link.

- Song interruption: Identify non-modeled sections. The optimal path corresponds to behaviors that cannot be classified in the previous cases.
`
Figure 2. HMM model for a signature database with four songs: S0-S1-S2-S3.
`
The time complexity of the Viterbi algorithm for the ring graph is O(P·Q), while it only requires a space of O(Q) to process P audio frames with Q HMMs in the ring graph. Therefore, the identification algorithm scales linearly with the number of songs in the database because each HMM only has two links and the internal link period can be large enough to have a small impact on the time complexity while maintaining a reasonable song-mixing capability.
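
A sketch of the ring decoding follows: with only a self link and a neighbor link per unit, each Viterbi update costs O(Q), and opening the internal links every jump_period frames allows jumps between song sections. The penalties and the period are illustrative assumptions, and no backtracking table is kept, matching the O(Q) space figure above.

```python
import numpy as np

def ring_viterbi(loglik, jump_period=50, jump_pen=-5.0, step_pen=-0.5):
    """loglik[t, j]: log-likelihood of frame t under ring position j."""
    T, N = loglik.shape
    score = loglik[0].copy()
    for t in range(1, T):
        stay = score                          # link to itself
        step = np.roll(score, 1) + step_pen   # link from the ring neighbor
        best = np.maximum(stay, step)
        if t % jump_period == 0:              # periodic internal ring links
            best = np.maximum(best, score.max() + jump_pen)
        score = best + loglik[t]
    return score  # per-position score of the best path ending at the last frame

scores = ring_viterbi(np.random.randn(1000, 256))  # dummy scores
```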
`
`
`0003
`
`
`
`
`7 Experimental results
`
`The identification algorithm and the song signature
`database were implemented using the C++ programming
`language. The innermost time critical loop was developed
`in assembler code in order to achieve higher optimization.
The running process consumed 35 Mbytes of memory
`and achieved real-time performance while processing one
`streaming audio input. The computing platform was a sin-
`gle Pentium-III CPU with 1GHz clock.
`The system parameters used for the real-time imple-
`mentation were:
`
- 256 HMMs to generate the song signatures.
- 450 HMMs on average per song signature.
- 3852 song signatures in the database.
- 6 seconds periodicity for the internal links.
`
`The first experiment consisted in streaming one song
`to the audio identification algorithm. Figure 4 shows the
`Viterbi output from the identified song signature. In this
`case, the Viterbi algorithm kept running under normal op-
`eration since no transitions were performed between songs.
`The continuous diagonal line corresponds to the end-to-end
`detection of the main sound track while the small parallel
`diagonals correspond to sections that were identified mul-
`tiple times inside the same song.

Figure 4. Planar plot of the Viterbi score achieved by the song signature (horizontal axis) over time (vertical axis).
`Three additional experiments were run with the aim of
`studying the identification system reliability under differ-
`ent broadcast audio distortions. An automated test-bench
`was built with the aim of performing exhaustive statisti-
`cal studies of the identification system over the complete
`song database. The schematic of the complete test-bench
`used in all the experiments is shown in Figure 3. The first
`block builds the signature database by processing all the
`mp3 audio files from original CD albums. An audio tool
`produces a continuous audio stream and the original audio
`labels associated with the complete mp3 file database. The
audio stream contained a single mono channel coded with signed words at a 22050 Hz sampling rate. The audio labels combined
`the song identification number and the time stamp that mea-
`sured the distance from the beginning. The distortion block
`is optional and modifies the original audio stream trying to
`reproduce the main audio editions performed in real radio
broadcast studios. The identification block is in charge of observing the audio stream with the HMMs, matching them against the signature database and generating detected audio labels. Finally, the monitoring tool verifies the detected audio labels against the original labels and retrieves statistics about the audio identification system reliability.

Figure 3. Test-bench schematic.

7.1 Matching the complete audio database

This experiment aims at studying the capabilities of the audio identification system to identify original audio streams. This feature is exploited when the content of mp3 or other audio files can be analyzed directly by the application. In a broader sense, these experiments determine the raw identification capabilities of the HMM observers with the Viterbi algorithm.
The input for this experiment was a continuous audio stream generated by appending the 3856 songs contained in the complete mp3 audio database. Approximately, the length of the complete audio stream was 250 hours (10.5 days). The distortion block was not present during this experiment. The experiment took 8 hours to complete on a cluster of 16 computers with dual Pentium-III CPUs at 1 GHz running parallelized versions of the audio tool and the audio identification block.
The analysis of the preliminary results determined the existence of a large number of identification labels that overlapped and generated false positive detections. As a matter of fact, three sources of false positives were found:

- Same file: Two copies of the same audio file were found in the mp3 database when a song appeared both in the original album and in a compilation album by the same artist. Duplicated copies were also detected in albums from different artists who performed together.

- Same song: Two different audio files contain the same song but performed in a slightly different way, as may be the case of the original and live concert versions of the same song.

- Song mix: A single song is composed by mixing pieces of songs from different albums.

The false positives were corrected by means of label exchange, using tables that contained the allowed correspondences between songs. The error rate measures obtained before and after extracting false positives are shown in Figure 5. The figure presents three sections clearly differentiated in terms of error rate: the song introduction, the song middle stage and the song end. The higher error rates found at the song introductions and endings are due to a higher mismatch between the MFCC coefficients and the instrumental sections that concentrate at the song introduction and song end. Moreover, as already stated in Section 5, each HMM represents a generic acoustic generator and, on average, these sections are simpler in terms of instrumental complexity or even contain significant silence periods.

Figure 5. Probability distribution of the error rate over the song length.
`
`0004
`
`
`
7.2 Matching the complete audio database with radio distortions

It is well known that radio stations use complex sound processing techniques to get higher loudness and produce the effect of impressive sound broadcast. The use of all these sound processing techniques is not perfectly defined, and it depends on the music style and the legislation of each specific country, among other factors. The most common techniques are signal compression, enhancements, time stretching and exciters.
The radio distortion model used in the test-bench focuses on the compression technique. Audio compression consists in dynamic range reduction, due to an adaptive and variable gain applied to the input signal, which allows signal amplification without changing the maximum peak level. Therefore, audio compression increases the overall loudness.
`
In fact, the distortion block defined here is a combination of a compressor and a limiter in order to achieve a fixed maximum level. There are four important parameters in the compression process: threshold, ratio, attack and release.
Figure 6. Compressor parameters: (a) threshold and ratio (curves for ratios 1:1, 1:2, 1:4 and >1:20); (b) attack, release and gain characteristics, overlaid on the original and modified signals.
`
The threshold defines the level above which the compressor reduces the input signal, according to the ratio. This is not an instantaneous process, and we must choose the attack and the release times in order to define how fast the signal is compressed when its amplitude increases, and how fast the signal leaves compression when its amplitude decreases, respectively. The threshold, ratio, attack and release values used in this experiment are 0.5, 40, 10 ms and 2500 ms respectively. All these values were experimentally fixed for the worst case.
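
A minimal sketch of such a compressor stage is given below, using the threshold, ratio, attack and release values quoted above; the one-pole envelope follower is an assumed implementation detail.

```python
import numpy as np

def compress(x, sr=22050, threshold=0.5, ratio=40.0,
             attack_ms=10.0, release_ms=2500.0):
    """Feed-forward dynamic range compressor (illustrative sketch)."""
    a_att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env, out = 0.0, np.empty_like(x)
    for n, s in enumerate(x):
        level = abs(s)
        a = a_att if level > env else a_rel  # fast attack, slow release
        env = a * env + (1.0 - a) * level    # smoothed amplitude envelope
        gain = 1.0
        if env > threshold:                  # reduce the excess by the ratio
            gain = (threshold + (env - threshold) / ratio) / env
        out[n] = s * gain
    return out

y = compress(np.sin(2 * np.pi * 440 * np.arange(22050) / 22050))
```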
`Some Radio Stations apply multi-band compression:
`the compression applied at different frequency bands is not
`the same. With this technique, the original sound gets more
`presence and contrast. In Fig. 7, we can see the effect of
`the compression techniques mentioned above, applied to an
`original signal from a CD.

Figure 7. Compressor effects: (a) original signal; (b) compressed signal.
The identification test-bench with the distortion block produced the error rate measures shown in Figure 8. The figure shows the system performance with and without the false positives corrected, using the same tables as in the first experiment. As can be seen, the distortion block introduces a significant performance penalty in terms of false negative labels, while it has a minimal impact on the final error rate when comparing Figure 8 and Figure 5.

Figure 8. Probability distribution of the error rate over the song length.

7.3 Matching broadcast radio and MP3 compression

This test used an input of 25 hours of audio captured from a single radio broadcast station, which accumulated 217 songs in total, interleaved with news and commercials. Table 1 shows the system performance results for the radio capture as well as for different mp3 compression rates, before and after extracting false positives. Therefore, the system accuracy can be granted as far as the table correspondences are maintained within the song database.

Audio Source    Identification with False Positives    Identification with no False Positives
Original                  100%                                    100%
Radio capture             100%                                    100%
MP3 128 kbps              100%                                    100%
MP3 32 kbps               99.83%                                  100%
MP3 24 kbps               99.04%                                  100%

Table 1. System performance on different environments.

8 Conclusions

The combination of channel estimation, trained HMM observers, and Viterbi sequencing and alignment algorithms results in highly robust audio identification system performance. The system has been characterized extensively in terms of error rate response under original and radio distortion audio databases, radio captures and different mp3 compression rates. The system analysis showed that false positives were due to song copies, versions and remixes. Moreover, the system performance for different song sections has been determined. Finally, radio distortion and mp3 compression deteriorate the algorithm output but do not impact the audio detection reliability.
`
`
`
`0005
`
`
`
`
`References
`
`[1] T. Kastner, E. Allamanche, J. Herre, O. Hellmuth,
`M. Cremer, and H. Grossmann, “MPEG-7 Scalable
`Robust Audio Fingerprinting,” in Proceedings of the
`AES Convention, 2002.
`
`[2] J. Haitsma, T. Kalker, and J. Oostveen, “Robust Audio
`Hashing for Content Identification,” in Proceedings of
`the Content-Based Multimedia Indexing, 2001.
`
`[3] L. R. Rabiner, “A Tutorial on HMM and Selected Ap-
`plications in Speech Recognition,” Proceedings of the
`IEEE, vol. 77, no. 2, pp. 257–286, 1989.
`
`[4] E. Batlle, C. Nadeu, and J. A. R. Fonollosa, “Feature
Decorrelation Methods in Speech Recognition,” in In-
`ternational Conference on Spoken Language Process-
`ing, Sydney, 1998, vol. 3, pp. 951–954.
`
`[5] B. Logan, “Mel Frequency Cepstral Coefficients for
`Music Modeling,” in ISMIR, 2000.
`
`[6] R. A. Bates, “Reducing the Effects of Linear Channel
`Distortion on Continuous Speech Recognition,” M.S.
`thesis, Col. of Engineering. Boston University, 1996.
`
[7] A. P. Dempster et al., “Maximum Likelihood from Incomplete Data via the EM Algorithm,” Journal of the Royal Statistical Society, vol. 39, no. 1, pp. 1–38, 1977.

[8] L. E. Baum and J. A. Eagon, “An Inequality with Applications to Statistical Estimation for Probabilistic Functions of Markov Processes and to a Model for Ecology,” BAMS, pp. 360–363, 1967.

[9] A. J. Viterbi, “Error Bounds for Convolutional Codes and an Asymptotically Optimum Decoding Algorithm,” IEEE Trans. Info. Theory, vol. 13, no. 2, pp. 260–269, 1967.
`
`0006
`
`

Accessing this document will incur an additional charge of $.
After purchase, you can access this document again without charge.
Accept $ ChargeStill Working On It
This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.
Give it another minute or two to complete, and then try the refresh button.
A few More Minutes ... Still Working
It can take up to 5 minutes for us to download a document if the court servers are running slowly.
Thank you for your continued patience.

This document could not be displayed.
We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.
You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.
Set your membership
status to view this document.
With a Docket Alarm membership, you'll
get a whole lot more, including:
- Up-to-date information for this case.
- Email alerts whenever there is an update.
- Full text search for other cases.
- Get email alerts whenever a new case matches your search.

One Moment Please
The filing “” is large (MB) and is being downloaded.
Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!
If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document
We are unable to display this document, it may be under a court ordered seal.
If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.
Access Government Site