High Performance Browser Networking
The Wayback Machine - https://web.archive.org/web/20140717001215/http://chimera.labs.oreilly.com/books/1230000000545/ch18.html

Chapter 18. WebRTC
Part IV. Browser APIs and Protocols
Web Real-Time Communication (WebRTC) is a collection of standards, protocols, and JavaScript APIs, the combination of which enables peer-to-peer audio, video, and data sharing between browsers (peers). Instead of relying on third-party plug-ins or proprietary software, WebRTC turns real-time communication into a standard feature that any web application can leverage via a simple JavaScript API.

Delivering rich, high-quality RTC applications such as audio and video teleconferencing and peer-to-peer data exchange requires a lot of new functionality in the browser: audio and video processing capabilities, new application APIs, and support for half a dozen new network protocols. Thankfully, the browser abstracts most of this complexity behind three primary APIs:

MediaStream: acquisition of audio and video streams
RTCPeerConnection: communication of audio and video data
RTCDataChannel: communication of arbitrary application data

All it takes is a dozen lines of JavaScript code, and any web application can enable a rich teleconferencing experience with peer-to-peer data transfers. That’s the promise and the power of WebRTC! However, the listed APIs are also just the tip of the iceberg: signaling, peer discovery, connection negotiation, security, and entire layers of new protocols are just a few of the components required to bring it all together.

Not surprisingly, the architecture and the protocols powering WebRTC also determine its performance characteristics: connection setup latency, protocol overhead, and delivery semantics, to name a few. In fact, unlike all other browser communication, WebRTC transports its data over UDP. However, UDP is also just a starting point. It takes a lot more than raw UDP to make real-time communication in the browser a reality. Let’s take a closer look.
Standard under construction

WebRTC is already enabled for 1B+ users: the latest Chrome and Firefox browsers provide WebRTC support to all of their users! Having said that, WebRTC is also under active construction, both at the browser API level and at the transport and protocol levels. As a result, the specific APIs and protocols discussed in the following chapters may still change in the future.

Standards and Development of WebRTC
Enabling real-time communication in the browser is an ambitious undertaking, and arguably one of the most significant additions to the web platform since its very beginning. WebRTC breaks away from the familiar client-to-server communication model, which results in a full re-engineering of the networking layer in the browser, and also brings a whole new media stack, which is required to enable efficient, real-time processing of audio and video.

As a result, the WebRTC architecture consists of over a dozen different standards, covering both the application and browser APIs, as well as many different protocols and data formats required to make it work:

Web Real-Time Communications (WEBRTC) W3C Working Group is responsible for defining the browser APIs.
Real-Time Communication in Web-browsers (RTCWEB) is the IETF Working Group responsible for defining the protocols, data formats, security, and all other necessary aspects to enable peer-to-peer communication in the browser.

WebRTC is not a blank-slate standard. While its primary purpose is to enable real-time communication between browsers, it is also designed such that it can be integrated with existing communication systems: voice over IP (VoIP), various SIP clients, and even the public switched telephone network (PSTN), just to name a few. The WebRTC standards do not define any specific interoperability requirements or APIs, but they do try to reuse the same concepts and protocols where possible.

In other words, WebRTC is not only about bringing real-time communication to the browser, but also about bringing all the capabilities of the Web to the telecommunications world—a $4.7 trillion industry in 2012! Not surprisingly, this is a significant development, and one that many existing telecom vendors, businesses, and startups are following closely. WebRTC is much more than just another browser API.
Audio and Video Engines

Enabling a rich teleconferencing experience in the browser requires that the browser be able to access the system hardware to capture both audio and video—no third-party plug-ins or custom drivers, just a simple and consistent API. However, raw audio and video streams are also not sufficient on their own: each stream must be processed to enhance quality, the streams must be synchronized, and the output bitrate must adjust to the continuously fluctuating bandwidth and latency between the clients.

On the receiving end, the process is reversed, and the client must decode the streams in real time and be able to adjust to network jitter and latency delays. In short, capturing and processing audio and video is a complex problem. However, the good news is that WebRTC brings fully featured audio and video engines to the browser (Figure 18-1), which take care of all the signal processing, and more, on our behalf.
Genius Sports Ex. 1037, p. 1
Figure 18-1. WebRTC audio and video engines

The full implementation and technical details of the audio and video engines are easily a topic for a dedicated book and are outside the scope of our discussion. To learn more, head to http://www.webrtc.org.

The acquired audio stream is processed for noise reduction and echo cancellation, then automatically encoded with one of the optimized narrowband or wideband audio codecs. Finally, a special error-concealment algorithm is used to hide the negative effects of network jitter and packet loss—and that’s just the highlights! The video engine performs similar processing by optimizing image quality, picking the optimal compression and codec settings, applying jitter and packet-loss concealment, and more.

All of the processing is done directly by the browser, and even more importantly, the browser dynamically adjusts its processing pipeline to account for the continuously changing parameters of the audio and video streams and networking conditions. Once all of this work is done, the web application receives the optimized media stream, which it can then output to the local screen and speakers, forward to its peers, or post-process using one of the HTML5 media APIs!

Acquiring Audio and Video with getUserMedia

The Media Capture and Streams W3C specification defines a set of new JavaScript APIs that enable the application to request audio and video streams from the platform, as well as a set of APIs to manipulate and process the acquired media streams. The MediaStream object (Figure 18-2) is the primary interface that enables all of this functionality.
Figure 18-2. MediaStream carries one or more synchronized tracks

The MediaStream object consists of one or more individual tracks (MediaStreamTrack).
Tracks within a MediaStream object are synchronized with one another.
The input source can be a physical device, such as a microphone or webcam, a local or remote file from the user’s hard drive, or a remote network peer.
The output of a MediaStream can be sent to one or more destinations: a local video or audio element, JavaScript code for post-processing, or a remote peer.

A MediaStream object represents a real-time media stream and allows the application code to acquire data, manipulate individual tracks, and specify outputs. All the audio and video processing, such as noise cancellation, equalization, and image enhancement, is automatically handled by the audio and video engines.

However, the features of the acquired media stream are constrained by the capabilities of the input source: a microphone can emit only an audio stream, and some webcams can produce higher-resolution video streams than others. As a result, when requesting media streams in the browser, the getUserMedia() API allows us to specify a list of mandatory and optional constraints to match the needs of the application:
<video autoplay></video>

<script>
  var constraints = {
    audio: true,                      // request a mandatory audio track
    video: {                          // request a mandatory video track
      mandatory: {                    // list of mandatory constraints for video track
        width: { min: 320 },
        height: { min: 180 }
      },
      optional: [                     // array of optional constraints for video track
        { width: { max: 1280 }},
        { frameRate: 30 },
        { facingMode: "user" }
      ]
    }
  }

  // Request audio and video streams from the browser
  navigator.getUserMedia(constraints, gotStream, logError);

  // Callback function to process the acquired MediaStream
  function gotStream(stream) {
    var video = document.querySelector('video');   // HTML video output element
    video.src = window.URL.createObjectURL(stream);
  }

  function logError(error) { ... }
</script>
This example illustrates one of the more elaborate scenarios: we are requesting audio and video tracks, and we are specifying both the minimum resolution and the type of camera that must be used, as well as a list of optional constraints for 720p HD video! The getUserMedia() API is responsible for requesting access to the microphone and camera from the user, and for acquiring the streams that match the specified constraints—that’s the whirlwind tour.

The provided APIs also enable the application to manipulate individual tracks, clone them, modify constraints, and more. Further, once the stream is acquired, we can feed it into a variety of other browser APIs:

Web Audio API enables processing of audio in the browser.
Canvas API enables capture and post-processing of individual video frames.
CSS3 and WebGL APIs can apply a variety of 2D/3D effects on the output stream.

To make a long story short, getUserMedia() is a simple API to acquire audio and video streams from the underlying platform. The media is automatically optimized, encoded, and decoded by the WebRTC audio and video engines and is then routed to one or more outputs. With that, we are halfway to building a real-time teleconferencing application—we just need to route the data to a peer!

For a full list of capabilities of the Media Capture and Streams APIs, head to the official W3C standard.
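The mandatory versus optional semantics can be illustrated with a toy matcher. This is only a sketch, not the browser’s actual selection algorithm: the capability object and the helper name are invented for illustration.

```javascript
// Toy illustration of mandatory constraint semantics: every mandatory
// constraint must be satisfiable by the device, or acquisition fails.
// A real browser matches against actual device capabilities; here we
// model a camera as a plain object of numeric capabilities.
function satisfiesMandatory(capabilities, mandatory) {
  return Object.keys(mandatory).every(function (name) {
    var want = mandatory[name], have = capabilities[name];
    if (have === undefined) return false;
    if (want.min !== undefined && have < want.min) return false;
    if (want.max !== undefined && have > want.max) return false;
    return true;
  });
}

var webcam = { width: 1280, height: 720 };

satisfiesMandatory(webcam, { width: { min: 320 }, height: { min: 180 } });  // true
satisfiesMandatory(webcam, { width: { min: 1920 } });                       // false
```

Optional constraints, by contrast, are applied best-effort: the browser tries to honor them, but their failure does not abort the getUserMedia() call.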
Audio (Opus) and Video (VP8) Bitrates

When requesting audio and video from the browser, pay careful attention to the size and quality of the streams. While the hardware may be capable of capturing HD-quality streams, the CPU and bandwidth must be able to keep up! Current WebRTC implementations use the Opus and VP8 codecs:

The Opus codec is used for audio; it supports constant and variable bitrate encoding and requires 6–510 Kbit/s of bandwidth. The good news is that the codec can switch seamlessly and adapt to variable bandwidth.

The VP8 codec used for video encoding requires 100–2,000+ Kbit/s of bandwidth, where the bitrate depends on the quality of the streams:

720p at 30 FPS: 1.0~2.0 Mbps
360p at 30 FPS: 0.5~1.0 Mbps
180p at 30 FPS: 0.1~0.5 Mbps

As a result, a single-party HD call can require up to 2.5+ Mbps of network bandwidth. Add a few more peers, and the quality must drop to account for the extra bandwidth and the CPU, GPU, and memory processing requirements.
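For a rough sense of how these numbers add up in a multiparty call, here is a back-of-the-envelope estimate. It is only a sketch: it assumes a full-mesh topology in which every peer sends its streams directly to every other peer, and the per-stream bitrates are illustrative picks from the ranges above.

```javascript
// Back-of-the-envelope uplink estimate for a full-mesh call:
// each peer sends one audio and one video stream to every other peer.
function uplinkKbps(peers, videoKbps, audioKbps) {
  return (peers - 1) * (videoKbps + audioKbps);
}

// Two-party 720p call: one outbound stream pair per peer.
uplinkKbps(2, 2000, 64);  // 2064 Kbit/s, i.e. ~2 Mbps of uplink

// Add three more peers at the same quality and the uplink alone
// exceeds 8 Mbps, hence the need to lower quality as the party grows.
uplinkKbps(5, 2000, 64);  // 8256 Kbit/s
```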
Real-Time Network Transports

Real-time communication is time-sensitive; that should come as no surprise. As a result, audio and video streaming applications are designed to tolerate intermittent packet loss: the audio and video codecs can fill in small data gaps, often with minimal impact on the output quality. Similarly, applications must implement their own logic to recover from lost or delayed packets carrying other types of application data. Timeliness and low latency can be more important than reliability.

Audio and video streaming in particular have to adapt to the unique properties of our brains. Turns out we are very good at filling in the gaps but highly sensitive to latency delays. Add some variable delays into an audio stream, and "it just won’t feel right," but drop a few samples in between, and most of us won’t even notice!

The requirement for timeliness over reliability is the primary reason why the UDP protocol is the preferred transport for delivery of real-time data. TCP delivers a reliable, ordered stream of data: if an intermediate packet is lost, then TCP buffers all the packets after it, waits for a retransmission, and then delivers the stream in order to the application. By comparison, UDP offers the following "non-services":

No guarantee of message delivery
No acknowledgments, retransmissions, or timeouts.

No guarantee of order of delivery
No packet sequence numbers, no reordering, no head-of-line blocking.

No connection state tracking
No connection establishment or teardown state machines.

No congestion control
No built-in client or network feedback mechanisms.

Before we go any further, you may want to revisit Chapter 3, and in particular the section “Null Protocol Services”, for a refresher on the inner workings (or lack thereof) of UDP.

UDP offers no promises on reliability or order of the data, and delivers each packet to the application the moment it arrives. In effect, it is a thin wrapper around the best-effort delivery model offered by the IP layer of our network stacks.

WebRTC uses UDP at the transport layer: latency and timeliness are critical. With that, we can just fire off our audio, video, and application UDP packets, and we are good to go, right? Well, not quite. We also need mechanisms to traverse the many layers of NATs and firewalls, negotiate the parameters for each stream, provide encryption of user data, implement congestion and flow control, and more!

UDP is the foundation for real-time communication in the browser, but to meet all the requirements of WebRTC, the browser also needs a large supporting cast (Figure 18-3) of protocols and services above it.

Figure 18-3. WebRTC protocol stack
ICE: Interactive Connectivity Establishment (RFC 5245)
STUN: Session Traversal Utilities for NAT (RFC 5389)
TURN: Traversal Using Relays around NAT (RFC 5766)
SDP: Session Description Protocol (RFC 4566)
DTLS: Datagram Transport Layer Security (RFC 6347)
SCTP: Stream Control Transmission Protocol (RFC 4960)
SRTP: Secure Real-time Transport Protocol (RFC 3711)

ICE, STUN, and TURN are necessary to establish and maintain a peer-to-peer connection over UDP. DTLS is used to secure all data transfers between peers; encryption is a mandatory feature of WebRTC. Finally, SCTP and SRTP are the application protocols used to multiplex the different streams, provide congestion and flow control, and provide partially reliable delivery and other additional services on top of UDP.
Yes, that is a complicated stack, and not surprisingly, before we can talk about the end-to-end performance, we need to understand how each protocol works under the hood. It will be a whirlwind tour, but that’s our focus for the remainder of the chapter. Let’s dive in.

We didn’t forget about SDP! As we will see, SDP is a data format used to negotiate the parameters of the peer-to-peer connection. However, the SDP "offer" and "answer" are communicated out of band, which is why SDP is missing from the protocol diagram.

Brief Introduction to RTCPeerConnection API

Despite the many protocols involved in setting up and maintaining a peer-to-peer connection, the application API exposed by the browser is relatively simple. The RTCPeerConnection interface (Figure 18-4) is responsible for managing the full life cycle of each peer-to-peer connection.

Figure 18-4. RTCPeerConnection API

RTCPeerConnection manages the full ICE workflow for NAT traversal.
RTCPeerConnection sends automatic (STUN) keepalives between peers.
RTCPeerConnection keeps track of local streams.
RTCPeerConnection keeps track of remote streams.
RTCPeerConnection triggers automatic stream renegotiation as required.
RTCPeerConnection provides the necessary APIs to generate the connection offer, accept the answer, query the connection for its current state, and more.

In short, RTCPeerConnection encapsulates all the connection setup, management, and state within a single interface. However, before we dive into the details of each configuration option of the RTCPeerConnection API, we need to understand signaling and negotiation, the offer-answer workflow, and ICE traversal. Let’s take it step by step.
DataChannel

The DataChannel API enables exchange of arbitrary application data between peers—think WebSocket, but peer-to-peer, and with customizable delivery properties of the underlying transport. Each DataChannel can be configured to provide the following:

Reliable or partially reliable delivery of sent messages
In-order or out-of-order delivery of sent messages

Unreliable, out-of-order delivery is equivalent to raw UDP semantics. The message may make it, or it may not, and order is not important. However, we can also configure the channel to be "partially reliable" by specifying the maximum number of retransmissions or setting a time limit for retransmissions: the WebRTC stack will handle the acknowledgments and timeouts!

Each configuration of the channel has its own performance characteristics and limitations, a topic we will cover in depth later. Let’s keep going.
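The delivery modes above map onto the options dictionary passed when creating a channel. The following sketch uses the option names from the W3C WebRTC specification (ordered, maxRetransmits, maxPacketLifeTime); exact names and support vary across browser versions, and the helper function is invented for illustration.

```javascript
// Delivery-mode presets for createDataChannel() options.
// `ordered` toggles in-order delivery; `maxRetransmits` and
// `maxPacketLifeTime` (mutually exclusive) bound the retransmission
// effort, making the channel "partially reliable."
function channelOptions(mode) {
  switch (mode) {
    case "reliable":        return { ordered: true };                          // TCP-like
    case "unreliable":      return { ordered: false, maxRetransmits: 0 };      // UDP-like
    case "partial-retries": return { ordered: false, maxRetransmits: 3 };      // give up after 3 retries
    case "partial-timeout": return { ordered: false, maxPacketLifeTime: 500 }; // give up after 500 ms
  }
}

// In the browser this would be used as (illustrative):
//   var channel = pc.createDataChannel("telemetry", channelOptions("unreliable"));
channelOptions("unreliable");  // { ordered: false, maxRetransmits: 0 }
```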
Establishing a Peer-to-Peer Connection

Initiating a peer-to-peer connection requires (much) more work than opening an XHR, EventSource, or a new WebSocket session: the latter three rely on a well-defined HTTP handshake mechanism to negotiate the parameters of the connection, and all three implicitly assume that the destination server is reachable by the client—i.e., the server has a publicly routable IP address, or the client and server are located on the same internal network.

By contrast, it is likely that the two WebRTC peers are within their own distinct private networks and behind one or more layers of NATs. As a result, neither peer is directly reachable by the other. To initiate a session, we must first gather the possible IP and port candidates for each peer, traverse the NATs, and then run the connectivity checks to find the ones that work, and even then, there are no guarantees that we will succeed.

Refer to “UDP and Network Address Translators” and “NAT Traversal” for an in-depth discussion of the challenges posed by NATs for UDP and peer-to-peer communication in particular.

However, while NAT traversal is an issue we must deal with, we may have gotten ahead of ourselves already. When we open an HTTP connection to a server, there is an implicit assumption that the server is listening for our handshake; it may wish to decline it, but it is nonetheless always listening for new connections. Unfortunately, the same can’t be said about a remote peer: the peer may be offline or unreachable, busy, or simply not interested in initiating a connection with the other party.
As a result, in order to establish a successful peer-to-peer connection, we must first solve several additional problems:

1. We must notify the other peer of the intent to open a peer-to-peer connection, such that it knows to start listening for incoming packets.
2. We must identify potential routing paths for the peer-to-peer connection on both sides of the connection and relay this information between the peers.
3. We must exchange the necessary information about the parameters of the different media and data streams—protocols, encodings used, and so on.

The good news is that WebRTC solves one of these problems on our behalf: the built-in ICE protocol performs the necessary routing and connectivity checks. However, the delivery of notifications (signaling) and initial session negotiation is left to the application.

Signaling and Session Negotiation

Before any connectivity checks or session negotiation can occur, we must find out if the other peer is reachable and if it is willing to establish the connection. We must extend an offer, and the peer must return an answer (Figure 18-5). However, now we have a dilemma: if the other peer is not listening for incoming packets, how do we notify it of our intent? At a minimum, we need a shared signaling channel.

Figure 18-5. Shared signaling channel
WebRTC defers the choice of signaling transport and protocol to the application; the standard intentionally does not provide any recommendations or implementation for the signaling stack. Why? This allows interoperability with a variety of other signaling protocols powering existing communications infrastructure, such as the following:

Session Initiation Protocol (SIP)
Application-level signaling protocol, widely used for voice over IP (VoIP) and videoconferencing over IP networks.

Jingle
Signaling extension for the XMPP protocol, used for session control of voice over IP and videoconferencing over IP networks.

ISDN User Part (ISUP)
Signaling protocol used for setup of telephone calls in many public switched telephone networks around the globe.

A "signaling channel" can be as simple as a shout across the room—that is, if your intended peer is within shouting distance! The choice of the signaling medium and the protocol is left to the application.

A WebRTC application can choose to use any of the existing signaling protocols and gateways (Figure 18-6) to negotiate a call or a video conference with an existing communication system—e.g., initiate a "telephone" call with a PSTN client! Alternatively, it can choose to implement its own signaling service with a custom protocol.

Figure 18-6. SIP, Jingle, ISUP, and custom signaling gateways

The signaling server can act as a gateway to an existing communications network, in which case it is the responsibility of the network to notify the target peer of a connection offer and then route the answer back to the WebRTC client initiating the exchange. Alternatively, the application can use its own custom signaling channel, which may consist of one or more servers and a custom protocol to communicate the messages: if both peers are connected to the same signaling service, then the service can shuttle messages between them.

Skype is a great example of a peer-to-peer system with custom signaling: the audio and video communication are peer-to-peer, but Skype users have to connect to Skype’s signaling servers, which use their own proprietary protocol, to help initiate the peer-to-peer connection.
Selecting a Signaling Service

WebRTC enables peer-to-peer communication, but every WebRTC application will also need a signaling server to negotiate and establish the connection. What are our options?

There is a growing list of existing communication gateways that can interoperate with WebRTC. For example, Asterisk is a popular, free, and open source framework that is used by both individual businesses and large carriers around the world for their telecommunication needs. As an option, Asterisk has a WebSocket module, which allows SIP to be used as a signaling protocol: the browser establishes a WebSocket connection to the Asterisk gateway, and the two exchange SIP messages to negotiate the session!

Alternatively, the application can easily develop and deploy a custom signaling gateway if interoperability with other networks is not required. For example, a website may choose to offer peer-to-peer audio, video, and data exchange to its users: the site is already tracking which users are logged in, and it can keep signaling connections open to all of its online users. Then, when two peers want to initiate a peer-to-peer session, the site’s servers can relay the signaling messages between the clients.

There is no single correct choice for a signaling gateway: the choice depends on the requirements of the application. However, before you set out to invent your own, survey the available commercial and open source options first! And, of course, pay close attention to the underlying signaling transport, as it may have a significant impact on both the latency of the signaling channel and the client and server overhead; see “Application APIs and Protocols”.
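A custom relay of the kind just described amounts to very little code. The following is a minimal in-memory sketch; the class and method names are invented for illustration, and a production service would run this behind WebSocket connections and handle authentication, presence, and error cases.

```javascript
// Minimal in-memory signaling relay: peers register a delivery
// callback under a user ID, and the relay shuttles opaque messages
// (e.g., SDP offers and answers) between them.
function SignalingRelay() {
  this.peers = {};
}

SignalingRelay.prototype.register = function (userId, deliver) {
  this.peers[userId] = deliver;   // e.g., deliver = websocket.send.bind(websocket)
};

SignalingRelay.prototype.relay = function (from, to, message) {
  var deliver = this.peers[to];
  if (!deliver) return false;     // target peer is offline
  deliver({ from: from, message: message });
  return true;
};

// Amy sends Bob an (opaque) SDP offer through the relay:
var relay = new SignalingRelay();
var bobInbox = [];
relay.register("bob", function (msg) { bobInbox.push(msg); });
relay.relay("amy", "bob", "v=0 ... (SDP offer)");  // true; bobInbox now holds the offer
```

Note that the relay treats the messages as opaque blobs: the signaling server needs no understanding of SDP, only a way to route it between the two peers.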
Session Description Protocol (SDP)

Assuming the application implements a shared signaling channel, we can now perform the first steps required to initiate a WebRTC connection:
var signalingChannel = new SignalingChannel();  // initialize the shared signaling channel
var pc = new RTCPeerConnection({});             // initialize the RTCPeerConnection object

// Request an audio stream from the browser
navigator.getUserMedia({ "audio": true }, gotStream, logError);

function gotStream(stream) {
  pc.addStream(stream);               // register local audio stream with the RTCPeerConnection object

  pc.createOffer(function(offer) {    // create SDP (offer) description of the peer connection
    pc.setLocalDescription(offer);    // apply generated SDP as local description of peer connection
    signalingChannel.send(offer.sdp); // send generated SDP offer to remote peer via signaling channel
  });
}

function logError() { ... }
We will be using the unprefixed APIs in our examples, as they are defined by the W3C standard. Until the browser implementations are finalized, you may need to adjust the code for your favorite browser.

WebRTC uses Session Description Protocol (SDP) to describe the parameters of the peer-to-peer connection. SDP does not deliver any media itself; instead it is used to describe the "session profile," which represents a list of properties of the connection: types of media to be exchanged (audio, video, and application data), network transports, used codecs and their settings, bandwidth information, and other metadata.

In the preceding example, once a local audio stream is registered with the RTCPeerConnection object, we call createOffer() to generate the SDP description of the intended session. What does the generated SDP contain? Let’s take a look:
(... snip ...)
a=extmap:1 urn:ietf:params:rtp-hdrext:ssrc-audio-level
m=audio 1 RTP/SAVPF 111 ...
a=candidate:1862263974 1 udp 2113937151 192.168.1.73 60834 typ host ...
a=mid:audio
a=rtpmap:111 opus/48000/2
a=fmtp:111 minptime=10
(... snip ...)

The m=audio line requests a secure audio profile with feedback (RTP/SAVPF); the a=candidate line advertises a candidate IP, port, and protocol for the media stream; and the a=rtpmap and a=fmtp lines select the Opus codec and its basic configuration.
SDP is a simple text-based protocol (RFC 4566) for describing the properties of the intended session; in the previous case, it provides a description of the acquired audio stream. The good news is, WebRTC applications do not have to deal with SDP directly: the JavaScript Session Establishment Protocol (JSEP) abstracts all the inner workings of SDP behind a few simple method calls on the RTCPeerConnection object.

Once the offer is generated, it can be sent to the remote peer via the signaling channel. Once again, how the SDP is encoded is up to the application: the SDP string can be transferred directly as shown earlier (as a simple text blob), or it can be encoded in any other format—e.g., the Jingle protocol provides a mapping from SDP to XMPP (XML) stanzas.
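Because SDP is plain key=value text, pulling a property out of a session description takes only a few lines. The following is a sketch, not a full RFC 4566 parser; it only splits lines and attribute names, and the function name is invented for illustration.

```javascript
// Extract "a=" attributes from an SDP blob into a list of
// { name, value } pairs; e.g., "a=mid:audio" -> { name: "mid", value: "audio" }.
function sdpAttributes(sdp) {
  return sdp.split("\n")
    .filter(function (line) { return line.indexOf("a=") === 0; })
    .map(function (line) {
      var body = line.slice(2);
      var colon = body.indexOf(":");
      return colon === -1
        ? { name: body, value: null }
        : { name: body.slice(0, colon), value: body.slice(colon + 1) };
    });
}

var sdp = "m=audio 1 RTP/SAVPF 111\n" +
          "a=mid:audio\n" +
          "a=rtpmap:111 opus/48000/2";

sdpAttributes(sdp);
// [{ name: "mid", value: "audio" }, { name: "rtpmap", value: "111 opus/48000/2" }]
```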
To establish a peer-to-peer connection, both peers must follow a symmetric workflow (Figure 18-7) to exchange SDP descriptions of their respective audio, video, and other data streams.

Figure 18-7. Offer/answer SDP exchange between peers

1. The initiator (Amy) registers one or more streams with her local RTCPeerConnection object, creates an offer, and sets it as her "local description" of the session.
2. Amy then sends the generated session offer to the other peer (Bob).
3. Once the offer is received by Bob, he sets Amy’s description as the "remote description" of the session, registers his own streams with his own RTCPeerConnection object, generates the "answer" SDP description, and sets it as the "local description" of the session—phew!
4. Bob then sends the generated session answer back to Amy.
5. Once Bob’s SDP answer is received by Amy, she sets his answer as the "remote description" of her original session.

With that, once the SDP session descriptions have been exchanged via the signaling channel, both parties have negotiated the type of streams to be exchanged and their settings. We are almost ready to begin our peer-to-peer communication! Now, there is just one more detail to take care of: connectivity checks and NAT traversal.
Interactive Connectivity Establishment (ICE)

In order to establish a peer-to-peer connection, by definition, the peers must be able to route packets to each other. A trivial statement on the surface, but hard to achieve in practice due to the numerous layers of firewalls and NAT devices between most peers; see “UDP and Network Address Translators”.

First, let’s consider the trivial case, where both peers are located on the same internal network, and there are no firewalls or NATs between them. To establish the connection, each peer can simply query its operating system for its IP address (or multiple addresses, if there are multiple network interfaces), append the provided IP and port tuples to the generated SDP strings, and forward them to the other peer. Once the SDP exchange is complete, both peers can initiate a direct peer-to-peer connection.
The earlier SDP example illustrates this scenario: the a=candidate line lists a private (192.168.x.x) IP address for the peer initiating the session; see “Reserved Private Network Ranges”.

So far, so good. However, what would happen if one or both of the peers were on distinct private networks? We could repeat the preceding workflow, discover and embed the private IP addresses of each peer, but the peer-to-peer connections would obviously fail! What we need is a public routing path between the peers. Thankfully, the WebRTC framework manages most of this complexity on our behalf:

Each RTCPeerConnection connection object contains an "ICE agent."
The ICE agent is responsible for gathering local IP, port tuples (candidates).
The ICE agent is responsible for performing connectivity checks between peers.
The ICE agent is responsible for sending connection keepalives.

Once a session description (local or remote) is set, the local ICE agent automatically begins the process of discovering all the possible candidate IP, port tuples for the local peer:

1. The ICE agent queries the operating system for local IP addresses.
2. If configured, the ICE agent queries an external STUN server to retrieve the public IP and port tuple of the peer.
3. If configured, the ICE agent appends the TURN server as a last-resort candidate. If the peer-to-peer connection fails, the data will be relayed through the specified intermediary.

If you have ever had to answer the "What is my public IP address?" question, then you’ve effectively performed a manual "STUN lookup." The STUN protocol allows the browser to learn if it’s behind a NAT and to discover its public IP and port; see “STUN, TURN, and ICE”.
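A STUN Binding request is tiny: a 20-byte header with no attributes is a complete query. The following sketch shows the wire format defined in RFC 5389; the transaction ID here is fixed for illustration, whereas a real client must use 96 random bits.

```javascript
// Build a 20-byte STUN Binding request header (RFC 5389):
// 2-byte type, 2-byte length, 4-byte magic cookie, 12-byte transaction ID.
function stunBindingRequest(transactionId) {
  var msg = [
    0x00, 0x01,              // message type: Binding request
    0x00, 0x00,              // message length: 0 (no attributes follow)
    0x21, 0x12, 0xA4, 0x42   // magic cookie (fixed value mandated by the RFC)
  ];
  return msg.concat(transactionId);  // 12-byte transaction ID chosen by the client
}

// A real client sends these 20 bytes over UDP to the STUN server and
// reads its public IP and port from the XOR-MAPPED-ADDRESS attribute in the reply.
var txid = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12];  // illustration only; use random bytes
stunBindingRequest(txid).length;  // 20
```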
Whenever a new candidate (an IP, port tuple) is discovered, the agent automatically registers it with the RTCPeerConnection object and notifies the application via a callback function (onicecandidate). Once the ICE gathering is complete, the same callback is fired to notify the application. Let’s extend our earlier example to work with ICE:
var ice = {"iceServers": [
  {"url": "stun:stun.l.google.com:19302"}