`ISI/RS−93−359
`August 1993
`
`Eve M. Schooler
`
`Case Study:
`Multimedia Conference Control in a
`Packet−switched Teleconferencing System
`
`Reprinted from the
`Journal of Internetworking: Research and Experience,
`Vol. 4, No. 2, pp. 99−120 (June 1993).
`
`University of Southern California
`Information Sciences Institute
`4676 Admiralty Way, Marina del Rey, CA 90292−6695
`310−822−1511
`
`This research was sponsored by the Defense Advanced Research Projects Agency under contract number DABT63−91−C−0001. Views and
`conclusions contained in this report are the author’s and should not be interpreted as representing the official opinion or policy of DARPA, the U.S.
`Government, or any person or agency connected with them.
`
`Case Study: Multimedia Conference Control
`in a Packet−switched Teleconferencing System
`
`Eve M. Schooler
`USC/Information Sciences Institute
`4676 Admiralty Way, Marina del Rey, CA 90292
`schooler@isi.edu
`
` Abstract
`
`MMCC, the multimedia conference control program, is a window−based tool for connection
`management. It serves as an application interface to a wide−area network packet teleconferencing
`system, in which it is used not only to orchestrate multisite conferences, but also to provide local
`and remote audio and video control, and to interact with other conference−oriented tools that
`support shared workspaces. In this paper we document the design, operation and continued
`evolution of MMCC. We present MMCC’s general architecture model, its connection control
`protocol and its relationship to other system components. This discussion raises issues about
`configuration management and the impact of conferencing over the Internet. Finally, we discuss
`MMCC’s influence on current directions for research in multimedia connection management, and on
`our efforts to design a scalable Internet teleconferencing architecture.
`
`Keywords: packet teleconferencing, connection architecture, conference control protocol, multimedia
`
`1. Introduction
`The Multimedia Conferencing project has focused on networking requirements for real−time teleconferencing.
`Toward this end, Information Sciences Institute (ISI) and Bolt, Beranek and Newman (BBN) have designed and
`implemented a suite of experimental packet protocols that operate at a number of levels in the protocol stack [39, 12,
`13, 30]. As proof of concept, we have developed an experimental packet teleconferencing system [6] that allows
`geographically separated individuals to collaborate by combining real−time packet−switched audio and video with
`shared computer workspaces, sometimes called groupware. The system currently operates at several sites on the
`Distributed Simulation Network, DSInet (previously known as the Terrestrial Wideband Network, TWBnet), and more
`recently has been ported to run over the DARPA Research Testbed, DARTnet. It has been operational since 1986 and
`has been used for more than 400 teleconferences, a third of which have been multisite meetings.
`
`This paper documents our experiences with the design, operation and evolution of the multimedia conference
control program (MMCC), the application interface to this teleconferencing system. After discussion of the
conferencing system in general, we present MMCC's peer-to-peer session model and an overview of its connection
`control protocol. We also describe the relationship between conference management and other system components.
`Issues are raised about the impact of conferencing over the Internet, including robustness, heterogeneity and
`scalability. Finally, we discuss MMCC’s influence on our current research in multimedia connection management and
`on our ideas for a scalable Internet teleconferencing architecture.
`
`2. Software Architecture Overview
`MMCC provides coordinated management of separate services, such as audio, video, and groupware [32]. MMCC
`runs continuously as part of one’s workstation environment and plays three basic roles. First and foremost, MMCC
`supplies connection control in the form of session establishment, maintenance and disconnection; it performs these
`tasks by communicating remotely with peer MMCCs using the Connection Control Protocol (CCP), then by
`communicating locally with separate programs that actually handle the audio, video, and groupware connections.
Second, MMCC provides a configuration control interface to the various media. At this interface we ask questions
`such as, "Do certain devices or individual media exist at a site? Are they available? Can they be configured as
`requested?"
To achieve configuration management, MMCC relies on interactions with several RPC-based configuration servers,
which control and configure combinations of local hardware devices (e.g., cameras, codecs, crossbar switches). Third,
MMCC presents a graphical user interface (GUI) to underlying system components. Its
`general organization is depicted in Figure 1.
`
[Figure 1. Coordinated Management of Separate Services. MMCC, with its user interface, provides connection control (speaking the Connection Control Protocol to peer MMCCs) and configuration control (speaking to the configuration servers), and coordinates the groupware, audio, and video agents.]
`
`In the current implementation of the system, the video, audio and groupware components or agents are distinct
`from each other and are also distinct from MMCC. Protocols have been devised to pass control information from
`MMCC to these subsystems, but MMCC leaves actual data flow to the underlying media−specific components
themselves. This allows each agent, when communicating with its counterpart at a remote participant's site, to select
the transport and internet-level protocol that best meets the needs of its data, be it real-time, non-real-time, or
control data.
`
`3. Connection Control Protocol
` The Connection Control Protocol is the session layer protocol used by MMCCs to communicate control
`information among themselves [30, 31]. Peer MMCCs reside on machines scattered throughout the network, either
`across a local area network or over a larger geographic distance, such as the Internet. Each MMCC is associated with
`a given user, at a given workstation and well−known port. MMCC notifies the user of requests from other users’
`MMCCs, and places requests to other MMCCs on behalf of the local user. As MMCC is equally likely to initiate
`requests as it is to receive them, each MMCC acts as both a server and client, and avoids reliance on any fixed
`conference moderator. Thus, group conferencing is achieved through use of a distributed, peer−to−peer model.
`
`CCP mainly focuses on those activities related to connection setup, modification and disconnection. For
`conference establishment, CCP employs a four−phase conference orchestration procedure as shown in Figure 2.
`During setup, the caller or initiator is responsible for (1) negotiating a common set of capabilities, (2) requesting
participation, (3) initiating media connections and (4) propagating information among peers. To guard against
distributed setup complications, the initiator acts as a leader until conference creation completes, at which time control
may be obtained by any member. To support multiway conversations, any number of callees may be associated with
a caller. In earlier implementations, the first two phases of conference setup were performed simultaneously for
quicker turnaround. More recently, we have been experimenting with provisions for a range of policies; for instance,
`earlier phases may be performed ahead of time, phases may be combined for better efficiency, or may be skipped
`altogether when deemed unimportant.
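
To make the phase ordering concrete, the following is a minimal sketch in C of the caller's four-phase orchestration; the helper routines (ccp_negotiate, ccp_invite, ccp_start_media, ccp_propagate_state) are hypothetical stand-ins for the actual CCP transactions and not MMCC's real interfaces.

    /* Sketch of the caller's four-phase conference orchestration. */
    typedef enum { NEGOTIATE, INVITE, START_MEDIA, PROPAGATE, DONE, FAILED } setup_phase_t;

    static int ccp_negotiate(void)       { return 0; }  /* (1) agree on a common set of capabilities */
    static int ccp_invite(void)          { return 0; }  /* (2) request participation                 */
    static int ccp_start_media(void)     { return 0; }  /* (3) initiate the media connections        */
    static int ccp_propagate_state(void) { return 0; }  /* (4) propagate information among peers     */

    int caller_setup(void)
    {
        setup_phase_t phase = NEGOTIATE;

        while (phase != DONE && phase != FAILED) {
            int rc = -1;
            switch (phase) {
            case NEGOTIATE:   rc = ccp_negotiate();       break;
            case INVITE:      rc = ccp_invite();          break;
            case START_MEDIA: rc = ccp_start_media();     break;
            case PROPAGATE:   rc = ccp_propagate_state(); break;
            default:          break;
            }
            /* The initiator leads until creation completes; any failure aborts setup. */
            phase = (rc == 0) ? (setup_phase_t)(phase + 1) : FAILED;
        }
        return (phase == DONE) ? 0 : -1;
    }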
`
In contrast to the caller, the callee transitions from Idle to Connected, the two persistent states in the connection
establishment process, by way of several intermediate states. The callee is notified of the requested configuration, checks
`that the request can be supported, alerts the user of the pending conference request, accepts the invitation, initiates the
`media connections and waits to receive global state information about the members participating in the conference.
`Each remote site is expected to report to the initiator as to the success or failure of each phase of the process, and the
`initiator redistributes this information as necessary among all conferees.
`
With the conference in place, the initiator reverts to having no special status. Other sites may be invited or may join
the conference at any time, and any member may initiate or accept these modifications. A participant may leave the
conference from any state, for example, by replying negatively to the participation request, by being unable to support
the request (a non-existent media configuration or unavailable bandwidth), or by invoking the disconnect button in the
graphical user interface. When a participant disconnects, the rest of the conference is left intact, unless it was a
two-party connection, in which case the conference is torn down entirely.
`
[Figure 2. CCP Conference Establishment. The figure shows the protocol stack (MMCC over CCP over a reliable group messaging layer over UDP sockets) and the caller and callee state transitions during conference establishment; the states shown are Idle, Negotiating, Negotiated, Initiating, Notified, Ringing, Accepted, Connecting, Synchronizing, Disconnecting, and Connected.]
`
`CCP supports other conference−related functions both during and between conferences. For instance, CCP carries
`requests between MMCCs to control remote audio and video devices (e.g., camera and codec switching), to initialize
`or pre−arrange conference parameters (e.g., conference identifier), and to communicate with shared tools. Because
`functionality may vary greatly between teleconferencing implementations, the present set of functions is extensible.
`Thus, CCP incorporates a built−in RPC mechanism to carry other types of operations for use between user interfaces
`and media agents (local or remote), without CCP having to know the particulars of each function (e.g., mute a
`particular audio stream).
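
The paper does not give CCP's wire format, so the following C sketch only illustrates one way such an extensible, agent-interpreted operation might be represented; the field names and sizes are assumptions, not CCP's actual encoding.

    #include <stdint.h>

    #define CCP_MAX_ARGS 256

    /* Hypothetical carrier for an extensible operation: CCP forwards the opaque
     * argument block without knowing the particulars of each function. */
    struct ccp_operation {
        uint32_t conference_id_hash; /* which conference the operation applies to      */
        uint32_t target_agent;       /* e.g., audio, video, or groupware agent         */
        uint32_t opcode;             /* agent-defined operation, e.g., mute a stream   */
        uint32_t arg_len;            /* length of the opaque argument block            */
        uint8_t  args[CCP_MAX_ARGS]; /* arguments interpreted only by the target agent */
    };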
`
`3.1 Transport Considerations
`The development of CCP has revealed as much about session level requirements as it has about those at the
`transport level. Because there is no central conference coordinator, peer MMCCs orchestrate and manage multisite
`conferences in a distributed manner through use of the CCP. Typically one MMCC sends another MMCC a request,
`then waits for a reply to determine the next course of action. To support multisite conferences, MMCCs also need to
query or submit requests to an entire group of remote sites at once. In those instances, an MMCC may need to
`collect responses from all group members before evaluating the success of the request. Therefore, CCP’s transport
`must support efficient transactions and provide reliable, group communication. To operate in the general Internet, it
`must also accommodate variability due to wide−area network (WAN) operation and due to heterogeneous end−system
`configurations that are inevitable as groups scale up.
`
`Our preference would have been to build CCP on top of someone else’s well−established group transaction
`transport service. While several were considered, no one service met all our criteria. The Sun RPC implementation is
`indeed a request−reply protocol, and through some lower level (and not necessarily portable) system calls allows
flexible timeouts, supports multiple outstanding requests, and enforces at-most-once execution [35]. Yet, the model is
rigidly client-server whereas conference management is peer-to-peer, and while this too is surmountable, there is the
`issue that a multicast group interface is missing from the Sun RPC specification [38]. Group−oriented protocols such
`as ISIS protocols [3] or VMTP [9, 10] might also have served well as a basis for CCP, but group communication
`under ISIS pays a high penalty unless the groups are large and is increasingly inefficient for widely dispersed groups,
`and group communication under VMTP is of the unreliable variety. In addition, ISIS requires a source license, while
VMTP requires kernel modifications, making neither easily accessible. We consider CCP's UDP-based group
transport service to be an interim solution, and expect to migrate to a more widely accepted messaging platform
once a suitable one becomes available. Below, we describe features of our group messaging service identified as
considerations for such a long-term solution.
`
`3.1.1 Efficient Transactions
`
`CCP is built on top of a flexible datagram message passing implementation that sits above UDP (Figure 2). This
`messaging service is transaction−oriented, in that communication between sender and receiver is of a request−reply
`nature. For optimal N−way performance, connectionless UDP transport was favored over TCP [26, 27]. Each site
`only needs one UDP socket with which all other sites can interact, whereas the comparable TCP implementation
would require N sockets per site, or N² sockets total [36]. Furthermore, UDP's connectionless service is ideal for
`MMCC operation. Because control information is exchanged infrequently by comparison to the duration of a typical
`conference, continuous TCP connections would go mostly unused, which would be inefficient.
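
As a concrete illustration of the single-socket arrangement, the following C sketch opens one UDP socket bound to a well-known port through which all peer MMCCs can be reached with sendto()/recvfrom(); the port number is illustrative, not the one actually used by MMCC.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    #define MMCC_PORT 5000  /* illustrative well-known port */

    int open_ccp_socket(void)
    {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0)
            return -1;

        memset(&addr, 0, sizeof(addr));
        addr.sin_family      = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port        = htons(MMCC_PORT);

        /* One socket serves every peer: datagrams to and from all sites
         * use sendto()/recvfrom() on this single descriptor. */
        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            close(fd);
            return -1;
        }
        return fd;
    }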
`
`3.1.2 Reliable, Group Communication
`
Because the datagram framework is an unreliable communication medium, and many, but not all, CCP
operations require reliable service, a combination of reliability mechanisms (sequence numbers, timeouts and
retransmissions) has been implemented above UDP in the messaging service. This is an important advantage because
`in our experience dedicated TCP connections, with built−in reliable transport, do not survive the long duration (hours)
`of a session due to intermittent Internet network outages [32].
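
The following C sketch illustrates the kind of per-request bookkeeping such a reliability layer needs, namely a sequence number plus timeout and retry state; the field names are assumptions for illustration, not the actual messaging-service format.

    #include <stdint.h>
    #include <time.h>

    struct msg_header {
        uint32_t seq;         /* sequence number, echoed in the matching ACK/reply  */
        uint32_t type;        /* REQUEST, ACK, or REPLY                             */
    };

    struct pending_request {
        struct msg_header hdr;
        time_t   sent_at;     /* when the request (or last retransmission) was sent */
        int      retries;     /* retransmissions so far                             */
        int      max_retries; /* give up and report failure after this many         */
        double   timeout_sec; /* per-site timeout, derived from the RTT estimate    */
    };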
`
`MMCC is as likely to perform a group transaction as it is a point−to−point one. Therefore, CCP has a group
interface to the transaction service, where a list of participants is sent the same message simultaneously from CCP's
perspective. The system currently relies on sequential interaction to emulate multicast. Each peer is contacted in
sequence to reach group decisions or to complete group updates. This simplifies group transaction processing, but the
lack of parallelism results in longer delays before consensus is reached. During setup for a sizable conference,
sequential communication with each participant would become too time-consuming. Therefore, for scalability,
experimentation with pipelined group transactions, as well as IP multicast [15], is underway.
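
A minimal sketch of the sequential emulation of multicast described above follows, assuming a reliable request-reply primitive ccp_transact() (a hypothetical name); the linear growth in latency is precisely what motivates the pipelining and IP multicast experiments.

    /* Assumed reliable request-reply primitive provided by the messaging service. */
    int ccp_transact(int peer, const void *req, void *reply);

    int group_transact(const int *peers, int npeers, const void *req, void *replies[])
    {
        int i;
        for (i = 0; i < npeers; i++) {
            /* No parallelism: each peer is contacted in turn, so total latency
             * grows linearly with the size of the group. */
            if (ccp_transact(peers[i], req, replies[i]) < 0)
                return -1;  /* the group request fails if any member is unreachable */
        }
        return 0;
    }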
`
`3.1.3 Operating Environment Variability: Support for WAN use and heterogeneity
`
`An appropriate timeout length is difficult to predict over as varied and dynamic a community of networks and
end systems as the Internet. Therefore, transaction timeouts are managed and dynamically adjusted on a
`site−by−site basis. These values are regularly updated, as is done for TCP [4, 19, 20], to reflect the expected
`roundtrip time (RTT) for a message between two MMCCs. RTT is useful not only for request−reply timeout
`approximations, but also for media synchronization. In future implementations, MMCC may convey RTT information
`to its media agents for the synchronization of data flows that travel over different routes between source and
`destination. Therefore, the messaging service will be expected to report these values through an up−link interface.
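
For illustration, a per-site smoothed RTT estimate in the TCP style [4, 19, 20] might look like the following; the 1/8 smoothing gain is TCP's conventional value, not necessarily the one the MMCC messaging service uses.

    struct site_rtt {
        double srtt_sec;   /* smoothed roundtrip time for one peer MMCC */
    };

    void update_rtt(struct site_rtt *s, double measured_sec)
    {
        if (s->srtt_sec == 0.0)
            s->srtt_sec = measured_sec;                          /* first sample       */
        else
            s->srtt_sec += 0.125 * (measured_sec - s->srtt_sec); /* EWMA with gain 1/8 */
    }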
`
`For requests that require lengthy activity before issuing a response, we use a two−tier acknowledgment scheme,
`where the request−reply pair is broken into two components. When a request is sent to a remote MMCC, unless the
reply will be nearly instantaneous, the recipient sends an immediate acknowledgment (ACK) of receipt back to the
`sender. The recipient sends a full−fledged reply once the result of the request is learned. Figure 3 contrasts these
`approaches.
`
A timeout period still needs to be established for an acceptable interval between the ACK and the full reply. That
interval varies, depending mainly on the type of CCP request made and the state the remote site is in at the time of
`the request. The ACK message contains the suggested response timeout, and default values for the different types of
`requests are estimated by the site receiving the request. The response interval in combination with a dynamically
`adjusted roundtrip time gives an accurate timeout duration for each transaction. Thus, the messaging service adjusts
`the overall timeout and retransmission values on behalf of the diverse needs of CCP−supported functions.
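
Putting the pieces together, the overall transaction timeout might be computed as sketched below; the request types and default response intervals are invented for illustration only.

    /* Illustrative default response intervals (seconds), suggested by the site
     * receiving the request; actual MMCC values are not given in the paper. */
    enum req_type { REQ_NEGOTIATE, REQ_INVITE, REQ_MEDIA_START, REQ_TYPES };

    static const double default_interval_sec[REQ_TYPES] = { 2.0, 30.0, 10.0 };

    /* Overall timeout: the responder's suggested response interval (carried in
     * the ACK) plus the dynamically adjusted roundtrip estimate for that site. */
    double transaction_timeout(enum req_type t, double srtt_sec)
    {
        return default_interval_sec[t] + srtt_sec;
    }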
`
[Figure 3. Two-Tier Acknowledgment. The figure contrasts two Site A / Site B message timelines: a two-tier exchange in which an immediate ACK (from which the RTT can be measured) precedes the eventual Reply, and a plain Request/Reply exchange in which the appropriate timeout is uncertain.]
`
`3.2 State Synchronization
Due to the distributed nature of the system, cooperating MMCCs may get out of synchronization; for example, site A
thinks it is in conference with site B, but site B thinks it is not in conference at all. This may be the result of an unsuccessful
`disconnect, due to network partitioning, or a user unexpectedly restarting the MMCC program while it is connected.
`If site A subsequently receives a conference request from site B, it realizes something is amiss, then attempts to
`resynchronize its state with site B. At that time, site A tries to ascertain the global state of the conference, by
`communicating with any other sites with which it still thinks it is in conference, potentially correcting other sites’
`state information in the process.
`
`Besides discovery of such problems through connection or disconnection requests, CCP also provides more
`aggressive options to counteract state mismatch problems. It offers the option to exchange state information with
`every message, or to periodically trigger active state queries, then use the synchronization algorithm to resolve state
`discrepancies. Although the synchronization algorithm has poor scaling properties, it performs satisfactorily in the
`current operating environment, where the overhead is kept low by small numbers of conferees per conference (there
`are only a small number of available conference sites). However, it raises the issue that state synchronization may be
`inappropriate for very large sessions.
`
`The original implementation of the system had one−conference−at−a−time semantics. We therefore provided
`support in MMCC to detect and resolve connection collisions, those instances where identical connection requests
`between a pair of sites cross in transit, i.e., site A sends a request to connect to site B, at the same time B sends a
request to connect to site A. MMCC compared network addresses; the site with the greater internetwork address was
selected to continue as the initiator, while the other site returned to idle and completed the connection as an invitee.
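
The tie-breaking rule reduces to a single comparison, sketched here in C for 32-bit internetwork addresses compared in host byte order; the function name is illustrative.

    #include <stdint.h>

    /* Returns 1 if the local site should remain the initiator, or 0 if it should
     * fall back to idle and complete the connection as an invitee. */
    int win_collision(uint32_t local_addr, uint32_t remote_addr)
    {
        return local_addr > remote_addr;
    }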
`
`Within a multi−conference framework, a conference identifier is used by CCP to distinguish requests for one
`conference from those of another conference. Assigned on a per conference basis, the identifier consists of the
initiator’s login id, the initiator’s internetwork address and an NTP timestamp [23]. However, without the luxury of a
`directory service, MMCC cannot use this identifier to detect and resolve connection collisions. Instead, MMCC
`considers conferences identical when the participant list and requested configuration (underlying agents, media
`encodings and data rates) are the same. Because this comparison only occurs during conference orchestration,
`separate but identical requests that occur sequentially (outside the setup period) would result in two separate
`conferences, as expected.
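
The following C sketch captures both notions of identity described above: the per-conference identifier (initiator login, initiator address, NTP timestamp) and the looser participant-list/configuration comparison used during orchestration. All types, sizes, and field names are illustrative.

    #include <stdint.h>
    #include <string.h>

    struct conf_id {
        char     initiator_login[32];
        uint32_t initiator_addr;   /* initiator's internetwork address */
        uint64_t ntp_timestamp;    /* NTP-format creation time [23]    */
    };

    struct conf_request {
        int      nparticipants;
        uint32_t participants[16]; /* site addresses, assumed sorted   */
        uint32_t media_agents;     /* bitmask of requested agents      */
        uint32_t encoding;         /* requested media encoding         */
        uint32_t data_rate_kbps;   /* requested data rate              */
    };

    /* During setup, two requests are treated as the same conference when the
     * participant lists and requested configurations match. */
    int same_conference(const struct conf_request *a, const struct conf_request *b)
    {
        return a->nparticipants == b->nparticipants
            && memcmp(a->participants, b->participants,
                      a->nparticipants * sizeof(uint32_t)) == 0
            && a->media_agents == b->media_agents
            && a->encoding == b->encoding
            && a->data_rate_kbps == b->data_rate_kbps;
    }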
`
`3.3 Remote Control
`The ISI/BBN teleconferencing system currently provides meeting−room−to−meeting−room style conferences. A
`few people convene per site, then a conference is established among the different sites. A workstation resides at each
`site where a general purpose teleconferencing account is used to run MMCC. This setup is a result of practicality
`(equipment costs) rather than intention. Those who use the system borrow the conference room, the teleconferencing
`account and workstation, and typically are less interested in conference establishment. They want to arrive to find the
`conference already in place.
`
To facilitate this mode of operation, we have designed an autopilot mode: the local site may instruct
a remote site to answer yes automatically to connection requests. This is helpful when MMCC is already running at a
`remote site and no one is around to click on the appropriate buttons for confirmation. On disconnection, any site
`connected to in this fashion will be automatically disconnected. Alternately, the controlling site may selectively
`disconnect autopiloted sites and leave the rest of the conference intact. Autopilot mode is a valuable asset in the
`current system because it facilitates remote repair. After a network partition, a remote site may remotely disconnect a
`site that is out of synchronization.
`
`Autopiloting leads to the issue of remote control in general. To what degree should remote sites be able to
`control a local site’s resources (e.g., cameras)? Because the answer inevitably varies −− depending on everything
`from each participant’s feelings on the matter, to the formality or informality of the meeting −− MMCC provides a
`range of exclusivity levels. If a site decides it does not want to grant privileges to a remote site to control its local
`resources, remote control may be disallowed. A site may also establish an exclusive conference, where any incoming
`requests for participation are deflected without troubling the already−in−conference members. An option is being
`considered to disallow autopilot entirely. From an operations point of view, however, the continued availability of an
`override mechanism is attractive. With the current system, situations have arisen where a remote operator either
`would like to or needs to "fix" an otherwise "broken" site. The current strategy is that by default exclusivity levels
`are wide open in MMCC (autopilot is available for use and exclusive conferencing is turned off).
`
`4. Configuration Management
MMCC uses configuration management to coordinate cameras, monitors, codecs (coders/decoders), audio
equipment, and other hardware devices at each site. For instance, some sites are configured with both a room view
`camera that is mounted above the video monitor and displays the participants, and a copy stand camera for slides or
`graphic stills. Others simply have one camera. Sites may have one, two or three different video codecs. These
`codecs support several different modes (point−to−point vs multipoint, in−band vs out−of−band audio) and a range of
`data rates and data encodings.
`
`Each MMCC keeps track of the different numbers and types of equipment or capabilities available at its own site
`and uses this information to decide if connection requests, specifically the configuration requirements, can or cannot
`be met. Since each site can select among several viable configuration alternatives each time a conference is set up,
`MMCC’s task is to make sure that these configuration options are coordinated when a conference is established; each
`participating MMCC either denies the request if it is for an unsupported or unavailable configuration, or accepts it,
`switching the codec type and video data rate to match the initiator.
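
A site's accept-or-deny decision amounts to a local capability check like the one sketched below in C; the structure fields are hypothetical simplifications of the real capability data kept by each MMCC.

    struct site_capabilities {
        int has_room_camera;
        int has_copystand_camera;
        int num_codecs;
        int codec_busy;            /* nonzero if the codec is currently in use */
        int max_data_rate_kbps;    /* highest rate the local codec supports    */
    };

    struct config_request {
        int needs_copystand;
        int data_rate_kbps;
    };

    /* Returns 1 if the requested configuration can be met (so the codec can be
     * switched to the initiator's rate), or 0 if the request must be denied. */
    int config_supported(const struct site_capabilities *cap,
                         const struct config_request *req)
    {
        if (req->needs_copystand && !cap->has_copystand_camera)
            return 0;
        if (cap->num_codecs == 0 || cap->codec_busy)
            return 0;
        if (req->data_rate_kbps > cap->max_data_rate_kbps)
            return 0;
        return 1;
    }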
`
`To coordinate multimedia resources, each MMCC relies on several configuration servers that are accessed using
`Sun RPC [35], although not all servers are required for MMCC to function. The servers communicate with and
`control various local hardware devices over RS232 serial lines. Each server is a separate program that either is
co-located with MMCC on the same workstation or executes on another workstation across the (often local)
network. There are currently three such servers, as displayed in Figure 4. Suffice it to say that one controls codec,
camera and monitor selection, another configures the A/V crossbar switch, and another accommodates video floor
control and limited bandwidth conferences. Each is tailored to mask the hardware specifics of the device(s) it
controls.
`
`Unfortunately, the degree to which we can monitor these services varies. The hardware attached to the codec
`server, for instance, only supports the unidirectional flow of information. There is no return channel to learn whether
`or not a request for service has succeeded. In contrast, bidirectional communication exists between the other two
`servers and their devices, allowing errors to be captured by the local MMCC, then shared with any remote MMCC(s)
`involved in the transaction.
`
[Figure 4. Configuration Servers. MMCC communicates via Sun RPC with the Selection Server, the Crossbar Server, and the Floor/BW Server, which control equipment over serial lines (labels shown: PC/ATCOMM, Crossbar Switch, PictureTel Codec, All Codecs, Audio & Video Equipment).]
`
`5. Real−time Media Agents
`
`The audio and video agents are separate modules from MMCC that may be used individually or collectively to
`set up real−time media connections manually. Originally implemented on a BBN Butterfly multiprocessor, they have
`since been ported to a SPARC platform (Figure 5), and may be ported to other workstations in the future. MMCC
`acts as a front end to these programs to coordinate their usage, as they often require similar session information.
`MMCC is usually run from a workstation in the same room as the audio and video equipment, and it communicates
`with these agents across the network. Although they were intended to be co−located with MMCC (on the same
`LAN), there is nothing to prevent greater separation; for instance, conference operators have run MMCCs that control
A/V agents located across the country or on another continent. There is also nothing to prevent video, audio,
groupware and MMCC processing from residing on the same workstation, provided the architecture of the machine
(CPU, operating system, network interface, etc.) can meet the performance demands.
`
`Live motion video is currently displayed on a video monitor that is separate from the workstation monitor where
`MMCC executes. Although the video agent receives video streams from all participating sites, depending on the type
of video codec in use, it delivers only one to four streams at a time to the codec. Therefore the monitor
`can be used to display up to four sites in separate quadrants or to display a single image using the full screen. To
`support video stream selection in multisite conferences, MMCC allows users to select among and to organize the
`display of the video streams. It then conveys these choices to the video agent, which in turn relays these
`requirements to any video−related hardware through the configuration servers. Using software decoding, one may also
`display each remote video stream in its own window on the workstation screen.
`
`The audio and video agents were originally developed as research vehicles to explore issues in real−time transport
`and internetwork protocols. Real−time media is digitized by separate codecs, then packetized by the audio and video
`agents. This process generates data rates typically in the range of 64 kbps ADPCM audio and 128 kbps compressed
`video. To establish video conferences with in−band audio, no separate audio packetizer is used; the combined data
`stream is entirely processed by the video agent and its associated video codec. On arrival, the combined data stream
`is processed by the receiving video agent as usual, then forwarded to and unbundled by the video codec.
`
`−7−
`
`CISCO Exhibit 1014, pg. 8
`
`
`
`The agents provide two data paths for the real−time media. One approach uses the Network Voice Protocol
`(NVP) [12] and the Packet Video Protocol (PVP) [13] to packetize the audio and video data respectively. Data is
then encapsulated in the STream protocol (ST) [39], an experimental internetwork protocol which provides
`performance guarantees to real−time data through use of resource reservation and multicast communication. The
`alternate data path utilizes UDP in combination with Multicast IP [15] in both agents. With either approach, the data
`is ultimately routed across the DARTnet or DSInet testbeds, or the Multicast Backbone (MBONE), a virtual IP
`multicast backbone network spanning major portions of the Internet.
`
[Figure 5. System Components. A SPARCstation hosts MMCC, the groupware, and the audio and video agents (packetizers), attached to an echo canceller, an audio codec, and a video codec (NTSC and RGB signals), and connected over Ethernet to an ST-II or IP Multicast router that reaches DARTnet, DSInet, and the MBONE.]

6. Groupware Agents
`
`MMCC interacts with the BBN MMConf program [14] to support a suite of shared applications, including shared
`documents, shared presentation tools, and shared video map browsers. Upon request, MMCC automatically establishes
`and disconnects an MMConf session in parallel with voice and video connections, bringing up MMConf in a
`designated conference directory. In addition, MMCC interacts with Mbftptool, a window−based application to run
`multiple background FTPs among conference participants [16]. Mbftptool may be used both before and during
`conferences to pre−stage transfers of files intended to be shared.
`
`MMCC allows a user to assign a conference alias to loosely couple with these conferencing tools. The
`conference alias usually is the subject of the meeting or the name of the group that is gathering, and provides a more
`accessible binding of participants to a session than the conference ID. The alias is also used as the name of a
`"conference directory" to which shared applications connect and in which shared files are placed. It is distributed to
`participating sites by MMCC, which also makes sure the conference directory exists at each site. Like the conference
`ID, the alias may be set at connection initiation or beforehand in anticipation of an upcoming conference.
`
`MMCC, MMConf and Mbftptool are independent applications that all support a conference mode. Integration
`with MMCC is possible because MMConf and Mbftptool, like the audio and video agents, allow other processes to
`rendezvous with them via interprocess communication. MMCC forks these applications and is able, through a
`combination of signals, control ports, process identifiers, files and command line arguments, to convey state
`information and state changes to them. The interaction between MMCC and these agents has raised several concerns.
`Namely, how can we avoid redundant steps needed by all conference−oriented software during conference setup (e.g.,
`invitations to participate, authorization through password specification), and can we provide a more general way for
all agents (groupware, as well as audio and video) to be integrated into the conferencing system? We believe that a
well-defined interface should be designed between these modules, and we propose an improved architecture in section 8.
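
For illustration, the fork/exec coupling with a groupware agent described above might look like the following C sketch; the program name and command-line flags are hypothetical, not MMConf's actual invocation.

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    pid_t launch_groupware(const char *conference_dir, const char *control_port)
    {
        pid_t pid = fork();
        if (pid == 0) {
            /* Child: start the shared-workspace agent in the conference directory,
             * telling it where to rendezvous with MMCC for control messages. */
            execlp("mmconf", "mmconf", "-dir", conference_dir,
                   "-port", control_port, (char *)NULL);
            perror("execlp");   /* reached only if exec fails */
            _exit(1);
        }
        return pid;             /* parent keeps the pid to signal state changes */
    }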
`
`−8−
`
`CISCO Exhibit 1014, pg. 9
`
`
`
`7. Discussion
`
There are several difficult aspects to providing a robust and rich conference control service. Most of the
difficulties stem from running a distributed program across the Internet, but they also encompass the growing
heterogeneity among site configurations, dissimilar routing for different media, and the difficulty of maintaining
complete monitoring information.
`
`7.1 Distributed Program Complications
`MMCC suffers from the usual complications of distributed programs. It needs to withstand network partitions and
`broken or transient routes. This can lead to curious behavior, especially in the absence of a central server for
`definitive state information. Because MMCC’s control protocol is based on UDP, it is less disturbed by network
`breaks. However, groupware applications such as MMConf that use TCP for reliable delivery and for synchronization
`of shared data files are more sensitive to network outages. They usually have no provisions to repair a group session
`in the face of a temporary loss in connectivity of even a single group member.
`
`MMCC is somewhat better in that it is willing to tolerate temporarily unreachable peers under certain conditions.
`It is strictest about reliability during conference initiation; two sites must handshake correctly during the connection
`procedure, otherwise the conference is not established. Reliable communication is still important, but somewhat less
`enforceable at disconnection time; a site tries its best to inform a remote site that it is leaving the conference, but if
`no reply is received and all retries have also failed, it leaves the conference anyway. MMCC is least strict about
`guaranteeing less−critical functions