(19) United States
(12) Patent Application Publication (10) Pub. No.: US 2004/0153473 A1
(43) Pub. Date: Aug. 5, 2004

Hutchinson et al.

US 20040153473A1

(54) METHOD AND SYSTEM FOR SYNCHRONIZING DATA IN PEER TO PEER NETWORKING ENVIRONMENTS

(76) Inventors: Norman Hutchinson, Richmond (CA); Joseph Wong, Vancouver (CA); Terry Coatta, Richmond (CA); James Wright, Vancouver (CA); Eddy Ma, Vancouver (CA)

Correspondence Address:
SONNENSCHEIN NATH & ROSENTHAL LLP
P.O. BOX 061080
WACKER DRIVE STATION, SEARS TOWER
CHICAGO, IL 60606-1080 (US)

(21) Appl. No.: 10/715,508

(22) Filed: Nov. 19, 2003

Related U.S. Application Data

(60) Provisional application No. 60/427,965, filed on Nov. 21, 2002. Provisional application No. 60/435,348, filed on Dec. 23, 2002. Provisional application No. 60/488,606, filed on Jul. 21, 2003.

Publication Classification

(51) Int. Cl.7 ..................................................... G06F 17/00
(52) U.S. Cl. ....................................................... 707/104.1

(57) ABSTRACT

Methods and systems in accordance with the present invention provide a peer-to-peer replicated hierarchical data store that allows the synchronization of the contents of multiple data stores on a computer network without the use of a master data store. The synchronization of a replicated data store stored on multiple locations is provided even when there is a constantly evolving set of communications partitions in the network. Each computer in the network may have its own representation of the replicated data store and may make changes to the data store independently, without consulting a master authoritative data store or requiring a consensus among other computers with representations of the data store. Changes to the data store may be communicated to the other computers by broadcasting messages in a specified protocol to the computers having a representation of the replicated data store. The computers receive the messages and process their local representation of the data store according to a protocol described below. As such, each computer has a representation of the replicated database that is consistent with the representations of the data store on the other computers. This allows computers to make changes to the data store even when disconnected via a network partition.

[Cover figure: memory 110 containing the data synchronization system 126, comprising the data store 124, in-memory database handler 304, scheduling queue, and protocol engine 302.]


[FIG. 1 (Sheet 1 of 7): system diagram showing computers 102 and 104 connected by a collaboration network, each with a computer display, input, audio/video, application, storage system, and data synchronization system.]


`

[FIG. 2 (Sheet 2 of 7): block diagram representing the logical structure of the replicated data store on a plurality of computers.]

`

`

[FIG. 3 (Sheet 3 of 7): detailed block diagram of memory 110 and secondary storage 112, showing the database, service core, and applications, and the data synchronization system 126 comprising the data store 124, in-memory database handler 304, protocol engine 302, scheduling queue 306, inbound queue, outbound queue, and scheduler (element 308 also labeled).]

`

`

[FIG. 4 (Sheet 4 of 7): flowchart for changing a node in a local data store. Change requested (402); send to protocol engine (404); changes consistent with local data store? (406); if no, return error to user (408); if yes, change in conflict with local data store, e.g., more recent than local value? (410); if no, discard change (412); if yes, modify local data store; broadcast message regarding change in accordance with protocol (418); end.]

`

`

[FIG. 5 (Sheet 5 of 7): flowchart for processing a received message. Message received (502); send to protocol engine (504); changes consistent with local data store? (506); if no, identify nearest parent that is consistent and broadcast to resolve structural difference (508); if yes, change in conflict with local data store, e.g., more recent than local value? (510); if no, discard change (512); if yes, modify local data store (514); child hash conflict? (516); broadcast message regarding change in accordance with protocol (518); end.]

`
`

`

[FIG. 6 (Sheet 6 of 7): pictorial representation of the node structure (link from /classes).]

`

`

[FIG. 7 (Sheet 7 of 7): flowchart for synchronizing clocks. Determine if a significant fraction of computers have clocks synchronized within a bound (702); if a synchronized cluster is found, set clocks of computers not within the bound to the median value of clocks in the cluster (706); otherwise, set clocks to the maximum clock value found (704).]

`

`

`
METHOD AND SYSTEM FOR SYNCHRONIZING DATA IN PEER TO PEER NETWORKING ENVIRONMENTS

RELATED APPLICATIONS

[0001] This application is related to, and claims priority to, the following U.S. Provisional Patent Applications, which are incorporated by reference herein:

[0002] U.S. Provisional Patent Application Serial No. 60/427,965, filed on Nov. 21, 2002, entitled "System and Method for Enhancing Collaboration using Computers and Networking."

[0003] U.S. Provisional Patent Application Serial No. 60/435,348, filed on Dec. 23, 2002, entitled "Method and System for Synchronizing Data in Ad Hoc Networking Environments."

[0004] U.S. Provisional Patent Application Serial No. 60/488,606, filed on Jul. 21, 2003, entitled "System and Method for Enhancing Collaboration using Computers and Networking."

[0005] This application is also related to the following U.S. patent applications, which are incorporated by reference herein:

[0006] U.S. patent application Ser. No. ______, filed on ______, entitled "Method and System for Synchronous and Asynchronous Note Timing in a System for Enhancing Collaboration Using Computers and Networking."

[0007] U.S. patent application Ser. No. ______, filed on ______, entitled "Method and System for Enhancing Collaboration Using Computers and Networking."

[0008] U.S. patent application Ser. No. ______, filed on ______, entitled "Method and System for Sending Questions, Answers and File Synchronously and Asynchronously in a System for Enhancing Collaboration Using Computers and Networking."

`
BACKGROUND

[0009] 1. Field of the Invention

[0010] The present invention generally relates to data processing systems and data store synchronization. In particular, methods and systems in accordance with the present invention generally relate to synchronizing the content of multiple data stores on a computer network comprising a variable number of computers connected to each other.

[0011] 2. Background

[0012] Conventional software systems provide for data to be stored in a coordinated manner on multiple computers. Such synchronization services ensure that the data accessed by any computer is the same as that accessed by any of the other computers. This can be accomplished by either: (1) centralized storage that stores data on a single computer and accesses the data from the remote computers, or (2) replicated storage that replicates data on each computer and employs transactions to ensure that changes to data are performed at the same time on each computer.

[0013] Centralized storage cannot be effectively used in environments where the set of interacting computers changes over time. In a centralized system, there is only one master computer or database, and accessing the data requires interacting with this computer. A master computer or database is one that is chosen as the authoritative source for information. If the underlying network is partitioned and a given computer is not in the partition in which the master resides, then that computer has no access to the data.

[0014] This limitation of centralized storage typically means that the viable solution for variable networks with changing partitions is some form of replicated storage. Conventional replication-based systems typically fall into two classes: (1) strong consistency systems, which use atomic transactions to ensure consistent replication of data across a set of computers, and (2) weak consistency systems, which allow replicas to be inconsistent with each other for a limited period of time. Applications accessing the replicated data store in a weak consistency environment may see different values for the same data item.

[0015] Data replication systems that utilize strong consistency are inappropriate for use in environments where the set of replicas can vary significantly over short time periods and where replicas may become disconnected for protracted periods of time. If a replica becomes unavailable during replication, it can prevent or delay achieving consistency amongst the replicas. In addition, systems based on strong consistency generally require more resources and processing time than is acceptable for a system that must replicate data quickly and efficiently over a set of computers with varying processing or memory resources available.

[0016] Data replication systems that rely on weak consistency can operate effectively in the type of network environment under consideration. There are numerous conventional systems based on weak consistency (e.g., Grapevine, Bayou, Coda, refdbms). However, these conventional systems typically are not optimized for broadcast communications, are not bandwidth efficient, and do not handle network partitioning well. It is therefore desirable to overcome these and related problems.

SUMMARY

[0017] Methods and systems in accordance with the present invention provide a peer-to-peer replicated hierarchical data store that allows the synchronization of the contents of multiple data stores on a computer network without the use of a master data store. The synchronization of a replicated data store stored on multiple locations is provided even when there is a constantly evolving set of communications partitions in the network. Each computer in the network may have its own representation of the replicated data store and may make changes to the data store independently, without consulting a master authoritative data store or requiring a consensus among other computers with representations of the data store. Changes to the data store may be communicated to the other computers by broadcasting messages in a specified protocol to the computers having a representation of the replicated data store. The computers receive the messages and process their local representation of the data store according to a protocol described below. As such, each computer has a representation of the replicated database that is consistent with the representations of the data store on the other computers. This allows computers to make changes to the data store even when disconnected via a network partition.

`
`

`

[0018] A method in a data processing system having peer-to-peer replicated data stores is provided comprising the steps of receiving, by a first data store, a plurality of values sent from a plurality of other data stores, and updating a value in the first data store based on one or more of the received values for replication.

[0019] A method in a data processing system is provided having a first data store and a plurality of other data stores, the first data store having a plurality of entries, each entry having a value, the method comprising the steps of receiving by the first data store a plurality of values from the other data stores for one of the entries. The method further comprises determining by the first data store which of the values is an appropriate value for the one entry, and storing the appropriate value in the one entry to accomplish replication.

[0020] A data processing system is provided having peer-to-peer replicated data stores and comprising a memory comprising a program that receives, by a first data store, a plurality of values sent from a plurality of other data stores, and updates a value in the first data store based on one or more of the received values for replication. The data processing system further comprises a processor for running the program.

[0021] A data processing system is provided having a first data store and a plurality of other data stores, the first data store having a plurality of entries, each entry having a value. The data processing system comprises a memory comprising a program that receives by the first data store a plurality of values from the other data stores for one of the entries, determines by the first data store which of the values is an appropriate value for the one entry, and stores the appropriate value in the one entry to accomplish replication. The data processing system further comprises a processor for running the program.

BRIEF DESCRIPTION OF THE DRAWINGS

[0022] The foregoing and other aspects in accordance with the present invention will become more apparent from the following description of examples and the accompanying drawings, which illustrate, by way of example only, principles in accordance with the present invention.

[0023] FIG. 1 depicts an exemplary system diagram of a data processing system in accordance with systems and methods consistent with the present invention.

[0024] FIG. 2 depicts a block diagram representing an exemplary logical structure of a data store on a plurality of computers.

[0025] FIG. 3 depicts a more detailed block diagram of a computer system including software operating on the computers of FIG. 1.

[0026] FIG. 4 depicts a flowchart indicating steps in an exemplary method for changing a node in a local data store.

[0027] FIG. 5 depicts a flowchart indicating steps in an exemplary method for processing a received message.

[0028] FIG. 6 depicts a pictorial representation of a data item, called a "node," stored in the data synchronization service implemented by the system of FIG. 3.

[0029] FIG. 7 depicts a flowchart indicating steps for synchronizing clocks.

DETAILED DESCRIPTION

[0030] Overview

[0031] Methods and systems in accordance with the present invention provide a peer-to-peer replicated hierarchical data store that allows the synchronization of the contents of multiple data stores on a computer network without the use of a master data store. The synchronization of a replicated data store stored on multiple locations is provided even when there is a constantly evolving set of communications partitions in the network. Each computer in the network may have its own representation of the replicated data store and may make changes to the data store independently, without consulting a master authoritative data store or requiring a consensus among other computers with representations of the data store. Changes to the data store may be communicated to the other computers by broadcasting messages in a specified protocol to the computers having a representation of the replicated data store. The computers receive the messages and process their local representation of the data store according to a protocol described below. As such, each computer has a representation of the replicated database that is consistent with the representations of the data store on the other computers. This allows computers to make changes to the data store even when disconnected via a network partition.

[0032] In one implementation, the system operates by the individual computers making changes to their data stores and broadcasting messages according to a protocol that indicates those changes. When a computer receives a message, it processes the message and manages the data store according to the protocol based on the received message. When conflicts arise between nodes on different data stores, generally, the most recently updated node in the data store is used.

[0033] The replicated hierarchical data store ("RHDS") has many potential applications in the general field of mobile computing. The RHDS may be used in conjunction with a synchronous real-time learning application which is described in further detail in U.S. patent application Ser. No. ______, entitled "Method and System for Enhancing Collaboration Using Computers and Networking," which was previously incorporated herein. In that application, the RHDS may be used to allow students and instructors with mobile computers (e.g., laptops) to interact with each other in a variety of ways. For example, the RHDS may be used to support the automatic determination of which users are present in an online activity. The software may achieve this by creating particular nodes within the RHDS when a participant joins or leaves an activity. In one implementation, the replication of these nodes to all other connected computers allows each computer to independently verify whether a given participant is online or not.

[0034] The RHDS can also be used to facilitate the discovery and configuration of resources in the local environment. For example, a printer could host the RHDS and write into it a series of nodes that described what sort of printer it was, what costs were associated with using it, and other such data. Upon connecting to that network, the RHDS running on a laptop computer would automatically receive all of this information. Application software could then query the contents of the local RHDS on the laptop and use that information to configure and access the printer. The network

`

`

in question could potentially be a wireless network so that all of these interactions could occur without any physical connection between the laptop and the printer.

`
[0035] The software system described herein may, in one implementation, include several exemplary features:

[0036] 1. One Message: The protocol described herein relies on the exchange of one type of message that carries a small amount of information. Additionally, participating computers are, in one implementation, required to retain no state other than the contents of the replicated data store itself. This makes the protocol suitable for implementation on computers with limited resources.

[0037] 2. Idempotency: The messages exchanged by the protocol are idempotent, meaning that they can be lost or duplicated by the network layer with no adverse effect on the operation of the system other than reduced performance. This makes the protocol viable in situations where network connectivity is poor.

[0038] 3. Peer-to-Peer: In one implementation, there is no requirement at any point in the execution of the protocol for the existence of a special "master" or "primary" computer. Replication may be supported between arbitrary sets of communicating computers, and the set of communicating computers can change over time.

[0039] 4. Broadcast: The protocol described herein may operate in environments that support broadcast communications. Messages are broadcast and can be used to perform pair-wise convergence by any receiver. This makes efficient use of available bandwidth since many replicas can be updated through the transmission of a single message.

[0040] 5. No Infrastructure: A replica can be created on any computer simply by executing the replication protocol. No consensus on the current set of active replicas is required.

[0041] 6. Transient Data: The protocol described herein supports both persistent and transient data. Transient data is replicated, but may be automatically removed from all replicas once it has expired. This makes it possible to aggressively replicate data without exhausting the resources of the participating computer systems.

`
[0042] System

[0043] FIG. 1 depicts an exemplary data processing system suitable for use in accordance with methods and systems consistent with the present invention. Each computer 102, 104 and 105 has operating software operating thereon which aids in the replication and synchronization of information. FIG. 1 shows computers 102 and 105 connected to a network, which may be wired or wireless, and may be a LAN or WAN, and any of the computers may represent any kind of data processing computer, such as a general-purpose data processing computer, a personal computer, a plurality of interconnected data processing computers, a video game console, a clustered server, a mobile computing computer, a personal data organizer, or a mobile communication computer including mobile telephones or similar computers. The computers 102, 104 and 105 may represent computers in a distributed environment, such as on the Internet. Computer 105 may have the same components as computers 102 and 104, although not shown. There may also be many more computers 102, 104 and 105 than shown on the figure.

[0044] A computer 102 includes a central processing unit ("CPU") 106, an input-output ("I/O") unit 108 such as a mouse or keyboard, or a graphical input computer such as a writing tablet, and a memory 110 such as a random access memory ("RAM") or other dynamic storage computer for storing information and instructions to be executed by the CPU. The computer 102 also includes a secondary storage 112 such as a magnetic disk or optical disk; these components may communicate with each other via a bus 114 or other communication mechanism. The computer 102 may also include a display 116 such as a cathode ray tube ("CRT") or LCD monitor, and an audio/video input 118 such as a webcam and/or microphone.

[0045] Although aspects of methods and systems consistent with the present invention are described as being stored in memory 110, one having skill in the art will appreciate that all or part of methods and systems consistent with the present invention may be stored on or read from other computer-readable media, such as secondary storage, like hard disks, floppy disks, and CD-ROM; a carrier wave received from a network such as the Internet; or other forms of ROM or RAM either currently known or later developed. Further, although specific components of the data processing system are described, one skilled in the art will appreciate that a data processing system suitable for use with methods, systems, and articles of manufacture consistent with the present invention may contain additional or different components. The computer 102 may include a human user or may include a user agent. The term "user" may refer to a human user, software, hardware or any other entity using the system. A user of a computer may include a student or an instructor in a class. The mechanism via which users access and modify information is a set of application programming interfaces ("APIs") that provide programmatic access to the replicated hierarchical data store 124 in accordance with the description discussed below. As shown, the memory 110 in the computer 102 may include a data synchronization system 128, a service core 130 and applications 132, which are discussed further below. Although only one application 132 is shown, any number of applications may be used. Additionally, although shown on the computer 102 in the memory 110, these components may reside elsewhere, such as in the secondary storage 112, or on another computer, such as another computer 102. Furthermore, these components may be hardware or software, whereas embodiments in accordance with the present invention are not limited to any specific combination of hardware and/or software. As discussed below, the secondary storage 112 may include a replicated hierarchical data store 124.

`
[0046] FIG. 1 also depicts a computer 104 that includes a CPU 106, an I/O unit 108, a memory 110, and a secondary storage computer 112 having a replicated hierarchical data store 124, all of which communicate with each other via a bus 114. The memory 110 may store a data synchronization system 126 which manages the data synchronization functions of the computer 104 and interacts with the data store 124 as discussed below. The secondary storage 112 may store directory information, recorded data, data to be shared, information pertaining to statistics, user data, multimedia files, etc. The data store 124 may also reside elsewhere, such as in memory 110. The computer 104 may also have many of the components mentioned in conjunction with the computer 102. There may be many computers 104 working in conjunction with one another. The data synchronization system 126 may be implemented in any way, in software or hardware or a combination thereof, and may be distributed among many computers. It may also be represented by any number of components, processes, threads, etc.

[0047] The computers 102, 104 and 105 may communicate directly or over networks, and may communicate via wired and/or wireless connections, including peer-to-peer wireless networks, or any other method of communication. Communication may be done through any communication protocol, including known and yet to be developed communication protocols. The computers 102, 104 and 105 may also have additional or different components than those shown.

`
[0048] FIG. 2 depicts the logical structure of an exemplary replicated hierarchical data store 124. Each particular instance of the data store 124 is hosted on its respective computer system 102, 104 and 105. The computers are connected to each other via a communications network that may be a wired connection (such as provided by Ethernet) or a wireless connection (such as provided by 802.11 or Bluetooth). The system may be implemented as a collection of software modules that provide replication of the data store across all instances of the data store as well as providing access to the data store on the local computer 102, 104 and 105.

[0049] The replicated data store 124 may be structured as a singly rooted tree of data nodes. When the data store 124 has converged according to the protocol described below, in one implementation, all instances of the data store 124 will be identical with one another both in structure and in content, except for local nodes which may differ from one instance to another. If there is a partition of the network such that, for example, computer 105 is no longer able to communicate with computers 102 and 104 for a period of time, then the data stores in 102/104 and 105 will evolve independently from one another. That is, a user making changes to the data store 124 on computer 105 can make those changes without consulting the data synchronization system 126 on computers 102 and 104. Similarly, users on computers 102 and 104 can make changes to their respective data stores 124 without consulting the data synchronization system 126 on computer 105.

[0050] When connectivity is restored amongst all computers 102, 104 and 105, the system propagates the independently made changes across all instances of the data store 124. In those cases where users made conflicting independent changes to the data store 124, these conflicts are resolved on a node-by-node basis. For each node for which there is a conflict, in one implementation, all instances of the data store 124 converge to the value of the node that was most recently modified (for example, in accordance with the description discussed below).

`
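The node-by-node, most-recently-modified convergence rule above can be sketched as a last-writer-wins merge. The `NodeVersion` type and its timestamp field below are illustrative assumptions, not the patent's actual data layout:

```python
from dataclasses import dataclass

@dataclass
class NodeVersion:
    """One replica's view of a node: its value plus a modification timestamp."""
    value: str
    modified_at: float  # assumed: clock value synchronized as in FIG. 7

def resolve(local: NodeVersion, remote: NodeVersion) -> NodeVersion:
    """Last-writer-wins: keep whichever version was modified most recently.

    Ties keep the local value here; the patent does not specify a tie-break.
    """
    return remote if remote.modified_at > local.modified_at else local

# Two partitions edited the same node independently; on reconnection,
# every replica applies the same rule and converges to the newer value.
a = NodeVersion("agenda-v1", modified_at=100.0)
b = NodeVersion("agenda-v2", modified_at=250.0)
assert resolve(a, b).value == "agenda-v2"
assert resolve(b, a).value == "agenda-v2"  # order of arrival does not matter
```

Because the rule is symmetric and deterministic, replicas converge to the same value regardless of the order in which broadcast messages arrive.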
[0051] FIG. 3 depicts a block diagram of a data synchronization system 126. Each system 126 may include three exemplary components: a protocol engine 302, a local memory resident version of the data store 124, and a database, which includes both an in-memory component 304 and persistent storage on disk 112, which provides persistent storage associated with computer 102. The protocol engine 302 on each computer 102, 104 and 105 communicates with the protocol engine on other computers 102, 104 and 105 via communication links for the purpose of replicating changes made on one computer system to other computer systems.

`
[0052] FIG. 4 depicts steps in an exemplary method to change a node in a local data store. For example, to change the value of a node or entry on computer 102 (step 402), an application program 130 communicates the desired change to the data synchronization system 126 using the APIs exposed by the system. The data synchronization system 126, in turn, communicates the changes to the protocol engine 302 (step 404). The protocol engine 302 verifies that the local changes are consistent with the local data store (step 406). Consistency, described in detail below, involves whether a change violates integrity constraints on the system. If the change is not consistent with the data store 124, an error is returned to the user (step 408).

[0053] If the change is consistent with the data store 124, it is determined whether the change is in conflict with the value in the data store (step 410), and then the memory resident copy of the data store is modified (step 414) if there is a conflict. Conflicts in the directory may mean that two or more entities of the directory have made changes in the same location, entry or value within the directory. A change may be in conflict if it is more recent than the local value in the data store 124. The conflicts are resolved by selecting and implementing the most recent modification, i.e., the one with the highest time stamp. If the change is not in conflict, e.g., not more recent than the local data store, the change may be discarded (step 412). On a regular basis, changes to the memory resident data store 124 may be written to persistent storage 112 to ensure that the contents of the data store survive computer reboots and failures. After making changes to the memory resident copy of the data store 124, the protocol engine 302 writes a message to the network containing details of the change made. In one implementation, these messages are broadcast to the network so they will be received by other protocol engines 302 on other computers 102, 104 and 105 (step 418).

`
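The local-change flow just described (steps 402 through 418) can be sketched as follows. The consistency predicate (parent-must-exist), the flat key-to-(value, timestamp) store, and the `broadcast` callable are stand-in assumptions for the checks and network write the protocol engine performs:

```python
def apply_local_change(store: dict, key: str, value: str, ts: float,
                       broadcast) -> str:
    """Sketch of FIG. 4: validate, conflict-check, apply, then broadcast.

    `store` maps a node key to a (value, timestamp) pair; `broadcast` is a
    callable standing in for the protocol engine's network write (step 418).
    """
    # Step 406: consistency check (illustrative: the node's parent must exist).
    parent = key.rsplit("/", 1)[0]
    if parent and parent not in store:
        return "error"            # step 408: return error to the user
    # Step 410: conflict check -- a change "in conflict" is one more recent
    # than the local value, and the most recent modification wins.
    current = store.get(key)
    if current is not None and ts <= current[1]:
        return "discarded"        # step 412: discard the stale change
    store[key] = (value, ts)      # step 414: modify the local data store
    broadcast({"key": key, "value": value, "ts": ts})  # step 418
    return "applied"

# Usage: a root node exists; a child change newer than the local copy applies.
sent = []
store = {"": ("root", 0.0), "/a": ("old", 1.0)}
assert apply_local_change(store, "/a", "new", 2.0, sent.append) == "applied"
assert store["/a"] == ("new", 2.0) and len(sent) == 1
```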
`[0054] FIG. 5 depicts steps of an exemplary method for
`processing a received message. On computer 104,
`for
`example, this message is received (step 502) and sent to the
`protocol engine 302 (step 504). The protocol engine 302
`verifies that the received changes are consistent with the
`local data store 124 (step 506). If the change is not consistent
`with the data store, the protocol engine 302 identifies the
`nearest parent in the data store that is consistent with the
`change (508) and broadcasts the state of the nearest parent
`(step 518) which will notify others and will be used to
`resolve the structural difference.
`
[0055] The protocol engine 302 then verifies whether the change conflicts with the contents of the local data store 124 (step 510). The most recent modification, i.e., the modification with the highest timestamp, may be selected and implemented. If there is a conflict, e.g., the change is more recent than the local data store value, the protocol engine 302 applies the change to the memory resident data store 124 (step 514). If the change does not conflict, the change may be discarded (step 512). On a regular basis, these
changes to the memory resident data store 124 are written to persistent storage 112 to ensure that the contents of the data store survive computer reboots and failures. After modifying the local data store 124, the protocol engine 302 may determine if the child hashes, described below, conflict, e.g., whether the children of the changed node conflict with the message. If so, the children and possibly the parent are broadcast to the rest of the network (step 518) to resolve the differences.
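The "child hashes" above suggest a Merkle-style digest: each replica summarizes a node's children in a single hash, and mismatching hashes reveal a structural difference without shipping the whole subtree. A sketch under that assumption; the specific hashing scheme is illustrative, not taken from the patent:

```python
import hashlib

def child_hash(children: dict) -> str:
    """Digest a node's children as (name, value) pairs in a fixed order.

    Sorting by name makes the digest independent of insertion order,
    so two replicas with the same children compute the same hash.
    """
    h = hashlib.sha256()
    for name in sorted(children):
        h.update(name.encode())
        h.update(b"\x00")  # separator so ("ab","c") differs from ("a","bc")
        h.update(children[name].encode())
        h.update(b"\x01")
    return h.hexdigest()

def children_conflict(local_children: dict, remote_hash: str) -> bool:
    """True when the local children disagree with the hash in the message."""
    return child_hash(local_children) != remote_hash
```

A replica whose local digest differs from the one in a received message knows the subtrees have diverged and can broadcast the children (and possibly the parent) to reconcile, as in step 518.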
[0056] In one implementation, methods and systems consistent with the present invention may provide a replicated hierarchical data store 124 that may include the following exemplary features:
[0057] The replicated hierarchical data store 124 is a singly rooted tree of nodes.

[0058] Each node has a name and value.

[0059] The name of a node is specified when it is created and may not be changed thereafter.

[0060] The name of the node is a non-empty string.

[0061] Each node may have zero or more children nodes.

[0062] Each child node is associated with a namespace.

[0063] The names of all child nodes within a given namespace are unique.

[0064] The namespace is a, possibly empty, string.

[0065] The namespace of a child node is specified when that node is created and may not be changed thereafter.

[0066] The parent of a node is specified when it is created, and may not be changed thereafter.

[0067] Each node may optionally have a value.

[0068] This value is represented as a, possibly empty, string.

[0069] Nodes may be deleted.

[0070] When a node is deleted, all of its child nodes are deleted (the delete operation is applied recursively).
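The invariants listed in paragraphs [0057] through [0070] can be captured in a small node class. A sketch, assuming the class and method names (which are ours, not the patent's):

```python
class Node:
    """A node in the singly rooted tree. The name, namespace, and parent
    are fixed at creation and may not change thereafter ([0059], [0065],
    [0066])."""

    def __init__(self, name: str, parent, namespace: str = "", value: str = ""):
        if not name:
            raise ValueError("node name must be a non-empty string")  # [0060]
        self._name, self._namespace, self._parent = name, namespace, parent
        self.value = value  # possibly empty string ([0068])
        self._children = {}  # maps (namespace, name) -> Node
        if parent is not None:
            key = (namespace, name)
            if key in parent._children:  # names unique per namespace ([0063])
                raise ValueError("duplicate name within namespace")
            parent._children[key] = self

    def delete(self) -> None:
        """Delete this node and, recursively, all of its children ([0070])."""
        for child in list(self._children.values()):
            child.delete()
        if self._parent is not None:
            del self._parent._children[(self._namespace, self._name)]
```

Keying children by the (namespace, name) pair enforces the uniqueness rule of [0063] while still allowing the same name to appear under different namespaces.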
`[0
