`Apple v. Virnetx
`Case IPR2013-00349
`
`Page 1 of 62
U.S. Patent    Feb. 10, 2009    Sheet 5 of 35    US 7,490,151 B2

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 14 of 35    US 7,490,151 B2

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent    US 7,490,151 B2

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 18 of 35    US 7,490,151 B2

FIG. 15 [drawing sheet; legible labels: 4095, (ETHERNET LAN - TWO A ADDRESS BLOCKS); remaining figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 19 of 35    US 7,490,151 B2

FIG. 17 [drawing sheet; legible labels: INACTIVE, ACTIVE, USED, WINDOW_SIZE; remaining figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 20 of 35    US 7,490,151 B2

FIG. 18 [drawing sheet; legible labels: WINDOW_SIZE; remaining figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 21 of 35    US 7,490,151 B2

FIG. 19 [drawing sheet; legible labels: INACTIVE, ACTIVE, USED, WINDOW_SIZE; remaining figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 22 of 35    US 7,490,151 B2

FIG. 20 [drawing sheet; legible labels: COMPUTER #1, COMPUTER #2, 2011; remaining figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 23 of 35    US 7,490,151 B2

FIG. 21 [drawing sheet; legible labels: elements 2100-2109; IP1, IP2, IP3, IP4; AD TABLE, AE TABLE, AF TABLE, BD TABLE, BE TABLE, CD TABLE, CE TABLE, CF TABLE; LINK DOWN; remaining figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 24 of 35    US 7,490,151 B2

FIG. 22A [flowchart; legible boxes: MEASURE QUALITY OF TRANSMISSION PATH X; MORE THAN ONE TRANSMITTER TURNED ON?; SET WEIGHT TO MIN. VALUE (2209); DECREASE WEIGHT FOR PATH X; INCREASE WEIGHT FOR PATH X TOWARD STEADY STATE VALUE; ADJUST WEIGHTS FOR REMAINING PATHS SO THAT WEIGHTS EQUAL ONE]
U.S. Patent    Feb. 10, 2009    Sheet 25 of 35    US 7,490,151 B2

FIG. 22B [flowchart; legible boxes: (EVENT) TRANSMITTER FOR PATH X TURNS OFF; AT LEAST ONE TRANSMITTER TURNED ON?; DROP ALL PACKETS UNTIL A TRANSMITTER TURNS ON; SET WEIGHT TO ZERO; ADJUST WEIGHTS FOR REMAINING PATHS SO THAT WEIGHTS EQUAL ONE]
U.S. Patent    Feb. 10, 2009    Sheet 26 of 35    US 7,490,151 B2

[Drawing sheet: figure text largely not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 27 of 35    US 7,490,151 B2

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 28 of 35    US 7,490,151 B2

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 30 of 35    US 7,490,151 B2

FIG. 27 [flowchart, elements 2701-2706; legible boxes: RECEIVE DNS REQUEST FOR TARGET SITE; ACCESS TO SECURE SITE REQUESTED?; USER AUTHORIZED TO CONNECT?; RETURN "HOST UNKNOWN" ERROR; ESTABLISH VPN WITH TARGET SITE]
U.S. Patent    US 7,490,151 B2

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 33 of 35    US 7,490,151 B2

[Drawing sheet: figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 34 of 35    US 7,490,151 B2

FIG. 31 [drawing sheet; legible labels: HACKER, CLIENT #2, elements 3104, 3105, 3107, 3208-3210, TX/RX; remaining figure text not recoverable from OCR]
U.S. Patent    Feb. 10, 2009    Sheet 35 of 35    US 7,490,151 B2

FIG. 32 [client/server message diagram; legible text:
CLIENT: SEND DATA PACKET USING CKPT_N; CKPT_O=CKPT_N; GENERATE NEW CKPT_N; START TIMER, SHUT TRANSMITTER OFF; IF CKPT_O IN SYNC_ACK MATCHES TRANSMITTER'S CKPT_O, UPDATE RECEIVER'S CKPT_R, KILL TIMER, TURN TRANSMITTER ON; WHEN TIMER EXPIRES, TRANSMIT SYNC_REQ USING TRANSMITTER'S CKPT_O, START TIMER.
SERVER: PASS DATA UP STACK; GENERATE NEW CKPT_N; GENERATE NEW CKPT_R FOR TRANSMITTER SIDE; TRANSMIT SYNC_ACK CONTAINING CKPT_O; CKPT_O=CKPT_N.
Messages: SYNC_REQ, SYNC_ACK]
`
`
`US 7,490,151 B2
`
`1
`ESTABLISHMENT OF A SECURE
`COMMUNICATION LINK BASED ON A
`
`DOMAIN NAME SERVICE (DNS) REQUEST
`
`CROSS-REFERENCE TO RELATED
`APPLICATIONS
`
`This application is a divisional application of 09/504,783
`(filed Feb. 15, 2000), now U.S. Pat. No. 6,502,135, issued
`Dec. 31, 2002, which claims priority from and is a continua-
`tion-in-part of previously filed U.S. application Ser. No.
`09/429,643 (filed Oct. 29, 1999) now U.S. Pat. No. 7,010,604.
`The subject matter of the ’643 application, which is bodily
`incorporated herein, derives from provisional U.S. applica-
`tion No. 60/106,261 (filed Oct. 30, 1998) and 60/137,704
`(filed Jun. 7, 1999).
`
`GOVERNMENT CONTRACT RIGHTS
`
`This invention was made with Government support under
`Contract No. 360000-1999-000000-QC-000-000 awarded by
`the Central Intelligence Agency. The Government has certain
`rights in the invention.
`
`BACKGROUND OF THE INVENTION
`
`A tremendous variety of methods have been proposed and
`implemented to provide security and anonymity for commu-
`nications over the Internet. The variety stems, in part, from the
`different needs of different Internet users. A basic heuristic
`
`framework to aid in discussing these different security tech-
`niques is illustrated in FIG. 1. Two terminals, an originating
`terminal 100 and a destination terminal 110 are in communi-
`cation over the Internet. It is desired for the communications
`
`to be secure, that is, immune to eavesdropping. For example,
`terminal 100 may transmit secret information to terminal 110
`over the Internet 107. Also, it may be desired to prevent an
`eavesdropper from discovering that terminal 100 is in com-
munication with terminal 110. For example, if terminal 100 is
`a user and terminal 110 hosts a web site, terminal 100’s user
`may not want anyone in the intervening networks to know
`what web sites he is “visiting.” Anonymity would thus be an
`issue, for example, for companies that want to keep their
`market research interests private and thus would prefer to
`prevent outsiders from knowing which web-sites or other
`Internet resources they are “visiting.” These two security
`issues may be called data security and anonymity, respec-
`tively.
`Data security is usually tackled using some form of data
`encryption. An encryption key 48 is known at both the origi-
`nating and terminating terminals 100 and 110. The keys may
`be private and public at the originating and destination termi-
`nals 100 and 110, respectively or they may be symmetrical
`keys (the same key is used by both parties to encrypt and
`decrypt). Many encryption methods are known and usable in
`this context.
`
To hide traffic from a local administrator or ISP, a user can
employ a local proxy server in communicating over an
encrypted channel with an outside proxy such that the local
administrator or ISP only sees the encrypted traffic. Proxy
`servers prevent destination servers from determining the
`identities of the originating clients. This system employs an
`intermediate server interposed between client and destination
`server. The destination server sees only the Internet Protocol
(IP) address of the proxy server and not the originating client.
`The target server only sees the address of the outside proxy.
`This scheme relies on a trusted outside proxy server. Also,
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`60
`
`65
`
`2
`
proxy schemes are vulnerable to traffic analysis methods of
`determining identities of transmitters and receivers. Another
`important limitation of proxy servers is that the server knows
`the identities of both calling and called parties. In many
`instances, an originating terminal, such as terminal A, would
`prefer to keep its identity concealed from the proxy, for
example, if the proxy server is provided by an Internet service
`provider (ISP).
`To defeat traffic analysis, a scheme called Chaum’s mixes
`employs a proxy server that transmits and receives fixed
`length messages, including dummy messages. Multiple origi-
`nating terminals are connected through a mix (a server) to
`multiple target servers. It is difficult to tell which of the
originating terminals are communicating to which of the con-
`nected target servers, and the dummy messages confuse
`eavesdroppers’ efforts to detect communicating pairs by ana-
`lyzing traffic. A drawback is that there is a risk that the mix
`server could be compromised. One way to deal with this risk
`is to spread the trust among multiple mixes. If one mix is
`compromised, the identities of the originating and target ter-
`minals may remain concealed. This strategy requires a num-
`ber of alternative mixes so that the intermediate servers inter-
`
`posed between the originating and target terminals are not
`determinable except by compromising more than one mix.
`The strategy wraps the message with multiple layers of
`encrypted addresses. The first mix in a sequence can decrypt
`only the outer layer of the message to reveal the next desti-
`nation mix in sequence. The second mix can decrypt the
`message to reveal the next mix and so on. The target server
`receives the message and, optionally, a multi-layer encrypted
`payload containing return information to send data back in
`the same fashion. The only way to defeat such a mix scheme
`is to collude among mixes. If the packets are all fixed-length
`and intermixed with dummy packets, there is no way to do
any kind of traffic analysis.
`Still another anonymity technique, called ‘crowds,’ pro-
`tects the identity of the originating terminal from the inter-
`mediate proxies by providing that originating terminals
belong to groups of proxies called crowds. The crowd proxies
`are interposed between originating and target terminals. Each
`proxy through which the message is sent is randomly chosen
by an upstream proxy. Each intermediate proxy can send the
`message either to another randomly chosen proxy in the
`“crowd” or to the destination. Thus, even crowd members
cannot determine if a preceding proxy is the originator of the
`message or if it was simply passed from another proxy.
`ZKS (Zero-Knowledge Systems) Anonymous IP Protocol
`allows users to select up to any of five different pseudonyms,
while desktop software encrypts outgoing traffic and wraps it
in User Datagram Protocol (UDP) packets. The first server in
`a 2+-hop system gets the UDP packets, strips off one layer of
`encryption to add another, then sends the traffic to the next
`server, which strips off yet another layer of encryption and
`adds a new one. The user is permitted to control the number of
hops. At the final server, traffic is decrypted with an untrace-
`able IP address. The technique is called onion-routing. This
method can be defeated using traffic analysis. For a simple
`example, bursts of packets from a user during low-duty peri-
`ods can reveal the identities of sender and receiver.
`
`Firewalls attempt to protect LANs from unauthorized
`access and hostile exploitation or damage to computers con-
`nected to the LAN. Firewalls provide a server through which
`all access to the LAN must pass. Firewalls are centralized
`systems that require administrative overhead to maintain.
`They can be compromised by virtual-machine applications
`(“applets”). They instill a false sense of security that leads to
`security breaches for example by users sending sensitive
`
`information to servers outside the firewall or encouraging use
`of modems to sidestep the firewall security. Firewalls are not
`useful for distributed systems such as business travelers,
`extranets, small teams, etc.
`
`SUMMARY OF THE INVENTION
`
A secure mechanism for communicating over the Internet,
including a protocol referred to as the Tunneled Agile Routing
`Protocol (TARP), uses a unique two-layer encryption format
`and special TARP routers. TARP routers are similar in func-
`tion to regular IP routers. Each TARP router has one or more
`IP addresses and uses normal IP protocol to send IP packet
`messages
`(“packets” or “datagrams”). The IP packets
`exchanged between TARP terminals via TARP routers are
`actually encrypted packets whose true destination address is
`concealed except to TARP routers and servers. The normal or
`“clear” or “outside” IP header attached to TARP IP packets
`contains only the address of a next hop router or destination
`server. That is, instead of indicating a final destination in the
`destination field of the IP header, the TARP packet’s IP
`header always points to a next-hop in a series of TARP router
`hops, or to the final destination. This means there is no overt
`indication from an intercepted TARP packet of the true des-
`tination of the TARP packet since the destination could
`always be next-hop TARP router as well as the final destina-
`tion.
`
`Each TARP packet’s true destination is concealed behind a
`layer of encryption generated using a link key. The link key is
`the encryption key used for encrypted communication
`between the hops intervening between an originating TARP
`terminal and a destination TARP terminal. Each TARP router
`
`can remove the outer layer of encryption to reveal the desti-
`nation router for each TARP packet. To identify the link key
`needed to decrypt the outer layer of encryption of a TARP
`packet, a receiving TARP or routing terminal may identify the
`transmitting terminal by the sender/receiver IP numbers in the
`cleartext IP header.
`
`Once the outer layer of encryption is removed, the TARP
`router determines the final destination. Each TARP packet
140 undergoes a minimum number of hops to help foil traffic
`analysis. The hops may be chosen at random or by a fixed
`value. As a result, each TARP packet may make random trips
`among a number of geographically disparate routers before
`reaching its destination. Each trip is highly likely to be dif-
`ferent for each packet composing a given message because
`each trip is independently randomly determined. This feature
`is called agile routing. The fact that different packets take
`different routes provides distinct advantages by making it
`difficult for an interloper to obtain all the packets forming an
`entire multi-packet message. The associated advantages have
`to do with the inner layer of encryption discussed below.
`Agile routing is combined with another feature that furthers
`this purpose; a feature that ensures that any message is broken
`into multiple packets.
`The IP address of a TARP router can be changed, a feature
`called IP agility. Each TARP router, independently or under
`direction from another TARP terminal or router, can change
`its IP address. A separate, unchangeable identifier or address
`is also defined. This address, called the TARP address, is
`known only to TARP routers and terminals and may be cor-
`related at any time by a TARP router or a TARP terminal using
`a Lookup Table (LUT). When a TARP router or terminal
`changes its IP address, it updates the other TARP routers and
`terminals which in turn update their respective LUTs.
`The message payload is hidden behind an inner layer of
`encryption in the TARP packet that can only be unlocked
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`60
`
`65
`
`4
`
`using a session key. The session key is not available to any of
`the intervening TARP routers. The session key is used to
decrypt the payloads of the TARP packets permitting the data
`stream to be reconstructed.
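[Editorial illustration, not part of the patent specification: the two-layer scheme described above, an inner payload layer under a session key and an outer layer under a per-hop link key concealing the true destination, might be sketched as below. The XOR keystream is a toy stand-in for a real cipher, and all function names, key values, and the 4-byte destination format are invented for illustration.]

```python
import hashlib

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher (stand-in for a real cipher such as AES)."""
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ k for b, k in zip(data, stream))

def tarp_wrap(payload: bytes, true_dest: bytes,
              session_key: bytes, link_key: bytes) -> bytes:
    # Inner layer: payload encrypted under the end-to-end session key,
    # which intervening routers do not hold.
    inner = keystream_xor(session_key, payload)
    # Outer layer: inner payload plus the true (4-byte, invented format)
    # destination, encrypted under the per-hop link key.  Only a
    # cleartext next-hop IP header would remain visible on the wire.
    return keystream_xor(link_key, true_dest + inner)

def tarp_unwrap_outer(packet: bytes, link_key: bytes):
    """Router step: remove the outer layer to learn the next destination."""
    plain = keystream_xor(link_key, packet)
    return plain[:4], plain[4:]   # (true destination, still-encrypted inner)
```

A router holding only the link key recovers the destination but not the payload; only the endpoint with the session key can decrypt the inner layer.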
`
`Communication may be made private using link and ses-
`sion keys, which in turn may be shared and used according to
`any desired method. For example, public/private keys or sym-
`metric keys may be used.
`To transmit a data stream, a TARP originating terminal
`constructs a series of TARP packets from a series of IP pack-
`ets generated by a network (IP) layer process. (Note that the
`terms “network layer,” “data link layer,” “application layer,”
`etc. used in this specification correspond to the Open Systems
`Interconnection (OSI) network terminology.) The payloads
`of these packets are assembled into a block and chain-block
`encrypted using the session key. This assumes, of course, that
`all the IP packets are destined for the same TARP terminal.
`The block is then interleaved and the interleaved encrypted
`block is broken into a series of payloads, one for each TARP
`packet to be generated. Special TARP headers IPT are then
`added to each payload using the IP headers from the data
`stream packets. The TARP headers can be identical to normal
`IP headers or customized in some way. They should contain a
`formula or data for deinterleaving the data at the destination
`TARP terminal, a time-to-live (TTL) parameter to indicate
`the number of hops still to be executed, a data type identifier
`which indicates whether the payload contains, for example,
TCP or UDP data, the sender’s TARP address, the destination
`TARP address, and an indicator as to whether the packet
`contains real or decoy data or a formula for filtering out decoy
`data if decoy data is spread in some way through the TARP
`payload data.
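[Editorial illustration, not part of the patent specification: the assemble-then-chain-then-interleave sequence described above might be sketched as follows. The chained block transform is a toy illustration of chaining (not a real block cipher), and the byte-wise round-robin interleave is one invented choice of interleaving formula.]

```python
import hashlib

BLOCK = 16

def chain_block_encrypt(block: bytes, key: bytes) -> bytes:
    # Pad to a block multiple, then chain: each plaintext block is mixed
    # with the previous ciphertext block before the (toy) block transform,
    # so no block can be decrypted without its predecessors.
    block += b"\x00" * (-len(block) % BLOCK)
    prev, out = b"\x00" * BLOCK, b""
    for i in range(0, len(block), BLOCK):
        mixed = bytes(a ^ b for a, b in zip(block[i:i + BLOCK], prev))
        pad = hashlib.sha256(key + prev).digest()[:BLOCK]
        prev = bytes(a ^ b for a, b in zip(mixed, pad))
        out += prev
    return out

def interleave(data: bytes, n_packets: int):
    # Spread the encrypted block byte-wise across n payloads; an
    # interloper needs every packet to reassemble the block.
    return [data[i::n_packets] for i in range(n_packets)]

def deinterleave(payloads) -> bytes:
    out = bytearray(sum(len(p) for p in payloads))
    for i, p in enumerate(payloads):
        out[i::len(payloads)] = p
    return bytes(out)
```

The destination terminal would deinterleave first, then undo the chained decryption; missing any one payload leaves every chained block unrecoverable.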
`Note that although chain-block encryption is discussed
`here with reference to the session key, any encryption method
`may be used. Preferably, as in chain block encryption, a
`method should be used that makes unauthorized decryption
`difficult without an entire result of the encryption process.
`Thus, by separating the encrypted block among multiple
`packets and making it difficult for an interloper to obtain
`access to all of such packets, the contents of the communica-
`tions are provided an extra layer of security.
`Decoy or dummy data can be added to a stream to help foil
`traffic analysis by reducing the peak-to-average network load.
`It may be desirable to provide the TARP process with an
`ability to respond to the time of day or other criteria to gen-
erate more decoy data during low traffic periods so that com-
`munication bursts at one point in the Internet cannot be tied to
`communication bursts at another point to reveal the commu-
`nicating endpoints.
`Dummy data also helps to break the data into a larger
`number of inconspicuously-sized packets permitting the
`interleave window size to be increased while maintaining a
`reasonable size for each packet. (The packet size can be a
`single standard size or selected from a fixed range of sizes.)
`One primary reason for desiring for each message to be bro-
`ken into multiple packets is apparent if a chain block encryp-
`tion scheme is used to form the first encryption layer prior to
interleaving. A single block encryption may be applied to a
portion, or the entirety, of a message, and that portion or entirety
`then interleaved into a number of separate packets. Consid-
`ering the agile IP routing of the packets, and the attendant
`difficulty of reconstructing an entire sequence of packets to
`form a single block-encrypted message element, decoy pack-
`ets can significantly increase the difficulty of reconstructing
`an entire data stream.
`
`The above scheme may be implemented entirely by pro-
`cesses operating between the data link layer and the network
`
`layer of each server or terminal participating in the TARP
`system. Because the encryption system described above is
`insertable between the data link and network layers, the pro-
`cesses involved in supporting the encrypted communication
`may be completely transparent to processes at the IP (net-
`work) layer and above. The TARP processes may also be
`completely transparent to the data link layer processes as
`well. Thus, no operations at or above the Network layer, or at
or below the data link layer, are affected by the insertion of the
`TARP stack. This provides additional security to all processes
`at or above the network layer, since the difficulty of unautho-
`rized penetration of the network layer (by, for example, a
`hacker) is increased substantially. Even newly developed
`servers running at the session layer leave all processes below
`the session layer vulnerable to attack. Note that in this archi-
`tecture, security is distributed. That is, notebook computers
`used by executives on the road, for example, can communi-
`cate over the Internet without any compromise in security.
`IP address changes made by TARP terminals and routers
`can be done at regular intervals, at random intervals, or upon
`detection of “attacks.” The variation of IP addresses hinders
`
traffic analysis that might reveal which computers are com-
`municating, and also provides a degree of immunity from
`attack. The level of immunity from attack is roughly propor-
`tional to the rate at which the IP address of the host is chang-
`ing.
`As mentioned, IP addresses may be changed in response to
`attacks. An attack may be revealed, for example, by a regular
`series of messages indicating that a router is being probed in
`some way. Upon detection of an attack, the TARP layer pro-
`cess may respond to this event by changing its IP address. In
`addition, it may create a subprocess that maintains the origi-
`nal IP address and continues interacting with the attacker in
`some manner.
`
`Decoy packets may be generated by each TARP terminal
`on some basis determined by an algorithm. For example, the
`algorithm may be a random one which calls for the generation
`of a packet on a random basis when the terminal is idle.
`Alternatively, the algorithm may be responsive to time of day
`or detection of low traffic to generate more decoy packets
during low traffic times. Note that packets are preferably
`generated in groups, rather than one by one, the groups being
`sized to simulate real messages. In addition, so that decoy
`packets may be inserted in normal TARP message streams,
`the background loop may have a latch that makes it more
`likely to insert decoy packets when a message stream is being
`received. Alternatively, if a large number of decoy packets is
`received along with regular TARP packets, the algorithm may
`increase the rate of dropping of decoy packets rather than
`forwarding them. The result of dropping and generating
`decoy packets in this way is to make the apparent incoming
`message size different from the apparent outgoing message
size to help foil traffic analysis.
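[Editorial illustration, not part of the patent specification: a decoy-generation policy responsive to idleness, traffic level, and the "latch" while a stream is being received, as described above, might look like the sketch below. All probabilities and thresholds are invented.]

```python
import random

def should_emit_decoy(idle: bool, traffic_level: float,
                      receiving_stream: bool, rng=random.random) -> bool:
    """Decide whether to emit a decoy packet this tick.

    traffic_level is a normalized 0.0-1.0 load estimate; lower load
    means more decoys, flattening the peak-to-average network load.
    """
    p = 0.05                                    # baseline decoy probability
    if idle:
        p += 0.25                               # idle terminals emit decoys
    p += 0.3 * max(0.0, 1.0 - traffic_level)    # low traffic -> more decoys
    if receiving_stream:
        p += 0.2    # the "latch": blend decoys into incoming real streams
    return rng() < min(p, 1.0)
```

In practice decoys would be emitted in message-sized groups rather than singly, per the text above.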
`In various other embodiments of the invention, a scalable
version of the system may be constructed in which a plurality
`of IP addresses are preassigned to each pair of communicat-
`ing nodes in the network. Each pair of nodes agrees upon an
`algorithm for “hopping” between IP addresses (both sending
`and receiving), such that an eavesdropper sees apparently
`continuously random IP address pairs (source and destina-
`tion) for packets transmitted between the pair. Overlapping or
`“reusable” IP addresses may be allocated to different users on
`the same subnet, since each node merely verifies that a par-
`ticular packet includes a valid source/destination pair from
`the agreed-upon algorithm. Source/destination pairs are pref-
`
`5
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`50
`
`55
`
`60
`
`65
`
`6
`erably not reused between any two nodes during any given
`end-to-end session, though limited IP block sizes or lengthy
`sessions might require it.
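[Editorial illustration, not part of the patent specification: an agreed-upon hopping algorithm of the kind described above can be sketched by deriving each packet's (source, destination) pair from a shared secret, so both nodes compute the same sequence while an eavesdropper sees apparently random pairs. The HMAC derivation and the 10.0/10.1 address pools are invented.]

```python
import hashlib
import hmac

def hop_pair(shared_secret: bytes, index: int):
    """Derive the (source, destination) IP pair for packet number `index`."""
    digest = hmac.new(shared_secret, index.to_bytes(8, "big"),
                      hashlib.sha256).digest()
    # Map digest bytes into two invented address pools.
    src = "10.0.%d.%d" % (digest[0], digest[1])
    dst = "10.1.%d.%d" % (digest[2], digest[3])
    return src, dst

def packet_valid(shared_secret: bytes, index: int,
                 src: str, dst: str) -> bool:
    # Receiver-side check: accept only packets whose source/destination
    # pair matches the agreed-upon algorithm; others are discarded.
    return (src, dst) == hop_pair(shared_secret, index)
```

Because validity depends only on the shared algorithm, overlapping address pools can be reused by different node pairs on the same subnet, as the text notes.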
`Further improvements described in this continuation-in-
`part application include: (1) a load balancer that distributes
`packets across different transmission paths according to
`transmission path quality; (2) a DNS proxy server that trans-
`parently creates a virtual private network in response to a
`domain name inquiry; (3) a large-to-small link bandwidth
management feature that prevents denial-of-service attacks at
system chokepoints; (4) a traffic limiter that regulates incom-
`ing packets by limiting the rate at which a transmitter can be
`synchronized with a receiver; and (5) a signaling synchro-
`nizer that allows a large number of nodes to communicate
`with a central node by partitioning the communication func-
tion between two separate entities.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`FIG. 1 is an illustration of secure communications over the
`
`Internet according to a prior art embodiment.
`FIG. 2 is an illustration of secure communications over the
`
Internet according to an embodiment of the invention.
FIG. 3a is an illustration of a process of forming a tunneled
IP packet according to an embodiment of the invention.
FIG. 3b is an illustration of a process of forming a tunneled
IP packet according to another embodiment of the invention.
FIG. 4 is an illustration of an OSI layer location of pro-
cesses that may be used to implement the invention.
FIG. 5 is a flow chart illustrating a process for routing a
tunneled packet according to an embodiment of the invention.
FIG. 6 is a flow chart illustrating a process for forming a
tunneled packet according to an embodiment of the invention.
FIG. 7 is a flow chart illustrating a process for receiving a
tunneled packet according to an embodiment of the invention.
`FIG. 8 shows how a secure session is established and
`
`synchronized between a client and a TARP router.
`FIG. 9 shows an IP address hopping scheme between a
`client computer and TARP router using transmit and receive
`tables in each computer.
`FIG. 10 shows physical link redundancy among three Inter-
`net Service Providers (ISPs) and a client computer.
`FIG. 11 shows how multiple IP packets can be embedded
`into a single “frame” such as an Ethernet frame, and further
`shows the use of