(12) United States Patent
Reuter et al.

(10) Patent No.: US 6,745,207 B2
(45) Date of Patent: Jun. 1, 2004

(54) SYSTEM AND METHOD FOR MANAGING VIRTUAL STORAGE

(75) Inventors: James E. Reuter, Colorado Springs, CO (US); David W. Thiel, Colorado Springs, CO (US); Richard F. Wrenn, Colorado Springs, CO (US); Andrew C. St. Martin, Las Vegas, NV (US)

(73) Assignee: Hewlett-Packard Development Company, L.P., Houston, TX (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 424 days.

(21) Appl. No.: 09/872,583
(22) Filed: Jun. 1, 2001
(65) Prior Publication Data: US 2002/0019908 A1, Feb. 14, 2002
(60) Provisional application No. 60/209,108, filed on Jun. 2, 2000.

(51) Int. Cl.: G06F 12/00
(52) U.S. Cl.: 707/200; 709/1; 707/203; 707/100
(58) Field of Search: 707/1-10, 100-104.1, 200; 700/19-20, 245, 248; 709/1, 100, 223-226, 227-229, 321-327; 711/1-6, 100, 200, 203, 209

(56) References Cited

U.S. PATENT DOCUMENTS
5,226,141 A * 7/1993 Esbensen
5,392,244 A * 2/1995 Jacobson et al.
5,408,630 A * 4/1995 Moss
5,485,321 A * 1/1996 Leonhardt et al.
5,542,065 A * 7/1996 Burkes et al.
5,546,557 A * 8/1996 Allen et al.
5,546,558 A * 8/1996 Jacobson et al.
5,574,862 A * 11/1996 Marianetti
5,617,552 A * 4/1997 Garber et al.
5,651,133 A * 7/1997 Burkes et al.
5,918,229 A * 6/1999 Davis et al.
6,269,431 B1 * 7/2001 Dunham
6,321,276 B1 * 11/2001 Forin
6,343,324 B1   1/2002 Hubis et al. ............... 709/229
6,389,432 B1 * 5/2002 Pothapragada et al. .... 707/205
6,460,113 B1 * 10/2002 Schubert et al. ........... 711/111
6,538,669 B1 * 3/2003 Lagueux et al. ........... 345/764
6,633,962 B1 * 10/2003 Burton et al. ............. 711/163
6,640,278 B1 * 10/2003 Nolan et al. .............. 711/6
6,647,387 B1 * 11/2003 McKean et al. ........... 707/9
6,654,830 B1 * 11/2003 Taylor et al. ............. 710/74

OTHER PUBLICATIONS
Veritas, "Applications in Storage Area Networking, Using Storage Management Software in Today's SAN Environment", Veritas Software Corp., Feb. 2000, pp. 1-11.*
Kenneth Jensen, "Managing the SAN", Gadzoox Networks Inc., 1999, pp. 1-12.*
Montague, Robert M. et al., Virtualizing the SAN, Morgan Keegan & Company, Inc., Jul. 5, 2000, pp. 1-20.

* cited by examiner

Primary Examiner: Greta Robinson
Assistant Examiner: Linh Black

(57) ABSTRACT

Preferred embodiments of the present invention provide a system and method for the management of virtual storage. The system and method include an object-oriented computer hardware/software model that can be presented, for example, via a management interface (e.g., via graphical user interfaces, command line interfaces, application programming interfaces, etc.). In some preferred embodiments, the model separates physical storage management from virtual disks presented to hosts, and management can be automated such that the user (e.g., customer, manager and/or administrator) specifies goals rather than means, enhancing ease of use while maintaining flexible deployment of storage resources.

18 Claims, 28 Drawing Sheets
[Representative front-page drawing: a ROOT POOL over PHYSICAL STORAGE presenting VIRTUAL DISK X (LUN 1, LUN 2, LUN 3), VIRTUAL DISK Y (LUN 1, LUN 2), and VIRTUAL DISK Z (LUN 3).]
[Drawing Sheets 1-28 (FIGS. 1-29), US 6,745,207 B2: the sheet artwork is not recoverable from this text extraction. Recoverable captions and panel titles include: FIG. 1 (hosts and a communication channel in system 100); FIGS. 2-4 (pool/sub-pool hierarchy, logical and physical views, and a root pool presenting virtual disks X, Y and Z as LUNs over physical storage); FIG. 5 (management agent, management client consoles, local console management interface); FIGS. 6-7 (RAID 5 volume layouts); FIGS. 8-14 ("New Virtual Disk Wizard - Typical", steps 1 of 6 through 6 of 6: name, RAID level and write-cache policy, zone selection, unit name and type, unit access and host system); FIGS. 15-25 (virtual disk navigation, folder and family views, properties, capacity, presentation, consistency-set and caching operations); FIGS. 26-29 (virtual disk properties pages: general, unit access, unit protocols, caching, replication, zone selection).]
SYSTEM AND METHOD FOR MANAGING VIRTUAL STORAGE

RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application No. 60/209,108, filed on Jun. 2, 2000, entitled Structure For Managing The Virtualization Of Block Storage, the disclosure of which is hereby incorporated by reference in its entirety. Additionally, the entire disclosures of the present assignee's following utility patent applications, filed on the same date as the present application, are both incorporated herein by reference in their entireties: Ser. No. 09/872,921, to James Reuter, et al., entitled Structure And Process For Distributing SCSI LUN Semantics Across Parallel Distributed Component; and Ser. No. 09/872,721, to James Reuter, et al., entitled Data Migration Using Parallel, Distributed Table Driven I/O Mapping.

FIELD OF THE INVENTION

The present invention relates to systems and methods for managing virtual disk storage provided to host computer systems.
BACKGROUND OF THE INVENTION

Virtual disk storage is relatively new. Typically, virtual disks are created, presented to host computer systems, and their capacity is obtained from physical storage resources in, for example, a storage area network.

In storage area network management, for example, there are a number of challenges facing the industry. For example, in complex multi-vendor, multi-platform environments, storage network management is limited by the methods and capabilities of individual device managers. Without common application languages, customers are greatly limited in their ability to manage a variety of products from a common interface. For instance, a single enterprise may have NT, SOLARIS, AIX, HP-UX and/or other operating systems spread across a network. To that end, the Storage Networking Industry Association (SNIA) has created work groups to address storage management integration. There remains a significant need for improved management systems that can, among other things, facilitate storage area network management.

While various systems and methods for managing array controllers and other isolated storage subsystems are known, there remains a need for effective systems and methods for representing and managing virtual disks in various systems, such as, for example, in storage area networks.

SUMMARY OF THE INVENTION

In response to these and other needs, the preferred embodiments of the present invention provide a system and method for the management of virtual storage. The system and method include an object-oriented computer hardware/software model that can be presented via a management interface (e.g., via graphical user interfaces, GUIs, command line interfaces, CLIs, application programming interfaces, APIs, etc.), via documents (e.g., customer documents, training documents or the like, including electronic documents, such as Word documents, PDF files, web pages, etc., or physical documents), or via other means.

In preferred embodiments, the model advantageously provides the separation of physical storage management from virtual disks presented to the hosts. This is preferably done using virtual disks in conjunction with a storage pool hierarchy. The virtual disk can be a logical "disk" that is visible to one or more host system(s). It is independent of physical storage and is preferably managed by setting attributes. On the other hand, the storage pool hierarchy provides a boundary between the virtual and physical parts of the model via "encapsulation" of physical storage such that physical components may change without affecting the virtual parts of the model.

Preferably, management can be automated such that the user (e.g., customer, manager and/or administrator) specifies goals rather than means, enhancing ease of use while maintaining flexible deployment of storage resources. The preferred embodiments of the invention may advantageously reduce the cost and/or complexity of managing storage by simplifying the management of change. In preferred embodiments, one or more of the following and other advantages can be realized with the present invention.

Erased Boundaries

Typically, storage controller or subsystem boundaries can cause inefficient use of capacity, capacity to be in the wrong place, manual rebalancing to be required, and/or problems with host access to capacity. The preferred embodiments of the present invention can enable, for example, a host-independent, controller-independent, storage area network (SAN)-wide pool of storage for virtual disks, effectively erasing these boundaries and the problems caused by these boundaries. Among other things, this can also simplify the acquisition and deployment of new storage because new storage can simply be more capacity in the pool.

Centralized Management

Typically, each storage subsystem in a SAN is managed separately, causing boundaries in the management model with resulting complexities and inefficiencies of management. The preferred embodiments of the present invention enable, among other things, a single, central management view of an entire SAN.

Uniform Capabilities

Typically, when a SAN has multiple storage subsystems, the subsystems may have different capabilities, adding complexity and confusion to the management of the storage and the hosts using the storage. The preferred embodiments of the present invention can provide, e.g., a virtual disk that has uniform management capabilities and that is independent of the capabilities offered by the subsystems providing the capacity. Among other things, this can reduce management complexity. With the preferred embodiments of the present invention, virtual disks can be managed with attributes that are independent of the physical storage, separating the virtual parts of the model from the physical parts of the model.

The preferred embodiments of the present invention can enable features such as: a) substantially no disruption of service to host systems and applications during management operations; b) easy to add/remove storage subsystems; c) more efficient use of space; d) less wasted space overhead; e) volume expansion; f) snapshot copies; g) selective presentation of virtual disks only to desired hosts; h) attribute-based management of virtual disks; i) host systems de-coupled from storage management; and/or j) future extensions easily added without disruption to hosts or to storage subsystems.

The above and other embodiments, features and advantages will be further appreciated upon review of the following description of the preferred embodiments in conjunction with the accompanying drawings.

BRIEF DESCRIPTION OF THE DRAWINGS

Preferred embodiments of the invention are shown by way of example and not limitation in the accompanying
drawings in which like reference numbers represent like parts throughout and in which:

FIG. 1 is a schematic illustration of a distributed virtual storage network;
FIG. 2 is a schematic illustration of a preferred object-oriented model of the present invention;
FIG. 3 is a schematic illustration of a storage pool hierarchy bridging the virtual and physical realms in preferred embodiments of the present invention;
FIG. 4 is a schematic illustration of an illustrative storage pool hierarchy;
FIG. 5 is a schematic illustration of a management agent and corresponding management consoles that can be used in some preferred embodiments of the invention;
FIGS. 6 and 7 schematically illustrate management operations that can be employed in some preferred embodiments of the present invention;
FIGS. 8 to 15 illustrate graphical user interfaces that can be provided to facilitate management of virtual storage in relation to the creation of a virtual disk in some illustrative embodiments of the invention;
FIGS. 16 to 18 illustrate some exemplary navigational views that can be presented to a user to facilitate management of the storage system in some illustrative embodiments of the invention; and
FIGS. 19 to 29 illustrate some exemplary disk management and properties views that can be presented to a user to facilitate management and selection of disk properties in some illustrative embodiments of the invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

I. Preferred Environments (e.g., Storage Area Networks)

The present invention can be applied in a wide range of systems, e.g., in storage area network (SAN) systems and in other environments. In some embodiments, the present invention can be applied in, e.g., heterogeneous SAN environments (e.g., at the storage level). In some other embodiments, the present invention can be applied in, e.g., open SAN environments (e.g., at the fabric level). In some other embodiments, the present invention can be applied in, e.g., non-SAN environments (e.g., at the server level). The present invention can also be applied in various systems shown in the above-identified patent applications incorporated herein by reference and in other systems as would be apparent to those in the art based on this disclosure.

In some non-limiting preferred embodiments, the present invention can be applied in a virtualized storage area network (SAN) system 100 using one or more distributed mapping tables, as needed to form one or more virtual disks for input/output (I/O) operations between hosts and storage containers 160, as illustrated in FIG. 1. In particular, the table contains a mapping that relates a position in a virtual disk 150 with an actual location on the storage containers 160.
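To make the mapping concrete, the following minimal sketch (illustrative only; the MappingTable and Extent names, the segment size, and the container identifiers are assumptions rather than details taken from the patent) models a table that translates a virtual-disk block position into a location on a physical storage container:

from dataclasses import dataclass

@dataclass(frozen=True)
class Extent:
    container_id: str   # which storage container (cf. containers 160) holds the data
    start_block: int    # first physical block of the mapped extent

class MappingTable:
    """Relates positions in a virtual disk to locations on storage containers."""

    def __init__(self, segment_blocks: int = 1024):
        self.segment_blocks = segment_blocks          # blocks per mapped segment
        self._segments: dict[int, Extent] = {}        # segment index -> extent

    def map_segment(self, segment: int, extent: Extent) -> None:
        self._segments[segment] = extent

    def resolve(self, virtual_block: int):
        """Return (container_id, physical_block) for a virtual block, or None."""
        segment, offset = divmod(virtual_block, self.segment_blocks)
        extent = self._segments.get(segment)
        if extent is None:
            return None                               # unmapped virtual region
        return extent.container_id, extent.start_block + offset

# Example use: blocks 0-1023 on container A, blocks 2048-3071 on container B.
table = MappingTable()
table.map_segment(0, Extent("container-A", 0))
table.map_segment(2, Extent("container-B", 50_000))
print(table.resolve(2050))                            # ('container-B', 50002)

In a real system, tables of this general kind would be the distributed mapping tables held by the agents described below.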
The system 100 principles of distributed, virtual table mapping can be applied to any known SAN. It should therefore be appreciated that the storage devices are known technologies and may refer to any type of present or future known programmable digital storage medium, including but not limited to disk and tape drives, writeable optical drives, etc. Similarly, the hosts 140 may be any devices, such as a computer, printer, etc., that connect to a network to access data from a storage device.

Likewise, the storage network is also intended to include any communication technology, either currently known or developed in the future, such as the various implementations of Small Computer Systems Interface (SCSI) or Fibre Channel. This distributed virtualization is most useful in environments where a large amount of storage is available and connected using some sort of "storage network" infrastructure. One preferred implementation uses switched Fibre Channel connected storage. However, nothing in the design of the system 100 precludes its use on other types of storage networks, including storage networks that are not yet invented.

The hosts access the table through multiple mapping agents 110. The system 100 uses multiple agents 110 that are associated with the hosts 140. Preferably, each host has a separate agent 110, but the system 100 could be easily configured so that more than one host 140 connects to an agent 110. If multiple hosts 140 connect to the same agent 110, the hosts 140 may share that agent's mapping table (alternately, there may be independent tables per host). The agent 110 stores the mapping table in volatile memory such as DRAM. As a result, if one of the agents 110 loses power, that agent 110 loses its copy of the table. Such an event could take place if the mapping agent 110 is embedded in the host 140, for example, a backplane card serving as the mapping agent 110, and the host 140 system loses power.

By storing the mapping table in volatile memory, the table can be easily and rapidly accessed and modified on the agents 110. Storing the mapping table in volatile memory has the further advantage of substantially reducing the cost and complexity of implementing the agents 110 as mapping agents. Overall, the agents 110 allow the performance-sensitive mapping process to be parallelized and distributed optimally for performance. The mapping agents 110 reside on a host 140 and are in communication with a virtual disk drive 150.

The system 100 further comprises a controller 120 that is separate from the mapping agents 110. The controller 120 administers and distributes the mapping table to the agents 110. Control of the mapping table is centralized in the controller 120 for optimal cost, management, and other implementation practicalities. The controller 120 further stores the mapping table in a semi-permanent memory, such as a magnetic disk or an EPROM, so that the controller 120 retains the table even after a power loss. In this way, the responsibility for persistent storage of mapping tables lies in the controller 120 so that costs and complexity may be consolidated. Any controller 120 known in the art of digital information storage may be employed as needed to implement the present invention. Within this framework, each of the mapping agents 110 preferably interacts only with the controller 120 and not with the other agents 110. Furthermore, the architecture allows for a controller 120 comprised of redundant, cooperating physical elements that are able to achieve very high availability. As a result, the system 100 is highly scalable and tolerant of component failures.
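As a rough, non-authoritative sketch of this division of labor, the snippet below shows a controller that keeps the authoritative table on semi-permanent storage and pushes copies to agents that hold the table only in volatile memory. The JSON file, the dictionary table format, and the direct in-process calls are stand-ins for the real persistence and messaging mechanisms:

import json
from pathlib import Path

class MappingAgent:
    """Host-side agent (cf. agents 110): keeps its table copy in volatile memory."""

    def __init__(self, host: str):
        self.host = host
        self.table: dict[str, str] = {}    # lost if the agent or its host loses power

    def load_table(self, table: dict[str, str]) -> None:
        self.table = dict(table)           # replace the in-memory copy

class Controller:
    """Controller (cf. controller 120): persists and distributes the table."""

    def __init__(self, store: Path):
        self.store = store                 # semi-permanent memory, e.g. a disk file
        self.agents: list[MappingAgent] = []

    def register(self, agent: MappingAgent) -> None:
        self.agents.append(agent)          # agents interact only with the controller

    def update_table(self, table: dict[str, str]) -> None:
        self.store.write_text(json.dumps(table))   # survives a controller power loss
        for agent in self.agents:                  # push the new copy to every agent
            agent.load_table(table)

controller = Controller(Path("mapping_table.json"))
agent = MappingAgent(host="host-1")
controller.register(agent)
controller.update_table({"vdisk-X/segment-0": "container-A/0"})
print(agent.table)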
The interactions of the controller 120 and the agents 110 are defined in terms of functions and return values. In a distributed system, this communication is implemented with messages on some sort of network transport, such as a communication channel 130. The communication channel 130 may employ any type of known data transfer protocol, such as TCP/IP. In one implementation, the communication channel 130 is the storage network itself. The communication channel 130 has access to non-virtual storage containers 160. Any suitable technique may be used to translate commands, faults, and responses to network messages.
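A toy illustration of function calls and return values carried as messages follows; the message fields and the JSON encoding are invented for illustration, and the transport itself (e.g., TCP/IP over the communication channel 130) is omitted:

import json

def encode_call(function: str, **kwargs) -> bytes:
    """Wrap a function invocation as a message suitable for a network transport."""
    return json.dumps({"call": function, "args": kwargs}).encode()

def decode_reply(raw: bytes):
    """Unwrap a reply message into (return_value, fault)."""
    reply = json.loads(raw.decode())
    return reply.get("return"), reply.get("fault")

# Example round trip, with the reply constructed in place of a real network peer.
request = encode_call("get_mapping", virtual_disk="vdisk-X", segment=2)
reply_raw = json.dumps({"return": {"container": "container-B", "block": 50000}}).encode()
print(decode_reply(reply_raw))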
II. Preferred Management Model

FIG. 2 illustrates an object-oriented model employed in some preferred embodiments of the invention. The objects in the illustrated model are described in detail below. The objects include operations that either humans or automated policy can invoke; e.g., based on the model, a user (e.g., a system administrator) can assign storage resources via a system management interface.

As shown, the host folder, the virtual disk folder, and the storage pool objects can reference themselves. That is, multiple instances of these objects can be referenced under the same object type. This captures the notion of a tree-structured hierarchy. For example, the folder object representing the root of the tree always exists and sub-folders can be created as needed. This is generally analogous to a WINDOWS folder hierarchy, which is also a tree structure. A WINDOWS EXPLORER folder browser interface, for example, would be an illustrative graphical user interface representation of this kind of structure. Similarly, command line interfaces may support this concept with a notion such as "current directory."
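The self-referencing folder objects can be pictured as nodes of a simple tree, as in the sketch below; the Folder class, its methods, and the example names are illustrative assumptions, since the patent describes the objects abstractly rather than as code:

class Folder:
    """A folder that may contain sub-folders of its own type (a tree node)."""

    def __init__(self, name: str, parent=None):
        self.name = name
        self.parent = parent
        self.children: list["Folder"] = []

    def create_subfolder(self, name: str) -> "Folder":
        child = Folder(name, parent=self)
        self.children.append(child)
        return child

    def path(self) -> str:
        """Build a path string, analogous to a command-line 'current directory'."""
        parts, node = [], self
        while node is not None:
            parts.append(node.name)
            node = node.parent
        return "/".join(reversed(parts))

root = Folder("root")                          # the root folder always exists
pools = root.create_subfolder("storage-pools") # sub-folders created as needed
pool_a = pools.create_subfolder("pool-A")
print(pool_a.path())                           # root/storage-pools/pool-A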
Host

The host object 140' represents a host system (e.g., a computer, etc.) that consumes a virtual disk and supports one or more applications.

Host Agent

The host agent object 110' is a component that provides virtualizing capability to the hosts (e.g., a "mapping agent"). A host has zero or more host agents through which virtual disks can be presented to that host. If a host has zero associated agents, presentation is not possible. The model preferably allows this because there may be temporary situations where a host does not have an agent (e.g., one has not been added or repaired). A host agent may serve multiple hosts or, alternatively, a host agent may attach to only a single host.

The presented unit, described below, references all host agents through which a host may be reached for a given virtual disk. A host agent may be used by zero or more presented units to present zero or more virtual disks to a host.

Virtual Disk

The virtual disk object 150' represents a block-store disk as seen by a host system. It is independent of physical storage and is a logical object that contains the data that the system stores on behalf of host systems.

Virtual disk service operations are preferably similar to those of a locally attached physical disk. A virtual disk can include, for example, a compact (non-sparse) linear array of fixed-size data blocks indexed by nonnegative integers, which may be read or written. A read operation transfers the data from a set of consecutively indexed data blocks to the host system. A write operation transfers data from the host system to a set of consecutively indexed data blocks.

While a virtual disk can be seen by host systems as a compact linear array of blocks, an implementation may save space by not allocating physical storage to any block that has never been written. Read operations issued to blocks that have never been written can, for example, transfer a block of all zeros. In some embodiments, several virtual disks may share resources. Preferably, however, such virtual disks behave similarly to independent physical disks as seen through the service interface.
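The block-level semantics just described (a compact linear array of fixed-size blocks, lazy allocation, and all-zero reads of never-written blocks) can be sketched as follows; block size, capacity, and class names are arbitrary illustrative choices and not the patented implementation:

class SparseVirtualDisk:
    """Linear array of fixed-size blocks; backing storage is allocated only on write."""

    def __init__(self, block_size: int = 512, block_count: int = 1 << 20):
        self.block_size = block_size
        self.block_count = block_count
        self._blocks: dict[int, bytes] = {}     # only written blocks consume space

    def write(self, index: int, data: bytes) -> None:
        if not 0 <= index < self.block_count:
            raise IndexError("block index out of range")
        self._blocks[index] = data.ljust(self.block_size, b"\x00")[: self.block_size]

    def read(self, index: int) -> bytes:
        if not 0 <= index < self.block_count:
            raise IndexError("block index out of range")
        return self._blocks.get(index, b"\x00" * self.block_size)

disk = SparseVirtualDisk()
disk.write(10, b"hello")
assert disk.read(10).startswith(b"hello")
assert disk.read(11) == b"\x00" * 512           # never written: reads as all zeros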
In contrast to typical service operations, virtual disk management operations can be unique. For example, the notion of performing a snapshot operation is foreign to today's physical disks. Many management operations can treat virtual disks like independent objects, which can be desirable because customers understand physical disks to be independent objects. However, other operations can either expose or control relationships between virtual disks. These relationships include temporal relationships, shared capacity relationships, performance interdependencies, data reliability co-dependencies, availability co-dependencies and/or other relationships.
Derived Unit

The derived unit object 250 adds protocol personality (e.g., SCSI, Fibre Channel, CI, etc.) to a block store represented by the virtual disk; i.e., the derived unit supplies the I/O protocol behavior for the virtual disk. When a virtual disk is presented to a host, a derived unit is created to add semantics (e.g., SCSI) to the block storage provided by the virtual disk.

If desired, more than one derived unit can be allowed per virtual disk, such as for cases where an administrator wants to treat these as independent disks that happen to have shared contents. However, this may be of limited use in some cases, and products can be made that will only allow one derived unit per virtual disk. Preferably, a derived unit is always associated with only one virtual disk.
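A derived unit can be viewed as a thin protocol wrapper around exactly one virtual disk's block store, as in the sketch below; the toy READ command and the class names are assumptions for illustration and are not the SCSI semantics contemplated by the patent:

class ToyBlockStore:
    """Minimal stand-in for a virtual disk's block store (see the earlier sketch)."""

    def __init__(self, block_size: int = 512):
        self.block_size = block_size
        self._blocks: dict[int, bytes] = {}

    def write(self, index: int, data: bytes) -> None:
        self._blocks[index] = data.ljust(self.block_size, b"\x00")[: self.block_size]

    def read(self, index: int) -> bytes:
        return self._blocks.get(index, b"\x00" * self.block_size)

class DerivedUnit:
    """Adds a protocol personality (e.g. SCSI) to exactly one virtual disk."""

    def __init__(self, virtual_disk: ToyBlockStore, protocol: str = "SCSI-3"):
        self.virtual_disk = virtual_disk      # always associated with one virtual disk
        self.protocol = protocol

    def handle_read(self, lba: int, count: int) -> bytes:
        # A protocol-level READ becomes reads of consecutively indexed blocks.
        return b"".join(self.virtual_disk.read(lba + i) for i in range(count))

disk = ToyBlockStore()
disk.write(0, b"boot block")
unit = DerivedUnit(disk)
print(len(unit.handle_read(0, 2)))            # 1024: two 512-byte blocks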
While some illustrative derived units involve SCSI protocols, the architecture allows for derived units for protocol types other than SCSI. The SCSI model can be selected, for ex
