Storage Area Networks: Unclogging LANs and Improving Data Accessibility

"This important technology is moving into the mainstream in distributed networking and will be the normal, adopted way of attaching and sharing storage in a few short years."

Michael Peterson, Strategic Research Corporation

Kevin J. Smith
Mylex Corporation
Fremont, CA

White Paper
WPO30898.doc
Table of Contents

Executive Summary
Server-Dependent or Server-Independent Storage
SAN Taxonomy
Fibre Channel SAN's
SAN's and Clusters
SAN's and NAS
SAN-Attached RAID Array Requirements
Mylex External Array SAN Controllers
Mylex Product Line SAN Features
References

Copies of this and other white papers may be obtained from the Mylex web site (www.Mylex.com).

RAID Controllers Are Not Created Equal; Many Will Not Survive on Wolfpack-II Clusters

DAC960PJ/PR Two Node Clustering

Legal Notice

ServerNet is a registered trademark of Tandem Computers Incorporated.
VAXClusters is a registered trademark of Digital Equipment Corporation.
NetWare is a registered trademark of Novell.
Windows NT Server is a registered trademark of Microsoft.
ESCON and SSA are registered trademarks of IBM.

Other product names mentioned herein may be trademarks and/or registered trademarks of their respective companies.

5/29/98
Executive Summary

The key challenges facing IS executives are satisfying increasingly diverse networking requirements and reducing network complexity to lower total cost of ownership (TCO). Success depends on efficiently delivering:

• High bandwidth for warehousing and web-based applications, e.g. multimedia,
• Low, predictable latency for time-sensitive applications, e.g. video conferencing,
• Performance and resiliency for mission-critical applications, e.g. OLTP.

When computing resources were used only for internal operations, the cost of information bottlenecks and network failures was limited to lost productivity. However, as computing resources are used to engage customers, as well as manage operations, bottlenecks and network failures translate into lost business and lost productivity.
A primary benefit of Storage Area Networks (SAN's) is unclogging network arteries by moving bulk data transfers off client networks and onto specialized sub-networks, often referred to as the networks behind the servers.

Figure 1. LAN's and SAN's

With SAN's, pools of storage (and related traffic) are removed from the LAN, externalized and shared by mainframes, UNIX and PC servers. In addition to de-congesting client networks, enabling cross-platform data sharing and amortizing storage costs across servers, a SAN topology adds value by providing:
• Flexible, modular expansion by de-coupling server and storage investments,
• Bandwidth and capacity scaling by eliminating SCSI and PCI bottlenecks,
• Increased fault tolerance and availability with redundant paths to data,
• Increased application performance with multiple gigabit links to data,
• Simplified systems integration and enriched storage management,
• Improved data protection and security through centralization, and
• Lower total cost-of-ownership (TCO).
Server-Dependent Storage

A paradigm shift from centralized to distributed storage began in the 1980's, driven by peer-to-peer networks, inexpensive UNIX and PC servers and the notion that moving computing and storage resources closer to workgroups would increase productivity. The result was islands of computing and disparate networks tied together by gateways. IS managers were faced with multiple copies of inconsistent data, networks that were expensive to manage and corporate assets (data) that were difficult to access and vulnerable to intrusion. The Aberdeen Group, a respected market research firm, refers to this environment as server-dependent storage (Figure 2).

Figure 2. Server-Dependent Storage
Server-Independent Storage

Emerging SAN technologies mirror today's LAN technologies with gigabaud shared and dedicated bandwidth. The Aberdeen Group advises: "Unless enterprises view and implement storage as if it were part of a giant network across the enterprise, they will pay too much for their storage and will face extreme, labor-intensive difficulties in performing vital storage-related functions, such as managing the storage and backing up and moving critical data." While giant SAN's may someday become a reality, local SAN's with server-independent storage are being deployed today.

Figure 3. Server-Independent Storage
SAN Taxonomy

SAN: Storage Area Network or System Area Network?

SAN is one of the more overloaded acronyms in computer jargon; its meaning is context sensitive. To systems people, SAN means System Area Network, and to storage people, SAN means Storage Area Network. Some people consider the two definitions synonymous. However, while System Area Network and Storage Area Network topologies can be similar or even identical, there is an important distinction between the two technologies.
System Area Network

A System Area Network is a specialized network used in cluster configurations for node-to-node, node-to-device (primarily disk), and device-to-device communications that provides both high bandwidth and low latency. Low latency is the distinguishing characteristic of a System Area Network. Short message latency across a System Area Network is generally less than 10 microseconds, an order of magnitude less than Fibre Channel or Gigabit Ethernet. Low latency is a prerequisite to high performance for applications distributed across cluster nodes, e.g. parallel DBMS's. Instances of a distributed application in a cluster environment frequently exchange messages to synchronize program execution or access to shared resources. Most System Area Networks use proprietary protocols; however, this is expected to change when the VI Architecture is introduced in 1999. The VI Architecture is an interconnect-independent set of protocols and API's that standardizes the interface between OS's and cluster interconnects. ServerNet, developed by Tandem, and SCI, an ANSI standard implemented by Dolphin, are examples of System Area Networks. System Area Network technologies can also be used to implement Storage Area Networks.
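To put the latency distinction in perspective, the following back-of-the-envelope sketch (in Python) estimates how much time a node spends waiting on synchronization messages over a 10 microsecond interconnect versus a 100 microsecond one. The message count and transaction rate are illustrative assumptions, not measurements.

    # Back-of-the-envelope comparison of time spent waiting on synchronization
    # messages; the message count, transaction rate and latencies below are
    # illustrative assumptions, not figures from any product.

    MESSAGES_PER_TRANSACTION = 10      # assumed lock/unlock round trips
    TRANSACTIONS_PER_SECOND = 2_000    # assumed workload

    def wait_per_second(latency_seconds: float) -> float:
        """Seconds per wall-clock second spent waiting on short messages."""
        return MESSAGES_PER_TRANSACTION * TRANSACTIONS_PER_SECOND * latency_seconds

    print("System Area Network (10 us) :", wait_per_second(10e-6), "s")    # 0.2 s
    print("Storage interconnect (100 us):", wait_per_second(100e-6), "s")  # 2.0 s

At the higher latency, the waiting alone exceeds real time, which is why latency rather than bandwidth governs cluster synchronization traffic.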
Storage Area Network

A Storage Area Network can be designed with a specialized or standard networking technology, e.g. Fibre Channel. Its purpose is to provide high bandwidth connections between servers and storage devices, and between storage devices, e.g. storage arrays and tape libraries. The primary objective of Storage Area Networks is high bandwidth for moving large amounts of data; latency is a secondary consideration. Storage Area Networks have been implemented with ESCON and HIPPI interfaces, and more recently with SSA and Fibre Channel. They can be deployed in homogeneous environments, e.g. all UNIX servers, or heterogeneous environments, e.g. a mix of UNIX and NT servers, and can be local to servers or remote from servers and connected to other (remote) Storage Area Networks. They use standard channel protocols, such as SCSI riding on top of Fibre Channel. In Storage Area Networks, storage is de-coupled from servers and managed as an independent resource.

Storage Area Networks can be configured in fabric topologies with switches to interconnect servers and devices, or implemented in loop topologies with hubs to simplify cable management and increase loop resiliency.

In this paper, SAN is used in the Storage Area Network context.
Related Terminology

SAN-Attached Storage (SAS)

SAN-Attached Storage refers to shared storage devices connected to servers, and possibly each other, via a Storage Area Network (typically Fibre Channel or SSA in open system environments).

Network-Attached Storage (NAS)

Network-Attached Storage devices attach directly to client networks, typically an Ethernet LAN, and provide optimized file services to clients and servers on the network using standard file I/O protocols, such as NFS and SMB, and networking protocols, such as IP. These devices are essentially specialized servers that function as the server in a client-server relationship with other network computers requesting file access. Mylex, Network Appliance and Auspex NAS products are examples of leading-edge Network-Attached Storage devices.

[Storage attached to application servers but not directly attached to a LAN is sometimes referred to as Network-Attached Storage. In a sense, this is true since clients can access server-dependent storage over the network. However, this is stretching the NAS definition since application servers are not optimized for file serving.]
SAN Interconnects and Topologies

SAN's can be designed as switched fabrics or arbitrated loops. Fibre Channel is an ideal SAN interconnect because it provides scaleable performance with virtually unlimited addressing and can span campus-wide distances. Bus technologies, such as SCSI, are inappropriate due to bandwidth, distance and device attachment limitations.
SAN Interconnect Devices

Switches, hubs and routers are interconnect devices that can be employed to construct SAN's. Switches are used in fabrics to provide scaleable performance. Hubs are deployed in loop configurations to simplify cable management and enhance fault tolerance. Routers are useful for interconnecting complex SAN's, particularly over long distances for data vaulting or disaster protection applications.
SAN Interfaces

ESCON, SSA and Fibre Channel are candidate SAN interfaces. ESCON is the dominant interconnect in mainframe environments. SSA is a newer IBM technology with performance characteristics similar to Fibre Channel Arbitrated Loops and has become a popular SAN interconnect. However, Fibre Channel appears to be emerging as the de facto industry standard SAN interface based on the breadth of vendor support and market acceptance.
Fibre Channel SAN's

Industry Standard

Fibre Channel defines a set of high performance serial I/O protocols and interconnects for flexible data transfer. It was developed by an ANSI committee and is supported by over seventy system, networking and storage vendors. Fibre Channel was designed to:

• Provide a common interface for transferring large amounts of data at high transmission rates and low error rates,
• Enable the simultaneous use of different transport protocols, such as SCSI and IP, over a common interface, and
• Support multiple physical interfaces and transmission media with varying distance and cost characteristics.
Networks and Channels

Fibre Channel was designed to provide seamless integration between networks that connect users to servers and channels that connect storage to servers. Networks connect heterogeneous computers located anywhere in the world and enable them to communicate with one another at any point in time on a peer-to-peer basis. Consequently, networks use complex protocols to authenticate users, set up sessions, route data, correct errors and cope with unstructured environments. Complex protocols impose high overhead on network computers. Conversely, channels are employed in structured, predictable environments to connect storage and other devices to servers over distance-limited, low error rate transmission paths.

Fibre Channel has not been extensively deployed in networks; the cost of Fibre Channel hardware is high relative to 10/100 and Gigabit Ethernet, and network infrastructures are in place to support Ethernet. However, Fibre Channel is rapidly becoming a storage interconnect standard; it provides the bandwidth, distance and flexibility required for Storage Area Networks:
• Full duplex transmission at 25 MB/s and 100 MB/s at distances up to:
  - 25 meters between devices using video or mini-coax copper cable,
  - 500 meters between devices using multi-mode fibre cables, and
  - 10,000 meters using single-mode fibre cables,
• Full duplex 200 MB/s and 400 MB/s data rates in the not too distant future,
• Extended addressability:
  - 126 devices on an arbitrated loop, and
  - 1,000's of devices in switched fabrics,
• Low error rates and end-to-end error checking for high data integrity,
• Optional redundancy for high availability,
• Low cost in arbitrated loop topologies, and
• Scaleable bandwidth in switched fabric topologies.
Loops or Fabrics

Fibre Channel devices, called nodes, have one or more ports to communicate with other nodes. Today, most Fibre Channel Host Bus Adapters (HBA's) provide a single port, but HBA's with dual ports are expected in the future. Each port has a pair of electrical (for copper cables) or optical (for fibre cables) transceivers; one for transmitting data and the other for receiving data. The pair of conductors is referred to as a link. Fibre Channel nodes can be connected with a single link or dual links for fault tolerance.

Storage Area Networks can be configured as arbitrated loops, with bandwidth shared by nodes on the loops, or as switched fabrics, with dedicated bandwidth between communicating nodes.
Fibre Channel Arbitrated Loops (FC-AL)

In a FC-AL, nodes arbitrate to gain access to the loop and then pairs of nodes establish a logical point-to-point connection to exchange data; the other nodes on the loop act as repeaters. With FC-AL, bandwidth is constant and shared; only one pair of nodes on the loop can communicate at any point in time. FC-AL is similar in operation to other shared media networks, such as FDDI or Token Ring.

Figure 4. Fibre Channel Arbitrated Loop and Fabric Topologies
Fibre Channel Switched Fabrics

In a switched topology, the full bandwidth of a link is available to pairs of communicating nodes and multiple pairs of nodes can simultaneously transmit and receive data. As nodes are added to a switched configuration, the aggregate bandwidth of the network increases. The switch is an intelligent device that provides crossbar switching functions enabling multiple pairs of nodes to simultaneously communicate.
Hubs or Switches

Hubs and switches are interconnect devices used in Storage Area Networks. They are available with advanced management capabilities similar to LAN management techniques; SNMP-based management is generally provided and some devices conform to the new Web-Based Enterprise Management (WBEM) standard. Hubs and switches can be cascaded to increase node connectivity; hubs up to the FC-AL limit of 126 nodes, and switches to thousands of nodes.
Hubs

Except for two node configurations connected in a point-to-point fashion, hubs are generally used in FC-AL topologies, and can be used in a complementary role to switches in fabric topologies. Compared to switches, hubs are lower cost. They are useful for simplifying cable management and contain bypass control switches that enable failed nodes on a loop to be bypassed without compromising the integrity of the loop. The hub's bypass control switches also enable nodes to be hot plugged into a Storage Area Network or removed without affecting loop operation.

Figure 5. SAN Interconnected with a FC-AL and Hub
Switches

Switches are used to create Fibre Channel fabrics. They provide the same resiliency features as hubs and enable the fabric's aggregate bandwidth to scale as nodes are added. With hub-connected Storage Area Networks, aggregate bandwidth remains constant as nodes are added; hence, bandwidth per node decreases as more nodes share a fixed amount of bandwidth. With switched fabrics, bandwidth per node remains constant as nodes are added and hence the aggregate bandwidth of the fabric increases as nodes are added; aggregate bandwidth is proportional to the number of nodes.

Figure 6. SAN With Switched and Shared (Loop) Interconnects
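The scaling contrast between loops and fabrics reduces to simple arithmetic. The Python sketch below assumes the 100 MB/s link rate cited earlier, uniform traffic and an ideal non-blocking switch; it is illustrative only.

    # Simple model of shared (loop) versus dedicated (fabric) bandwidth.
    # 100 MB/s matches the Fibre Channel link rate cited earlier; uniform
    # traffic and an ideal, non-blocking switch are simplifying assumptions.

    LINK_MB_PER_S = 100.0

    def loop(nodes: int):
        """Arbitrated loop: one conversation at a time, bandwidth is shared."""
        return {"aggregate": LINK_MB_PER_S, "per_node": LINK_MB_PER_S / nodes}

    def fabric(nodes: int):
        """Switched fabric: every communicating pair gets a full link."""
        return {"aggregate": LINK_MB_PER_S * (nodes // 2), "per_node": LINK_MB_PER_S}

    for n in (4, 8, 16):
        print(n, "nodes -> loop:", loop(n), " fabric:", fabric(n))

Doubling the node count halves per-node bandwidth on the loop but doubles the fabric's aggregate, which is the scaling behavior described above.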
Clusters

A cluster is a group of autonomous servers that work together to enhance reliability and scaleability, and can be managed as a single entity to reduce management costs. Clusters always share storage devices and sometimes share data.
Cluster Models

Clusters have been implemented using the Shared Disk and Shared Nothing models:

• Shared Disk Model -- Data is simultaneously shared by cluster nodes. An access control mechanism, generally referred to as a distributed lock manager, synchronizes access from multiple nodes to shared data. Example: Digital VAXClusters.
• Shared Nothing Model -- Access to data is shared, but at any point in time disk volumes are exclusively owned by one of the nodes. Example: Microsoft NT Clusters (NT/E 4.0). In advanced shared nothing clusters, nodes can access data they do not own through the node that owns the data. Example: Next generation NT Clusters.

Advocates of the Shared Nothing model claim that Shared Nothing clusters are more scaleable because the overhead of a distributed lock manager increases as nodes are added and eventually bottlenecks the cluster.

Figure 7. Four Node Cluster With Shared Access to RAID Arrays
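The ownership rule that distinguishes the Shared Nothing model can be expressed in a few lines. The Python sketch below is purely illustrative (the node and volume names are invented); it shows a volume's owner moving to a surviving node after a failure.

    # Minimal model of the Shared Nothing rule: each volume has exactly one
    # owning node at a time, and ownership moves to a survivor on failure.

    class SharedNothingCluster:
        def __init__(self, nodes):
            self.nodes = list(nodes)
            self.owner = {}                              # volume -> owning node

        def assign(self, volume, node):
            self.owner[volume] = node

        def access(self, volume, node):
            if self.owner[volume] != node:
                raise PermissionError(node + " does not own " + volume)
            return node + " accesses " + volume

        def fail_node(self, failed):
            self.nodes.remove(failed)
            survivor = self.nodes[0]
            for volume, owner in self.owner.items():
                if owner == failed:
                    self.owner[volume] = survivor        # failover of ownership

    cluster = SharedNothingCluster(["nodeA", "nodeB"])
    cluster.assign("vol1", "nodeA")
    cluster.fail_node("nodeA")
    print(cluster.access("vol1", "nodeB"))               # nodeB now owns vol1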
Clusters Use SAN Technologies

• Data is removed from the network and stored on the network behind the servers,
• Storage model is server-independent, not server-dependent,
• Access to storage devices is shared, generally across high performance links,
• I/O bandwidth is scaleable (with switches); storage capacity is scaleable,
• Redundant links can be used for fault tolerance and higher data availability,
• Data is centralized; security and storage management can be enhanced,
• Storage can be added incrementally; server and storage investments are de-coupled,
• Storage nodes communicate and exchange data (in advanced cluster designs),
• Hubs or switches provide simplified cable management and increased resiliency, and
• Total cost-of-ownership (TCO) decreases.
Network-Attached Storage

Network-Attached Storage is implemented with devices that attach directly to client networks, typically Ethernet LAN's, and provide optimized file services to clients and servers on the network using standard file I/O protocols such as NFS and SMB. A Network-Attached Storage device is essentially a specialized server that functions as the server in a client-server relationship with other network computers requesting file access.

Figure 8. NAS on LAN Segments and Connected to SAN

SAN's and Network-Attached Storage (NAS) are complementary technologies. A NAS device functions like a mini-SAN for LAN segments. With NAS devices, storage is moved from PC's and workstations to file access optimized NAS devices where it can be protected, secured and managed. NAS design objectives are similar to SAN objectives but at lower price points:
• Workstation-dependent storage is replaced by workstation-independent storage,
• Data is centralized at the workgroup level where it can be more easily secured, shared, RAID-protected, backed-up and accessed in an optimal fashion by all clients,
• LAN's can be more easily designed to avoid bottlenecks with centralized NAS storage on shared or dedicated LAN segments,
• Modular expansion is enabled; workstation and storage investments are de-coupled,
• Redundant links can be used to increase data accessibility, and
• Total cost-of-ownership (TCO) decreases.
SAN-Attached RAID Array Requirements

SAN Connectivity

Fibre Channel Interface -- Fibre Channel is the optimal interface for storage arrays used in SAN applications. It provides the performance, distance and scaleability required for SAN environments. The Fibre Channel standard is widely supported, and a broad range of Fibre Channel interconnect devices (hubs and switches) and storage devices (RAID arrays, tape and optical libraries, and disk, tape and optical drives) will be available with varying levels of features and performance in a competitive market environment.

Dual Channels -- High performance SAN-attached storage devices require multiple front-end SAN channels for performance and fault tolerance. Dual channel arrays offer twice the potential performance of single channel arrays at marginally higher cost and provide continuous access to data if a SAN interconnect device or link fails.
Heterogeneous Host Support

Most enterprises have heterogeneous computing environments with mainframes in the data center, and UNIX, NT or NetWare servers distributed across the network. Investments in these systems (acquisition, application development and infrastructure costs) are substantial, and migration to a homogeneous environment is a multi-year proposition. SAN-attached RAID arrays should support storage volumes formatted with different file systems and allow heterogeneous systems to access the volumes.
Data Availability Features

Duplex RAID Controllers -- SAN's should be designed without any single point of failure that can cause storage devices to become inaccessible. SAN-attached arrays should be configured with duplex controllers and with the disks connected to both controllers. Multiple SAN interfaces (on each controller) and duplex controllers with shared disks provide the level of fault tolerance required in SAN configurations. Redundant paths to data are a necessary but not sufficient condition for high availability.
Transparent Host Independent Failover -- In addition to redundancy, controllers should implement a transparent failover/failback scheme such that logical disks, i.e. logical arrays, are continuously accessible. With a Fibre Channel interconnect, each controller requires a port held in reserve to accommodate a controller failover, i.e. assume a failed controller's port ID; although the Fibre Channel standard allows ports to have multiple addresses, this feature has not been implemented in Fibre Channel chips. Failover and failback should be implemented such that transitions occur without any required host intervention.
Alternate Path Support -- Following the no single point of failure principle, SAN-attached servers should have redundant Host Bus Adapters (HBA's) with alternate path software support. Alternate path is generally implemented as a software driver and provides a level of indirection between the OS and HBA's. If one HBA fails, the driver redirects I/O's intended for the failed HBA to the alternate HBA so that I/O requests can be satisfied without the host OS knowing that an alternate path to storage was used.
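The indirection that alternate path software provides can be sketched as follows in Python; the HBA objects and method names are illustrative stand-ins, not any real driver interface.

    # Sketch of alternate path indirection: the driver hides HBA failures by
    # retrying I/O on the other adapter. All names here are illustrative.

    class FakeHBA:
        def __init__(self, healthy):
            self.healthy = healthy

        def send(self, request):
            if not self.healthy:
                raise IOError("HBA failure")
            return "completed: " + request

    class AlternatePathDriver:
        def __init__(self, primary, alternate):
            self.paths = [primary, alternate]
            self.active = 0                        # index of the path in use

        def submit_io(self, request):
            for _ in self.paths:
                try:
                    return self.paths[self.active].send(request)
                except IOError:
                    # Redirect to the alternate HBA without host involvement.
                    self.active = (self.active + 1) % len(self.paths)
            raise IOError("all paths to storage have failed")

    driver = AlternatePathDriver(FakeHBA(healthy=False), FakeHBA(healthy=True))
    print(driver.submit_io("read block 42"))       # served by the alternate HBA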
Data Integrity Features

Mirrored Write Caching -- Most external RAID controllers implement write-back caching to enhance performance, but few implement a mirrored write-back caching scheme to protect data. Data in a controller's cache is vulnerable to power loss or controller failures until it is written to disk. To make matters worse, if data in a write-back cache is lost, the application is oblivious to the loss since the controller has already acknowledged the write as complete. Controller battery back-up units can hold up cache contents during power outages but cannot move cached data to an operational controller that can write it to disk. Mirroring write-back caches across controllers solves this problem and can be designed with minimal effect on cost or performance. With a mirrored cache architecture, I/O's are written to cache memories in multiple controllers before the write is acknowledged as complete. If one controller subsequently fails, the surviving controller flushes the contents of the failed controller's cache (which are stored in its own cache) safely to disk. Mirrored caches protect data like mirrored disks.
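The mirrored write-back scheme can be modeled in a few lines of Python. The sketch below is an illustrative model, not Mylex firmware; it shows a write landing in both caches before it is acknowledged, and the survivor destaging its partner's data after a failure.

    # Sketch of mirrored write-back caching: a write lands in both controllers'
    # caches before it is acknowledged, so the survivor can destage its failed
    # partner's dirty data.

    class Controller:
        def __init__(self, name):
            self.name = name
            self.local_cache = {}        # this controller's dirty writes
            self.mirror_cache = {}       # copy of the partner's dirty writes
            self.partner = None

        def write(self, block, data):
            self.local_cache[block] = data
            self.partner.mirror_cache[block] = data    # mirror first ...
            return "ACK"                               # ... then acknowledge

        def flush_partner(self, disk):
            # Partner failed: write its mirrored data to disk so nothing is lost.
            disk.update(self.mirror_cache)
            self.mirror_cache.clear()

    disk = {}
    a, b = Controller("A"), Controller("B")
    a.partner, b.partner = b, a
    a.write(7, "committed transaction")
    b.flush_partner(disk)        # controller A fails; B destages block 7
    print(disk)                  # {7: 'committed transaction'}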
Data Management Features

SAN-attached RAID arrays should support disk mirroring, all the commonly used RAID levels, on-line expansion of logical drives, on-line addition of logical drives and other data management software. Network-wide array management should be available from any client machine on the network.
Bandwidth Scaling Capabilities

Scaleable I/O Performance -- SAN's require a balanced design. The number (and compute power) of servers attached to a SAN can vary by orders of magnitude, and SAN's naturally tend to expand over time. However, adding servers to a SAN will result in marginal performance gains if the SAN-attached RAID arrays lack the horsepower to feed the additional nodes. Scaleable compute power requires scaleable I/O power. External storage arrays inherently provide scaleable I/O performance since arrays can be incrementally added to a SAN. However, a large number of under-powered or disk channel limited arrays is less cost-effective and more difficult to manage than a smaller number of arrays with performance better matched to the SAN's I/O requirements.
Active-Active Operation -- Duplex controllers can operate in active-passive or active-active mode, just like cluster nodes. In this context, active-active implies a duplex controller configuration with both controllers simultaneously servicing SAN I/O requests. This is analogous to the active-active operation of cluster nodes. To realize the full performance potential and total cost-of-ownership (TCO) effectiveness of a SAN, all storage resources must contribute to performance. Controllers that operate in active-passive mode, with one controller idle until the other fails, waste a valuable SAN resource.
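The cost of leaving one controller passive can be illustrated with a simple Python model; the per-controller throughput figure and LUN names below are assumptions for the sketch, not measured values.

    # Sketch of the throughput argument: in active-active mode, logical arrays
    # are split between the two controllers and both contribute to SAN I/O.

    PER_CONTROLLER_MB_S = 80.0             # assumed, for illustration only

    def usable_throughput(mode: str) -> float:
        active = 2 if mode == "active-active" else 1    # active-passive idles one
        return PER_CONTROLLER_MB_S * active

    luns = ["LUN0", "LUN1", "LUN2", "LUN3"]
    assignment = {lun: "controller " + str(i % 2) for i, lun in enumerate(luns)}

    print(assignment)                                    # LUNs split across both
    print("active-active :", usable_throughput("active-active"), "MB/s")
    print("active-passive:", usable_throughput("active-passive"), "MB/s")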
Capacity Scaling Capabilities

Scaleable I/O Capacity -- System storage has been increasing 50% a year since the first disk drive was invented. SAN's require controllers with surplus back-end channel capacity to accommodate expanding storage needs. Storage controllers that only support a few disk channels are marginal for SAN applications. Controllers with more back-end channels can not only accommodate more storage, but their arrays can also be configured to provide performance and data availability benefits. Logical arrays on controllers with two or three disk channels are striped vertically down a channel. Since applications direct most I/O to a single logical array, vertically striping arrays can cause a single I/O processor (IOP) to become a bottleneck (the hot disk phenomenon moved into the controller). Horizontal striping balances the I/O load across IOP's.
Figure 9. Vertically and Horizontally Striped Arrays
If an IOP fails, vertically striped arrays on the failed channel become unavailable to the controller with the failed IOP. However, if a RAID 5 array is horizontally striped across channels, then an IOP failure causes the loss of a single disk, which RAID 5 algorithms can repair on-the-fly without disrupting application access to data. This failure mode is identical to a single disk failure in a RAID 5 set.
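The difference between the two layouts can be made concrete with a small Python sketch; the channel and array sizes below are illustrative assumptions.

    # Sketch of vertical versus horizontal striping. "Vertical" places every
    # member of a logical array on one disk channel (one IOP); "horizontal"
    # spreads members one per channel.

    CHANNELS = 4        # back-end disk channels, one IOP per channel (assumed)
    MEMBERS = 4         # disks in each logical array (assumed)

    def vertical_layout(array_id):
        channel = array_id % CHANNELS
        return [(channel, slot) for slot in range(MEMBERS)]

    def horizontal_layout(array_id):
        return [(channel, array_id) for channel in range(CHANNELS)]

    def members_lost(layout, failed_channel):
        return sum(1 for channel, _ in layout if channel == failed_channel)

    print(members_lost(vertical_layout(0), failed_channel=0))    # 4: array offline
    print(members_lost(horizontal_layout(0), failed_channel=0))  # 1: RAID 5 rebuilds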
Mylex Fibre Channel Product Line

Mylex offers a seamless product line of external RAID array controllers designed to meet the performance, connectivity, cost, interface, topology, data integrity, data availability and data management requirements of SAN and cluster environments.

Mylex external controllers are ready to be packaged in stand-alone or rack-mountable JBOD (disk) enclosures. External controllers are shared storage resources packaged apart from systems (internal controllers plug into a system bus, typically PCI).

The RAID firmware and management software implemented across the product line deliver a uniform set of data protection and performance optimization features:

• At varying levels of performance and storage connectivity,
• With dual fibre host interfaces, and Ultra or Ultra-2 LVD disk interfaces, and
• At price points for entry-level SAN's and with performance for enterprise SAN's.
Mylex array controllers are available in simplex configurations for network servers and duplex (dual) configurations for SAN's and clusters. In duplex mode, advanced features are implemented to accelerate performance, protect data and guarantee data accessibility:

• Active-active controller operation for increased performance,
• Transparent failover/failback for high data availability, and
• Cache mirroring for guaranteed data integrity.

Figure 10. Relative Performance

Figure 11. Product Line Features
Duplex controllers (gray boxes) deliver up to twice the performance of simplex controllers (white boxes). Fibre controllers provide over twice the performance of SCSI controllers. Data protection, accessibility and management features are important in any SAN or cluster environment and are uniformly implemented across the product line:

• DAC SX -- Dual Ultra SCSI host ports and four Ultra SCSI disk channels
• DAC SF -- Dual Fibre Channel host ports and six Ultra SCSI disk channels
• DAC FL -- Dual Fibre Channel host ports and four Ultra-2 LVD disk channels
Heterogeneous Server Support

Mylex external array controllers can accommodate SAN's configured with heterogeneous operating systems, such as UNIX and NT. Disk volumes are configured into logical arrays and then formatted with NTFS or UNIX file systems. Up to eight logical arrays accessed as LUN's (Logical Unit Numbers) can be configured on each controller. With Mylex controllers, a logical array is the unit of space allocation and RAID level protection. Each logical array can be configured with the RAID level that provides the optimal level of performance and fault tolerance for the applications accessing the array.

NT and UNIX servers can simultaneously access Mylex array controllers formatted with NT and UNIX volumes. This level of flexibility is required by enterprise SAN's and facilitates migration from heterogeneous to homogeneous computing environments.
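A minimal sketch of this configuration model follows, in Python; the LUN assignments, RAID levels and file system names are illustrative examples, not output from any Mylex configuration utility.

    # Sketch of the configuration model described above: up to eight logical
    # arrays per controller, each exported as a LUN with its own RAID level
    # and formatted for a particular host type. All values are illustrative.

    MAX_LUNS_PER_CONTROLLER = 8

    class LogicalArray:
        def __init__(self, lun, raid_level, filesystem, host):
            self.lun = lun
            self.raid_level = raid_level
            self.filesystem = filesystem
            self.host = host

    controller = [
        LogicalArray(lun=0, raid_level=5, filesystem="NTFS", host="NT server"),
        LogicalArray(lun=1, raid_level=1, filesystem="UFS",  host="UNIX server"),
        LogicalArray(lun=2, raid_level=0, filesystem="NTFS", host="NT server"),
    ]
    assert len(controller) <= MAX_LUNS_PER_CONTROLLER

    for a in controller:
        print("LUN", a.lun, "RAID", a.raid_level, a.filesystem, "->", a.host)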
Figure 12. DAC FL Attached to a Fibre Channel SAN
In Figure 12, two pairs of duplex DAC FL controllers are SAN-attached. Each controller has redundant paths to host systems and pairs of controllers provide redundant paths to disks. The dark shaded disks are formatted with UNIX file systems and the lightly shaded disks with NTFS file systems. The SAN-attached servers can have multiple paths through the SAN interconnect to both pairs of redundant controllers, and redundant paths from the controllers to the disks are provided for fault tolerance and high data availability.

Mylex SAN-attached controllers provide reliable and simultaneous access from heterogeneous servers, enabling a higher level of resource sharing and an integrated storage environment.
Active-Active with Transparent Failover / Failback

A key SAN benefit is high data availability; SAN devices should be able to fail without negatively impacting access to data (aside from momentary transition delays). Mylex external RAID array controllers incorporate features that increase data accessibility.
• Active-Active Controller O

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket