
Optimizing SAN for Business Continuity

CIOL Bureau

A shift is taking place in the way businesses deploy storage networks, because these networks now play a significant role in helping organizations ensure business continuity during system failures and site outages. Where data backup was once confined to a few servers, tape drives, and switches, easily controlled and secured in a single data center, the spate of recent natural disasters and other emergencies has caused most businesses to rethink this design.


To better protect themselves, more organizations are backing up data in two or more locations and using TCP/IP protocols for fast, versatile access from distributed sites. The now common use of IP-based storage technologies such as Internet Small Computer System Interface (iSCSI) and Fibre Channel over IP (FCIP) allows users to be automatically redirected to backup resources in geographically diverse locations, in the event that data should become inaccessible in a primary site.

This strategy is a boon to data availability. However, it brings with it some new considerations for the backup network. These issues involve ensuring the performance of storage data across long-haul links, as well as the security of that data in transit and the scalability of the storage-area network (SAN) footprint. In short, IT managers are beginning to face many of the same issues with their storage networks that have confronted them in their data networks. As with data networks, increasing volumes of storage data are traversing the WAN, which introduces distance-driven delay and new security exposures into the design equation.


IT professionals should consider the following issues as they build SANs that now might reside many thousands of miles away from the sites attempting to gain access to them:

  • Are there ways to offset distance-induced delay to accelerate SAN performance?
  • How can storage data, now leaving the data center and transiting common data networks, be secured against eavesdropping, alteration, or theft?
  • How can strong authentication and authorization of users, devices, and IT management personnel be ensured?
  • Is there a way to partition access to SAN resources, much in the way that Ethernet virtual LANs (VLANs) partition access to live servers, using logical user groups, for scalability, fault isolation, and security?
  • How do organizations manage a heterogeneous SAN environment?

A combination of industry standards and interfaces, along with features in vendor equipment, helps IT managers ensure performance acceleration, security, and manageability in the SAN, improving business continuance.


Accelerating the SAN

Acceleration in the SAN is conceptually similar to application performance acceleration in data networks. Instead of proxying application protocols for local acknowledgement, however, SAN control protocols are locally acknowledged to reduce the number of round trips and associated time it takes to move blocks of data from point A to point B. For example, the SCSI protocol requires two roundtrips of acknowledgements for every write issued.

 

When local devices request data from a distant SAN, those devices can be acknowledged locally to reduce WAN-induced latency. There are two types of acceleration, or local acknowledgements, prominently in use: write acceleration and tape acceleration. Their use depends on which type of storage media is to be accessed. Both are supported in the Cisco MDS 9000 family of multiprotocol storage switches. These devices simultaneously support Fibre Channel, FCIP, iSCSI, and mainframe Fibre Connection (FICON) connections. They switch Fibre Channel data among similar ports and also encapsulate Fibre Channel data in IP and send it out an Ethernet interface for IP transit.


Both types of acceleration boost performance.

Write acceleration. This acknowledgement enhancement, used for disk-to-disk and host-to-disk transmissions, reduces SCSI's two roundtrips to one, roughly doubling write performance over the WAN. The switch nearest the initiator acknowledges the write command locally, so data begins flowing immediately; only the final status confirming receipt of intact data still crosses the WAN.
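The latency arithmetic behind write acceleration can be sketched with a back-of-envelope model. All numbers here are illustrative assumptions, not measured Cisco figures:

```python
# Hypothetical model of per-write latency over a long-haul link.
# A write completes only after it has waited out its WAN round trips.

def write_time_ms(rtt_ms: float, wan_roundtrips: int) -> float:
    """Time one SCSI write spends waiting on `wan_roundtrips` WAN round trips."""
    return rtt_ms * wan_roundtrips

rtt = 50.0  # assumed 50 ms round-trip time on the WAN link

standard = write_time_ms(rtt, wan_roundtrips=2)     # Transfer Ready + final status
accelerated = write_time_ms(rtt, wan_roundtrips=1)  # Transfer Ready answered locally

print(standard, accelerated)  # 100.0 50.0: per-write wait is halved
```

Because each write waits half as long, sequential write throughput over the link roughly doubles, matching the claim above.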

Tape acceleration. This acknowledgement enhancement builds on write acceleration, described above, to speed the movement of storage data from a media server to a tape drive. Performance is first enhanced by local acknowledgements that cut the roundtrip WAN acknowledgements in half. In addition, a configurable file-mark mechanism in tape systems allows the IT administrator to send the long-distance acknowledgement after a desired number of data blocks, rather than after every single one, reducing the number of required acknowledgements even further; data is buffered between acknowledgements. Because tape media performance is notoriously sluggish, acknowledging only every X data blocks buys significant performance benefits.
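The effect of that configurable interval is easy to quantify. A minimal sketch, with an illustrative block count and interval (not defaults from any real tape system):

```python
# Sketch: how acknowledging every X blocks, instead of every block,
# shrinks the number of long-distance acknowledgements.
import math

def wan_acks(total_blocks: int, ack_every: int) -> int:
    """WAN acknowledgements needed when acking once per `ack_every` blocks."""
    return math.ceil(total_blocks / ack_every)

blocks = 10_000
print(wan_acks(blocks, ack_every=1))   # 10000: one WAN ack per block
print(wan_acks(blocks, ack_every=64))  # 157: blocks are buffered between acks
```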

 

Compression

As in data networks, compression can also be used to increase the effective WAN bandwidth, avoid congestion, and improve performance. Cisco storage switches support different data compression algorithms, selectable by configuration, that achieve compression ratios as high as 30:1, depending on the compressibility of the data blocks. Typical ratios for common database traffic are 2:1 to 3:1.
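The bandwidth gain follows directly from the ratio. A quick sketch, with an assumed link speed for illustration:

```python
# Sketch: effective WAN throughput gained from compression.
# The 155 Mbps link speed is an illustrative assumption; real ratios
# depend entirely on how compressible the data is.

def effective_mbps(link_mbps: float, ratio: float) -> float:
    """Effective application-level throughput at a given compression ratio."""
    return link_mbps * ratio

link = 155.0  # e.g. an OC-3-class long-haul link (assumed)
for ratio in (2.0, 3.0, 30.0):
    print(f"{ratio}:1 -> {effective_mbps(link, ratio):.0f} Mbps effective")
```

So a 2:1 ratio on a 155 Mbps link carries the equivalent of 310 Mbps of uncompressed storage traffic.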

Securing storage data

Typical network security concerns are now beginning to apply to SANs. SANs have generally been small and localized within a single data center. Now, however, long-haul networks involving several service provider infrastructures might be used to move critical storage data that may never before have left a data center except on a piece of physical media in a truck. The result of this shift in the treatment of storage data is the need to apply the security features prevalent in IP network elements to the Fibre Channel environment. This involves protecting data in transit, securing against unauthorized user and device access, and guarding against malicious management misconfiguration. In a network of storage switches this involves encryption, authentication, and securing the SAN management infrastructure.


Encryption

Data encryption is important for preventing intruders from viewing or modifying confidential information. Cisco storage switches use the IPsec protocol to help ensure the confidentiality and integrity of storage data in transit. The MDS 9000 multiprotocol SAN switches, for example, include integrated hardware-based IPsec encryption and decryption supporting the Advanced Encryption Standard (AES), Data Encryption Standard (DES), and Triple DES (3DES) algorithms for iSCSI and FCIP storage traffic.
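IPsec protects both confidentiality (via AES/DES/3DES) and integrity of each packet. The integrity half of that idea can be sketched with a keyed hash: the sender tags each frame, and the receiver rejects anything altered in flight. This is only a conceptual illustration, not IPsec itself, and the key and payload are invented:

```python
# Sketch of in-transit integrity checking with an HMAC tag.
# (IPsec's ESP pairs a mechanism like this with encryption for confidentiality.)
import hashlib
import hmac

key = b"pre-shared-demo-key"  # assumption: a shared secret already exists

def tag(payload: bytes) -> bytes:
    """Compute the integrity tag the sender attaches to a frame."""
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify(payload: bytes, received_tag: bytes) -> bool:
    """Receiver recomputes the tag; any in-flight alteration fails the check."""
    return hmac.compare_digest(tag(payload), received_tag)

frame = b"SCSI block payload"
t = tag(frame)
print(verify(frame, t))                 # True: frame arrived intact
print(verify(frame + b" tampered", t))  # False: altered in transit
```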

Authentication and authorization

These functions are now necessary to prevent accidental corruption of, and malicious attacks on, SAN data: they allow only certified users and devices to connect to stored data. Storage switch-to-switch authentication, and authentication of other switches joining a storage fabric, use the cryptographically secure key-exchange and device-authentication components of the draft Fibre Channel-Security Protocols (FC-SP) standard of the American National Standards Institute's InterNational Committee for Information Technology Standards (INCITS) T11 Technical Committee. Organizations can authenticate users and devices locally in the storage switch, reducing latency, or remotely through centralized authentication, authorization, and accounting (AAA) servers.
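At the heart of FC-SP's device authentication is a challenge-response exchange in which the claimant proves knowledge of a shared secret without ever transmitting it. A highly simplified sketch (it omits the Diffie-Hellman key exchange entirely, and the secret is an invented placeholder):

```python
# Toy challenge-response in the spirit of DH-CHAP device authentication.
# The secret never crosses the wire; only a hash bound to a fresh nonce does.
import hashlib
import os

secret = b"switch-shared-secret"  # provisioned out of band (assumption)

def response(challenge: bytes, key: bytes) -> bytes:
    """Claimant's answer: a digest binding the fresh challenge to the secret."""
    return hashlib.sha256(challenge + key).digest()

challenge = os.urandom(16)            # verifier picks a fresh random nonce
answer = response(challenge, secret)  # claimant proves knowledge of the key

# Verifier recomputes with its own copy of the secret and compares.
print(response(challenge, secret) == answer)  # True: device authenticated
```

Because the challenge is random each time, a captured response cannot be replayed against a later challenge.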

Secured management infrastructure

The data center management functions of network and storage devices must also be secured to thwart unauthorized access. Malicious users with access to the console of a networked storage device can easily alter configuration. As with other Cisco network elements, the MDS 9000 switches provide secured management functions, including Secure Sockets Layer (SSL) and Secure Shell (SSH) Protocol Version 2, which secure remote access using authentication and encryption. SSHv2 can be used in conjunction with backend user authentication protocols such as TACACS+ and RADIUS that may already be in place in the organization. In this case, the storage switch acts as a client to the back-end AAA servers running these protocols. Finally, Simple Network Management Protocol version 3 (SNMPv3) support provides authentication and authorization services for accessing SNMP management information bases (MIBs).

 

VSANs for scale and fault isolation

A well-planned virtual SAN (VSAN) architecture reduces the total number of SANs (or fabrics) deployed, while enabling businesses to separate their backup, recovery, and remote data mirroring domains from application-specific SANs. VSANs allow network administrators to segment a single, physical SAN fabric into many logical, completely independent SANs. As with VLANs in an Ethernet data network, this approach enables the creation of separate SAN domains without having to build out multiple separate and costly physical infrastructures.

Isolated VSAN topologies allow administrators to use simple zoning to restrict access and traffic flow among devices by securing access at the edge. Businesses can segregate even a single storage switch into multiple virtual environments, or domains. They can completely separate different VSANs to help ensure that fabric instability or a device outage is isolated within a single VSAN and does not cause a fabric-wide disruption.
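The partitioning described above can be modeled as a simple membership check: frames are switched only between ports assigned to the same VSAN. A toy sketch, with invented port and VSAN numbers:

```python
# Toy model of VSAN partitioning on a single physical switch:
# traffic crosses only within one VSAN; other VSANs are isolated.

class Switch:
    def __init__(self) -> None:
        self.port_vsan: dict[int, int] = {}  # physical port -> VSAN ID

    def assign(self, port: int, vsan: int) -> None:
        """Place a physical port into a logical VSAN domain."""
        self.port_vsan[port] = vsan

    def can_switch(self, src_port: int, dst_port: int) -> bool:
        """Frames are forwarded only between ports in the same VSAN."""
        return self.port_vsan.get(src_port) == self.port_vsan.get(dst_port)

sw = Switch()
sw.assign(1, vsan=10)  # backup/recovery domain
sw.assign(2, vsan=10)
sw.assign(3, vsan=20)  # application-specific SAN

print(sw.can_switch(1, 2))  # True: same VSAN
print(sw.can_switch(1, 3))  # False: access and fault isolation hold
```

A fabric event inside VSAN 20 in this model cannot reach ports 1 and 2, mirroring how a device outage stays contained within its own VSAN.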

Managing diversity

As storage network environments continue to grow, organizations are deploying storage solutions using equipment from multiple vendors, each of which arrives with its own separate SAN management program. Administrators require a way to manage this heterogeneous storage environment effectively, ensuring maximum performance and cost effectiveness. The Fabric Manager for SANs lets administrators view and manage the heterogeneous fabric as a collection of devices, recreating the entire topology and representing it as a customizable map. Any device in the fabric that supports the INCITS T11 Fibre Channel-Generic Services-3 (FC-GS-3) standard for in-band management can be discovered and mapped as part of the topology. A topology window displays the discovered devices for customization and navigation, while an inventory window displays a tree-like structure of both physical and virtual elements. Yet another window displays the tools administrators can use to configure, monitor, and troubleshoot devices.
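The tree of physical and virtual elements such a tool presents can be sketched as a nested structure that is flattened into navigable paths. Every switch, vendor, and device name below is invented for illustration:

```python
# Sketch: a discovered heterogeneous fabric as a tree of physical switches
# containing virtual (VSAN) elements, flattened for an inventory view.

fabric = {
    "switch-A": {  # physical element from one vendor (hypothetical)
        "vendor": "vendor-1",
        "vsans": {10: ["host-1", "array-1"], 20: ["tape-lib-1"]},
    },
    "switch-B": {  # physical element from another vendor (hypothetical)
        "vendor": "vendor-2",
        "vsans": {10: ["array-2"]},
    },
}

def inventory(tree: dict) -> list[str]:
    """Flatten the topology into 'switch/vsan/device' paths for display."""
    return [
        f"{sw}/vsan{vsan}/{dev}"
        for sw, info in tree.items()
        for vsan, devices in info["vsans"].items()
        for dev in devices
    ]

for path in inventory(fabric):
    print(path)  # e.g. switch-A/vsan10/host-1
```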

 

Fabric Manager also supports open interfaces with access to raw performance and configuration information within switches that can be used by third-party management applications. Support for the Storage Networking Industry Association's (SNIA) Storage Management Initiative Specification (SMI-S), for example, enables element management across multiple vendors' SAN management products.

As organizations tackling business continuance build out their SANs, they are finding themselves face to face with many of the WAN-centric performance, security, and management issues that have long confronted them in their data networks. As increasing volumes of storage data traverse the WAN, distance-driven delay and new security exposures are rearing their heads. Enterprises should look for SAN acceleration techniques, multifaceted security support, and industry-standard management interfaces and capabilities to ensure that their SANs perform well and remain secure, cost-effective, and manageable.

The author is the Director of Engineering in Cisco's Data Center, Switching, and Security Technology Group.