The IBM TotalStorage DS6000 Series: Concepts and Architecture Enterprise-class storage functions in a compact and modular design On demand scalability and multi-platform connectivity Enhanced configuration flexibility with virtualization ibm.com/redbooks Front cover Cathy Warrick Olivier Alluis Werner Bauer Heinz Blaschek Andre Fourie...
International Technical Support Organization The IBM TotalStorage DS6000 Series: Concepts and Architecture March 2005 SG24-6471-00...
DS6000 microcode was used for the screen captures and command output, so some details may vary from the currently available microcode. Note: This book contains detailed information about the architecture of IBM's DS6000 product family. We recommend that you consult the product documentation and the Implementation Redbooks for more detailed information on how to implement the DS6000 in your environment.
IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead.
Other company, product, and service names may be trademarks or service marks of others. The following terms are trademarks of the IBM Corporation: DFSORT™, Enterprise Storage Server®, ESCON®, FlashCopy®, FICON®, Geographically Dispersed Parallel Sysplex™, GDPS®, HACMP™, IBM®, IMS™, Lotus Notes®, Lotus®, Multiprise®, MVS™, Netfinity®, Notes®, OS/390®, OS/400®, Parallel Sysplex®...
This revision reflects the addition, deletion, or modification of new and changed information described below. Changed information (see change bars):
- Fixed errors in the table on pp. 330-331
- SAN boot is available for the IBM® eServer BladeCenter®
- Updated with August 2005 announcement information
- Updated code load information
- Updated DS CLI user management information...
French Atomic Research Industry (CEA - Commissariat à l'Energie Atomique), he joined IBM in 1998. He has been a Product Engineer for the IBM High End Systems, specializing in the development of the IBM DWDM solution. Four years ago, he joined the SAN pre-sales support team in the Product and Solution Support Center in Montpellier working in the Advanced Technical Support organization for EMEA.
ESS and FAStT. He has worked at IBM for six and a half years. Before joining IBM, Chuck was a hardware CE on UNIX systems for 10 years and taught basic UNIX at Midland College for six and a half years in Midland, Texas.
Anthony Vandewerdt is an Accredited IT Specialist who has worked for IBM Australia for 15 years. He has worked on a wide variety of IBM products and for the last four years has specialized in storage systems problem determination. He has extensive experience on the IBM ESS, SAN, 3494 VTS and wave division multiplexors.
Amit Dave, Selwyn Dickey, Chuck Grimm, Nick Harris, Andy Kulich, Jim Tuckwell, Joe Writz IBM Rochester Charlie Burger, Gene Cullum, Michael Factor, Brian Kraemer, Ling Pong, Jeff Steffan, Pete Urbisci, Steve Van Gundy, Diane Williams IBM San Jose Jana Jamsek IBM Slovenia DS6000 Series: Concepts and Architecture...
Many thanks to the graphics editor, Emma Jacobs, and the editor, Alison Chandler. Become a published author Join us for a two- to six-week residency program! Help write an IBM Redbook dealing with specific products or solutions, while getting hands-on experience with leading-edge technologies.
Chapter 1. Introducing the IBM TotalStorage DS6000 series
This chapter provides an overview of the features, functions, and benefits of the IBM TotalStorage DS6000 series of storage servers. The topics covered include: Overview of the DS6000 series and its benefits...
1.1 The DS6000 series, a member of the TotalStorage DS Family IBM has a wide range of product offerings that are based on open standards and share a common set of tools, interfaces, and innovative features. The IBM TotalStorage DS Family...
1.2 IBM TotalStorage DS6000 series unique benefits The IBM TotalStorage DS6000 series is a Fibre Channel based storage system that supports a wide range of IBM and non-IBM server platforms and operating environments. This includes open systems, zSeries, and iSeries servers.
Figure 1-3 DS6800 with five DS6000 expansion enclosures in a rack DS6800 controller enclosure (Model 1750-511) IBM TotalStorage systems are based on a server architecture. At the core of the DS6800 controller unit are two active/active RAID controllers based on IBM’s industry leading PowerPC®...
The host ports auto-negotiate to either 2 Gbps or 1 Gbps link speeds. Attachment of up to seven DS6000 expansion enclosures. Very small size, weight, and power consumption: all DS6000 series enclosures are 3U in height and mountable in a standard 19-inch rack.
DS6000 series systems. The software runs on a Windows or Linux system that the client can provide. IBM TotalStorage DS Storage Manager The DS Storage Manager is a Web-based graphical user interface (GUI) that is used to perform logical configurations and Copy Services management functions.
As data and storage capacity grow faster year by year, most customers can no longer afford to stop their systems to back up terabytes of data; it simply takes too long. Therefore, IBM has developed fast replication techniques that can provide a point-in-time copy of the customer's data in a few seconds or even less.
ESS. It provides a synchronous copy of LUNs or zSeries CKD volumes. A write I/O to the source volume is not complete until it is acknowledged by the remote system. Metro Mirror supports distances of up to 300 km.
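To make the synchronous behavior concrete, here is a minimal sketch, in Python and entirely my own illustration (the class and function names are hypothetical, not IBM code): the host's write is only acknowledged after the remote system confirms it, which is why the remote copy never lags but every write pays the link's round-trip latency.

```python
# Illustrative model of a Metro Mirror-style synchronous write (assumed
# mechanics, not actual DS6000 microcode behavior).

class Volume:
    """A trivial block store standing in for a LUN or CKD volume."""
    def __init__(self):
        self.blocks = {}

    def write(self, lba, data):
        self.blocks[lba] = data

def synchronous_write(source, target, lba, data, link_latency_ms):
    """Write to the source, replicate to the target, and only then
    acknowledge the host; latency grows with the round trip to the
    remote site, which is why distance is bounded (up to 300 km)."""
    source.write(lba, data)
    target.write(lba, data)               # remote write over the PPRC link
    total_latency = 2 * link_latency_ms   # round trip to the remote system
    return ("ack", total_latency)

local, remote = Volume(), Volume()
status, latency = synchronous_write(local, remote, lba=42,
                                    data=b"payroll", link_latency_ms=1.5)
# Once the host sees the ack, both copies are identical.
assert local.blocks == remote.blocks
```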
Asynchronous Cascading PPRC. You first copy your data synchronously to an intermediate site and from there you go asynchronously to a more distant site. Metro/Global Copy is available on the DS6800, but the following General Statement of Direction from IBM was included in the October 12, 2004 Hardware Announcement: IBM intends to offer a long-distance business continuance solution across three sites allowing for recovery from the secondary or tertiary site with full data consistency.
Most vendors use Fibre Channel Arbitrated Loops, which can make it difficult to identify failing disks and are more susceptible to losing access to storage. The IBM DS6000 series provides a dual active/active design, including dual Fibre Channel switched disk drive subsystems that provide four paths to each disk drive.
In this case an Ethernet connection to the external network is necessary. The DS6800 can use this link to place a call to IBM or to another service provider when it requires service. With access to the machine, service personnel can perform service tasks, such as viewing error and problem logs or initiating trace and dump retrievals.
It is part of a complete set of disk storage products in the IBM TotalStorage DS Family, and it is the disk product of choice for environments that require the highest levels of reliability, scalability, and performance available from IBM for mission-critical workloads.
TotalStorage DS Command-Line Interface (DS CLI) and the IBM TotalStorage DS open application programming interface (API). IBM is unique in the industry in offering, with the DS6000 and DS8000 series of products, storage systems with common management and copy functions across the whole DS family.
DS6000 series the copy is available for production after a few seconds. Some of the differences in functions will disappear in the future. For the DS6000 series there is a General Statement of Direction from IBM (from the October 12, 2004 Hardware Announcement):...
1.3.4 Use with other virtualization products
IBM TotalStorage SAN Volume Controller is designed to increase the flexibility of your storage infrastructure by introducing a new layer between the hosts and the storage systems. The SAN Volume Controller can enable a tiered storage environment for increased flexibility in storage management.
1.4.3 IBM multipathing software IBM Multi-path Subsystem Device Driver (SDD) provides load balancing and enhanced data availability capability in configurations with more than one I/O path between the host server and the DS6800.
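The load balancing and failover behavior described above can be sketched in a few lines. This is a toy model under my own assumptions (path representation and function names are invented, not SDD internals): spread I/O round-robin over the paths to the volume's preferred controller, and only fall back to the other controller's paths when no preferred path is left.

```python
# Hypothetical preferred-path selection, loosely in the spirit of a
# multipathing driver such as SDD (illustrative only).
import itertools

def select_path(paths, preferred_controller):
    """paths: list of (controller_id, online) tuples.
    Returns an endless round-robin iterator over usable path indices,
    preferring paths to the owning controller."""
    preferred = [i for i, (c, up) in enumerate(paths)
                 if up and c == preferred_controller]
    alternate = [i for i, (c, up) in enumerate(paths)
                 if up and c != preferred_controller]
    usable = preferred or alternate   # fail over only when needed
    if not usable:
        raise IOError("no paths available")
    return itertools.cycle(usable)

# Two paths to each controller; the volume is owned by controller 0.
paths = [(0, True), (0, True), (1, True), (1, True)]
chooser = select_path(paths, preferred_controller=0)
picks = [next(chooser) for _ in range(4)]   # round-robins over paths 0 and 1
```

If both paths to controller 0 go offline, the same call returns a cycle over the surviving controller 1 paths, which is the failover case the text describes.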
Figure 2-1 DS6800 front view
The front view of the DS6800 server enclosure is shown in Figure 2-1. On the left is the front display panel that provides status indicators. You can also see the disk drive modules (DDMs).
The DS6000 expansion enclosure is used to add capacity to an existing DS6800 server enclosure. From the front view, it is effectively identical to the server enclosure (so it is not pictured). The rear view is shown in Figure 2-3. You can see the left and right power supplies, the rear display panel, and the upper and lower SBOD (Switched Bunch Of Disks) controllers.
Figure 2-4 DS6000 architecture
When a host performs a read I/O, the controllers fetch the data from the disk arrays via the high performance switched disk architecture. The data is then cached in volatile memory in case it is required again.
Using the PowerPC architecture as the primary processing engine sets the DS6800 apart from other disk storage systems on the market. The design decision to use processor memory as I/O cache is a key element of the IBM storage architecture. Although a separate I/O cache could provide fast access, it cannot match the access speed of main memory.
is not feasible in real-life systems), SARC uses prefetching for sequential workloads. Sequential access patterns naturally arise in video-on-demand, database scans, copy, backup, and recovery. The goal of sequential prefetching is to detect sequential access and effectively pre-load the cache with data so as to minimize cache misses. For prefetching, the cache management uses tracks.
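The detection step can be illustrated with a short sketch. This is assumed logic of my own, not the actual SARC implementation: watch the recent track accesses, and once a consecutive run is seen, pre-load the next few tracks so that the sequential reader finds them already in cache.

```python
# Toy sequential-prefetch detector (illustrative; thresholds are invented).
def access(track, history, cache, prefetch_depth=2, run_length=3):
    """Record a track access; if the last run_length accesses were
    consecutive, pre-load the next prefetch_depth tracks into the cache."""
    history.append(track)
    cache.add(track)
    recent = history[-run_length:]
    sequential = len(recent) == run_length and all(
        b - a == 1 for a, b in zip(recent, recent[1:]))
    if sequential:
        for t in range(track + 1, track + 1 + prefetch_depth):
            cache.add(t)     # prefetched before the host asks for it

cache, history = set(), []
for t in (10, 11, 12):       # a sequential run of three tracks
    access(t, history, cache)
# tracks 13 and 14 are now cached ahead of the host's next reads
```

A random access pattern never trips the detector, so no cache space is wasted on prefetch for it.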
DDM in appearance but contains no electronics. As discussed earlier, from the front, the server enclosure and the expansion enclosure appear almost identical. When identifying the DDMs, they are numbered 1 to 16 from front top left to front bottom right as depicted in Figure 2-6.
Figure 2-7 Industry standard FC-AL disk enclosure
The main problems with standard FC-AL access to DDMs are:
- The full loop is required to participate in data transfer.
- Full discovery of the loop via LIP (loop initialization protocol) is required before any data transfer.
- Loop stability can be affected by DDM failures.
Expansion is achieved by adding expansion enclosures onto each loop, until each loop has four enclosures (for a total of 128 DDMs). The server enclosure is the first enclosure on loop 0, which is why we can only add a total of seven expansion enclosures.
Figure 2-10 Switched disk expansion
DDMs
Each DDM is hot-pluggable and has two indicators. The green indicator shows disk activity, while the amber indicator is used with light path diagnostics to allow the customer to identify and replace a failed DDM.
The RAID controller cards are the heart and soul of the system. Each card is the equivalent of a cluster node in an ESS. IBM has leveraged its extensive development of the ESS host adapter and device adapter function to create a total repackaging. It actually uses DS8000 host adapter and device adapter logic, which allows almost complete commonality of function and code between the two series (DS6000 and DS8000).
You add one expansion enclosure to each loop until both loops are populated with four enclosures each (remembering the server enclosure represents the first enclosure on the first loop). Note that while we use the term disk loops, and the disks themselves are FC-AL disks, each disk is actually attached to two separate Fibre Channel switches.
In addition, there is a serial port provided for each controller. This is not a modem port and is not intended to have a modem attached to it. Its main purpose is for maintenance by an IBM System Service Representative (SSR), and possibly for some initial setup tasks.
In Figure 2-14, the server enclosure has two expansion enclosures attached to the loop (loop 0). The server enclosure itself is the first enclosure on loop 0. The upper controller in the server enclosure is cabled to the upper SBOD card in the expansion enclosure. The lower controller is cabled to the lower SBOD card.
[Figure callout residue: front panel indicators (data in cache on battery, CRU fault on rear, fault in expansion enclosure) and cabling from the server enclosure disk contrl ports to the SBOD card ports of the first expansion enclosure on loop 1 (first enclosure on the loop).]
Table 2-1 summarizes the purpose of each indicator.
Table 2-1 DS6000 front panel indicators:
- System Power (green)
- System Identify (blue)
- System Information (amber)
- System Alert (amber)
- Data Cache On Battery (green)
- CRU Fault on Rear (amber)
- Fault in External Enclosure (amber)
2.8 Rear panel
All of the indicators on the DS6000 front panel are mirrored to the rear panel.
The left-hand digit displays the loop number, which will be 0 or 1 depending on which loop the expansion enclosure is attached to; for the server enclosure it will always be 0. The right-hand digit displays the enclosure base address. It will range from 0 to 3. This address will be set automatically after the enclosure is powered on and joins the loop.
2.9 Power subsystem The power subsystem of the DS6800 consists of two redundant power supplies and two battery backup units (BBUs). DS6000 expansion enclosures contain power supplies but not BBUs. The power supplies convert input AC power to 3.3V, 5V, and 12V DC power. The battery units provide DC power, but only to the controller card memory cache in the event of a total loss of all AC power input.
There are thus two BBUs present in the DS6800 server enclosure. If you compare their function to that of the different batteries in the ESS, they are the NVS batteries. They allow un-destaged cache writes in the NVS area of controller memory to be protected in the event of a sudden loss of AC power to both power supplies.
Each DS6800 server enclosure ships with a special service cable and DB9 converter. This cable looks very similar to a telephone cable. These components should be kept aside for use by an IBM System Service Representative and will normally be used only for problem debug.
2.13 Summary This chapter has described the various components that make up a DS6000. For additional information, there is documentation available on the Web at: http://www-1.ibm.com/servers/storage/support/disk/index.html Chapter 2. Components...
3.1 Controller RAS The DS6800 design is built upon IBM’s highly redundant storage architecture. It has the benefit of more than five years of ESS 2105 development. The DS6800, therefore, employs similar methodology to the ESS to provide data integrity when performing fast write operations and controller failover.
Figure 3-1 DS6800 normal data flow
Figure 3-1 illustrates how the cache memory of controller 0 is used for all logical volumes that are members of the even LSSs. Likewise, the cache memory of controller 1 supports all logical volumes that are members of the odd LSSs.
8 seconds. On logical volumes that are not configured with RAID-10 storage, certain RAID-related recoveries may cause latency impacts in excess of 15 seconds. If you have real-time response requirements in this area, contact IBM to determine the latest information on how to manage your storage to meet your requirements.
The DS6800 BBUs are designed to be replaced every four years. From the rear of the server enclosure, the left-hand BBU supports the upper controller, while the right-hand BBU supports the lower controller.
Figure 3-3 A host with a single path to the DS6800
For best reliability and performance, it is recommended that each attached host have two connections, one to each controller, as depicted in Figure 3-4. This allows it to maintain its connection to the DS6800 through either a controller failure or an HBA or HA (host adapter) failure.
Device Driver (SDD) to manage both path failover and preferred path determination. SDD is supplied free of charge to all IBM customers who use ESS 2105, SAN Volume Controller (SVC), DS6800, or DS8000. A new version of SDD (Version 1.6) will also allow SDD to manage pathing to the DS6800 and DS8000.
A physical FICON path is established when the DS6800 port sees light on the FICON fiber (for example, a cable is plugged in to a DS6800 host adapter, or a processor, or the DS6800 is powered on, or a path is configured online by OS/390). At this time, logical paths are established through the FICON port between the host and some or all of the LCUs in the DS6800, controlled by the HCD definition for that host.
array site (where the S stands for spare). A four disk array also effectively uses 1 disk for parity, so it is referred to as a 3+P array. In a DS6000, a RAID-5 array built on two array sites will contain either seven disks or eight disks, again depending on whether the array sites chosen had pre-allocated spares.
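The 3+P / 4+P arithmetic above is easy to get wrong, so here is a back-of-the-envelope sketch (my own illustration; the function name and the simplification of "one disk's worth of parity" are assumptions, not the exact microcode layout): an array is built from one or two 4-DDM array sites, some of whose DDMs may have been taken as spares, and one disk's worth of space goes to parity.

```python
# Rough usable-capacity calculation for a DS6000-style RAID-5 array
# (illustrative; real capacities differ slightly due to formatting overhead).
def raid5_usable(ddm_gb, array_sites, spares_taken):
    """Usable GB: total DDMs minus spares, minus one DDM's worth of parity."""
    disks = 4 * array_sites - spares_taken
    data_disks = disks - 1            # one disk's worth of parity
    return data_disks * ddm_gb

# One array site with no spare -> 3+P; with a pre-allocated spare -> 2+P+S.
assert raid5_usable(73, array_sites=1, spares_taken=0) == 3 * 73
assert raid5_usable(73, array_sites=1, spares_taken=1) == 2 * 73
# Two array sites with one spare -> the seven-disk (6+P+S) case in the text.
```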
This normally means that two spares will be created in the server enclosure and two spares in the first expansion enclosure. Spares are created as the array sites are created, which occurs when the DDMs are installed. After four spares have been created for the entire storage unit, no more spares are normally needed.
DDM, then approximately half of the 146 GB DDM would be wasted since that space is not needed. The problem here is that the failed 73 GB DDM will be replaced with a new 73 GB DDM. So the DS6000 microcode will most likely migrate the data on the 146 GB DDM onto the recently replaced 73 GB DDM.
paths), since two paths to the expansion controller would be available for the remaining controller.
Figure 3-5 DS6000 switched disk connections
3.4 Power subsystem RAS
As discussed in Chapter 2, “Components”...
Important: If you install the DS6000 so that both power supplies are attached to the same power strip, or where two power strips are used but they connect to the same circuit breaker or the same switch-board, then the DS6000 will not be well protected from external power failures.
a. View an animation of the removal and replacement procedures.
b. View an informational screen to determine what effect this repair procedure will have on the DS6000.
c. Order a replacement part from IBM via an Internet connection.
Figure 3-7 Power supply replacement via the GUI 2. Upon arrival of the replacement supply, the user physically removes the faulty power supply and then installs the replacement power supply. 3. Finally, the user checks the component view to review system health after the repair. An example of this is shown in Figure 3-8.
This indicator is similar in function to the xSeries Information indicator. It is present on both the server and expansion enclosures and is used in problem determination. This indicator will be on solid when a minor error condition exists in the system. For example, a log entry has been written that the user needs to look at.
If a part is designated a FRU, then this implies that the spare part needs to be replaced by an IBM Service Representative. Within CRU parts, there are currently two tiers: Tier 1 CRUs are relatively easy to replace, while Tier 2 CRUs are generally more expensive parts or parts that require more skill to replace.
Each DS6800 controller also has microcode that can be updated. All of these code releases come as a single package installed all at once. As IBM continues to develop and improve the DS6800, new releases of firmware and microcode will become available which offer improvements in both function and reliability.
progress using the DS Management Console GUI. Clearly a multipathing driver (such as SDD) is required for this process to be concurrent. There is also the alternative to load code non-concurrently. This means that both controllers are unavailable for a short period of time. This method can be performed in a smaller window of time.
In normal operation, however, disk drives are typically accessed by one device adapter and one server. Each path on each device adapter can be active concurrently, but the set of eight paths on the two device adapters can all be concurrently accessing independent disk drives.
An array site is a group of four DDMs. Which DDMs make up an array site is predetermined by the DS6000, but note that there is no predetermined server affinity for array sites. The DDMs selected for an array site are chosen from the same disk enclosure string (see Figure 4-2 on page 68).
According to the DS6000 sparing algorithm, up to two spares may be taken from the array sites used to construct the array on each device interface (loop). See Chapter 5, “IBM TotalStorage DS6000 model overview” on page 83 for more details.
So, an array is formed using one or two array sites, and while the array could be accessed by each adapter of the device adapter pair, it is managed by one device adapter. Which adapter and which server manages this array is defined later in the configuration path. 4.2.3 Ranks...
There is no predefined affinity of ranks or arrays to a storage server. The affinity of the rank (and its associated array) to a given server is determined at the point it is assigned to an extent pool.
Of course you could also define just one FB extent pool and assign it to one server, and define a CKD extent pool and assign it to the other server. Additional extent pools may be desirable to segregate ranks with different DDM types.
4.2.5 Logical volumes
A logical volume is composed of a set of extents from one extent pool. On a DS6000, up to 8192 (8K) volumes can be created: 8K CKD volumes, or 8K FB volumes, or a mix of both types (for example, 4K CKD plus 4K FB).
Fixed block LUNs
A logical volume composed of fixed block extents is called a LUN.
Figure 4-6 Allocation of a CKD logical volume
Figure 4-6 shows how a logical volume is allocated, with a CKD volume as an example.
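The arithmetic behind the Figure 4-6 example can be sketched as follows. This is my own illustration (the function names and the pool model are invented): a CKD extent is 1113 cylinders, so a 3390 Model 3 volume of 3339 cylinders consumes exactly three extents from the ranks of the extent pool; a volume that is not a multiple of 1113 cylinders leaves part of its last extent unused.

```python
# Sketch of CKD extent allocation arithmetic (illustrative only).
import math

EXTENT_CYLS = 1113   # cylinders per CKD extent

def extents_needed(cylinders):
    """Extents are allocated whole, so round up."""
    return math.ceil(cylinders / EXTENT_CYLS)

def allocate(volume_cyls, pool_free_extents):
    """Take extents for a volume from a pool; fail if the pool is too small."""
    need = extents_needed(volume_cyls)
    if need > pool_free_extents:
        raise ValueError("not enough free extents in pool")
    return need, pool_free_extents - need

# A 3390 Model 3 has 3339 cylinders: exactly three 1113-cylinder extents.
used, remaining = allocate(3339, pool_free_extents=5)
```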
Figure 4-7 Creation of an FB LUN
iSeries LUNs
iSeries LUNs are also composed of fixed block 1 GB extents. There are, however, some special aspects to iSeries LUNs. LUNs created on a DS6000 are always RAID protected; LUNs are based on RAID-5 or RAID-10 arrays.
The reformatting of the extents is a background process. IBM plans to further increase the flexibility of LUN/volume management. We cite from the DS6000 announcement letter the following Statement of General Direction:...
For more information on PAV see Chapter 10, “DS CLI” on page 195. For open systems, LSSs do not play an important role except in determining which server the LUN is managed by (and which extent pools it must be allocated in) and in certain aspects related to Metro Mirror, Global Mirror, or any of the other remote copy implementations.
A DS6000 provides mechanisms to control host access to LUNs. In most cases a server has two or more HBAs and the server needs access to a group of LUNs. For easy management of server access to logical volumes, the DS6000 introduced the concept of host attachments and volume groups.
Host attachment HBAs are identified to the DS6000 in a host attachment construct that specifies the HBA's World Wide Port Names (WWPNs). A set of host ports can be associated through a port group attribute that allows a set of HBAs to be managed collectively. This port group is referred to as host attachment within the GUI.
DB2-2, accessed by server AIXprod2. In our example there is, however, one volume in each group that is not shared. The server in the lower left has four HBAs, which are divided into two distinct host attachments. One can access some volumes shared with AIXprod1 and AIXprod2, while the other HBAs have access to a volume group called docs.
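The relationship between HBAs, host attachments, and volume groups can be captured in a toy model. The dictionaries, WWPNs, and LUN names below are my own invented illustration (not DS CLI objects): each HBA WWPN belongs to a host attachment, and each host attachment is assigned one volume group whose LUNs its HBAs may access.

```python
# Toy model of host attachments and volume groups (illustrative only).
host_attachments = {
    "AIXprod1": {"wwpns": {"10000000C9000001", "10000000C9000002"},
                 "volume_group": "DB2-1"},
    "AIXprod2": {"wwpns": {"10000000C9000003"},
                 "volume_group": "DB2-2"},
}
volume_groups = {
    "DB2-1": {"lun0", "lun1", "shared_lun"},
    "DB2-2": {"lun2", "shared_lun"},
}

def luns_for_wwpn(wwpn):
    """LUNs visible to an HBA, found via its host attachment's volume group.
    An unknown WWPN sees nothing, which is the access-control point."""
    for att in host_attachments.values():
        if wwpn in att["wwpns"]:
            return volume_groups[att["volume_group"]]
    return set()
```

Note how a volume can appear in more than one volume group (the shared LUN here), which is how two servers share data while each also keeps private volumes.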
As explained in the previous chapters, there are several options on how to create logical volumes. You can select an extent pool that is owned by one server. There could be just one extent pool per server or you could have several. The ranks of extent pools could come from arrays on different loops or from the same loop.
Figure 4-12 Optimal distribution of data
4.3 Benefits of virtualization
The DS6000 physical and logical architecture defines new standards for enterprise storage virtualization. The main benefits of the virtualization layers are: Flexible LSS definition allows maximization/optimization of the number of devices per LSS.
Chapter 5. IBM TotalStorage DS6000 model overview
This chapter provides an overview of the IBM TotalStorage DS6000 storage server, which is from here on referred to as the DS6000. While the DS6000 is physically small, it is a highly scalable, high-performance storage server. Topics covered in this chapter are:...
Advanced functionality
Extensive scalability
Increased addressing capabilities
The ability to connect to all relevant host server platforms
5.1.1 DS6800 Model 1750-511
The 1750-511 model contains control unit functions as well as a rich set of advanced functions, and holds up to 16 disk drive modules (DDMs). It provides a minimum capacity of 584 GB with 8 DDMs of 73 GB each.
– Two 2 Gbps outbound ports
– One Fibre Channel switch
Disk enclosure which holds up to 16 Fibre Channel DDMs. Two AC/DC power supplies with embedded enclosure cooling units. Support for attachment to the DS6800 Model 1750-511.
The DS6800 Model 1750-EX1, like the 1750-511, is a self-contained 3 EIA-unit (3U) enclosure, and it can also be mounted in a standard 19-inch rack.
Figure 5-3 DS6800 Model 1750-EX1 rear view
Controller model 1750-511 and expansion model 1750-EX1 have the same front appearance. Figure 5-3 displays the rear view of the expansion enclosure, which is a bit different from the rear view of the 1750-511 model.
The DS6800 server enclosure can have from 8 up to 16 DDMs and can connect up to 7 expansion enclosures. Each expansion enclosure can also have 16 DDMs. Therefore, in total, a DS6800 storage unit can have 16 + 16 x 7 = 128 DDMs.
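The scaling arithmetic above can be stated as a one-line sketch (the function name is mine): one server enclosure plus up to seven expansion enclosures, each holding at most 16 DDMs.

```python
# DDM count for a DS6800 storage unit (illustrative helper).
def max_ddms(expansion_enclosures):
    """16 DDMs in the server enclosure plus 16 per expansion enclosure."""
    if not 0 <= expansion_enclosures <= 7:
        raise ValueError("a DS6800 supports at most 7 expansion enclosures")
    return 16 + 16 * expansion_enclosures

assert max_ddms(7) == 128   # fully configured storage unit
# e.g. raw capacity with 300 GB DDMs: 128 * 300 GB = 38,400 GB
```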
[Figure residue: an intermix configuration in a DS6800, with up to 16 DDMs per enclosure, 146 GB 10k rpm DDMs and 300 GB 10k rpm DDMs in separate expansion enclosures, and cables between the RAID controller disk exp/disk contrl ports of the server enclosure and the expansion enclosures.] You can configure an intermix configuration in a DS6800.
In this chapter, we describe the architecture and functions of Copy Services for the DS6000. Copy Services is a collection of functions that provide disaster recovery, data migration, and data duplication functions. Copy Services run on the DS6000 server enclosure and they support open systems and zSeries environments.
PPRC, which include: – IBM TotalStorage Metro Mirror, previously known as Synchronous PPRC – IBM TotalStorage Global Copy, previously known as PPRC Extended Distance – IBM TotalStorage Global Mirror, previously known as Asynchronous PPRC We explain these functions in detail in the next section.
Figure 6-1 FlashCopy concepts
FlashCopy provides a point-in-time copy. When a FlashCopy operation is invoked, the process of establishing the FlashCopy pair and creating the necessary control bitmaps takes only a few seconds to complete. Thereafter, you have access to a point-in-time copy of the source volume.
The background copy may have a slight impact on your application because the physical copy needs some storage resources, but the impact is minimal because host I/O is given priority over the background copy. If you want, you can also issue FlashCopy with the no background copy option.
1. First, you issue a full FlashCopy with the change recording option, creating bitmaps in the server enclosure. The change recording bitmaps are used for recording the tracks which are changed on the source and target volumes after the last FlashCopy.
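The change-recording idea can be sketched in a few lines. This is assumed mechanics of my own (class and method names invented, and only source-side changes are modeled for brevity): a bitmap remembers which tracks were written after the initial copy, so a later refresh recopies only those tracks instead of the whole volume.

```python
# Toy model of incremental FlashCopy via a change-recording bitmap
# (illustrative; not actual DS6000 bitmap structures).
class FlashCopyPair:
    def __init__(self, source, target):
        self.source, self.target = source, target
        self.changed = set()                 # change-recording "bitmap"
        self.target[:] = source[:]           # initial full copy

    def host_write(self, track, data):
        self.source[track] = data
        self.changed.add(track)              # remember for the next increment

    def refresh(self):
        """Incremental FlashCopy: recopy only tracks changed since last copy."""
        copied = len(self.changed)
        for t in self.changed:
            self.target[t] = self.source[t]
        self.changed.clear()
        return copied

src = ["a", "b", "c", "d"]
pair = FlashCopyPair(src, [None] * 4)
pair.host_write(2, "C")
recopied = pair.refresh()   # only the one changed track is recopied
```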
Figure 6-3 Data Set FlashCopy
Multiple Relationship FlashCopy
Multiple Relationship FlashCopy allows a source to have FlashCopy relationships with multiple targets simultaneously. A source volume or extent can be FlashCopied to up to 12 target volumes or target extents, as illustrated in Figure 6-4.
Figure 6-4 Multiple Relationship FlashCopy
Note: If a FlashCopy source volume has more than one target, that source volume can be involved only in a single incremental FlashCopy relationship.
Consistency Group FlashCopy Consistency Group FlashCopy allows you to freeze (temporarily queue) I/O activity to a LUN or volume. Consistency Group FlashCopy helps you to create a consistent point-in-time copy across multiple LUNs or volumes, and even across multiple storage units. What is Consistency Group FlashCopy? If a consistent point-in-time copy across many logical volumes is required, and the user does not wish to quiesce host I/O or database operations, then the user can use Consistency...
Important: Consistency Group FlashCopy can create host-based consistent copies; they are not application-based consistent copies. The copies have power-fail consistency. This means that if you suddenly power off your server without stopping your applications and without destaging the data in the file cache, the data in the file cache may be lost and you may need recovery procedures to restart your applications.
Mirror and Copy 2244 function authorization model, which is 2244 Model RMC. DS6000 server enclosures can participate in Remote Mirror and Copy solutions with the ESS Model 750, ESS Model 800, and DS6000 and DS8000 server enclosures. To establish a PPRC relationship between the DS6000 and ESS, the ESS needs to have licensed internal code (LIC) version 2.4.2 or later.
Figure 6-7 Metro Mirror
Global Copy (PPRC-XD)
Global Copy copies data non-synchronously and over longer distances than is possible with Metro Mirror. When operating in Global Copy mode, the source volume sends a periodic, incremental copy of updated tracks to the target volume, instead of sending a constant stream of updates.
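The contrast with the synchronous case can be sketched as follows, under my own simplifying assumptions (class and method names invented): host writes are acknowledged immediately and only marked in an out-of-sync bitmap, and a periodic background cycle drains the bitmap to the remote volume, so the remote copy is a "fuzzy" copy between cycles.

```python
# Toy model of Global Copy-style asynchronous replication (illustrative).
class GlobalCopy:
    def __init__(self, tracks):
        self.local = [0] * tracks
        self.remote = [0] * tracks
        self.out_of_sync = set()      # tracks updated since the last cycle

    def host_write(self, track, data):
        self.local[track] = data
        self.out_of_sync.add(track)   # ack the host without waiting on the link
        return "ack"

    def drain_cycle(self):
        """Send one periodic, incremental batch of updated tracks."""
        batch, self.out_of_sync = self.out_of_sync, set()
        for t in batch:
            self.remote[t] = self.local[t]
        return len(batch)

gc = GlobalCopy(8)
gc.host_write(1, 11)
gc.host_write(5, 55)
sent = gc.drain_cycle()   # both updated tracks go out in one increment
```

Between drain cycles the remote volume lags the local one, which is exactly why Global Mirror layers consistency groups and FlashCopy on top of Global Copy to get a usable recovery point.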
This solution is based on the existing Global Copy and FlashCopy. With Global Mirror, the data that the host writes to the server enclosure at the local site is asynchronously shadowed to the server enclosure at the remote site. A consistent copy of the data is then automatically maintained on the server enclosure at the remote site.
Efficient synchronization of the local and remote sites with support for failover and failback modes, helping to reduce the time that is required to switch back to the local site after a planned or unplanned outage.
Figure 6-9 Global Mirror
How Global Mirror works
We explain how Global Mirror works in Figure 6-10 on page 101.
Figure 6-10 Global Mirror: how it works
Automatic cycle in an active Global Mirror session:
1. Create a Consistency Group of volumes at the local site.
2. Send the increment of consistent data to the remote site.
3. FlashCopy at the remote site.
4.
Note: When you implement Global Mirror, you set up the FlashCopy between the B and C volumes with the No Background copy option. Before the latest data is updated to the B volumes, the last consistent data in the B volume is moved to the C volumes. Therefore, at some point in time, part of the consistent data is in the B volume, and the other part of the consistent data is in the C volume.
Primary server Server write DS8000 or ESS Figure 6-11 z/OS Global Mirror (DS6000 is used as secondary system) 6.2.4 Comparison of the Remote Mirror and Copy functions In this section we summarize the use of and considerations for Remote Mirror and Copy functions.
Advantages
Global Mirror can copy over nearly unlimited distances. It is scalable across server enclosures. It can achieve a low RPO given sufficient link bandwidth. Global Mirror causes little or no impact to your application system.
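The relationship between link bandwidth and achievable RPO can be sketched with simple arithmetic. The write rate and RPO target below are hypothetical example values, not DS6000 measurements:

```shell
# Rough Global Mirror sizing sketch (hypothetical numbers): to keep the
# recovery point objective (RPO) low, the replication links must sustain
# the host write rate to the mirrored volumes.
write_rate_mb_s=40     # example host write rate (MB/s)
rpo_seconds=60         # example target: remote site no more than 60s behind

# Data generated within one RPO window that the links must be able to drain:
backlog_mb=$(( write_rate_mb_s * rpo_seconds ))
echo "links must drain ${backlog_mb} MB within ${rpo_seconds}s"
```

If the links cannot sustain the write rate, the backlog grows and the achievable RPO lengthens accordingly.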
Note: Managing Global Mirror involves many complex operations. Therefore, we recommend using management utilities (for example, Global Mirror Utilities) or management software (for example, IBM Multiple Device Manager) for Global Mirror.

6.2.5 What is a Consistency Group?
With Copy Services, you can create a Consistency Group. A Consistency Group keeps data consistency in the copy; consistency means that the order of dependent writes is preserved.
In order for the data to be consistent, the deposit of the paycheck must be applied before the withdrawal of cash for each of the checking accounts. However, it does not matter whether the deposit to checking account A or checking account B occurred first, as long as the associated withdrawals are in the correct order.
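The dependent-write rule can be illustrated with a small replay sketch (a simplified illustration, not DS6000 code): a copy is consistent as long as each deposit is replayed before its associated withdrawal, whichever account is processed first.

```shell
# Simplified dependent-write illustration (not DS6000 code).
balance_a=0
balance_b=0

apply() {               # apply one journaled write to the copy
  case "$1" in
    deposit_a)  balance_a=$(( balance_a + 100 )) ;;
    deposit_b)  balance_b=$(( balance_b + 100 )) ;;
    withdraw_a) balance_a=$(( balance_a - 50 )) ;;
    withdraw_b) balance_b=$(( balance_b - 50 )) ;;
  esac
}

# This replay order is consistent: each deposit precedes its withdrawal,
# even though account B's withdrawal comes before account A's.
for write in deposit_a deposit_b withdraw_b withdraw_a; do
  apply "$write"
done
echo "A=$balance_a B=$balance_b"
```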
Because of the time lag for Consistency Group operations, some volumes in some LSSs are in an extended long busy state and other volumes in the other LSSs are not. In Figure 6-12, the volumes in LSS11 are in an extended long busy state, and the volumes in LSS12 and 13 are not.
Figure 6-14 Consistency Group: Example 3

In this case, the copy created by the Consistency Group operation reflects only the 1st and 3rd write operations, not the 2nd. If you can accept this result, you can use Consistency Group operations with your applications. If you cannot, you should consider other procedures that do not rely on Consistency Group operations.
The client must provide a computer to use as the MC. If desired, the client can order a computer from IBM as the MC. An additional MC can be provided for redundancy. For further information about the Management Console, see Chapter 8, “Configuration planning”...
SRM applications and infrastructures. The DS Open API also enables the automation of configuration management through customer-written applications. Either way, the DS Open API presents another option for managing the DS6000 by complementing the use of the IBM TotalStorage DS Storage Manager Web-based interface and the DS Command-Line Interface.
6.4 Interoperability with ESS Copy Services also supports the IBM Enterprise Storage Server Model 800 (ESS 800) and the ESS 750. To manage the ESS 800 from the Copy Services for DS6000, you need to install licensed internal code version 2.4.2 or later on the ESS 800.
Chapter 7. This chapter discusses planning for the physical installation of a new DS6000 in your environment. Refer to the latest version of the IBM TotalStorage DS6000 Introduction and Planning Guide , GC26-7679, for further details. In this chapter we cover the following topics:...
You can install the DS6000 series in a 2101-200 system rack or in any other 19” rack that is compliant with the Electronic Industries Association (EIA) 310-D Type A standard. You have to use these feature codes when you order a system rack from IBM for your DS6000 series: The feature code 0800 is used to indicate that the DS6000 series ordered will be assembled into an IBM TotalStorage 2101-200 System Rack by IBM manufacturing.
2. Determine whether the floor load rating of the location meets the following requirements: – The minimum floor load rating used by IBM is 342 kg per sq m (70 lb per sq ft). Service clearance requirements This section describes the clearances that the DS6000 series requires for service. We include the clearances that are required on the front and to the rear of the rack.
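The floor load check can be sketched as below; the rack weight and footprint are hypothetical example values — only the 342 kg per sq m minimum rating comes from the text.

```shell
# Floor-load sanity check against the 342 kg/m^2 minimum rating.
# Weight and footprint are example values, not DS6000 specifications.
rack_weight_kg=600        # fully loaded rack weight (example)
footprint_m2_x100=58      # 0.58 m^2 footprint, scaled by 100 for integer math

# load (kg/m^2) = weight / area; the x100 scaling cancels out:
load_kg_m2=$(( rack_weight_kg * 100 / footprint_m2_x100 ))
echo "floor load: ${load_kg_m2} kg/m^2"

if [ "$load_kg_m2" -le 342 ]; then
  echo "within the minimum floor load rating"
else
  echo "exceeds 342 kg/m^2 - verify the site flooring"
fi
```

A concentrated footprint like this typically exceeds the per-square-metre rating, which is why load spreading matters in site planning.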
Use the following steps to calculate the required space for your storage units: 1. Determine the dimensions of each model configuration in your storage units. 2. Determine the total space that is needed for the storage units by planning where you will place each storage unit in the rack.
You have to decide where you want to install this software. 7.3.1 IBM TotalStorage DS Storage Manager The IBM TotalStorage DS Storage Manager is an interface that is used to perform logical configurations, service, copy services management, and firmware upgrades.
SRM applications and infrastructures. The DS Open API also enables the automation of configuration management through customer-written applications. Either way, the DS Open API presents another option for managing storage units by complementing the use of the IBM TotalStorage DS Storage Manager Web-based interface and the DS Command-Line Interface.
The following settings are required to connect the DS6000 series to a network: Controller card IP address. You must provide a dotted decimal address that you will assign to each storage server controller card in the DS6800. Since there are two controllers, you need two TCP/IP addresses.
The same considerations also apply to HBAs on the host. You might want to replace 1 Gbps HBAs with 2 Gbps HBAs. For fault tolerance, each server should be equipped with two HBAs connected to different Fibre Channel switches.
FICON port on the second controller card to allow failover from one server to the other to work. A FICON port on the DS6000 can be connected directly to a zSeries host or to a SAN switch supporting FICON connections. With Fibre Channel adapters...
Upon activation, disk drives can be logically configured up to the extent of authorization. As additional disk drives are installed, the extent of IBM authorization must be increased by acquiring additional DS6800 feature numbers. Otherwise, the additional disk drives cannot be logically configured for use.
A management console is a requirement for a DS6800. The user must provide a computer to use as the management console or the user can optionally order a computer from IBM. This computer must meet the following minimum set of hardware and operating system compatibility requirements: 1.4 GHz Pentium®...
ThinkCentre M51 Model 23U (8141-23U) Desktop system with a 3.0 GHz/800 MHz Intel Pentium 4 Processor. If a monitor is also required, IBM suggests the IBM 6737 ThinkVision C170 (6737-66x or 6737-K6x) 17-inch full flat shadow mask CRT color monitor with a flat screen (16-inch viewable image size).
A novice user with little knowledge of storage concepts who wants to quickly and easily set up and begin using the storage. An expert user who wants to quickly configure a storage plex, allowing the storage server to make decisions for the best storage allocation.
8.2.3 Local maintenance The DS6000 series is a customer maintained system. No IBM SSR (Service Support Representative) is involved, unless you have a service contract with IBM. The customer does problem determination, replacement parts ordering, and the actual replacement of the component.
If the user decides to ask IBM for help, and IBM needs to access the DS6000 remotely, then the user must provide connectivity from the DS Management Console to an IBM service support center. The DS Management Console must be connected to the DS6000 to enable IBM to access the DS6000 over this network to analyze the error condition.
8.2.7 Simple Network Management Protocol (SNMP) The storage unit generates SNMP traps and supports a read-only management information base (MIB) to allow monitoring by your network. You have to decide which server is going to receive the SNMP traps. 8.3 DS6000 licensed functions Licensed functions include the storage unit’s operating system and functions.
Once the OEL has been activated for the storage unit, you can configure the storage unit. Activating the OEL means that you have obtained the feature activation key from the IBM disk storage feature activation (DSFA) Web site and entered it into the DS Storage Manager. The feature activation process is discussed in more detail in 8.3.7, “Disk storage feature...
FICON attachment 8.3.2 Point-in-Time Copy function (PTC) The Point-in-Time Copy licensed function model and features establish the extent of IBM authorization for the use of IBM TotalStorage FlashCopy. When you order Point-in-Time Copy functions, you specify the feature code that represents the physical capacity to authorize for the function.
5303 5304 8.3.4 Parallel Access Volumes (PAV) The Parallel Access Volumes model and features establish the extent of IBM authorization for the use of the Parallel Access Volumes licensed function. Table 8-5 provides the feature codes for the PAV function.
CKD volumes will be FlashCopied, then you only need to purchase a license for 5 TB PTC and not the full 20 TB. The 5 TB of PTC would be set to CKD only. Figure 8-4 shows an example of a FlashCopy feature code authorization. In this case, the user is authorized up to 25 TB of CKD data.
Scenario: I want to FlashCopy 10 TB of FB and 12 TB of CKD.

Solution: The user has 20 TB of FB data and 25 TB of CKD data allocated. The user has to purchase a Point-in-Time Copy function equal to the total of both the FB and CKD capacity, that is, 20 TB (FB) + 25 TB (CKD) = 45 TB.
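The licensing arithmetic in the scenario above is simply the sum of the allocated FB and CKD capacities:

```shell
# Point-in-Time Copy authorization must cover the total allocated
# capacity of both storage types (values from the scenario above).
fb_tb=20
ckd_tb=25
ptc_license_tb=$(( fb_tb + ckd_tb ))
echo "required PTC authorization: ${ptc_license_tb} TB"
```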
IBM Disk Storage Feature Activation (DSFA) Web site at: http://www.ibm.com/storage/dsfa Management refers to the use of the IBM Disk Storage Feature Activation (DSFA) Web site to select a license scope and to assign a license value. The client performs these activities and then activates the function.
The DS6000 series offers high scalability while maintaining excellent performance. With the DS6800 (Model 1750-511), you can install up to 16 DDMs in the server enclosure. The minimum storage capability with 8 DDMs is 584 GB (73 GB x 8 DDMs). The maximum storage capability with 16 DDMs for the DS6800 model is 4.8 TB (300 GB x 16 DDMs).
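The minimum and maximum figures quoted above follow directly from the DDM counts and capacities:

```shell
# Raw capacity bounds for the DS6800 server enclosure, as stated above.
min_gb=$(( 8 * 73 ))      # 8 DDMs of 73 GB
max_gb=$(( 16 * 300 ))    # 16 DDMs of 300 GB
echo "minimum: ${min_gb} GB, maximum: ${max_gb} GB (4.8 TB)"
```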
– To prepare for unexpected situations, a certain amount of margin is required for important systems.

Figure: additional enclosures to a maximum of four per loop (server enclosure, expansion enclosures 1-3, device adapter chipset inside the controllers)...
DS6000. IBM sales representatives and IBM Business Partners can download this tool from the IBM Web site. The values presented here are intended to be used only until Capacity Magic is available.
Configuration with the same type of DDMs
For example, if you configure a DS6000 with 32 DDMs of the same type (a server enclosure with an expansion enclosure) attached to two switched FC-AL loops, you can configure four RAID arrays of 8 drives, and the DS6000 will automatically assign four spare disks (two spare disks for each loop).
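The spare assignment described above can be sketched as:

```shell
# Spare-disk arithmetic for 32 identical DDMs on two switched FC-AL
# loops, per the rule above: two spares are assigned on each loop.
ddms=32
loops=2
spares_per_loop=2
spares=$(( loops * spares_per_loop ))
arrays=$(( ddms / 8 ))        # four 8-drive RAID arrays
echo "arrays=${arrays} spares=${spares}"
```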
See Figure 8-10 and Figure 8-11 on page 143. These figures are simple examples for 64 DDM configurations with RAID 5 or RAID 10 arrays. RAID arrays are configured with spare disks in the server enclosure and the first expansion enclosure (each enclosure has two spare disks). Expansion enclosures 2 and 3 don’t have spare disks.
32 DDMs of 300 GB are installed in expansion enclosures 2 and 3. In this case, in the first installation of disks, you configure four spare disks in the server enclosure and expansion enclosure 1. And, in the next installation, you also configure four spare disks in expansion enclosures 2 and 3.
32 DDMs of 73 GB and 15,000 RPM are installed in expansion enclosures 2 and 3. In this case, in the first installation, you configure four spare disks in the server enclosure and expansion enclosure 1. And, in the next installation, you also configure four spare disks in expansion enclosures 2 and 3.
• Minimum of two spares of the largest capacity array site on each loop:
– The first two enclosures (server and Exp 1) need two spare disks each.
– The next two enclosures (Exp 2 and Exp 3) also need two spare disks each (because of the faster rpm).
–...
The advantages of using software packages are:
– Small application impact when using a data migration package.
– Little disruption for non-database files.
– Backup or restore cycles offloaded to another server.
– Standard database utilities.

The disadvantages of using a software package are:
– Cost of the data migration package.
The disadvantages of remote copy technologies include:
– The same storage device types are required. For example, in a Metro Mirror configuration you can have an ESS 800 mirroring to a DS6000 (or another IBM approved configuration), but you cannot have non-IBM disk systems mirroring to a DS6000.
DS6000 series. 8.6 Planning for performance The IBM TotalStorage DS6000 is a cost-effective, high-performance, high-capacity series of disk storage that is designed to support continuous operations and allows your workload to be easily consolidated into a single storage subsystem. To have a well-balanced disk system...
Magic provides insight when you are considering deploying remote technologies such as Metro Mirror. Confer with your sales representative for any assistance with Disk Magic. Note: Disk Magic is available to IBM sales representatives and IBM Business Partners only. 8.6.2 Number of host ports Plan to have an adequate number of host ports and channels to provide the required bandwidth to support your workload.
Multiple DS6000s

8.6.8 Preferred paths
In the DS6000, host ports have a fixed assignment to a server (or controller card). In other words, preferred paths to a server avoid having to cross the PCI-X connection. Therefore, there is a performance penalty if data from a logical volume managed by one server is accessed from a port that is located on the other server.
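A multipathing driver can encode the preferred-path rule roughly as follows. The port names and the volume-to-controller mapping are hypothetical illustrations, not actual DS6000 identifiers:

```shell
# Sketch: pick I/O ports on the controller that owns the volume so the
# request does not cross the inter-controller PCI-X connection.
# Port names and ownership mapping below are hypothetical.
preferred_ports_for() {
  case "$1" in
    0) echo "port0 port1" ;;   # ports wired to controller 0
    1) echo "port2 port3" ;;   # ports wired to controller 1
  esac
}

volume_owner=1                           # volume managed by controller 1
paths=$(preferred_ports_for "$volume_owner")
echo "preferred paths: $paths"
```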
DS6000. It is the client’s responsibility to configure the storage server to fit their specific needs. It is not in the scope of this redbook to show detailed steps and scenarios for every possible setup. For further assistance, help and guidance can be obtained from an IBM FTSS or an IBM Business Partner.
Server ports can be grouped in attachments for convenience. For example, pSeries2 shows two host attachments, with two ports in each attachment. Server pSeries3 shows one host attachment with four ports grouped in the one attachment.
An extent pool consists of one or several ranks. Ranks in the same extent pool must be of the same data format (CKD or FB). Each extent pool is associated with server 0 or server 1. Although it is possible to create extent pools with ranks of different drive capacities, speeds, and RAID types, we recommend creating them to consist of the same RAID type, speed, and capacity.
Extent pools are assigned to server 0 and server 1 during configuration and receive their server affinity at this time. If you are using the custom configuration, we recommend, for manageability reasons, that the client associate even-numbered ranks with server 0 and odd-numbered ranks with server 1 when defining the extent pools during the configuration process.
Any extent can be used to make a logical volume. There are thresholds that warn you when you are nearing the end of space utilization on the extent pool. There is a Reserve space option that will prevent the logical volume from being created in reserved space until the space is explicitly released.
In Figure 9-3, we show two volume groups, Volume Group 1 and Volume Group 2. A pSeries server with one host attachment (four ports grouped in that attachment) resides in Volume Group 1. The xSeries2 server has one host attachment (two ports grouped into the attachment).
Figure 9-4 Example of Volume Group, LUNs and host attachment definition

In Figure 9-4 we show three hosts (Host A, Host B, and Host C). The three hosts are defined in the logical configuration by the WWPNs of each host system, grouped into host attachments.
Figure 9-5 shows an example of the relationship between LSSs, extent pools, and volume groups: Extent Pool 4, consisting of two LUNs, LUNs 0210 and 1401, and Extent Pool 5, consisting of three LUNs, LUNs 0313, 0512, and 1515. Here are some considerations regarding the relationship between LSSs, extent pools, and volume groups: Volumes from different LSSs and different extent pools can be in one volume group as shown in Figure 9-5.
– FB LSSs definitions are configured during the volume creation. LSSs have no predetermined relation to physical ranks or extent pools other than their server affinity to either server0 or server1. – One LSS can contain volumes/LUNs from different extent pools.
9.1.2 Summary of the DS Storage Manager logical configuration steps It is our recommendation that the client consider the following concepts before performing the logical configuration. These recommendations are discussed in this chapter. Planning When configuring available space to be presented to the host, we suggest that the client approach the configuration from the host and work up to the DDM (raw physical disk) level.
Raw or physical DDM layer At the very top, you can see the raw DDMs. There are four DDMs in a group called a four-pack. They are placed into the DS6000 in pairs, making eight DDMs. DDM X represents one four-pack and DDM Y represents a pair from another four-pack.
Figure 9-9 Recommended Logical Configuration steps 9.2 Introducing the GUI and logical configuration panels The IBM TotalStorage DS Storage Manager is a program interface that is used to perform logical configurations and copy services management functions. The DS Storage Manager program is installed via a GUI (graphical mode) using the install shield.
MC only one digit higher than the default MC. 9.2.2 Introduction and Welcome panel The IBM TotalStorage DS6000 Storage Manager (DS Storage Manager) is a software application that runs on an MC. It is the interface provided for the user to define and maintain the configuration of the DS6000.
OEL license activation key and all the license activation keys from the Disk Storage Feature Activation (DSFA) Web site at: http://www.ibm.com/storage/dsfa This application provides logical configuration and Copy Services functions for a storage unit attached to the network. This feature provides you with real-time (online) configuration support.
Figure 9-13 View of the fully expanded Real-time Manager menu choices – Copy Services You can use the Copy Services selections of the DS Storage Manager if you choose Real-time during the installation of the DS Storage Manager and you purchased these optional features.
Some configuration methods require extensive time. Because there are many complex functions available to you, you are required to make several decisions during the configuration process. However, with Express configuration, the storage server makes several of those decisions for you. This reduces the time that configuration takes, and simplifies the task for you.
Create and define the users and passwords Select User administration → Add user and click Go , as shown in Figure 9-15. The screen that is returned is shown in Figure 9-16. On this screen you can add users and set passwords, subject to the following guidelines: –...
To use the information center, click the question mark (?) icon shown in Figure 9-17. Figure 9-17 View of the information center 9.2.3 Navigating the GUI Knowing which icons, radio buttons, and check boxes to click in the GUI will help you efficiently navigate the configurator and successfully configure your storage.
With reference to the numbers shown in Figure 9-18, the icons shown have the following meanings: 1. Click icon 1 to hide the My Work menu area. This increases the space for displaying the main work area of the panel. 2. Icon 2 hides the black banner across the top of the screen, again to increase the space to display the panel you're working on.
Figure 9-20 View of the Storage Complexes section

The buttons displayed on the Storage complexes screen, shown in Figure 9-19 and called out in detail in Figure 9-20, have the following meanings. Boxes 1 through 6 are for selecting and filtering: –...
Figure 9-22 View of radio buttons and check boxes in the host attachment panel In the example shown in Figure 9-22, the radio button is checked to allow specific host attachments for selected storage unit I/O ports only. The check box has also been selected to show the recommended location view for the attachment.
Figure 9-24 View of the Define properties panel, with the Nickname defined Do not click Create new storage unit at the bottom of the screen shown in Figure 9-24. Click Next and Finish in the verification step. 9.3.2 Configuring the storage unit To create the storage unit, expand the Manage Hardware section, click Storage units (2), click Create from the Select action pull-down and click Go .
Figure 9-25 View of the General storage unit information panel As illustrated in Figure 9-25, fill in the required fields as follows: 1. Click the Machine Type-Model from the pull-down list. 2. Fill in the Nickname. 3. Type in the Description. 4.
Figure 9-26 View of Specify DDM packs panel, with the Quantity and DDM type added Click Next to advance to the Define licensed function panel, under the Create storage unit path, as shown in Figure 9-27. Figure 9-27 View of the Define licensed function panel Fill in the fields shown in Figure 9-27 as follows: The number of licensed TB for the Operating environment.
Follow the steps specified on the panel shown in Figure 9-27. The next panel, shown in Figure 9-28, requires you to enter the storage type details. Figure 9-28 Specify the I/O adapter configuration panel Enter the appropriate information and click OK . 9.3.3 Configuring the logical host systems To create a logical host for the storage unit that you just created, click Host Systems, as shown in Figure 9-29.
Figure 9-30 View of Host Systems panel, with the Go button selected Click the Select Storage Complex action pull-down, and highlight the storage complex on which you wish to configure, click Create and Go . The screen will advance to the General host information panel shown in Figure 9-31.
Figure 9-32 View of Define Host Systems panel Enter the appropriate information in the Define host ports panel, as shown in Figure 9-32. Note: Selecting Group ports to share a common set of volumes will group the host ports together into one attachment. Each host port will require a WWPN to be entered now, if you are using the Real-time Manager, or later if you are using the Simulated Manager.
Click Next , and the screen will advance to the Select storage units panel shown in Figure 9-34 . Figure 9-34 View of the Select storage unit panel Highlight the Available storage units that you wish, click Add and Next . The screen will advance to the Specify storage units parameters shown in Figure 9-35 on page 180.
Figure 9-35 View of the Specify storage units parameters panel Under the Specify storage units parameters, do the following: 1. Click the Select volume group for host attachment pull-down, and highlight Select volume group later . 2. Click any valid storage unit I/O ports under the This host attachment can login to field. 3.
Figure 9-36 View of the Definition method panel From the Definition method panel, if you choose Create arrays automatically , the system will automatically take all the space from an array site and place it into an array. You will also notice that by clicking the check box next to Create an 8 disk array , that two 4 disk array sites will be placed together to form an eight disk array.
Enter the appropriate information for the quantity of the arrays and the RAID type. Note: If you choose to create eight disk arrays, then you will only have half as many arrays and ranks as if you had chosen to create four disk arrays. Click Next to advance to the Add arrays to ranks panel shown in Figure 9-38.
Figure 9-39 View of creating custom arrays from four disk array sites. At this point you can select from the list of four disk array sites to put together to make an eight disk array. If you click Next the second array-site selection panel is displayed, as shown in Figure 9-40 on page 184.
Figure 9-40 View of the second array-site selection panel From this panel you can select the array sites from the pull-down list to make an eight disk array. 9.3.5 Creating extent pools To create extent pools, expand the Configure Storage section, click Extent pools , click Create from the Select Action pull-down and click Go .
The extent pools will take on either a server 0 or server 1 affinity at this point, as shown in Figure 9-42. Figure 9-42 View of Define properties panel Click Next and Finish . 9.3.6 Creating FB volumes from extents Under Simulated Manager , expand the Open systems section and click Volumes .
Figure 9-43 View of the Select extent pool panel To determine the quantity and size of the volumes, use the calculators to determine the max size versus quantity as shown in Figure 9-44. Figure 9-44 The Define volume properties panel It is here that the volume will take on the LSS numbering affinity.
Note: Since server 0 was selected for the extent pool, only even LSS numbers are selectable, as shown in Figure 9-44. You can give the volume a unique name and number that may help you manage the volumes, as shown in Figure 9-45.
Figure 9-46 The Define volume group properties filled out Select the host attachment you wish to associate the volume group with. See Figure 9-47. Figure 9-47 The Select host Attachments panel, with an attachment selected
Select the volumes for the group panel, as shown in Figure 9-48. Figure 9-48 The Select volumes for group panel Click Finish . 9.3.8 Assigning LUNs to the hosts Under Simulated Manager , perform the following steps to configure the volumes: 1.
12.Under the Define alias assignments panel, do the following: a. Click the check box next to the LCU number. b. Enter the starting address. c. Select the order of Ascending or Descending. d. Select the aliases per volumes. For example, 1 alias to every 4 base volumes, or 2 aliases to every 1 base volume.
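The alias ratios in step d translate into alias counts as follows (the LCU size is a hypothetical example):

```shell
# Alias arithmetic for the example ratios above; 32 base volumes is a
# hypothetical LCU size, not a DS6000 requirement.
base_volumes=32
aliases_1_per_4=$(( base_volumes / 4 ))   # 1 alias per 4 base volumes
aliases_2_per_1=$(( base_volumes * 2 ))   # 2 aliases per 1 base volume
echo "1:4 ratio -> ${aliases_1_per_4} aliases, 2:1 ratio -> ${aliases_2_per_1} aliases"
```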
9.3.13 Displaying the storage units WWNN in the DS Storage Manager GUI Under Simulated manager , perform the following steps to display the WWNN of the storage unit: 1. Click Simulated manager as shown in Figure 9-50. Figure 9-50 View of the Real-time Manager panel 2.
9.4 Summary In this chapter we have discussed the configuration hierarchy, terminology and concepts. We have recommended an order and methodology for configuring the DS6000 storage server. We have included some logical configuration steps and examples and explained how to navigate the GUI.
Now with the DS CLI, commands can be saved as scripts, which significantly reduces the time to create, edit and verify their content. The DS CLI uses a syntax that is consistent with other IBM TotalStorage products. All new products will also use this same syntax.
The DS CLI can be used to invoke the following Copy Services functions: FlashCopy - Point-in-time Copy IBM TotalStorage Metro Mirror - Synchronous Peer-to-Peer Remote Copy (PPRC) IBM TotalStorage Global Copy - PPRC-XD IBM TotalStorage Global Mirror - Asynchronous PPRC 10.3 Supported environments...
Server (known as Server A). For backup, a second server can also be defined (known as Server B). Figure 10-1 on page 199 shows the flow of commands from host to server. When the Copy Services (CS) server receives a command, it determines whether the volumes involved are owned by cluster 1 or cluster 2.
Figure 10-1 Command flow for ESS 800 Copy Services commands A CS server is now able to manage up to eight F20s and ESS 800s. This means that up to sixteen clusters can be clients of the CS server. All FlashCopy and remote copy commands are sent to the CS server which then sends them to the relevant client on the relevant ESS.
Ethernet switches within the DS8000 base frame to deliver the commands to the relevant storage server. This means that the DS8000 itself is not on the same network as the open systems host. The S-HMC therefore acts as a bridge between the...
Open systems host CLI script ESS CLI software DS CLI software Network interface Figure 10-3 Command flow for the DS6000 For the DS6000, it is possible to install a second network interface card within the DS Storage Manager PC. This would allow you to connect it to two separate switches for improved redundancy.
- has read-only access to commands
no_access - cannot perform any tasks
The functions of these groups are fairly self-describing and are fully detailed in both the IBM TotalStorage DS8000 Command-Line Interface User’s Guide, SC26-7625, and the IBM TotalStorage DS6000 Command-Line Interface User’s Guide, SC26-7681, and the help...
If you are familiar with UNIX, then a simple example of creating a script is shown in Example 10-3. Example 10-3 Creating a DS CLI script /opt/ibm/dscli >echo “dscli lsuser > userlist.txt” > listusers.sh /opt/ibm/dscli >chmod +x listusers.sh /opt/ibm/dscli >./listusers.sh /opt/ibm/dscli >cat userlist.txt...
In this example, the script was placed in a file called listAllUsers.cli, located in the scripts folder within the DS CLI folder. It is then executed by using the dscli -script command, as shown in Example 10-6. Example 10-6 Executing DS CLI in script mode C:\Program Files\IBM\dscli> dscli -script scripts\listAllUsers.cli Name Group ===============...
Example 10-9 Sample Windows bat file to test return codes @ECHO OFF dscli lsflash -dev IBM.2105-23953 1004:1005 if errorlevel 6 goto level6 if errorlevel 5 goto level5 if errorlevel 4 goto level4...
Using this sample script, Example 10-10 shows what happens if there is a network problem between the DS CLI client and server (in this example a 2105-800). The DS CLI provides the error code (in this case CMUN00018E) which can be looked up in the DS CLI Users Guide.
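On UNIX-like systems, the same return-code branching as in the bat file can be sketched in POSIX shell. The dscli function below is a stub standing in for the real command (here it simply exits with code 2), so the sketch is self-contained; the actual return-code meanings are listed in the DS CLI User's Guide.

```shell
# POSIX equivalent of the bat file's errorlevel checks. "errorlevel N"
# means "return code >= N", which maps to the -ge tests below.
dscli() { return 2; }   # stub: pretend the CLI exited with return code 2

dscli lsflash -dev IBM.2105-23953 1004:1005
rc=$?
if   [ "$rc" -ge 6 ]; then level=6
elif [ "$rc" -ge 5 ]; then level=5
elif [ "$rc" -ge 4 ]; then level=4
elif [ "$rc" -ge 2 ]; then level=2
else level=0
fi
echo "handling return code level $level"
```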
# The following command creates 32 CKD volumes (0200-021F will be created) # These ckd volumes are on LCU 02 mkckdvol -dev IBM.2107-9999999 -extpool P0 -cap 3339 -name ckd_vol_#h 0200-021F # The following command creates 32 CKD volumes (0400-041F will be created) # These ckd volumes are on LCU 04 mkckdvol -dev IBM.2107-9999999 -extpool P0 -cap 3339 -name ckd_vol_#h 0400-041F...
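The range notation 0200-021F in the commands above expands to 32 contiguous hexadecimal volume IDs within LCU 02. A small sketch of that expansion:

```shell
# Expand the volume ID range 0200-021F (LCU 02, volume numbers 00-1F).
first=""
last=""
count=0
i=0
while [ "$i" -le 31 ]; do
  id=$(printf '02%02X' "$i")    # e.g. 0200, 0201, ..., 021F
  if [ -z "$first" ]; then first=$id; fi
  last=$id
  count=$(( count + 1 ))
  i=$(( i + 1 ))
done
echo "${first}-${last} (${count} volumes)"
```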
Note: You might consider requesting assistance from IBM in the migration phase. Depending on your geography, IBM can offer CLI migration services to help you ensure the success of your project.
In Example 10-12, the list task command is used. This is an ESS CLI command. Example 10-12 Using the list task command to list all saved tasks (only the last five are shown) arielle@aixserv:/opt/ibm/ESScli > esscli list task -s 10.0.0.1 -u csadmin -p passw0rd Wed Nov 24 10:29:31 EST 2004 IBM ESSCLI 2.4.0...
Example 10-13. Example 10-13 Using the command line to get the contents of a FlashCopy task mitchell@aixserv:/opt/ibm/ESScli > esscli show task -s 10.0.0.1 -u csadmin -p passw0rd -d "name=Flash10041005" Wed Nov 24 10:37:17 EST 2004 IBM ESSCLI 2.4.0...
This is no different from the current ESS CLI, except that userids created using the ESS 800 Web Copy Services Server are not used (the userid used is an ESS Specialist userid). If you have DS CLI access to a DS offline configuration tool, S-HMC or DS Storage Management console, then you can create an encrypted password file.
# If the -dev parameter is needed in a command then it will default to the value here "devid" is equivalent to "-dev storage_image_ID" # the default server that DS CLI commands will be run on is 2105 800 23953 devid: IBM.2105-23953 If you don’t want to create an encrypted password file, or do not have access to a simulator or...
Note also that the syntax between the command and the profile is slightly different.

Example 10-17 Example of a DS CLI command that specifies the username and password
C:\Program Files\IBM\dscli>dscli -hmc1 10.0.0.1 -user admin -passwd passw0rd lsuser
Name Group...
10.11 Summary This chapter has provided some important information about the DS CLI. This new CLI allows considerable flexibility in how DS6000 and DS8000 series storage servers are configured and managed. It also detailed how an existing ESS 800 customer can benefit from the new flexibility provided by the DS CLI.
This chapter discusses early performance considerations regarding the DS6000 series. Disk Magic modelling for DS6000 is going to be available in early 2005. Contact your IBM sales representative for more information about this tool and the benchmark testing that was done by the Tucson performance measurement lab.
Total Cost of Ownership (TCO) dictates inventing smarter architectures that allow for growth at a component level. IBM understood this early on, introduced its Seascape® architecture, and brought the ESS into the marketplace in 1999 based on this architecture.
11.2 Where do we start? The IBM Enterprise Storage Server 2105 (ESS) already combined everything mentioned in the previous paragraph when it appeared in the marketplace in 1999. Over time the ESS evolved in many respects to enhance performance and to improve throughput. Despite the powerful design, the technology and implementation used eventually reached its life cycle end.
RAID rank saturation and reached their limit of 40 MB/sec for a single stream file I/O. IBM decided not to pursue SSA connectivity, despite its ability to communicate and transfer data within an SSA loop without arbitration.
relatively small logical volumes, we ran out of device numbers to address an entire LSS. This happens even earlier when configuring not only real devices (3390B) within an LSS, but also alias devices (3390A) within an LSS in z/OS environments. By the way, an LSS is congruent to a logical control unit (LCU) in this context.
Figure 11-2 Switched FC-AL disk subsystem

Performance is enhanced as both DAs connect to the switched Fibre Channel disk subsystem backend, as displayed in Figure 11-3 on page 225.
Figure 11-3 High availability and increased bandwidth connecting both DAs to two logical loops

These two switched point-to-point loops to each drive, plus connecting both DAs to each switch, account for the following: There is no arbitration competition and interference between one drive and all the other drives, because there is no hardware in common for all the drives in the FC-AL loop.
11.3.3 New four-port host adapters Before looking into the server complex we briefly review the new host adapters and their enhancements to address performance. Figure 11-5 on page 227 depicts the new host adapters.
11.3.4 Enterprise-class dual cluster design for the DS6800 The DS6000 series provides a dual cluster, or rather a dual server, design, which is also found in the ESS and DS8000 series. This offers an enterprise-class level of availability and functionality in a space-efficient, modular design at a low price.
Figure 11-7 DS6800 server enclosure with its Fibre Channel switched disk subsystem

The DS6800 controls, through its two processor complexes, not only one I/O enclosure as Figure 11-7 displays, but can connect to up to 7 expansion enclosures. Figure 11-8 on page 229 shows a DS6800 with one DS6000 expansion enclosure.
Fibre Channel switches through its two remaining ports. This is similar to inter-switch links between Fibre Channel switches. Through the affinity of extent pools to servers, the DA in a server is used to drive the I/O to the disk drives in the host extent pools owned by its server.
Figure 11-9 DS6000 interconnects to expansion enclosures and scales very well Figure 11-9 outlines how expansion enclosures connect through inter-switch links to the server enclosure. Note the two Fibre Channel loops which are evenly populated as the number of expansion enclosures grow.
11.4.2 Data placement in the DS6000 Once you have determined the disk subsystem throughput, the disk space and number of disks required by your different hosts and applications, you have to make a decision regarding the data placement. As is common for data placement, and to optimize the DS6000 resource utilization, you should: Equally spread the LUNs across the DS6000 servers.
Figure 11-10 Spreading data across ranks

Note: The recommendation is to use host striping wherever possible to distribute the read and write I/O access patterns across the physical resources of the DS6000.

The stripe size
Each striped logical volume that is created by the host’s logical volume manager has a stripe size that specifies the fixed amount of data stored on each DS6000 logical volume (LUN) at...
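The mapping a logical volume manager performs can be sketched with simple arithmetic: with a stripe size S and N LUNs, stripe number k of a logical volume lands on LUN (k mod N). The values below are illustrative, not a recommendation.

```shell
# Illustrative stripe mapping: which LUN holds a given logical offset.
stripe_kb=64      # stripe size in KB (example value)
luns=4            # number of DS6000 LUNs in the stripe set
offset_kb=300     # logical offset of the data in KB
stripe=$((offset_kb / stripe_kb))   # stripe number (integer division)
lun=$((stripe % luns))              # round-robin placement across LUNs
echo "offset ${offset_kb}KB is in stripe $stripe on LUN $lun"
```

Because consecutive stripes rotate across all LUNs, sequential I/O against the striped volume is spread across all underlying ranks in parallel.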
11.4.5 Determining the number of paths to a LUN When configuring the IBM DS6000 for an open systems host, a decision must be made regarding the number of paths to a particular LUN, because the multipath software allows (and manages) multiple paths to a LUN. There are two opposing factors to consider when...
11.5.2 Performance potential in z/OS environments FICON channels started in the IBM 9672 G5 and G6 servers with 1 Gbps. Eventually these channels were enhanced to FICON Express channels in IBM 2064 and 2066 servers, with double the speed, so they now operate at 2 Gbps.
NVS to hold DASD fast write (DFW) data until staged to disk. The IBM Tucson performance evaluation lab suggests a certain ratio between cache size and backstore capacity. In general, the recommendation is: 0.5% cache-to-backstore ratio for z/OS high performance...
FICON ports. So an ESS 800 with eight FICON channels, each connected to IBM 9672 G5 or G6 servers, might end up as a single DS6000, also with eight FICON channels.
Note that this discussion takes a theoretical approach, but it is sufficient to get a first impression. At GA, the IBM internal tool Disk Magic helps to model configurations based on customer workload data. An IBM representative can contact support personnel who will use Disk Magic to configure a DS6000 accordingly.
In this example either one of the two HAs can address any volume in any of the ranks, which range here from rank number 1 to 12. But the HA and DA affinity to a server prefers one path over the other. z/OS is able to notice the preferred path and then schedule an I/O over the preferred path as long as the path is not saturated.
Again what is obvious here is the affinity between all volumes residing in extent pool 0 to the left processor complex, server 0, including its HA, and the same for the volumes residing in extent pool 1 and their affinity to the right processor complex or server 1.
It provides a wide capacity range from 16 DDMs up to 128 DDMs. Depending on the DDM size this reaches a total of up to 67.2 TB. Just the base enclosure provides up to 4.8 TB of physical storage capacity with 16 DDMs and 300 GB per DDM. The small and fast DS6000, with its rich functionality and compatibility with the ESS 750, ESS 800, and DS8000 in all functional respects, makes this a very attractive choice.
DS6000, but also to provide additional benefits that are not specific to the DS6000. 12.2 z/OS enhancements The DS6000 series simplifies system deployment by supporting major server platforms. The DS6000 will be supported on the following releases of the z/OS operating system and functional products: z/OS 1.4 and higher...
The DS6000 has a much higher number of devices compared to the IBM 2105. In the IBM 2105 we have 4096 devices, and in the DS6000 we have up to 8192 devices in a storage facility. With the enhanced scalability support, the following is achieved: Common storage (CSA) usage (above and below the 16M line) is reduced.
Hardware Configuration Manager (HCM). The definition of the 1750 control unit type in the HCD is not required to define an IBM 1750 storage facility to zSeries hosts. Existing IBM 2105 definitions could be used, but the number of LSSs will be limited to the same number as today in the IBM 2105.
IDC3003I FUNCTION TERMINATED. CONDITION CODE IS 12
Figure 12-1 SETCACHE options

All other parameters should be accepted as they are today on the IBM 2105. For example, setting device caching ON is accepted, but has no effect on the subsystem.
12.2.9 Preferred pathing In the DS6000, host ports have a fixed assignment to a server (or controller card). The DS6000 will notify the host operating system, in this case DFSMS (device support), if a path is preferred or not. Device support will then identify preferred paths to the IOS. I/Os will be directed to preferred paths to avoid crossing the PCI-X connection.
Figure 12-3 D M=DEV command output

12.2.10 Migration considerations A DS6000 will be supported as an IBM 2105 for z/OS systems without the DFSMS and z/OS SPE installed. This will allow customers to avoid having to take a sysplex-wide outage. An IPL will have to be taken to activate the DFSMS and z/OS portions of this support.
VSE/ESA does not support 64K LVs for the DS6000. 12.5 TPF enhancements TPF is an IBM platform for high volume, online transaction processing. It is used by industries demanding large transaction volumes, such as airlines and banks. The DS6000 will be supported on TPF 4.1 and higher.
DS6000 disk storage server. This includes migrating data from the ESS 2105 as well as from other disk storage servers to the new DS6000 disk storage server. The focus is on z/OS environments. The following topics are covered from a planning standpoint:...
A DS6800 can contain up to 32 logical control units (LCUs) at GA. This allows the DS6800 to simulate up to 32 IBM 3390-6 control unit images, with 256 devices within each single control unit image. The number of supported volumes within a DS6800 is therefore 32 x 256 devices, which equals 8,192 logical volumes.
8-packs. The allocation of a volume happens in extents, or increments of the size of an IBM 3390-1 volume (1,113 cylinders). So a 3390-3 consists of exactly three extents from an extent pool. A 3390-9 with 10,017 cylinders comprises 9 extents out of an extent pool. There is no longer any affinity to an 8-pack for a logical volume like a 3390-3 or any other sized 3390 volume.
Usually this is not an issue because over time the device geometry of the IBM 3390 volume has become a quasi standard and most installations have used this standard. For organizations still using other device geometry (for example, 3380), it might be worthwhile to consider a device geometry conversion, if possible.
13.2.2 Software- and hardware-based data migration Piper z/OS (an IBM IGS service) and z/OS Global Mirror are tools for data migration that are based on software which in turn relies on specific hardware or microcode support. This section outlines these two popular approaches to migrate data.
Fibre Channel connectivity only. It can be used, though, for the DS8000, which still allows you to connect to the ESCON infrastructure through supported ESCON host adapters. IBM plans to enhance the Piper server with FICON channel capable hardware to allow migration to FICON-only environments.
Most of these benefits also apply to migration efforts controlled by the customer when utilizing TDMF or FDRPAS in customer-managed systems. To summarize: Piper for z/OS is an IGS service offering which relieves the customer of the actual migration process and requires customer involvement only in the planning and preparation phase.
Currently, only IBM- or HDS-based controllers support XRC as a primary or source disk subsystem. As an exception, this does not apply to the IBM RVA storage controller, which does not support XRC as a primary XRC device. EMC also does not provide XRC support at the XRC primary site.
Again this approach is only possible from IBM ESS to IBM DS6000 or IBM DS8000 disk storage servers and it requires the same size or larger PPRC secondary volumes with the same device geometry.
Then check that all data is replicated to the target disk server. This might be a bit labor-intensive in a large environment without the help of automation scripts. Basically, you would check for each individual primary volume (source volume) that all data has been copied over.
Figure 13-6 Check with Global Copy whether all data was replicated to the new volume

This approach is not really practical, though. ICKDSF also allows you to query the status of a Global Copy primary volume and displays the amount of data which is not yet replicated, as shown in Example 13-1.
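With automation, the per-volume drain check becomes a simple loop. The sketch below stubs the query with sample data so only the loop logic is shown; in a real script the stub would be replaced by an ICKDSF job submission or a DS CLI query, and the volume list by your actual primary volumes.

```shell
# Hedged sketch: iterate over primary volumes and report any that still
# have data to be copied. query_out_of_sync is a stand-in for a real
# ICKDSF/DS CLI query; here it returns canned sample values.
query_out_of_sync() {
  case "$1" in
    6102) echo 0 ;;   # fully replicated
    6103) echo 42 ;;  # still draining
  esac
}
for vol in 6102 6103; do
  left=$(query_out_of_sync "$vol")
  if [ "$left" -eq 0 ]; then
    echo "volume $vol: all data replicated"
  else
    echo "volume $vol: $left tracks not yet copied"
  fi
done
```

Once every volume reports zero outstanding tracks, the cut-over to the target disk server can proceed.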
Example 13-2 All data is replicated
PPRCOPY DDNAME(DD02) QUERY
ICK00700I DEVICE INFORMATION FOR 6102 IS CURRENTLY AS FOLLOWS:
PHYSICAL DEVICE = 3390
STORAGE CONTROLLER = 2105
STORAGE CONTROL DESCRIPTOR = E8
DEVICE DESCRIPTOR = 0A
ADDITIONAL DEVICE INFORMATION = 4A000035
ICK04030I DEVICE IS A PEER TO PEER REMOTE COPY VOLUME
Global Copy is used for the actual data movement. Please note that the consistent copy in the new disk server is not concurrent with the primary copy, unless the application is stopped and all data is replicated.
The following software products and components support logical data migration:
– DFSMS allocation management
– Allocation management by CA-ALLOC
– DFSMSdss
– DFSMShsm™
– System utilities like:
– IDCAMS with REPRO, EXPORT / IMPORT commands
– IEBCOPY to migrate Partitioned Data Sets (PDS) or Partitioned Data Sets Extended (PDSE)
–...
Assume these three storage controllers are going to be consolidated into a new DS6800 storage server, and that the number of volumes will be consolidated from 6 down to two, with the respective capacity as displayed in Figure 13-8.
RO *ALL,V SMS,VOL(AAAAAA),D,N
V SMS,VOL(AAAAAA),D,N
IGD010I VOLUME (AAAAAA,MCECEBC ) STATUS IS NOW DISABLED,NEW
Figure 13-8 Utilize SMS SG and Volume status to direct all new allocation to new volumes

When both the old hardware and the new hardware can be installed and connected to the host servers, the new volumes are integrated into the existing SGs, SG1 and SG2.
3. Alter - Alter a Storage Group 4. Volume - Display, Define, Alter or Delete Volume Information If List Option is chosen, Enter "/" to select option Use ENTER to Perform Selection; Use HELP Command for Help; Use END Command to Exit. The next panel which appears is in Example 13-7.
MCECEBC ===> ENABLE

In this panel we overtype the SMS volume status with the desired status change. This shows in the following panel, shown in Example 13-9.

Example 13-9 Indicate SMS volume status change for all connected system images
SMS VOLUME STATUS ALTER
Command ===>
===> Use ENTER to Perform Selection; Use HELP Command for Help; Use END Command to Exit. In this example all volumes that were selected through the filtering in the previous panel no longer allow any new allocation on these volumes. But this happens only after the updated SCDS is activated and copied into the Active Control Data Set (ACDS).
Verify at the end of this logical data set migration that all data has been removed from the source disk server with the IEHLIST utility’s LISTVTOC command. Again, this approach requires you to have the old and new equipment connected at the same time, and most likely over an extended period, unless you push the migration through jobs like the one in Example 13-12, in which you can run more than one instance concurrently.
Global Mirror under a z/OS image also allows you to move z/VM full mini disks between different storage servers and would allow you to connect to the source disk server through ESCON and to the target disk storage server with FICON.
Because Metro Mirror provides data consistency at any time, the switch-over to the new disk server is simple and does not require further efforts to ensure data consistency at the receiving site. It is feasible to use the GUI-based approach because migration is usually a one-time effort.
IBM pSeries, RS/6000, IBM BladeCenter JS20
IBM iSeries
HP PA-RISC, Itanium II
HP Alpha
Intel IA-32, IA-64, IBM BladeCenter HS20 and HS40
Apple Macintosh
Fujitsu PrimePower
The DS6000 and DS8000 have the same open systems support matrix. There are only a few exceptions, with respect to the timing.
Furthermore, you can select a detailed view for each combination with more information, quick links to the HBA vendors’ Web pages and their IBM supported drivers, and a guide to the recommended HBA settings.
(from their point of view) storage systems, especially when they also have storage systems in their product portfolio. You may even get misleading information about interoperability with and support of IBM storage. It is beyond the scope of this book to list all the vendors’ Web sites.
There is a process for cases where a desired configuration is not represented in the support matrix: contact your IBM storage sales specialist or IBM Business Partner for submission of an RPQ. Initiating the process does not guarantee that the desired configuration will be supported. This depends on the technical feasibility and the required test effort.
HBAs into one logical disk. This layer manages path failover, should a path become unusable, and balancing of I/O requests across the available paths. For most operating systems that are supported for DS6000 attachment, IBM makes the IBM Subsystem Device Driver (SDD) available to provide the following functionality:...
IBM AIX alternatively offers MPIO, a native multipathing solution. It allows the use of vendor-specific Path Control Modules (PCMs); a PCM with the full SDD functionality is available. See “IBM AIX” on page 303 for more detail. IBM OS/400 V5R3 doesn't use SDD; it provides native multipath support as of V5R3. For details refer to Appendix B, “Using the DS6000 with iSeries”...
14.5 IBM TotalStorage Productivity Center The IBM TotalStorage Productivity Center (TPC) is an open storage management solution that helps to reduce the effort of managing complex storage infrastructures, to increase storage capacity utilization, and to improve administrative efficiency. It is designed to enable an agile storage infrastructure that can respond to on-demand storage needs.
SAN storage devices by allowing administrators to configure, manage, and monitor storage from a single console. The devices managed are not restricted to IBM brand products. In fact, any device compliant with the Storage Network Industry Association (SNIA) Storage Management Initiative Specification (SMI-S) can be managed with the IBM TotalStorage Multiple Device Manager.
Figure 14-2 MDM main panel For more information about the IBM TotalStorage Multiple Device Manager refer to the redbook IBM TotalStorage Multiple Device Manager Usage Guide, SG24-7097. Updated support summaries, including specific software, hardware, and firmware levels supported, are maintained at: http://www.ibm.com/storage/support/mdm...
Devices that are not SMI-S compliant are not supported. The DM also interacts and provides SAN management functionality when the IBM Tivoli SAN Manager is installed. The DM health monitoring keeps you aware of hardware status changes in the discovered storage devices.
TPC for Disk collects data from IBM or non-IBM networked storage devices that implement SMI-S. A performance collection task collects performance data from one or more storage groups of one device type. It has individual start and stop times, and a sampling frequency.
14.6 Global Mirror Utility The DS6000 Global Mirror Utility is a tool for IBM TotalStorage Global Mirror failover and failback (FO/FB). It provides clients with a set of twelve basic commands that utilize the DS Open-API to accomplish either a planned or unplanned FO/FB sequence.
Regular checks support customers in keeping the eRCMF configuration up-to-date with their actual environment; otherwise, full eRCMF management functionality is not given. eRCMF is an IBM Global Services offering. More information about eRCMF can be found at: http://www-1.ibm.com/services/us/index.wss/so/its/a1000110 14.8 Summary The new DS6000 enterprise disk subsystem offers broad support and superior functionality for all major open system host platforms.
15.1 Introduction The term data migration describes the process of moving data from one type of storage to another, or to be exact, from one type of storage to a DS6000. In many cases, this process is not only comprised of the mere copying of the data, but also includes some kind of consolidation.
Refer to the documentation of your clustering solution for ways to propagate configuration changes throughout the cluster. Note: IBM Global Services can assist you in all phases of the migration process with professional skill and methods. 15.2 Comparison of migration methods There are numerous methods that can be used to migrate data from one storage system to another.
Strong involvement of the system administrator is necessary. Today the majority of data migration tasks are performed with one of the methods discussed in the following sections.

Basic copy commands
Using copy commands is the simplest way to move data from one storage system to another, for example:
copy, xcopy, drag and drop for Windows
cp, cpio for UNIX
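A minimal illustration of this copy-command approach: replicate a directory tree to a new mount point and verify the contents match. The paths here are temporary placeholders; in a real migration the target would be a file system on a DS6000 LUN.

```shell
# Copy a directory tree to new storage and verify it arrived intact.
src=$(mktemp -d); dst=$(mktemp -d)      # placeholders for old/new mounts
mkdir -p "$src/data" && echo "payload" > "$src/data/file1"
cp -pr "$src/." "$dst/"                 # -p preserves permissions and times
diff -r "$src" "$dst" && echo "copy verified"
```

The verification step matters: a plain copy gives no end-to-end confirmation, so a recursive compare (or checksums) should always follow before the old storage is released.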
Online copy and synchronization with rsync rsync is an open source tool that is available for all major open system platforms, including Windows and Novell Netware. Its original purpose is the remote mirroring of file systems with as few network requirements as possible.
Usually the process is to set up a mirror of the data on the old disks to the new LUNs, wait until it is synchronized and split it at the cut over time. Some LVMs provide commands that automate this process. The biggest advantage of using the LVM for data migration is that the process can be totally non-disruptive, as long as the operating system allows you to add and remove LUNs dynamically.
15.2.2 Subsystem-based data migration The DS6000 provides remote copy functionality, which also can be used to migrate data:
IBM TotalStorage Metro Mirror, formerly known as PPRC, for distances up to 300 km
IBM TotalStorage Global Copy, formerly known as PPRC Extended Distance, for longer...
The remote copy functionality can be used to migrate data in either direction between the Enterprise Storage Server (ESS) 750 or 800 and the new DS8000 and DS6000 storage systems. The ESS E20 and F20 lack support for remote copy over Fibre Channel and therefore cannot be mirrored directly to a DS6000.
Piper is a hardware and software solution to move data between disk systems while production is ongoing. It is used in conjunction with IBM migration services. Piper is available for mainframe and open systems environments. Here we discuss the open systems version only.
15.3 IBM migration services This is the easiest way to migrate data, because IBM will assist you throughout the complete migration process. In several countries IBM offers a migration service. Check with your IBM sales representative about migration services for your specific environment and needs.
How to prepare a system that boots from the DS6000 (when supported) varies in detail for the different platforms. Download the guide from: http://www.ibm.com/servers/storage/disk/ds6000 Many more publications are available from IBM and other vendors. Refer to the “Open systems support and software” sections in this chapter, starting on page 275.
Some tools are worth discussing because they are available for almost all UNIX variants and system administrators are accustomed to using them. You may have to administer a server and these are the only tools you have available to use. These tools offer a quick way to tell...
The output reports the following: The %tm_act column indicates the percentage of the measured interval time that the device was busy. The Kbps column shows the average data rate, read and write data combined, of this device. The tps column shows the transactions per second. Note that an I/O transaction can have a variable transfer size.
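Columns like %tm_act can be filtered with standard tools to spot busy devices. The sample lines below merely mimic the shape of AIX iostat output and are not from a real system; the 70% threshold is an illustrative choice, not an IBM recommendation.

```shell
# Pick out devices whose busy percentage (%tm_act, column 2) exceeds a
# threshold. The sample data imitates "iostat -d" output on AIX.
sample='hdisk0  2.0  120.0  15.0
hdisk1  85.5  5120.0  310.0'
printf '%s\n' "$sample" | awk '$2 > 70 { print $1 " is busy: " $2 "%" }'
```

In a live session the same awk filter would be fed from `iostat -d <interval>` instead of the canned sample.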
IBM AIX This section covers items specific to the IBM AIX operating system. It is not intended to repeat the information that is contained in other publications. We focus on topics that are not covered in the well known literature or are important enough to be repeated here.
2002. Fault Tolerant Storage - Multipathing and Clustering Solutions for Open Systems for the IBM ESS , SG24-6295, focuses mainly on high availability and covers SDD and HACMP topics. It also is from 2002. Much of the technical information for pSeries or AIX also covers external storage, since SAN attachment became standard procedure in almost all data centers of size with a claim to availability.
AIX disk subsystem to recombine the multiple hdisks into one device. Subsystem device driver (SDD) The IBM Subsystem Device Driver (SDD) software is a host-resident pseudo device driver designed to support the multipath configuration environments in IBM products. SDD resides in the host system with the native disk device driver and manages redundant connections between the host server and the DS6000.
The base functionality of MPIO is limited. It provides an interface for vendor-specific Path Control Modules (PCMs). IBM provides a PCM for the DS6000 that enhances MPIO with all the features of the original SDD. It is called SDDPCM and is available from the SDD download site (refer to 14.2, “Subsystem Device Driver”...
System Management Guide: Operating System and Devices for AIX 5L : http://publib16.boulder.ibm.com/pseries/en_US/aixbman/baseadmn/manage_mpio.htm Restriction: A point worth considering when deciding between SDD and MPIO is that the IBM TotalStorage SAN Volume Controller does not support MPIO at this time. For updated information refer to: http://www-03.ibm.com/servers/storage/support/software/sanvc/installing.html Determine the installed SDDPCM level You use the same command as for SDD, lslpp -l "*sdd*"...
LVM configuration In AIX, all storage is managed by the Logical Volume Manager (LVM), which virtualizes the physical disks to be able to dynamically create, delete, resize, and move logical volumes for application use. To AIX, our DS6000 logical volumes appear as physical SCSI disks. There are some considerations to take into account when configuring LVM. LVM striping Striping is a technique for spreading the data in a logical volume across several physical disks in such a way that all disks are used in parallel to access data on one logical volume.
AIX on IBM iSeries With the announcement of the IBM iSeries i5, it is now possible to run AIX in a partition on the i5. This can be either AIX 5L V5.2 or V5.3. All supported functions of these operating system levels are supported on i5, including HACMP for high availability and external boot from Fibre Channel devices.
For more information on OS/400 support for DS6000, see Appendix B, “Using the DS6000 with iSeries” on page 329. For more information on running AIX in an i5 partition, refer to the i5 Information Center at: http://publib.boulder.ibm.com/infocenter/iseries/v1r2s/en_US/index.htm?info/iphat/iphatlpar kickoff.htm Note: AIX will not run in a partition on earlier 8xx and prior iSeries systems.
0.86 56.1 /dev/hdisk73
0.86 69.9 /dev/hdisk77
0.86 68.9 /dev/hdisk59
------------------------------------------------------------------------
Detailed Physical Volume Stats
------------------------------------------------------------------------
VOLUME: /dev/hdisk65 description: IBM MPIO FC 1750
reads: (0 errs)
read sizes (blks): 15.4 min
read times (msec): 6.440 min
read sequences:
read seq. lengths: 15.4 min...
Linux is rapidly changing. All these factors make it difficult to promise and provide generic support for Linux. As a consequence, IBM has decided on a support strategy that limits the uncertainty and the amount of testing. IBM only supports the major Linux distributions that are targeted at enterprise customers:...
The redbook, Linux with zSeries and ESS: Essentials , SG24-7025, provides a lot of information about Linux on IBM eServer zSeries and the ESS. It also describes in detail how the Fibre Channel (FCP) attachment of a storage system to zLinux works. It does not, however, describe the actual implementation.
It is intended to help users to attach a server running an enterprise-level Linux distribution based on United Linux 1 (IA-32) to the IBM 2105 Enterprise Storage Server. It provides very detailed step by step instructions and a lot of background information about Linux and SAN storage attachment.
Table A-2 Minor numbers, partitions and special device files

Missing device files
The Linux distributors do not always create all the possible special device files for SCSI disks. If you attach more disks than there are special device files available, Linux will not be able to address them.
Linux disk subsystem to recombine the multiple disks seen by the system into one, to manage the paths and to balance the load across them. The IBM multipathing solution for DS6000 attachment to Linux on Intel IA-32 and IA-64 architectures, IBM pSeries and iSeries is the IBM Subsystem Device Driver (SDD) (see 14.2, “Subsystem Device Driver”...
RedHat Enterprise Linux (RH-EL) multiple LUN support RH-EL by default is not configured for multiple LUN support. It will only discover SCSI disks addressed as LUN 0. The DS6000 provides the volumes to the host with a fixed Fibre Channel address and varying LUN. Therefore RH-EL 3 will see only one DS6000 volume (LUN 0), even if more are assigned to it.
scsi_hostadapter3 qla2300
options scsi_mod max_scsi_luns=128

Adding FC disks dynamically
The commonly used way to discover newly attached DS6000 volumes is to unload and reload the Fibre Channel HBA driver. However, this action is disruptive to all applications that use Fibre Channel attached disks on this particular host. A Linux system can recognize newly attached LUNs without unloading the FC HBA driver.
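On 2.6-based kernels this non-disruptive discovery is usually done by asking each SCSI host to rescan through sysfs. The sketch below only echoes the commands so the sequence can be reviewed safely; on a real system the writes would be executed as root, and the host numbers depend on your HBA configuration.

```shell
# Hedged sketch of a dynamic LUN rescan on Linux: write "- - -" (all
# channels, targets, LUNs) to each host's sysfs scan file. Here the
# commands are echoed rather than executed.
for host in /sys/class/scsi_host/host0 /sys/class/scsi_host/host1; do
  echo "echo '- - -' > $host/scan"
done
```

After the rescan, the newly attached DS6000 volumes appear as additional /dev/sd* devices without reloading the HBA driver.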
The Emulex HBA driver behaves differently: it always scans all LUNs up to 127. Linux on IBM iSeries Since OS/400 V5R1, it has been possible to run Linux in an iSeries partition. On iSeries models 270 and 8xx, the primary partition must run OS/400 V5R1 or higher and Linux is run in a secondary partition.
The generic SCSI tools The SUSE Linux Enterprise Server comes with a set of tools that allow low-level access to SCSI devices. They work through the generic SCSI layer, which is represented by special device files /dev/sg0, /dev/sg1, and so on.
SDD is installed before adding additional paths to a device. Otherwise, the operating system could lose the ability to access existing data on that device. For details, refer to the IBM TotalStorage Multipath Subsystem Device Driver User’s Guide , SC30-4096. Here we highlight only some important items: SDD does not support I/O load balancing with Windows 2000 server clustering (MSCS).
Figure A-1 Microsoft VDS Architecture

For a detailed description of VDS, refer to the Microsoft Windows Server 2003 Virtual Disk Service Technical Reference at: http://www.microsoft.com/Resources/Documentation/windowsserv/2003/all/techref/en-us/W2K3TR_ vds_intro.asp The DS6000 can act as a VDS hardware provider. The implementation is based on the DS Common Information Model (CIM) agent, a middleware application that provides a CIM-compliant interface.
Cluster Service (MSCS) and the Metro Mirror (PPRC) feature of the DS6000. It is designed to allow Microsoft Cluster installations to span geographically dispersed sites and help protect clients from site disasters or storage system failures. This solution is offered through IBM storage services.
Important: The DS6000 FC ports used by OpenVMS hosts must not be accessed by any other operating system, not even accidentally. The OpenVMS hosts must be defined to access these ports exclusively, and it must be ensured that no foreign HBA (one not defined as an OpenVMS host) is seen by these ports.
HP StorageWorks FC controllers use LUN 0 as a Command Console LUN (CCL) for exchanging commands and information with in-band management tools. This concept is similar to the Access LUN of IBM TotalStorage DS4000 (FAStT) controllers. Because the OpenVMS FC driver was written with StorageWorks controllers in mind, OpenVMS always treats LUN 0 as the CCL and never presents this LUN as a disk device.
However, there is no forced error indicator in the SCSI architecture, and the revector operation is nonatomic. As a substitute, the OpenVMS shadow driver exploits the SCSI commands READ LONG (READL) and WRITE LONG (WRITEL), optionally supported by some SCSI devices. These I/O functions allow data blocks to be read and written together with their disk device error correction code (ECC).
DS6000 Series: Concepts and Architecture...
Each adapter requires its own dedicated I/O processor. The iSeries Storage Web page provides information about current hardware requirements, including support for switches. This can be found at: http://www-1.ibm.com/servers/eserver/iseries/storage/storage_hw.html Software The iSeries must be running V5R2 or V5R3 (i5/OS) of OS/400. In addition, at the time of...
Table B-1 OS/400 logical volume models: 1750-A85, 1750-A84, 1750-A86, 1750-A87

Note: In Table B-1, GiB represents “Binary Gigabytes” (2^30 bytes) and GB represents “Decimal Gigabytes” (10^9 bytes). When creating the logical volumes for use with OS/400, you will see that in almost every case the OS/400 device size doesn’t match a whole number of extents, so some space will be wasted.
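To make the wasted space concrete, here is a small arithmetic sketch. It assumes 1 GiB (2^30-byte) extents and uses 35.16 decimal GB as an illustrative OS/400 device size; check Table B-1 for the actual sizes of the models you configure.

```shell
# DS6000 extents are 1 GiB (2^30 bytes). An OS/400 device size expressed in
# decimal gigabytes rarely falls on an extent boundary, so the tail of the
# last allocated extent is unusable. Illustrative 35.16 GB volume:
BYTES=35160000000
EXTENT=$((1 << 30))
EXTENTS=$(( (BYTES + EXTENT - 1) / EXTENT ))   # round up to whole extents
WASTED=$(( EXTENTS * EXTENT - BYTES ))
echo "$EXTENTS extents, $WASTED bytes unused"
```

Here the volume consumes 33 extents and leaves roughly a quarter of a GiB of the last extent unusable.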
Adding volumes to iSeries configuration
Once the logical volumes have been created and assigned to the host, they will appear as non-configured units. At this stage, they are used in exactly the same way as non-configured internal units. There is nothing particular about external logical volumes as far as OS/400 is concerned.
Work with Disk Units

Select one of the following:
  1. Display disk configuration
  2. Work with disk configuration
  3. Work with disk unit recovery

Selection
F3=Exit   F12=Cancel

Figure B-2 Work with Disk Units menu

4. When adding disk units to a configuration, you can add them as empty units by selecting Option 2, or you can choose to allow OS/400 to balance the data across all the disk units.
Specify the ASP to add each unit to.

Serial Number: 21-662C5, 21-54782, 75-1118707 (type 1750)

F3=Exit   F12=Cancel

Figure B-4 Specify ASPs to Add Units to

6. The Confirm Add Units panel will appear for review as shown in Figure B-5. If everything is correct, press Enter to continue.
Figure B-6 iSeries Navigator initial panel

2. Expand the iSeries to which you wish to add the logical volume and sign on to that server as shown in Figure B-7.

Figure B-7 iSeries Navigator Signon to iSeries panel

3. Expand Configuration and Service, Hardware, and Disk Units as shown in Figure B-8 on page 336.
Figure B-8 iSeries Navigator Disk Units

4. You will be asked to sign on to SST as shown in Figure B-9. Enter your Service tools ID and password and press OK.

Figure B-9 SST Signon

5. Right-click Disk Pools and select New Disk Pool as shown in Figure B-10 on page 337.
Figure B-10 Create a new disk pool

6. The New Disk Pool wizard appears as shown in Figure B-11. Click Next.

Figure B-11 New disk pool - welcome

Appendix B. Using the DS6000 with iSeries...
7. On the New Disk Pool dialog shown in Figure B-12, select Primary from the pull-down for the Type of disk pool, give the new disk pool a name, and leave Database set to the default, Generated by the system. Ensure the disk protection method matches the type of logical volume you are adding.
Figure B-14 Add disks to Disk Pool

10. A list of non-configured units similar to that shown in Figure B-15 will appear. Highlight the disks you want to add to the disk pool and click Add.

Figure B-15 Choose the disks to add to the Disk Pool

11. A confirmation screen appears as shown in Figure B-16 on page 340.
Figure B-16 Confirm disks to be added to Disk Pool

12. A summary of the Disk Pool configuration similar to Figure B-17 appears. Click Finish to add the disks to the Disk Pool.

Figure B-17 New Disk Pool Summary

13. Take note of and respond to any message dialogs which appear. After taking action on any messages, the New Disk Pool Status panel shown in Figure B-18 on page 341 will appear showing progress.
Figure B-18 New Disk Pool Status

14. When complete, click OK on the information panel shown in Figure B-19.

Figure B-19 Disks added successfully to Disk Pool

15. The new Disk Pool can be seen in iSeries Navigator Disk Pools in Figure B-20.

Figure B-20 New Disk Pool shown on iSeries Navigator

16. To see the logical volume, as shown in Figure B-21, expand Configuration and Service, Hardware, Disk Pools and click the disk pool you just created.
Unlike platforms that require the IBM Subsystem Device Driver (SDD), multipath support on iSeries is part of the base operating system. At V5R3, up to eight connections can be defined from multiple I/O adapters on an iSeries server to a single logical volume in the DS6000. Each connection for a multipath disk unit functions independently.
Prior to multipath being available, some customers used OS/400 mirroring to two sets of disks, either in the same or different external disk subsystems. This provided implicit dual-path as long as the mirrored copy was connected to a different IOP/IOA, BUS, or I/O tower.
Figure B-23 Multipath removes single points of failure

Unlike other systems, which may only support two paths (dual-path), OS/400 V5R3 supports up to eight paths to the same logical volumes. As a minimum, you should use two, although some small performance benefits may be experienced with more. However, since OS/400 multipath spreads I/O across all available paths in a load-balancing manner, only a small benefit is gained from additional paths.
Figure B-24 Example of multipath with iSeries

Figure B-24 shows an example where 48 logical volumes are configured in the DS6000. The first 24 of these are assigned via a host adapter in the top controller card in the DS6000 to a Fibre Channel I/O adapter in the first iSeries I/O tower or rack.
Specify the ASP to add each unit to.

Serial Number: 21-662C5, 21-54782, 75-1118707 (type 1750)

F3=Exit   F12=Cancel

Figure B-25 Adding multipath volumes to an ASP

Note: For multipath volumes, only one path is shown. In order to see the additional paths, see “Managing multipath volumes using iSeries Navigator”...
When you get to the point where you will select the volumes to be added, you will see a panel similar to that shown in Figure B-27. Multipath volumes appear as DMPxxx. Highlight the disks you want to add to the disk pool and click Add.

Figure B-27 Adding a multipath volume

Note: For multipath volumes, only one path is shown.
When you have completed these steps, the new Disk Pool can be seen in iSeries Navigator Disk Pools in Figure B-28.

Figure B-28 New Disk Pool shown on iSeries Navigator

To see the logical volume, as shown in Figure B-29, expand Configuration and Service, Hardware, Disk Pools and click the disk pool you just created.
Managing multipath volumes using iSeries Navigator All units are initially created with a prefix of DD. As soon as the system detects that there is more than one path to a specific logical unit, it will automatically assign a unique resource name with a prefix of DMP for both the initial path and any additional paths.
To see the other connections to a logical unit, right-click the unit and select Properties, as shown in Figure B-31 on page 350.

Figure B-31 Selecting properties for a multipath logical unit
You will then see the General Properties tab for the selected unit, as in Figure B-32. The initial path is shown as Device 1 in the box labelled Storage.

Figure B-32 Multipath logical unit properties
To see the other paths to this unit, click the Connections tab, as shown in Figure B-33, where you can see the other seven connections for this logical unit.

Figure B-33 Multipath connections

Multipath rules for multiple iSeries systems or partitions
When you use multipath disk units, you must consider the implications of moving IOPs and multipath connections between nodes.
I/O. If these do not meet your requirements, then you can adjust the hardware configuration in Disk Magic accordingly. Note: Disk Magic is for IBM and IBM Business Partner use only. Customers should contact their IBM or IBM Business Partner representative for assistance with Capacity Planning, which may be a chargeable service.
Figure B-34 Process for sizing external storage

Planning for arrays and DDMs
In general, although it is possible to use 146 GB and 300 GB 10K RPM DDMs, we recommend that you use 73 GB 15K RPM DDMs for iSeries production workloads.
Number of iSeries Fibre Channel adapters The most important factor to take into consideration when calculating the number of Fibre Channel adapters in the iSeries is the throughput capacity of the adapter and IOP combination. Since this guideline is based only on iSeries adapters and Access Density (AD) of iSeries workload, it doesn't change when using the DS6000.
I/O rate, while that of other servers may be lower – often below one I/O per GB per second. As an example, a Windows file server with a large data capacity may normally have a low I/O rate with fewer peaks, and could share ranks with iSeries. However, SQL, DB, or other application servers may show higher rates with peaks, and we recommend using separate ranks for these servers.
I/O adapters to one host port in the DS6000. For a current list of switches supported under OS/400, refer to the iSeries Storage Web site:
http://www-1.ibm.com/servers/eserver/iseries/storage/storage_hw.html

Migration
For many iSeries customers, migrating to the DS6000 will be best achieved using traditional Save/Restore techniques.
Figure B-35 Using Metro Mirror to migrate from ESS to the DS6000 The same setup can also be used if the ESS LUNs are in an IASP, although the iSeries would not require a complete shutdown since varying off the IASP in the ESS, unassigning the ESS LUNs, assigning the DS6000 LUNs and varying on the IASP would have the same effect.
You can then use the OS/400 command STRASPBAL TYPE(*ENDALC) to mark the units to be removed from the configuration, as shown in Figure B-36. This keeps new allocations away from the marked units and can reduce the down time associated with removing a disk unit.
State using ENDSBS *ALL. For most customers, this is not a practical solution. To avoid this, and to make FlashCopy more appropriate for iSeries customers, IBM has developed a service offering that allows Independent Auxiliary Storage Pools (IASPs) to be used with FlashCopy independently of the LSU and the other disks which make up *SYSBAS (ASP1-32).
AIX on IBM iSeries With the announcement of the IBM iSeries i5, it is now possible to run AIX in a partition on the i5. This can be either AIX 5L V5.2 or V5.3. All supported functions of these operating system levels are supported on i5, including HACMP for high availability and external boot from Fibre Channel devices.
/iphatlparkickoff.htm

Note: AIX will not run in a partition on 8xx and earlier iSeries systems.

Linux on IBM iSeries
Since OS/400 V5R1, it has been possible to run Linux in an iSeries partition. On iSeries models 270 and 8xx, the primary partition must run OS/400 V5R1 or higher and Linux is run in a secondary partition.
IBM Implementation Services for TotalStorage Copy Functions
IBM Implementation Services for TotalStorage Command-Line Interface
IBM Migration Services for eServer zSeries data
IBM Migration Services for open systems attached to TotalStorage disk systems
IBM Geographically Dispersed Parallel Sysplex™ (GDPS®)
Enterprise Remote Copy Management Facility (eRCMF)
IBM Migration Services for eServer zSeries data
IBM provides a technical specialist at your site to help plan and assist in the implementation of nondisruptive DASD migration to a new or existing IBM TotalStorage disk system. The migration is accomplished using the following software and hardware, which allow DASD volumes to be copied to the new storage devices without interruption to service.
IBM Migration Services for open systems attached to TotalStorage disk systems include planning for and implementation of data migration from an existing UNIX or Windows server to new or existing larger capacity IBM storage with minimal disruption. This service uses the following hardware and software tools:...
#GDSSolution

IBM eServer iSeries Copy Services
For the iSeries environment IBM offers a special toolkit, which allows you to use the advanced Copy Services functions with the iSeries. For more information on this, see “iSeries toolkit for Copy Services” on page 361.
Figure 15-9 Example of the Supported Product List (SPL) from the IBM Support Line

Appendix C. Service and support offerings...
IBM Redbooks
For information on ordering these publications, see “How to get IBM Redbooks” on page 371. Note that some of the documents referenced here may be available in softcopy only.

The IBM TotalStorage DS6000 Series: Implementation, SG24-6781
These Web sites and URLs are also relevant as further information sources:

Documentation for DS6800:
http://www.ibm.com/servers/storage/support/disk/ds6800/

SDD and Host Attachment scripts:
http://www.ibm.com/support/

IBM Disk Storage Feature Activation (DSFA) Web site:
http://www.ibm.com/storage/dsfa

The PSP information can be found at:
http://www-1.ibm.com/servers/resourcelink/svc03100.nsf?OpenDatabase

Documentation for the DS6000:
http://www.ibm.com/servers/storage/support/disk/1750.html
Nortel:
http://www.nortelnetworks.com/

ADVA:
http://www.advaoptical.com/

How to get IBM Redbooks
You can search for, view, or download Redbooks, Redpapers, Hints and Tips, draft publications and Additional materials, as well as order hardcopy Redbooks or CD-ROMs, at this Web site:
ibm.com/redbooks
installation methods 197 migration 208 migration example 209 mixed device environments 208 return codes 206 supported environments 197 usage examples 207 user assistance 205 user security 203 DS management console see DS MC DS MC 8, 108, 126 connectivity 129 DS Open API 9, 110, 120 DS Open application programming interface see DS Open API...
IBM Migration services open systems 365 IBM Multi-path Subsystem Device Drive see SDD IBM TotalStorage DS Command-line Interface see DS IBM TotalStorage DS Storage Manager see DS Storage Manager IBM TotalStorage Multiple Device Manager 283 IBM TotalStorage Productivity Center see TPC...
LVS LCU 158 licensed features 124, 131 disk storage feature activation 137 ordering 134 licenses server attachment 134 Linux 312 limited number of SCSI devices 316 managing multiple paths 316 missing device files 315 on iSeries 319, 362...
microcode maintaining 62 updates 62 Microsoft Windows 2000/2003 321 HBA and operating system settings 322 SDD 322 MPIO 306 multipathing other solutions 281 software 51 multiple allegiance 19 Multiple Relationship FlashCopy 10, 94 network settings 120–121 non-volatile storage see NVS NVS 48 OEL 124, 132 offline configuration 127, 165...
75 logical volumes 72 ranks 69 vmstat 303 volume groups 78, 156 creating 187 VSE/ESA 265, 271 Windows Server 2003 VDS support WWNN 192 XRC see z/OS Global Mirror z/OS configuration recommendations 237 device recognition 246 IOS scalability 244...
DS6000 series is designed and operates. The DS6000 series is a follow-on product of the IBM TotalStorage Enterprise Storage Server, with new functions related to storage virtualization and flexibility.