SVC - SAN Volume Controller
SVC virtualization concepts
The SVC product provides block-level aggregation and volume management for disk storage within the SAN. In simpler terms, SVC manages a number of back-end storage controllers and maps the physical storage within those controllers into logical disk images that can be seen by application servers and workstations in the SAN.
The SAN is zoned so that the application servers cannot see the back-end physical storage, which prevents conflicts that might arise from the SVC and the application servers both trying to manage the back-end storage.
A node is a single SVC hardware unit that provides virtualization, cache, and copy services to the SAN. SVC nodes are deployed in pairs, and one or more pairs make up a cluster. A cluster can contain between one and four SVC node pairs, which is a product limit rather than an architectural limit.
Each pair of SVC nodes is also referred to as an I/O Group. An SVC cluster can therefore have between one and four I/O Groups. A specific virtual disk, or VDisk, is always presented to a host server by a single I/O Group of the cluster.
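The node, I/O Group, and cluster relationships can be pictured with a small data-model sketch. The following Python fragment is purely conceptual; the class and attribute names are invented for this illustration and are not part of any SVC interface. It captures the pairing of nodes into I/O Groups, the limit of four I/O Groups per cluster, and the fact that each VDisk is presented by exactly one I/O Group.

    # Conceptual model only, not SVC code: nodes form pairs (I/O Groups), a cluster
    # holds one to four I/O Groups, and every VDisk is owned by a single I/O Group.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Node:
        name: str

    @dataclass
    class IOGroup:
        node_a: Node        # the two nodes that form the pair
        node_b: Node

    @dataclass
    class Cluster:
        io_groups: List[IOGroup] = field(default_factory=list)

        def add_io_group(self, group: IOGroup) -> None:
            # Product limit: at most four I/O Groups (node pairs) per cluster.
            if len(self.io_groups) >= 4:
                raise ValueError("an SVC cluster supports at most four I/O Groups")
            self.io_groups.append(group)

    @dataclass
    class VDisk:
        name: str
        io_group: IOGroup   # a VDisk is always presented by a single I/O Group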
When a host server performs I/O to one of its VDisks, all the I/Os for that VDisk are directed to one specific I/O Group in the cluster. Under normal operating conditions, the I/Os for a specific VDisk are always processed by the same node of the I/O Group. This node is referred to as the preferred node for this specific VDisk.
Each node of an I/O Group acts as the preferred node for its own subset of the total number of VDisks that the I/O Group presents to the host servers. However, each node also acts as the failover node for its partner node in the I/O Group and takes over the I/O handling from its partner node, if required.
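The preferred node and failover behavior can be sketched as follows. This is a conceptual illustration in Python, not SVC code; the names owning_node, Node, and IOGroup are invented for the example.

    # Conceptual model only: a VDisk's I/O is handled by its preferred node while
    # that node is available, and by the partner node of the I/O Group otherwise.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        online: bool = True

    @dataclass
    class IOGroup:
        node_a: Node
        node_b: Node

    def owning_node(io_group: IOGroup, preferred: Node) -> Node:
        """Return the node that currently handles I/O for a VDisk."""
        partner = io_group.node_b if preferred is io_group.node_a else io_group.node_a
        if preferred.online:
            return preferred    # normal operation: the preferred node serves the VDisk
        if partner.online:
            return partner      # failover: the partner node takes over the I/O handling
        raise RuntimeError("both nodes of the I/O Group are offline")

    # Example: node1 is the preferred node for this VDisk; when it goes offline,
    # I/O for the VDisk is handled by node2.
    group = IOGroup(Node("node1"), Node("node2"))
    assert owning_node(group, group.node_a).name == "node1"
    group.node_a.online = False
    assert owning_node(group, group.node_a).name == "node2"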
In an SVC-based environment, the I/O handling for a VDisk can switch between the two nodes of an I/O Group. Therefore, servers that are connected through FC must use a multipath driver to handle these failover situations.
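The role of the host-side multipath driver can be reduced to the following sketch. It is a simplified illustration of path selection in general, not the behavior of any particular multipath driver; the Path type and select_path function are invented for this example.

    # Conceptual model only: an FC-attached host has paths to both nodes of the
    # I/O Group, and the multipath driver keeps I/O flowing by choosing a path
    # that is still alive after a node (or link) failure.
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Path:
        target_port: str    # node port the path leads to
        alive: bool = True

    def select_path(paths: List[Path]) -> Path:
        """Pick a usable path; fail only if every path to the VDisk is gone."""
        for path in paths:
            if path.alive:
                return path
        raise IOError("no path to the VDisk is available")

    paths = [Path("node1_port"), Path("node2_port")]
    paths[0].alive = False                      # the path to the preferred node fails
    assert select_path(paths).target_port == "node2_port"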
SVC 5.1 introduces iSCSI as an alternative means of attaching hosts. However, all communication with back-end storage subsystems, and with other SVC clusters, is still through FC. Node failover for iSCSI-attached hosts can be handled without a multipath driver installed on the server: after a node failover, an iSCSI-attached server simply reconnects to the original target IP address, which is now presented by the partner node. To protect the server against link failures in the network or host bus adapter (HBA) failures, however, a multipath driver is still mandatory.
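The iSCSI failover behavior described above can be illustrated with the following conceptual sketch, which simply models the target IP address moving to the partner node; the names and addresses are invented for the example and do not reflect SVC internals.

    # Conceptual model only: after a node failure the partner node presents the
    # failed node's iSCSI target IP address, so the host reconnects to the same IP.
    from dataclasses import dataclass

    @dataclass
    class IscsiTarget:
        ip_address: str
        served_by: str      # node that currently answers on this IP address

    def node_failover(target: IscsiTarget, partner_node: str) -> None:
        target.served_by = partner_node

    target = IscsiTarget("10.0.0.10", served_by="node1")
    node_failover(target, "node2")
    # The host reconnects to the original target IP, now presented by the partner node.
    assert target.ip_address == "10.0.0.10" and target.served_by == "node2"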
The SVC I/O Groups are connected to the SAN so that all application servers that access VDisks from an I/O Group have access to that group. Up to 256 host server objects can be defined per I/O Group; these host server objects can consume VDisks that are provided by that specific I/O Group.
If required, host servers can be mapped to more than one I/O Group of an SVC cluster; therefore, they can access VDisks from separate I/O Groups. You can also move VDisks between I/O Groups to redistribute the load. With the current release of SVC, I/Os to the VDisk that is being moved must be quiesced for the short duration of the move.
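The following sketch illustrates, conceptually, why moving a VDisk to another I/O Group involves a short interruption: host I/O is quiesced, the VDisk is re-homed, and I/O resumes. It is not SVC code; the function and attribute names are invented for this example.

    # Conceptual model only: I/O to the VDisk is quiesced for the duration of the move.
    from dataclasses import dataclass

    @dataclass
    class VDisk:
        name: str
        io_group: int
        io_quiesced: bool = False

    def move_vdisk(vdisk: VDisk, target_io_group: int) -> None:
        vdisk.io_quiesced = True                # stop host I/O for the move
        try:
            vdisk.io_group = target_io_group    # re-home the VDisk on the new I/O Group
        finally:
            vdisk.io_quiesced = False           # resume host I/O

    vdisk = VDisk("vdisk1", io_group=0)
    move_vdisk(vdisk, target_io_group=1)
    assert vdisk.io_group == 1 and not vdisk.io_quiesced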
The SVC cluster and its I/O Groups view the storage that is presented to the SAN by the back-end controllers as a number of disks, known as managed disks or MDisks. Because the SVC does not attempt to provide recovery from physical disk failures within the back-end controllers, an MDisk is usually, but not necessarily, provisioned from a RAID array. The application servers, however, do not see the MDisks at all. Instead, they see a number of logical disks, known as virtual disks or VDisks, that are presented by the SVC I/O Groups to the servers through the SAN (FC) or LAN (iSCSI). A VDisk is provisioned out of one Managed Disk Group (MDG) or, if it is a mirrored VDisk, out of two MDGs.
An MDG is a collection of up to 128 MDisks that forms a storage pool out of which VDisks are provisioned. A single cluster can manage up to 128 MDGs. The size of these pools can be changed (expanded or shrunk) at run time without taking the MDG or the VDisks that it provides offline. At any point in time, an MDisk can be a member of only one MDG, with one exception (image mode VDisks).
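The MDisk and MDG relationships and limits can be summarized with the following conceptual sketch; the class names and the simple membership check are invented for this illustration and do not represent SVC internals.

    # Conceptual model only: an MDG pools up to 128 MDisks, a cluster manages up to
    # 128 MDGs, and (image mode aside) an MDisk belongs to only one MDG at a time.
    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class MDisk:
        name: str

    @dataclass
    class MDiskGroup:
        name: str
        extent_size_mb: int                     # chosen when the MDG is created
        mdisks: List[MDisk] = field(default_factory=list)

    @dataclass
    class Cluster:
        mdisk_groups: Dict[str, MDiskGroup] = field(default_factory=dict)

        def create_mdg(self, mdg: MDiskGroup) -> None:
            if len(self.mdisk_groups) >= 128:
                raise ValueError("a cluster manages at most 128 MDGs")
            self.mdisk_groups[mdg.name] = mdg

        def add_mdisk(self, mdg_name: str, mdisk: MDisk) -> None:
            for group in self.mdisk_groups.values():
                if any(m.name == mdisk.name for m in group.mdisks):
                    raise ValueError("the MDisk is already a member of an MDG")
            target = self.mdisk_groups[mdg_name]
            if len(target.mdisks) >= 128:
                raise ValueError("an MDG holds at most 128 MDisks")
            target.mdisks.append(mdisk)

    cluster = Cluster()
    cluster.create_mdg(MDiskGroup("MDG1", extent_size_mb=256))
    cluster.add_mdisk("MDG1", MDisk("mdisk0"))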
MDisks that are used in a specific MDG must have the following characteristics:
VDisks can be mapped to a host to allow a specific server to access a set of VDisks. A host within the SVC is a collection of HBA worldwide port names (WWPNs) or iSCSI qualified names (IQNs) that are defined on the specific server. Note that iSCSI names are internally identified by "fake" WWPNs, that is, WWPNs that are generated by the SVC. A VDisk can be mapped to multiple hosts, for example, a VDisk that is accessed by multiple hosts of a server cluster.
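The host object and VDisk mapping can be pictured as follows. The sketch is conceptual; the Host type, the map_vdisk helper, and the example WWPNs are invented for this illustration.

    # Conceptual model only: a host object groups the WWPNs or IQNs of one server,
    # and VDisk-to-host mappings control which hosts may access which VDisks.
    from dataclasses import dataclass
    from typing import List, Set, Tuple

    @dataclass
    class Host:
        name: str
        port_names: List[str]                   # HBA WWPNs and/or iSCSI IQNs of the server

    mappings: Set[Tuple[str, str]] = set()      # (host name, VDisk name) pairs

    def map_vdisk(host: Host, vdisk_name: str) -> None:
        mappings.add((host.name, vdisk_name))

    # A shared VDisk mapped to two hosts of the same server cluster:
    map_vdisk(Host("clusternode1", ["10000000C912345A"]), "shared_vdisk")
    map_vdisk(Host("clusternode2", ["10000000C912345B"]), "shared_vdisk")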
An MDisk can be provided by a SAN disk subsystem or by the solid state drives in the SVC nodes themselves. Each MDisk is divided into a number of extents. The extent size is selected by the user when the MDG is created and ranges from 16 MB (the default) up to 2 GB.
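Extent-based virtualization can be sketched with a short calculation: each MDisk contributes a number of whole extents to its MDG, and a striped VDisk takes its extents from the MDisks of the group in turn. The function names and the round-robin allocation shown here are a simplified illustration, not the exact SVC allocation algorithm.

    # Conceptual model only: extents are the unit of mapping between MDisks and VDisks.
    from typing import List, Tuple

    def extents_in_mdisk(mdisk_size_mb: int, extent_size_mb: int) -> int:
        """Number of whole extents an MDisk contributes to its MDG."""
        return mdisk_size_mb // extent_size_mb

    def allocate_striped(vdisk_size_mb: int, extent_size_mb: int,
                         mdisk_names: List[str]) -> List[Tuple[str, int]]:
        """Map each VDisk extent (by index) onto an MDisk of the MDG, round-robin."""
        extents_needed = -(-vdisk_size_mb // extent_size_mb)    # round up
        return [(mdisk_names[i % len(mdisk_names)], i) for i in range(extents_needed)]

    # A 1 GB VDisk with 256 MB extents spread across two MDisks:
    print(allocate_striped(1024, 256, ["mdisk0", "mdisk1"]))
    # [('mdisk0', 0), ('mdisk1', 1), ('mdisk0', 2), ('mdisk1', 3)]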
We recommend that you use the same extent size for all MDGs in a cluster, because a common extent size is a prerequisite for migrating VDisks between two MDGs. If the extent sizes do not match, you must use VDisk Mirroring as a workaround. To copy (rather than migrate) the data to a new VDisk in another MDG, you can use SVC Advanced Copy Services.
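The choice between plain migration and the VDisk Mirroring workaround comes down to whether the two MDGs share an extent size, as the following conceptual sketch shows; the function names, return strings, and the copy-then-remove step described for the mirroring workaround are assumptions made for this illustration.

    # Conceptual model only: extent-based migration requires matching extent sizes;
    # otherwise, add a VDisk copy in the target MDG and remove the original copy.
    def can_migrate(source_extent_mb: int, target_extent_mb: int) -> bool:
        return source_extent_mb == target_extent_mb

    def relocate(source_extent_mb: int, target_extent_mb: int) -> str:
        if can_migrate(source_extent_mb, target_extent_mb):
            return "migrate the VDisk directly to the target MDG"
        return "add a mirrored VDisk copy in the target MDG, then remove the original copy"

    print(relocate(256, 256))   # same extent size: plain migration works
    print(relocate(256, 512))   # different extent sizes: use VDisk Mirroring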