Book: Mastering VMware® Infrastructure 3

Chapter 4: Creating and Managing Storage Devices

Differentiate among the various storage options available to VI3. The storage options available for VMware Infrastructure 3 span a wide range of performance and cost. From the high-speed, high-cost Fibre Channel solution, to the efficient, cost-effective iSCSI solution, to the slower yet cheaper NAS/NFS option, each solution has a place in an organization on a mission to virtualize.

Master It Identify the characteristics of each storage technology and which VI3 features each supports.

Solution Fibre Channel, iSCSI, and NAS/NFS all allow for VMotion, DRS, and HA. Fibre Channel is traditionally more expensive than iSCSI, which is more expensive than NAS/NFS. Fibre Channel and iSCSI storage support a boot-from-SAN configuration for ESX Server. Only Fibre Channel SANs support virtual machines configured as part of a Microsoft server cluster.

Design a storage area network for VI3. Once you've selected a storage technology, begin with the implementation of a dedicated storage network to optimize the transfer of storage traffic. A dedicated network for an iSCSI or NAS/NFS deployment isolates the storage traffic from the e-mail, Internet, and file transfer traffic of the standard corporate LAN. From there, the LUN design for a Fibre Channel or iSCSI storage solution will follow the adaptive approach, the predictive approach, or a hybrid of the two.
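As a rough sketch of such a dedicated storage network, the Service Console commands below create a separate vSwitch with its own uplink and a VMkernel port for IP storage traffic. The vSwitch name, port group name, uplink, and addresses are assumptions chosen only for illustration.

    # Create a vSwitch dedicated to IP storage and attach a spare uplink (names are illustrative).
    esxcfg-vswitch -a vSwitch2
    esxcfg-vswitch -L vmnic2 vSwitch2

    # Add a port group for the VMkernel storage interface.
    esxcfg-vswitch -A IPStorage vSwitch2

    # Create the VMkernel port with an address on the isolated storage subnet.
    esxcfg-vmknic -a -i 172.16.1.101 -n 255.255.255.0 IPStorage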

Master It Identify use cases for the adaptive and predictive LUN design schemes.

Solution The adaptive scheme is good for non-disk-intensive virtual machines and for minimizing administrative overhead. The predictive scheme involves more administrative effort for designing and creating the LUN strategy but offers better performance for the virtual machines.

Configure and manage fibre channel and iSCSI storage networks. Deploying a Fibre Channel SAN involves the development of a zoning and LUN masking strategy that ensures data security across ESX Server hosts while still providing for the needs of VMotion, HA, and DRS. The nodes in the Fibre Channel infrastructure are identified by unique 64-bit addresses called World Wide Names (WWNs). The iSCSI storage solution continues to use IP and MAC addresses for node identification and communication. ESX Server hosts use a four-part naming structure (adapter:target:LUN:partition, such as vmhba1:0:3:1) for accessing pools of storage on a SAN. Communication to an iSCSI storage device requires that both the Service Console and the VMkernel be able to communicate with the device.
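For software iSCSI, the sketch below shows the Service Console side of that requirement: opening the firewall for the iSCSI client, enabling the software initiator, and adding a discovery address. The target address and adapter name are assumptions, and the same steps can be performed from the storage adapter properties in the VI Client.

    # Open the Service Console firewall for the software iSCSI client.
    esxcfg-firewall -e swISCSIClient

    # Enable the software iSCSI initiator.
    esxcfg-swiscsi -e

    # Add a Send Targets discovery address to the software iSCSI adapter
    # (the adapter name, e.g. vmhba40, varies by host and ESX version).
    vmkiscsi-tool -D -a 172.16.1.50 vmhba40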

Master It Identify the SAN LUNs that have been made available to an ESX Server host.

Solution Use the Rescan link from the Storage node of the Configuration tab.
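The same rescan can also be performed from the Service Console; a minimal sketch follows, with the adapter name as an assumption.

    # Rescan a specific adapter for new LUNs and VMFS volumes (adapter name is illustrative).
    esxcfg-rescan vmhba1

    # List the LUNs now visible to the host, including their vmhba paths.
    esxcfg-vmhbadevs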

Configure and manage NAS storage. NAS storage offers a low-cost way to provide a shared storage pool for ESX Server hosts. Since the ESX Server host connects under the context of root, the NFS server must be configured with the no_root_squash parameter. A VMkernel port with access to the NAS server is required for an ESX Server host to mount the NFS export as a datastore.

Master It Identify the ESX Server and NFS server requirements for using a NAS/NFS device.

Solution Configure the ESX Server with a VMkernel port. Configure the shared directory on the NFS server using the rw, no_root_squash, and sync parameters.
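As a hedged example of both sides of that configuration, the export entry below belongs in /etc/exports on the NFS server, and the esxcfg-nas command mounts the share as a datastore on an ESX Server host that already has a VMkernel port on the storage network. The host names, subnet, paths, and datastore label are assumptions.

    # On the NFS server: export the directory read/write to the storage subnet,
    # allowing root access from the ESX hosts (entry in /etc/exports), then re-export.
    #   /vmstore  172.16.1.0/24(rw,no_root_squash,sync)
    exportfs -r

    # On the ESX Server host: mount the export as an NFS datastore, then list NFS datastores.
    esxcfg-nas -a -o nfs01.example.com -s /vmstore NFSDatastore1
    esxcfg-nas -l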

Create and manage VMFS volumes. VMFS is the proprietary, highly efficient file system used by ESX Server hosts for storing virtual machine files, ISO files, and templates. VMFS volumes can be extended with additional extents to grow beyond the 2TB limitation, but individual files within the volume are still limited to a maximum of 2TB. VMFS is managed through the VI Client or with a series of command-line tools, including vmkfstools and esxcfg-vmhbadevs.

Master It Increase the size of a VMFS volume.

Solution Use the datastore properties page to add a non-VMFS LUN as an extent in the existing datastore.
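The same growth can be sketched from the Service Console with vmkfstools; this assumes the new LUN has already been presented and partitioned, and the device names below are illustrative.

    # Confirm which LUNs already contain VMFS volumes before choosing an extent.
    esxcfg-vmhbadevs -m

    # Span the existing VMFS volume (head partition) onto the new partition:
    #   vmkfstools -Z <span_partition> <head_partition>
    vmkfstools -Z vmhba1:0:2:1 vmhba1:0:1:1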

Master It Balance the I/O of an ESX Server host to make use of all available storage hardware.

Solution Use the datastore properties page to manually set the active paths to each LUN so that all HBAs in the local ESX Server host are being utilized.
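After adjusting the active paths, a quick check from the Service Console confirms which HBA is carrying the I/O for each LUN; the listing below is a minimal sketch.

    # List every path to each LUN along with its multipathing policy (Fixed/MRU)
    # and which path is currently active.
    esxcfg-mpath -l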
