Our Virtual Test Drives

Started by dhilipkumar, May 02, 2009, 09:03 PM


dhilipkumar


Straight hypervisor performance doesn't reflect the full experience of managing a virtualized cluster. This Rolling Review was, first and foremost, a qualitative assessment, so we built our test bed to gauge each product's setup, physical-to-virtual conversions, and day-to-day manageability.
We ran all five platforms on a variety of server hardware, then put all of the virtualization suites except Parallels through a final series of tests on identical hardware. The test cluster comprised four quad-core Hewlett-Packard (NYSE: HPQ) DL385-G2 servers, two with 12 GB of RAM and two with 8 GB. Within each, the host hypervisor was installed on a pair of hardware-RAID mirrored 72-GB 10,000-rpm SAS drives. VMware, Citrix (NSDQ: CTXS), and Virtual Iron installations were bare metal, while Microsoft (NSDQ: MSFT) Hyper-V was installed on top of Windows Server 2008.


All hosts were connected via a Cisco (NSDQ: CSCO) 3750G switch for iSCSI SAN access to Dell EqualLogic PS5000-series SAS and SATA storage devices. We base-tested connectivity with legacy Fibre Channel storage arrays to verify functionality, while a 16-drive SAS array and a 48-drive SATA array let us test compatibility and performance representative of commodity solutions. The Dell system was very easy to set up, configure, and maintain throughout testing; small and midsize businesses would do well to investigate entry-level, expandable iSCSI SANs from Dell or other vendors. We formatted the 48 TB of raw storage as one 32.3-TB RAID-50 pool, from which we sliced out 5-TB resource pools for each virtual machine cluster.
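For readers assembling a similar lab, here's a minimal sketch of how a Linux hypervisor host might be attached to an iSCSI target with the standard open-iscsi iscsiadm tool. The portal address and target IQN are hypothetical placeholders, not the values from our test bed, and the exact steps vary by hypervisor and array.

#!/usr/bin/env python3
# Minimal sketch (assumes a Linux host with open-iscsi installed): discover
# and log in to an iSCSI target so its LUN appears as a local block device.
# PORTAL and TARGET_IQN are hypothetical placeholders, not our lab's values.

import subprocess

PORTAL = "192.168.50.10"  # hypothetical group IP of the iSCSI array
TARGET_IQN = "iqn.2001-05.com.equallogic:example-vm-pool"  # hypothetical target name

def run(cmd):
    # Echo the command, then run it and raise if it fails.
    print("+ " + " ".join(cmd))
    subprocess.run(cmd, check=True)

# Ask the array which targets it exposes at this portal.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Log in to one target; the LUN then shows up as /dev/sdX on the host,
# ready to be handed to the hypervisor as shared VM storage.
run(["iscsiadm", "-m", "node", "-T", TARGET_IQN, "-p", PORTAL, "--login"])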

Each host had one dedicated gigabit network interface card for iSCSI traffic, and the 3750G was dedicated to storage only; no other communications traversed the physically isolated switch. All Ethernet data connections for the VM clusters ran over a second 3750G, with one gigabit NIC per host for network communications and one gigabit NIC for VM management. Three virtual LANs separated network traffic: one subnet for VM management and two subnets for VM testing.

All servers were rolled into four-host VM clusters for the four mainstream hypervisors, yielding a virtual pool of resources encompassing 16 2.6-GHz cores, 40 GB of RAM, and 5 TB of storage on the storage area network. It's easy to see why old-school mainframe wonks smile whenever 20-something IT admins get excited about virtualization.
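As a quick sanity check on those pool figures, here's a trivial Python tally using the per-host specs listed earlier (four quad-core 2.6-GHz hosts, two with 12 GB of RAM and two with 8 GB). It's only bookkeeping, not part of the test methodology.

# Per-host specs as stated above; the totals are what the cluster pools together.
hosts = [
    {"cores": 4, "ghz": 2.6, "ram_gb": 12},
    {"cores": 4, "ghz": 2.6, "ram_gb": 12},
    {"cores": 4, "ghz": 2.6, "ram_gb": 8},
    {"cores": 4, "ghz": 2.6, "ram_gb": 8},
]

total_cores = sum(h["cores"] for h in hosts)    # 4 hosts x 4 cores = 16 cores
total_ram_gb = sum(h["ram_gb"] for h in hosts)  # 12 + 12 + 8 + 8 = 40 GB
print(f"{total_cores} cores @ {hosts[0]['ghz']} GHz, {total_ram_gb} GB RAM, 5 TB SAN")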

We tested Parallels Server on four- and eight-core Apple Xserves with 8 GB and 32 GB of RAM, respectively, with the hypervisor installed atop Mac OS X Server 10.5.5 on hardware-RAID mirrored 80-GB SAS drives. Apple doesn't officially support iSCSI connections, only Fibre Channel SANs, so we tested Parallels with a legacy Fibre Channel storage device and local storage. To guarantee support from both Apple and Parallels, we recommend sticking to the Fibre Channel SANs on Apple's short vendor list for any Parallels Server installation.

We chose a Windows 2003 host running on older hardware without virtualization optimizations as our physical-to-virtual guinea pig. During the course of this Rolling Review, we virtualized many varieties of Windows and Linux with decent results. For the wrap-up, we stuck with simple 32-bit Windows server conversions on the assumption that most SMBs initially consolidate older legacy servers to get aging hardware offline. The host ran file services backed by local storage, IIS serving basic static pages, and DNS. Each vendor's (or vendor partner's) physical-to-virtual converter worked without issue, capturing the Windows 2003 host as a VM.

We installed each vendor's virtualization tools to optimize drivers for video and system performance, then took snapshots of the completed base images. Then we cloned the heck out of our images to populate our clusters with Windows and Debian VMs, and ran with the ball.

Source: InformationWeek