Virtualized Test Lab Basics with Hyper-V
Date Posted: Aug 31 2009
Author: Joe
Posting Type: Article
Category: IT Topics
Page: 4 of 4
Lab Basics


Templates:
To really take advantage of a virtualized lab, you need templates. Natively, Hyper-V has no provision for "template" machines in the strictest sense, so you need to get creative. SCVMM does have templates and a library to store all of that in, but we aren't going to get too deep into that in this article.

There are two ways to do templates natively in Hyper-V, plus a third if you have SCVMM.


  1. Create a VM as normal, configure and patch it the way you want, and shut it down. Name it something that keeps it near the top of the name list (like "-Windows7x64-Template", the leading "-" being what keeps it near the top). Then, when you want a copy of that VM, Export the machine from the Hyper-V Manager to a new location, Import it, rename the imported machine, then fire it up and sysprep it.

    It's not the most efficient solution, but it works.

  2. The second way is a little more confusing if you don't understand how disks work in Hyper-V. Set up your template OS, loading the base configuration you are going to use for all the other machines based on that OS, and save it under a template name like the one above (-Windows7x64-Template). Shut this template down; you won't be firing this VM back up again. Then create each new virtual machine that should clone that base configuration: when you create the new VM, tell it to use a differencing disk and point that disk at the template's VHD. At boot the new VM reads from the parent VHD, but from that point on all changes are written only to the differencing disk. This keeps the child VMs' disks much more compact in most cases, since most of the data is being pulled off the parent disk; for that reason the parent VHD should sit on fast storage. Keep in mind that once differencing disks are connected to a parent disk, you should not make any changes to the parent VHD file. (A minimal command-line sketch of this approach follows this list.)

    Once the child VM is up and online, run sysprep to re-SID and rename the image.

  3. There is a third way, but it is an SCVMM-only option; I will go into that more in the next article. SCVMM has a dedicated library where you can dump VMs for storage, convert VMs into templates, and so on. Very handy!
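For reference, here is a minimal sketch of that differencing-disk workflow driven from a script. It shells out to diskpart's `create vdisk` command (present on Windows 7 / Server 2008 R2, run elevated); the VM names and paths are made-up placeholders, and creating the disk through Hyper-V Manager's New Virtual Hard Disk wizard is equivalent.

```python
import os
import subprocess
import tempfile

# Hypothetical paths for illustration only.
PARENT_VHD = r"D:\VMs\-Windows7x64-Template\template.vhd"  # the template VHD (never booted again)
CHILD_VHD = r"D:\VMs\Lab-DC01\lab-dc01.vhd"                # differencing disk for the new guest


def create_differencing_vhd(child, parent):
    """Create a differencing VHD whose parent is the read-only template VHD."""
    script = f'create vdisk file="{child}" parent="{parent}"\n'
    # diskpart reads its commands from a script file passed with /s (requires elevation).
    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write(script)
        script_path = f.name
    try:
        subprocess.check_call(["diskpart", "/s", script_path])
    finally:
        os.remove(script_path)


if __name__ == "__main__":
    create_differencing_vhd(CHILD_VHD, PARENT_VHD)
    # Next: attach CHILD_VHD to a new VM in Hyper-V Manager, boot it, and sysprep.
```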



Backups and Restores:
This is a short topic. On the previous pages I mentioned not to worry much about redundancy for your lab VMs, since they are just that: lab images, not production images. But it is still a good idea to back them up. There are a few ways to do this, either with a Hyper-V-aware backup app like Symantec Backup Exec 12.5, or with the native Windows Server Backup, which (in R2) can back up individual folders so you can pick which VMs you want to back up and which you don't.
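To illustrate the folder-level Windows Server Backup route, a one-off backup of a single VM's folder can be kicked off from a script like the sketch below. The drive letters and folder names are placeholders, and wbadmin needs to run elevated with the Windows Server Backup feature installed.

```python
import subprocess

# Placeholder paths: the folder holding one lab VM's VHDs/config, and a backup target drive.
VM_FOLDER = r"D:\VMs\Lab-DC01"
BACKUP_TARGET = "E:"

# wbadmin ships with the Windows Server Backup feature; -include takes the folders to protect.
subprocess.check_call([
    "wbadmin", "start", "backup",
    f"-backupTarget:{BACKUP_TARGET}",
    f"-include:{VM_FOLDER}",
    "-quiet",
])
```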

When you back up a VM that is running, it will be stopped when you restore it. The restore will be as if you had turned the VM off, so it won't be terribly clean, but at least it is a backup. If you wanted to do backups in a production environment, I would recommend installing the backup agents for a higher-end backup tool like Symantec inside the VMs and doing app-level backups on top of host-level ones.


Networking, Vlans, Routers, etc...:
I touched on networking in the last couple of pages, but one aspect of setting up many environments that is a pain is IP conflicts and isolating your VMs properly. As of now I just run one IP subnet that houses all the VMs in my lab and all my home machines, very similar to this diagram:


Nothing too fancy there, other than the headache of needing to ensure that none of the labs I may run at the same time use the same IP space. All my labs use static IPs, so there is no dependency on my host DHCP. None of my VM labs use my host network services like DHCP, DNS, or WINS; got to keep some separation.
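If you would rather plan that IP space programmatically than track it by hand, a short sketch like the following (using Python's standard ipaddress module, with example ranges substituted for mine) can carve out per-lab subnets and flag any overlap:

```python
import ipaddress

# Example ranges only: the home LAN plus one block per lab environment.
HOME_LAN = ipaddress.ip_network("192.168.1.0/24")
LABS = {
    "AD-lab": ipaddress.ip_network("192.168.10.0/24"),
    "Exchange-lab": ipaddress.ip_network("192.168.20.0/24"),
    "SCVMM-lab": ipaddress.ip_network("192.168.30.0/24"),
}

# Flag any lab subnet that collides with the home LAN or with another lab.
networks = [("home", HOME_LAN)] + list(LABS.items())
for i, (name_a, net_a) in enumerate(networks):
    for name_b, net_b in networks[i + 1:]:
        if net_a.overlaps(net_b):
            print(f"CONFLICT: {name_a} ({net_a}) overlaps {name_b} ({net_b})")

# Print the first few usable static addresses per lab so they can be handed out without guessing.
for name, net in LABS.items():
    first_hosts = list(net.hosts())[:3]
    print(name, "->", ", ".join(str(ip) for ip in first_hosts))
```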

Another way to do this, if you are fortunate enough to have either an ISP that will give you multiple public IPs or simply multiple routers, is:



The benefit of the split network is that you aren't at risk of stepping on the IPs of real machines in your environment, and if you are testing nasty stuff, like the effectiveness of your A/V, on your VMs... you have a wall between the networks.

In the diagram above, the two networks share common switching hardware. This is to illustrate that you don't need completely separate network hardware to do this.

Another big benefit of splitting the networks: if you plan to do any higher-end web provisioning of VMs via SCVMM, or make your lab accessible to clients, you can configure your web address and functions on the isolated network and keep that traffic away from your personal servers, workstations, or even the VM host.

This can also be achieved with VLANs, but most of us don't have VLAN-aware network hardware at home.

I plan to go over web accessibility in depth in the next article.

Lab Remote Access:
There are many ways to access your VM test lab remotely, whether "remote" means your living room or a client site.

  • RSAT/VMC - The most obvious option is via the RSAT tools. If you have these installed and can connect to your Hyper-V server, you can use the Virtual Machine Connection interface to access the desktop of any VM guest, whether or not it has RDP or network connectivity enabled. The issue is that you need RPC access to the host server, with name resolution. Over VPN this can be very slow, as the VMC connections are quite fat.


  • RDP to Host - You can RDP to the host server and manage the guests via RSAT/VMC on the host. You will be unable to use a mouse on unsupported guest OSs: OSs that can't have the integration services disk installed do not support mouse operations through an RDP session on the host. For all supported guests this works fine; it is a bit slow, but much more efficient than VMC connections when running over VPN or through the web.


  • RDP is also a nice option because, with a Terminal Services Gateway deployed on 2008, you can access the host via HTTPS from anywhere.

  • RDP to the Guest - This is the fastest way to connect to the guests, but also the biggest PITA: many guests may not have IPs configured to be reachable, guest firewalls may block your access, and login credentials are a pain to manage. I almost never use this except for some standard VMs I run for my non-lab services.


  • SCVMM Self-Service Portal - This is the coolest of the bunch. Via HTTP or HTTPS you can natively connect to, provision, and manage VM guests based on who the VM's owner is and who you are logged in as. The "Owner" attribute only exists in SCVMM, so don't look for it in the Hyper-V console; you can see it in one of the images below.


  • There are some limitations to this, though. For the HTTP VMC connection to work, you must have local host-name resolution to reach the guest. So if I connected from the internet, I would see everything in the images below, but when I clicked "Connect to VM" I would get an error saying "Can't connect to Leviathan". Because of this I have an SSTP VPN configured, so over just HTTPS I can remote in and access my SCVMM server; it's also a bit more secure this way.

    I will go into much more detail about SCVMM in the next article and how to deploy it.

    Below are some images of what the Self-Service Portal looks like:





Hyper-V Limitations - Labbing out a Hyper-V Cluster:

There is pretty much only one lab you cannot execute in a VM environment to some level, and that is simulating a Hyper-V cluster. The reason is that Hyper-V needs direct access to the AMD-V or Intel VT-x extensions in the CPU. The hypervisor does not expose those extensions to guests, so you cannot load Hyper-V inside a guest image.

So how do you cheaply setup a Hyper-V cluster lab?

There is a lot more to this than I am going to put in this article, and I don't plan on running through every aspect of clustering here. I am including it to give you an idea of what would be required to build a clustered Hyper-V lab environment.

In Hyper-V 1.0 there was no CPU downward compatibility, which meant that if you wanted a Hyper-V cluster, all nodes needed the exact same CPU specs. Hyper-V 2.0 now allows two generations of movement between CPUs, provided you tell the VM's CPU to allow downward compatibility. You can even move a VM from a quad-core CPU to a dual-core CPU (provided the VM isn't set to require 4 cores)! Below is an image of the setting that needs to be checked to enable a VM to be migrated between different CPU versions:


I have set up a nice little Hyper-V cluster lab that I can start up, shut down, and use at will, with only about $200 in additional hardware and NO changes to my main lab VM machine.

I purchased a small extra server with a minimal amount of RAM and the lowest-spec Intel CPU that supports the VT-x extensions. I had a hard drive and some RAM kicking around, so for about $200.00 I built a server I call "LilFish", as opposed to my main VM server, "Leviathan".

So here is the hardware:
Main Hyper-V Server: (Leviathan)
  • Quad Core Q9300 CPU

  • 16GB Ram

  • 4x 250GB drives for VM's

  • 2x onboard Gbit NICs.

  • - Nic1 - Host Only (this is the Nic that only the host services use... RDP/File/Print/DNS/DHCP/etc...)

  • - Nic2 - Hyper-V / Cluster Heart Beat Nic (this Nic is set to be the Hyper-V bridge connection for the "External" network, but also has an internal IP assigned to it to be the heart beat connection between the cluster nodes.)


Secondary (for clustering) Hyper-V server (Lilfish)
  • Dual Core E6300 CPU

  • 2GB Ram

  • 1x 320GB drive

  • 1x onboard NIC, and 1x generic 10/100 NIC.

  • - Onboard Nic - Host Only (this is the Nic that only the host services use... RDP)

  • - Extra Nic - Hyper-V / Cluster Heart Beat Nic (this Nic is set to be the Hyper-V bridge connection for the "External" network, but also has an internal IP assigned to it to be the heart beat connection between the cluster nodes.)


The reason I can get away with such minimal specs is that I am not planning to do much more with this box than validate clustering tests by moving a VM or two between servers. It doesn't need to host much.

Note that any Hyper-V Failover cluster requires shared storage so I had to come up with a solution for that also.

Shared Storage for Cheap
MS's recommendation for shared storage (which is required for Hyper-V clustering) is iSCSI or Fibre Channel SAN connections. I am assuming most people lack a SAN at home... so I won't be talking about the merits of an FC connection. For those mortals who have nothing at home, there is a solution.

Software-emulated iSCSI targets. What are these? Small applications you can run on a server (or workstation) to make that machine emulate a full-on iSCSI device like a NetApp. They don't have all the functionality of a high-end iSCSI device, but they do what we need. The one I am using is an older version of the StarWind iSCSI target, which can now be found here:
Rocket Divisions StarWind iSCSI Target

To host the iSCSI target I provisioned a small x86 Windows 2003 R2 server on my main Hyper-V server with only 512MB of RAM. I also attached a large fixed-size (200GB) VHD (fixed for a bit more performance) on a high-speed 4x RAID 0 array on my main VM box. On the iSCSI target you will need to provision two drives at a minimum: one to hold the quorum and one to hold the VM data.

Setting up StarWind is easy: you add the type of drive you want, give it a name, and that's all there is to it. StarWind supports various types of mappings; in this case the quorum is a small 256MB image file on the root of the iSCSI VM, and the VM disk is one large disk mapping to the 200GB partition assigned to the guest.


Failover Clustering Shared Storage info for Hyper-V 1.0
If you are using Hyper-V 1.0 you will need to handle creating LUNs and disks for VMs differently in an HA environment. Specifically, under Hyper-V 1.0 you need one shared-storage LUN for each VM you want to host in the HA environment. So if you plan to have 10 guests in the HA cluster, you will need to provision 11 iSCSI target disks (1 quorum + 10 drives for guest VMs).

This is because Hyper-V 1.0 does not support Cluster Shared Volumes (CSV), so when a VM is failed over, Failover Clustering must transition the actual LUN to the other server.

Failover Clustering Shared Storage info for Hyper-V 2.0

Hyper-V 2.0 is not constrained by the limitations of 1.0 and does not require separate LUNs for each VM, because it supports Cluster Shared Volumes, which are native NTFS mount points off the root drive in Windows 2008 R2. Once configured, each node will have the folder C:\ClusterStorage. Below that location is one 'Volume' folder for each iSCSI drive you have in the CSV configuration. In the pic below, Volume1 is the 200GB iSCSI drive I have connected.
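As a quick sanity check that each node sees the same CSV layout, something like the sketch below lists the Volume folders on every node over the C$ admin shares. The node names are just the ones used in this article, and it assumes your account has admin rights on both hosts.

```python
import os

# The node names from this article; substitute your own hosts.
NODES = ["Leviathan", "Lilfish"]

for node in NODES:
    csv_root = rf"\\{node}\C$\ClusterStorage"
    try:
        volumes = sorted(os.listdir(csv_root))
    except OSError as err:
        print(f"{node}: could not read {csv_root} ({err})")
        continue
    print(f"{node}: {', '.join(volumes) or '(no CSV volumes found)'}")
```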




I am not a fan of reinventing stuff, so here is a link with a detailed breakdown of how to enable CSV on an R2 cluster and how it works in more detail:
Microsoft's Failover and Network Load Balancing Clustering Team Blog covering Clustered Storage

Connecting Your Servers to your iSCSI Target
This is pretty painless: fire up the iSCSI Initiator from the Control Panel, and the servers should be able to discover the drives you just created with StarWind. Connect them both (one server at a time) and you will see them get added to your Storage Manager.
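If you prefer to script that step instead of clicking through the Control Panel, the built-in iscsicli tool can register the portal and log in to the targets. The portal address and target IQN below are placeholders for whatever StarWind reports on your target VM.

```python
import subprocess

# Placeholders: the iSCSI target VM's address and the IQN that StarWind reports for the disk.
PORTAL_ADDRESS = "192.168.1.50"
TARGET_IQN = "iqn.2009-08.lab.example:starwind-vmstore"  # hypothetical IQN

# Register the target portal, list the targets it exposes, then log in to the one we want.
subprocess.check_call(["iscsicli", "QAddTargetPortal", PORTAL_ADDRESS])
subprocess.check_call(["iscsicli", "ListTargets"])
subprocess.check_call(["iscsicli", "QLoginTarget", TARGET_IQN])
```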


Once connected, you can go through the cluster configuration process: install the required features, establish the cluster, and review the blog link above, as it gives a great rundown of how to configure the storage and test Live Migration. Once the failover cluster is established, it recognizes the Hyper-V services and enables those options automatically. The interface looks like this:



Tips and Tricks for Hyper-V Clustering

  • Make sure your External (or whatever primary network connection you have for your VMs) is named the EXACT same thing (case sensitive) under the Virtual Network properties on both nodes. If it is not, the VM will not be able to connect to the network as it fails over. SCVMM won't even let you fail over the node if this is mismatched between hosts:


  • Don't manage machines in Failover Cluster Manager, Hyper-V Manager, and SCVMM all at once. Pick one and use it. Changing properties in all three places can cause inconsistencies between the interfaces and issues with the guest configurations.


  • For example, SCVMM is very strict about guest names and configurations, Hyper-V Manager is not, and Failover Cluster Manager is somewhere in between. If you make a change in the latter two, you may very well make SCVMM stop working with that guest. If you have SCVMM in the environment, use that.

  • Now that you have a failover cluster online, with both your main VM server and your secondary VM host running, you may not want them both up all the time. In my experience, maintaining the cluster, the iSCSI guest, and both machines is not something I look forward to, and they eat up resources. So here is the process I go through before and after a cluster test:


After I am done running clustered labs (assuming the failover cluster is up and running as expected, and I want to return my main VM box to being a standalone server without destroying my cluster configuration):

1. I clone any VMs I moved to High Availability back to local disk on my main VM server if I need them; if not, I shut them down or save them. You cannot migrate a VM from an HA environment to a non-HA environment (you can migrate the other way, though), so the only way to get a VM back from being an HA guest is to clone it onto a non-HA platform.
2. Fail over the cluster to the Secondary VM Host so it has all the cluster roles and HA guests.
3. Evict the main VM Host from the cluster.
4. If SCVMM is in the environment, refresh the cluster object so SCVMM recognizes that the main VM Host is no longer part of the cluster.
5. Save or shut down all HA guests on the Secondary VM Host.
6. Shut down the Secondary VM Host.
7. Save the iSCSI guest on the main VM Host.

And that's all there is to it. Now the cluster is in stasis on the Secondary VM Host, which is powered down; the iSCSI guest isn't using any resources since it is shut down or saved; and the main VM server is working as a standalone server again.

When I want to rejoin the cluster, I reverse the order:

1. Start the iSCSI target guest VM on the main VM Host.
2. Power up the Secondary VM Host.
3. Join the main VM Host back to the cluster. Once the join is complete, the main VM box acts as part of the HA environment again.
Done.



I will do more of a write-up about failover clustering in the next article, but for now I think this gives a good idea of the steps and processes needed to even attempt a clustered Hyper-V lab.

In closing, I figured I would document a few tips, tricks, and lessons learned:


For the basics of running your lab in Hyper-V, here is a bit of a lessons-learned list:


  • Space out your VMs across separate physical spindles. If you are loading up a CCR cluster to test, or a few heavy-I/O machines, having them separated is worth its weight in gold.


  • Use fixed-size disks if you want all-out performance, or dynamic disks if you want to save space.


  • If you are really tight on space, look at using differencing disks. Load one large VHD with the OS you want to use on the other machines, create differencing disks that point at it, connect those differencing disks to new guest machines, and you are rolling. This way each guest only takes up as much room as the data you add or change relative to the source. This is the same idea as the differencing-disk template method mentioned earlier.


  • Keep a list of your IPs, or set up a separate network with a large client subnet so you can keep things spaced out easily.


  • Use resource reserves for the host and the guests if you need to ensure one of them maintains its CPU priority and CPU time allocation when the host is overwhelmed.

