How to create a simulated cluster in Hyper-V
If you want to build a cluster lab inside Hyper-V, you can follow the steps described in this article: http://blogs.technet.com/pfe-ireland/archive/2008/05/16/how-to-create-a-windows-server-2008-cluster-within-hyper-v-using-simulated-iscsi-storage.aspx
How to create a Windows Server 2008 Cluster within Hyper-V using simulated iSCSI storage
Familiar with Virtual Server 2005 and shared disks for creating virtual clusters? Well, it's different with Hyper-V. The shared disk option is no longer available (which I did not know when I started testing); you have to use iSCSI instead. Here is a step-by-step method for creating a fail-over cluster within Hyper-V. It's a cheap way of setting up a test lab (assuming you don't have access to Windows Storage Server). In this post I use StarWind to simulate iSCSI storage … it's not an endorsement of the product, I just picked it from amongst the crowd.
Windows Server 2008 fail-over clusters support Serial Attached SCSI (SAS), iSCSI and Fibre Channel disks as storage options. So, how would you go about setting up a virtual Windows Server 2008 test cluster using the new Hyper-V virtualisation product? The method I am about to outline is a little different from what you might be used to with Virtual Server 2005. The following steps detail how I managed to set up a test cluster using simulated iSCSI storage. Before beginning, it's worth reviewing this article that outlines the storage options available to Hyper-V. By the end of this post you should have a simple two-node cluster up and running using simulated iSCSI storage.
Tools for the job:
- A Windows Server 2008 x64 server with the Hyper-V role enabled (I used a Dell Precision 390)
- One Windows Server 2008 VM to act as a Domain Controller (Clusters must be part of a domain)
- Two Windows Server 2008 VMs to act as Cluster Nodes
- One Windows Server 2003 SP2 VM (or you could use Windows Server 2008 in a Core install to maximise VM performance)
- iSCSI Target Software: I used Rocket Division's StarWind product, which is available as a 30-day eval and is reasonably priced
- iSCSI Initiator software (built into Windows Server 2008)
I won't go into how to create a VM, but you can find more info on the Virtual Guys weblog.
Before I began looking into the simulated iSCSI storage option for my cluster nodes, I tried to expose a single VHD to each of my cluster nodes in the hope that they would share it. I didn't get very far: powering on the second VM failed with a file-sharing error.
This error is by design (thanks to Justin Zarb for pointing this out), as Windows Server 2008 Hyper-V does not support this sort of shared storage (see the link above for Hyper-V storage options). The error is simply a file system error because the VHD "is being used by another process" … I should have spotted that.
SETTING UP THE LAB
Note: I’m assuming that you know how to install Windows Server 2003 and 2008. I’m also assuming that you know how to install and configure a Window Server 2008 Domain Controller. If you have any questions leave me a comment and I will see if I can point you in the right direction.
VIRTUAL NETWORK
Create the virtual network with a connection type of "Internal Only". I enabled Virtual LAN identification and set the default ID to 2, as this will be my public LAN. Setting the default to 2 means that if I don't specify a VLAN on subsequent NICs they will be classified as public connections.
VLAN ids:
- VLAN 2: Public 10.1.1.x/24
- VLAN 3: Heartbeat 192.168.1.x/24
- VLAN 4: iSCSI 192.168.2.x/24
SERVER SETUP
Tip: Be sure to rename each network card on the hosts to make identification easier. If it's the public NIC, call it Public, and so on.
Domain Controller: dc01
- Windows Server 2008 x86
- 1 x VHD IDE fixed-size disk, 10 GB
- 1 x NIC connected to my Virtual Network in VLAN 2
Network settings:
- IP Addr: 10.1.1.10
- Mask: 255.255.255.0
- Gateway: I didn’t bother setting one
- DNS: 10.1.1.10
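The static addressing above can also be applied from an elevated command prompt with netsh instead of clicking through the NIC properties dialogs. A rough sketch, assuming the connection has been renamed "Public" as per the tip above (adjust the name and addresses per machine):

```shell
rem Set a static IP on the DC's public NIC (no gateway, per the lab design).
netsh interface ipv4 set address name="Public" static 10.1.1.10 255.255.255.0

rem Point DNS at the DC itself.
netsh interface ipv4 set dnsservers name="Public" source=static address=10.1.1.10
```

The same two commands, with the appropriate addresses, cover the cluster nodes and the iSCSI target VM as well.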
Cluster Nodes:
- Windows Server 2008 x86
- 1 x VHD IDE fixed size disk 10GB
- 3 x NICs connected to my Virtual Network in the following VLANs
- Public card: VLAN 2
- Heartbeat card: VLAN 3
- iSCSI card: VLAN 4
Node01
Public NIC: VLAN 2
- IP Addr: 10.1.1.20
- Mask: 255.255.255.0
- Gateway: I didn’t bother setting one
- DNS: 10.1.1.10
Heartbeat NIC: VLAN 3
- IP Addr: 192.168.1.4
- Mask: 255.255.255.0
iSCSI NIC: VLAN 4
- IP Addr: 192.168.2.4
- Mask: 255.255.255.0
Note: On all NICs in VLAN 3/4 be sure to disable the Client for Microsoft Networks, disable DNS registration and disable NetBIOS. Be sure to check your binding order too. The public NIC should be first.
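Two of those per-NIC tweaks (NetBIOS and dynamic DNS registration) can be scripted with wmic rather than clicked through on every adapter. A rough sketch — the Index value 2 below is only an example, so list your adapters first; note that unbinding the Client for Microsoft Networks is still easiest to do in the adapter properties GUI:

```shell
rem List adapter configurations to find the Index of the heartbeat/iSCSI NICs.
wmic nicconfig get Index,Description,IPAddress

rem Disable NetBIOS over TCP/IP (2 = disable) on the chosen NIC (example Index).
wmic nicconfig where Index=2 call SetTcpipNetbios 2

rem Disable dynamic DNS registration on the same NIC.
wmic nicconfig where Index=2 call SetDynamicDNSRegistration FALSE
```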
Node02
Public NIC: VLAN 2
- IP Addr: 10.1.1.21
- Mask: 255.255.255.0
- Gateway: I didn’t bother setting one
- DNS: 10.1.1.10
Heartbeat NIC: VLAN 3
- IP Addr: 192.168.1.5
- Mask: 255.255.255.0
iSCSI NIC: VLAN 4
- IP Addr: 192.168.2.5
- Mask: 255.255.255.0
Note: On all NICs in VLAN 3/4 be sure to disable the Client for Microsoft Networks, disable DNS registration and disable NetBIOS. Be sure to check your binding order too.
iSCSI Target
- Windows Server 2003 SP2 x86 (see here for notes on W2K3 hosts in Hyper-V)
- 1 x VHD IDE fixed-size disk, 10 GB
- 2 x VHD SCSI fixed-size disks (1 GB and 10 GB) for the cluster disks
- StarWind iSCSI Target Software
- 2 x NICs connected to my Virtual Network in the following VLANs:
- Public : VLAN 2
- iSCSI : VLAN 4
Public NIC: VLAN 2
- IP Addr: 10.1.1.22
- Mask: 255.255.255.0
- Gateway: I didn’t bother setting one
- DNS: 10.1.1.10
iSCSI NIC: VLAN 4
- IP Addr: 192.168.2.2
- Mask: 255.255.255.0
Note: On all NICs in VLAN 3/4 be sure to disable the Client for Microsoft Networks, disable DNS registration and disable NetBIOS. Be sure to check your binding order too. Make sure you format and assign drive letters to the SCSI VHDs on this VM.
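Partitioning and lettering the two SCSI VHDs on the target VM can be scripted with diskpart. A sketch, assuming the 1 GB and 10 GB VHDs appear as disks 1 and 2 (confirm with "list disk" first) and that Q: and R: are free:

```shell
rem prepare.txt - run on the Windows Server 2003 target VM with: diskpart /s prepare.txt
rem Assumes the SCSI VHDs enumerate as disks 1 and 2.
select disk 1
create partition primary
assign letter=Q
select disk 2
create partition primary
assign letter=R
```

Server 2003's diskpart cannot format, so format each volume afterwards from a command prompt, e.g. `format Q: /fs:ntfs /q /y`.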
Setting up the Cluster
Configuring the iSCSI target software (Starwind)
- Install the StarWind software on your iSCSI target VM.
- Launch the StarWind management console.
- Under Connections you should see localhost:3260. Right-click localhost and select Connect. If I remember correctly, the first username and password you enter becomes the default (which you can change later).
- Right-click localhost:3260 and select Add Device
- Select Disk Bridge Device as the device type and click Next
- Select the first SCSI disk from the list (more than likely \\.\PhysicalDisk1).
- Select Asynchronous Mode, tick Allow multiple iSCSI connections (clustering) and click Next
- Give the disk a friendly name
- Repeat the steps to add the second disk
Adding disks to the cluster nodes
Each cluster node now needs to be connected to the iSCSI target. Launch the built-in iSCSI Initiator and follow the steps below:
- If prompted to unblock the Microsoft iSCSI service, always click Yes; otherwise port 3260 will be blocked by the firewall.
- Click on the Discovery tab and select Add Portal.
- Enter the IP address for the iSCSI target [192.168.2.2]
- Click the Targets tab and you should now see a list of the disks available on the target
- For each disk in the list click Log on and select Automatically restore this connection
- Click on the Volumes and Devices tab and select AutoConfigure. Your disks should now appear as devices.
- Reboot each cluster node as you add the disks.
- Disks will be offline when you reboot. Ensure that you bring them online in Disk Management.
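The same initiator steps can be driven from the command line with iscsicli, which ships with Windows Server 2008. A sketch — the IQN below is a placeholder; copy the real one from the ListTargets output:

```shell
rem Register the target portal (uses the default port, 3260).
iscsicli QAddTargetPortal 192.168.2.2

rem List the targets the portal exposes and note each disk's IQN.
iscsicli ListTargets

rem Quick-login to a target (the IQN here is a made-up example - use your own).
iscsicli QLoginTarget iqn.2008-08.com.example:quorum
```

QLoginTarget gives a one-off session; making the connection persistent across reboots is simpler via the "Automatically restore this connection" checkbox in the GUI, as described above.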
When completed (and both hosts are connected) you should see the active initiator sessions in the StarWind console on the iSCSI target VM.
Installing the Cluster
The new fail-over cluster wizard is quite straightforward and much easier to follow than its Windows Server 2003 counterpart. There isn't much point in going into too much detail … you'll find plenty of info on the web.
Here is a step-by-step guide to installing a two-node file cluster in Windows Server 2008.
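If you prefer the command line to the wizard, Windows Server 2008 still ships cluster.exe. A sketch of creating the two-node cluster — the cluster name and the spare public address below are example values, not anything from this lab's build so far:

```shell
rem Create a two-node fail-over cluster (name and IP are example values).
cluster /cluster:hvclus /create /nodes:"node01 node02" /ipaddr:10.1.1.30/255.255.255.0
```

Run the wizard's validation step (or review the cluster log) afterwards to confirm the iSCSI disks were picked up as eligible cluster storage.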