Live Migration IP problems when moving VMs between two separate clusters

Hi,

We have two datacenters with a Hyper-V cluster in each one. In each datacenter we have multiple VLANs for servers, clients, etc. that communicate with each other via an L3 switch. Each datacenter uses its own VLAN IDs for a given network role (for servers: Datacenter_A - VLAN 11; Datacenter_B - VLAN 37). The datacenters are connected through a provider MPLS link, and VMs can ping each other across the different VLANs. We are planning to use SCVMM to manage all VMs and all Hyper-V hosts from a central place. Ultimately, we want to enable Live Migration between the clusters for VMs bound to the server VLANs (11 to 37 and 37 to 11).

We have installed a test VMM, created host groups, and imported a cluster into each group. Then we created the logical network as follows (a rough PowerShell equivalent is sketched below):

Logical Network - Servers (one connected network) with both checkboxes cleared (no network virtualization, no automatic VM network creation)

SiteA -> VLAN 11 (subnet 192.168.11.0/24) -> bound to HostGroupA

SiteB -> VLAN 37 (subnet 192.168.37.0/24) -> bound to HostGroupB

For that logical network we created an appropriate IP pool for each site, and created a VM network.
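
In VMM PowerShell terms, that layout is roughly the following. This is only a sketch: the cmdlet names are from the VMM module, while the host group names, pool ranges and gateway addresses are placeholder values rather than our exact configuration.

# Logical network "Servers" as one connected network, no network virtualization
$ln = New-SCLogicalNetwork -Name "Servers" -LogicalNetworkDefinitionIsolation $false -EnableNetworkVirtualization $false

# One site (logical network definition) per datacenter, each bound to its own host group
$hgA = Get-SCVMHostGroup -Name "HostGroupA"
$hgB = Get-SCVMHostGroup -Name "HostGroupB"
$siteA = New-SCLogicalNetworkDefinition -Name "SiteA" -LogicalNetwork $ln -VMHostGroup $hgA -SubnetVLan (New-SCSubnetVLan -Subnet "192.168.11.0/24" -VLanID 11)
$siteB = New-SCLogicalNetworkDefinition -Name "SiteB" -LogicalNetwork $ln -VMHostGroup $hgB -SubnetVLan (New-SCSubnetVLan -Subnet "192.168.37.0/24" -VLanID 37)

# One static IP pool per site (ranges and gateways are placeholders)
New-SCStaticIPAddressPool -Name "Servers_SiteA_Pool" -LogicalNetworkDefinition $siteA -Subnet "192.168.11.0/24" -IPAddressRangeStart "192.168.11.50" -IPAddressRangeEnd "192.168.11.250" -DefaultGateway (New-SCDefaultGateway -IPAddress "192.168.11.1" -Automatic)
New-SCStaticIPAddressPool -Name "Servers_SiteB_Pool" -LogicalNetworkDefinition $siteB -Subnet "192.168.37.0/24" -IPAddressRangeStart "192.168.37.50" -IPAddressRangeEnd "192.168.37.250" -DefaultGateway (New-SCDefaultGateway -IPAddress "192.168.37.1" -Automatic)

# VM network on top of the logical network, no isolation
New-SCVMNetwork -Name "Servers" -LogicalNetwork $ln -IsolationType "NoIsolation"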

We created one logical switch with two uplink port profiles (one for each datacenter), and in each uplink port profile we assigned the appropriate site of the Servers logical network (Upl_DataCent_A -> SiteA; Upl_DataCent_B -> SiteB).

When we try to live migrate a VM from DataCenter_A to DataCenter_B with the above setup, we get the following error on the Select Destination screen: "No connection to Servers with sufficient resources could be found." Question: should SCVMM automatically recognize and assign the VLAN from SiteB, since both sites are included in the same logical network?

 

Then we tried to change the logical network layout to the following:

SiteA -> VLAN 11 (subnet 192.168.11.0/24) -> bound to both host groups

SiteB -> VLAN 37 (subnet 192.168.37.0/24) -> bound to both host groups

We also applied those changes to both uplink port profiles.

After those changes, the Live Migration wizard let us pass the Select Destination screen (DataCenter_A to DataCenter_B), and on the Select Network screen we had to manually select VLAN 37. After that, the live migration completed successfully, but the VM kept its IP address from the SiteA IP pool (192.168.11.0/24) while sitting on the SiteB VLAN (37)...

So the migration completes, but it's still not good enough because we have to change the IP address manually.

If we switch to DHCP and abandon static IP pools, we still have to disconnect and reconnect the VM's adapter (using the Hardware Configuration tab) to actually trigger DHCP address allocation (disabling/enabling the adapter inside the VM wasn't enough).
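
For what it's worth, doing that disconnect/reconnect through VMM PowerShell instead of the console looks roughly like the sketch below. The VM name and the target VM network name are placeholders, and depending on the site a -VLanEnabled/-VLanID setting may also be needed.

# Get the VM and its (first) virtual network adapter
$vm   = Get-SCVirtualMachine -Name "TestVM01"
$vnic = Get-SCVirtualNetworkAdapter -VM $vm

# Disconnect the adapter, then reconnect it with dynamic (DHCP) addressing
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $vnic -NoConnection
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $vnic -VMNetwork (Get-SCVMNetwork -Name "Servers") -IPv4AddressType Dynamic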

Since we have run out of options, can someone help us reach the right design and configuration for moving VMs between clusters?

July 21st, 2015 12:30pm

Hi Sir,

>> Then we tried to change the logical network layout to the following:
>> SiteA -> VLAN 11 (subnet 192.168.11.0/24) -> bound to both host groups
>> SiteB -> VLAN 37 (subnet 192.168.37.0/24) -> bound to both host groups

 

Based on my understanding, the VLAN settings need to be bound to the hosts that actually carry the corresponding VLAN.

 

>> but it's still not good enough because we have to change the IP address manually. If we switch to DHCP and abandon static IP pools, we still have to disconnect and reconnect the VM's adapter (using the Hardware Configuration tab) to actually trigger DHCP address allocation (disabling/enabling the adapter inside the VM wasn't enough).

As you know, live migration is meant to keep the VM online with its current settings.

For live migration between standalone hosts, if you want to change the IP of the VM, that means changing the VM's settings.

Best Regards,

Elton Ji

July 26th, 2015 10:56pm

Hi Elton,

I agree with what you said, that the VLAN should be bound to the hosts that carry that VLAN, and it is true that LM is meant to keep the VM online with its current settings. But how should we perform live migration of VMs between two clusters in separate locations?

Apparently LM between clusters is supported when using VMM, but how should we set up networking for that scenario?

https://technet.microsoft.com/en-us/library/jj860434.aspx#BKMK_Cluster

Or, a better question: is it possible without using network virtualization in VMM?

Thank you,

Mitro

July 28th, 2015 12:28pm

Hi Sir,

>> Is it possible without using network virtualization in VMM?

Yes.

Before the live migration, you may need to ensure there is an identical vSwitch in the destination cluster.
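
A quick way to compare what each host exposes would be something like this sketch from the VMM PowerShell console (switch names will of course vary per environment):

# List the virtual switches present on every managed host
Get-SCVMHost | ForEach-Object {
    $switches = (Get-SCVirtualNetwork -VMHost $_ | Select-Object -ExpandProperty Name) -join ", "
    "{0}: {1}" -f $_.Name, $switches
}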

Best Regards,

Elton Ji

August 6th, 2015 2:16am

Hi Elton,

As I stated in the first post, we have one logical switch created through VMM that is applied to all hosts in both clusters. I'm just trying to get the general logic of how to design the network to support this scenario. As explained in the first post, the logic we thought should work is as follows (a rough PowerShell sketch of two of the steps follows the list):

1. Create one logical network with the "one connected network" option (e.g. Servers-LN)

2. Add all VLANs (VLAN 11 and VLAN 37 in our example) as sites in the logical network, and bind each site to the appropriate host group

3. Create two VM networks on top of the logical network (SiteA_Srv and SiteB_Srv), and create an IP pool for each VM network

4. Create two uplink port profiles (one for the cluster in SiteA, the other for the cluster in SiteB)

5. Create one logical switch and connect both uplink port profiles to it

6. Attach the logical switch to all hosts in each cluster

7. Once the logical switch is applied to the hosts, create additional host vNICs for LM, cluster heartbeat, management, and other host traffic

8. Connect server VMs in the SiteA cluster to VM network SiteA_Srv, and server VMs in the SiteB cluster to VM network SiteB_Srv
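
In rough PowerShell terms, steps 4 and 5 look something like the sketch below. The cmdlet names are from the VMM module; the teaming and bandwidth settings shown are placeholders rather than our actual values.

# Step 4: one native uplink port profile per datacenter, each carrying only its own site
$siteA = Get-SCLogicalNetworkDefinition -Name "SiteA"
$siteB = Get-SCLogicalNetworkDefinition -Name "SiteB"
$uplA = New-SCNativeUplinkPortProfile -Name "Upl_DataCent_A" -LogicalNetworkDefinition $siteA -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent"
$uplB = New-SCNativeUplinkPortProfile -Name "Upl_DataCent_B" -LogicalNetworkDefinition $siteB -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent"

# Step 5: one logical switch, with both uplink port profiles attached to it
$ls = New-SCLogicalSwitch -Name "Servers-LS" -MinimumBandwidthMode "Weight"
New-SCUplinkPortProfileSet -Name "Upl_DataCent_A_Set" -LogicalSwitch $ls -NativeUplinkPortProfile $uplA
New-SCUplinkPortProfileSet -Name "Upl_DataCent_B_Set" -LogicalSwitch $ls -NativeUplinkPortProfile $uplB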

Is this logic OK, or did we get it wrong?

One of the references we followed to arrive at this logic is from Hypervrockstar.com: http://www.hypervrockstar.com/qs-deployscvmm_part2/

Thanks

August 13th, 2015 4:42pm
