Hi,
We have two datacenters, each with its own Hyper-V cluster. In each datacenter we have multiple VLANs for servers, clients, etc. that can communicate with each other through an L3 switch. Each datacenter uses separate VLAN IDs for the same type of network (for servers: Datacenter_A - VLAN 11; Datacenter_B - VLAN 37). The datacenters are connected over a provider MPLS link, and VMs can ping each other across the different VLANs. We are planning to use SCVMM to manage all VMs and all Hyper-V hosts from a central place. Ultimately, we want to enable live migration between the clusters for VMs bound to the server VLANs (11 to 37 and 37 to 11).
We have installed a test VMM, created host groups, and imported a cluster into each group. Then we created a logical network in the following way:
Logical Network - Servers (One Connected Network) with both checkboxes disabled (no network virtualization, no automatic VM network creation)
SiteA -> VLAN11 (subnet 192.168.11.0/24) -> bound to HostGroupA
SiteB -> VLAN37 (subnet 192.168.37.0/24) -> bound to HostGroupB
For that logical network we created appropriate IP pools for each site, and created a VM network.
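For reference, the layout above corresponds roughly to the following VMM PowerShell (a sketch, not our exact script; the pool address ranges shown here are placeholders, and host group / site names are as described above):

```powershell
# One connected network, no network virtualization, no auto-created VM network
$ln = New-SCLogicalNetwork -Name "Servers"

$hgA = Get-SCVMHostGroup -Name "HostGroupA"
$hgB = Get-SCVMHostGroup -Name "HostGroupB"

# SiteA: VLAN 11 / 192.168.11.0/24, scoped to HostGroupA
$siteA = New-SCLogicalNetworkDefinition -Name "SiteA" -LogicalNetwork $ln `
    -VMHostGroup $hgA `
    -SubnetVLan (New-SCSubnetVLan -Subnet "192.168.11.0/24" -VLanID 11)

# SiteB: VLAN 37 / 192.168.37.0/24, scoped to HostGroupB
$siteB = New-SCLogicalNetworkDefinition -Name "SiteB" -LogicalNetwork $ln `
    -VMHostGroup $hgB `
    -SubnetVLan (New-SCSubnetVLan -Subnet "192.168.37.0/24" -VLanID 37)

# One static IP pool per site (example ranges, adjust to your addressing plan)
New-SCStaticIPAddressPool -Name "Pool_SiteA" -LogicalNetworkDefinition $siteA `
    -Subnet "192.168.11.0/24" -IPAddressRangeStart "192.168.11.10" -IPAddressRangeEnd "192.168.11.200"
New-SCStaticIPAddressPool -Name "Pool_SiteB" -LogicalNetworkDefinition $siteB `
    -Subnet "192.168.37.0/24" -IPAddressRangeStart "192.168.37.10" -IPAddressRangeEnd "192.168.37.200"

# A single VM network on top of the connected logical network
New-SCVMNetwork -Name "Servers" -LogicalNetwork $ln -IsolationType "NoIsolation"
```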
We created one logical switch with two uplink port profiles (one for each datacenter), and in each uplink port profile we assigned the appropriate site of the Servers logical network (Upl_DataCent_A -> SiteA; Upl_DataCent_B -> SiteB).
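The switch/uplink part of the setup, again as a rough PowerShell sketch (the LBFO teaming settings are assumptions, since we built this through the console):

```powershell
$siteA = Get-SCLogicalNetworkDefinition -Name "SiteA"
$siteB = Get-SCLogicalNetworkDefinition -Name "SiteB"

# One uplink port profile per datacenter, each carrying only its own site
$uplA = New-SCNativeUplinkPortProfile -Name "Upl_DataCent_A" -LogicalNetworkDefinition $siteA `
    -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent"
$uplB = New-SCNativeUplinkPortProfile -Name "Upl_DataCent_B" -LogicalNetworkDefinition $siteB `
    -LBFOLoadBalancingAlgorithm "HostDefault" -LBFOTeamMode "SwitchIndependent"

# One logical switch, with both uplink profiles attached as uplink profile sets
$ls = New-SCLogicalSwitch -Name "LS_Servers"
New-SCUplinkPortProfileSet -Name "Upl_DataCent_A_Set" -LogicalSwitch $ls -NativeUplinkPortProfile $uplA
New-SCUplinkPortProfileSet -Name "Upl_DataCent_B_Set" -LogicalSwitch $ls -NativeUplinkPortProfile $uplB
```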
When we try to live migrate a VM from DataCenter_A to DataCenter_B with the above settings, we get the following error on the Select destination screen:
"No connection to 'Servers' with sufficient resources could be found."
Question - Shouldn't SCVMM automatically recognize and assign the VLAN from SiteB, since both sites are included in the same logical network?
Then we tried to change the logical network layout to the following:
SiteA -> VLAN11 (subnet 192.168.11.0/24) -> bound to Both HostGroups
SiteB -> VLAN37 (subnet 192.168.37.0/24) -> bound to Both HostGroups
We also applied those changes to both uplink profiles.
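If it helps, the rebinding we did should be equivalent to something like this (assuming `Set-SCLogicalNetworkDefinition` / `Set-SCNativeUplinkPortProfile` behave as we understand them; we actually made the change in the console):

```powershell
$hgA = Get-SCVMHostGroup -Name "HostGroupA"
$hgB = Get-SCVMHostGroup -Name "HostGroupB"

# Scope both site definitions to both host groups
foreach ($siteName in "SiteA", "SiteB") {
    $def = Get-SCLogicalNetworkDefinition -Name $siteName
    Set-SCLogicalNetworkDefinition -LogicalNetworkDefinition $def -VMHostGroup $hgA, $hgB
}

# Add the "other" site to each uplink port profile as well
Set-SCNativeUplinkPortProfile `
    -NativeUplinkPortProfile (Get-SCNativeUplinkPortProfile -Name "Upl_DataCent_A") `
    -AddLogicalNetworkDefinition (Get-SCLogicalNetworkDefinition -Name "SiteB")
Set-SCNativeUplinkPortProfile `
    -NativeUplinkPortProfile (Get-SCNativeUplinkPortProfile -Name "Upl_DataCent_B") `
    -AddLogicalNetworkDefinition (Get-SCLogicalNetworkDefinition -Name "SiteA")
```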
After those changes, the Live Migration wizard let us get past the Select destination screen (DataCenter_A to DataCenter_B), but on the Select network screen we had to manually select VLAN 37. After that the live migration completed successfully, yet the VM kept its IP address from the SiteA IP pool (192.168.11.0/24) while sitting on the SiteB VLAN (37)...
So the migration now completes, but it's still no good, because we have to change the IP address manually.
If we try to use DHCP and abandon the static IP pools, we still have to disconnect and reconnect the VM's adapter (using the Hardware Configuration tab) to actually trigger DHCP address allocation (disabling/enabling the adapter inside the VM wasn't enough).
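The manual workaround we perform is the disconnect/reconnect below; this is our understanding of the equivalent cmdlets (the `-NoConnection` switch and `"VM01"` name are assumptions for illustration):

```powershell
# Hypothetical example VM name; replace with the migrated VM
$vm  = Get-SCVirtualMachine -Name "VM01"
$nic = Get-SCVirtualNetworkAdapter -VM $vm

# Disconnect the adapter, then reconnect it to the Servers VM network
# so the guest re-runs DHCP on the new VLAN
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $nic -NoConnection
Set-SCVirtualNetworkAdapter -VirtualNetworkAdapter $nic `
    -VMNetwork (Get-SCVMNetwork -Name "Servers")
```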
Since we have run out of options, can someone help us achieve the right design and configuration for VM movement between the clusters?