Error 10698 Virtual machine could not be live migrated to virtual machine host

Hi all,

I am running a failover cluster of:

Hosts:
2 x WS2008 R2 Datacenter

managed by VMM:
VMM 2008 R2

Guest:
1 x Windows 2003 64-bit virtual machine

I have attempted a live migration through VMM 2008 R2 and I'm presented with the following error:

Error (10698)
Virtual machine XXXXX could not be live migrated to virtual machine host xxx-Host01 using this cluster configuration.
(Unspecified error (0x80004005))

What I found when running the cluster validation:

One of the two hosts has an RPC error related to network configuration:

An error occurred while executing the test.
Failed to connect to the service manager on 'xxx-Host02'.
The RPC server is unavailable

However, there are no errors or events on Host02 showing any problems at all.
In fact, the validation report goes on to show the rest of the configuration information for both cluster hosts as OK.

See below:


--------------------------------------------------------------------------------


List BIOS Information
List BIOS information from each node.


xxx-Host01
Gathering BIOS Information for xxx-Host01
Item Value
Name Phoenix ROM BIOS PLUS Version 1.10 1.1.6
Manufacturer Dell Inc.
SMBios Present True
SMBios Version 1.1.6
SMBios Major Version 2
SMBios Minor Version 5
Current Language en|US|iso8859-1
Release Date 3/23/2008 9:00:00 AM
Primary BIOS True


xxx-Host02
Gathering BIOS Information for xxx-Host02
Item Value
Name Phoenix ROM BIOS PLUS Version 1.10 1.1.6
Manufacturer Dell Inc.
SMBios Present True
SMBios Version 1.1.6
SMBios Major Version 2
SMBios Minor Version 5
Current Language en|US|iso8859-1
Release Date 3/23/2008 9:00:00 AM
Primary BIOS True


Back to Summary
Back to Top

--------------------------------------------------------------------------------


List Cluster Core Groups
List information about the available storage group and the core group in the cluster.
Summary
Cluster Name: xxx-Cluster01
Total Groups: 2
Group Status Type
Cluster Group Online Core Cluster
Available Storage Offline Available Storage

Cluster Group
Description:
Status: Online
Current Owner: xxx-Host01
Preferred Owners: None
Failback Policy: No failback policy defined.
Resource Type Status Possible Owners
Cluster Disk 1 Physical Disk Online All Nodes
IP Address: 10.10.0.60 IP Address Online All Nodes
Name: xxx-Cluster01 Network Name Online All Nodes


Available Storage
Description:
Status: Offline
Current Owner: Per-Host02
Preferred Owners: None
Failback Policy: No failback policy defined.
Cluster Shared Volumes
Resource Type Status Possible Owners
Data Cluster Shared Volume Online All Nodes
Snapshots Cluster Shared Volume Online All Nodes
System Cluster Shared Volume Online All Nodes


Back to Summary
Back to Top

--------------------------------------------------------------------------------


List Cluster Network Information
List cluster-specific network settings that are stored in the cluster configuration.
Network: Cluster Network 1
DHCP Enabled: False
Network Role: Internal and client use
Metric: 10000

Prefix Prefix Length
10.10.0.0 20

Network: Cluster Network 2
DHCP Enabled: False
Network Role: Internal use
Metric: 1000

Prefix Prefix Length
10.13.0.0 24

Subnet Delay
CrossSubnetDelay 1000
CrossSubnetThreshold 5
SameSubnetDelay 1000
SameSubnetThreshold 5


Validating that Network Load Balancing is not configured on node xxx-Host01.
Validating that Network Load Balancing is not configured on node xxx-Host02.
An error occurred while executing the test.
Failed to connect to the service manager on 'xxx-Host02'.
The RPC server is unavailable


Back to Summary
Back to Top

If it were an RPC connection issue, then I shouldn't be able to RDP (mstsc) to or browse shares on Host02. Well, I can access them, which makes the report above a bit misleading.

I have also checked the RPC service, and it is started.
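For what it's worth, a couple of hedged command-line checks (run from the other node; host names taken from this thread) can confirm whether RPC-based remote management is actually reachable. The validation wizard talks to the remote Service Control Manager and WMI over RPC, so these exercise the same path rather than just the local RPC service:

```shell
rem Run from xxx-Host01. Both commands go over RPC to xxx-Host02, which is
rem what the validation test does; they should fail with "The RPC server is
rem unavailable" (error 1722) if the same problem is in play.
sc \\xxx-Host02 query clussvc
wmic /node:xxx-Host02 os get caption
```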

If anyone can shed some light or advise me of any other options for troubleshooting this, it would be greatly appreciated.

Kind regards,

Chucky

August 26th, 2009 7:27am

You might want to check the firewall configuration to see if there are any differences between the two hosts.
HTH
Stefano
August 26th, 2009 6:42pm


Hi Stefano,

Thanks for the reply and your suggestion. I do appreciate it.

The firewall is off as the machines are part of the domain. When the machines are in the 'domain network' the firewall is turned off. Group policy setting.

I suspect the issue is related to the network config for Hyper-V and the iSCSI MCS configuration. It may be a bug.

What I did:

Shut down the cluster
Removed the network config from both nodes
Reinstated the network config
Removed the 2nd physical iSCSI connection and the MCS setting in the iSCSI initiator
This leaves me with one Ethernet interface running iSCSI only.

Started the cluster
Ran the validation check
Result: OK

The thing is, I now don't have redundancy in my iSCSI setup.

I will research MCS and MS MPIO some more. Any further information I find on this issue I will document in this forum.
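As a sketch of the MPIO direction mentioned above (not verified on this cluster): WS2008 R2 ships mpclaim.exe for Microsoft MPIO, which could restore iSCSI path redundancy without relying on the initiator's MCS feature:

```shell
rem Claim all iSCSI-attached devices for the in-box Microsoft DSM.
rem The device string below is the standard identifier for iSCSI bus devices;
rem note that -r forces a reboot after claiming.
mpclaim -r -i -d "MSFT2005iSCSIBusType_0x9"

rem Afterwards, list MPIO-managed disks and their load-balance policies:
mpclaim -s -d
```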

Thanks,

Chucky
August 27th, 2009 9:47am


Make sure "Client for Microsoft Networks" is enabled for the network adapter the cluster is using for cluster communication. This will be the adapter in the cluster manager that is marked "Internal".
November 17th, 2009 1:41am

I ran into this same issue, and my resolution was a simple one: check the networks within the cluster configuration, make sure that any extra NICs are disabled, and that you have the rest set up correctly:
Heartbeat: Allow cluster network communication on this network
Live Migration: Allow cluster network communication on this network
Public Network: Allow cluster network communication on this network and Allow clients to connect through this network

This message was basically telling me that I didn't have a path to send the live migration over, because the one it was trying was a network I hadn't set up yet (I was trying to work out the bugs for a NIC teaming test).
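The three roles listed above correspond to the cluster network Role property, so a hedged PowerShell sketch for reviewing them (the FailoverClusters module ships with WS2008 R2) might look like:

```shell
# Role 0 = not allowed for cluster use (e.g. iSCSI),
# Role 1 = cluster communication only (heartbeat / live migration),
# Role 3 = cluster and client use (public network).
Import-Module FailoverClusters
Get-ClusterNetwork | Format-Table Name, Role, Address, Metric -AutoSize
```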
November 20th, 2009 7:35pm

Hi,
Looking at the information you provided, I found:
Failed to connect to the service manager on 'xxx-Host02'.
The RPC server is unavailable
This has nothing to do with being able to Remote Desktop or browse shares on the Host02 server.
Is the WMI service running on the host?
You mentioned a policy has been applied to disable the firewall; is that policy (or another) also configuring who can access the host from the network?
Check whether the two hosts are running the same services.
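One hedged way to compare the services on the two hosts (host names from this thread; run from a machine that can reach both):

```shell
rem Dump every service and its state from each node over RPC, then diff:
sc \\xxx-Host01 query type= service state= all > host01-services.txt
sc \\xxx-Host02 query type= service state= all > host02-services.txt
fc host01-services.txt host02-services.txt
```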
November 25th, 2009 5:03am

This happened to me after I disabled NetBIOS on the iSCSI NIC per performance recommendations. In the failover cluster configuration, I changed the iSCSI network from "Internal" to "Disabled", which resolved the issue. However, the live migration network was already set to a different network where NetBIOS was enabled (and this network was unchecked in the Live Migration settings tab).
April 1st, 2010 10:11pm

I had this come up after renaming my Live Migration network in Failover Cluster Manager. To fix it:

  1. Open the Failover Cluster Manager
  2. Navigate to the Cluster, expand "Services and applications" and select any Virtual Machine
  3. In the middle pane of the MMC, right click the Virtual Machine and select Properties
  4. Select the "Network for Live Migration" tab
  5. Temporarily check the box to select any other network and click OK
  6. Open the VM Properties and select the LiveMig tab again
  7. Now re-select the Network you actually want to use for Live Migration, click OK

After that I was able to perform Live Migration from the Failover Cluster Manager AND in SCVMM.

April 15th, 2011 5:13pm

Great, this fixed my problem. I had changed the network names...
April 29th, 2011 3:32pm

Man, it worked perfectly!! Thanks a lot.

May 24th, 2011 9:11pm

I followed Jeff Graves' instructions and disabled cluster communication settings for the two networks that are iSCSI. The wording may have changed between versions: when you get to the network's properties page, your two options are "Allow cluster network communication on this network" and "Do not allow...". Switching both (I have two iSCSI networks) to "Do not allow" fixed the issue, and I can live migrate VMs between cluster nodes.
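A PowerShell sketch of the same change (the network names here are hypothetical; substitute your own iSCSI network names):

```shell
# Role 0 corresponds to "Do not allow cluster network communication
# on this network" in the Failover Cluster Manager GUI.
Import-Module FailoverClusters
(Get-ClusterNetwork "iSCSI Network 1").Role = 0
(Get-ClusterNetwork "iSCSI Network 2").Role = 0
Get-ClusterNetwork | Format-Table Name, Role -AutoSize   # verify the result
```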
June 2nd, 2011 5:04pm

MrShannon: Thanks!

 

July 21st, 2011 3:54pm

MrShannon's solution worked for me. Thank you!
August 10th, 2011 11:44pm

MrShannon's solution worked for me too. I noticed that I was able to Quick Migrate with no problem; however, Live Migration was a problem. Same problem via SCVMM and Cluster Manager. In my case I had renamed the network via Cluster Manager with a friendly name. After applying the change to one VM, all other subsequent VMs seemed to work without needing to be updated too. Many thanks. Jay
October 5th, 2011 7:00pm


I just ran into this with a 4-host cluster. I could move a VM between any of them, including onto Host4, but got the error trying to move *from* Host4. I also noticed it did not show the screenshot of the VM at the bottom of the page where it shows the status, CPU usage, etc. in SCVMM.

 

In my case Host4 did not have any DNS servers entered. The hosts are running Win2008 Hyper-V edition, and for some reason either the setting was never entered or it was lost. If I did an NSLOOKUP, it pulled up the IPv6 address. Once I entered my DNS servers, NSLOOKUP resolved fine with the IPv4 address and I was able to move the VM off Host4.
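Some quick checks for this symptom (the host name is a placeholder):

```shell
rem Confirm name resolution returns the IPv4 address of the problem host:
nslookup xxx-Host04

rem List the DNS servers configured on each interface of the local host:
netsh interface ipv4 show dnsservers
```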

Easy fix, I hope this helps somebody...

 

-Joel

January 30th, 2012 5:05am

For those who also see random DNS errors in the event log (which go along with the "cannot migrate" message in the original post, while everything else works fine, including Quick Migrate):

Open Failover Cluster Manager and look at your cluster name resource under "Cluster Core Resources". It will probably show Online. Double-click it, and if "DNS status" is not "OK", right-click and take the resource offline. Then right-click, choose More Actions, and run a "Repair". Put the resource back online and you can live migrate again.
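The offline/online part of this cycle can also be driven from PowerShell; the "Repair" action itself is a Failover Cluster Manager GUI action, so this is only a partial sketch:

```shell
Import-Module FailoverClusters
Stop-ClusterResource "Cluster Name"    # take the cluster name resource offline
# ...run the "Repair" action from Failover Cluster Manager here...
Start-ClusterResource "Cluster Name"   # bring it back online
Get-ClusterResource "Cluster Name" | Get-ClusterParameter   # inspect its parameters
```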

December 11th, 2012 9:57pm

Awesome! This did the trick for me. :) Damn DNS... BTW, I am running a 2012 cluster, but the DNS fix did the trick for my Live Migrations.
June 22nd, 2013 8:21pm

Hi,

I had the same problem due to a wrong antivirus configuration.

You can examine http://support.microsoft.com/kb/961804/en-us for the solution.
November 1st, 2013 1:10am

Hi,

I had a similar problem: when I tried a live migration, I got the error "The RPC server is unavailable".

The problem was in networking. I have two adapters, one for clients and one for the Hyper-V heartbeat.

  • In the left pane of the Failover Cluster Manager MMC, expand Networks, right-click one of the networks, and select Properties.
  • If the network you selected is for virtual machines, choose the option "Do not allow cluster communication...".
  • On the network you use for cluster communication (migration, heartbeat, etc.), select "Allow cluster network communication..." BUT without the checkbox "Allow clients to connect...".
  • Check that you can access both Hyper-V servers through the LAN (use ping); if not, check the IP configuration (only one adapter must have a gateway - typically the domain network).
  • Check that DNS records point to the right IP address, and only to the domain network.

This solved my problem.
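Hedged spot-checks for the last two bullets (host names are placeholders):

```shell
rem LAN reachability between the Hyper-V hosts:
ping xxx-Host02

rem Only the domain-facing NIC should contribute a default route:
route print | findstr 0.0.0.0

rem Should resolve only to the domain-network address:
nslookup xxx-Host02
```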

January 16th, 2014 11:38am

This topic is archived. No further replies will be accepted.
