I think the confusion is over the use of the term "IP Address" when, in exact MS Failover Cluster terms, it really just means "Server Name". Only the logical clustered server name is the fixed part; the address could even be assigned via DHCP.
For example, in my configuration I have two SQL Server clustered instances on two possible cluster nodes, so both can be active to distribute load during normal operation, and one node takes over all services during a failure. As suggested in the preferred configuration, everything they need (including MSDTC) is housed within their own clustered service/application "group".
The configuration looks like this in Failover Cluster Manager:
Services and Applications
> SQLCluster1 : Preferred Owners: SQLNode1
> > Server Name : SQLCluster1
> > MSDTC : Network Name (SQLCluster1)
> > Disk Drives : SQL Cluster 1 Data : Volume (S:)
> > Other Resources : SQL Server (Cluster1), SQL Server Agent (Cluster1)
> SQLCluster2 : Preferred Owners: SQLNode2
> > Server Name : SQLCluster2
> > MSDTC : Network Name (SQLCluster2)
> > Disk Drives : SQL Cluster 2 Data : Volume (R:)
> > Other Resources : SQL Server (Cluster2), SQL Server Agent (Cluster2)
Notes:
- MSDTC exists within the clustered service/application (SQL) group, so it does not require a separate disk, network name or quorum access.
- Optionally, for performance, you might want to add separate disks dedicated to the MSDTC, SQL log and temp data files. This only makes sense if the physical disks at the end of the line (including iSCSI targets) are actually separate; otherwise it's just more hassle to configure with no benefit.
- Each SQL instance must have a unique SQL instance name (in case they end up running on the same node, e.g. during a failure). My instance names were Cluster1 and Cluster2, i.e. SQL Server addresses "SQLCluster1\Cluster1" and "SQLCluster2\Cluster2".
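To make the naming scheme concrete, the full client address is just the clustered network name plus the instance name. Here's a small sketch using the names from my configuration above (the helper function is mine, not part of any SQL Server tooling):

```python
# Sketch of how clustered SQL Server addresses are composed.
# Network names and instance names are from the example configuration above.
instances = {
    "SQLCluster1": "Cluster1",  # clustered network name -> SQL instance name
    "SQLCluster2": "Cluster2",
}

def instance_address(network_name, instance_name):
    """Full client address in the NetworkName\\InstanceName form."""
    return f"{network_name}\\{instance_name}"

# Instance names must be unique so both instances can coexist on one
# physical node after a failover.
assert len(set(instances.values())) == len(instances)

addresses = [instance_address(n, i) for n, i in instances.items()]
print(addresses)  # ['SQLCluster1\\Cluster1', 'SQLCluster2\\Cluster2']
```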
There's one other important point to keep in mind for anyone setting up their first SQL cluster. When you access the SQL Server from another machine over the network, you must omit the instance name. But only remotely, not when connecting locally on the cluster node, e.g. when running the management tools.
It's as if the clustered SQL instance becomes the default instance, even though it has its own name, different from the typical MSSQLSERVER default instance name. I guess this has something to do with the possibility that multiple "default" instances could end up on the same physical server. Still, it's very confusing when you try to connect and get errors that the SQL server was not found, e.g. you must connect remotely as "SQLCluster1" but locally as "SQLCluster1\Cluster1".