Hi guys,
I have a little question about virtualisation and a Hyper-V scenario for an HA cluster.
In the past I have already set up an HA cluster for Hyper-V. It had the following set-up:
- Dell EqualLogic SAN
- 2 Windows Server 2008 R2 servers which were configured in the same cluster
- The 2 servers were directly connected to the SAN through iSCSI, and the volume was configured as a CSV volume.
- The Hyper-V role was installed on both Windows servers
- All of the VHD files were stored directly under C:\ClusterStorage\ (a rough sketch follows below)
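Just to make it concrete, in WS2012 PowerShell terms (the names and addresses below are only placeholders I made up; on 2008 R2 most of this was done through the GUI), that set-up boils down to something like:

New-IscsiTargetPortal -TargetPortalAddress 10.0.0.10          # EqualLogic group IP (placeholder)
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true     # connect the LUN on both hosts
Install-WindowsFeature Hyper-V, Failover-Clustering -IncludeManagementTools
New-Cluster -Name HVCLUSTER -Node HV01, HV02                  # run once, from one node
Add-ClusterSharedVolume -Name "Cluster Disk 1"                # the LUN shows up as C:\ClusterStorage\Volume1
New-VM -Name VM01 -MemoryStartupBytes 2GB -NewVHDPath C:\ClusterStorage\Volume1\VM01.vhdx -NewVHDSizeBytes 60GB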
At the moment I am studying for my MCSE certification and I just learned about HA in the CBT Nuggets series.
It looks like they implemented a different scenario:
- A Windows Server set up with Storage Spaces (just to replace the SAN, I guess; same performance?!)
- 2 Windows Server 2012 servers which were configured in the same cluster
- The 2 servers were directly connected to the storage server through iSCSI, and the volume was configured as a CSV volume.
- This CSV volume is then used by a Scale-Out File Server role to make an HA file server
- The VHD files are then placed on this SMB 3.0 file share (sketched below)
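Again just as a rough, hypothetical sketch (server and share names are made up): on the file server cluster the SOFS role and the share are added on top of the CSV, and the Hyper-V nodes then point their VHDX files at the UNC path instead of a local CSV:

Add-ClusterScaleOutFileServerRole -Name SOFS01                # active/active SMB front end on top of the CSV
New-Item -ItemType Directory -Path C:\ClusterStorage\Volume1\Shares\VMs
New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\Shares\VMs -ContinuouslyAvailable $true -FullAccess 'DOMAIN\HV01$', 'DOMAIN\HV02$'
# ...and on the Hyper-V nodes:
New-VM -Name VM01 -MemoryStartupBytes 2GB -NewVHDPath \\SOFS01\VMs\VM01.vhdx -NewVHDSizeBytes 60GB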
It is a nice scenario, but why go through all the trouble of configuring a SOFS and accessing these VHD files through SMB?
I know SMB 3.0 has some greatly improved features, but is it really better than directly accessing the CSV, which is connected through iSCSI to the SAN/WS2012 iSCSI target?
Would this set-up be any different depending on whether a SAN or a WS2012 server is used as storage?
In the CBT Nuggets videos they talk about HA from end to end, but to be honest I don't see more HA when using this SOFS. When the storage/SAN is down, the whole set-up won't work anyway?!
Thanks for all the suggestions about this set-up. I just want to incorporate all the new features that WS2012 offers, but at the moment I only see a downside to this feature, since it adds yet another system to the set-up, which (I guess) will reduce performance?
Thanks again!!
Kind regards,
Sven Celis
1) Why should you bother to configure SoFS and go with SMB? That makes sense only if a) you're building a setup from scratch and don't have a SAN, so you can indeed let Microsoft eat some of EMC's lunch, and b) you have more than 2 Hyper-V servers, so implementing switched-fabric SAS with Clustered Storage Spaces would require you to go with a pair of SAS switches, which is both damn expensive (8 Gb FC and 10 GbE are ten times cheaper than 6 Gbps SAS and much more common) and does not really work. For a simple two-node Hyper-V config you absolutely don't need the SoFS layer, as it just increases complexity (+2 servers, more switches and cabling, + man-hours for config) and implementation cost (+2 servers, switches, NICs, Windows licenses) and slows everything down (obviously feeding SAS directly to the hypervisor is faster than feeding the same SAS with Ethernet injected in the middle of the I/O pipe).
2) If you want to run a contest between Microsoft's built-in storage tools, then the whole thing would go like this:
a) Loser: MS iSCSI target. Extremely dumb implementation (no RAM cache, cannot keep VHDX on Storage Spaces so no flash cache with R2, cannot scale beyond a single node so multi-node setups are Active/Passive, no built-in clustering features so you need to run a standard Windows cluster to make it fault tolerant).
b) Winner: SMB 3.0 share. Solid implementation with some drawbacks (SMB is cached on both the client and server sides in a very efficient and aggressive way, can keep files on Storage Spaces with flash cache, but cannot scale beyond a single node so again Active/Passive, and has no built-in clustering features so it again needs to run in a Windows cluster for fault tolerance).
c) Winner, but with remarks: SoFS. A nice implementation that is actually wrapped around an SMB share (cached well with RAM and flash, scales beyond a single node with SMB Multichannel, and with R2 should catch up with iSCSI and FC, at least on paper, as it should support one-client-to-many-servers load distribution; has built-in fault tolerance features layered on top of the underlying Windows cluster). The only issue with SoFS is that it does not support generic workloads (so you cannot run a general-purpose file server on it), as it is designed for big-file I/O only. That's why SoFS is not going to replace the failover SMB share...
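To put that last remark in concrete terms (role and disk names here are hypothetical), the two cluster roles are created separately, and only the classic one is meant for generic user shares:

Add-ClusterFileServerRole -Name FS01 -Storage "Cluster Disk 2" -StaticAddress 10.0.0.21   # classic active/passive file server, fine for generic shares
Add-ClusterScaleOutFileServerRole -Name SOFS01                                            # active/active, intended for application data (Hyper-V, SQL) only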
That's it for Microsoft. You need to understand, however, that any decent commercial iSCSI target (Windows- or FreeBSD/Solaris/Linux-based and ZFS-powered) would run circles around a SoFS config in terms of performance (with multiple nodes, of course). And any decent 8 Gb FC would kill 10 GbE iSCSI or SMB Direct (RDMA) SMB 3.0 in terms of latency (not even talking about 16 Gb FC).
3) Why that setup? They wanted to show how SoFS works, so they built a test lab and demo stand. From a production or performance standpoint it makes zero sense: slow, expensive, and with the block back end as a single point of failure. So you're absolutely right in your doubts :)
So see the beginning of my post again for where SoFS does make sense.
Hope this helped :)