From inside a VM, we mount the same disk as a secondary disk and run SQLIO against it; we get about 30,000 IOPS.
There is no QoS for storage defined yet.
Is there any other limit? We increased VM memory to 64 GB and processors to 8, and never went above 30,000.
Hi,
Are the VMs configured to use Dynamic Memory? Implementing QoS for storage is always a good idea; the improvements in storage IOPS are worth it.
Here are a couple of articles:
http://www.aidanfinn.com/?p=15386
and, just because I was reading this before I read your question:
http://blogs.technet.com/b/cotw/archive/2009/03/18/analyzing-storage-performance.aspx
I was finding that measurements of IOPS from physical to virtual "always" favored the physical, when in reality the platform shouldn't make too much difference...
Hope this helps
Cheers
Andrew
Quick question ... we get 100,000 IOPS once we run SQLIO against a VHDX file (let's say disk2.vhdx).

If you run SQLIO from the host just against the VHDX file, then your I/O would be cached. If you run SQLIO from inside a VM, then the "outside" I/O performed by Hyper-V would not be cached.
So, am I getting only 30,000 because of non-cached I/O? Still, it seems very low compared to the 150,000 that I get from the host; it's about 20%.
Hi,
The Hyper-V storage stack also uses unbuffered writes to make sure that the writes from the guest bypass the underlying host filesystem stack. When you use SQLIO, you should create a test file as big as possible, so that you can exercise the entire disk. For hard disks, creating a small file causes the head movement to be restricted to a portion of the disk. Unless you're willing to use only a fraction of the hard disk capacity, those numbers show unrealistically high random I/O performance.
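As a rough sketch of the kind of run described above (the flag values and file path here are illustrative assumptions, not taken from this thread; check sqlio -? on your own box), an unbuffered random-read test against a single large pre-created file could look like:

```shell
:: testfile.dat should be pre-created as large as possible (see above).
:: -kR random reads, -t4 threads, -o32 outstanding I/Os per thread,
:: -b8 8 KB blocks, -BN no buffering (bypass the cache), -s60 run for 60 s
sqlio -kR -frandom -t4 -o32 -b8 -BN -s60 -LS D:\testfile.dat
```

Running the same command with -BY (buffered) instead of -BN should reproduce the inflated, cache-assisted numbers being compared against here.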
Related information:
SQLIO, PowerShell and storage performance: measuring IOPs, throughput and latency for both local disks and SMB file shares
Related KB:
Hyper-V storage: Caching layers and implications for data consistency
http://support.microsoft.com/kb/2801713
Hope this helps.
Starwind

Please run Intel I/O Meter against your storage array directly on the host (4-8 workers, 64+ queue depth, random reads, then random writes for the next run). That would be the "base performance" you could try to match within your VMs.
Starwind, all good points.
I am getting the same results with Iometer; however, I am using a 10 GB fixed-size file across the board and using that as my baseline. The reason is that my disks are SSDs, so file size doesn't matter.
I think I found the culprit: Hyper-V has a built-in I/O load balancer that kicks in under some circumstances. It is controlled under:
HKLM\SYSTEM\CurrentControlSet\Control\StorVSP\IOBalance\Enabled
The registry value does not exist by default, and the balancer is enabled. Once I disable it and reboot, VMs get more IOPS,
but IOPS are not balanced anymore! Here is an Aidan Finn article about it: http://www.aidanfinn.com/?p=13232
The IOBalance\Enabled registry value alone didn't work for me; I had to create the IOBalance key first and then a DWORD named Enabled. Setting it to 0 disabled the balancer and unleashed the IOPS!
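For reference, the key-and-value creation described above can be sketched from an elevated command prompt (key path exactly as given in the thread; a reboot of the host is still required afterwards, and this is a sketch to test on a lab host first, not production guidance):

```shell
:: Create the IOBalance key with an "Enabled" DWORD set to 0 (balancer off)
reg add "HKLM\SYSTEM\CurrentControlSet\Control\StorVSP\IOBalance" /v Enabled /t REG_DWORD /d 0 /f

:: Verify the value, then reboot the host
reg query "HKLM\SYSTEM\CurrentControlSet\Control\StorVSP\IOBalance" /v Enabled
```

Setting the value back to 1 (or deleting it) re-enables the balancer, which matters given the balancing trade-off discussed later in the thread.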
- Proposed as answer by VR38DETT, MVP, Monday, June 09, 2014 2:08 PM
- Marked as answer by Alex Lv, Microsoft contingent staff, Moderator, Monday, July 07, 2014 7:57 AM
Hi,
Just want to confirm the current situation.
Please feel free to let us know if you need further assistance.
Regards
The problem is that it doesn't load balance anymore, so one VM may get super performance while another doesn't get anything.
- Edited by SpaceTime_L Tuesday, June 24, 2014 9:16 AM
- Proposed as answer by Alex Lv, Microsoft contingent staff, Moderator, Monday, July 07, 2014 7:57 AM
Just found this post. Actually, your workaround was the only solution that worked for me. I succeeded in getting over the 30,000 IOPS limitation only by changing this parameter, but the performance is really inconsistent.
Did you find a better solution for this issue?