Slow performance with Storage Pools

We also encounter performance problems with storage pools.

The RC is somewhat faster than the CP version.

Hardware: Intel S1200BT (test) motherboard with an LSI 9200-8e SAS 6Gb/s HBA connected to 12 ST91000640SS disks. Heavy bursting problems.

Using the ARC 1320IX-16 HBA card is somewhat faster and looks more stable (fewer bursts).

Inserting an ARC 1882X RAID card increases speed by a factor of 5-10.

Hence hardware RAID on the same hardware is 5-10 times faster!

We noticed that the Resource Monitor becomes unstable (unresponsive) while testing.

There are no heavy processor loads while testing.

JanV.

June 22nd, 2012 6:17am


Dear nismo,

I am the type of tester that:

  1. Installs using default parameters
  2. Sits and tests using day to day operations for a day or two.
    In this case I created a data set containing various files including some VHDX files.
    I copy the files over the disk sets I have (From A to B, from B to B, from B to A).
  3. Thereafter compare the results with other hardware (again two days).
  4. I use the new copy tool (nice!) of Server 2012 combined with Resource Monitor to measure
  5. I have a chronometer but did not need to use it because results are obvious.

In the case of Storage Spaces I read the "Understand and Troubleshoot Storage Spaces in Windows Server 8 Beta" document.

For data redundancy I chose (of course) Parity, although it is apparently not possible to use this setting for clustering (strange).

For the ARC 1882X RAID controller I used default settings with RAID 6.

That is way faster than storage pools.

Unfortunately, I cannot answer your stripe size question, because during the setup of Storage Pools I did not encounter this setting. Hence I probably used the defaults.
Regarding "Thin" or "Fixed" provisioning: no big speed differences.

On the ARC 1882X RAID I used the default 64KB stripe size.

Please assist.

JanV.

June 25th, 2012 6:17am

Sorry to hear this as we get very similar numbers with spaces and JBOD...

-nismo

June 25th, 2012 10:35am

I get 12MB/sec write and 40-50MB/sec read on a 10TB parity Storage Space over 23 SATA disks, using SQLIO for 64KB sequential reads and writes.
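
For reference, a typical SQLIO invocation for this kind of 64KB sequential test looks roughly like the following (the test file path, duration, and thread/queue-depth settings are illustrative, not necessarily the exact parameters used here):

sqlio -kR -fsequential -b64 -t8 -o8 -s60 -LS -BN S:\testfile.dat
sqlio -kW -fsequential -b64 -t8 -o8 -s60 -LS -BN S:\testfile.dat

(-kR/-kW selects read or write, -b64 is the 64KB block size, -t8/-o8 means 8 threads with 8 outstanding I/Os each, -BN disables buffering, and -LS reports latency.)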

In other words, the performance of SS is substantially worse than that of a single underlying disk.

Also, the machine's responsiveness during heavy reads and writes to a parity SS is terrible. I get several-second response times to keyboard presses, mouse clicks, etc. during heavy SS activity. After I/O is stopped, responsiveness returns to normal after 10-20 seconds.

My 8-year-old software RAID5 card (RAIDCore) over 8 of these disks gets 400MB/sec+. Even the old Windows software RAID5 is better.

Unfortunately, SS does not have remotely adequate performance, at least not in Server 2012 RC.



June 25th, 2012 11:33am

Well, this is what we get too (at best).

And, with that "400MB/sec+" for RAID 5, you are experiencing the same thing we do: SS is an order of magnitude slower than hardware RAID.

That is a pity because we had very high expectations. SS is a very nice concept.
Could someone at Microsoft tell us what the performance goal of this project was?

Alternatively, what can we expect for Server 2012 RTM?

Or: what are we overlooking? Mistuning?

Thanks,

June 25th, 2012 6:36pm

While that does seem far lower than we would expect for Storage Spaces, in general one should expect parity (on any subsystem) to be less performant than other resiliency types like Mirror or Simple. Also, with Storage Spaces, the parity resiliency type is best suited for large-block sequential reads.

Regarding the bursting effect you are seeing, this is likely the result of Windows caching the I/O because the source can be read faster than the destination can be written. To see file copies without buffering, you can use Xcopy with the /J switch to do unbuffered copies, which should result in more consistent numbers across the length of the total file copy.
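
For example, an unbuffered copy of a large test file would look like this (the paths here are just placeholders):

xcopy D:\TestData\LargeFile.vhdx S:\Target\ /J /Y

The /J switch requests unbuffered I/O (recommended for very large files); /Y simply suppresses the overwrite prompt.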

Note: Allocating a dedicated physical disk as a Journal should improve the performance that you are seeing with the Parity Storage Spaces.

Another factor which would contribute to the results you are seeing is that Storage Spaces has a maximum of 8 columns (spindles) when using Parity. To achieve maximum performance, we would recommend evaluating with either the Simple or Mirror resiliency types, which are not subject to this limit of 8 columns and should result in more consistent I/O performance.

For example, reads on a Mirror are increased by reading from all copies, and writes do not incur the inherent performance penalty associated with Parity.


June 26th, 2012 9:03pm

Thanks Bruce.

I was getting ~20MB/sec large-block sequential writes when using just 3x 4TB disks in an 8TB parity SS. Each of these disks could write at 130MB/sec+ when used alone. So this is still extremely poor given the underlying hardware available, and that configuration would not hit the column limit either.

My old software RAID card gets pretty close to RAID0 performance (7/8ths, given one disk is used for parity) with large block sequential writes on RAID5 across 8 disks. Also Software RAID on Linux/Ubuntu has no similar performance problem. So I really don't think the need to make parity calculations should be causing these problems.

I suspect the problem may be physical disk contention with the journaling, with sequential writes being turned into random writes across separate areas for data and journal. On ZFS (SS's closest competitor), journaling without a dedicated ZIL (journal disk) limits the write performance of a raidz vdev (kind of like a RAID5 parity virtual disk) to that of a single physical disk. But even being able to write at the 100MB/sec+ limit of a single disk would make a huge difference to SS. Right now we are getting a fraction of that.

Can you let us know how to add a dedicated physical disk as a journal? I don't see it in the SS UI. Maybe through PowerShell? http://technet.microsoft.com/en-us/library/hh848705

June 27th, 2012 12:07am

Yes, you can manage this via the Storage module in PowerShell.

Assuming you have a Storage Spaces pool named "StorageSpacesPool" and one available physical disk, this would look like:

 

$PDToAdd = Get-PhysicalDisk -CanPool $True

Add-PhysicalDisk -StoragePoolFriendlyName "StorageSpacesPool" -PhysicalDisks $PDToAdd -Usage Journal

 

June 27th, 2012 1:15am

Thanks. So it looks like the journal disk is attached to the pool, not any particular virtual disk. So is it the case that a single journal disk can be used to support multiple parity virtual disks?

If so, this is good design. For example, a ZFS ZIL (equivalent to a journal disk) only requires about 1GB of space to support 100MB/sec of outstanding writes, as it flushes the journal within 10 seconds. So presumably the journal disk doesn't need to be that big, even if it is supporting multiple virtual disks?

It is a shame that none of this is documented properly anywhere, given there is no way to set this using the UI (at least in the RC). The "Usage" switch is shown at http://technet.microsoft.com/en-us/library/hh848702, but none of the possible options (e.g. "Journal") are shown. Will this be put into the UI for RTM?

June 28th, 2012 1:41am

Sorry, one correction: you would actually need to allocate 2 disks as journals, and then they are usable across multiple Storage Spaces using parity.

On Windows 8 client, we are not planning to add this in the control panel for managing Storage Spaces for this release. 

Note however that when running Windows Server 2012, the setting for Journal is controllable using the File and Storage Services canvas in Server Manager.

For the documentation issue, this TechNet content was originally posted for the Consumer Preview, and this is one specific area that has changed slightly since then for the Release Preview. The writers are working hard on this as I type, and we will have it updated to reflect the current cmdlet usage in the near future.

Could you try copying a large file (such as the ISO for the Release Preview) and let me know how that works? Even without dedicated journal disks we would expect much better performance; I suspect part of this is the result of small I/Os, whereas we would see optimal performance with large files on Parity.

June 28th, 2012 3:19pm

Thanks Bruce. Ok, so I assume the journal needs to be redundant with two disks.

How big do they have to be? i.e. how large can the journal get? Is it related to sustained write speed, as with the ZFS ZIL? Could you use SSD or USB flash drives, for example?

Makes sense not to make things complex with journal options on the Windows 8 client UI. On the Server 2012 RC UI, I have the option to add a single new disk to the pool for Data Store, Manual, or Hot Spare - but not Journal. Do I need two disks before the Journal option shows up? Or is it somewhere else?

I have moved some disks around, but still don't get more than 18MB/sec writes to my pool, even with the 3.5GB ISO. See below (for some reason, the bottom of the pictures are cutting off when I upload today - I've tried IE and Chrome). The copy also freezes at 99% for 40 seconds or so before completing.

 

I have also noticed that the UI on Server 2012 RC is extremely sluggish, and the Resource Monitor is mostly unresponsive, when writing to a SS. JVled on the original post noticed the same.

SS also seems to disable the write-back cache and read-ahead on the underlying physical disks, according to the controller's UI.

So I am now wondering if there is some issue when there is a SS parity virtual disk in place, where activity to the disks on a SS causes massive interrupts (or something similar), and this therefore slows down the responsiveness of the entire machine? That would explain the unresponsiveness of the UI and the slowness of the SS disk?

Details of my hardware configuration are here: http://social.technet.microsoft.com/Forums/en-US/winserver8gen/thread/578a4c17-ed61-4d1a-837b-bdde46f2e7fe. The 23 SS physical disks are now spread across 4 controllers: 14 on 2x PCI-X, 8 on 1x PCIe, and 1 on motherboard SATA.

June 29th, 2012 2:15am

This is an amazing thread!  Bruce, thank you very much.  I spent a good chunk of time trying to figure out how to configure this after reading clues about it from some posts about using SSDs for journaling for the back-end storage of Azure.  I was able to piece together the PowerShell b/c of the tab completion for the enumerator in -usage, but I wish I had seen this sooner - it would have saved me a lot of time. I'm eager to see the follow-ups to David's questions, but I'd like to add my experience b/c it's not working for me either.

I have 4 disks.  If I put 3 auto and 1 in journal, I cannot create a virtual disk.  I get errors that there are not enough available drives.  If I remove the journal drive, I can create the disk.  If I add the disk back as a journal drive after I have created the virtual disk, I am successful.  However, when I do copies with it and look at transfers/sec on the journal disk, I see no activity.

Are my problems because I need 2 journal drives in order for it to get detected?  Do I need to create the virtual disk with the journal drive somehow or should the parity disks just start using it as long as it is in the pool?

Finally, are there any counters to monitor to determine the extent of a journaling bottleneck?

Thank you again for a really good thread!

June 29th, 2012 3:32am

Well, it looks like I can answer my main question.  You do need two journal disks.  I placed a second disk as a journal disk in the pool and I can now create a virtual disk using that pool.  So to recap, 3 disks in auto and 2 in journal.  Also, perfmon is now showing that those disks are being used when I do a transfer so offloading the journaling appears to be working for me now.  I'm doing all of this with VHDs just to prove out the technology (until I can get this on some beefy kit), but it's interesting:  my three parity disks are getting 90 IOPS each while my 2 journaling disks are getting 270 IOPS.  The volume itself appears to only be getting about 140 IOPS.  I thought I understood journaling, but perhaps I do not.  It appears that every IO that is written to the parity disks (both real data and parity data) is also written to the journaling disks.  

My final question still stands.  Any counters to help us understand what is happening with the journaling?  Or is reads/writes per second on the physical journaling disks all we have? 
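
For now I am just watching the standard PhysicalDisk counters on the journal disks themselves, for example with Get-Counter. The instance numbers below are only an example from my VHD test rig and will differ on other systems:

# Sample writes/sec on the two disks assumed to be the journals, every 2 seconds for 30 seconds
Get-Counter -Counter "\PhysicalDisk(4*)\Disk Writes/sec","\PhysicalDisk(5*)\Disk Writes/sec" -SampleInterval 2 -MaxSamples 15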

Thanks again.

June 29th, 2012 4:18am

Great you managed to add and use journal disks in the pool.

Your benchmarking shows that every IO is written to both journal disks, which is what you would expect given the journal needs to be redundant. So 90 IOs/data disk x 3 = 270 IOs for each journal disk.

However I would expect that the MB/sec written to the journal disk is much less than what is written to the data disks - at least for large transfers.

June 29th, 2012 4:47am

I think we (and MS) should not forget a principle / fundamental point:

We (that is small / medium business) had high costs when we wanted to implement a SAN.

Our simple HP (Lefthand) SAN had a price tag of about 50.000.

In addition, the SAN requires serious care / training / updating etc. Upgrading it is seriously expensive.

With SS we could buy the same or bigger capacity for less than 10K. Easy expansion. Easy maintenance. Virtually no training. Probably more secure (the OS has direct contact with the disks).

So: SS design should stay Simple and Fast. The whole idea of journaling disks is not simple.

Neither is the management via PowerShell. We need a good and simple UI that can do the basic job.

And a set-and-forget scenario, as the default.

I would like to repeat, as David Trounce and I already mentioned, that the Resource Monitor is mostly unresponsive when writing to a SS and to other RAID sets (also without SS).

I also installed Server 2012 on an HP G6 server using its internal DAS P410i RAID controller (HP RAID 50 - NPG: 2).

Everything works fine and fast (copying and the like); however, Resource Monitor is also unresponsive and consumes (while copying) 15-25% of processor capacity. Hence, this behavior is probably not related to SS.

June 29th, 2012 6:41am

I'm wondering if there is some multi-thread timing / interrupt / synchronization problem with SS.

The same root cause could be causing these performance problems we see, and also the occasional issues of virtual disks going offline - see here: http://social.technet.microsoft.com/Forums/en-US/winserver8gen/thread/517d1f02-b619-4a3a-928f-1bd5e8fef5fe

June 29th, 2012 5:11pm

I must say, I am not unsatisfied with the speed of my storage space. It's not that bad.

Running a Server 2012 virtual machine on a Core i7-2600 (4 cores and 4 GB RAM attached) with 4 disks of 2TB, I am getting write speeds from a minimum of 20 to around 75 MB/s on a parity space. Mostly around 55 MB/s.

Tested this from a Gbit network, from a local Crucial M4 SSD (capable of reading 450MB/s), and from a local WD20EARS (capable of reading 140MB/s).

The write speeds are always about the same, ~55 MB/s. For me it's workable, although it would be nicer to have at least single-disk performance. But thankfully it's not the 15-20 MB/s some people are complaining about.


July 2nd, 2012 10:39am

I added two 500GB 7200rpm SATA disks for journal use to my pool per Bruce's instructions above. But this has had no real impact on the performance of the SS - which is still terrible.

More news: When I have two 9MB/sec read processes running off the SS (using robocopy to two write-limited NAS boxes), and then I try to copy a new folder to the SS:
* The reads drop to <1MB/sec each
* The copy process goes unresponsive very quickly.
 -- When copying from a network client, the client starts copying, but within 1-2 minutes states "the destination is no longer available".
 -- When copying using Explorer on the Server 2012 RC machine, after a minute Explorer freezes, and after several minutes more, it automatically restarts (presumably this is Explorer hang recovery).

This is even with the journal disks in place. So this makes SS pretty much unusable to host a network file share, at least on this configuration.


July 2nd, 2012 6:50pm

Bruce - can I now remove the two journal disks from the pool without impacting the redundancy of the pool? Just the following is a little worrying.

PS C:\Users\administrator.TROUNCE> $pdttoadd

FriendlyName        CanPool             OperationalStatus   HealthStatus        Usage                              Size
------------        -------             -----------------   ------------        -----                              ----
PhysicalDisk31      True                OK                  Healthy             Auto-Select                   465.76 GB
PhysicalDisk27      True                OK                  Healthy             Auto-Select                   465.76 GB

PS C:\Users\administrator.TROUNCE> remove-physicaldisk -storagepoolfriendlyname "Pool" -physicaldisks $pdttoadd

Confirm
Are you sure you want to perform this action?
Removing a Physical Disk will cause problems with the fault tolerence capabilities of StoragePool "Pool".
[Y] Yes  [A] Yes to All  [N] No  [L] No to All  [S] Suspend  [?] Help (default is "Y"): 


July 3rd, 2012 12:15am

In order to safely remove the Journal disks, you would need to perform the following steps:

Note: I'll use $PDToRemove as the variable containing the two disks that are journals which need to be removed.

# $PDToRemove would contain the Physical Disks to remove

# Retire the Physical Disks
$PDToRemove | Set-PhysicalDisk -Usage Retired

# Run a repair on the Storage Spaces to ensure all data is moved off the
# Journals
Get-VirtualDisk | Repair-VirtualDisk

# Repair progress can be monitored via Get-StorageJob
# When the StorageJob has completed, rerun Get-VirtualDisk
# to confirm they do not have an operational status of "In Service"
Get-VirtualDisk

# Then you would remove the physical disks using the command:
Remove-PhysicalDisk -StoragePoolFriendlyName <poolname> -PhysicalDisks $PDToRemove

July 3rd, 2012 5:34pm

Thanks Bruce. That worked. It still threw out that same warning "Removing a Physical Disk will cause problems with the fault tolerence capabilities of StoragePool "Pool".".

I took a chance and chose "Yes" at the "are you sure?" prompt, and the virtual disks are still "OK" after removing the journal disks. And those disks show up again in Disk Management.

So it seems that this warning should not show up after the journal disks are retired? Removing retired disks should not cause problems with fault tolerance.


July 3rd, 2012 5:51pm

A new test. On the same hardware, I created a new Parity virtual disk across 8x 500GB 7200rpm SATA disks, all attached to the same PCIe controller (Supermicro AOC-SASLP-MV8). No journal disks.

Using SQLIO, 64K block size, sequential access, 8 threads, 8 outstanding, I get 490MB/sec reads and 53MB/sec writes. So this is much better performance. Still not great on writes (presumably given the need to journal), but much better at least.

I suspect that the terrible performance problems on my larger 23-disk SS are (1) related to using disks across multiple controllers, possibly synchronization / interrupt problems, and/or (2) using 23 disks for a single parity virtual disk.

I now plan to set up multiple parity virtual disks using separate sets of physical disks. So four parity virtual disks of 8 disks each. I can then stripe these four virtual disks together as a single volume in Disk Management, to give me a RAID50 and hopefully 4x the write performance.

Being able to organize such that physical disks for each virtual disk were placed across multiple controllers would be helpful, to give controller as well as disk redundancy. But without double-parity (RAID6) or decent performance for virtual disks across multiple controllers, this is out of the question.


July 3rd, 2012 6:47pm

Update: when I run read and write tests on this new 8-disk virtual disk, immediately or soon after, reads from the entirely separate, existing 23-disk virtual disk are dramatically impacted, dropping from 20MB/sec+ to 1-2MB/sec.

For write tests, larger block sizes (256KB, 1MB) have an impact almost immediately. 64KB block size tests take 10 seconds before the impact starts to occur.

For read tests, 1MB block size reads have an impact immediately. 256KB and 64KB block size reads do not seem to have that much impact. 512KB block sizes reduce the read speed to ~2MB/sec.

These virtual disks are using different disk pools, and the disks for each pool are on different controllers. So this should not be caused by journaling (which is separated by pool, and is only for writes). Also, given the predictable but variable delay before the impact occurs, this looks like an OS problem, not a hardware problem.

I suspect there is some dedicated pool of buffer memory being used for SS caching (separate from normal memory caching of reads and writes), and that performance drops dramatically once this pool is exhausted. It may be ~64MB based on these tests. 

There is presumably the same root cause for this and all the performance and unresponsiveness problems raised above.

July 3rd, 2012 8:26pm

Based on some testing, I have several new pieces of information on this issue.

1. Performance limited by controller configuration. First, I tracked down the underlying root cause of the performance problems I've been having. Two of my controller cards are RAIDCore PCI-X controllers, which I am using for 16x SATA connections. These have fantastic performance for physical disks that are initialized with RAIDCore structures (so they can be used in arrays, or even as JBOD). They also support non-initialized disks in "Legacy" mode, which is what I've been using to pass-through the entire physical disk to SS. But for some reason, occasionally (but not always) the performance on Server 2012 in Legacy mode is terrible - 8MB/sec read and write per disk. So this was not directly a SS issue.

So given my SS pools were built on top of disks, some of which were on the RAIDCore controllers in Legacy mode, in the prior configuration the performance of the virtual disks was limited by some of the underlying disks having this poor performance. This may also have caused the unresponsiveness of the entire machine, if the Legacy-mode operation had interrupt problems. So the first lesson is: check the entire physical disk stack first, under the configuration you are using for SS.

My solution is to use all RAIDCore-attached disks with the RAIDCore structures in place, so performance is more like 100MB/sec read and write per disk. The problems with this are (a) a limit of 8 arrays/JBOD groups that can be presented to the OS (for 16 disks across two controllers), and (b) loss of a little capacity to the RAIDCore structures.

However, the other advantage is the ability to group disks as JBOD or RAID0 before presenting them to SS, which provides better performance and efficiency due to limitations in SS.

Unfortunately, this goes against advice at http://social.technet.microsoft.com/wiki/contents/articles/11382.storage-spaces-frequently-asked-questions-faq.aspx, which says "RAID adapters, if used, must be in non-RAID mode with all RAID functionality disabled.". But it seems necessary for performance, at least on RAIDCore controllers.

2. SS/Virtual disk performance guidelines. Based on testing different configurations, I have the following suggestions for parity virtual disks:

(a) Use disks in SS pools in multiples of 8 disks. SS has a maximum of 8 columns for parity virtual disks. But it will use all disks in the pool to create the virtual disk. So if you have 14 disks in the pool, it will use all 14 disks with a rotating parity, but still with 8 columns (1 parity slab per 7 data slabs). Then, and unexpectedly, the write performance of this is a little worse than if you were just to use 8 disks. Also, the efficiency of being able to fully use different sized disks is much higher with multiples of 8 disks in the pool.

I have 32 underlying disks but a maximum of 28 disks available to the OS (due to the 8 array limit for RAIDCore). But my best configuration for performance and efficiency is when using 24 disks in the pool.

(b) Use disks as similar sized as possible in the SS pool. This is about the efficiency of being able to use all the space available. SS can use different sized disks with reasonable efficiency, but it can't fully use the last hundred GB of the pool with 8 columns - if there are different sized disks and there are not a multiple of 8 disks in the pool. You can create a second virtual disk with fewer columns to soak up this remaining space. However, my solution to this has been to put my smaller disks on the RAIDCore controller, and group them as RAID0 (for equal sized) or JBOD (for different sized) before presenting them to SS. 

It would be better if SS could do this itself rather than needing a RAID controller to do this. e.g. you have 6x 2TB and 4x 1TB disks in the pool. Right now, SS will stripe 8 columns across all 10 disks (for the first 10TB /8*7), then 8 columns across 6 disks (for the remaining 6TB /8*7). But it would be higher performance and a more efficient use of space to stripe 8 columns across 8 disk groups, configured as 6x 2TB and 2x (1TB + 1TB JBOD).

(c) For maximum performance, use Windows to stripe different virtual disks across different pools of 8 disks each. On my hardware, each SS parity virtual disk appears to be limited to 490MB/sec reads (70MB/sec/disk, up to 7 disks with 8 columns) and usually only 55MB/sec writes (regardless of the number of disks). If I use more disks - e.g. 16 disks, this limit is still in place. But you can create two separate pools of 8 disks, create a virtual disk in each pool, and stripe them together in Disk Management. This then doubles the read and write performance to 980MB/sec read and 110MB/sec write.
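
As a rough PowerShell sketch of that layout (the pool names and disk selection below are only illustrative; the final striping of the two resulting disks into one volume is still done in Disk Management):

# Split the poolable disks into two pools of 8 (illustrative selection)
$disks = Get-PhysicalDisk -CanPool $True
$subsys = Get-StorageSubSystem -FriendlyName "Storage Spaces*"
New-StoragePool -FriendlyName "Pool1" -StorageSubSystemFriendlyName $subsys.FriendlyName -PhysicalDisks ($disks | Select-Object -First 8)
New-StoragePool -FriendlyName "Pool2" -StorageSubSystemFriendlyName $subsys.FriendlyName -PhysicalDisks ($disks | Select-Object -Skip 8 -First 8)

# One 8-column parity virtual disk per pool
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "Parity1" -ResiliencySettingName Parity -NumberOfColumns 8 -UseMaximumSize
New-VirtualDisk -StoragePoolFriendlyName "Pool2" -FriendlyName "Parity2" -ResiliencySettingName Parity -NumberOfColumns 8 -UseMaximumSize

# Then stripe the two new disks together as a single volume in Disk Management.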

It is a shame that SS does not parallelize virtual disk access across multiple 8-column groups that are on different physical disks, and that you need to work around this by striping virtual disks together. Effectively you are creating a RAID50 - a Windows RAID0 of SS RAID5 disks. It would be better if SS could natively create and use a RAID50 for performance. There doesn't seem to be any advantage in not doing this, as with the 8-column limit SS is using 2/16 of the available disk space for parity anyhow.

You may pay a space efficiency penalty if you have unequal sized disks by going the striping route. SS's layout algorithm seems optimized for space efficiency, not performance. Though it would be even more efficient to have dynamic striping / variable column width (like ZFS) on a single virtual disk, to fully be able to use the space at the end of the disks.

(d) Journal does not seem to add much performance. I tried a 14-disk configuration, both with and without dedicated journal disks. Read speed was unaffected (as expected), but write speed only increased slightly (from 48MB/sec to 54MB/sec). This was the same as what I got with a balanced 8-disk configuration. It may be that dedicated journal disks have more advantages under random writes. I am primarily interested in sequential read and write performance.

Also, the journal only seems to be used if it in on the pool before the virtual disk is created. It doesn't seem that journal disks are used for existing virtual disks if added to the pool after the virtual disk is created.

Final configuration

For my configuration, I have now configured my 32 underlying disks over 5 controllers (15 over 2x PCI-X RAIDCore BC4852, 13 over 2x PCIe Supermicro AOC-SASLP-MV8, and 4 over motherboard SATA), as 24 disks presented to Windows. Some are grouped on my RAIDCore card to get as close as possible to 1TB disks, given various limitations. I am optimizing for space efficiency and sequential write speed, which are the effective limits for use as a network file share.

So I have: 5x 1TB, 5x (500GB+500GB RAID0), (640GB+250GB JBOD), (3x 250GB RAID0), and 12x 500GB. This gets me 366MB/sec reads (note: for some reason, this is worse than the 490MB/sec when just using 8 disks in a virtual disk) and 76MB/sec writes (better than the 55MB/sec on an 8-disk group). On space efficiency, I'm able to use all but 29GB in the pool in a single 14,266GB parity virtual disk.

I hope these results are interesting and helpful to others!





July 5th, 2012 5:44pm

Neither is the management via PowerShell. We need a good and simple UI that can do the basic job.

I disagree with this point. If you are creating, in effect, SANs, PowerShell is by far the simplest way once you get past the learning curve of PowerShell. The CMDLets are pretty intuitive (again once your intuition is tuned with a bit of training!) and straightforward.

My advice is that all changes made to production storage pools are done by a script that can be audited to see EXACTLY what it did, vs. trying to remember what you did in a GUI.

July 6th, 2012 1:18pm

I don't see the "PowerShell vs. GUI" debate as an either/or. It seems to me that both are useful under different circumstances and for different users.

* UI: for simple and standard operations, for operations that need to be done quickly, for one-off operations, for things that don't need logging, for test and development, for less experienced users (for whom PowerShell is a steep learning curve on top of the Windows component concepts, and the UI provides guidance and training)

* PowerShell: for more complex operations that require scripting, for operations that need to be repeatable, when good logging is required, for production, for more experienced users comfortable with PowerShell

One of the reasons that Windows has been so successful is because the GUI makes it easy to use by a wide range of users with varying skills, and that you can fairly easily self-teach by doing. But the GUI does become a limitation when you need to script, apply repeatable operations, document what you did, etc.

SQL Server takes a similar dual approach. The engine runs on SQL (the scripting language), but the GUI writes SQL for you. Similar with VBA in Office applications - you can record what you do in the GUI to script VBA. I assume that the way Microsoft is going with Windows is that the GUI will be writing PowerShell commands for you.

Ideally, each GUI dialog would have a button at the end which was "Script", which gives you choices of "Script action to new PowerShell window", "Script action to File", "Script action to Clipboard". That way you get the simplicity of the GUI, but the transition to PowerShell is made much easier.

July 6th, 2012 2:53pm

Update: with one fewer 500GB disk (due to a disk failure), I'm now able to get 739MB/sec sequential reads and 155MB/sec sequential writes, with a 14TB Windows Disk Management stripe across 4x 3.5TB arrays:

* 3.5TB SS parity disk (8x 500GB on Supermicro AOC-SASLP-MV8)
* 3.5TB SS parity disk (8x 500GB on Supermicro AOC-SASLP-MV8)
* 3.5TB SS parity disk (5x 500GB, (2x250GB JBOD), (2x250GB JBOD), 640GB, over motherboard SATA and RAIDCore)
* 4TB hardware RAID5 (5x 1TB on RAIDCore)

Total usable space: 13,458GB (12,991GB stripe, plus 376GB extra on the hardware RAID5, plus 91GB extra on the 640GB)

So you can improve your Storage Spaces performance by several times, by manually/forcibly configuring it optimally. If you naively assume that Storage Spaces will automatically use the available hardware for maximum performance, you are likely to be disappointed.

Having said this, the sequential write performance is still pretty limited given the available hardware. By comparison, I get 700MB/sec sequential reads and writes over 8x 3TB 7200rpm SATA disks on a 7 year old RAIDCore BC4852. My understanding is that this is a limitation due to the need for journaling for data integrity.
July 9th, 2012 4:15pm

I have a similar problem. The Parity volume implementation is very slow on large sequential writes, while I would expect it to have performance similar to a Simple volume. See the attached screenshot, Parity vs. Simple.

The other solutions, ZFS or Linux's software RAID 5, have no performance problem on large sequential writes; they are comparable to RAID 0.

July 15th, 2012 7:14pm

Excellent thread!

Only intending to use this for 4 or 5 disks in a home/dev setup (I don't get to play with storage at work...) and I'm still a tad hesitant about committing to SS (and using Metro on a server...), but this is really useful information that'll help me decide. Thanks!

July 20th, 2012 9:41am

Hi there,

Storage Spaces seems to me to be a very useful tool for my purpose: a flexible way of providing space for home/SoHo file and media users. I am running Server 2012 Essentials on an Intel E8400 with 8GB RAM; the OS is on a single 250GB SATA disk (Samsung HD25KJ). All attached disks are connected through the onboard (MSI P35 Neo2-FR) Intel SATA controller. Cache flushing is allowed on the non-pool data drives (since I can't access them in Disk Management while a pool is running).

In my test setup I have 3 disks (500GB each: 2x Seagate ST3500820AS and 1x WDC WD5000AAKS-00YGA) for data storage and ONE SSD (CMFSSD-32D1) for journaling. To provide redundancy for journaling, I created 2 VHDX files on the SSD and mounted them via Disk Management. NOTE: the SSD was still seen as one device in Server Manager when creating the pool, but adding the 2 virtual SSD devices as Journal via PowerShell (as mentioned by Bruce in this thread) worked without error. A virtual parity disk was created with thin provisioning and 1TB of capacity.
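
For anyone wanting to reproduce this, a rough PowerShell equivalent of the journal setup is below. The paths, sizes, and pool name are only examples, and New-VHD/Mount-VHD require the Hyper-V module (which is why I created and attached the VHDX files in Disk Management instead):

# Two fixed-size VHDX files on the SSD to act as the (redundant) journal disks
New-VHD -Path C:\SSD\Journal1.vhdx -SizeBytes 16GB -Fixed
New-VHD -Path C:\SSD\Journal2.vhdx -SizeBytes 16GB -Fixed
Mount-VHD -Path C:\SSD\Journal1.vhdx
Mount-VHD -Path C:\SSD\Journal2.vhdx

# Add the two newly attached disks to the pool as journals
$journals = Get-PhysicalDisk -CanPool $True
Add-PhysicalDisk -StoragePoolFriendlyName "Pool" -PhysicalDisks $journals -Usage Journal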

I am confused about the write performance of my setup: IOPS are around 173-190 (4K random writes, queue depth 1), and sequential writes at ~20MB/s are pretty slow as well. One disk should get about ~75 IOPS, so this is only slightly better than native RAID5 write performance without use of the journal disk (read/write IOPS of the SSD are >10,000). Read performance is about 100 IOPS, which is low as well, and apparently unaffected by the SSD backing.

What I am wondering about is that during the write benchmark (AS-SSD or CrystalDiskMark; both show the same values), the journaling drive sees some access, which is shown in Resource Monitor. So the SSD seems to be used, but the pool does not benefit from it.

Journal 1 and 2 each write at about 15MB/s during the write benchmark, with a queue of 2-4; there is no reading from the journal drives during the benchmark.

Any ideas on how to bring the pool to its full performance?


August 28th, 2012 8:19am

@Blackliner,

My reading of the new Storage functionality finds that MS is targeting deployments with large pools of HDDs, think 30+, not 3 to 12 HDDs. Further, from other reports, the best setup uses "Mirror" resiliency, not Parity.  ("Simple" is really not appropriate for high availability purposes).

September 4th, 2012 10:22pm

Hi "Bruce Langworthy" wrote:

"Note however that when running Windows Server 2012, the setting for Journal is controllable using the File and Storage Services canvas in Server Manager."

I'm trying to add two SSD drives as journal disks to a pool, but I've been unable to find an option for this in the Server 2012 RTM GUI. Can anybody help me find it?

P.

September 8th, 2012 1:41am

I still could not find any UI to set journal disks, so I did it using PowerShell.

I found that if I add journal disks to a pool that is already used by a virtual disk and a volume, that disk does not use the journal disks.

I had to delete the volume, delete the virtual disk, recreate the virtual disk, and recreate the volume, then the SSD journal disks were getting used.

How do I get an existing virtual disk and volume to use the journal disks?

P.

September 11th, 2012 2:00am

I started the thread on June 22, and in the meantime Server 2012 RTM-ed.

However, unfortunately I still cannot find decent documentation for configuring and tuning SS (be it with the GUI or PowerShell).

Am I overlooking something?

September 11th, 2012 2:53pm

I did some perf testing:

http://blog.insanegenius.com/2012/09/22/storage-spaces-leaves-me-empty/

September 22nd, 2012 10:56pm

Thanks for the nice tests.
September 24th, 2012 11:03am

I did some perf testing:

CrystalDiskMark and ATTO Disk Benchmark do not give real measurements for SS; the results depend on the cache.

I used dd for Windows, a PowerShell script for the time subtraction, and a large amount of data. My results:

Core i7, 16GB, 4x Seagate ST3000DM001.

Simple read: 6,618 MB in 10.15 s = 652.03 MB/s
Simple write: 30,000 MB in 57.82 s = 518.88 MB/s
Mirror read: 6,618 MB in 23.33 s = 283.64 MB/s
Mirror write: 30,000 MB in 115.68 s = 259.34 MB/s
Parity read: 6,618 MB in 13.18 s = 502.02 MB/s
Parity write: 30,000 MB in 1,031.68 s = 29.08 MB/s
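
For anyone who wants to reproduce the timing approach, a minimal sketch with Measure-Command is below (this is not the exact script used; the paths are placeholders and it assumes a large, pre-created test file on a fast source disk):

$src  = "D:\Test\30000MB.bin"       # large source file on a fast disk (placeholder)
$dest = "P:\Test\30000MB.bin"       # target on the Storage Spaces volume (placeholder)
$t = Measure-Command { Copy-Item $src $dest }
$mb = (Get-Item $src).Length / 1MB
"{0:N2} MB/s" -f ($mb / $t.TotalSeconds)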

September 27th, 2012 10:20am

Is Microsoft going to fix the parity option in SS? How is SS going to help people who can attain much better parity performance with a Linux RAID or ZFS solution, and how are people going to use SS to build a SAN?

Why doesn't Microsoft just say there is a problem and that they are working on a fix?


October 6th, 2012 2:18pm

I'm experiencing the same issues..

Server 2012 Standard

2x LSISAS 1068E - latest drivers, BIOS, firmware.

Configured as below:

The 23.5GB disk is a VD created on a RAID0 of 2x 64GB ADATA SSDs.

I was under the impression that setting a disk with Journal as the usage type would speed things up (a la the ZIL on ZFS).

Can anyone from Microsoft comment on the usage of the Journal? The Hot Spare certainly works as it should; I've tested pulling disks and rebuilding.

November 19th, 2012 12:08am

Thanks for your reply & testing.

I did not test myself during the past months because we are waiting for hardware suppliers to come up with solutions.

However, apparently, that takes a while

What I see/encounter is lots of theory, but little (best) practice.

The nicest theory (and also some test systems, which are fast!) can be found in this webcast: http://channel9.msdn.com/Events/TechEd/NorthAmerica/2012/VIR306

Further, there are no (rewritten) manuals, no best practices, and the whitepapers (PowerShell) are badly written.

It is stunning. E.g., the use, configuration, and effect of Journal disks is still not clear to me.

My test equipment (15K) has been idle for several months already.

November 19th, 2012 8:01am

I started this discussion and would like to end it as solved.

We recently bought adequate standard hardware equipment and configured storage spaces as advised in the blogs of Jose Barreto.

http://blogs.technet.com/b/josebda/ and http://blogs.technet.com/b/josebda/archive/2013/03/11/updated-links-on-windows-server-2012-file-server-and-smb-3-0.aspx .

We created Hyper-V over SMB with clustered storage (SS).

For the storage we used standard SAS 2.5 inch 7.2K 1TB disks.

In brief: this is a very good, fast, and cheap solution for small and mid-sized companies.

We will soon scale this solution up to use it for our larger companies.

We can recommend it to everybody.

Knowing that we can soon use cheap SAS SSDs, the solution could also be used for high-performance databases.

There is only one disadvantage to the system.

Hardware vendors are very slow in offering solutions.

They are probably not interested, because the system is so simple and so cheap.

We had to look for adequate hardware ourselves.

Unfortunately, Microsoft (specifically the storage team) is very slow giving best practice examples for SS.

More specifically, the storage team blog of Bruce Langworthy, http://blogs.msdn.com/b/san/, has not published anything for more than 5 months!

The PowerShell document (http://www.microsoft.com/en-us/download/details.aspx?id=30125) regarding SS has not been updated since Server 2012 was in beta: 12 Jun 2012! Horror.

However, with the help of Jose Barreto we managed to make it work.

Jose, thanks!

JanV.

April 6th, 2013 7:56am

I also get very poor performance when using a 3-column parity mode with 3 disks: maybe around 0 to 3 MB/s.

An alternative configuration is to use many thin disks with 8-column parity - much better write performance:

# Create ten 4TB thin-provisioned parity disks (BackupStorageThinDisk01..10), as in the original commands
1..10 | ForEach-Object {
    new-virtualdisk -storagepoolfriendlyname "BackupStorage" -friendlyname ("BackupStorageThinDisk{0:D2}" -f $_) -size 4TB -provisioningtype thin -resiliencysettingname Parity -numberofcolumns 8 -interleave 1MB
}

But I expect the interleave of 1 MB to be one of the major factors in the performance improvement (next to the number of columns).

March 13th, 2015 5:29pm

So three years later and not exactly the same situation, but...

In my case, I've got a pair of PCIe solid-state drives (Dell's NVMe Express Flash). I actually have 4 of them in a new server: 2 configured via a mirrored storage pool, the other 2 mirrored using the standard Windows Disk Management. Using ATTO, the storage pool performance is reasonable if not great up to the 1024KB transfer size... but once it hits 2048KB, read performance tanks. I mean it drops by ~ a factor of 10!!

That said, at the 1024KB transfer size, read performance is amazing... 4,294 MB/Sec.  Write performance with the same size is "only" 908 MB/sec.  But when it hits 2048KB, write performance stays pretty much the same while read drops to ~ 400 MB/sec.  I reconfigured the pool to use a 4K physical & logical sector size, but that had little effect. I've tried various settings, such as -IsPowerProtected, but most of those seem geared around writes & caching.  I think my issue is related to some kind of throttling or collision.  

Using the old disk mirroring, performance stays relatively close to a single, non-mirrored drive.  

It just seems like Storage Spaces/pooling is a half-baked product and even with Server 2012 R2 and all the hotfixes, it's still broken!

April 24th, 2015 2:51pm

This topic is archived. No further replies will be accepted.
