Support for Exchange Databases running within VMDKs on NFS datastores

The virtualization community is asking Microsoft to change its support position for Exchange Server running in VMDKs on NFS datastores. Support is long overdue.

The virtualized SCSI stack, including the initiators and disks inside a VM, is unaware of the underlying storage protocol that provides data connectivity between the hypervisor and the storage device. The rationale for the lack of support is not recognized by the storage industry.

The following co-authors (from 7 different vendors & 2 solution integrators) are working together for the good of our customers and the community to raise awareness of this issue, in the hope that Microsoft will reconsider the support policy.

Josh Odgers (@josh_odgers)

Senior Solutions & Performance Engineer @ Nutanix

VMware Certified Design Expert #90 & vExpert

Vaughn Stewart (@vStewed)

Chief Evangelist @ Pure Storage

vExpert

Chad Sakac (@sakacc)

SVP, Global Systems Engineering @ EMC

vExpert

Matt Liebowitz (@mattliebowitz)

Solution Architect @ EMC

VCP & vExpert & author of Virtualizing Microsoft Business Critical Applications on VMware vSphere

Michael Webster (@vcdxnz001)

Senior Solutions & Performance Engineer @ Nutanix

VMware Certified Design Expert #66 & vExpert

Andrew Mitchell (@amitchell01)

Consulting Architect @ VMware

VMware Certified Design Expert #30

Rob Waite (@rob_waite_oz)

Solution Architect @ Tintri

VCP & vExpert

Nick Howell (@that1guynick)

vTME @ NetApp

vExpert

Chris Wahl (@ChrisWahl)

Solution Architect @ AHEAD

VMware Certified Design Expert #104

Gabriel Chapman (@bacon_is_king)

Solution Architect @ Simplivity

www.gabrielchapman.com

VCP & vExpert

Mahmoud Magdy

Senior Technical Architect

vExpert, MVP and Symantec BExpert

The Details of Microsoft's Position

At present, Microsoft supports Exchange deployments on NAS only on its own hypervisor with its own file-based protocol, SMB 3.0. The intent of our effort is to address the gap in support for Exchange on VMware vSphere & KVM with datastores connected via NFS 3.0.

The support policy can be found here and is summarized below...

The storage used by the Exchange guest machine for storage of Exchange data (for example, mailbox databases and transport queues) can be virtual storage of a fixed size (for example, fixed virtual hard disks (VHDs) in a Hyper-V environment), SCSI pass-through storage, or Internet SCSI (iSCSI) storage. Pass-through storage is storage that's configured at the host level and dedicated to one guest machine. All storage used by an Exchange guest machine for storage of Exchange data must be block-level storage because Exchange 2013 doesn't support the use of network attached storage (NAS) volumes, other than in the SMB 3.0 scenario outlined later in this topic. Also, NAS storage that's presented to the guest as block-level storage via the hypervisor isn't supported.

Another statement of interest in the above link is as follows:

"Configuring iSCSI storage to use an iSCSI initiator inside an Exchange guest virtual machine is supported. However, there is reduced performance in this configuration if the network stack inside a virtual machine isn't full-featured (for example, not all virtual network stacks support jumbo frames)."

While the contributors to this post agree this is not an ideal configuration, it is not a performance issue when used with enterprise-grade storage and properly architected networking.

The support statement is odd, as a large portion of the VMware market is deployed over NFS; this configuration is fully supported by VMware and is the preferred means of data connectivity for many customers. A decade ago, one could run Exchange 5.5 over NAS (CIFS); that support was removed with Exchange 2000, and it appears the concerns date back to that Exchange 5.5 era. The difference with a virtualized instance is that the Exchange application is NOT accessing data via NAS - all VM data access is via virtualized SCSI.

This may have been a valid support policy back in the days before virtualization became mainstream and before the evolution of 10/40Gb Ethernet, when performance may have been limited by 1Gb connections (as opposed to the NFS protocol itself), or for deployments where NFS is presented directly to the guest OS, which we agree would not work.

The abstraction of virtual SCSI from the underlying infrastructure (FC, DAS, iSCSI or NFS) is shown in the below diagram courtesy of http://pubs.vmware.com

Over the years, the contributors to this community post have seen countless successful deployments of Exchange on vSphere using both block (FC, FCoE, iSCSI) and file-based storage (NFS/SMB), so why is NFS alone not supported?

There are a number of blog posts related to Exchange and NFS storage; one such example is by Tony Redmond (@12Knocksinna), titled "NFS and Exchange, Not a good combination".

To Tony's credit, he goes much further than most posts we have seen, which in most cases just say "It's not supported" and give no technical justification as to why.

Tony wrote:

One small, but terribly important issue is Exchange's requirement that it should be able to abort a transaction should bad things happen. Block-level storage allows this to happen but file-level storage supports aborts on a best-effort basis. Simply put, without the ability for an application to signal the storage subsystem that a transaction should be aborted, you run the risk of introducing corruptions into the databases through bad transactions.

With a VMDK presented to Exchange, we are not aware of any reason why Exchange (or any other application) would not function exactly the same as if the VMDK resided on an FC or iSCSI backed LUN, or if a LUN was presented to the guest OS via an in-guest iSCSI initiator.

This is due to vSphere abstracting the underlying storage from the VM. To the operating system running within the guest, the virtual hard disk appears and acts just like a local physical SCSI disk, regardless of the underlying storage protocol.
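As a simple illustration of this abstraction (a hypothetical helper, not vendor tooling), the Python sketch below shells out to the standard Windows Get-Disk cmdlet from inside a guest: the bus type reported for a PVSCSI-attached VMDK is an ordinary SCSI/SAS device, with nothing exposing whether the datastore underneath is FC, iSCSI or NFS.

# Hypothetical illustration: run inside a Windows guest to show that a virtual
# disk is presented as an ordinary SCSI device regardless of the datastore
# protocol (FC, iSCSI or NFS) backing the VMDK.
import subprocess

def show_guest_disks():
    """Print disk number, friendly name and bus type as the guest OS sees them."""
    command = [
        "powershell", "-NoProfile", "-Command",
        "Get-Disk | Select-Object Number, FriendlyName, BusType | Format-Table -AutoSize",
    ]
    print(subprocess.check_output(command, text=True))

if __name__ == "__main__":
    show_guest_disks()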

In support of this, Microsoft has a program called the Exchange Solution Reviewed Program (ESRP), which Microsoft partners can use to validate Exchange solutions. This program requires specific tests, including a 24-hour Jetstress run with predefined settings, to validate not only the performance of the storage subsystem but also the consistency of the database.

Here is a Jetstress report showing the ESRP test passing with the following VM configuration, with Exchange running within a VMDK on an NFS datastore:

1 x Windows 2008 VM

1 vCPU (2.6GHz)

24GB vRAM

4 x PVSCSI Adapters

8 x VMDKs (2 per PVSCSI)

8 Exchange Databases (one per VMDK)

2 DAG Copies

The 24 Hour test can be viewed here - 24 Hour Jetstress Test

The Databases checksum from the above 24 hour test can be viewed here - DB Checksum

Note: The above test was run for the purpose of this post, to show that the storage abstraction works for Exchange, not to demonstrate maximum performance for the underlying storage platform.
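For readers unfamiliar with how a Jetstress run is judged, the short sketch below encodes the commonly cited pass thresholds (average database read latency under 20 ms, average log write latency under 10 ms); the sample figures are invented for illustration and are not taken from the report linked above.

# Illustrative only: thresholds reflect the commonly cited Jetstress pass
# criteria; the per-database latencies below are invented sample values.
DB_READ_LATENCY_MS_MAX = 20.0    # average database read latency threshold
LOG_WRITE_LATENCY_MS_MAX = 10.0  # average log write latency threshold

def jetstress_passes(avg_db_read_ms, avg_log_write_ms):
    """Return True if the averaged latencies meet the pass thresholds."""
    return (avg_db_read_ms <= DB_READ_LATENCY_MS_MAX
            and avg_log_write_ms <= LOG_WRITE_LATENCY_MS_MAX)

if __name__ == "__main__":
    sample_runs = {"DB1": (8.4, 2.1), "DB2": (9.0, 2.3)}  # (read ms, log write ms)
    for db, (read_ms, write_ms) in sample_runs.items():
        verdict = "Pass" if jetstress_passes(read_ms, write_ms) else "Fail"
        print(f"{db}: DB read {read_ms} ms, log write {write_ms} ms -> {verdict}")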

So, if a vendor validates a VMDK on NFS implementation by successfully completing Microsoft's official tests (via ESRP), is there any reason not to support it?

We believe Microsoft should provide a formal process for storage vendors to certify their solutions for Exchange, or in the worst case at least allow vendors to submit hardware for validation using Microsoft's internal certification/QA process where these tools cannot be shared publicly.

Please show your support for this issue by voting and leaving your constructive or encouraging feedback in the comments at the Microsoft Exchange Improvement Suggestions Forum below. This issue is already rated as the #1 issue so the more support the better!

http://exchange.ideascale.com/a/dtd/support-storing-exchange-data-on-file-shares-nfs-smb/571697-27207

So to Microsoft - and to all the Microsoft Exchange & storage experts - we ask:

1. Can you clarify, by providing some form of documentation, what the issue is with Exchange on NFS natively? The goal is to ensure that if there is an issue, it's understood by the community.

2. Can you clarify, by providing some form of documentation, what the issue is with Exchange storage in a VMDK on an NFS datastore (where the storage is abstracted by the hypervisor)? The goal again is to ensure that if there is an issue, it's understood by the community and can possibly be addressed in future hypervisors.

3. If the support statement is simply outdated and needs updating, let's work together to make it happen for the good of all Microsoft's customers, especially those who have NFS storage from one of the many vendors in the market.

Now, for those customers experiencing this issue today, let's discuss the current workarounds available if you need to comply with the current support policy, and the continued impact to Microsoft customers if the policy is not updated.

Option 1. Use in-guest iSCSI initiators to present LUNs directly to the operating system

Issues

a) Increased complexity of managing two storage protocols (NFS + iSCSI); also, some vendors license features such as iSCSI at additional cost, making it expensive to license the storage just to support Exchange.

b) Some storage solutions may not support both NFS and iSCSI

c) Increased points of failure, e.g. the in-guest iSCSI initiator

d) Potential for reduced storage capacity efficiency

e) No ability for the guest to take advantage of advanced storage features that are added to the vSphere storage stack.

Option 2. Present iSCSI LUNs to vSphere hypervisor

Issues

a) Increased complexity of managing two storage protocols (NFS + iSCSI) and additional cost as explained above.

b) Some storage solutions may not support both NFS and iSCSI

c) Increased points of failure, e.g. the iSCSI initiator

d) Potential for reduced storage capacity efficiency

Option 3. Run Exchange on physical hardware (not virtual)

Issues

a) Increased complexity of designing and managing another silo of compute/storage

b) Increased datacenter requirements, which lead to increased OPEX (i.e. power/cooling/space)

c) Decreased ROI from physical hardware compared to virtualization

d) Decreased storage capacity efficiency

e) Increased CAPEX due to reduced flexibility in sizing physical hardware, i.e. sizing for the end state rather than current requirements

f) Delayed time to market / upgrades / expansion due to procurement of hardware

g) Increased hardware maintenance costs

h) Increased complexity for Disaster Recovery

It is clear, based on the experience of the contributors to this article, that NFS has a large footprint in the market, and Microsoft should seek a win-win for its customers, large and small, who use NFS.

Microsoft should extend support to NFS-backed storage deployments to facilitate and simplify Exchange deployments for customers using NFS storage. Without an updated support statement, these customers will likely be forced into multi-protocol or standalone silo-type deployments for Exchange, adding complexity and increasing CAPEX/OPEX.

Let's hope Microsoft cares about its customers as much as the authors of this document do!


February 11th, 2014 2:42am

Great writeup!

One thing I would emphasize: even though it's storage + converged infrastructure vendors leading the charge, this is an extremely frequent request from our customers. They really want Exchange + VMDK + NFS datastores to be supported by MSFT so they can virtualize Exchange in the same way they virtualize the rest of their environment.

Many of them also just ignore this support statement since they know there is no technical reason for it.  


February 11th, 2014 5:40am

Hey everyone, I hope all are well and enjoying weather that is as nice as it is here in Northern CA today. :)

I've been working with Exchange, into the six-figure (mailbox count) range, for over 15 years. After working in the field and teaching Microsoft courses (MCT), I landed at NetApp for 12+ years, where I first started working with others on an advanced storage management solution for Microsoft Exchange 5.5 (and 2xxx in the following years...). In the beginning, this was all before VSS, etc. I've seen a lot!

I currently work at Tintri in engineering, where I am (as you may guess) focused on some exciting and innovative Microsoft integrations.

Note: My comments are of my own personal opinion here, while working from home, juggling a few personal appointments today.

Candidly, if I were Perry Clarke (Exchange GM) and the decision to blindly support "NFS" was an all-or-none proposition, I would decisively and with conviction say "no" <the end>. I believe that is part of the problem surrounding this topic. There is often no clear qualification in the language when people ask. It's just "NFS", foo, bar. A small PC with a 1TB SATA disk and Linux could technically yield "NFS storage"... If I were Microsoft, I would NOT want to take a support call or have a customer report a major outage (or data loss) because I had blessed such a configuration.

Now back to reality. At Tintri, if we can deliver 100K r/w IOPS averaging a few milliseconds, power off controllers, yank network cables, fail switches, etc. (as we do regularly, such as with our Cisco UCS qualification) with no errors, then we are clearly in an entirely different airspace. With a disruptive, irrefutably robust, industrial-strength technology such as ours (I can only speak for Tintri), pushback from Microsoft can only push customers *away* from Exchange. One (of the many) reasons the Exchange team innovated so many great storage-driven features in recent years is because SANs were too complex, expensive, etc.

Given my long love affair with Exchange, I want to see Exchange thrive for years to come. I've been around long enough to understand how the Exchange team arrived at their disposition toward insulating Exchange from storage hiccups (it is not without reason), but times have and will continue to rapidly change. And that's not to mention the benefits of virtualization, and the automation capabilities that you simply DO NOT HAVE with bare-metal servers. Who cares if you can't put 12 cores and a jazillion GB's of RAM into a VM. That would be silly anyways, right? Multiple VMs, across multiple virtual hosts is obviously a better way to distribute the load while maximizing availability.

Lastly, I hope Microsoft isn't betting that Enterprises will move to the cloud too quickly. The continued public-cloud volatility and the outcome and context in which it will all play out is yet to be determined, especially in the short-term.

I can say this with confidence: Exchange works and performs brilliantly on Tintri VMstore, and we add a lot of value. Put us to the test, through any qualification process, and we will shine. I know, because I've done it myself and I am among the hardest to "impress" (so to speak).

I have written enough (for now)!

Warmest regards,

-Jp
February 11th, 2014 10:06pm

I can't comment on Tintri as I haven't tested it myself, but everything else you mentioned I agree with, with the exception of the two comments below.

"Candidly, if I were Perry Clarke (Exchange GM) and the decision to blindly support "NFS" was an all-or-none proposition, I would decisively and with conviction say "no" <the end>."

I agree, BUT I would say the same for any storage solution/protocol - so if it's just NFS, the current "needs to support SCSI aborts" excuse does not fly with Exchange in a VMDK on abstracted NFS storage.

"I believe that is part of the problem surrounding this topic. There is often no clear qualification in the language when people ask. It's just "NFS", foo, bar. A small PC with a 1TB SATA disk and Linux could technically yield "NFS storage"... If I were Microsoft, I would NOT want to take a support call or have a customer report a major outage (or data loss) because I had blessed such a configuration."

I would submit the same is true for iSCSI: any consumer-grade NAS (or, as you mentioned, a Linux VM) can serve iSCSI, and Microsoft supports iSCSI "blindly" in the same way you're saying they should not "blindly" support NFS. (Which, as I mentioned earlier, I agree - they shouldn't "blindly" support anything!)

I have seen plenty of storage simply undersized to the point where the OS has seen high latency and even delayed write failures and similar issues - it's not the fault of the FC, FCoE, iSCSI or NFS protocol though; it was simply an overloaded/undersized storage solution. But in this case, MS would support Exchange as long as it wasn't running on NFS!?

I think the case is simple: Microsoft should have a qualification process for all storage (as support based on a protocol which can be fully abstracted makes no sense). This would make support easier for MS, and provide more certainty for clients when deploying Exchange on a given platform, knowing it has passed the qualification tests.

Win/Win/Win for MS, storage vendors & our mutual customers

February 11th, 2014 11:17pm

Josh, thank you for pointing out that bit about iSCSI (that you could hack together a bummer "solution", and it would actually be supported). So true!

Sizing the workload correctly is very important, and it cannot be based on the size (space consumed) of the databases. I've seen the low-level storage traces as they've changed over the years with the Exchange versions, and Microsoft has made some vast improvements since the ~2000/2003 EDB versions. But all of the factors need to be evaluated together.
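To put a number on "sizing on I/O rather than capacity", here is a back-of-the-envelope sketch; the per-mailbox IOPS profile and headroom factor are illustrative assumptions, not figures from any sizing guide.

# Back-of-the-envelope sizing sketch: estimate database IOPS from the user
# profile rather than from space consumed. All figures are illustrative only.
def required_db_iops(mailboxes, iops_per_mailbox, headroom=1.2):
    """Estimate database IOPS needed for a given mailbox count and I/O profile."""
    return mailboxes * iops_per_mailbox * headroom

if __name__ == "__main__":
    # Example assumptions: 20,000 mailboxes at 0.10 IOPS each, 20% headroom.
    print(f"Estimated DB IOPS: {required_db_iops(20000, 0.10):.0f}")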

Elevating ESRP testing for NFS/SMB (vhdx|vmdk(EDB)) to first-class status would be well received by everyone. But if I were Microsoft, I would try to find a clever way to discourage garage-built, hillbilly "storage systems" (or servers for that matter) in favor of real products, with real support and sound engineering so that *customers* have resources to hold accountable with any issues.

:::I see WAG (Wild-Ass-Guess*) Engineering everywhere::::    ;)

Cheers,

-Jp

*Credit where it is due for that awesome term: Rob Cohee, Autodesk

February 12th, 2014 2:12am

After lots of feedback, I have expanded on the exact configuration being proposed to be supported in the below post.

What does Exchange running in a VMDK on NFS datastore look like to the Guest OS?

February 13th, 2014 11:58pm

This support issue is getting plenty of interest across the industry, to the point that VMware has had numerous inquiries from customers about it.

Quote "VMware has received a number of inquiries from our customers and partners about this movement"

http://blogs.vmware.com/apps/2014/02/microsoft-exchange-server-nas-support-position.html


February 14th, 2014 9:21am

Great article, thank you.
March 11th, 2014 3:44pm

A great workaround for this outdated support policy is to stop using Exchange. There are plenty of competitive products out there that do the same or a better job and are supported in this combination, or even on native NFS.
June 5th, 2014 3:19pm

Excellent discussion! I too have been struggling with this issue in our Exchange environment. I really want to get rid of physical-mode RDMs ASAP. Keep up the fight!
August 1st, 2014 5:39pm

Further information about Integrity of Write I/O for VMs on NFS Datastores

Part 1 - Emulation of the SCSI Protocol
http://www.joshodgers.com/2014/11/03/integrity-of-io-for-vms-on-nfs-datastores-part-1-emulation-of-the-scsi-protocol/

Part 2 - Forced Unit Access (FUA) & Write Through
http://www.joshodgers.com/2014/11/04/integrity-of-io-for-vms-on-nfs-datastores-part-2-forced-unit-access-fua-write-through/

Part 3 - Write Ordering
http://www.joshodgers.com/2014/11/06/integrity-of-io-for-vms-on-nfs-datastores-part-3-write-ordering/

Part 4 - Torn Writes
http://www.joshodgers.com/2014/11/10/integrity-of-io-for-vms-on-nfs-datastores-part-4-torn-writes/

Part 5 - Data Corruption 
http://www.joshodgers.com/2014/11/12/integrity-of-io-for-vms-on-nfs-datastores-part-5-data-corruption/
November 12th, 2014 2:37am

Josh, you and Nutanix and all of your co-petitioners are missing a fundamental point, while making a common and grievous mistake about the SCSI 'ABORT' command.

Microsoft is right on this and all of the NFS proponents are wrong.

Here is why:

When an ANSI-standard block device (target) receives an ABORT, and when the target acknowledges the ABORT was successful, it means that the write request that was queued to physical media has been cancelled, >>and that the data that was in that LBA has NOT been modified<<.

A successfully acknowledged ABORT thereby means that the write was cancelled while still queued, and that data at the physical media was >>DEFINITELY NOT CHANGED<<.

Therefore, the data at the physical storage LBA is >>NOT<< left in an indeterminate STATE.  Successful ABORT means "NO change was made, and the ORIGINAL data is unchanged"

You may want to read and re-read the above until the distinction is clear.

SCSI ABORT means that the data at the physical LBA in question is >>NOT<< left in an indeterminate state.  It means that the data at the LBA in question remains >>DEFINITELY<< in the state it was in BEFORE the write request was issued.

Subsequent recovery mechanisms rely on the STATE of the data being absolutely known.

The mechanisms you describe in VMware's implementation only remove the queued request from the >>EMULATOR<< queue, but the emulator has NO CLUE whether the write that was in its queue, which it subsequently queued to NFS and which NFS queued to the physical disk, was aborted AT the physical disk.

Therefore, following an 'emulated abort', no one has any way of knowing whether the data residing on physical disk is in the original state, or if it has been modified. 

In reality, VMware's implementation guarantees that WHATEVER is there is GARBAGE (indeterminate).

All known implementations of SCSI block emulation on NFS work (or more accurately, FAIL to work) in exactly the same way.  Microsoft knows this.

Now, when you go back and re-read the VMware patent you cited, you will notice that it makes no claim about the state of the data on physical media following an abort.

It's very simple.  In SCSI, a successfully acknowledged ABORT means "it DEFINITELY never happened. WHATEVER is there now is what WAS there prior to the write request."

In VMware's SCSI 'emulation', ABORT means something very different.  It means "let's agree to PRETEND it never happened, but in reality we have no bloody clue about the present state of the data on the physical media.  It could be the original data, could have been overwritten -- we have no clue!"

Josh, if you had chosen to actually READ the VMware patent language you cited (7,865,663), you might have noticed VMware admits EXACTLY this:

"The virtualization software emulates the SCSI abort command for a virtual storage device stored on an NFS device by removing the corresponding request from the information maintained by the virtualization software about pending I/O requests. If the command is still in Flight, the command is removed from the information, maintained by the virtualization software, by marking that the command was completed such that the guest Finds out that the command was actually aborted. In one implementation, the corresponding request is removed from the virtual SCSI request list. Therefore, if the results come back for a given request that has been removed from the virtual SCSI request list, the result of the reply is ignored (e.g., dropped) thereby emulating a virtual SCSI abort command."

So..."EMULATED ABORT" means "let's agree to PRETEND it never happened, but in reality we have NO BLOODY CLUE about the present state of the data on the physical media."  And...VMware's block emulation is bull$hit. 

Still confused?

Read, lather, rinse, comprehend, repeat:

"...by marking that the command was completed such that the guest 'Finds out' that the command was actually aborted."
"...by marking that the command was completed such that the guest (Assumes) that the command was actually aborted."
"...by marking that the command was completed such that the guest (Believes) that the command was actually aborted."

Comprehension may be aided thusly:

ftp://ftp.t10.org/t10/document.06/06-179r0.pdf

VMware here is like Bill Clinton, saying "It depends on what the meaning of the word 'is' is."

ROF, LOL.

Before I leave you, there is one other grievous miscomprehension to discuss.  You assume that the VMware PATENT describes the VMware PRODUCT.  That's a mistake.  Note the weasel words "in ONE implementation".

What they patented is in no way indicative of how it is actually implemented.

Meanwhile:

http://www.arnnet.com.au/article/576765/nutanix-rises-up-stack-acropolis-hypervisor-embargo-june9-est2230/

And:

http://www.theregister.co.uk/2015/03/05/vmware_sued_for_gpl_violation_by_linux_kernel_developer/

So...even if VMware DID have a patented implementation of SCSI ABORT emulation, where does that leave Nutanix with Acropolis?

Sorry, but at the end of the day, the company that designed the application gets to say what they do or do not support.  VMware's storage technology is a 'house of cards' and MSFT knows this, as do all of us who truly understand the subject matter.  NFS is dinosaur technology that was never intended for ACID compliant databases.  NFS is for file-servers and not for enterprise applications.  Can we move on now?


August 11th, 2015 7:20pm

Hi Eric,

I would appreciate if you would share your full name and employer, since the above was your 1st post on TechNet.

Thanks for the reply, although I think the negativity could have been left out which would have made the conversation more constructive.

I figured I'd get the question of Nutanix Acropolis out of the way first. Acropolis Hypervisor (AHV) uses iSCSI and is certified under the Server Virtualization Validation Program (SVVP), so it's fully supported for MS Exchange. Acropolis will also manage Hyper-V (running SMB 3.0) in an upcoming release, so that's another supported solution on Nutanix, in addition to ESXi with iSCSI, which we also support.

Regarding the article you referenced about VMware being sued, I'm not sure what relevance that has to the topic.

If you and Microsoft were correct about SCSI protocol emulation on ESXi not working as per the patent, every SQL deployment on ESXi/NFS datastores would have corruptions left, right and center. This is not the case. I'm not saying corruptions never occur, but when they do, it's not due to SCSI protocol emulation of VMDKs on NFS datastores.

In addition, the SQL team supports SQL in a VMDK on NFS datastores; I've personally validated this and even wrote the article below.

http://www.joshodgers.com/2015/03/17/microsoft-support-for-ms-sql-on-nfs-datastores/ 

Now you may come back with the fact that NFS isn't supported for some SQL deployment types, and this is true: old-style clustering using shared disks is not supported, but things like AlwaysOn Availability Groups are supported.

http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1037959

Coming back to what I believe is your main point around SCSI aborts and the various quotes you chose to use.

The one you left out, which is detailed in my post below, is the following:

"Accordingly, a faithful emulation of SCSI aborts and resets, where the guest OS has total control over which commands are aborted and retried can be achieved by keeping a virtual SCSI request list of outstanding requests that have been sent to the NFS server. When the response to a request comes back, an attempt is made to find a matching request in the virtual SCSI request list. If successful, the matching request is removed from the list and the result of the response is returned to the virtual machine."

So what you quoted is true, but it's not the complete story. If the virtual SCSI request exists it is dropped, as you mention, but a response is also sent to the virtual machine, so it and the application are completely aware of what's going on, in the same way as a physical system or a virtual machine running on iSCSI/FC etc.

http://www.joshodgers.com/2014/11/03/integrity-of-io-for-vms-on-nfs-datastores-part-1-emulation-of-the-scsi-protocol/
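As a rough sketch of the behaviour those patent excerpts describe (not VMware's actual code): outstanding virtual SCSI requests are tracked in a list, an abort removes the entry and completes the command back to the guest, and any late reply from the NFS server that no longer has a matching entry is simply dropped.

# Rough sketch of the bookkeeping described in the patent excerpts above;
# this is NOT VMware's implementation, only an illustration of the idea.
class VirtualScsiRequestList:
    def __init__(self):
        self._pending = {}  # request id -> request metadata

    def issue(self, req_id, lba, data):
        """Track a guest request before it is forwarded to the NFS server."""
        self._pending[req_id] = {"lba": lba, "data": data}

    def abort(self, req_id):
        """Emulate a SCSI abort: drop the entry and complete the command to the guest."""
        self._pending.pop(req_id, None)
        return "aborted"   # the guest is told the command was aborted

    def on_nfs_reply(self, req_id, result):
        """Replies with no matching entry (already aborted) are ignored."""
        if req_id in self._pending:
            del self._pending[req_id]
            return result  # normal completion returned to the guest
        return None        # late reply for an aborted request: dropped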

So in summary: the SQL team supports VMDKs on NFS datastores, and SQL, along with most applications, has the same block storage requirements the Exchange team quote, including write ordering, Forced Unit Access, Write Through (which, BTW, Nutanix does regardless of storage protocol) and protection from torn I/Os.

I don't expect we'll agree on this topic, as over the years it has gone beyond the technical; it probably started as MS simply not wanting to help/encourage ESXi deployments, but in the last few years it has gone into the realm of religious debate.

To me, I just hate it when customers are scared into expensive capital expenditure or dedicated hardware silos for one application (such as Exchange) due to FUD. Worse still, the FUD is repeated (in 99% of cases) by people simply parroting what somebody else says, as opposed to understanding some of the topics outlined in my blog.

Now don't misinterpret the above as me saying you are parroting - I have no idea if you are or not, but to your credit you have gone much deeper into the conversation than most.

Nutanix customers can choose from 3 hypervisors, each of which is fully supported (by MS) using iSCSI or SMB 3.0, but the purpose of the original post was about updating the support statement for all vendors; Nutanix simply took the lead in raising awareness in the community, which I am happy to say has been achieved.

My closing point: it seems crazy that I can run MS Exchange in a supported config on a consumer-grade NAS running iSCSI, but not on enterprise-grade, highly QA'd storage solutions from many vendors if NFS datastores are used with ESXi.

For such a business-critical application, if it were my call, I would have a storage certification program regardless of storage protocol, as plenty of block & file storage implementations vary in quality. Even some enterprise arrays don't do FUA or Write Through, which the Exchange team have always insisted is required - but only in the context of NFS discussions.
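For readers who want to see what "write through" means from an application's point of view, here is a minimal POSIX-flavoured sketch (illustrative only, not Exchange or ESE code): opening the file with O_SYNC, or calling fsync, asks the storage stack to acknowledge the write only once it has reached stable media; whether a given array honours that request is a property of the array, not of the protocol used to reach it.

# Minimal POSIX-style write-through sketch (illustrative, not Exchange code):
# O_SYNC and fsync ask that the write be acknowledged only once it is on
# stable media, analogous to FUA / write-through semantics.
import os

def write_through(path, payload):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    try:
        os.write(fd, payload)
        os.fsync(fd)  # explicitly flush to stable storage as well
    finally:
        os.close(fd)

if __name__ == "__main__":
    write_through("/tmp/demo.log", b"committed transaction record\n")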

Maybe a good topic would be to dive into how SMB 3.0 and VHDXs do emulation of the SCSI protocol - but that would only strengthen the case for NFS and no further evidence is required IMO.

Again, thanks for the comment and especially for going further than the bulk of people who just parrot a version of what they have heard in the past from MS.

August 11th, 2015 8:50pm

This topic is archived. No further replies will be accepted.
