Back pressure activated after Exchange 2013 CU5 install (version buckets)

Seeing many of the following events:

Event ID 16028 A forced configuration update for Microsoft.Transport.TransportServerConfiguration has successfully completed.

Followed by this every few hours.

Event ID 15004 Resource Pressure Increased from Medium to High

Version buckets = 219

Did not have this issue before updating to CU5.

I am not using any third-party transport scripts.
August 8th, 2014 6:54pm

Check disk latency. Here are the common counters and desired values for an Exchange 2010 Hub Transport server:

http://technet.microsoft.com/en-us/library/ff367923(v=exchg.141).aspx

Unfortunately, no equivalent document exists for Exchange 2013 with desired counter values, but the link above can still be followed for the common counters (disk, CPU, memory, etc.).
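If it helps, here is a quick way to sample those disk latency counters from PowerShell (a minimal sketch; the 20 ms figure in the comment is only a commonly cited rule of thumb, and you should point it at whichever volume holds the transport queue database):

    # Sample average disk latency (seconds per read/write) every 5 seconds for one minute.
    # Sustained values much above 0.020 (20 ms) on the queue volume are worth a closer look.
    Get-Counter -Counter @(
        "\LogicalDisk(*)\Avg. Disk sec/Read",
        "\LogicalDisk(*)\Avg. Disk sec/Write"
    ) -SampleInterval 5 -MaxSamples 12 |
        ForEach-Object { $_.CounterSamples } |
        Where-Object { $_.InstanceName -ne "_total" } |
        Format-Table InstanceName, Path, CookedValue -AutoSize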

August 9th, 2014 8:06pm

Hi,

The value "High" means that the resource is severely overused and full back pressure is applied. Based on my research, the number of version buckets usually increases to unacceptably high levels because of virus issues, the mail queue database being locked by antivirus software, slow transmission of large messages, poor hard disk performance, and other factors.

It is not easy to check directly why the version bucket count increased; however, the following things are worth checking (a quick event-log query is sketched after this list):

1. Make sure the server is not affected by a virus.

2. Temporarily stop or disable any antivirus or anti-spam software and check the result.
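To see how often the pressure level is actually changing, the transitions can be pulled from the Application log. A minimal sketch below; the event ID is the one reported above and the source name is as I recall it, so please verify both against the events on your own server:

    # Count how many times the "resource pressure increased" event (15004, reported above)
    # was logged by the transport service in the last 7 days, grouped by day.
    Get-WinEvent -FilterHashtable @{
        LogName      = 'Application'
        ProviderName = 'MSExchangeTransport'
        Id           = 15004
        StartTime    = (Get-Date).AddDays(-7)
    } | Group-Object { $_.TimeCreated.Date } | Format-Table Name, Count -AutoSize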

Best regards,

If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

August 11th, 2014 8:30am

I'm having the same issues described here after the CU5 update:

http://social.technet.microsoft.com/Forums/en-US/e65e819b-6452-48c3-bcca-35f4dd1c42e4/exchange-2013-service-pack-1-issues?forum=exchangesvrgeneral

There are no viruses or any third-party code running on this box.

The box is running as a Hyper-V guest.  I've had no issues with performance until this update.

August 11th, 2014 1:43pm

Having the same issues here. We went from CU3 to CU5 on a 3-node DAG.

No issues with back pressure before upgrading to CU5. Now it's kicking in 1-2 times daily for very brief durations. We have ruled out CPU, disk, network, SAN, and memory issues so far. Temporarily uninstalling the system antivirus had no effect. We have opened a ticket with Microsoft.

August 11th, 2014 7:12pm

Hello,

I am trying to involve someone familiar with this topic to look further into this issue. There might be some delay; I appreciate your patience.

Thank you for your understanding.

Best regards,

If you have feedback for TechNet Subscriber Support, contact tnmff@microsoft.com

August 13th, 2014 1:52am

For now, I created a scheduled task to restart the transport service whenever the back pressure event is logged.
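Something along these lines, run from an elevated prompt, will set up such a task (a sketch only; the task name is arbitrary, and restarting transport on every 15004 is a stopgap rather than a fix):

    # Create a task that restarts the transport service whenever event 15004
    # from the MSExchangeTransport source appears in the Application log.
    schtasks /Create /TN "Restart-Transport-On-BackPressure" /RU SYSTEM /SC ONEVENT /EC Application `
        /MO "*[System[Provider[@Name='MSExchangeTransport'] and EventID=15004]]" `
        /TR "powershell.exe -NoProfile -Command Restart-Service MSExchangeTransport -Force"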
August 13th, 2014 2:00am

Hi,

A list of changes that are made to the message queue database is kept in memory until those changes can be committed to a transaction log. Then the list is committed to the message queue database itself. These outstanding message queue database transactions that are kept in memory are known as version buckets. The number of version buckets may increase to unacceptably high levels because of an unexpectedly high volume of incoming messages, spam attacks, problems with the message queue database integrity, or hard drive performance.

As a workaround, I suggest changing three parameters as shown below:

 

1). Browse to the following folder on the Exchange 2013 Mailbox server: C:\Program Files\Microsoft\Exchange Server\V15\Bin.

2). Open the EdgeTransport.exe.config file with Notepad, find the following parameters, and increase their values, for example:

 

<add key="VersionBucketsHighThreshold" value="400" />

<add key="VersionBucketsMediumThreshold" value="240" />

<add key="VersionBucketsNormalThreshold" value="160" />

 

These values sometimes need to be tweaked depending on how the server is performing; the settings above are for reference.

 

Note: before editing, please copy the EdgeTransport.exe.config file to another location as a backup first.

 

Then, please restart the Microsoft Exchange Transport service in services.msc for the change to take effect.
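If you prefer to do the backup and restart from PowerShell, a minimal sketch (it assumes the ExchangeInstallPath environment variable created by Exchange setup points at your installation; the .bak name is just an example):

    # Back up the current config before editing it.
    $configPath = Join-Path $env:ExchangeInstallPath "Bin\EdgeTransport.exe.config"
    Copy-Item $configPath "$configPath.bak"

    # After saving the new thresholds, restart transport so they are picked up.
    Restart-Service MSExchangeTransport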

 

Meanwhile, to help protect Exchange Server resources, I also recommend configuring message size limits to avoid very large attachments (an example is shown below).
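For example, the organization-wide limits can be set from the Exchange Management Shell. The 25 MB figure below is only an illustration, so choose whatever suits your environment, and keep in mind that connector and mailbox level limits can also apply:

    # Cap organization-wide send/receive sizes so one oversized message cannot flood the queue database.
    Set-TransportConfig -MaxSendSize 25MB -MaxReceiveSize 25MB

    # Confirm the new values.
    Get-TransportConfig | Format-List MaxSendSize, MaxReceiveSize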

For more information about Back Pressure in Exchange 2013, we can refer to this article:

Back Pressure

http://technet.microsoft.com/en-us/library/bb201658(v=exchg.150).aspx

Best Regards,

Tracy Liang

August 13th, 2014 7:05am

Hi,

Is there any update on this issue?

Best Regards,

Tracy Liang

August 15th, 2014 8:28am

Users were scanning large documents to email, which was creating the back pressure situation. They are now scanning these directly to a file share, although they were doing this before the update with no issue.
August 28th, 2014 12:16pm

We have been struggling with this issue since, I believe, 2013 SP1. We could barely send a 60 MB attachment without the "version buckets" back pressure monitoring causing 30+ minute transport delays. After exhausting many other potential causes, including extensive performance testing, we tried to replicate the issue on a clean 2013 CU7 server in our lab environment (same hardware config as production), but couldn't. After copying the EdgeTransport.exe.config from our production server to the CU7 lab server, the same back pressure issue appeared. After copying the out-of-the-box CU7 EdgeTransport.exe.config to our production server, we are no longer experiencing back pressure issues.

Hope this helps others as this was a fairly difficult one to track down.

Example of an Event log message we had been receiving...

The resource pressure increased from Normal to High.

The following resources are under pressure:
Version buckets = 288 [High] [Normal=80 Medium=120 High=200]
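In case it saves someone else the digging: once you have a known-good copy of the file, a quick diff shows exactly which lines differ (a sketch only; the paths below are just examples):

    # Compare a suspect EdgeTransport.exe.config against a known-good default copy.
    $prod = Get-Content "C:\Temp\EdgeTransport.exe.config.production"
    $good = Get-Content "C:\Temp\EdgeTransport.exe.config.cu7-default"
    Compare-Object $prod $good | Format-Table SideIndicator, InputObject -AutoSize -Wrap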
March 18th, 2015 2:16pm


The file that resolved the issue did not contain the following keys:

    <add key="EnableResourceMonitoring" value="true" />
    <add key="ResourceMonitoringInterval" value="00:00:02" />
    <add key="PercentageDatabaseDiskSpaceUsedHighThreshold" value="0" />
    <add key="PercentageDatabaseDiskSpaceUsedMediumThreshold" value="0" />
    <add key="PercentageDatabaseDiskSpaceUsedNormalThreshold" value="0" />
    <add key="PercentageDatabaseLoggingDiskSpaceUsedHighThreshold" value="0" />
    <add key="PercentageDatabaseLoggingDiskSpaceUsedMediumThreshold" value="0" />
    <add key="PercentageDatabaseLoggingDiskSpaceUsedNormalThreshold" value="0" />
    <add key="PercentagePrivateBytesUsedHighThreshold" value="0" />
    <add key="PercentagePrivateBytesUsedMediumThreshold" value="0" />
    <add key="PercentagePrivateBytesUsedNormalThreshold" value="0" />
    <add key="VersionBucketsHighThreshold" value="200" />
    <add key="VersionBucketsMediumThreshold" value="120" />
    <add key="VersionBucketsNormalThreshold" value="80" />
    <add key="VersionBucketsHistoryDepth" value="10" />
    <add key="BatchPointHighThreshold" value="8000" />
    <add key="BatchPointMediumThreshold" value="4000" />
    <add key="BatchPointNormalThreshold" value="2000" />
    <add key="BatchPointHistoryDepth" value="300" />
    <add key="BatchPointUseCostForPressure" value="true" />
    <add key="BatchPointBatchSize" value="40" />
    <add key="BatchPointBatchTimeout" value="00:00:00.100" />
    <add key="BatchPointItemExpiryInterval" value="00:05:00" />
    <add key="SubmissionQueueHighThreshold" value="10000" />
    <add key="SubmissionQueueMediumThreshold" value="4000" />
    <add key="SubmissionQueueNormalThreshold" value="2000" />
    <add key="SubmissionQueueHistoryDepth" value="300" />
    <add key="SmtpBaseThrottlingDelayInterval" value="00:00:00" />
    <add key="SmtpMaxThrottlingDelayInterval" value="00:00:55" />
    <add key="SmtpStepThrottlingDelayInterval" value="00:00:01" />
    <add key="SmtpStartThrottlingDelayInterval" value="00:00:01" />
    <add key="PercentagePhysicalMemoryUsedLimit" value="94" />
    <add key="DehydrateMessagesUnderMemoryPressure" value="true" />
    <add key="PrivateBytesHistoryDepth" value="30" />

August 6th, 2015 1:00pm
