Does anybody see SB-Messaging receive locations being disabled with a MessagingCommunicationException when the Azure Service Bus is unavailable? Is this expected behavior / 'by design'? We see this exception a few times a day, and I don't see any retry attempts in the event log. Our current workaround is a PowerShell script, run every 15 minutes, that enables the receive location if it's disabled. Below is a sample of the error from the event log.
The receive location "Warnings_SB_New" with URL "XXXXXX" is shutting down. Details: "System.ServiceModel.EndpointNotFoundException: Error during communication with Service Bus. Check the connection information, then retry. ---> Microsoft.ServiceBus.Messaging.MessagingCommunicationException: Error during communication with Service Bus. Check the connection information, then retry. ---> System.ServiceModel.CommunicationObjectFaultedException: Internal Server Error: The server did not provide a meaningful reply; this might be caused by a premature session shutdown." TrackingId:bb29d71c-a4d5-430c-a2d9-8a2e3acfad1d, Timestamp:5/9/2014 2:38:19 AM

There doesn't seem to be any retry count property on the SB-Messaging receive adapter. So I would assume the behavior is the same as a FILE receive adapter polling an NTFS folder: it also disables itself as soon as a problem occurs (folder unavailable, wrong credentials, etc.).
However, if the Azure Service Bus itself (and not your Internet connection?) really becomes unavailable an average of two times a day, I would say this architecture is not very clever.
MS: Is this the case for the Azure Services and the SB-Messaging receive Adapter?
Morten la Cour
- Marked as answer by Pengzhen Song, Microsoft contingent staff, Moderator, Friday, May 16, 2014 10:15 AM
If you found a solution to this, please post it here. We see this error on a daily basis, sometimes several times per day, on several of our queues. We suspect an internal networking problem is the cause, but we haven't proved it yet. We see two BizTalk Server events in our Application event logs, 5704 and 5649, the latter being the event you reported in your original post. In the interim, until we identify the underlying cause, we have a Scheduled Task that watches for those event log events rather than run on a timer. When the event occurs, the action runs a PowerShell script to enable the queue and email the folks who care.
FYI, the Task Scheduler has a built-in event log trigger, so we didn't have to write any special script for that part. Our task has two actions: one runs a script to enable the queue, and the other is a built-in action that emails us every time it runs.
It's a hack, I know, but it keeps us up and running until we find the critter that's causing the problem.
- Edited by klibbert Thursday, September 11, 2014 10:40 PM
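A note on the event-triggered task: Task Scheduler's command-line tool can register an on-event trigger directly with `/SC ONEVENT`. The following is a sketch only; the task name, script path, and the assumption that these BizTalk events are written by a provider named "BizTalk Server" in the Application log are mine, not from the posts above, so verify the provider name against an actual 5649/5704 event in Event Viewer first.

```shell
:: Hypothetical sketch: fire a task when event 5649 or 5704 appears in the
:: Application log, and run a script that re-enables the receive location.
:: Task name, script path, and provider name are illustrative assumptions.
schtasks /Create /TN "EnableSBReceiveLocation" ^
  /TR "powershell.exe -NoProfile -ExecutionPolicy Bypass -File C:\Scripts\RestartELCRLs.ps1 ping-recv-loc" ^
  /SC ONEVENT /EC Application ^
  /MO "*[System[Provider[@Name='BizTalk Server'] and (EventID=5649 or EventID=5704)]]" ^
  /RU SYSTEM
```

The email action can then be added as a second action on the same task, as described above.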
Hi R. de Veen,
Below is the script we used with BTS2013. We created a command line and set up a scheduled task. Again, we don't use this script with BTS2013 R2, since we do not see this issue there.
# Sample from https://github.com/tomasr/bts-ps-scripts/blob/master/bts-ports.ps
#
# declare our parameters: the action to take
#
param(
    [string] $action = $(throw 'need action'),
    [string] $name
)

$ctpRLs = @(
    'Errors_SB_New'
    #,'Error_OldTopic_OldLoggingDB'
    ,'Information_SB_New'
    ,'Warnings_SB_New'
    #,'Information_NewTopic_OldLoggingDB'
    ,'Critical_SB_New'
    #'Critical_OldTopic_OldLoggingDB'
)

#
# functions for pinging receive locations
#
function uti-ping-recv-loc {
    foreach ($rl in $ctpRLs) {
        Write-Host "`n"
        if (($rl -eq '') -or ($rl -eq $null)) {
            throw "Receive location cannot be blank."
        }

        # Validate that this is a valid receive location
        $recvloc = Get-WmiObject MSBTS_ReceiveLocation `
            -Namespace 'root\MicrosoftBizTalkServer' `
            -Filter "Name='$rl'"
        if ($recvloc -eq $null) {
            Write-Host "'$rl' does not exist. Skipping..." -ForegroundColor Red -BackgroundColor Black
        }
        else {
            Write-Host "Ping '$rl'..."
            if ($recvloc.IsDisabled) {
                Write-Host "Restarting '$rl'..."
                bts-set-recv-loc-status -name $rl -status 'enable'
                Start-Sleep -s 10

                # Re-query so we check the current state rather than the
                # cached WMI object (the original re-read the stale object)
                $recvloc = Get-WmiObject MSBTS_ReceiveLocation `
                    -Namespace 'root\MicrosoftBizTalkServer' `
                    -Filter "Name='$rl'"
                if ($recvloc.IsDisabled) {
                    Write-Host "Failed to enable $rl. Check the event log for more information." -ForegroundColor Red -BackgroundColor Black
                }
                else {
                    Write-Host "'$rl' is enabled."
                }
            }
            else {
                Write-Host "'$rl' is enabled."
            }
        }
    }
    Write-Host "`nComplete!!!`n`n" -ForegroundColor Green -BackgroundColor Black
}

#
# enable or disable the specified receive location
#
function bts-set-recv-loc-status($name, $status) {
    $recvloc = Get-WmiObject MSBTS_ReceiveLocation `
        -Namespace 'root\MicrosoftBizTalkServer' `
        -Filter "Name='$name'"
    switch ($status) {
        'disable' { [void]$recvloc.Disable() }
        'enable'  { [void]$recvloc.Enable() }
    }
}

#
# controls a send port
#
function bts-set-send-port-status($name, $status) {
    $sendport = Get-WmiObject MSBTS_SendPort `
        -Namespace 'root\MicrosoftBizTalkServer' `
        -Filter "Name='$name'"
    switch ($status) {
        'start'    { [void]$sendport.Start() }
        'stop'     { [void]$sendport.Stop() }
        'enlist'   { [void]$sendport.Enlist() }
        'unenlist' { [void]$sendport.UnEnlist() }
    }
}

function is-valid-opt($check, $options) {
    foreach ($val in $options) {
        if ($check -eq $val) { return $true }
    }
}

#
# main script
#
switch ($action) {
    'ping-recv-loc' { uti-ping-recv-loc }
    '/?' {
        Write-Host "`n"
        Write-Host "Syntax: .\RestartELCRLs.ps1 ping-recv-loc"
        Write-Host "`n"
    }
}
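For completeness, the scheduled task's action can be a single command line that runs the script with the ping-recv-loc action. The file name here comes from the script's own /? help text; the C:\Scripts folder path is an illustrative assumption:

```shell
powershell.exe -NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\RestartELCRLs.ps1" ping-recv-loc
```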
Wow, thanks, that was fast!
Is there any way to update the SB-Messaging adapter on BizTalk 2013 to the 2013 R2 version? Is there any KB, CU, or Setup.exe available?