FIM Service Partitioning Guidelines
I understand that we can partition the FIM Service as outlined in the TechNet doc (http://technet.microsoft.com/en-us/library/ff602886(WS.10).aspx#bkmk_Service), which states:

"Using different external names for FIM Service will also allow server partitioning for workflows. When a workflow instance is created the external name of the server is added to the instance. Another server with the same external name can pick up and resume hydrated workflows. This partitioning will ensure that workflows started on the FIM-Admin instance never will be processed by the FIM-User instances ensuring responsive servers used by end-users."

Some questions I have on this:

1 - If you have multiple nodes in a FIM Service partition set, can you control how many workflows are processed at a time centrally, or per node? I have found you can easily bring your SQL server to a halt if you fire a Run-On-Policy-Update (ROPU) workflow with 6 commonly named FIM Service partitions running. Is there a throttling mechanism, or a distributed throttle, that can be employed? It seems each FIM Service may have its own throttle (what is it? can it be configured?), but what about across instances?

2 - If a workflow triggered on FIM-ADMIN creates a request for approval, is anything impacted if the approval is responded to from the FIM-USER instance? Does partitioning the FIM Service instances allow cross-communication between the two instances? My worry is ending up with requests, created by workflows that began on FIM-ADMIN, that cannot be responded to from the Portal, which uses the FIM-USER service instance.

What "gotchas" can exist with partitioning services? I could see a real need to create, say, 3 nodes of the FIM-ADMIN service and start a ROPU workflow on the FIM-ADMIN group, so that it does not affect the end users on the Portal using FIM-USER, but I don't want to paint myself into a corner.

Any insight would be appreciated.

Thanks,
Jef
-----
http://jeftek.com
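For readers following along: the external name that drives this partitioning is set per server in the FIM Service configuration file (Microsoft.ResourceManagement.Service.exe.config). A minimal sketch of the relevant fragment is below, assuming the documented externalHostName attribute on the resourceManagementService element; the host name "FIM-ADMIN" is just the example instance name from the question, and any other attributes on the element are omitted here.

```xml
<!-- Microsoft.ResourceManagement.Service.exe.config (fragment, sketch only) -->
<!-- Every server that should belong to the same partition set must use the
     same externalHostName value; servers with a different value (e.g. a
     "FIM-USER" group) will not pick up workflows hydrated by this group. -->
<resourceManagementService externalHostName="FIM-ADMIN" />
```

Each node edits its own copy of this file, so the partition membership is effectively a per-server setting rather than something managed centrally.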
August 19th, 2010 1:58am

