FIM 2010 - Performance when processing groups with many group members
Hi experts,

I have a question regarding the performance of FIM 2010 when processing large user groups. Currently we use MIIS 2003 to manage about 110,000 user objects (about 40 attributes per object) and about 10 user groups. Data is retrieved from 4 different sources (2x MS ADAM, MS AD, Sun ONE Directory Server), merged, and pushed into one target system (MS ADAM). Groups are assigned to users automatically based on certain user attributes; we use the Group Populator application for that.

This setup worked fine until two of our user groups became very large (>50,000 members). Synchronisation of these groups is now extremely slow and takes a very long time (sometimes multiple days).

We now plan to replace the current setup with FIM 2010. I have already found out that we no longer need the Group Populator application, as FIM 2010 comes with a portal that allows admins to define automatic group membership based on the user's attributes. The question is how FIM 2010 will deal with large groups: will it still take that much time, or will it be no problem at all? I checked the Microsoft capacity planning document but could not find a scenario similar to ours; Microsoft always assumes a large number of groups with just a few members each.

If any of you have already dealt with a scenario like the one described above, please let me know what to expect. I would also appreciate any help regarding the sizing of the solution.

Best regards and many thanks in advance,
Martin
July 27th, 2011 3:30am

This topic is covered quite well in this post. The bottleneck discussed there is specifically the SQL MA, something which thankfully won't affect you. However, the AD MA itself can struggle in this scenario, and I understand the non-standard approach implemented by Joe (see the thread) was successful in overcoming these limitations. Any decision to deviate from the standard approach should not be taken lightly, though, as it will invariably involve custom ECMA development.

Incidentally, one approach I've seen at another site with similar group membership numbers was to design a nested group structure. Management becomes more complex, but sync times improve significantly, e.g. adding one member to one of 10 groups of 10,000 members costs roughly 10x less than adding one member to one group of 100,000 members (see the sketch below).

Bob Bradley (FIMBob!) ... now using Event Broker 3.0 @ http://www.unifysolutions.net/ourSolutions.cfm?solution=event for just-in-time delivery of FIM 2010 policy via the sync engine
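A minimal back-of-envelope sketch of that arithmetic, assuming (and this is only an assumption, not a measured FIM characteristic) that the cost of processing a single membership change scales roughly linearly with the number of reference values in the group being touched:

```python
# Rough cost model: syncing a membership change is assumed to cost roughly
# in proportion to the number of member references in the group touched.

TOTAL_MEMBERS = 100_000

def flat_cost(total_members: int) -> int:
    """One big group: every change touches all `total_members` references."""
    return total_members

def nested_cost(total_members: int, num_subgroups: int) -> int:
    """Members split evenly across nested subgroups: a change touches one subgroup."""
    return total_members // num_subgroups

if __name__ == "__main__":
    flat = flat_cost(TOTAL_MEMBERS)            # 100000
    nested = nested_cost(TOTAL_MEMBERS, 10)    # 10000
    print(f"flat group of 100K members:    cost ~{flat}")
    print(f"one of 10 groups of 10K each:  cost ~{nested}")
    print(f"relative cost per change:      ~{flat / nested:.0f}x")
```

Under that assumption the nested layout makes each individual membership change about 10x cheaper, at the price of managing the nesting yourself.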
July 27th, 2011 8:43am

FIM is much faster at this in my experience, but it's still not going to be super fast. I haven't measured this with any large groups on your scale, though. If you need to run your normal provisioning on a more regular schedule, you might consider a separate FIM Sync/Portal instance just for large groups.

My Book - Active Directory, 4th Edition My Blog - www.briandesmond.com
July 27th, 2011 9:36pm

Hi Bob, hi Brian, thank you very much for your suggestions and recommendations. They helped me better understand where the potential bottlenecks are. I am not sure yet how we will implement it; I guess we will give it a try and run a first test with the standard implementation. Depending on the results we may have to change some things. I also have some questions regarding the required hardware setup, but I think I need to create a new thread for that. Best regards, Martin
August 1st, 2011 5:17am

Although this has been well discussed already, I just want to mention that the right infrastructure makes all the difference. The three points to look out for are simple:

- The fastest single processor is always better, since the sync process is single threaded (many slower cores are worse than one or two very fast ones).
- SQL, SQL, SQL: make sure your SQL I/O is optimized, especially when you do groups (large amounts of reference values).
- If SQL is remote, make sure your network is as fast as possible.

The first two are the killers most of the time, with SQL being the biggest. I have seen a 100% performance increase at sites where we just optimised SQL (one way to check I/O latency is sketched below).

HTH

Almero Steyn (http://www.puttyq.com) [If a post helps to resolve your issue, please click the "Mark as Answer" or "Helpful" button of that post. By marking a post as Answered or Helpful, you help others find the answer faster.]
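A minimal sketch of one way to check that SQL I/O latency, assuming a Python/pyodbc client and the default sync engine database name FIMSynchronizationService; the server name and driver string are placeholders, not values from this thread:

```python
# Rough sketch: read per-file I/O latency for the FIM sync database from
# SQL Server's sys.dm_io_virtual_file_stats DMV. Server name, driver and the
# database name "FIMSynchronizationService" are assumptions; adjust as needed.
import pyodbc

CONN_STR = (
    "DRIVER={SQL Server};"      # assumed driver name
    "SERVER=your-sql-server;"   # placeholder
    "Trusted_Connection=yes;"
)

QUERY = """
SELECT DB_NAME(vfs.database_id)                           AS database_name,
       mf.physical_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id AND mf.file_id = vfs.file_id
WHERE DB_NAME(vfs.database_id) = 'FIMSynchronizationService'
"""

with pyodbc.connect(CONN_STR) as conn:
    for row in conn.cursor().execute(QUERY):
        print(f"{row.physical_name}: avg read {row.avg_read_ms} ms, "
              f"avg write {row.avg_write_ms} ms")
```

Sustained average latencies well above a few milliseconds on the sync database files are usually a sign that the storage, not the sync engine, is the bottleneck.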
August 1st, 2011 5:16pm

This topic is archived. No further replies will be accepted.
