Number of distribution points per secondary site - in practice

Hello!  

I know there is a recommended limit of 250 distribution points per secondary site.  I would like to hear real-world examples of implementations with a large number of DPs reporting to a secondary site, and any successes or issues you may have encountered.  

In my experience, having a huge number of DPs reporting to one site produces other practical management issues (beyond pure sizing), such as more cumbersome troubleshooting of packages, delays in package distribution, and so on.  

In the end, I would like to hear what my colleagues would recommend in the real world for the number of DPs reporting to a secondary site in a hub-and-spoke model, and/or your experiences with this post-deployment.  

Thank you, 

M

August 22nd, 2013 5:21pm

First, it's not a recommended limit, it's a supported limit. Two different things.

As far as the number, it simply depends upon your topology. Given that secondary sites only support 5,000 clients, having 250 DPs would be a stretch (that's a DP for every 20 clients), but it's certainly possible.

In general, I'm anti-secondary site, but there are always many factors to consider.

As far as actual issues with that many DPs, that all once again depends upon your topology and network capabilities. ConfigMgr will handle that many fine.

August 22nd, 2013 6:20pm

I'd be curious to know what is driving the thought of having 250 DPs on a secondary site.
August 23rd, 2013 8:45am

Hi:

Thank you Jason for your feedback.  I am a bit concerned about the potential operational and day-to-day management issues with so many DPs on a secondary site, and about distribution manager becoming overloaded. 

This is a branch office scenario, and OSD is required functionality, so they need around 1,000 DPs (one per branch) and some secondary sites to divide the load.  So the driving question is: how many secondary sites do we put in?  Network-wise, it is all part of a meshed MPLS network.
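
To put rough numbers on it, here is a quick back-of-the-envelope sketch in PowerShell (the 250 figure is the supported limit mentioned above; the lower per-site targets are just candidates for discussion):

# How many secondary sites would ~1,000 branch DPs need at various DPs-per-site targets?
$totalDPs = 1000
foreach ($dpsPerSite in 250, 150, 100) {
    $sites = [math]::Ceiling($totalDPs / $dpsPerSite)
    "{0,3} DPs per secondary site -> {1,2} secondary sites" -f $dpsPerSite, $sites
}
# 250 per site -> 4 sites (at the supported ceiling)
# 150 per site -> 7 sites
# 100 per site -> 10 sites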

Thank you,

August 23rd, 2013 11:20am

Having a DP on a secondary site is no different from having it on a primary -- it's the exact same process, and at the end of the day it's simply a file copy. It's supported because that's what's been tested, so there's no reason to doubt the capabilities.

Do you know about pull DPs, though? In R2, the supported number of DPs should increase when pull DPs are used, obviating the need to insert secondary sites.
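
For reference, converting an existing DP into a pull DP can be scripted from the R2 console's PowerShell module. A minimal sketch only; the site code and server names below are placeholders, and the parameter names should be verified against your version of the module:

# Minimal sketch: turn an existing branch DP into a pull DP sourcing from a hub DP.
# "PS1" and the server FQDNs are placeholders for this example.
Import-Module ConfigurationManager
Set-Location PS1:    # the ConfigMgr provider drive is named after your site code

Set-CMDistributionPoint -SiteSystemServerName "branch-dp01.contoso.com" `
    -EnablePullDP $true `
    -SourceDistributionPoint "hub-dp01.contoso.com"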

August 23rd, 2013 11:28am

In my opinion, the placement of secondary sites should be decided by geography as much as by the number of DPs. Deploy secondary sites in geographical regions to support the DPs in each region, spreading them as evenly as possible.

August 23rd, 2013 11:31am

Hi:

Yes, pull DPs are a good point and worth exploring.

I'm not doubting it will work; it's just that I have seen issues in the past where packages get stuck (or hit versioning issues, etc.) and have to be re-sent, removed, or refreshed, and the number of DPs you are managing at that site then affects how easily you can resolve it.
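
For what it's worth, that kind of triage can at least be scripted against the SMS provider rather than clicked through per DP. A rough sketch, where the site server name, site code, and package ID are all placeholders (State 0 means installed; anything else is pending, retrying, or failed):

# Rough sketch: list every DP where a given package is NOT fully installed.
# "SITESERVER", "PS1", and the PackageID are placeholders for this example.
$pkgID = "PS100123"
Get-WmiObject -ComputerName "SITESERVER" `
    -Namespace "root\SMS\site_PS1" `
    -Class SMS_PackageStatusDistPointsSummarizer `
    -Filter "PackageID='$pkgID' AND State<>0" |
    Select-Object ServerNALPath, State, SourceVersion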

So I am wondering whether recommending 100-150 DPs per secondary site would make things more manageable in the long term.

Brgds

August 23rd, 2013 12:56pm


Just backing up John's last post (and disclaimer: I work for 1E): even considering 1,000 DPs under any topology/configuration is crazy, in my opinion. There is a ton of not-so-obvious effort involved in maintaining that crowd, as you alluded to in an earlier post, to say nothing of the cost of hardware, VM licensing/infrastructure, OS licenses, and so on. And don't lose sight of the security implications of that many added IIS instances in the estate.

Our Nomad 2012 product is designed to eliminate all that wasted hardware and needless labor. We are deployed in environments ranging from 500 to 450,000 managed client PCs. Not only are the basic DP roles handled by the existing PCs with no disruption to users, but you can also use those same systems for significant improvements to the OSD experience, including user state migration, PXE service points, running Nomad inside the WinPE environment, and so on. All this is done with zero added consoles and full augmentation of, and integration with, ConfigMgr, and with no impact to normal business traffic on the WAN while moving any and all content anywhere in the estate, regardless of the type or quality of the links.

Enough of the soapbox. Contact me directly if you'd like to know more beyond what's exposed in the above link.

Good luck.

August 23rd, 2013 5:07pm

Hi Mustafa,

I understand your pains.  I was working on an SCCM implementation where, if we had gone with a traditional SCCM infrastructure, we would have been looking at roughly 2,400 SCCM servers.  Obviously that isn't a viable option for most organizations.  We ended up using Adaptiva OneSite to eliminate all of those servers; in the end we had fewer than 10 servers, which created a much more manageable SCCM environment.  In our implementation, the ROI on Adaptiva OneSite was a no-brainer.

I will say that Adaptiva OneSite and 1E Nomad are competing products, and I would recommend evaluating both fully before making a decision.  Adaptiva OneSite definitely scales for the enterprise, so there are no concerns there.  I just wanted to share my pleasant real-life experience with the product and let you know that there is more than one option out there.

If you'd like to hear more about my experience with the product, or have other questions, please just let me know.  I'd love to help!

August 26th, 2013 8:21pm

Thank you Jason and thank you to everyone for their insightful comments.

It helps to know you have implemented this successfully in the real world.  I will read up on it a bit, and thank you for the link to the MMS session.

After this discussion, I have a more rounded frame of reference when discussing options with our customers.

Brgds

Mustafa

August 27th, 2013 10:21am

Glad we could be of help to you Mustafa.
August 27th, 2013 10:37am

I agree with Jason. I have managed hierarchies with over 18,000 distribution point servers, and I can tell you that having remote servers is a pain to support.  Using a peer-to-peer option like OneSite will save you many, many headaches.  I have supported/implemented 1E's Nomad product in an environment of over 200,000 clients.  While it did provide benefit, its two shortcomings (IMO) were how the available bandwidth was determined (we had to set the work rate limit well below the desired percentage because it would exceed that setting on intermediary links) and the master model for fan-out, which does not work well for large subnets (in fact, at remote locations with more than 50 machines we still had to place a DP). 

OneSite does daisy-chain peer-to-peer, which scales much better, and bandwidth is managed on each router interface rather than as an aggregate of segments.  OneSite also handles MP policy fan-out, reducing the number of MPs you need, and throttles large transfers such as inventory or policy.  The product keeps at least one machine per subnet awake to help ensure the cache is available; if the content isn't on that machine, it can wake the machine that does have it.

Another nice touch: when content is delivered for the first time, it is dropped on each subnet along the traceroute, which means it is available one hop away if no one local has it.  Caching of local package/application content dynamically uses free hard drive space while not reporting any of it as used; if the OS needs the space, OneSite automatically removes the package with the highest occurrence in that subnet.  You can have 3 GB image files in the cache, but the end user will not see their hard drive space being consumed.

Definitely do some testing of these products. I think you will be very happy you did. 

August 27th, 2013 12:40pm
