BranchCache between two 2012 R2 servers

Hi,

I found the BranchCache documentation lacking, as is more and more usual with MS. There's a lot of info on what it does and a lot of marketing talk, but not much technical detail - at least not for troubleshooting. I have a remote site with only one 2012 R2 server. No clients. That server holds a copy of some data from the main site. In several places there is info on how 'well BranchCache works with deduplication in 2012 R2'.

My goal is to use BranchCache to limit WAN bandwidth usage while syncing the data over SMB. What I have done so far: enabled BranchCache for Network Files on the main site's file server, configured its GPO latency threshold to 0 ms, allowed hash publication for all shares (just to be sure), and additionally enabled BranchCache on the specific share.
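
For completeness, this is roughly what that setup boils down to on the main-site file server if you script it (a sketch - 'Data' is just an example share name; the hash publication and latency settings themselves still come from the GPO):

 # Install the 'BranchCache for Network Files' role service
 Install-WindowsFeature FS-BranchCache
 # Enable BranchCache caching on the specific share (same as the checkbox in the share's offline settings)
 Set-SmbShare -Name "Data" -CachingMode BranchCache -Force
 # Quick check of the content-server state
 Get-BCStatus | Select-Object -ExpandProperty ContentServerConfiguration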

On the remote site, which has just one server, I enabled BranchCache in distributed cache mode, as I see no use for hosted cache mode in this environment (but please prove me wrong if needed!). It's enabled through GPO, and it is in fact enabled:

PS C:\Users\<me>> get-bcstatus

BranchCacheIsEnabled        : True
BranchCacheServiceStatus    : Running
BranchCacheServiceStartType : Automatic


ClientConfiguration:

    CurrentClientMode           : DistributedCache
    HostedCacheServerList       :
    HostedCacheDiscoveryEnabled : False


ContentServerConfiguration:

    ContentServerIsEnabled : True


HostedCacheServerConfiguration:

    HostedCacheServerIsEnabled        : False
    ClientAuthenticationMode          : Domain
    HostedCacheScpRegistrationEnabled : False


NetworkConfiguration:

    ContentRetrievalUrlReservationEnabled : True
    HostedCacheHttpUrlReservationEnabled  : True
    HostedCacheHttpsUrlReservationEnabled : True
    ContentRetrievalFirewallRulesEnabled  : True
    PeerDiscoveryFirewallRulesEnabled     : True
    HostedCacheServerFirewallRulesEnabled : True
    HostedCacheClientFirewallRulesEnabled : True


HashCache:

    CacheFileDirectoryPath               : C:\Windows\ServiceProfiles\NetworkService\AppData\Local\PeerDistPub
    MaxCacheSizeAsPercentageOfDiskVolume : 1
    MaxCacheSizeAsNumberOfBytes          : 533169397
    CurrentSizeOnDiskAsNumberOfBytes     : 29433856
    CurrentActiveCacheSize               : 0


DataCache:

    CacheFileDirectoryPath               : C:\Windows\ServiceProfiles\NetworkService\AppData\Local\PeerDistRepub
    MaxCacheSizeAsPercentageOfDiskVolume : 5
    MaxCacheSizeAsNumberOfBytes          : 2665846985
    CurrentSizeOnDiskAsNumberOfBytes     : 29433874
    CurrentActiveCacheSize               : 0

    DataCacheExtensions:

Both the main-site and remote-site volumes are deduped, by the way. From the sparse info available, I understand that should actually help BranchCache, as the files are already hashed. But no matter how often I copy a single file from the main site, I never get any results. I have perfmon open with all BranchCache counters, but they don't reflect a single action or byte at all. I have followed https://mizitechinfo.wordpress.com/2014/12/30/step-by-step-deploy-configure-branchcache-in-windows-server-2012-r2/, https://gallery.technet.microsoft.com/Windows-Server-2012-R2-and-c18a6dd1 and https://technet.microsoft.com/library/jj572990 to no avail.
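
(For reference, this is how I sample those counters from PowerShell instead of perfmon - I believe the counter set is simply called 'BranchCache'; adjust if Get-Counter -ListSet * shows something different on your box:)

 # Take one sample of every BranchCache counter and list the non-zero ones
 $set = Get-Counter -ListSet "BranchCache"
 (Get-Counter -Counter $set.Counter).CounterSamples |
     Where-Object { $_.CookedValue -gt 0 } |
     Sort-Object CookedValue -Descending |
     Format-Table Path, CookedValue -AutoSize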

I am now installing Windows 8.1 Enterprise, as here and there I read that you need Enterprise to use this. However, all client components seem to be available in 2012 R2 as well.



My concrete questions:

- Is it at all possible to use 2012 R2 as a BranchCache client? Roughly the same question is asked here: https://social.technet.microsoft.com/Forums/windowsserver/en-US/551c55ab-7e49-4a18-8315-13fcf3cab522/branchcache-client-on-a-rd-host?forum=winserverfiles but there's no answer.

- What should I expect BranchCache to do together with dedupe?





January 28th, 2015 2:40pm

I hadn't tried hashgen yet, nor am I familiar with it. I assume the source data (at the main site) needs to be hashed with it? I ran it:

C:\Users\<me>>hashgen "v:\New folder"
Processing directory v:\New folder

 File hpacuoffline-8.75-12.0.iso processed successfully for hash version 1.
 File hpacuoffline-8.75-12.0.iso processed successfully for hash version 2.
 File VSE880POC975906.zip processed successfully for hash version 1.
 File VSE880POC975906.zip processed successfully for hash version 2.

 Processing complete.
 4 Files processed successfully.
 0 Files processed unsuccessfully.

I know I only need V2 hashes with 2012 R2, but I generated both just to be sure. I copied these files to the remote site, both push and pull, but it doesn't work. Still 0 bytes cached.
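
(Side note: I believe the supported, in-box way to do this on 2012 R2 is Publish-BCFileContent, run on the server hosting the share - something like the line below with my two test files; hash publication has to be allowed for that share or there's nothing to publish against.)

 # Pre-generate BranchCache hashes for the two test files on the content server
 Publish-BCFileContent -Path "V:\New folder\hpacuoffline-8.75-12.0.iso", "V:\New folder\VSE880POC975906.zip" -Verbose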

After that I also let the remote site generate hashes, but there are still 0 bytes in the cache, and the copy isn't any faster (the two test files were already at the destination once while hashing).

At the moment I'm not even sure anymore how the process works; I've read too much about it today, I guess. When I have a BranchCache-enabled share at site A and a distributed (or, for that matter, hosted) cache at site B, and I copy a file a few times (three, I believe), the data should end up in the distributed cache, correct? In addition I believe - but the information on this is very scarce - that when the volumes we copy from and to are deduped, BranchCache can use those hashes as well and only transfer changed blocks from site A to B instead of the whole file. A bit like source-side deduplication, more or less. That is what I actually need to achieve. The files that need to be synced are about 98% identical each day, but individually large. I don't want to transfer 100GB when only 1GB is actually new data. That's my goal.

 Thanks for your help so far, greatly appreciated!


January 28th, 2015 9:27pm

Good idea to try BITS; I hadn't thought of that. However, it's still not working. The transfer itself works fine, but no hashing is used, nor is anything cached.

About the Standard/Enterprise thing: I wonder about that, as 2012 R2 has no Enterprise edition - it was merged into Standard (i.e. most features that previously needed Enterprise are available in 2012 (R2) Standard). The only 'higher' option is Datacenter, but I don't think that's needed for BranchCache.


January 30th, 2015 7:49am

On the main site I have BranchCache for Network Files enabled, and the hash publication policy set to allow all shares to be hashed. On the remote site I have BranchCache for Network Files as well as the BranchCache feature itself installed - if you don't, you can't even enable a caching server. I can't get over the idea that you can install and configure it fine but then can't use it. That would be weird beyond belief.

I can try Windows 8.1, but I'm not sure it will run on my remote HP server hardware. I could of course always virtualize it, but that's another bunch of overhead. In addition, while I know they share roughly the same kernel, I don't think I want to use a desktop OS for a backup repository :)

I just don't get, first, why this wouldn't be possible between servers, and second, why once again there is no serious documentation on it...

I'll try a test setup with Windows 8.1 though.
January 31st, 2015 11:23am

I said servers but I meant server. I am testing by copying a 100MB file from the very same share each time. What I expected is that the second or maybe third copy would consume less bandwidth. Alas, it's not working like that (yet...).

By the way, what I've gathered from the scarce documentation available is that, when used together with dedup, any block that already exists in either file is not transferred again. That way it should in fact work even if I copy the same file from two different servers. I'm looking up the source of that info.

January 31st, 2015 5:12pm

FIXED IT (I think)

So, you need to install the Desktop Experience feature on your server. This is because BranchCache over SMB is linked to Offline Files.

Install the feature under 'User Interfaces And Infrastructure' (it needs a reboot)

Go to Sync Center and enable Offline Files (another friggin' reboot)

Then retry your tests - I just dragged a file from the other server's share and it went straight into the BranchCache cache.
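
(If you'd rather script it than click through Server Manager, something like this should cover step one - feature name as I remember it on 2012 R2:)

 # Install Desktop Experience; this pulls in the Offline Files plumbing that BranchCache-over-SMB needs
 Install-WindowsFeature Desktop-Experience -Restart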

Cheers

Phil

http://2pintsoftware.com



January 31st, 2015 6:46pm

Finally had time to spend on this, and I'm glad to report I got it working. The issue was probably that I had stared at it for too long: I forgot to install the Desktop Experience feature on the source server in the datacenter, which made BranchCache over SMB not work. It works now, although it has some quirks - it only works when the remote server pulls data from the source; when the source pushes data to the remote server it doesn't. It works together with dedupe rather well - as soon as dedup has run, those hashes are used.

So there are some quirks I have to work out, but for now I'm good :) Thanks for your help!


March 17th, 2015 10:26am

It still doesn't work the way I thought it would. I have two test files, 1.iso and 2.iso, each about 150MB in size. When I copy them the first time, they use regular bandwidth, as expected. Then I reboot the branch server to be sure the filesystem cache is flushed and so on. Copying them again is done in a second. However, when I combine the two files into one larger file, I would expect that to be fast as well, since the content of the new file is exactly the same as that of the two separate files. That file travels the line in full again, though.

Even if I copy one of the test files on the source server to another file, creating two identical files that differ only in name, the whole file travels the line again.

So my impression is that BranchCache does use the dedupe hashes to avoid spending CPU on creating new hashes, but that it does not use a 'global' store of hashes - it works on a per-file level. As far as my tests go, I cannot get it to do source-side dedupe. That's what I want to achieve: to copy only unique data across, but on a volume-wide basis. We make one full backup every week, which differs by only a few percent from last week's backup. I'd like to transfer only the unique blocks, as the rest is already at the destination side.


March 18th, 2015 2:49pm

In my test setup I deliberately used a rather slow ADSL link, which has about 25-30 ms latency to the datacenter. The BranchCache latency threshold is set to 0 ms. The remote server is set up in distributed mode AND it is in fact a 2012 R2 server. Maybe in your environment you have actual client OSes? Although I tested with Windows 8.1 as well, and it didn't work well there either.

So what's your setup? And can you confirm that when you copy an ISO across, for example, then on the source side copy that ISO to a new file and copy that across as well, the second transfer is fast? And are you using dedupe or regular hashing?

May 30th, 2015 10:36am

We never got BC to work when initiated by the source side, and if I recall the design correctly (my memory is not always the best), it will never be initiated that way.

So you have to pull from the destination side, otherwise it will never work. :( But maybe you can get around that limitation by using remote PowerShell?

Our tests have been with Enterprise client OSes, but it should be the same. Might be worth a revisit to sanitize and document the results a bit more.

//A

May 31st, 2015 8:39am

Thank you for your reply. I know BC only works when pulling, which is fine by me for this setup. I can't get it to work globally, though - as said, only per file. How large is your cache? And do you use dedupe? I'm actually not sure anymore whether it really uses the dedupe store (even though the blocks definitely are in the dedup store).

So maybe you can answer a couple of questions: how large is your BC cache, and are you using dedupe on the source side?

May 31st, 2015 8:46am

I was pulling 4GB WIMs over the wire using Explorer. The cache was set to 25% of disk space (on a 127GB virtual machine), over a 10Mb/s link, with the BC latency setting set to 0, I believe.

Dedupe on both the client side and the server side, i.e. Windows 8 pulling from Server 2012 R2.

A note on the dedupe store: the new file is put together from the BC cache, which is de-dup aware, but it's not actually assembled from the dedupe store per se, I believe.

//Andreas

http://2PintSoftware.com

May 31st, 2015 8:58am

That makes sense. As your WIM files fit completely in BC's cache, it's not clear there whether or not it uses the dedup chunks. My issue is that I am trying to sync 1TB+ files over - way too big for the BC cache. About 95% of the data is deduped though, so its chunks and hashes are in the dedupe store. I'd love to see BC actually use that (preferably on both sides) in order to minimize network traffic.

Still, even with smaller test files it doesn't work like it should. I completely rebuilt my test setup, and it's still the same: the same file copies across fast once it's cached, but the same file under another name doesn't. It seems like it doesn't really work hash-based in my setup.

June 8th, 2015 1:19pm

Hmmm, this must be why it doesn't work. You are simply running out of BC cache space, causing a full download to happen. Simple fix: double the storage space and increase the BC cache size! ;-)

Your suggestion makes sense, but it would have to be another component, I think. One guy on this forum wrote a tool that does what you are asking for - not free of charge, though.

//Andreas

June 9th, 2015 10:37am

But why would I need a large BC cache if BC can use the dedup store (which it claims it does), where 95% of my data already is?

Are you able to find the name or thread of the tool you mentioned? I've been working with Replacador, but that isn't really what I want, as it syncs a whole disk rather than files. Too little flexibility there.


Thanks for your reply!
June 9th, 2015 10:41am

Ah, it was Replicador that I was thinking of. I thought it did files as well - that's why we didn't write one. Interesting... might have a look at that.

When BC does de-dup, it has its own de-dup storage on the "client" side, which is managed by the cache. So basically the entire (de-duped) data set needs to fit inside the cache.
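
A quick way to sanity-check that on the destination box (a sketch - D: stands in for the deduplicated volume):

 # Compare used space on the deduped volume with the BranchCache data cache limit
 $vol   = Get-Volume -DriveLetter D
 $cache = Get-BCDataCache
 "Deduplicated data on disk : {0:N1} GB" -f (($vol.Size - $vol.SizeRemaining) / 1GB)
 "BC data cache limit       : {0:N1} GB" -f ($cache.MaxCacheSizeAsNumberOfBytes / 1GB)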

//Andreas

June 23rd, 2015 7:24am

Hi Andreas,

Yes, that's what I found. And with that I can almost state that Microsoft's claim that BC works together with dedupe is hardly true. Yes, it works - but so does practically any other program, since dedupe sits at such a low level that its filter driver dedupes any request a regular program makes to the OS anyway. 'Working together', from my perspective, would mean that they actually integrate and use each other's chunks to mutual benefit. As it is, they just work 'next to' each other. By that standard I can safely state that all tools, scripts and utilities I wrote myself are dedupe compatible ;)

June 23rd, 2015 7:32am

I did check with the MS devs though, and in theory the BC cache size should match the size of your de-duped content. So if you de-dup the source (which I assume you have), get the size of that, then set the destination machine's BC cache to a bit larger than the source. As content is pulled from the source (push will never be BC aware, AFAIK), it should all go via the BC cache, and de-dupe will come into action since the BC cache is de-duped.
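
The resize itself is the easy part - a sketch along the lines of the Set-BCCache snippet further down in this thread (the percentage is just an example):

 # Grow the data (republication) cache so the whole de-duped data set fits
 Set-BCCache -Cache (Get-BCDataCache) -Percentage 80
 # Verify the new limit
 Get-BCDataCache | Select-Object MaxCacheSizeAsNumberOfBytes, CurrentSizeOnDiskAsNumberOfBytes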

Worth a try?

//Andreas

June 23rd, 2015 8:36pm

I finally found out why BC wasn't working for me. BranchCache for Network Files, i.e. over SMB, at least in my environment seems to have a limit of about 3.8GB per file - not 4GB, which would sound like a more logical limit. I have backup files ranging from about 2GB up to 1.2TB in size. All files up to about 3.8GB work perfectly fine and get 'BranchCached' pretty well. Whenever I transfer a larger file over SMB, it no longer works: the BC counters don't go up. It might have to do with the Offline Files dependency - I don't even understand why MS made it dependent on that anyway. I gave Offline Files the largest amount of space possible, but that didn't help.
The solution for me is as simple as moving to IIS and using HTTP(S) to transfer the files. It needs a little adaptation in my scripts, but that's fine. I've tried a 140GB file with that, and while it isn't finished yet, the BC counters go up just as expected.
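
(For anyone following along: the server-side part of that move is small - installing the BranchCache feature is what makes IIS act as a BranchCache content server for HTTP. A sketch:)

 # On the IIS box serving the backup files
 Install-WindowsFeature BranchCache
 # Hashes for large files can then be pre-generated with Publish-BCWebContent (more on that below)
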
I'll let you know once I've finally achieved what I set out to do.
July 15th, 2015 9:28am

Yeah, make sure you use BITS and not the regular HTTP BranchCache.
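
A pull from the branch side with BITS can be as simple as this (a sketch - the URL and file name are made up; as far as I know BITS only skips BranchCache if the 'do not allow BITS to use Windows BranchCache' policy is enabled):

 # Pull the file over HTTP with BITS from the branch/destination side
 Import-Module BitsTransfer
 Start-BitsTransfer -Source "http://mainsite-web/backups/weekly-full.vbk" `
                    -Destination "D:\Backups\weekly-full.vbk" `
                    -DisplayName "Weekly full via BranchCache"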

A writeup on the topic: http://2pintsoftware.com/branchcache-de-duplication/

//Andreas

July 24th, 2015 7:01am

Just yesterday I had about given up on BranchCache. I've let it run for about a week and a half now, and my BC cache has built up to 1.1TB, using BITS. Yet hardly any blocks are coming from the cache. To test this, I copy the full backup of the next week, while the full of the week before is already there and in the cache. About 1-2% comes from the BC cache, while all the rest has to travel the line - still ending up in the cache. One might say the blocks are simply different; if they weren't, the cache wouldn't keep building up. Yet deduplication dedupes the two files against each other to be practically identical.

So I am terribly confused and, to say the least, frustrated: dedup works fine - hence the blocks are equal. BC doesn't work well - hence the blocks aren't equal.

July 24th, 2015 7:15am

Ok, that's not good. How are you copying it - HTTP or BITS?

What file sizes do you have most of? I think I am going to test this; I need some TB disks though!

Did it work if you started with a smaller size and then built up gradually? Where does it stop working?

What size is your de-duped source volume?

//Andreas

July 24th, 2015 11:00am

Also, can you check BranchCache + DeDup + VSS logs for any errors?
July 24th, 2015 3:29pm

Sorry for the late reply; I've been quite busy with other challenges. There are two things. I am currently transferring another full backup, which again differs only by a few %. For now it seems to work rather well. One thing I should note is that the initial sync, which I did when I brought the remote backup server to the main site, was transferred through HTTP rather than BITS. This is of course reflected in the BC stats. For some reason, the next full transfer I did through BITS had a very, very low cache hit ratio. I assume ALL BC content - whether it comes in via SMB, HTTP or BITS - is thrown into the same store? For some reason the second BITS transfer I am doing now runs perfectly fine, and is actually mostly capped by disk space.

The second thing is that I found the I/O performance very low. At the moment, as I am still testing, I am running this on rather old HP G6 hardware with 6x 1TB disks on a Smart Array 410i with 512MB cache. However, as I have only 6 spindles in this machine, I created just one big array, so working with BC means a lot of random I/O. Still, performance was very, very bad. Process Monitor as well as Resource Monitor revealed that BC was working from the pagefile all the time, generating lots and lots of small I/Os to the pagefile (which is on the very same disk array). The machine has plenty of RAM available though, so I disabled paging completely (I'm not fond of paging anyway), and BC now works much more smoothly, with about a third of the IOPS it consumed with paging enabled.

So again I have the feeling it's going to be ok ;) But I'll update you again when I've done more testing.

A side question: what happens if a server with a BC cache has to be reinstalled, or the hardware needs to be rebuilt? Is it possible, without exporting the cache first, to bind an existing BC cache to a machine? Or does it really need to be built up again from scratch?
August 3rd, 2015 7:25am

Interesting about the page file... hmmm, will talk to the devs on that.

You can export a cache, or do it the unsupported way and just do a file transfer, as long as the new machine has the same seed key. The unsupported way is of course much faster and works well, but it's not something they'll support if it doesn't work. (Lack of testing, as it's more a byproduct of other things than a feature.)
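
The supported route looks roughly like this - cmdlet names from memory (this is the staging/package mechanism, so double-check the parameters before relying on it):

 # On the old machine: export the secret (seed) key and package up the cache
 Export-BCSecretKey -Filename "\\fileserver\bc\seedkey.key" -FilePassphrase "MyPassphrase"
 Export-BCCachePackage -Destination "\\fileserver\bc"
 # On the new machine: import the key first, then the package created above
 Import-BCSecretKey -Filename "\\fileserver\bc\seedkey.key" -FilePassphrase "MyPassphrase"
 Import-BCCachePackage -Path "\\fileserver\bc\PeerDistPackage.zip"   # file name may differ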

Keep us posted!

//Andreas

August 5th, 2015 9:03am

I bought a dedicated backup server and a NAS, for now with 8 spindles divided 2x4 into two arrays: one for the BC cache and one for the actual backup data. I am currently doing transfers from the remote site, and the cache is building up again. However, I also want to transfer across the files that were already on the test server - a great part of the initial sync is already on there. Now I have another issue: some files don't seem to get hashed by IIS, as the BC counters don't change when I download them. Other files (even in the same directory) work fine. Getting the same file again from the main-site backup, i.e. over the WAN, does make the BC cache build up as expected, though.

Can I trigger IIS to force it to use BC? Or force it to generate hashes?

[edit]

An update to that: when I download the file through HTTP, the 'Bytes from server' counter runs up fine, but the complete/partial file segments counters stay the same all the time. When I download a 'working' file, either through BITS or HTTP(S), both counters go up.

I've just installed this server completely fresh, and as most files work I tend to believe it's not a configuration error on the 'client' side.

[edit2]

I've split the file up into 20GB pieces, to no avail. IIS just won't hash some files. I don't get it. The main-site IIS box, which has even lower specs (it's a VM), does the job fine!

August 10th, 2015 12:19pm

I think I've got it sorted now. I had to move and enlarge the BC publication cache, which I did with netsh. After then triggering Publish-BCWebContent manually, it generated the hashes fine, and those files now actually work as expected.
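
For reference, this is roughly what I ran (the paths and size are examples; netsh syntax from memory, 'netsh branchcache /?' shows the exact form):

 # Move the publication (hash) cache to the data array and enlarge it
 netsh branchcache set publicationcache directory=D:\BranchCache\PublicationCache
 netsh branchcache set publicationcachesize size=25 percent=TRUE
 # Then pre-generate hashes for the IIS content that wasn't getting hashed
 Publish-BCWebContent -Path "C:\inetpub\wwwroot\backups" -Verbose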

Is there a PowerShell command to modify the publication cache? While Get-BCStatus shows the HashCache location and size, I didn't find a PowerShell command to change it.

Questions, questions, questions... If only MS would supply some technical manuals with their products... I feel it's getting worse and worse in that regard. Remember when you bought a TV and a full schematic was included, or when you bought an OS and a 1200-page manual came with it? I know that's not '2015', but still - MS is seriously lacking in information on some aspects of their software.

August 11th, 2015 11:29am

Hi Robert,

So you stuck with it! Sounds like you are getting there. Here's a bit of PowerShell that you can use to resize the hash (publication) cache - undocumented, of course!

 # Grab the publication (hash) cache object via CIM...
 $x = Get-CimInstance -ClassName MSFT_NetBranchCacheCache -Namespace root\standardcimv2 -Filter "CacheFileDirectoryPath LIKE '%PeerDistPub'"
 # ...and resize it to 20% of the disk volume
 Set-BCCache -Cache $x -Percentage 20

For the data (republication) cache, change %PeerDistPub to %PeerDistRePub.

August 14th, 2015 11:59am

Thanks - yeah, that could be done with any WMI query, but that's not a friendly way, is it :) For the data cache I'd use the Set-BCCache snippet. As long as it's working, I'll keep using netsh for the publication cache. Also, the netsh command will move the cache if you supply another directory, which the WMI/CIM approach would not do, calling for additional manual work.

Anyway, I keep running into issues. I am once again SO frustrated with MS (or Windows, at least) for their lack of troubleshooting info and technical documentation. I have created a new post here for that, but maybe you have some insight on it as well. On ALL hardware I've tried, I randomly get BITS error 0x80040009, which makes the BITS transfer restart at 0 bytes. That's not much fun when you are at 95% of your 1.2TB transfer. I just don't get it. I'm an experienced MS guy; I know my stuff. But even on fresh installs of 2012 R2 I can't get BranchCache to run well. If only MS had some reasonable logging like VMS used to have back in the day, but most of the errors I run into are too cryptic to decipher.
These systems have been running for 3 days now, with 1Gbps network connections to the same switch, to transfer 1.2TB of data divided into 20GB files. I could have printed the files out and OCR'ed them back in by now.

Anyway, error 0x80040009 it is, restarting my BITS transfers. Anyone?



August 14th, 2015 12:34pm

Update,

I have now got my cache filled with about 1TB of unique data. For one server, a full backup has already been transferred. The next full differs by only a few % from the previous full (looking at the dedup stats and also just reasoning about it), but when I look at the BC counters during the transfer, not even 1% of the data comes from the BC cache. I can imagine the OS part of the image is more unique, as it also contains the pagefile (not sure if Veeam excludes that, but I don't think so). Still, the OS is about 20GB in size; the rest is Exchange data, which is mostly identical to the previous week's. Dedup dedupes both files down by about 95%, so their data should be nearly identical. I just don't understand it.

I posted some info in the BITS error thread as well.



August 17th, 2015 3:14am

How is it transferred? .wim? .vhd? And why are you capturing and transferring your pagefile?

//Andreas

August 18th, 2015 1:54am
