An SSAS model where the cube never needs to go offline

Hello all,

I am thinking of a possible design where the cube will never go offline.

Usually, when I make code changes to my cube, it goes offline and I need to do a Full Process to bring it back.

However, in cases where the cube is extremely critical for the business users, it would be great if I could deliver a solution where the cube never goes down.

May 21st, 2015 10:37am

You could build a load-balancing structure: two or more instances with the same database and an SSAS load balancer (there is something for this on CodePlex). The process would be: take node 1 offline (users will access only node 2), update the database on node 1, bring node 1 back online, and then do the same with node 2. Also look into SSAS read-only databases.
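
One way to move the updated database between the nodes is the XMLA Synchronize command. A rough sketch of what you might run against node 2 after processing node 1 (the server name NODE1 and database SalesCube are placeholders):

<Synchronize xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Source>
    <!-- node 1 holds the freshly processed copy of the database -->
    <ConnectionString>Provider=MSOLAP;Data Source=NODE1;Integrated Security=SSPI</ConnectionString>
    <Object>
      <DatabaseID>SalesCube</DatabaseID>
    </Object>
  </Source>
  <!-- keep the target node's role memberships rather than overwriting them -->
  <SynchronizeSecurity>SkipMembership</SynchronizeSecurity>
  <ApplyCompression>true</ApplyCompression>
</Synchronize>
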
May 21st, 2015 1:26pm

I am looking for something that does not require more than one instance.
May 21st, 2015 1:34pm

You cannot technically have no downtime when you deploy changes.

However, you could deploy to a new database name and process the new database/cube. Then drop the old database and rename the new one to the old name. That would limit your downtime.
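
As a rough illustration of the drop-and-rename step in XMLA (the database IDs/names SalesCube and SalesCube_v2 are placeholders, and the exact Alter that SSMS scripts for a rename may differ slightly):

<!-- Drop the old database once the new one is processed -->
<Delete xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SalesCube</DatabaseID>
  </Object>
</Delete>

<!-- Rename the new database so clients keep connecting with the old name -->
<Alter AllowCreate="false" ObjectExpansion="ObjectProperties"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Object>
    <DatabaseID>SalesCube_v2</DatabaseID>
  </Object>
  <ObjectDefinition>
    <Database>
      <ID>SalesCube_v2</ID>
      <Name>SalesCube</Name>
    </Database>
  </ObjectDefinition>
</Alter>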


IMHO, a 30-minute planned outage for updates should be acceptable to business users. If your cube takes longer than 30 minutes to process, you might look at improving the processing.

May 21st, 2015 1:47pm

You cannot technically have no downtime when you deploy changes.

Actually, if you have enough resources on your server, you can get pretty close to the appearance of zero downtime.

If you use the SSAS Deployment Wizard you can create a batch command that includes both an ALTER and a PROCESS command in the one transaction. Because it's in a transaction, end users will continue to query the old version of the cube until the batch commits; after that they will see the new version.
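
For reference, a stripped-down sketch of what that kind of transactional batch looks like in XMLA (the database name SalesCube is just a placeholder; the real script from the Deployment Wizard contains the full database definition inside the Alter):

<Batch Transaction="true"
       xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Alter redefines the database in place; AllowCreate lets it be created if missing -->
  <Alter AllowCreate="true" ObjectExpansion="ExpandFull">
    <Object>
      <DatabaseID>SalesCube</DatabaseID>
    </Object>
    <ObjectDefinition>
      <Database>
        <ID>SalesCube</ID>
        <Name>SalesCube</Name>
        <!-- ... full database definition generated by the Deployment Wizard ... -->
      </Database>
    </ObjectDefinition>
  </Alter>
  <!-- Full process inside the same transaction; users query the old version until commit -->
  <Process>
    <Object>
      <DatabaseID>SalesCube</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Batch>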

So in theory you will have zero downtime. However, there are a couple of caveats.

1. The commit phase is not instantaneous. The commit includes both the updating of the central version map file and the deletion of the old version of the database, and the amount of time this takes depends on the size of your database and the speed of your disk subsystem. During this time new user queries will appear to be "paused", so to the end users it looks like a drop in performance. But if your database is small enough and your I/O system is fast enough, this may only be a few seconds and may not be noticeable to end users.

2. The default processing command does not limit the amount of parallelism, so it will consume all available resources in order to complete the processing as fast as possible. This can cause issues if you want users to be able to continue querying the cube while it is being processed. I've found that if I set the parallelism on the processing command to somewhere between 25% and 50% of the number of cores in the server, it leaves sufficient resources available to service a reasonable volume of end-user queries during processing. To change the parallel setting you will need to create a script from the deployment wizard and edit it (see the snippet below).
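
To give an idea, here is roughly what the edited processing part of the script could look like; the element goes inside the Batch shown earlier, and on, say, a 16-core server a MaxParallel of 4 to 8 would correspond to the 25%-50% guideline (SalesCube is again a placeholder):

<!-- Wrapping the Process command in a Parallel element caps how many objects
     are processed concurrently, leaving headroom for user queries -->
<Parallel MaxParallel="4">
  <Process>
    <Object>
      <DatabaseID>SalesCube</DatabaseID>
    </Object>
    <Type>ProcessFull</Type>
  </Process>
</Parallel>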

You will also need about 1.5 times the database size in free disk space, to hold both the old and new copies of the database as well as some temporary scratch space while the transaction is open. And you need sufficient RAM to hold both the non-shrinkable memory allocations that are held for the duration of the processing and the memory required by the normal query workload. But if you can meet both of these requirements, you can get pretty close to the appearance of zero downtime for small to medium databases on a single server.

For large databases, the duration of the commit can lengthen to the point where you would really need two or more servers in a load-balanced arrangement to achieve true zero downtime.

May 24th, 2015 8:39am

