Technically you cannot have zero downtime when you deploy changes, but if you have enough resources on your server you can get close enough that it looks like zero downtime to your end users.
If you use the SSAS Deployment Wizard you can create a batch command that includes both an ALTER and a PROCESS command in a single transaction. Because it runs in a transaction, end users will continue to query the old version of the cube until the batch commits; after that they will see the new version.
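As a rough sketch (the database ID is a placeholder and the full ALTER body is generated for you by the wizard), the XMLA the wizard produces looks something like this:

    <Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine"
           Transaction="true">
      <!-- the wizard emits the full database definition here -->
      <Alter AllowCreate="true" ObjectExpansion="ExpandFull">
        <Object>
          <DatabaseID>MyOlapDatabase</DatabaseID>
        </Object>
        <ObjectDefinition>
          <!-- ... Database element with the new design, generated by the wizard ... -->
        </ObjectDefinition>
      </Alter>
      <!-- reprocess the same database inside the same transaction -->
      <Process>
        <Object>
          <DatabaseID>MyOlapDatabase</DatabaseID>
        </Object>
        <Type>ProcessFull</Type>
      </Process>
    </Batch>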
So you will in theory have zero downtime. However there are a couple of caveats.
1. The commit phase is not instantaneous. The commit includes both updating the central version map file and deleting the old version of the database, and the amount of time this takes depends on the size of your database and the speed of your disk subsystem. During this time new user queries will appear to be "paused", so to the end users it looks like a drop in performance. But if your database is small enough and your I/O system is fast enough, this may only take a few seconds and may not be noticeable to the end users.
2. The default processing command does not limit the amount of parallelism, so it will consume all available resources in order to complete the processing as fast as possible. This can cause issues if you want users to be able to continue querying the cube while it is being processed. I've found that if I set the parallelism on the processing command to somewhere between 25% and 50% of the number of cores in the server, it leaves sufficient resources available to service a reasonable volume of end-user queries during processing. To change the parallel setting you will need to create a script from the deployment wizard and edit it, as in the sketch below.
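You can save the script from the wizard UI, or (from memory) via the Deployment Wizard command line with the /o switch, e.g. Microsoft.AnalysisServices.Deployment.exe MyProject.asdatabase /o:deploy.xmla, which writes the script out instead of deploying it. The edit itself is just wrapping the Process command in a Parallel element, e.g. limiting a 16-core server to 4 concurrent processing jobs (again, the database ID is a placeholder):

    <!-- inside the same <Batch Transaction="true"> element, replace the bare Process with: -->
    <Parallel MaxParallel="4">
      <Process>
        <Object>
          <DatabaseID>MyOlapDatabase</DatabaseID>
        </Object>
        <Type>ProcessFull</Type>
      </Process>
    </Parallel>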
You will need roughly 1.5 times the database size in free disk space to hold both the old and new copies of the database, plus some temporary scratch space while the transaction is open. You also need enough RAM to hold both the non-shrinkable memory allocations that are held for the duration of the processing and the memory required by the normal query workload. If you can meet both of these requirements, you can get pretty close to the appearance of zero downtime for small to medium databases on a single server.
For large databases the duration of the commit can grow to the point where you really need two or more servers in a load-balanced arrangement to achieve true zero downtime.