How to reduce cube processing time?

Hello SSAS gurus,

Quick question. We have a massive single cube (7 fact tables, 41 measure groups, and 27 dimensions). A daily full process currently takes 20 hours, and I have been asked to bring the processing time down (ideally to a couple of hours a day). The total size of the cube is 40 GB.

1) Is there a way to figure out which dimension or measure group is taking most of the time?

2) I have seen that most of the dimension attributes are strings. Would it help if I changed the KeyColumn to an integer and kept the NameColumn as a string? Right now, they are both strings.

3) I will create monthly partitions and process just the last two partitions for the 41 measure groups instead of a full process on all data back to 2013. I am sure that will cut at least 10 hours off the processing time.
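
As a sketch of what that incremental schedule could look like (all database, cube, measure group, and partition IDs below are placeholders, not the actual object names), an XMLA batch that fully processes only the two most recent monthly partitions of one measure group might be:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <!-- Process only the two newest monthly partitions; older partitions stay untouched -->
    <Process>
      <Object>
        <DatabaseID>SalesDB</DatabaseID>
        <CubeID>SalesCube</CubeID>
        <MeasureGroupID>FactSales</MeasureGroupID>
        <PartitionID>FactSales_2015_06</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
    <Process>
      <Object>
        <DatabaseID>SalesDB</DatabaseID>
        <CubeID>SalesCube</CubeID>
        <MeasureGroupID>FactSales</MeasureGroupID>
        <PartitionID>FactSales_2015_07</PartitionID>
      </Object>
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>
```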

4) I see that all the facts are views. Some are views joining to other views, which in turn join to yet other views. Is that an issue?

5) What else should be changed or checked to bring the processing time down?

Thanks in advance 

Ram

July 24th, 2015 5:47pm

You are definitely on the right track with the changes you are planning to make.

1. You can figure out which parts are taking the longest to process; the simplest way is to kick off the processing of the cube interactively, which will give you statistics on the time taken per dimension/measure group.

2. SQL Server, like any computer, prefers numbers over strings; changing your keys to an int (or bigint, if needed) data type is always a best practice.
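
To illustrate point 2 (table and column names here are hypothetical), on the DW side you would typically add an integer surrogate key and point the attribute's KeyColumn at it in the SSAS dimension designer, keeping the string as the NameColumn:

```sql
-- Hypothetical dimension table: add an integer surrogate key.
-- In the SSAS dimension editor, set the attribute's KeyColumn to
-- ProductKey (int) and leave ProductName (string) as the NameColumn.
ALTER TABLE dbo.DimProduct
    ADD ProductKey INT IDENTITY(1,1) NOT NULL;
```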

3. Make sure you have indexing on the DW side to support the logical cube partitioning if you want to maximize your gain.
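
As a sketch (the fact table and column names are assumptions), if each monthly partition is bound to a query that filters on a date key, an index on that key lets each partition's source query do a range seek instead of scanning the whole fact table:

```sql
-- Supports partition source queries such as:
--   SELECT ... FROM dbo.FactSales
--   WHERE DateKey BETWEEN 20150601 AND 20150630
CREATE NONCLUSTERED INDEX IX_FactSales_DateKey
    ON dbo.FactSales (DateKey);
```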

4. Definitely a problem :) You generally want to align your logical and physical models to ensure optimal performance. Even though defining your DSV with named queries works at first, as your data set grows you quickly run into the problems you are experiencing. Proper alignment also opens up other options for you.
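
One common way to do that alignment (the names below are placeholders) is to materialize the nested-view logic into a real table during the nightly ETL, so SSAS reads from a plain, indexable table instead of a stack of views:

```sql
-- Nightly ETL step: flatten the view stack into a physical fact table.
TRUNCATE TABLE dbo.FactSales;

INSERT INTO dbo.FactSales (DateKey, ProductKey, SalesAmount)
SELECT DateKey, ProductKey, SalesAmount
FROM dbo.vwFactSales;   -- the outermost of the nested views
```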

I actually did a presentation for 24HOP on this exact topic; the link to the recording is below.

http://www.sqlpass.org/24hours/2015/goc/Sessions/SessionDetails.aspx?sid=35559
July 24th, 2015 6:38pm


This topic is archived. No further replies will be accepted.
