SQL Server Database Files configuration

Hello,

I need some expert advice on the database file settings below.

1. Server with 2 disk drives, as below:

   a. D drive: 1TB (for database files)

   b. E drive: 1TB (for log files)

2. I have created 2 different databases, one for disk-based tables and one for in-memory tables.

3. For DiskDatabase, the files are (default settings; I have not changed anything here):

   a. DiskDatabase (initial size: 2099MB, autogrowth by: 1MB, maximum size: unlimited)

   b. DiskDatabase_Log (initial size: 4524MB, autogrowth by: 10 percent, maximum size: limited to 2,097,152MB)

4. For InMemoryDatabase, the files are (default settings; I have not changed anything here):

   a. InMemoryDatabase (initial size: 3MB, autogrowth by: 1MB, maximum size: unlimited)

   b. InMemoryDatabase_Log (initial size: 1MB, autogrowth by: 10 percent, maximum size: limited to 2,097,152MB)

   c. InMemory_FileGroup (initial size: 153MB, autogrowth and maximum size: unlimited)
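
For reference, the same settings can also be read with a query like the one below (a sketch against the sys.database_files catalog view; run it in each database):

USE DiskDatabase;
GO
SELECT name,
       type_desc,
       size / 128.0 AS size_mb,                -- size is stored in 8KB pages
       CASE WHEN is_percent_growth = 1
            THEN CAST(growth AS varchar(10)) + ' percent'
            ELSE CAST(growth / 128 AS varchar(10)) + ' MB'
       END AS auto_growth,
       CASE WHEN max_size = -1
            THEN 'Unlimited'
            ELSE CAST(max_size / 128 AS varchar(20)) + ' MB'
       END AS max_size
FROM sys.database_files;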

My questions are:

  1. Is it better to keep the disk-based database and the in-memory database separate, in terms of clearing their transaction logs?

  2. Though both drives are 1TB, the maximum file size shows "limited to 2TB" by default. Is that okay? Won't we get a "transaction log full" error?

Please advise on this database configuration, since we have to run huge active transactions on millions of records.

Thank you,

Mittal.

May 20th, 2015 8:05am

1) How much memory is assigned to SQL Server?
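
(You can check it with something like this; sp_configure is the standard way:)

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';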


May 20th, 2015 8:17am

Hello Dinesh,

Please see the screenshot below for the detailed memory allocation.

May 20th, 2015 8:23am

Technically speaking, you should do capacity planning and set your database size to the size you expect it to grow to over the next 6 months to a year.

You should also change the autogrowth setting; set it to 500MB instead of 1MB. Always pre-size the data files, and depend on autogrowth events only for unforeseen situations.
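
For example, something along these lines (the logical file name and the pre-size value are placeholders; use your own capacity numbers):

-- pre-size the data file per your capacity planning
ALTER DATABASE DiskDatabase
MODIFY FILE (NAME = DiskDatabase, SIZE = 100000MB);

-- fixed growth increment, for unforeseen situations only
ALTER DATABASE DiskDatabase
MODIFY FILE (NAME = DiskDatabase, FILEGROWTH = 500MB);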

Make sure you have instant file initialization enabled, so your data file autogrowth does not cause slowness or timeout issues.

Similarly for log files, set a predefined growth increment (e.g., 500MB) instead of 10%. Log files cannot be instant-initialized, so it is important to size them accurately up front.

Also, depending on the recovery model: if it is full or bulk-logged, you need to take log backups to keep the log size under control.

Under the simple recovery model you should not worry about it, as the virtual log files (VLFs) are truncated automatically at checkpoint.
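
For example, under full or bulk-logged recovery, a scheduled log backup like this keeps the log reusable (the backup path is a placeholder):

BACKUP LOG DiskDatabase
TO DISK = N'E:\Backups\DiskDatabase_log.trn';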

Yes, if the physical drive is only 1TB, that will be the limit regardless of whether the maximum file size is set to 2TB or 3TB.

I can't speak in detail about in-memory tables.

May 20th, 2015 8:25am

@Stan, Thank you for your detailed response.

Can you please explain what you mean by the line below? What do I have to do here?

"make sure your are instant file initialization - so, your data log auto growth does not cause any slowness or timeout issues."

Thank you,

Mittal.

May 20th, 2015 8:30am


In short, before any new space is allocated to the database, that new space is zeroed out and then assigned. So if your database wants to grow by 2GB, SQL Server will zero out 2GB of disk space first and then assign it to the database (the space is assigned first, but you cannot write into it until the zeroing has finished).

So if your database is 1TB and the autogrowth is 10 percent, that is a 100GB growth; your transaction would have to wait until that 100GB growth has happened, which includes both the zero-out process and the growth itself. This naturally takes time, and in these scenarios your database transactions can time out.

With IFI (instant file initialization), you skip the zero-out process on the data files. The log file is always zeroed out.
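
If you want to watch the zeroing happen, one common way is to turn on trace flags 3004 and 3605 (they write file initialization messages to the errorlog), grow a file, and read the errorlog. A sketch (the grow size is a placeholder; xp_readerrorlog is undocumented but widely used):

DBCC TRACEON(3004, 3605, -1);   -- log zero-initialization messages to the errorlog
ALTER DATABASE DiskDatabase
MODIFY FILE (NAME = DiskDatabase, SIZE = 2200MB);  -- grow a little past the current size
EXEC xp_readerrorlog 0, 1, N'Zeroing';             -- look for 'Zeroing ...' messages
DBCC TRACEOFF(3004, 3605, -1);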

You need to grant certain rights to the SQL Server service account; there are a lot of blogs that discuss this:

http://www.sqlskills.com/blogs/kimberly/instant-initialization-what-why-and-how/

http://blogs.msdn.com/b/sql_pfe_blog/archive/2009/12/23/how-and-why-to-enable-instant-file-initialization.aspx

https://www.sqlpassion.at/archive/2014/02/18/improving-sql-server-performance-by-using-instant-file-initialization/

In my example, the 100GB growth will take time regardless of whether you have IFI; without IFI, it will be worse.

So, in a well-run environment, you should monitor the free space in the database files and pre-size them rather than waiting on autogrowth events.
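
Something like this, run per database, shows how much free space each file has (a sketch using FILEPROPERTY):

SELECT name,
       size / 128.0                                           AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') / 128.0                AS used_mb,
       size / 128.0 - FILEPROPERTY(name, 'SpaceUsed') / 128.0 AS free_mb
FROM sys.database_files;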

Coming to your question on the log file: 2TB is the limit per log file, but you can have more than one log file, so you can get around the limit. However, there is NEVER a reason to let your log file grow to that size, or even remotely close. You need log backups (under full or bulk-logged recovery) to manage the log size. I mention this only as information; do not actually do it. The biggest log file I have seen was 150GB, and it was a mistake, a big one. There is no reason for a log file to be that big unless there are issues such as replication, HADR, etc. In such cases, fix the issue, take a log backup if possible, and shrink the log file back to your normal size.
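
If you ever do have to recover from a bloated log, the usual pattern is roughly this (logical log file name and target size are placeholders):

BACKUP LOG DiskDatabase
TO DISK = N'E:\Backups\DiskDatabase_log.trn';  -- frees up reusable VLFs first

DBCC SHRINKFILE (DiskDatabase_Log, 4096);      -- then shrink back to a sane size, in MB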

May 20th, 2015 9:04am

Hello Stan,

I have implemented IFI on the SQL Server instances (it applies to the data files only).

Can you please explain how I can test that the log file configuration works correctly, and that the log starts reusing its space once it reaches the maximum size given?

I want to make sure it won't throw a "transaction log full" or active-transaction-log error at any time in the future.

Can you please give me some steps I can perform to monitor the log file size?
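
Would monitoring along these lines be the right direction? (Just a sketch of what I had in mind:)

DBCC SQLPERF(LOGSPACE);   -- log size and percent used, per database

SELECT name, log_reuse_wait_desc
FROM sys.databases;       -- shows what, if anything, is preventing log truncation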

Also, in the case of the full recovery model, is it necessary to take backups? Otherwise the logs would not be cleared? Should I create an Agent job to take full or incremental backups for that?

One point should be noted here: the client doesn't want to keep heavy transaction logs. They are just generating calculated data on the fly and storing it for a short time. There is no critical data; if they need it, they can generate it again by re-running the program.

In this case, can I change the recovery model to Simple?
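
I.e., something like this (database name as an example):

ALTER DATABASE DiskDatabase SET RECOVERY SIMPLE;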

Please suggest.

Thank you,

Mittal.

May 21st, 2015 1:11am
