Placement of Transaction Logs
Looking for some guidance on placement of transaction logs. We have VMware with Fibre Channel-attached SAN storage; the disks and controllers are fully redundant. We're looking to keep databases and transaction logs separated, mainly for performance. Multiple databases will share the same database VMDK. For the transaction logs, though, would it be better for the corresponding sets of logs to share a single log VMDK, or to create an individual VMDK for each database's set of logs? On one hand, keeping multiple sets of transaction logs in subfolders on one large VMDK would let them share the available space and simplify the configuration. On the other hand, is it better to isolate the logs, each set on its own VMDK, so that one set filling up (or becoming corrupted in some way) won't affect the others? I'm not sure it matters whether these are VMDKs or not; mainly I wanted feedback on the idea of separating the logs onto their own disks. Also, we're aware that VMDK disks aren't fully supported on NAS-based storage, but this isn't NAS, it's FC-attached SAN. Any known support issues there? Thanks!
June 5th, 2012 2:51pm

One of my last installations of Exchange 2010 was very similar to what you are attempting here. I would suggest keeping each database and its related logs on completely separate VMDKs or LUNs. Meaning, if you have two databases, you need four VMDKs: one for each database and one for each set of logs. That way, if a DB fills up its disk, or if logs fill up theirs, it does not affect any other DBs. From my experience, as long as the SAN is not very heavily utilized and it is at least 4 Gb FC, you should have no real issues.
June 5th, 2012 3:16pm

Thanks, and that was the original plan. However, there are a limited number of SCSI controller ports available in VMware (something like 57 available), and we'll have 16 DBs and 16 sets of TLs, doubled since this is a DAG with another 16 DBs and 16 sets of TLs replicated. We'd need 64 SCSI ports, since each VMDK disk takes one. So my thought was we could group several DBs together, since they tend to grow slowly. If I have to choose, I'm more worried about runaway growth on TLs, or individual TLs taking up all the space. But I agree, ideally we'd be able to separate all DBs and TLs, each onto their own VMDK disk.
June 5th, 2012 3:32pm

I know of no limitation like the one you describe. A VM on vSphere 4.x or 5.x can have up to 4 virtual SCSI adapters with 15 SCSI targets per adapter, for a total of 60 devices (any combination of CD, disk, or VMDirectPath devices). This is on a per-VM basis; there are no host-based virtual SCSI limitations that I know of. You can combine items together, you just have to take care with space constraints, and at the least I would separate your transaction logs onto their own disks. Just make sure to scale your disks appropriately for the expected volume of logs. I would also set up DRS so that the two DAG 'partners' are never on the same host at the same time.
June 5th, 2012 4:26pm
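As a quick sketch of the per-VM budget the reply above describes (the adapter and target counts are the vSphere 4.x/5.x limits it cites):

```python
# Per-VM virtual SCSI device budget on vSphere 4.x/5.x,
# per the limits cited in the reply above: 4 adapters x 15 targets.
adapters_per_vm = 4
targets_per_adapter = 15
max_devices = adapters_per_vm * targets_per_adapter
print(max_devices)  # 60 devices (any mix of disk, CD, VMDirectPath)
```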

Right, I "think" we're saying the same thing about the SCSI adapters; I may have just used the wrong term. Of the 60 devices per VM, 3 are in use for the OS and app disks (C, D, E), which leaves 57 devices remaining for DBs and TLs, right? Have you seen one that big yet? To separate DBs and TLs (keeping DBs under 100GB each max), we'd need 64 disks. That may seem like a lot, but when you include the DAG replication copies from other servers as well as those actively hosted locally, they add up quickly. In this case we are just over the number of SCSI devices available to the VM. We could perhaps go to fewer, larger DBs, but the concern there was backup/restore time per DB, so we wanted to keep them at 100GB or less. And yes, we are keeping them separated on different VM hosts, good point. We've also had some fun testing the DAGServerMaintenance scripts while updating our VM hosts recently.
June 5th, 2012 4:53pm
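The shortfall described in the post above works out as follows (a quick sketch; the disk counts and the three OS/app disks are taken from the thread):

```python
# Disk-count arithmetic from the thread: 16 DBs and 16 log sets,
# doubled because the server also hosts replicated DAG copies.
max_devices = 60               # vSphere per-VM virtual SCSI limit
os_app_disks = 3               # C:, D:, E: as described in the post
free_slots = max_devices - os_app_disks
disks_needed = (16 + 16) * 2   # one VMDK per DB and per log set
print(free_slots, disks_needed)  # 57 64 -> 7 device slots short
```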

Well, the system I 'just' installed has 16 databases, but they are all getting close to 300GB now; it is a physical implementation with four DB servers. Each server houses 8 database copies, 4 active and 4 passive, using two sets of 'partner' servers. Anyway, I would scale your DBs up some to keep the disk count down; remember that in a DAG the max supported DB size is 2TB (including whitespace, etc.). As far as backup is concerned, are you doing guest-level backups (backup agent installed on the guest), or are you backing up the SAN LUN? Do you have a redundant/replicated SAN? If you cover the redundancy bases, you should not have to go to a restore to get data back unless there is a major disaster, or someone wants to restore items that were deleted long ago. You could have another mailbox server with lagged copies (ones that wait a period of time before replaying the logs into the database), so you would have an active copy, a 'real-time' passive copy, and a lagged passive copy. If the active were to become corrupted and the corruption replicated to the passive, you could switch to the lagged copy and only lose the amount of time the copy is lagged for.
June 5th, 2012 5:31pm
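To illustrate the "scale your DBs up" suggestion above, here is a hedged sizing sketch; the total-data figure is a made-up example, not from the thread, and the 2TB ceiling is the supported DAG maximum the reply mentions:

```python
import math

# Hypothetical consolidation: fewer, larger databases to cut the
# VMDK count, staying under the 2 TB supported DAG maximum.
total_data_gb = 1600     # example figure only, not from the thread
target_db_gb = 300       # larger DBs, as the reply above suggests
assert target_db_gb <= 2048          # 2 TB supported ceiling
db_count = math.ceil(total_data_gb / target_db_gb)
print(db_count)  # 6 databases (12 VMDKs with separate log disks)
```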

We could probably scale up the DBs. Backup agent is installed on the guest. Replication is DAG replication with 3 copies per DB. Restores would typically be just to recover items that were hard deleted or deleted beyond the deleted items retention period, which is rare.
June 5th, 2012 5:44pm

I am jumping into the conversation a little late, but below are my thoughts, based on experience and training. With Exchange 2010, database activity is 'more' sequential than in previous versions, which makes it possible to bundle the transaction logs (which write sequentially) on the same volume. In a previous setup I had 3 multi-role servers participating in a DAG with 10 DBs (3 active on each node, and each database having a copy on the other nodes). The DB sizes ranged between 250GB and 300GB. If you are worried about RPO and RTO by keeping the size down, that makes sense. Otherwise, I would have you ponder adding a third node and increasing the database size, so that you have the added protection of a 3-node DAG to ensure your RPOs and RTOs in case of failures. In regard to this design, each server had its own datastore in VMware, and separate virtual disks were created for each DB/log disk; mount points were used in the OS. Of course, the best course of action is to run Jetstress on the servers against the storage provided, before installing Exchange, so that you can verify the performance of the storage and make sure it meets your needs. http://technet.microsoft.com/en-us/library/ff706601.aspx is the link to the Jetstress documentation, and here is the link to the field guide: http://gallery.technet.microsoft.com/Jetstress-Field-Guide-1602d64c. The numbers can be put back into the Exchange requirements calculator to validate your needs and design. Hope this helps you out. Jason Apt, Microsoft Certified Master | Exchange 2010
June 5th, 2012 6:28pm

Thanks for the feedback. We do have a 3-member DAG. I had read some recommendations elsewhere to keep DBs in the 100-200GB range; perhaps that was old-school, pre-DAG thinking. Jetstress is also a good idea; we'll run that to verify performance.
June 5th, 2012 6:44pm

This topic is archived. No further replies will be accepted.
