How can a DFS namespace resolve my disk space problem?
There is no more free space on the original disk of our file server. Note: both the file server and its disks are virtual machines on shared storage. We now plan to add a fresh disk to resolve the shortage of disk space, and want to know whether DFS can handle the following requirements:

1. Keep the original folders and documents unchanged, e.g. P:\dept\sales\Kevin.xlsx, where P:\ is a mapped drive on the client linked to D:\dept\sales on the file server. We want to use a stand-alone DFS namespace to create a link to the new disk. The client should notice no difference, but when he uploads or changes data, the new data should be saved on the new disk rather than the old one.

2. Keep the folder permissions unchanged compared with the old folders and files.

Is DFS able to make this happen? If not, is there a better method?
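For reference, the mapping described above can be expressed roughly like this; the server name FILESVR and the share name "files" are assumptions for illustration only:

    rem On the client: P: is a persistent mapped drive to a share on the file server,
    rem so P:\dept\sales\Kevin.xlsx resolves to a folder on the server's D: disk.
    net use P: \\FILESVR\files /persistent:yes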
August 17th, 2012 10:38pm

You need to enlarge the hard disk partition where D: lives, by adding a disk to your array or by some other means. Is your D:\ on a local array, like a RAID 5? If so, check your array documentation on how to grow the array, and afterwards you can enlarge the available disk space. Please note that on some controllers it can be hard to enlarge, so step #1 is a good backup.
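If the underlying array or LUN can be grown (and a verified backup exists, as noted above), the NTFS volume holding D: can then be extended inside the guest. A minimal sketch as a diskpart script; the volume number is an assumption and must be taken from the "list volume" output:

    rem extend-d.txt  -- run with:  diskpart /s extend-d.txt
    rem Assumes the array/LUN under D: has already been enlarged and a backup exists.
    list volume
    rem The volume number below is an assumption; pick the D: volume from the list.
    select volume 2
    extend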
August 18th, 2012 12:19am

Thanks for the guidance. As you mentioned, enlarging the hard disk partition carries some risk, so we are cautious about taking that step. Is there another way to resolve the space issue, such as DFS? Note: we are using a virtual machine as the file server and an HP EVA 6-series as our shared storage. Because of the space shortage we have reconfigured the remaining capacity and carved part of it out as usable space. Your suggestion is a good one; we may adopt it after weighing the risk.
August 18th, 2012 3:37am

Besides that, the database is very large, so this would cost a lot of off-hours time.
August 18th, 2012 3:46am

I am not sure about using DFS for that purpose, which is why I suggested another solution. As I understand it, you can't run a replica when you are out of space, since both sides would be replicated with the same data, unless you build a replica and after some time cut off the first one. Someone else may be able to answer that better than me. I am more used to the "hack & slash" method: prepare a LUN with a copy set of your data (adding the new drive to your VM to copy the data onto), and at night, when it is ready, you simply switch your share and DB path over to the new LUN.
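A rough sketch of that approach (paths, share name, and log locations are assumptions, not from this thread): pre-seed the new LUN with robocopy so NTFS permissions come across, re-run it until the delta is small, then re-point the share during the night-time window.

    rem Pre-seed (repeatable): /MIR mirrors the tree, /COPYALL keeps NTFS permissions and ownership
    robocopy D:\dept E:\dept /MIR /COPYALL /DCOPY:T /R:1 /W:1 /LOG:C:\logs\dept-seed.log

    rem Cutover at night: one final sync, then re-point the share at the new volume
    robocopy D:\dept E:\dept /MIR /COPYALL /DCOPY:T /R:1 /W:1 /LOG:C:\logs\dept-final.log
    net share dept /delete
    rem Recreate the share on E: with the same share permissions as before
    rem (the /grant value here is only an assumed example)
    net share dept=E:\dept /grant:"Domain Users",CHANGE

Clients keep using the same \\server\share and P: paths; only the local path behind the share changes.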
August 18th, 2012 9:55am

First, I don't want to use the DFS Replication feature; I only want to use its linking feature, and I am not sure whether it works. For example: keep the original shared folder D:\dept\sales and its sub-folders, create a shared folder E:\dept\sales on the fresh disk, then create a stand-alone namespace, so that the namespace \\server\shared\dept\sales is made up of the original folder as well as the new shared folder. When a client uploads new files to the mapped drive, the data would be saved in E:\dept\sales rather than D:\dept\sales because the latter has less free space. Is that approach plausible or practical? (See the sketch below.) Second, regarding your method: do you mean that I prepare one LUN with a copy of all the file data, switch the mapped drive to the new LUN later, and finally decommission the original disk? Could you share an article about this "hack & slash" method? I need the detailed configuration or steps. Thanks.
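For what it is worth, a stand-alone namespace with those two folders would look roughly like this with dfsutil (server, share, and folder names are assumptions). Each DFS folder points at a fixed target path, and a referral does not pick a target based on free space, so on its own this does not make new files land on the emptier disk:

    rem Create a stand-alone namespace root (the share \\FILESVR\files must already exist)
    dfsutil root addstd \\FILESVR\files
    rem Folder dept\sales in the namespace, pointing at the existing data (shared from D:\dept\sales)
    dfsutil link add \\FILESVR\files\dept\sales \\FILESVR\sales
    rem A separate folder for new data (shared from E:\dept\sales on the fresh disk)
    dfsutil link add \\FILESVR\files\dept\sales-new \\FILESVR\sales-new

If both the D: and E: shares were instead added as targets of the same DFS folder, clients would simply be referred to one of them; without DFS Replication the two copies would diverge rather than merge.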
August 19th, 2012 10:37am

No reply yet?
August 21st, 2012 11:00pm

You are right about the second method. Run robocopy multiple times to make sure all files are up to date, then at night switch your file share and your DB to the new location. For the first method, what I have seen in the past was a volume mount point: the mapping was done with the disk management tools, so that e.g. d:\xxxxx\ccc pointed to another disk. I don't recommend that approach. It is hard to diagnose problems afterwards (disk access speed, errors, etc.), and if, for example, you leave your job and the disk fails later, it can create a complex situation for the next admin to debug. (http://support.microsoft.com/kb/947021, http://technet.microsoft.com/en-us/library/cc960726.aspx (volume mount points)) I have seen admin mistakes like mounting a single disk inside a RAID 5 volume (e.g. D:\ is RAID 5, but D:\test1\ is extended onto an unprotected disk), so the data was lost when that single disk died.
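For completeness, the mount-point approach mentioned above looks roughly like this (the folder and the volume GUID are placeholders; list the real volume names by running "mountvol" with no arguments). As noted, it silently mixes disks with different protection levels, which is why it is easy to lose just the mounted folder when that single disk dies:

    rem Mount the new disk's volume into an empty NTFS folder instead of a drive letter.
    md D:\dept\archive
    rem The volume GUID below is a placeholder; take the real one from the "mountvol" listing.
    mountvol D:\dept\archive \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\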
August 22nd, 2012 9:32am
