Maximising throughput of a team in a converged networking scenario using the Hyper-V virtual switch

Hi,

Recently I have been looking into the converged network scenario. I have read quite a few resources about this, but I am struggling to make sense of how to maximise the bandwidth I have available. I understand that a single TCP stream can only go down one physical NIC, so even if I had a team of 6 x 1 Gbps NICs the team would total 6 Gbps, but a single transfer will never go faster than 1 Gbps. I'm pretty sure that's right; that's how I understood it, anyway.
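To picture why one stream stays on one team member, here is a toy Python sketch of address-hash style flow placement. The hash function, NIC names and 4-tuple key are illustrative assumptions, not the actual Windows LBFO algorithm:

```python
# Toy model of address-hash NIC teaming: a flow's placement comes from
# hashing its 4-tuple, so every packet of one TCP stream lands on the
# same physical NIC (illustrative only, not the real Windows hashing).
import hashlib

TEAM = [f"pNIC{i}" for i in range(1, 7)]  # 6 x 1 Gbps team members

def pick_nic(src_ip, src_port, dst_ip, dst_port, team=TEAM):
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    digest = int(hashlib.md5(key).hexdigest(), 16)
    return team[digest % len(team)]

# The same flow always hashes to the same member...
flow_a = pick_nic("10.0.0.1", 50000, "10.0.0.9", 3260)
assert flow_a == pick_nic("10.0.0.1", 50000, "10.0.0.9", 3260)
# ...so that single transfer is capped at that member's 1 Gbps, while
# different flows (different 4-tuples) can land on different members.
```

So the 6 Gbps aggregate is only reachable with several concurrent flows that happen to hash onto different members.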

To make a converged network, I team the adapters together, create a virtual switch on the team, and then create virtual NICs off that virtual switch. Each vNIC can be assigned a different purpose, such as iSCSI, data sync and backup traffic. In the diagram below I have shown this configuration; each green pNIC is only 1 Gbps, and on my blue vNICs I use the minimum bandwidth weight configuration.
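For reference, minimum bandwidth weight works out as a proportional share of the switch's aggregate bandwidth. The sketch below shows the arithmetic; the vNIC names and weight values are made-up examples, not taken from the setup above:

```python
# Sketch of Hyper-V minimum-bandwidth-weight arithmetic: each vNIC's
# guaranteed share is its weight divided by the sum of all weights,
# applied to the aggregate bandwidth. Names and weights are examples.
TEAM_GBPS = 6.0  # 6 x 1 Gbps members

weights = {"ISCSI": 40, "DataSync": 30, "Backup": 20, "Mgmt": 10}

total = sum(weights.values())
guaranteed = {name: TEAM_GBPS * w / total for name, w in weights.items()}
# ISCSI is guaranteed 6.0 * 40/100 = 2.4 Gbps of the aggregate, but a
# single TCP stream on that vNIC is still capped by the 1 Gbps member
# it is hashed onto.
```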

In my example below I have one iSCSI vNIC. If a physical NIC failed, I understand that to mean the bandwidth won't reduce for iSCSI, because a single stream can only go down one NIC anyway; the team will simply reduce to 5 Gbps. No vNIC will go faster than 1 Gbps.

If this is correct, then how would I increase bandwidth to the disk system using iSCSI? Is it simply a case of creating an additional vNIC for iSCSI traffic, adding it to the same team and configuring MPIO, so the traffic eventually ends up going through different pNICs? In my mind, MPIO can't round-robin iSCSI data to more physical NICs directly, because the iSCSI initiator and MPIO only know about the vNICs; I assume it is the team that ultimately handles the placement of each stream onto the pNICs.
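The idea above can be modelled in a few lines. This is a toy round-robin over two hypothetical iSCSI paths (one per vNIC), not real MPIO code; the path and block names are invented:

```python
from itertools import cycle

# Toy model of MPIO round-robin: I/O requests alternate across the
# available paths, one path per iSCSI vNIC. Names are illustrative.
paths = ["vNIC-iSCSI-A", "vNIC-iSCSI-B"]
rr = cycle(paths)

io_queue = [f"block-{i}" for i in range(6)]
placement = [(blk, next(rr)) for blk in io_queue]
# block-0 -> A, block-1 -> B, block-2 -> A, ...
# Each path is its own TCP session, so the team can hash the two
# sessions onto different 1 Gbps pNICs, giving up to ~2 Gbps aggregate.
```

The key point is that MPIO creates extra sessions; the team then has something it can spread across members.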

Do I simply do the same for Data Sync and Backup traffic?

I'm not too concerned about the VMNET team; I am mostly focused on getting the best out of the core cluster resources I have. I haven't shown the physical switches to the right of the diagram, but these are already configured to accept the VLAN traffic on the ports, and the same is all configured for the other half of the solution.

I'm just a little confused over how I could potentially achieve the 6 Gbps I have at my disposal. In this configuration and testing so far, it seems quite difficult for me to exceed about 1.5-2 Gbps combined across all the vNICs. In this particular test server I am limited to one physical disk, so it's hard to gauge the performance; the disk will probably take about 200 MB/s (roughly 1.6 Gbps) maximum, and that's with iSCSI (file copying) and data sync going on between two servers at the same time.
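A quick unit sanity check suggests the single disk, not the network, explains the observed ceiling:

```python
# MB/s to Gbps conversion (1 byte = 8 bits, decimal units).
disk_mb_per_s = 200
disk_gbps = disk_mb_per_s * 8 / 1000
print(disk_gbps)  # 1.6 Gbps, right in the observed 1.5-2 Gbps range
```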

Many thanks for your help

Steve


  • Edited by Steve Mills Thursday, April 30, 2015 1:07 PM
April 30th, 2015 1:04pm

Hi,
 
According to your description, my understanding is that you want to confirm whether it is possible to achieve 6 Gbps for a single transfer.

I am afraid that it cannot be done.

With network teaming you get more concurrent users and higher aggregate speed: a single session is carried on a single NIC and moves data at that NIC's maximum speed. If there are multiple jobs, each job creates a new session, and in theory each session can reach the NIC maximum, so in practice the team performs better than a single link. But the maximum on any one NIC is still 1 Gbps, and it is impossible for one session to exceed the maximum of the NIC it is placed on.
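The point above can be sketched with a simple throughput model. This assumes the ideal case where parallel sessions are hashed onto different team members:

```python
# Model: a session's throughput is capped by the 1 Gbps member NIC it
# rides on; the team's aggregate is the sum across members.
LINK_GBPS = 1.0

def session_throughput(demand_gbps):
    # one session rides one member, so it is capped at that link's speed
    return min(demand_gbps, LINK_GBPS)

one_big_copy = session_throughput(6.0)  # capped at 1.0, never 6 Gbps
# Six parallel sessions, assuming they hash onto six different members:
six_parallel = sum(session_throughput(1.0) for _ in range(6))  # up to 6.0
```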

Best Regards,
Eve

May 3rd, 2015 11:23pm

