
I recently got around to measuring effective throughput on a fairly large switched network. I measured it with two laptops, both running Iperf: one acting as the server, the other as the client. I made sure to measure both the uplink and downlink throughput. My problem is that some 100 Mbit/s paths measured anywhere from 50 to 80 Mbit/s. That seems rather low, even taking overhead and plenty of active users into account.

Some useful information:

The network uses RSTP and has no routed hops. There are at least 100 active users along the path from the Iperf client to the server. The throughputs were measured on paths with at least three 100 Mbit/s switches between the two laptops. I measured using TCP.

So my question can be summarized as: are these values to be expected?

Another question: I also got only about 250-300 Mbit/s on a Gigabit switch with both nodes plugged directly into it, using regular straight-through cables for both. That can't be expected either, even though the switch is in use by other machines, right?

Dave
  • Switched networks are almost invariably wire-speed, excluding microbursting. I would suspect your measurement kit may not scale to the required rates. Also, the network should always be measured with UDP; TCP tends to measure the hosts' TCP stack implementation. – ytti Sep 06 '13 at 12:06
  • @ytti, I would say your response is true as long as you confine the hosts to one TCP socket each. If you use iperf to open five parallel TCP sockets between the same hosts, you will saturate the link as long as the host isn't limited by the CPU or PCI bus. – Mike Pennington Sep 06 '13 at 12:18
  • Thanks for the reply! Both laptops have Gigabit network interfaces, and Iperf supports well-over 100 Mbit/s. I measured using TCP because that's what most users will be using, so I'm not really interested in the absolutely highest throughput possible, but rather what an average user can expect. Does that make sense? – Dave Sep 06 '13 at 12:22
  • @Dave, if your goal is to quantify the average user experience, you just made these measurements very situational and application-dependent... iperf won't measure that. – Mike Pennington Sep 06 '13 at 12:28
  • It's indeed very situational, but each individual measurement isn't as important as comparing several of them to each other. Iperf will measure the highest throughput on a given path using TCP, right? – Dave Sep 06 '13 at 12:31
  • The point is that you should not assume your business apps behave like iperf – Mike Pennington Sep 06 '13 at 12:34
  • Also be aware of your window size when testing TCP on high-speed links. You generally want to run multiple sessions (-P) at the same time. – mellowd Sep 08 '13 at 12:01
  • @dave, are you looking to measure only network performance, or what kind of performance the users experience when they use your business applications? – Mike Pennington Sep 09 '13 at 09:56
  • I would suggest checking the speed and duplex settings on the trunk ports (i.e. full duplex at 100 or 1000). A duplex mismatch could be the problem; enable terminal monitoring to see if one is reported. I would also check the RSTP path: with three or more switches, throughput can suffer if the root bridge for the relevant VLAN is not selected appropriately to give the best path to the end device (command: show spanning-tree). Also check the VLANs allowed on the trunks - this can also keep traffic off the best path to the root bridge. – alex_da_gr8 Sep 11 '13 at 12:54
  • What are the make and model of the switches? Some cheaper switches will link at a gigabit but don't have enough bandwidth on the backplane to transfer at 1 Gbps. – Epaphus Sep 11 '13 at 17:55
  • Even if the backplane bandwidth is enough, some traffic patterns can kill a switch. I hit a situation where an iSCSI SAN on a switch with small buffers performed sub-par: on some Cisco 3750 variants the ASIC's 2 MB buffers are simply overwhelmed by the disks' traffic when connected to an enterprise iSCSI SAN, and we had loads of queue drops. Also, are any of the trunks (links between switches) on the path you're using saturated? – Remi Letourneau Sep 11 '13 at 20:12
  • (Continued) Regarding your speed test with two laptops on the same switch: some NICs, even if they support Gigabit Ethernet, cannot transmit at full gigabit speed due to various limitations (OS, drivers, buffers, chipset...). Be sure that's not the case with your laptops. – Remi Letourneau Sep 11 '13 at 20:18
  • Did any answer help you? If so, you should accept the answer so that the question doesn't keep popping up forever, looking for an answer. Alternatively, you can post and accept your own answer. – Ron Maupin Jan 05 '21 at 01:29

3 Answers


Throughput is the amount of data transferred from point A to point B in a given period of time. The significant variables in throughput are latency, packet size, and retransmissions (loss). The relationship between them is captured by the Mathis equation.

http://www.slac.stanford.edu/comp/net/wan-mon/thru-vs-loss.html
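
As a rough sanity check, the Mathis formula (throughput ≤ MSS / (RTT × √p), where p is the packet loss rate) can be evaluated with your own numbers. A minimal Python sketch, with made-up example values for MSS, RTT, and loss:

```python
import math

def mathis_throughput_bps(mss_bytes, rtt_s, loss_rate):
    """Upper bound on a single TCP flow per the Mathis formula:
    throughput <= MSS / (RTT * sqrt(p))."""
    return (mss_bytes * 8) / (rtt_s * math.sqrt(loss_rate))

# Made-up example values: 1460-byte MSS, 2 ms RTT on a LAN, 0.01% loss.
print(mathis_throughput_bps(1460, 0.002, 0.0001) / 1e6, "Mbit/s")
```

Plug in a measured RTT and loss rate to see how far below wire speed a single stream can be expected to land.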

TCP basics: a connection is established with a SYN, the receiver responds with a SYN-ACK, the sender replies with an ACK, and then data flows. When the window (the setting that says how much data can be sent before an acknowledgement of receipt is required) fills up, another round of acknowledgements is needed to start the next flow of data.
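
That window is also a ceiling in itself: a single TCP stream can never move more than one window of data per round trip. A minimal sketch of that limit, with assumed example values for window size and RTT:

```python
def tcp_window_limit_bps(window_bytes, rtt_s):
    """A single TCP stream can transfer at most one window per round trip."""
    return window_bytes * 8 / rtt_s

# Assumed example values: 64 KB window, 5 ms round-trip time.
print(tcp_window_limit_bps(64 * 1024, 0.005) / 1e6, "Mbit/s")  # ~105 Mbit/s
```

This is why the comments above suggest watching the window size and running multiple parallel sessions (-P) when testing.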

If your environment has the default MTU of 1500 and you introduce a device with an MTU of 1460, this will slow down your network, because packets flowing through that device will be fragmented: when a 1500-byte packet hits the device with the 1460 MTU, the device fragments it into two packets and transmits both.

If you have jumbo frames enabled and your MTU is 9220, you can get significantly higher throughput: each packet carries roughly six times as much payload while having the same per-packet latency and header overhead.
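
To put rough numbers on that, here is a small sketch comparing standard and jumbo frames; the header sizes and the 100 MB transfer are illustrative assumptions (preamble and inter-frame gap ignored):

```python
IP_TCP_HEADERS = 20 + 20   # IPv4 + TCP header bytes (no options)
ETH_OVERHEAD = 14 + 4      # Ethernet header + FCS

def frames_and_efficiency(mtu, data_bytes):
    """Frames needed to move `data_bytes` and the payload share of on-wire bytes."""
    payload_per_frame = mtu - IP_TCP_HEADERS
    frames = -(-data_bytes // payload_per_frame)          # ceiling division
    efficiency = payload_per_frame / (mtu + ETH_OVERHEAD)
    return frames, efficiency

for mtu in (1500, 9000):
    frames, eff = frames_and_efficiency(mtu, 100 * 1024 * 1024)  # a 100 MB transfer
    print(f"MTU {mtu}: {frames} frames, {eff:.1%} payload efficiency")
```

The on-wire efficiency gain is modest, but the roughly six-fold drop in frame count is what helps hosts and devices that are limited by per-packet processing.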

In short: throughput is conditional and depends on both host and network device settings. Use the Mathis equation as a guide for what you could expect given the data points for your network. Validate your findings and double-check units: megabits and megabytes (or kilobits and kilobytes) are not the same.

user3159

Test with UDP to get an accurate throughput measurement. Also, you'll need to increase the UDP send rate above the default to get a good measurement. With TCP, you'll be dealing with windowing, which will give you an artificially low number for your purposes. I typically try to send UDP above the known link speed. You'll then see your send rate as X and your actual throughput as some number less than X.
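
If you want to see the idea without iperf, here is a toy Python sketch of the same approach (offer UDP at a chosen rate, count what actually arrives); the port, rate, duration, and payload size are arbitrary example values:

```python
import socket
import time

def udp_receiver(port=5001, duration=10):
    """Count the bytes that actually arrive over `duration` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(1.0)
    received, end = 0, time.time() + duration
    while time.time() < end:
        try:
            data, _ = sock.recvfrom(65535)
            received += len(data)
        except socket.timeout:
            pass
    print(f"delivered ~{received * 8 / duration / 1e6:.1f} Mbit/s")

def udp_sender(host, port=5001, rate_mbps=120, duration=10, payload=1400):
    """Offer roughly `rate_mbps` of UDP traffic for `duration` seconds."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    datagram = b"x" * payload
    gap = payload * 8 / (rate_mbps * 1e6)   # seconds between datagrams
    sent, end, next_send = 0, time.time() + duration, time.time()
    while time.time() < end:
        sock.sendto(datagram, (host, port))
        sent += len(datagram)
        next_send += gap
        delay = next_send - time.time()
        if delay > 0:
            time.sleep(delay)
    print(f"offered ~{sent * 8 / duration / 1e6:.1f} Mbit/s")
```

Run the receiver on one laptop and the sender on the other, pointed at the receiver's address; the gap between the offered and delivered rates is your loss. In practice iperf's UDP mode does the same thing with much better pacing.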

Ryan

You can disable Nagle's algorithm with iperf using the -N option for "TCP no delay". As per the documentation: "set TCP no delay, disabling Nagle's Algorithm".

I would highly recommend using this option with concurrent TCP sessions to simulate something closer to ACTUAL network traffic rather than shooting a UDP blast.
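
For context, iperf's -N simply sets the standard TCP_NODELAY socket option; a minimal sketch of what that looks like on an ordinary socket (the server address and port are placeholders):

```python
import socket

# Disable Nagle's algorithm on a client socket, as iperf's -N flag does.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("192.0.2.10", 5001))   # placeholder server address and port
sock.sendall(b"small writes now go out immediately instead of being coalesced")
sock.close()
```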

John Kennedy
  • This reads more as a long comment and not an answer. If I convert it to a comment, it is too long and will be truncated. Could you please edit to make this an answer or remove it and add your comments where they are appropriate (requires 50 reputation)? – YLearn Nov 13 '13 at 18:06