
I'm trying to learn more about how a network (both LAN and Internet) handles traffic, and one part of this is the question of parallelism.

In the documentation for Speedtest.net, in describing how exactly the test works, it mentions that "up to 4 threads" can be used, without describing what those threads actually consist of.

My question is, what determines the degree of parallelism that can occur across a network? I know once a signal reaches the destination server, it's all up to that server's resources and the receiving program. But how does parallelism work across the network?

For example, does the network interface card take the request and split it up for quicker transfer across the cable? Or does it just package everything up into larger chunks and let a router/switch do that?

Specifically, I'm trying to understand this in a business-to-business network sense, so I'm not worried about things like a local cable modem or local DSL adapter.

Sean Long
  • Very broad topic. The DSL you have at home is highly parallel between your home modem and the far-end DSLAM (different symbols are sent on different frequencies). Your wireless at home is parallel. Optical Ethernet is serial unless you go above 10GE (and even 10GE over short-range multimode can be parallel). Once a packet is actually on the Internet, there are various technologies like ECMP and 802.1AX that introduce parallelism, but it's not within a single packet. Hopefully someone can come up with a more cohesive answer. – ytti Jun 13 '13 at 14:11
  • I'll try to clarify a bit; I didn't realize it was so dependent on the local setup (my thought was that, if anything, the NIC helps split it up). – Sean Long Jun 13 '13 at 14:18
  • Oh, and none of this has anything to do with the speedtest threads; that parallelism is not visible to the network at all. It is just slow code pushing packets out, and to offset the slow code you run more of it at the same time, but it introduces no packet-level parallelism. – ytti Jun 13 '13 at 14:21

2 Answers


what determines the degree of parallelism that can occur across a network?

Let's baseline some stuff...

Speedtest.net sets up 4 parallel TCP sockets in JavaScript between the web browser and their bandwidth server. Speedtest.net transfers bulk data over these sockets and then times the results to get throughput measurements.

Four parallel TCP streams help overcome latency and packet loss in the path between the speedtest client and server... parallel TCP streams are much better at overcoming these challenges than a single TCP socket... See this superuser answer for an example of how you can do this from the linux command-line.
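As a rough illustration of the same idea (not Speedtest.net's actual code), the minimal Python sketch below opens several TCP streams in parallel, pulls bulk data over each, and times the aggregate; the URL is a hypothetical placeholder you would point at a large file on a server you control.

    # Minimal sketch: several parallel TCP streams, timed together to estimate
    # aggregate throughput. TEST_URL is a hypothetical placeholder.
    import time
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    TEST_URL = "http://example.com/testfile"  # replace with a real bulk-data URL
    STREAMS = 4                               # number of parallel TCP sockets

    def download(url):
        # One TCP connection, one bulk transfer; return the byte count.
        with urllib.request.urlopen(url) as resp:
            return len(resp.read())

    start = time.time()
    with ThreadPoolExecutor(max_workers=STREAMS) as pool:
        totals = list(pool.map(download, [TEST_URL] * STREAMS))
    elapsed = time.time() - start

    mbps = sum(totals) * 8 / elapsed / 1e6
    print(f"{STREAMS} streams, {sum(totals)} bytes in {elapsed:.2f}s ~ {mbps:.1f} Mbit/s")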

It doesn't really matter whether the parallel streams run between the same hosts or different hosts; Ethernet can handle thousands of simultaneous parallel streams... it doesn't even know how many parallel sockets exist beyond the bandwidth they consume.

Mike Pennington

I'm assuming you're speaking mainly about 10/100/1000 Ethernet. The key thing to understand is that, as far as Ethernet is concerned, only one frame can be sent at a time (one being sent and one being received if the link is operating in full duplex). There is no "parallelism" in the sense of more than one frame leaving the NIC at any given time. This is where a discussion of buffers and queuing comes into play, which I won't get into here.

The system using the network card is capable of sustaining many different flows of traffic, destined for many different locations (or all to the same location). However, those flows don't leave the NIC at the same time; their frames are interleaved one after another on the wire.
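To make that concrete, here is a minimal sketch of one host driving several TCP flows at once; the 192.0.2.x destinations are TEST-NET placeholders, so substitute hosts and ports you actually control. The flows are concurrent from the application's point of view, but the kernel still hands the NIC one Ethernet frame at a time.

    # Minimal sketch: many concurrent TCP flows from one host. The application
    # sees parallel sockets; the NIC still serializes frames onto the wire.
    import socket
    import threading

    DESTINATIONS = [("192.0.2.10", 9000), ("192.0.2.11", 9000), ("192.0.2.12", 9000)]

    def push(dest):
        # One flow: connect and stream roughly frame-sized writes.
        with socket.create_connection(dest, timeout=5) as s:
            for _ in range(1000):
                s.sendall(b"x" * 1400)

    threads = [threading.Thread(target=push, args=(d,)) for d in DESTINATIONS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()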

I would recommend reading the Wikipedia article on Ethernet or checking out the O'Reilly book Ethernet: The Definitive Guide for more information. Both link you out to many relevant sources.


As ytti pointed out in the comments above, this can also vary greatly based upon the datalink/network technologies we're discussing.

Brett Lykins