16

Consider the following scenario:

I have two PCs (PC1 and PC2) that want to transmit at the same time to PC3 through a full-duplex Ethernet switch. Let's assume that all ports are in the same VLAN. What happens internally in the switch? Who transmits first to PC3?

I have read that CSMA/CD was used, but only in earlier Ethernet versions that operated in half-duplex: each port of the switch was a collision domain, and if two machines attempted to transmit at the same time, a backoff algorithm gave each computer a random time to retransmit and so resolved the collision. However, I read that in a full-duplex switch the possibility of a collision is eliminated, so if two PCs attempt to transmit at the same time, what happens internally in the switch? Does the switch execute an algorithm to choose who transmits first?
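For reference, here is a minimal sketch of the half-duplex backoff behaviour described above (classic truncated binary exponential backoff; the 51.2 µs slot time is the 10 Mb/s Ethernet value, and the station names are just the PCs from the example):

```python
import random

SLOT_TIME_US = 51.2        # slot time of classic 10 Mb/s Ethernet (512 bit times)
MAX_BACKOFF_EXPONENT = 10  # 802.3 truncates the exponent at 10

def backoff_delay(collision_count: int) -> float:
    """Truncated binary exponential backoff: after the n-th collision,
    wait a random number of slot times in [0, 2**min(n, 10) - 1]."""
    k = min(collision_count, MAX_BACKOFF_EXPONENT)
    slots = random.randint(0, 2 ** k - 1)
    return slots * SLOT_TIME_US

# Two stations that just collided pick independent random delays,
# so they (very probably) retransmit at different times.
for station in ("PC1", "PC2"):
    print(station, "waits", backoff_delay(1), "microseconds before retrying")
```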

John Jensen

4 Answers

14

The switch will fully load the incoming frames of data from the two sending systems into its buffer(s). I'm not sure exactly how it determines which frame is first in the queue for subsequent forwarding, but it's probably based on the time at which the beginning of each frame was received. The switch then works through the transmit buffer queue, sending the frames out one by one onto the destination port/segment.

There's no issue with frames "running into each other." The real issue is whether the destination port/segment can accept the frames fast enough. (And, of course, whether the switch can process its buffers/queues fast enough.)
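A minimal sketch of that store-and-forward idea (purely illustrative; no particular switch ASIC works exactly like this): frames are ordered by the time their first bit was received and drained one at a time from the queue feeding the egress port.

```python
import heapq
from collections import namedtuple

# Hypothetical frame record: arrival time of the first bit, ingress port, length.
Frame = namedtuple("Frame", ["arrival_time_ns", "ingress_port", "length_bytes"])

# Transmit queue for the PC3-facing egress port, ordered by receive time.
egress_queue = []

# PC1's frame starts arriving a hair earlier than PC2's.
heapq.heappush(egress_queue, Frame(1000, "port-PC1", 1500))
heapq.heappush(egress_queue, Frame(1040, "port-PC2", 1500))

# The switch drains the queue one frame at a time onto PC3's port,
# so the two frames never "run into each other" on the wire.
while egress_queue:
    frame = heapq.heappop(egress_queue)
    print(f"forwarding {frame.length_bytes}-byte frame from {frame.ingress_port} "
          f"(start of frame received at t={frame.arrival_time_ns} ns)")
```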

Craig Constantine
  • ALL: ...but I'm wondering: Do any switches have the capability/feature of beginning frame transmission onto the next port/segment, while still receiving the frame from the sending segment? – Craig Constantine Jun 04 '13 at 15:28
  • 9
    Yes, there are cut-through and fragment-free switching, mainly used in high-speed trading environments. Cut-through will start transmitting as soon as the DST MAC is known. Fragment-free makes sure that the frame is not a collision fragment, so it needs to receive 64 bytes before transmitting. – Daniel Dib Jun 04 '13 at 15:35
  • 2
    Yes, I think that would be called cut-through forwarding by some, as opposed to store-and-forward. In a cut-through approach a switch could start forwarding as soon as it has received and looked up the destination MAC, but hybrid approaches exist where it waits just a bit longer, e.g. to be able to look at an IP address to check an outbound ACL. (The EtherType field would tell it whether an IP address is present in the frame.) – Gerben Jun 04 '13 at 15:36
  • 1
    ^^^ This. This is why NE rocks. – Craig Constantine Jun 04 '13 at 15:38
  • Indeed, was just about to post about cut-through and Daniel beat me to it. – David Rothera Jun 04 '13 at 15:55
  • 4
    Since no one mentioned it, the drawback to cut-through is the odd case of a frame error. While cut-through does reduce the latency slightly for the initial frame (the bigger the frame, the bigger the impact), it forwards the frame regardless of whether the frame is valid, because it does so without receiving the full frame and being able to verify the FCS. Store-and-forward switches receive the full frame and can check the FCS before forwarding, allowing them to drop invalid frames. – YLearn Jun 04 '13 at 16:08
  • Come to think of it, when did the industry shift away from cut-through? And with what platforms? – generalnetworkerror Jun 04 '13 at 18:06
  • 1
    What you've labeled "fragment free" is called "modified cut-through" by most switch makers. It receives the first 64B to detect runts, then latches transmit to the dst port. If the port is unavailable (busy, congested, etc.) it switches to store-and-forward (fully buffers the packet). I am unaware of anyone who does _only_ store-and-forward -- it's suicide for latency. – Ricky Jun 04 '13 at 18:44
  • 2
    Most switches are store-and-forward only; cut-through made a comeback a few years ago because it's easy to sell to the financial world. Store-and-forward on 10G adds about 1.2 µs of latency, i.e. roughly 235 m of fibre (a worked version of this arithmetic is sketched after these comments). Also, ingress and egress cannot be different speeds with cut-through. – ytti Jun 04 '13 at 19:12
  • There isn't (IMHO) much difference between store and forward and cut-through when using a buffer with simultaneous read and write. Assuming the port speeds are the same, the output port can begin sending the buffered packet while it is still being received on the input port, after the first 64B or so. The buffer would be divided into frames so that other incoming packets can be written into their own independent buffer frames. – Zan Lynx Jun 04 '13 at 23:37
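To make the 1.2 µs / 235 m figure from the comments above concrete, here is the arithmetic, assuming a full-size 1500-byte frame and roughly 5 ns of propagation per metre of fibre (the exact equivalent distance depends on those assumptions):

```python
FRAME_BYTES = 1500        # assumed full-size Ethernet frame
LINK_BPS = 10e9           # 10GE line rate
CUT_THROUGH_BYTES = 64    # fragment-free forwards after the first 64 bytes
FIBRE_NS_PER_M = 5        # ~5 ns of propagation per metre of fibre (approximate)

# Store-and-forward must serialize the whole frame before forwarding it.
store_and_forward_us = FRAME_BYTES * 8 / LINK_BPS * 1e6   # = 1.2 us
# Fragment-free cut-through only waits for the first 64 bytes.
cut_through_us = CUT_THROUGH_BYTES * 8 / LINK_BPS * 1e6   # ~= 0.05 us

# Express the store-and-forward delay as an equivalent length of fibre.
equivalent_m = store_and_forward_us * 1000 / FIBRE_NS_PER_M   # ~= 240 m

print(f"store-and-forward: {store_and_forward_us:.2f} us (~{equivalent_m:.0f} m of fibre)")
print(f"fragment-free:     {cut_through_us:.3f} us")
```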
6

Very interesting question which unfortunately does not have any single correct answer, as the exact solution varies from hardware to hardware.

However, this problem is explicitly discussed in Computer Networks: A Systems Approach on pages 231-232.

The gist of the solution for a design called the 'Sunshine Switch' is that the data path is input -- batcher -- trap -- selector == banyan == outputs, and there is a delay box which connects the selector back to the batcher. And I quote:

When more than l (ed. size of banyan) packets are destined for a single output in the same cycle, they are recirculated through the delay box and resubmitted to the switch in the next cycle.

And further:

The trap network identifies those packets that will be able to exit the switch through the banyans (up to l of them per output port) and marks the rest for recirculation.
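A toy sketch of the recirculation rule in that quote (my own illustration, not from the book): at most l packets per output port exit in a cycle, and the trap sends the rest back through the delay box to be resubmitted in the next cycle.

```python
from collections import defaultdict

L = 1  # 'l' in the quote: packets that can exit per output port per cycle

def switch_cycle(packets):
    """One cycle of a Sunshine-style switch. `packets` is a list of
    (output_port, description) tuples; returns (delivered, recirculated)."""
    per_output = defaultdict(list)
    for pkt in packets:
        per_output[pkt[0]].append(pkt)

    delivered, recirculated = [], []
    for queued in per_output.values():
        delivered.extend(queued[:L])      # up to l exit through the banyans
        recirculated.extend(queued[L:])   # the trap marks the rest for the delay box
    return delivered, recirculated

# PC1's and PC2's frames both target PC3's output port in the same cycle:
# one exits immediately, the other is recirculated and exits one cycle later.
pending = [("port-PC3", "frame from PC1"), ("port-PC3", "frame from PC2")]
cycle = 1
while pending:
    delivered, pending = switch_cycle(pending)
    print(f"cycle {cycle}: delivered {delivered}, recirculated {pending}")
    cycle += 1
```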

ytti
3

There will always be SOME difference in when the two computers send to the third. Unless you are doing something special on the switch, frames are forwarded on a FIFO basis, so whichever frame arrives first will be transmitted first.

David Rothera
  • 1
    As the sending PCs are each on their own wire, there is nothing preventing frames from arriving at exactly the same time, down to the precision of the clock frequency the hardware is running at. I guess it's up to the switch ASIC designer to decide what to do in that case, but I'd guess it'll read frames off the ports in round-robin fashion (a toy arbiter along those lines is sketched after these comments). – ytti Jun 04 '13 at 15:39
  • 2
    Good point, I meant more that the chance of two frames arriving at *exactly* the same time is pretty low. As you mention, it will probably be down to the ASIC design, and I'm pretty sure it won't be documented anywhere unless you jump through a series of hoops with your accounts team. – David Rothera Jun 04 '13 at 15:57
  • He who interrupts first wins, assuming all else equal. – generalnetworkerror Jun 04 '13 at 18:10
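To illustrate the round-robin guess from the comments above (purely hypothetical; the real arbitration logic is an undocumented ASIC detail), here is a toy rotating-priority arbiter that breaks exact ties between ingress ports:

```python
class RoundRobinArbiter:
    """Toy rotating-priority arbiter: when several ingress ports present a
    frame in the same clock cycle, the port after the last winner goes first."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.last_winner = num_ports - 1  # so port 0 wins the very first tie

    def grant(self, requesting_ports):
        requesting = set(requesting_ports)
        # Scan the ports starting just after the previous winner.
        for offset in range(1, self.num_ports + 1):
            port = (self.last_winner + offset) % self.num_ports
            if port in requesting:
                self.last_winner = port
                return port
        return None  # no port presented a frame this cycle

arbiter = RoundRobinArbiter(num_ports=2)
# PC1 (port 0) and PC2 (port 1) present frames in the same cycle, twice in a row:
print(arbiter.grant([0, 1]))  # -> 0: PC1's frame goes first
print(arbiter.grant([0, 1]))  # -> 1: next tie, PC2's frame goes first
```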
0

The switch forwards one frame at a time as frames enter it, so there are no collisions. PC3 will then process the packets from PC1 and PC2, dividing its CPU time between them. Windowing and buffering will control the communication flow.

Jon Rob