What is the relationship between a switch's buffer size and its effect on latency? I'm aware of the bufferbloat issue and understand that buffers that are too large can degrade performance in networks with slower links (i.e. less than 1 Gbps), but how large is too large? I've also read that networks built from 10 Gbps and faster links usually suffer from too little buffering. Assuming that's true, why is that?
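
To show how I've been reasoning about "too large", here's my own back-of-the-envelope sketch of the worst-case queuing delay a full output buffer adds (the buffer sizes are hypothetical values I picked for illustration, not taken from any particular switch):

```python
# Back-of-the-envelope: worst-case queuing delay added by a full output buffer.
# delay = buffer size (bits) / link rate (bits per second)

def max_queuing_delay_ms(buffer_bytes: int, link_bps: float) -> float:
    """Time to drain a completely full buffer onto the wire, in milliseconds."""
    return buffer_bytes * 8 / link_bps * 1000

# Hypothetical per-port buffer sizes, purely for illustration.
for buffer_mb, link_gbps in [(12, 1), (12, 10), (64, 10), (64, 100)]:
    delay = max_queuing_delay_ms(buffer_mb * 1_000_000, link_gbps * 1e9)
    print(f"{buffer_mb} MB buffer on a {link_gbps} Gbps port -> "
          f"up to {delay:.1f} ms of added latency when full")
```

By that arithmetic the same buffer that adds ~96 ms on a 1 Gbps port adds under 10 ms at 10 Gbps, which is roughly where my confusion about "too large vs. not enough" comes from.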
I understand the answer may depend heavily on the workload, but any general guidelines or rules of thumb that could provide a foundation would be appreciated.