Buffering between networks of disparate speeds

joeS57
Hi all,

Fairly new to Linux. How does/can Linux handle buffering between networks of disparate speeds? That is, if I have a Gigabit Ethernet interface and want to forward data arriving at that interface to, say, a 10/100 network, how can one specify enough buffering to absorb bursty traffic arriving on the Gigabit interface? I'm assuming the average arrival rate of the bursty data is no greater than the 10/100 rate, so that no data would be lost (buffer overrun) if enough buffers were available to ride out a burst of X seconds.

Thanks for any insight you could provide.
 


There are multiple levels of caching.
First, the NIC itself usually has a very small buffer. After that, the OS/kernel will start buffering it in RAM.
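If you want to poke at those buffers yourself, ethtool can show the NIC's ring buffer sizes (if the driver supports it), and ip link shows the kernel's transmit queue. A rough sketch, assuming the interface is named eth0 (yours may be enp3s0 or similar):

Bash:
# Show the NIC's RX/TX ring buffer sizes and their hardware maximums
ethtool -g eth0

# Show the interface's transmit queue length (the qlen field)
ip link show eth0

# Enlarge the transmit queue to absorb bigger bursts (value is only an example)
ip link set eth0 txqueuelen 10000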

If you think about it, this usually happens on writes to hard drives anyway.
The bottleneck is rarely a network port. Usually it is the write to the hard drive that slows you
down, even with SSD drives.

Typically the worst that happens if you fill up all the buffers is that the receiving side (or sometimes even the application itself) says... whoa... slow down. With TCP this is usually done via "sliding windows".
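You can see how full a socket's buffers are on a live system with ss:

Bash:
# -t = TCP sockets, -m = show socket memory usage (the skmem field)
ss -tm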



After it dumps some of its data... (usually by writing it to the hard drive)
it says... OK, let's go again, I'm ready for more.

But in reality, even the hard drive will cache data before it writes it to the disk itself.

This is why most modern OSes have an "eject drive" or "safely remove drive" option to use before
you unplug the USB drive or power off the computer. It just writes the data from the cache to the drive before you yank the drive out.
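On Linux you can trigger that flush by hand with sync (the /mnt/usb mount point below is just an example):

Bash:
# Flush all pending filesystem writes to their devices
sync

# Unmounting a filesystem also flushes its cached writes first
umount /mnt/usb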
 
Hello,

You can configure the memory limits with sysctl.
For example, there are net.core.rmem_max and net.core.wmem_max, which define the maximum socket buffer sizes for received and sent data.

Bash:
# sysctl -a
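To look at or raise just those two limits, here is a minimal sketch (the 16 MiB value is only an illustration; size it to your RAM and expected burst length):

Bash:
# Query the current maximums (values are in bytes)
sysctl net.core.rmem_max net.core.wmem_max

# Raise them for the running system (not persistent across reboots)
sysctl -w net.core.rmem_max=16777216
sysctl -w net.core.wmem_max=16777216

# To persist, add the same settings to /etc/sysctl.conf
# (or a file under /etc/sysctl.d/) and reload:
sysctl -p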
 
Thanks for the responses. Much appreciated.

Discussions with other Linux-familiar software engineers suggested that the kernel will allocate as many buffers as necessary to accommodate data arriving faster than it can be consumed, the upper limit presumably being available RAM, or, as dos2unix advised above, offloading to disk if need be.
 
Also overlooked what is probably an obvious consideration: a higher-layer protocol would/could handle the disparate speeds, e.g. TCP/IP, provided, of course, that one is using it. TCP can shrink the window size accordingly to throttle the flow of data between client and server on interfaces of disparate speeds.
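For anyone curious, you can watch those TCP internals on live connections with ss:

Bash:
# -t = TCP sockets, -i = show internal TCP info
# (congestion window cwnd, advertised receive window, RTT, ...)
ss -ti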
 

