Unable to connect using SSH when server RAM/swap memory is exhausted

Aneel
New Member · Joined Oct 22, 2019 · Messages: 2
This is to confirm my understanding of SSH: does SSH fail if the server's RAM/swap memory is exhausted?
Does the server's RAM/swap space play any role in an SSH connection? If yes, could you please explain in detail?

Recently we faced such an issue, where the server's RAM/swap was exhausted and the client was unable to connect to the server using SSH.
We googled the problem to understand the architectural reason, but we couldn't find such information.

We would be thankful if this information could be shared!
 


If your server is under an extremely heavy load and is running out of RAM and swap space - then it is extremely likely that a new ssh connection would fail.

If there is no RAM or swap left - the ssh daemon running on the server would not have enough resources available to allow it to start a new process for an incoming ssh session.

But this isn't a problem with ssh - the problem is the load on the server.

What you really need to find out is why the server is getting such a heavy thrashing and what is using up all of the memory and swap space. Then see if there is a way to reduce the load on the server, or to limit the amount of resources available to the processes causing the problem.

You need to ensure that enough resources are kept free to allow the user to reliably be able to log in via ssh. But exactly how you'd do that is beyond my knowledge. I don't have much experience with Linux servers. But @Rob or somebody else here might have some insight.
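One way to keep resources free for the SSH daemon, on a systemd-based distro with cgroup v2, is a unit drop-in that asks the kernel to protect some memory for the service. This is only a sketch: the unit is called ssh.service on some distros and sshd.service on others, and 64M is an arbitrary example value:

```ini
# /etc/systemd/system/ssh.service.d/reserve-memory.conf
# (drop-in sketch; the unit may be named sshd.service on your distro)
[Service]
# Ask the kernel to protect this much of sshd's memory under pressure
MemoryMin=64M
```

After creating the file, run `systemctl daemon-reload` and restart the service for it to take effect.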
 
Hi,

Most programs nowadays use dynamic memory allocation, which means they allocate memory on demand. The SSH server is no exception. But I don't think the problem is the SSH server itself, since handling the connection probably doesn't make the server request another memory page.

When a user logs in through SSH, a new process is started to hold the actual shell. At a minimum it requires a single memory page, which won't be available if the system is saturated.
Even if you could successfully open a shell, by reusing one for example, you wouldn't be able to do anything but use the shell's internal (builtin) commands, since the system couldn't start a new process.
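You can demonstrate that failure mode with ulimit: cap a shell's virtual memory low enough and builtins still run, but launching an external binary fails because a new process can't be set up (the 1024 KB figure is an arbitrary, deliberately tiny value):

```shell
# Cap virtual memory to ~1 MB, then try a builtin and an external command.
# echo is a shell builtin, so it needs no new process; /bin/true has to be
# forked and exec'd, which fails under the cap.
bash -c 'ulimit -v 1024; echo builtin-ok; /bin/true 2>/dev/null || echo fork-failed'
```

The builtin succeeds while the external command does not, which is exactly the "shell works, commands don't" situation described above.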

So, as JasKinasis said, your best bet is to make sure the system won't go crazy on memory usage.
You can limit the memory usage of most server daemons through configuration files.
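On systemd-based systems you can do this even for daemons that have no memory settings of their own, by overriding the service unit. A sketch, with a placeholder service name and arbitrary limit values:

```ini
# /etc/systemd/system/example-daemon.service.d/memory.conf
# ("example-daemon" is a placeholder; use the real unit name)
[Service]
# Soft cap: above this, the kernel reclaims the service's memory aggressively
MemoryHigh=384M
# Hard cap: above this, the service's cgroup gets OOM-killed
MemoryMax=512M
```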

You can use ulimit to set memory limits on a per-process basis.
There are also cgroups for setting per-user limits.
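Sketches of both approaches, with arbitrary example values; the systemctl call assumes systemd is the cgroup manager and needs root:

```shell
# Per-process: cap the address space of commands started from this shell
# (ulimit -v takes kilobytes; 2097152 KB = 2 GB, an arbitrary example).
ulimit -v 2097152

# Per-user: cap everything in UID 1000's slice at 2 GB via cgroups.
# Requires root and systemd; guarded so the sketch is safe to paste.
systemctl set-property user-1000.slice MemoryMax=2G 2>/dev/null || true
```

Note that ulimit only affects the current shell and its children, so for a persistent per-user limit the cgroup route (or pam_limits via /etc/security/limits.conf) is the more reliable option.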
 
Thank you so much @JasKinasis for providing the answer.
 
