Scaling A Forest of Servers Down to a Tree

Eric Hansen

When you’re managing a few servers on your own, accessing them can be as easy (shell aliases, entries in your ~/.ssh/config file) or as hard (manually typing everything in) as you like. There’s no right or wrong way to manage a small number of servers, really. But once you have to manage anywhere from 10 on up it can get very difficult, especially when access control is involved. That is why I propose that you create a central machine that all SSH requests have to go through.

Why?
Think about it like this: you own a house, a hallway has numerous rooms off it, and each room’s door has its own specific key. If you only have 2 or 3 doors inside the house, key management is pretty simple. Leaving the front door open, all you have to do is walk to a room, try your keys, and walk in when you find the right one.

What if you have 10 or more doors to manage? Now you have to carry around 10 keys and try each one on a door until you find a match. And with the hallway not having a door of its own, you can’t trust just anyone to walk through; they might steal some of your belongings.

Now, put a door on the hallway, and you only have to actually manage one key: the one for the hallway door. You can (and probably should) still lock all the doors down that hallway, but now you can also hang their keys up somewhere and label them, making everything easier in the end.

The idea behind this trick is similar. If you have one central server that accepts SSH connections, you can restrict all the other servers to accepting connections from only that machine. This also reduces the number of keys users have to keep on their machines (which helps when more than one person accesses a server). For example, instead of having to redistribute a server’s public key to 10 employees whenever it’s changed, you can redistribute the new key to just the central server.
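On newer OpenSSH clients (7.3 and later), the hallway-door model maps onto a jump-host entry in each user’s ~/.ssh/config. A minimal sketch, where the hostnames, IP addresses, and username are hypothetical placeholders for your own environment:

```
# ~/.ssh/config on a user's machine
Host gateway
    HostName 203.0.113.10      # the central SSH server (placeholder IP)
    User alice                 # hypothetical account name

# Internal hosts are reached through the gateway, so only the
# gateway's host key needs to live on the user's machine.
Host web1
    HostName 10.0.0.21         # placeholder internal IP
    User alice
    ProxyJump gateway
```

With this in place, `ssh web1` transparently hops through the gateway; older clients can get the same effect with a ProxyCommand directive instead.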

How?
A lot of this has to do with SSH with a touch of iptables (you can use screen too in order to make it even easier).

SSH
Every server involved will need SSH installed (all but the central server can keep the default settings if you like). The central server’s SSH will need to be locked down tighter (disable root logins, tighten authentication measures, etc.). Overall it’s a pretty simple process, and since SSH is all text-based, most low-end VPSes will suffice for this project just fine.
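As a sketch, the lockdown on the central server might look like the following /etc/ssh/sshd_config settings (the account names are placeholders, and you’ll need to reload sshd after editing):

```
# /etc/ssh/sshd_config on the central server -- illustrative settings
PermitRootLogin no            # no direct root logins
PasswordAuthentication no     # require key-based authentication
PubkeyAuthentication yes
AllowUsers alice bob          # hypothetical admin accounts
```

Make sure your own key-based login works before turning off password authentication, or you can lock yourself out of the hallway entirely.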

IP Tables
iptables is really only needed on the non-central servers, so you can restrict access on the SSH port to one specific IP. A typical rule for this would be:
Code:
iptables -I INPUT -p tcp --dport 22 ! -s (central server ip) -j DROP
Replace the “(central server ip)” portion with the actual IP. This is a quick and easy way to tell your system to silently discard any SSH traffic that isn’t coming from the central server (DROP doesn’t even acknowledge the packet; REJECT, by contrast, would send an error back to the sender). This would be the most time consuming part, but there are ways to make this easier still.
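Since applying that rule by hand on every server is the time-consuming part, one way to speed it up is to push it out over SSH in a loop from the central box. A hedged sketch, where the central IP and the backend hostnames are hypothetical placeholders:

```shell
#!/bin/sh
# Build and print the command that locks down SSH on one backend server.
central_ip=203.0.113.10    # placeholder for your central server's IP

lockdown_cmd() {
    # $1 = hostname of a backend server
    printf 'ssh root@%s iptables -I INPUT -p tcp --dport 22 ! -s %s -j DROP\n' \
        "$1" "$central_ip"
}

# Dry run: print the command for each backend server. To actually apply
# the rules, pipe each line to sh (assuming root key access to each host).
for host in web1 web2 db1; do
    lockdown_cmd "$host"
done
```

Note that iptables rules set this way don’t survive a reboot on their own; use your distribution’s rule-persistence mechanism (e.g. iptables-save) to keep them.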

When?
The best time to do this is now, truthfully. You don’t want to wait too long, because the longer you wait, the harder it’ll be. If you’re just starting to build a network, make this a top priority; your sanity will thank you later. Already have 500 servers to manage and don’t want to start from scratch? I’ll show you how to work with that in my next article.
 
