A lot of people think that securing a server ends once the firewall is up, ACLs are established, and proper software protocols are in place. While for most, myself included, that is enough, I've also been a part of my fair share of businesses that either use or need physical security.

As a little backstory: in 2010 I worked for a relatively popular web hosting company. They owned their own data center (DC), which made a lot of management tasks easy and pleasant. One thing I took deep note of, though, was their physical security. There were about five rows of server racks, mostly 1U servers along with some hardware firewalls, spam filters, and so on. Only the last two rows, however, were caged. Those two were for co-locations (i.e., customers bringing in their own servers), which made me wonder why they didn't treat the rest of their customers that way. Heck, they even had a small 4x4 gated area in the DC with a rack and some servers in it as well. Ultimately I didn't stay there long, but it taught me a lot of things not to do (their setup wasn't horrible, but it left a lot to be desired). Here are some things to take note of; not all of them will be applicable to everyone, by the way.

Cages

Having a cage is nice, but if you're not going to properly secure said cage, why even have it? In the two caged rows I mentioned, each rack was locked, and you had to get the key from a lockbox (which was itself secured by a push-button combination). I liked this idea, even if it meant servicing those servers took longer, because there was no way to connect a VGA monitor and USB keyboard to a server without unlocking the cage first. There was at least a sense of security there. Which brings up another point: making the important portions accessible. Now, I'll admit I'm not a shopper for cages, but I do feel a cage should allow you free access to the motherboard's components (though at the same time the argument against that is just as valid).
Better yet, it should be an optional feature that costs more (custom mesh fencing, for example).

Physical Access

All you needed was an ID card to swipe in front of a card reader. This was pretty awesome (we had the same thing while I was at Ford), but I felt it was improperly used. You only needed to use it once, to get in. Some of the other doors inside the DC required it too, but they were left open nearly the whole time I worked there. There wasn't easy access otherwise, but it still felt like laziness had bitten everyone hard.

If you allow physical access to your servers, you need to think of everyone as a criminal. If people wanted to steal things, how would they do it? You won't cover every nook and cranny, but you will cover a good portion of them.

Resources

This is something that was lacking in all of my academic books as well, but it should be addressed. We had to run a very long Ethernet cable from a switch (after finding one with a free port) to the 4x4 caged area to let people KVM in. This caused many issues, from getting in the way of the rolling cart (where the monitor and keyboard were mounted) to unsafe wire runs. There were times when we had to leave cage doors open across the rows because a client wanted to KVM into their machine. It felt like this defeated the whole purpose of even locking them: because there was, again, no other access to the necessary ports, the Ethernet cable running to the switch would block the cage door from shutting all the way.

Conclusion

Truthfully, generic "physical security" is hard to cover. However, that doesn't mean it should just be swept under the rug. Even your own home computer should be considered for physical security. If someone broke into your house, how would you keep it safe? Most people don't think about it until it's too late. There are many solutions out there to provide physical security; you just have to know what you're looking for.
Not everyone will need a cage or a proximity card, for example, but everyone should at least consider these resources when building out their physical security.