The /etc/hosts file, quick and easy name lookup.


Using the /etc/hosts File in Linux​

The /etc/hosts file in Linux is a simple text file that maps hostnames to IP addresses. This file is used for local name resolution, allowing you to define custom domain names for IP addresses on your local network. Here's how you can use it effectively.

Sample /etc/hosts File​

Below is an example of a typical /etc/hosts file:

Code:
127.0.0.1 localhost
::1 localhost
192.168.1.10 server1.local
192.168.1.11 server2.local

In this example:

  • 127.0.0.1 is the loopback address for the local machine. NEVER remove this line.
  • 192.168.1.10 and 192.168.1.11 are static IP addresses assigned to server1.local and server2.local, respectively.
  • Now, instead of typing ssh 192.168.1.11, I can use ssh server2.local. Names are often easier to remember than IP addresses (a quick check is shown below).
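
A quick way to check that an entry is being picked up is to query the resolver directly; the hostnames here are just the ones from the example above:

Code:
getent hosts server2.local    # should print: 192.168.1.11   server2.local
ping -c 1 server2.local       # or simply try reaching the machine by name
ssh server2.local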

Best Practices for Using Static IP Addresses​

The /etc/hosts file works best with static IP addresses. Static IP addresses do not change, ensuring that the hostname-to-IP mapping remains consistent. This is crucial for services that rely on stable network configurations.
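
How you give a machine a static address depends on your setup. On a system managed by NetworkManager, something like the following would do it; the connection name, address, gateway, and DNS server here are only examples, so adjust them for your own network:

Code:
nmcli connection modify "Wired connection 1" ipv4.method manual \
    ipv4.addresses 192.168.1.10/24 ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1
nmcli connection up "Wired connection 1"

Many people instead reserve a fixed address for the machine's MAC address in the router's DHCP settings, which accomplishes the same thing.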

Why Not Use DHCP?​

Dynamic Host Configuration Protocol (DHCP) assigns IP addresses dynamically, which means the IP address of a device can change over time. This can lead to inconsistencies and connectivity issues if the /etc/hosts file is used with DHCP-assigned addresses.

Local Name Resolution​

It's important to note that the name resolution provided by the /etc/hosts file only works on the computer that has the modified file. This means that if you add an entry to the /etc/hosts file on one machine, only that machine will recognize the custom hostname.
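
That local-only behaviour comes from the resolver on each machine checking its own /etc/hosts before asking DNS. The order is set by the hosts line in /etc/nsswitch.conf, which on most distributions looks something like this (the exact keywords vary between distributions):

Code:
$ grep '^hosts:' /etc/nsswitch.conf
hosts:      files dns

Here "files" means /etc/hosts; because it comes before "dns", a matching local entry wins over anything a DNS server would return.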

Comparison with DNS​

The /etc/hosts file is similar to DNS (Domain Name System) in that it maps hostnames to IP addresses. However, it does not scale well for larger networks or the internet. DNS is a hierarchical and distributed system that can handle millions of domain names and IP addresses, whereas the /etc/hosts file is limited to the local machine and is manually managed.

Limitations​

  • Scalability: The /etc/hosts file is not suitable for large networks or internet-wide name resolution.
  • Dynamic IP Addresses: It does not work well with DHCP-assigned addresses, as these can change.
  • Local Scope: The mappings in the /etc/hosts file are only recognized by the local machine.

Conclusion​

The /etc/hosts file is a powerful tool for local name resolution and testing. By using static IP addresses and understanding its limitations, you can effectively manage hostname-to-IP mappings on your Linux system.
 


Sample /etc/hosts File with Aliases​

Code:
127.0.0.1   localhost
192.168.1.10   server1.local
192.168.1.11   server2.local
192.168.1.12   server3.local www.local

Notice I have two names after the 192.168.1.12 address: server3.local and www.local.
Now if I'm running a web server on that computer, I can simply type...
Code:
http://www.local
instead of
Code:
http://192.168.1.12
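
Both names point at the same machine, which is easy to confirm; this assumes you've added the example entries above and that a web server is actually listening there:

Code:
getent hosts www.local        # both names should resolve to 192.168.1.12
getent hosts server3.local
curl -I http://www.local/     # should return the web server's response headers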
 
You can use a hybrid solution sometimes. I do run DHCP on my primary router,
but I also have some static IPs on that network. It's not the best practice, but it works sometimes.
Occasionally, I do put DHCP-generated IP addresses in a hosts file. This "usually" works. DHCP servers
usually have a lease file that remembers the IP address associated with a MAC address, and most of the time,
if a DHCP lease runs out, it just renews with the same IP address.

However... ( there is always a but )
It once happened that I was on vacation for a couple of weeks, and the power went out while I was away.
The leases expired during that time, and when the power came back on, all my dhcp addresses had changed.
So, I had to go in and change the /etc/hosts file on several computers to match the new addresses.

If you only have 3 or 4 computers, it's not that big of a deal. In my case, I have 15, and it's a pain manually
updating 15 entries in 15 hosts files (but sometimes you can copy and paste).
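
One way to take some of the pain out of it is to keep a master copy and push it out with a small loop. This is only a rough sketch; the machines.txt file and the use of root over ssh are just assumptions for the example:

Code:
#!/bin/bash
# Push the local /etc/hosts to every machine listed in machines.txt (one hostname or IP per line).
while read -r host; do
    scp /etc/hosts "root@${host}:/etc/hosts"
done < machines.txt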
 

The /etc/hosts file can actually be used as a form of "ad blocker" by listing hosts that the user may wish to avoid connecting to. One example of a host file with a listing of hosts from websites a user wishes to avoid is Dan Pollock's hosts file, in which he writes:
Code:
# Use this file to prevent your computer from connecting to selected
# internet hosts. This is an easy and effective way to protect you from
# many types of spyware, reduces bandwidth use, blocks certain pop-up
# traps, prevents user tracking by way of "web bugs" embedded in spam,
# provides partial protection to IE from certain web-based exploits and
# blocks most advertising you would otherwise be subjected to on the
# internet.
The file is regularly updated to include new sites, and it can be downloaded with:
Code:
wget http://someonewhocares.org/hosts/hosts

In the past I've used this file successfully by installing it in /etc/hosts, with a few amendments which have been:

1. retain the original default /etc/hosts entry in place of the suggested section marked in the Pollock file between the lines:
#<localhost>
.
.
.
#<localhost>

2. include some hostnames that are in the Pollock hosts file but commented out

3. include some hostnames not in the original file
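
Putting that into practice, a rough sketch of the download-and-append approach might look like this. The paths are just examples, keep a backup, and you would still edit out the section between the #<localhost> markers by hand as described in point 1:

Code:
cp /etc/hosts /etc/hosts.bak
wget -O /tmp/blocklist http://someonewhocares.org/hosts/hosts
cat /tmp/blocklist >> /etc/hosts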


There is another similar file here:
Although the hosts file offered here is aimed at MS machines, it's suitable for the /etc/hosts file on Linux machines. The user can optionally alter the address 0.0.0.0 to 127.0.0.1 to use the traditional Linux loopback address if one finds issues with 0.0.0.0. The 0.0.0.0 address is supposed to make a hostname non-routable, since it has no assigned address, whereas 127.0.0.1 is specifically the loopback address of the localhost, which is usually the user's own computer. Assigning either address to a hostname tells the system the host is non-routable, or is the localhost, so the network never reaches out to it to download anything to the system. Both addresses have been successful here.
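
If one prefers the loopback form, the switch from 0.0.0.0 to 127.0.0.1 can be done in a single pass with sed before installing the file. A small sketch, so check the output before copying it into place:

Code:
sed 's/^0\.0\.0\.0/127.0.0.1/' hosts > hosts.loopback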

Generally, ad blockers like uBlock Origin and Ghostery, among many others, achieve the ad-blocking function without the fiddliness of amending the /etc/hosts file, but sometimes there are sites that are objectionable to users, and the relevant hostnames can be entered into /etc/hosts individually with the loopback address to neutralise them.
 
That's an article I plan on writing. You can find pre-made hosts files that will block a bunch of stuff. The MSFT MVPs maintain(ed) a pretty solid list. I didn't search and my memory is poor but I think the MSFT format will work just fine in Linux. If not, the fields are easily matched and some quick work with a text editor (or the terminal) should get you sorted.

Also, don't mention hosts files on /. - else you'll attract APK and he's a straight up lunatic. I mean, he's not really always wrong, but he's definitely never right.

Edit: I just moused over @osprey's link and that's the MVP list. Heh... Great minds and all that.
 
You might mention ::1 for localhost6.

That's a good idea.

Code:
>:~# cat /etc/hosts
# Loopback entries; do not change.
# For historical reasons, localhost precedes localhost.localdomain:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# See hosts(5) for proper format and other examples:
# 192.168.1.10 foo.example.org foo
# 192.168.1.13 bar.example.org bar

Usually, your default /etc/hosts file will have the loopback/localhost entries for both IPv4 and IPv6.
You can break some things if you delete these lines.
 
I remember that when I was still getting started with using the Internet on Linux, I was still dependent on my ISP for DNS lookups. Then one day it stopped working. I built up my own DNS database in /etc/hosts, which has come in handy whenever the Internet has been down here, which happens increasingly often these days, so I would at least have something. Then I figured out how to switch to Cloudflare for DNS service.

This caused some real trouble though when a web site would change its IP address and I already had that domain name in my database. My ISP has a web site that does that about once a month. I commented that one out of my /etc/hosts file. They rent a cloud computer from Amazon AWS for their wifi control panel web site. It seems it has a dynamic address that is different every time they reboot it. What a pain to deal with.

Signed,

Matthew Campbell
 
Where can I download the DNS database so that I don't need to connect externally when opening a website? I am planning to self-host a DNS server, that's why.
 
Where can I download the DNS database so that I don't need to connect externally when opening a website? I am planning to self-host a DNS server, that's why.
There is no such thing as a DNS database.

From what I know, your DNS server would contact the root servers and cache the results, but I may be wrong about how this works.
 
Where can I download the DNS database so that I don't need to connect externally when opening a website? I am planning to self-host a DNS server, that's why.

This is kind of a trick question.

There is no such thing as a DNS database.

Yes and no.

If you're running a DNS server for your local home or business, you manage zone and reverse-zone files
with all your local computers in them. This article has some examples of that.
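
Just for illustration, a minimal forward zone file for a made-up example.lan zone might look something like this; all of the names, addresses, and the serial number are invented for the example:

Code:
$TTL 86400
@        IN  SOA  ns1.example.lan. admin.example.lan. (
                  2024010101 ; serial
                  3600       ; refresh
                  900        ; retry
                  604800     ; expire
                  86400 )    ; minimum TTL
@        IN  NS   ns1.example.lan.
ns1      IN  A    192.168.1.2
server1  IN  A    192.168.1.10
server2  IN  A    192.168.1.11

A matching reverse zone maps the addresses back to the names with PTR records in the same style.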


But that's ONLY the computers that belong to your organization. For everyone else you use the DNS servers out on
the internet. It's a distributed system. There is no one single DNS server with all the answers. There are hundreds of
millions (billions?) of computers on the internet, and it is doubtful a single computer could hold all the domain and host
information of the whole internet. This is good for many reasons: first of all, you have many backups if a server goes down;
second, because the root servers are located all over the world, lookups are faster for your local geographic region.

Here is a list of the root servers.


Keep in mind that they only keep track of the public domains, not any private ones.
Also, they only know the IP address of your DNS server; they don't know all of the computers that you manage.
So they forward the requests to your DNS server, or Google's DNS server, or Microsoft's DNS server.
I imagine there are millions of DNS servers all over the world.

How would you download all of that? I think it would take petabytes, maybe exabytes, of disk storage.
 
This is what happens when you go to... well, let's use example.com.

You type www.example.com into your browser.
Your DNS resolver checks its cache. If the information isn't there, it queries a root DNS server.
The root server responds with the address of the TLD server for .com.
The resolver queries the .com TLD server, which responds with the address of the authoritative DNS server for example.com.
The resolver queries the authoritative DNS server, which responds with the IP address for www.example.com.
Your browser uses the IP address to access the website.

Now example.com might have multiple servers. Let's say www.example.com, ftp.example.com, mail.example.com,
support.example.com, forums.example.com, and maybe a dozen more. Only the authoritative DNS server knows the IPs
for those. But the root server(s) know which TLD you are in, the TLD server(s) know where the DNS server(s)
for example.com are, and the DNS server(s) know where those hosts are.

TLD servers manage the top-level domains, which are the last part of a domain name, such as .com, .org, .net, or country-specific domains like .uk or .jp.
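
If you want to watch that chain of referrals happen, dig can walk it for you starting at the root servers; example.com here is just the placeholder name used above, and the output is fairly long:

Code:
dig +trace www.example.com

You'll see the root servers hand back the .com TLD servers, the TLD servers hand back example.com's authoritative name servers, and finally the A record for www.example.com.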
 
But that's ONLY the computers that belong to your organization. For everyone else you use the DNS servers out on
the internet. It's a distributed system. There is no one single DNS server with all the answers.
That's what I meant, but @oslon seems to be asking for such a database covering the entire internet.
 
I was looking around at a few websites trying to find some statistics. How big is the internet?




It's amazing how pervasive it has become. I never even heard of the internet until I was in my mid 30's.
 
I would caution you about this because domain names change from time to time. Some are added while others go away. You would need to keep up, which will require regular updates.

Signed,

Matthew Campbell
 
You would need to keep up, which will require regular updates.

This being true is why I mentioned the MSFT MVPs list. That's well maintained. They're not MSFT employees but a group of people who are (or were) a part of the MVP program. They're all volunteers and have been maintaining this list for a long, long time.
 
