You need to block 0.0.0.0.

The firewall will block access to port 80 on 127.0.0.0/8 and any other ports used by the web server. The web server is then only available at the correct address.
I'm not sure if you understood the article.
Basically, a web server you connect to is able to connect to whichever localhost port a local service is listening on.

This can be any port and is defined by the local service; when you run ss -tunlp you'll see which ports are open.
Hardcoding ports in the firewall isn't an option, because you might install some software X that opens up Y ports at some point, or a system upgrade might introduce new services that open up ports. Any web site we visit can use all of those to attack us if the service is vulnerable, and the firewall won't handle that unless port scanning rules are in place.
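To make the ss -tunlp point concrete, the output looks something like this (columns trimmed; the services and PIDs here are just a made-up sample of a typical desktop):

ss -tunlp
# Netid  State    Local Address:Port    Process
# udp    UNCONN   0.0.0.0:5353          users:(("avahi-daemon",pid=912,fd=12))
# tcp    LISTEN   127.0.0.1:631         users:(("cupsd",pid=1044,fd=7))
# tcp    LISTEN   0.0.0.0:22            users:(("sshd",pid=980,fd=3))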

The reason port scanning rules are important is that a web site cannot know which services are running locally, so it needs to run a port scan locally.
The only other way around it (without a port scan) is to probe known ports, that is, to probe services which are known to be running on a variety of distros by default.
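You can mirror that idea yourself from the local side, to see what a probe of the usual default ports would find (the port list is just an example and 192.168.1.10 is a stand-in for your NIC address):

# check a handful of commonly-default ports on all three local addresses
for addr in 0.0.0.0 127.0.0.1 192.168.1.10; do
    nmap -sT -Pn -p 22,53,631,5353,8080,9100 "$addr"
done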

In other words, our desktop PCs are no longer safe desktops hiding behind a hardware firewall that blocks everything; they now act like vulnerable servers.
Sadly, every desktop PC runs some local services, so this exploit is pretty serious IMO, especially since it has existed forever.
 


It will if you set the rule in a pre-routing chain, which is processed before stateful filtering is done.
Simply drop inbound traffic toward 0.0.0.0 in pre-routing, unless the target port is 67 to let DHCP work.
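As a rough nft sketch of what I'm describing (the table and chain names are just examples; the point is that it hooks prerouting at priority -300, before conntrack):

nft add table inet early
nft add chain inet early pre '{ type filter hook prerouting priority -300; policy accept; }'
# DHCP exception as described above
nft add rule inet early pre udp dport 67 accept
# everything else heading for 0.0.0.0 gets dropped before stateful filtering sees it
nft add rule inet early pre ip daddr 0.0.0.0 counter drop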

A rule in the stateful filter chain or later will not work, because the attacking site already has an established connection and can thus bypass the firewall and attack local services.
You will need this at the kernel level, the way the BSDs have been doing it since '98, when the BSD kernel stopped mapping 0.0.0.0 to localhost. In other words, this is the kernel devs' job.
 
You will need this at the kernel level, the way the BSDs have been doing it since '98, when the BSD kernel stopped mapping 0.0.0.0 to localhost. In other words, this is the kernel devs' job.
But doesn't nftables already operate at the kernel level?
What kernel additions do you think would be more effective than a few firewall rules? (from a networking perspective)
 
In the case where the address is mapped before packets are created, netfilter is not going to be of help.
 
The trouble with uBlock Origin in PaleMoon is that by its very nature, it's an outdated version. Since PaleMoon's Goanna engine is based on that of Firefox prior to the 'Australis' GUI refresh at FF v29 (NOT v129, which we're just approaching...!), it's almost 100 major releases behind.

Yes, the filter lists for it are regularly updated.....but the actual extension is deprecated, and has been for a few years now. I use Pale Moon frequently myself - it was one of the first browsers I ever gave the Puppy-portable treatment to - because it's nicely responsive even on elderly, low-spec hardware.

But I never quite feel 'safe' with it...

(shrug...)


Mike. ;)
 
The trouble with uBlock Origin in PaleMoon is that by its very nature, it's an outdated version. Since PaleMoon's Goanna engine is based on that of Firefox prior to the 'Australis' GUI refresh at FF v29 (NOT v129, which we're just approaching...!), it's almost 100 major releases behind.
This is wrong reasoning. It is not "behind" or "outdated" on anything. Pale Moon is a true fork of Firefox, NOT a clone like Librewolf, etc. It is updated with all applicable Mozilla updates from those "100 major releases behind" (the only reason there are so many is that Mozilla decided to copy Chrome's illogical rapid release cycle). By that reasoning, FF 129 is also outdated because it is also "based" on FF29 and older. It isn't like it's a completely rewritten app.

[ Off-topic but IMO the best version of FF was 3.6.x. They started going downhill with version 4. ]
 
Last edited:
I'm not sure if you understood the article.
Basically, a web server you connect to is able to connect to whichever localhost port a local service is listening on.

This can be any port and is defined by the local service; when you run ss -tunlp you'll see which ports are open.
Hardcoding ports in the firewall isn't an option, because you might install some software X that opens up Y ports at some point, or a system upgrade might introduce new services that open up ports. Any web site we visit can use all of those to attack us if the service is vulnerable, and the firewall won't handle that unless port scanning rules are in place.

The reason port scanning rules are important is that a web site cannot know which services are running locally, so it needs to run a port scan locally.
The only other way around it (without a port scan) is to probe known ports, that is, to probe services which are known to be running on a variety of distros by default.

In other words, our desktop PCs are no longer safe desktops hiding behind a hardware firewall that blocks everything; they now act like vulnerable servers.
Sadly, every desktop PC runs some local services, so this exploit is pretty serious IMO, especially since it has existed forever.
I think you might be the one having a problem understanding the article. It talks about a web browser (client) being abused by a hostile remote web server that uses JavaScript running in the web browser to access open ports on 0.0.0.0 on the client side.

In order to establish a connection to anything using TCP/IP it is necessary to use a TCP handshake. The firewall prevents this if it blocks all traffic to and from 0.0.0.0, making such a port scan impossible. Such traffic will not be established or related since it would require a new connection.

Having a connection between the remote web server and the local web browser does not include a separate connection between the local web browser and local ports on 0.0.0.0. Sockets just don't work that way. A socket is an endpoint for communications. There must be two endpoints, except in group multicast or broadcast situations.
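To illustrate the kind of ruleset I mean (a sketch only; the table and chain names are examples, and a real ruleset needs more rules than this):

nft add table inet filter
nft add chain inet filter input '{ type filter hook input priority 0; policy drop; }'
# drop anything to or from 0.0.0.0 before the loopback and state rules can accept it
nft add rule inet filter input ip saddr 0.0.0.0 drop
nft add rule inet filter input ip daddr 0.0.0.0 drop
nft add rule inet filter input iif lo accept
# only traffic belonging to connections this host initiated gets back in
nft add rule inet filter input ct state established,related accept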

I run a port scan with nmap either every day or nearly every day. It scans all available network devices that have not been blocked by the firewall rules, other than localhost, which is huge. I know it works because I have tested and proven it. It looks to me like the hostile remote web server is using JavaScript running in the local web browser to create a separate network connection and set up a relay or proxy to the hostile remote web server so it can act as a C2 server.
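For the curious, it's something along these lines (192.168.1.0/24 is a placeholder for my LAN):

# daily sweep of every reachable device on the LAN: TCP SYN scan plus the more common UDP ports
sudo nmap -sS -sU --top-ports 200 192.168.1.0/24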

Signed,

Matthew Campbell
 
It talks about a web browser (client) being abused by a hostile remote web server that uses JavaScript running in the web browser to access open ports on 0.0.0.0 on the client side. In order to establish a connection to anything using TCP/IP it is necessary to use a TCP handshake. The firewall prevents this if it blocks all traffic to and from 0.0.0.0, making such a port scan impossible.
I agree with most of what you said, but not with this, because a script that's downloaded from a hostile web server is not limited to either TCP or 0.0.0.0.
That script can just as well probe UDP ports (or other protocols), and it can just as well try to probe the loopback (127.0.0.1) address or the NIC address.
Feel free to correct me if I'm wrong about that.

Such traffic will not be established or related since it would require a new connection. Having a connection between the remote web server and the local web browser does not include a separate connection between the local web browser and local ports on 0.0.0.0.
You might be right about that, but I'm not going to set up my firewall to assume this because I'm not 100% sure about it.
Sadly, the article does not say anything about firewalls, but it does say that hostile web servers perform local port scans!

I think it might be seen as established or related due to browser APIs which allow access to the local network.
I'd like to see, read or hear some technical research about that with regard to firewalls before I believe it.

I run a port scan with nmap either every day or nearly every day. It scans all available network devices that have not been blocked by the firewall rules, other than localhost, which is huge. I know it works because I have tested and proven it.
I run nmap in a virtual machine on a different subnet to test the firewall on the host, and it blocks all possible nmap scans, including OS detection, TCP/UDP port scans, etc.

I suggest you test yours as well with this method; running nmap locally is not useful for testing the firewall, but it is useful for scanning locally running services (which needs to be done against all 3 addresses to confirm the firewall works).

If you have two computers, that will work as well if a VM is not an option.
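For example, from the guest or the second machine, something like this (10.0.0.5 stands in for the host's address and 192.168.1.10 for its NIC address):

# from the other machine: TCP, UDP and OS-detection scans against the host
sudo nmap -sS -sU -O --top-ports 200 10.0.0.5

# locally: scan the services themselves against all 3 addresses
for a in 0.0.0.0 127.0.0.1 192.168.1.10; do nmap -sT "$a"; done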

edit:
Such traffic will not be established or related since it would require a new connection.
I just figured out why this may not be the case...

A web browser, just like any other networking software, may require access to loopback, and due to this and those "buggy" browser APIs, those connections might well be seen as established or related, because the browser already established a connection to loopback before the web server was visited.

This is only a hypothesis ofc. I didn't research it.
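One way to at least eyeball this would be to look at the browser's connections while it's running, something like:

# list established TCP connections touching loopback, with the owning process
ss -tnp state established | grep -E '127\.0\.0\.|\[::1\]'

If the browser shows up there before any suspicious site is visited, the hypothesis isn't far-fetched.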
 
Last edited:
A firewall looks for established and/or related network traffic without regard to whether that traffic involves a web browser. If the firewall doesn't allow network traffic, the web browser doesn't get a vote in the matter. Yes, using a VM to run a port scan will generally give you a better picture of what a remote attacker can see or do compared to a local port scan using root. A UDP port scan will generally require root unless an attempt is made to communicate with each UDP port and those attempts are refused by the host. A process with cap_net_raw can also be used without root. Otherwise a port scan is generally limited to a connect(2) scan.

An IPv4 socket, except for group multicast or broadcast traffic, uses two and only two endpoints. IPv6 can use anycast or unicast. 0.0.0.0 uses IPv4. Yes, it is possible for a web browser to connect to services running on 127.0.0.0/8. Web proxies often work this way. Each socket uses a separate connection. When your web browser reaches out to remote:443, where remote is the address of the remote server, the local port number used by your web browser will be an ephemeral port, something like 43582. Your firewall will see traffic from remote:443 to local:43582 as established or related and allow it to pass. If it didn't, then web traffic would become pretty much impossible.

I have yet to learn JavaScript, so I remain unaware of its capabilities. If it can tell the web browser to open a socket and attempt to access a port in the localhost domain, then it could probe such ports just as easily. The article I read appeared to indicate that it was not allowed to do that, but was allowed to look for 0.0.0.0. If the web browser cannot be made to open connections to ports in the localhost domain, at least when instructed to do so by a hostile web page, then having a list of running services and their port numbers would prove useless.

The web browser itself is a process running on the host computer, so it has the same access to the system as any other program being run by the user, as it uses the user's authority. The other thing to explore is whether the web browser could be told to write a bash script on the user's computer and run it as a local process using the user's authority. With something like that, the remote attacker could cause the user to run ncat in such a way as to reach out to the C2 server and run a command shell on the user's end of things.
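Going back to the two-endpoints point above, you can see both endpoints of the browser's connections for yourself while a page is open, for example:

# list established TCP connections to remote port 443, with the local ephemeral ports and owning processes
ss -tnp state established '( dport = :443 )'

Each line of output is one socket, i.e. one pair of endpoints.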

Signed,

Matthew Campbell
 

