23

I'm attempting to block myself from time-wasting websites by making changes to /etc/hosts. For example:

127.0.0.1   localhost
127.0.1.1   ross-laptop

127.0.0.1   bing.com

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts

However, I can still access bing.com in a freshly opened Chrome or Firefox - why is this not working?

Seth
  • 58,122
Ross
  • 1,812

5 Answers

19

You'll find that both the browsers and the system cache DNS lookups for you. To get the change to apply right off the bat, clear the caches and restart your browser. To test it out, try performing a DNS check from a terminal, such as

ping bing.com

You should see it replying back from 127.0.0.1. If it does, your hosts file change is good; the old address is just cached in your browser.
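You can also query the resolver directly with getent, which follows the same NSS lookup order as ping ("files dns" by default, so /etc/hosts wins over DNS). A sketch; the bing.com line assumes the entry from the question is in place:

```shell
# getent consults /etc/hosts before DNS under the default "files dns"
# lookup order in /etc/nsswitch.conf, so it shows what ping will use.
getent hosts bing.com || echo 'no hosts entry yet'  # 127.0.0.1 once your edit is active
getent hosts localhost                              # the stock entry, as a sanity check
```

If getent prints 127.0.0.1 for bing.com but the browser still loads the site, the stale result lives in the browser, not the OS.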

Rick
  • 3,647
  • But does the cache really survive restarting the browser? I've run into this also. It would be nice if there's a way to invalidate the cache. – loevborg Aug 28 '10 at 18:10
  • 1
    Well I'm not 100% sure if the DNS cache survives a browser restart. I do know the file cache does. The thing is that with today's ajax/multi-request models of web development you don't want to be performing a DNS hit on every request while loading a page. Each browser does its own tricks to speed that up, so you'd have to check the DNS caching mechanisms on a per-browser/browser-version basis. – Rick Aug 30 '10 at 00:17
  • I'd be surprised if the DNS cache doesn't persist through browser restarts, at least in most browsers (probably all the major ones). – David Z Aug 30 '10 at 02:42
7

Have you tried putting the 127.0.0.1 entries on the same line?

127.0.0.1 abc bing.com foo

That should work.
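The hosts file format is why this works: the first field on a line is the address, and every later field is a name that resolves to it. A sketch that emulates the matching on a scratch file (the /tmp path and the abc/foo placeholder names are illustrative):

```shell
# Emulate the resolver's /etc/hosts matching: the first field is the
# address, every following field on the line is a name mapped to it.
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1 abc bing.com foo
EOF

lookup() {  # $1 = hostname; print the address of the line that lists it
    awk -v h="$1" '!/^#/ { for (i = 2; i <= NF; i++) if ($i == h) { print $1; exit } }' /tmp/hosts.demo
}

lookup bing.com   # -> 127.0.0.1
lookup foo        # -> 127.0.0.1
```

Every name on the line maps to the same leading address, so one line can block several sites at once.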

Marco Ceppi
  • 48,101
  • 3
    This did work, although I think it's to do with browsers caching it and not clearing the cache properly. – Ross Aug 28 '10 at 18:20
0

Open the Run Application dialog (Alt+F2).

Type sudo -i in the input field, check the Run in terminal option, and finally click the Run button.

Type your password if necessary and press Enter. Then enter the following command.

gedit /etc/hosts

You will get Gedit Text Editor window.

For example, to block Facebook, add the following lines just after the 127.0.0.1 localhost entry.

0.0.0.1 facebook.com    
0.0.0.1 www.facebook.com

This blocks the site in all browsers, including Google Chrome, Chromium and Mozilla Firefox.

That's it. When you now open www.facebook.com or facebook.com, you cannot access it. To re-enable Facebook, remove the lines we added from /etc/hosts.
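Rather than deleting the lines each time, you can comment them out and back in. A sketch, demonstrated on a scratch file so it is safe to run as-is; apply the same sed commands to /etc/hosts with sudo (the /tmp path is illustrative):

```shell
# Demonstrate on a scratch file; run the same seds against /etc/hosts
# (with sudo) to toggle the block for real.
printf '0.0.0.1 facebook.com\n0.0.0.1 www.facebook.com\n' > /tmp/hosts.block.demo

sed -i 's/^0\.0\.0\.1/# &/' /tmp/hosts.block.demo       # unblock: comment the lines out
sed -i 's/^# \(0\.0\.0\.1\)/\1/' /tmp/hosts.block.demo  # re-block: uncomment them again
cat /tmp/hosts.block.demo                               # back to the original two lines
```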

Source - Subin's Blog

Subin
  • 760
-1

Besides CragM's solution, remember you can use any 127.x.x.x address for this purpose; you don't have to repeat the same address, since the whole 127.0.0.0/8 range is loopback.

127.0.0.1   localhost
127.0.0.2   ross-laptop
127.0.0.3   bing.com
127.0.0.4   foo.com
127.0.0.5   bar.com
......
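If the list grows, you can generate those entries instead of typing them out. A sketch; the site list and the /tmp path are illustrative:

```shell
# Emit one unique loopback address per site, starting at 127.0.0.3
# (.1 and .2 are taken by localhost and the machine name above).
i=3
for site in bing.com foo.com bar.com; do
    printf '127.0.0.%d   %s\n' "$i" "$site"
    i=$((i + 1))
done > /tmp/blocklist.demo
cat /tmp/blocklist.demo   # append these lines to /etc/hosts with sudo
```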
grokus
  • 621
-1

Modifying /etc/hosts looks like a global hack. I'd suggest setting up a local HTTP proxy instead (Squid, Privoxy, etc.) and pointing your browser at it. This way you get a more flexible way of managing blacklists, at the proxy level.
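With Privoxy, for example, the blacklist lives in its user.action file. A minimal sketch; the group name is illustrative, and the file path may vary by distribution, so check your installed Privoxy's documentation:

```
# /etc/privoxy/user.action (location may vary)
{ +block{time-wasters} }
.bing.com
.facebook.com
```

A leading dot matches the domain and all its subdomains. Then point the browser at localhost:8118, Privoxy's default listening port.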

vh1
  • 1,433
  • 2
    It sounds like that would be a lot of effort compared to adding a couple of lines to /etc/hosts? Could you explain why this is a better solution? – 8128 Aug 29 '10 at 15:56
  • "It sounds" because you are making assumptions without trying it. Issuing apt-get install, editing a config file and/or a blacklist and pointing the browser to the local proxy is not more complicated. It is better because you don't make any global changes to your system, you're only setting up an alternate way of having some sites blocked. If you ever need to undo, you won't have to edit /etc/hosts again, you'll just set the browser to access sites directly, not through the proxy. – vh1 Aug 30 '10 at 14:14