
This will be a kind of newbie question but I am not quite sure why we really need IPv6. AFAIK, the story is as follows:

In the olden days, when computers were not plentiful, 32-bit IP addresses were enough for everybody. At that time, the subnet mask was implicit. Then the number of computers increased and 32 bits started to become insufficient.

So the subnet mask became explicit. Essentially, the size of an IP address increased.

My question is, what is the downside of continuing the addressing with the subnet masks? For example when they become insufficient as well, can't we continue with using "subnet-subnet masks" etc.?

I understand that it consumes more space than the original IPv4 (and maybe not much different than using IPv6) but aren't explicit subnet masks a sufficient solution? If not, why are they an insufficient solution?

Ron Maupin
Utku
  • Warning: it seems the term 'subnet mask' is used in the wrong way here. A subnet mask is e.g. `255.255.255.0`. What is talked about here is something else: masquerading, better known as NAT (Network Address Translation). – Sander Steffann Nov 06 '15 at 14:35
  • @SanderSteffann Actually yes. I realized later that I didn't use the correct terminology. Please feel free to edit the question. I am not completely sure which terms are correct to use. (Especially the "subnet-subnet mask" part) – Utku Nov 06 '15 at 14:38
  • It was a bit much so I put it in an answer :) – Sander Steffann Nov 06 '15 at 15:17
  • Nobody mentions how much easier IPv6 networking is than IPv4. – Jacob Evans Nov 07 '15 at 03:16
  • IPv6 is needed for the same reason as 64-bit operating systems: to overcome a limitation. – Thorbjørn Ravn Andersen Nov 07 '15 at 09:55
  • One of the problems with any questions about IPv6 is that you will find a lot of quasi-religious zealotry. I usually answer IPv6 questions only with comments, to keep the zealots from harming my reputation score. Truth is, IPv6 may catch on, or it may not. It has too many shortcomings to make it a sure bet, and there are other options out there. – Kevin Keane Nov 08 '15 at 04:23
  • @KevinKeane: zealotry is unfortunately visible sometimes, and it hurts more than it helps :( I'm curious about what you see as other options. Care to take this to chat? http://chat.stackexchange.com/rooms/31266/discussion-on-why-do-we-need-ipv6 – Sander Steffann Nov 08 '15 at 16:51
  • Sure, we can chat if you happen to be online. The other option I see is limping along on IPv4 with band-aids such as CG-NAT for many more years. May be less technically elegant, but this is more a business than a technical decision. Those band-aids are going to be needed anyway for decades to come, until the whole world has transitioned to IPv6, so many businesses may question whether investing in IPv6 on top of that even makes sense. – Kevin Keane Nov 08 '15 at 19:21

5 Answers


Two things are getting confused here:

  • classful addressing vs CIDR
  • Masquerading / NAT

Going from classful addressing to Classless Inter Domain Routing (CIDR) was an improvement that made the address distribution to ISPs and organisations more efficient, thereby also increasing the lifetime of IPv4. In classful addressing an organisation would get one of these:

  • a class A network (a /8 in CIDR terms, with netmask 255.0.0.0)
  • a class B network (a /16 in CIDR terms, with netmask 255.255.0.0)
  • a class C network (a /24 in CIDR terms, with netmask 255.255.255.0)

All of these classes were allocated from fixed ranges. Class A contained all addresses where the first octet was between 1 and 126, class B ran from 128 to 191 and class C from 192 to 223. Routing between organisations had all of this hard-coded into the protocols.
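As an illustration, those hard-coded class boundaries can be sketched as a tiny lookup (Python; a simplified view that only checks the first octet and ignores reserved ranges such as 0/8 and 127/8):

```python
def address_class(address: str) -> str:
    """Classify an IPv4 address under the old classful rules,
    looking only at the first octet (simplified sketch)."""
    first = int(address.split(".")[0])
    if 1 <= first <= 126:
        return "A"  # implicit /8, netmask 255.0.0.0
    if 128 <= first <= 191:
        return "B"  # implicit /16, netmask 255.255.0.0
    if 192 <= first <= 223:
        return "C"  # implicit /24, netmask 255.255.255.0
    return "other"  # multicast (class D) or experimental (class E)

print(address_class("18.0.0.1"))     # A
print(address_class("172.16.0.1"))   # B
print(address_class("203.0.113.7"))  # C
```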

In the classful days when an organisation would need e.g. 4000 addresses there were two options: give them 16 class C blocks (16 x 256 = 4096 addresses) or give them one class B block (65536 addresses). Because of the sizes being hard-coded the 16 separate class C blocks would all have to be routed separately. So many got a class B block, containing many more addresses than they actually needed. Many large organisations would get a class A block (16,777,216 addresses) even when only a few hundred thousand were needed. This wasted a lot of addresses.

CIDR removed these limitations. Classes A, B and C don't exist anymore (since ±1993) and routing between organisations can happen on any prefix length (although blocks smaller than a /24, i.e. with a longer prefix, are usually not accepted, to prevent lots of tiny blocks from increasing the size of routing tables). Since then it has been possible to route blocks of different sizes, and to allocate them from any of the previously-class-A/B/C parts of the address space. An organisation needing 4000 addresses could get a /20, which is 4096 addresses.
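As a quick sanity check on the /20 arithmetic, Python's standard `ipaddress` module can do the math (the prefix chosen here is an arbitrary example, not one from the answer):

```python
import ipaddress

# An organisation needing ~4000 addresses gets a /20 under CIDR
block = ipaddress.ip_network("198.51.96.0/20")
print(block.num_addresses)  # 4096
print(block.netmask)        # 255.255.240.0
```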

Subnetting means dividing your allocated address block into smaller blocks. Smaller blocks can then be configured on physical networks etc. It doesn't magically create more addresses. It only means that you divide your allocation according to how you want to use it.
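A minimal sketch of this with Python's standard `ipaddress` module, using example prefixes: dividing a /24 into four /26 blocks changes the layout, not the total.

```python
import ipaddress

# An example allocation, divided into four /26 subnets for four networks
allocation = ipaddress.ip_network("203.0.113.0/24")
subnets = list(allocation.subnets(new_prefix=26))
for s in subnets:
    print(s, "-", s.num_addresses, "addresses")

# The total is unchanged: subnetting creates no new addresses
total = sum(s.num_addresses for s in subnets)
print(total == allocation.num_addresses)  # True
```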

What did create more addresses was Masquerading, better known as NAT (Network Address Translation). With NAT one device with a single public address provides connectivity for a whole network with private (internal) addresses behind it. Every device on the local network thinks it is connected to the internet, even when it isn't really. The NAT router will look at outbound traffic and replace the private address of the local device with its own public address, pretending to be the source of the packet (which is why it was also known as masquerading). It remembers which translations it has made so that for any replies coming back it can put back the original private address of the local device. This is generally considered a hack, but it worked, and it allowed many devices to send traffic to the internet while using fewer public addresses. This extended the lifetime of IPv4 immensely.
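The translation bookkeeping described above can be sketched as a toy table; everything here (addresses, port numbers, function names) is hypothetical and heavily simplified compared to real NAT, which also tracks the transport protocol, timeouts, TCP state and more:

```python
PUBLIC_IP = "192.0.2.1"  # the router's single public address

nat_table = {}      # public port -> (private ip, private port)
next_port = 40000   # next free public port to hand out

def translate_outbound(private_ip, private_port):
    """Rewrite the source of an outgoing packet to the public address."""
    global next_port
    public_port = next_port
    next_port += 1
    nat_table[public_port] = (private_ip, private_port)
    return PUBLIC_IP, public_port

def translate_inbound(public_port):
    """Put back the original private address on a returning reply."""
    return nat_table[public_port]

ip, port = translate_outbound("192.168.1.10", 51515)
print(ip, port)                 # 192.0.2.1 40000
print(translate_inbound(port))  # ('192.168.1.10', 51515)
```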

It is possible to have multiple NAT devices behind each other. This is done for example by ISPs that don't have enough public IPv4 addresses. The ISP has some huge NAT routers with a handful of public IPv4 addresses. The customers are then connected using a special range of IPv4 addresses (100.64.0.0/10, although sometimes they also use normal private addresses) as their external address. Each customer then again has a NAT router that uses the single address it gets on the external side and performs NAT to connect a whole internal network which uses normal private addresses.
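The special range mentioned is defined in RFC 6598 as 'shared address space' and is distinct from the normal RFC 1918 private ranges; a quick check with Python's standard `ipaddress` module (the customer-side address is a made-up example):

```python
import ipaddress

# The shared CGN range (RFC 6598) vs. the RFC 1918 private ranges
cgn = ipaddress.ip_network("100.64.0.0/10")
rfc1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

addr = ipaddress.ip_address("100.70.1.2")  # hypothetical customer-side address
print(addr in cgn)                      # True
print(any(addr in n for n in rfc1918))  # False: not normal private space
```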

There are a few downsides to having NAT routers though:

  • incoming connections: devices behind a NAT router can only make outbound connections as they don't have their own 'real' address to accept incoming connections on
  • port forwarding: this is usually made less of a problem by port forwarding, where the NAT router dedicates some UDP and/or TCP ports on its public address to an internal device. The NAT router can then forward incoming traffic on those ports to that internal device. This requires the user to configure those forwardings on the NAT router
  • carrier grade NAT: where the ISP performs NAT, you won't be able to configure any port forwarding, so accepting any incoming connections (for BitTorrent, or running your own VPN/web/mail/etc. server) becomes impossible
  • fate sharing: the outside world only sees a single device: that NAT router. Therefore all devices behind the NAT router share its fate. If one device behind the NAT router misbehaves it's the address of the NAT router that ends up on a blacklist, thereby blocking every other internal device as well
  • redundancy: a NAT router must remember which internal devices are communicating through it so that it can send the replies to the right device. Therefore all traffic of a set of users must go through a single NAT router. Normal routers don't have to remember anything, and so it's easy to build redundant routes. With NAT it's not.
  • single point of failure: when a NAT router fails it forgets all existing communications, so all existing connections through it will be broken
  • big central NAT routers are expensive

As you can see both CIDR and NAT have extended the lifetime of IPv4 for many, many years. But CIDR can't create more addresses, only allocate the existing ones more efficiently. And NAT does work, but only for outbound traffic, and it comes with performance and stability risks and less functionality compared to having public addresses.

Which is why IPv6 was invented: Lots of addresses and public addresses for every device. So your device (or the firewall in front of it) can decide for itself which inbound connections it wants to accept. If you want to run your own mail server that is possible, and if you don't want anybody from the outside connecting to you: that's possible too :) IPv6 gives you the options back that you used to have before NAT was introduced, and you are free to use them if you want to.

Sander Steffann
  • Wow, very thorough answer. Thanks. Regarding the carrier grade NAT: You stated that bit torrent would end. But I couldn't quite understand why it would happen. More precisely, I think that it should have ended even today if that's the case. Let me explain: I guess that many home users use a NAT router and this makes me think that a "leecher" cannot leech from a user who uses a NAT router, since the leecher won't know the address of the computer to connect. Since the leecher wouldn't be able to find a seeder, this would mean the end of bit torrent even today. Could you clarify this for me? – Utku Nov 06 '15 at 16:04
  • Port forwardings can be configured on home routers by the user to allow incoming connections, or the local BitTorrent client uses a special protocol to make the NAT router install port forwardings automatically. A carrier grade NAT router won't allow such port forwardings. BitTorrent still works without incoming connections, but not nearly as well. – Sander Steffann Nov 06 '15 at 16:07
  • Ah that's what I thought as well. Thanks again. By the way, how does bit torrent work without incoming connections? – Utku Nov 06 '15 at 16:10
  • @Utku, the glib answer is "it doesn't". That is, you are correct that incoming connections to many NAT'd bittorrent nodes cannot be established. That said, such a node can establish connections to other nodes in the network and, since the data flows both directions over a connection, it can still contribute to the network by propagating chunks that one of its peers has to others. – Rob Starling Nov 06 '15 at 17:03
  • On bittorrent & NAT: see http://superuser.com/questions/104462/how-does-bittorrent-work-with-only-outbound-connections. Summary: incoming connections piggyback on your outgoing connection; some clients use a relaying system to allow incoming connections from a new user across the connections with a shared peer. This is less efficient, and you will get lower speeds. It is impossible if all peers are behind a NAT without port forwarding. – Tim Sparkles Nov 06 '15 at 19:44
  • @Timbo So, is it as simple as: "The leechers just go and actively seek the data from non NAT'ed (or NAT'ed with port forwarding) peers? Or am I missing some things here? – Utku Nov 06 '15 at 20:26
  • @Utku: If you're not uploading, your download will be very slow. My understanding is that there is a loose trust system built into the network, and you are less likely to receive chunks if you are not sending chunks. It's "loose" largely because of the bootstrapping problem where you have nothing to upload yet. Leecher is an orthogonal question. Generally leecher is applied to folks who upload long enough to get the full item, then stop. – Tim Sparkles Nov 06 '15 at 22:10
  • On fate sharing, a relevant anecdote: http://techcrunch.com/2007/01/01/wikipedia-bans-qatar/ – njzk2 Nov 07 '15 at 22:53
  • I think the inability to establish incoming connections would be considered a feature by most ISPs. – Loren Pechtel Nov 08 '15 at 03:25
  • @SanderSteffann that's incorrect. PCP (Port Control Protocol) allows forwarding of UPnP messages to the CGN router. The UPnP server on the user's router needs to implement the IGD2 messages however, as the original UPnP specification only allowed requests for specific ports. `AddAnyPortMapping` allows the UPnP client to request any free port. – Arran Cudbard-Bell Nov 08 '15 at 16:43
  • @ArranCudbard-Bell: I know it's possible with PCP, but I haven't seen any ISPs allow that on their networks. So as far as I know it's not really useful for end-users. Do you know of any ISPs that have deployed that? – Sander Steffann Nov 08 '15 at 16:45
  • https://github.com/arr2036/miniupnp/commit/b9362f32a7ec6580d2fc80f9d8ab4ffa551f662a No. We got as far as submitting patches to miniupnpd. The major issue was that support for IGD2 is needed both in the application and in the UPnP daemon running on the CPE. With tens or hundreds of thousands of UPnP-enabled applications needing to be updated, it was deemed not to be worth the effort to push the CPE manufacturers to provide updated firmware supporting IGD2. – Arran Cudbard-Bell Nov 08 '15 at 20:10
  • I'll still continue pushing it for customers who are starting to look at migration paths. It would get far greater traction if the console manufacturers added support. xbox live was one of the biggest sources of complaints. – Arran Cudbard-Bell Nov 08 '15 at 20:16
  • Calling large scale NAT "carrier grade" when one of its major effects is to _reduce_ the reliability of IPv4 connections is... – Michael Hampton Nov 09 '15 at 00:31
  • @MichaelHampton: yes, the irony... – Sander Steffann Nov 09 '15 at 01:29
  • A great answer. I'd also add that while some protocols handle NATting gracefully (e.g. HTTP, which is what the whole thing was built on), others are more limited or downright impossible (e.g. HTTPS). As more and more web servers switch to HTTPS (as well as WebSockets and other "modern" updates), each server needs one or more public IP addresses. It's not unsolvable, but the solutions are going to be trade-offs (like needing a new level of trust between web hosting providers and their users). – Luaan Nov 09 '15 at 12:41
  • @Luaan: It seems like you are confusing NAT with virtual-hosting. HTTPS needed a separate public address in the past, but https://tools.ietf.org/html/rfc3546#section-3.1 introduced SNI in 2003. The main browser that doesn't support that is Internet Explorer on Windows XP, and there are plenty more reasons that people shouldn't use that one anymore. – Sander Steffann Nov 09 '15 at 12:47
  • @SanderSteffann Some parts of the internet are dreadfully outdated, both on the client side and the server side. I work with servers that still don't support SNI regularly, and the same goes for the undying Windows XP. Even with American customers, we still have to reluctantly support Windows XP; try telling *them* that they can't access your website because their system is outdated :) And the only alternative is to have the HTTPS translation handled on the outside-facing endpoints, which has its own issues. Lots of things would be easy if people updated regularly. – Luaan Nov 09 '15 at 12:57

The Internet Protocol (IP) was designed to provide end-to-end connectivity.

The 32 bits of an IPv4 address only allow for about 4.3 billion unique addresses. Then you must subtract a bunch of addresses for things like multicast, and there is a lot of math showing that you can never use the full capacity of a subnet, so there are a lot of wasted addresses.
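The 4.3 billion figure is simply 2^32, as a one-liner shows:

```python
# 32 bits of address give 2^32 possibilities
print(2 ** 32)   # 4294967296, about 4.3 billion
# IPv6, for comparison, has 128-bit addresses
print(2 ** 128)  # about 3.4e38
```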

There are about twice as many humans as there are usable IPv4 addresses, and many of those humans consume multiple IP addresses. This doesn't even touch on the business needs for IP addresses.

Using NAT to satisfy the IP address hunger breaks the IP end-to-end connection paradigm. It becomes difficult to expose enough public IP addresses. Think for a minute what you, as a home user with only one public IP address, would do if you want to allow multiple devices using the same transport protocol and port, say two web servers, which by convention use TCP port 80, to be accessed from the public Internet. You can port forward TCP port 80 on your public IP address to one private IP address, but what about the other web server? This scenario will require you to jump through some hoops which a typical home user isn't equipped to handle. Now, think about the Internet of Things (IoT) where you may have hundreds, or thousands, of devices (light bulbs, thermostats, thermometers, rain gauges and sprinkler systems, alarm sensors, appliances, garage door openers, entertainment systems, pet collars, and who knows what all else), some, or all, of which want to use the same specific transport protocols and ports. Now, think about businesses with IP address needs to provide their customers, vendors, and partners with connectivity.
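The two-web-servers problem can be seen in a sketch of a port-forwarding table: keyed on the public port, it can only ever point at one internal host (all addresses here are hypothetical):

```python
# Port-forwarding table on a home NAT router: public port -> internal host.
# With a single public address, each public port can point at only ONE device.
forwards = {}
forwards[80] = ("192.168.1.10", 80)  # first web server: fine
forwards[80] = ("192.168.1.11", 80)  # second web server: overwrites the first!
print(forwards[80])                  # ('192.168.1.11', 80)

# The usual workaround: a non-standard public port for the second server
forwards[8080] = ("192.168.1.10", 80)
print(sorted(forwards))              # [80, 8080]
```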

IP was designed for end-to-end connectivity so, no matter how many different hosts use the same transport protocol and port, they are uniquely identified by their IP address. NAT breaks this, and it limits IP in ways it was never intended to be limited. NAT was simply created as a way to extend the life of IPv4 until the next IP version (IPv6) could be adopted.

IPv6 provides enough public addresses to restore the original IP paradigm. Currently 1/8 of the entire IPv6 address space is set aside for globally routable IPv6 addresses. Assuming there are 17 billion people on earth in the year 2100 (not unrealistic), that global IPv6 address range provides over 2000 /48 networks for each and every one of those 17 billion people. Each /48 network is 65,536 /64 subnets, with 18,446,744,073,709,551,616 addresses per subnet.
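Those numbers check out arithmetically; a short sketch, taking the 17 billion population figure from the answer as given:

```python
# Global unicast space is currently 1/8 of IPv6, i.e. a /3 prefix
slash48s = 2 ** (48 - 3)   # number of /48 networks in a /3
people = 17_000_000_000    # assumed world population in 2100
print(slash48s // people)  # 2069: over 2000 /48s per person

print(2 ** (64 - 48))      # 65536 /64 subnets per /48
print(2 ** 64)             # 18446744073709551616 addresses per /64
```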

Ron Maupin
  • So NAT is essentially a "patch" right? A patch that violates an essential principle of the internet. – Utku Nov 06 '15 at 15:43
  • NAT can be called a patch, but many have called it a hack, or worse. – Ron Maupin Nov 06 '15 at 15:46
  • Your second sentence is important! NAT creates an asymmetry between people who can run servers and people who can't (easily). That's a *fundamental* breach of the core democratic principles of the Internet. Whether or not someone cares about that, is a different question, of course. Most people who sit behind a NAT don't care. Many content providers *do* care to put as many people as possible behind a NAT, because then they can control what (the majority of) the Internet sees. – Jörg W Mittag Nov 06 '15 at 17:08
  • @JörgWMittag, *"Most people who sit behind a NAT don't care."* Until their shiny new multiplayer game, application or toy doesn't work like they expect it to, then they certainly care. *"Many content providers do care to put as many people as possible behind a NAT, because then they can control what...the Internet sees."* It doesn't take NAT to control access. It can be done just as easily (if not more so) without NAT. NAT makes many things more difficult for content/service providers and of the people I know who are running such networks, I don't know one who uses NAT if they can avoid it. – YLearn Nov 08 '15 at 04:41

Simply put, there are no more IPv4 addresses available. All (or nearly all) of the available IPv4 addresses have been allocated. The explosion of IP devices (laptops, phones, tablets, cameras, security devices, etc.) has used up all the address space.

Ron Trunk
  • That's not entirely true; the vast majority of the space is wasted because it was not subnetted well to start with. Now orgs have swaths of addresses they are not using as public addresses, but to give them back would require considerable effort in restructuring their networks. – JamesRyan Nov 06 '15 at 15:26
  • Yes, a lot of space is wasted. But the fact remains that the available space is exhausted. – Ron Trunk Nov 06 '15 at 15:32
  • @JamesRyan There is also the entire "Class E" range that could (at any time) be opened up for general unicast assignment. That would give the world 16 more /8's (approx 268 million more addresses). But then what? All it would do is postpone the "final depletion" of all addresses. So regardless of how many IPv4 addresses get reclaimed, or reallocated, the depletion is inevitable. IPv6 is the permanent solution. – Eddie Nov 06 '15 at 17:04
  • @Eddie, *in theory*, the "Class E" range could be opened up. In practice, 34 years of people assuming the range is "reserved, not in use" means that anyone getting one of those addresses will have limited connectivity. – Mark Nov 06 '15 at 19:13
  • @Mark Agreed. My point was simply that there are pockets of IPv4 space we could try to use to extend its lifetime, but why bother, IPv6 is inevitable. *(I definitely **wasn't** saying we **should** extend IPv4's lifetime)*. – Eddie Nov 06 '15 at 20:05
  • @RonTrunk the explosion of devices like laptops and tablets is mostly on the inside, where they would be on private addresses and NATted. – allwynmasc Nov 07 '15 at 07:46
  • This answer doesn't really address the question IMO. I think the OP understands that IPv4 addresses are running out. He wonders why, even so, we can't simply use other methods (e.g. NAT) to extend the way we use the existing number of addresses, albeit with some misunderstandings over how such things work. – JBentley Nov 09 '15 at 15:32

First of all, the variable subnet mask technique did become insufficient. That is why people invented the Network Address Translation technique, where you can use one public IP to mask multiple private IPs. Even with this technique, we are almost out of IPs to allocate. NAT also breaks one of the founding principles of the Internet: the end-to-end principle.

So the main reason for using IPv6 is that everyone will have as many public IPs available as they need, and all the complexity of using NAT will disappear.

IPv6 also provides other functionality that I will not go into in detail: mandatory security at the IP level, stateless address autoconfiguration, no more broadcasting (only multicasting), and more efficient processing by routers thanks to a simplified header. Also, in this age of mobile devices, it has explicit support for mobility in the form of Mobile IPv6.

Regarding your proposal of using subnet/subnet masks: it does not sound feasible, since its implementation would break all existing applications, and it is not really elegant. If you have to change things, why not go for something new and well thought out?

dragosb
  • NAT wasn't invented because of a lack of addresses or lack of variable length subnets. It became popular simply because many ISPs would charge more for "business grade" services with allocated IP space. – Alnitak Nov 06 '15 at 22:53

The major organization that distributes IP addresses to the regional orgs (IANA) is completely exhausted. ARIN, the regional org in the US, has been exhausted for the past few months. The only regional org that still has some IPv4 addresses left is AfriNIC.

There are a lot of companies/orgs, like Ford, MIT, etc., that have full class A IP ranges. Back when they acquired them, no one thought we would run out so quickly.

At this time, to buy IPs, you either wait for a company to go out of business and buy its range on the gray market, or you try to buy unused IPs from another company.

IPs allocated to one region cannot be used in another region. Well, they can, but it is highly discouraged (geo-IP databases assume regional use).

At this time, a lot of companies are getting ready for IPv6. The switch isn't easy, as it's very expensive to buy new equipment that supports full IPv6 for those who have tens of thousands of servers.

user1052448
  • IPs are not actually "designed for a region" - they were arbitrarily assigned to one of the 5 RIRs (which roughly correspond to the five continents). It is actually quite common that blocks of IPs are transferred (usually, sold) from one RIR that still has some left (today, only Africa has any left) to another. GeoIP is just a hack, not something designed into the IP protocol. – Kevin Keane Nov 08 '15 at 04:20