Puzzling about IPv6

Kevin Darcy kcd at chrysler.com
Mon Nov 21 20:22:40 UTC 2011


On 11/19/2011 2:32 PM, 夜神 岩男 wrote:
> On 11/20/2011 04:07 AM, Matthew Seaman wrote:
>> On 19/11/2011 18:47, 夜神 岩男 wrote:
>>>> Oh, and given you've got 64 bits to play with, so long as your random
>>>> numbers are up to scratch there's no need to worry about collisions.
>>>> You'd need to be assigning millions of addresses before you ran into
>>>> that problem.
>>>
>>> Not to be an ass, and this is likely a decade too early, but... this
>>> directly echoes what I heard 20 years ago.
>>>
>>> Does systematic thinking belong in /32+ IPv6 addressing, or is it in
>>> fact safe to just random it all away willy-nilly?
>>
>> Look at http://en.wikipedia.org/wiki/Birthday_paradox
>>
>> With 64 bits of host address space in a typical IPv6 network, you would
>> need to be allocating 6.1 million addresses to have a 1 in a million
>> chance of a collision.  You'd need 5.1 billion addresses for a 1 in 2
>> chance of a collision.  If you get a collision in a typical network of
>> maybe several hundred machines, then suspect your random number
>> generator before anything else.
>
> I would appreciate the numbers more if we were talking in terms of
> numbers of machines, as we were in the late 80's, but we're not. Now
> everything has an address. With virtualization (a trend I tend to buck,
> but a prevalent force) it is currently normal for a single machine to
> host tens or hundreds of IPs. With the mobile environment, and some
> concepts to simplify mobile-but-hubbed/homed devices, even those devices
> can inherit several IPs each. Is it really inconceivable that complete
> ignorance of numeric partitioning could run us into weird places
> quicker than we expect, once again?
>
> For example, say a random assignment gives me something close to the
> < /8 space at the low end of my range, and/or another pre-assigned
> address region that was initially intended for a single machine --
> until that machine and its IP space became all cloudy-like (the way
> first-year drop-out CIOs are getting sold on today). Is that range
> enough? And is the resolution overhead worth it in the future (after
> 10+ years of us thinking IP ranges are freely available enough to just
> randomly assign away) if the next bajillion addresses for that same
> machine/cluster (as it will no doubt evolve into at some point) must be
> pushed to a totally separate remaining range once the available random
> addressing block has been used/randomed away?
>
> The fact that you cite the birthday paradox is interesting, as it
> predicts that collisions are highly likely given the way we've grown to
> think that every device should be multiply homed within a massively
> multi-homed cluster, and that IP assignments are totally costless today.
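As a sanity check on the birthday-paradox figures quoted above (6.1
million and 5.1 billion), here is a short Python sketch using the
standard approximation p ≈ 1 - exp(-n^2 / (2*D)) with D = 2^64; the
function name is purely illustrative:

    import math

    D = 2.0 ** 64  # size of the interface-identifier space in a /64

    def draws_for_collision_probability(p):
        # Invert the birthday bound p ~= 1 - exp(-n**2 / (2*D))
        # to find the number of random draws n for a target probability p.
        return math.sqrt(-2.0 * D * math.log(1.0 - p))

    print(round(draws_for_collision_probability(1e-6)))  # ~6.1 million
    print(round(draws_for_collision_probability(0.5)))   # ~5.1 billion

Both results match the figures quoted from Matthew to two significant
digits.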

Instead of speculating about how many addresses can dance on the head of
the randomness pin, for goodness' sake, just read RFC 4862. Even if
there's a collision (not bloody likely when a /64 can address
quintillions of nodes), there is still DAD (Duplicate Address
Detection). Folks, some pretty smart people worked all of this out, and
SLAAC is being used all over the place, in production. This isn't just
an academic exercise anymore, and it shouldn't be treated as such: it's
here, it works, deal with it.
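
To make the RFC 4862 mechanics concrete, here is a minimal Python sketch
of SLAAC-style address formation, under stated assumptions: the
2001:db8::/64 documentation prefix stands in for a router-advertised
prefix, and the interface identifier is purely random (real hosts may
instead use EUI-64 or RFC 7217 stable identifiers, and must run DAD
before claiming the address):

    import ipaddress
    import secrets

    # Illustrative prefix; 2001:db8::/64 is reserved for documentation.
    prefix = ipaddress.IPv6Network("2001:db8::/64")

    # RFC 4862 SLAAC: the host appends a 64-bit interface identifier
    # to the advertised /64 prefix.  Here the IID is simply random.
    iid = secrets.randbits(64)
    candidate = ipaddress.IPv6Address(int(prefix.network_address) | iid)

    # A real host would now perform Duplicate Address Detection (DAD):
    # send a Neighbor Solicitation for the candidate address and only
    # claim it if no other node answers.
    print(candidate)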

         - Kevin
