Bind9 on VMWare

Mike Hoskins (michoski) michoski at cisco.com
Wed Jan 13 21:13:47 UTC 2016


On 1/13/16, 4:02 PM, Reindl Harald <h.reindl at thelounge.net> (via
bind-users-bounces at lists.isc.org) wrote:


>On 13.01.2016 at 19:54, Mike Hoskins (michoski) wrote:
>> I've run several large DNS infras over the years.  Back in 2005/6 I
>> finally drank the koolaid and migrated a large caching infra
>> (authoritative was kept on bare metal) to VMware+Linux
>
>I would be careful comparing 2005/2006 with now, for a lot of reasons:
>
>* before vSphere 5.0 the VMkernel was a 32-bit kernel; while capable
>   of running 64-bit guests with 10 GB RAM, that involved a lot of magic
>
>* in 2005/2006 a large part was binary translation, while now
>   you need an x86_64 host with VT support
>
>* in 2006 vmxnet3 was not available, nor was it included in the
>   mainline Linux kernel for a long time, while now all the paravirt
>   drivers are in the stock kernel


Agreed, that's what my "the past is not always the key to the future" quip
tried to express.

However, for the sake of posterity: during this and subsequent work I saw
similar issues with vmxnet3 which VMware professional services could never
fully explain.  We also ran on hosts with VT support, and tried many Linux
kernels, including 3.x toward the end, without complete improvement.  Note
that 2005/6 was the initial migration date; actual operation continued
through 2012/13 for our larger environments, and some smaller environments
(which haven't hit the same issues) still operate virtualized caches
today.
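
For anyone retracing this on a Linux guest, one quick sanity check is
confirming the NIC is actually bound to vmxnet3 rather than an emulated
e1000 before chasing driver bugs.  A minimal Python sketch that reads the
driver name out of sysfs (the paths are standard on modern kernels, but
treat this as illustrative, not a supported tool):

    import os

    def guest_nic_drivers():
        # Map each interface to its kernel driver via sysfs.
        drivers = {}
        for iface in os.listdir("/sys/class/net"):
            link = "/sys/class/net/%s/device/driver" % iface
            # Virtual interfaces like lo have no backing device.
            if os.path.islink(link):
                drivers[iface] = os.path.basename(os.readlink(link))
        return drivers

    if __name__ == "__main__":
        for iface, drv in sorted(guest_nic_drivers().items()):
            note = "" if drv == "vmxnet3" else "  <-- not paravirtualized"
            print("%s: %s%s" % (iface, drv, note))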

So this is by no means an argument to never try virtualization, and in
many cases it could work quite well (everything has pros/cons)...it's just
an area where I would be cautious in deployment and have a good rollback
plan.  Then again, as infrastructure operators, that applies to pretty
much everything we do.  :-)
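
For what it's worth, a workable rollback plan usually hinges on some
external health probe of each cache to decide when to fail back to the
physical resolvers.  A rough Python 3 sketch of the idea, hand-rolling a
single A-record query over UDP (the server address and query name below
are placeholders, not anything from our environment):

    import socket
    import struct
    import sys

    def dns_probe(server, name, timeout=2.0):
        # Minimal DNS header: fixed ID, RD bit set, one question.
        header = struct.pack(">HHHHHH", 0x1234, 0x0100, 1, 0, 0, 0)
        qname = b"".join(
            bytes([len(label)]) + label.encode() for label in name.split(".")
        )
        question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, IN
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        try:
            sock.sendto(header + question, (server, 53))
            data, _ = sock.recvfrom(512)
            # Match the transaction ID and require the QR (response) bit.
            return (len(data) >= 12 and data[:2] == header[:2]
                    and bool(data[2] & 0x80))
        except socket.timeout:
            return False
        finally:
            sock.close()

    if __name__ == "__main__":
        server = sys.argv[1] if len(sys.argv) > 1 else "127.0.0.1"
        ok = dns_probe(server, "example.com")
        print("%s: %s" % (server, "responding" if ok else "NOT responding"))
        sys.exit(0 if ok else 1)

Anything fancier (checking RCODE, comparing answers against a bare-metal
control) can be layered on top, but even a probe this dumb is enough to
drive an automated failback.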


