migration to new ISP - now private addresses showing up publicly?

Sten Carlsen stenc at s-carlsen.dk
Tue May 23 19:18:41 UTC 2023



> On 23 May 2023, at 19.46, Kaya Saman <kayasaman at gmail.com> wrote:
> 
> 
> 
> On 5/23/23 18:07, Sten Carlsen wrote:
>> 
>>> On 23 May 2023, at 19.00, Kaya Saman <kayasaman at gmail.com> wrote:
>>> 
>>> 
>>>> On 5/23/23 12:47, Matus UHLAR - fantomas wrote:
>>>>> On 23.05.23 12:22, Kaya Saman wrote:
>>>>> I've got a very strange problem that has somehow emerged after migrating to a new ISP.
>>>>> 
>>>>> 
>>>>> My setup previously used two servers in a master/slave configuration for my public "view", and three servers for the "internal" view. This worked fine for years, and I have been testing it regularly with online DNS health-check sites such as mxtoolbox.
>>>>> 
>>>>> 
>>>>> Now when I run any kind of check from mxtoolbox or another site, e.g. https://dnschecker.org/, my private IPs show up instead of the public ones?
>>>>> 
>>>>> 
>>>>> Initially it started with my external zone files not transferring; I could see that the transfer traffic was trying to traverse my NAT (I know, not best practice to have all the DNS servers on the same network).
>>>>> 
>>>>> 
>>>>> As a result, external email from my mail server is not working well either; delivery is hit-and-miss right now.
>>>>> 
>>>>> 
>>>>> Just to be clear, my zone files themselves are fine: the 'external' ones contain only public IP addresses and no internal addressing whatsoever.
>>>>> 
>>>>> 
>>>>> Here's an example of the config in named.conf for the master:
>>>>> view "external" {
>>>>>     match-clients { !internals; any; };
>>>> [...]
>>>>> view "external" {
>>>>>     match-clients { !internals; any; };
>>>> I don't see your definition of "internals".
>>>> Also, I don't see your definition of the internal view.
>>>> If internal IP addresses are visible on the internet, then internet sources are evidently matching your internal view, not this one.
>>>> 
>>>> 
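>>>> For reference, the usual shape is an explicit acl plus views evaluated in order, first match wins. A minimal sketch, assuming RFC 1918 ranges (the acl name and ranges here are my guesses, not your actual config):
>>>> 
>>>> acl "internals" { 127.0.0.0/8; 10.0.0.0/8; 172.16.0.0/12; 192.168.0.0/16; };
>>>> 
>>>> view "internal" {
>>>>     match-clients { internals; };
>>>>     // internal zones here
>>>> };
>>>> 
>>>> view "external" {
>>>>     // everything that did not match above, i.e. the internet
>>>>     match-clients { any; };
>>>>     // external zones here
>>>> };
>>>> 
>>>> If an upstream NAT rewrites internet sources into one of those ranges, they match "internals" and land in the internal view.
>>>> 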
>>> Finally, I understand what is going on, and things get even stranger...
>>> 
>>> 
>>> The internal IP addressing is being served up by the slave servers. They appear to have transferred the contents of domain.db (my internal zone) and saved it as domain-external.db?
>>> 
>>> 
>>> Of course, the 'master' machine is already serving domain-external.db to the public internet. That file has the correct IP addressing, along with everything else such as DKIM and DMARC.
>>> 
>>> 
>>> So currently I think the whole problem stems from the zone transfers for my external view not working correctly between the 'master' and 'slave' servers.
>>> 
>>> 
>>> How can I get those transfers working without having to traverse my NAT?
>>> 
>> When migrating ISPs, are you sure there is not another NAT in the ISP's router?
>> That would explain this: the internet would present itself as 192.168.x.x and match your internals.
> 
> I can certainly ask. Though I am on a business package with multiple static public IPv4 addresses; I think I have a /28 block, if memory serves.
> 
> 
> 
You might find that it has some kind of address translation built in, "to protect your business" or whatever. To me it still smells that way.
You might look at the IP address on the port you think faces the internet - if it has a 192.168.x.x, 172.16.x.x, or 10.x.x.x address, it would be clear that this is your problem. It can still be solved, but other setup details will be needed.
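
Independent of the NAT question: with all of your servers on one network, selecting views by source address is fragile. A more robust option is to let the slaves request the external view explicitly with a TSIG key, so the transfer never has to traverse the NAT at all. A rough sketch of the master side, assuming BIND 9 (the key name and secret are placeholders; generate a real one with tsig-keygen):

key "external-xfer" {
    algorithm hmac-sha256;
    secret "PASTE-TSIG-KEYGEN-OUTPUT-HERE";
};

view "internal" {
    // keyed queries must be kept OUT of this view,
    // or an internally-addressed slave will still match it
    match-clients { !key external-xfer; internals; };
    // internal zones here
};

view "external" {
    match-clients { key external-xfer; any; };
    zone "domain.com" {
        type master;
        file "/var/named/var/named/domain-external.db";
        notify explicit;
        also-notify { int_dns2; int_dns3; };
        allow-transfer { key external-xfer; };
    };
};

The slaves would then point at the master's internal address and sign with the same key (sketched at the end of this mail).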
> The crazy thing is that I am using the DNS check tool from mxtoolbox. So far it's telling me:
> 
> 
> 
> Bad Glue Detected
> Parent server gave glue for ns2.domain.com to be int_dns2 but we resolve that hostname to ext_dns2
> 
> 
> 
> Another weird issue is that the serial it reads from the zone is:
> 
> 
> Serial numbers match
> 2022022801
> 
> That's my 'internal' zone's serial, not the 'external' one, and it should not appear anywhere on the public internet at all...
> 
> 
> 
>>> Currently I have tried putting this into my master config:
>>> 
>>> 
>>>     zone "domain.com" {
>>>         type master;
>>>         file "/var/named/var/named/domain-external.db";
>>>         notify explicit;
>>>         also-notify { int_dns2; int_dns3; };
>>>         allow-transfer { ext_dns2; ext_dns3; };
>>>         allow-query { ext_dns2; ext_dns3; !internals; any; };
>>>     };
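>>> 
>>> One thing I am unsure about (just a guess on my part): also-notify points at the slaves' internal addresses while allow-transfer only permits their external ones. If the slaves source their refresh queries from their internal interfaces, I would presumably need these aligned, something like:
>>> 
>>>         also-notify { int_dns2; int_dns3; };
>>>         allow-transfer { int_dns2; int_dns3; };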
>>> 
>>> 
>>> 
>>> And this into my slave config:
>>> 
>>> 
>>> 
>>>     zone "domain.com" {
>>>         type slave;
>>>         file "/var/named/var/named/domain-external.db";
>>>         masters { ext_dns1; };
>>>         // allow-notify { ext_dns1; };
>>>         allow-query { int_dns1; !internals; any; };
>>>     };
>>> 
>>> 
>>> But they don't seem to mesh up?
>>> 
>>> 
>>> The general.log file is telling me this:
>>> 
>>> zone domain.com/IN/external: refresh: retry limit for master ext_dns1#53 exceeded (source 0.0.0.0#0)
>>> 
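
That retry-limit message means the slave's refresh (SOA) queries to ext_dns1 are simply going unanswered, which again fits NAT in the path. With the TSIG approach sketched above, the slave side would look roughly like this (placeholders again; int_dns1 stands for the master's internal address, and the zone would sit inside the slave's external view):

key "external-xfer" {
    algorithm hmac-sha256;
    secret "SAME-SECRET-AS-ON-THE-MASTER";
};

// sign all traffic towards the master's internal address with the key
server int_dns1 {
    keys { external-xfer; };
};

zone "domain.com" {
    type slave;
    file "/var/named/var/named/domain-external.db";
    masters { int_dns1; };
};

Because the key, not the source address, now selects the view, the transfer can stay entirely on the internal network.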
