Reverse DNS zones

Reverse lookup zones play a significant role in DNS systems. DNS not only translates domain names into IP addresses, but also IP addresses into domain names, and some services require this round-trip translation. As with classic forward zones, it is possible to delegate reverse zones, but the rules of their operation make this process more difficult, especially for networks subnetted with VLSM.

Reverse lookup is closely related to networks and specifically to addressing. To explain this properly, we have to use an example:

The company operates the 172.20.0.0/16 network and the local domain mydomain.local. This means that devices that are members of this domain will receive an appropriate DNS suffix: a MySQL server with the address 172.20.11.10 whose NetBIOS name is MYSQLSRV001 will have the FQDN MYSQLSRV001.mydomain.local. This server will attempt to register both its name (an A/AAAA record) and a PTR (reverse lookup) record in the DNS server.

For this purpose the DNS server must have a reverse lookup zone appropriate for this network – in the presented example it will be 20.172.in-addr.arpa. It is easy to see that the notation of such a zone is also backwards. If the server is able to find such a zone and updates are allowed, it will enter its address and name in the following form:

 10.11 PTR MYSQLSRV001.mydomain.local. 
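This backward notation can be derived mechanically; Python's standard ipaddress module exposes it directly. A small illustration, using the address 172.20.11.10 that the 10.11 / 20.172 example encodes:

```python
import ipaddress

def reverse_name(addr: str) -> str:
    """Return the reverse-lookup (PTR) owner name for an address."""
    return ipaddress.ip_address(addr).reverse_pointer

# The server from the example: 10.11 inside the 20.172 zone
print(reverse_name("172.20.11.10"))  # 10.11.20.172.in-addr.arpa
```

The same call works for IPv6 addresses, which map into the ip6.arpa tree.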

If we now join our 10.11 with 20.172 (the zone name) and read it backwards, we get 172.20.11.10 – the address of our server. Delegating a part of the network to another server, as a rule, follows classful boundaries. In the example above, we can delegate the 172.20.20.0/24 network to another DNS server. In the parent zone file, we enter the following reference:

 20.20.172.in-addr.arpa. IN NS newdnsserver.local.

On the target server, in turn, we create the corresponding reverse zone (20.20.172.in-addr.arpa) that stores the required PTR records.

Unfortunately, the situation is more complicated when classless zones are delegated. Zones larger than the basic /24, e.g. a /23, should be divided into /24 zones and delegated to the target server as separate zones.
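Splitting such a block is purely arithmetical; a short sketch (the 172.20.22.0/23 block is an invented example) lists the /24 reverse zones a larger delegation decomposes into:

```python
import ipaddress

def reverse_zones_for(cidr: str):
    """List the in-addr.arpa zone names for each /24 inside a block."""
    net = ipaddress.ip_network(cidr)
    zones = []
    for sub in net.subnets(new_prefix=24):
        a, b, c, _ = str(sub.network_address).split(".")
        zones.append(f"{c}.{b}.{a}.in-addr.arpa")
    return zones

print(reverse_zones_for("172.20.22.0/23"))
# ['22.20.172.in-addr.arpa', '23.20.172.in-addr.arpa']
```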

It is even more problematic to delegate zones smaller than /24, for example covering /26 or /27 networks. Here we have to reach for RFC 2317.

Splitting consists in delegating the following zone on the main server (an example for 172.20.20.0/26, inside the 20.20.172.in-addr.arpa zone):

 0/26 IN NS delegatedserver.local.

However, this is just the beginning, because every record kept on the delegated server must also be referred to from the main zone; this means that, in addition to the above entry, you should also add a CNAME entry for EVERY device:

 1 IN CNAME 1.0/26
2 IN CNAME 2.0/26
5 IN CNAME 5.0/26
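Writing such a CNAME for every address by hand is tedious; a small generator can produce the lines (a sketch following the 0/26 labelling convention above, not a complete RFC 2317 tool):

```python
import ipaddress

def rfc2317_cnames(cidr: str):
    """Generate per-host CNAME lines for a sub-/24 reverse delegation.

    Lines are relative to the parent in-addr.arpa zone, so for
    172.20.20.0/26 every host octet maps onto the '0/26' sub-zone.
    """
    net = ipaddress.ip_network(cidr)
    label = f"{net.network_address.packed[3]}/{net.prefixlen}"
    return [f"{h.packed[3]} IN CNAME {h.packed[3]}.{label}"
            for h in net.hosts()]

for line in rfc2317_cnames("172.20.20.0/26")[:3]:
    print(line)
# 1 IN CNAME 1.0/26
# 2 IN CNAME 2.0/26
# 3 IN CNAME 3.0/26
```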

Therefore, it is extremely important to plan the reverse zones together with the IP plan when designing sub-networks and DNS services.

DNS HA mechanisms

DNS, in the context of global name resolution, fulfils its role using many different mechanisms – knowing them is crucial for administering name resolution for highly available services.

We will consider several ways to deliver highly available zones to clients. Because a client can use more than one name server, the configuration of the services and the exchange of information between them is crucial – we do not want to face a situation where two DNS servers responsible for the same zone hold different records. Consistency between servers is maintained in various ways, depending on the DNS solution used.

Here we touch on the subject of different types of zones: Primary, Secondary, Stub, Forwarding and, for Windows systems, Primary AD-Integrated.

The Primary zone is maintained by a server that is authoritative for it – it stores the zone records and is the basis of name resolution for the zone. In the simplest configuration (without replication mechanisms) there can be only one Primary zone, because this zone is independent and the highest in the hierarchy.

To ensure some reliability, Secondary zones are also used – spare zones that copy their contents from the Primary zone using a mechanism called zone transfer. The transfer is a feature of the DNS server / zone configuration – we should be careful about whom (which IP addresses / DNS servers) we allow to transfer our zone. Characteristic of a Secondary zone is that you cannot change its records – they are pulled and overwritten from the Primary zone. Nevertheless, the server holding the Secondary zone has all the records (it updates them from the primary zone based on the serial number in the SOA record) and handles requests like any other DNS server.
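The secondary decides whether its copy is stale by comparing serial numbers using the wrap-around rules of RFC 1982 serial-number arithmetic; a minimal sketch of that comparison:

```python
def serial_is_newer(new: int, old: int, bits: int = 32) -> bool:
    """RFC 1982 serial-number arithmetic: is 'new' newer than 'old'?"""
    if new == old:
        return False
    return ((new - old) % 2**bits) < 2**(bits - 1)

# A secondary holding serial 2001062501 sees the primary at 2001062502:
print(serial_is_newer(2001062502, 2001062501))  # True -> start a transfer
# The comparison survives the 32-bit wrap-around:
print(serial_is_newer(1, 2**32 - 1))            # True
```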

There are also hybrid mechanisms that allow the use of multiple primary zones. These do not use the RFC-defined standards for Primary and Secondary zones – they do not use the transfer mechanism – so there must be a dedicated mechanism for transporting information about changes between these zones. Microsoft based this type of solution on a product that is successfully used in many self-respecting corporations and bigger companies – Active Directory. This is a directory service that holds data about services, users and machines, as well as many other elements defining the company structure (attributes), and since its inception it has grown to the point that it is now a carrier for many other Microsoft technologies. It allows the creation of AD-integrated DNS zones. This means that the zone is located in the Active Directory database and is replicated between all (or explicitly selected) domain controllers. From the administrator’s point of view, it is only a matter of ticking the “Integrate zone with Active Directory” option. What does it give us? The zone is highly available (replicated), it can be changed on any domain controller, and the change will be replicated to the rest of the controllers. The zone is everywhere marked as Primary AD-integrated and has its own system of permissions on records.

Does this mean that if we have, for example, 10 DCs, then we have to replicate the zone between all of them? Of course not – there is a mechanism called Active Directory partitioning that lets us create a sub-directory in the directory service which will be replicated only to the indicated controllers. Such a container can be created, for example, with the command:

dnscmd <server> /createdirectorypartition <partition name>

Additional servers to keep the partition are added by:

dnscmd <server> /enlistdirectorypartition <partition name>

Now all that remains is to change the replication scope of the zone, in its properties in the DNS service, to the newly created partition – and here we go!

Of course, Microsoft is not the only one with the idea of replicating DNS zones. Dedicated products use this approach as well – for example Infoblox, a comprehensive DDI product (DHCP / DNS / IPAM), uses a structure called a GRID (a grid consisting of its machines – members – which is responsible for maintaining and replicating the database together with its information). Replication within a GRID works on a similar basis to AD, allowing propagation of zones between configured members.

Server High Availability / Failover

Clusters are one of the most popular ways to increase reliability or maintain performance when a single computing unit fails. I have previously written about clusters in the context of reliability, and this deserves to be extended.

It does not make much sense to build a cluster on hardware that will not be virtualized; in other words, clustering a single service directly on hardware resources is a rather poor idea leading to wasted resources. A much more common approach is clusters of so-called hosts – hypervisors that are used to virtualize and containerize resources. For Microsoft’s virtualization mechanisms, clustering is provided by the Microsoft Failover Clustering service; in the case of, for example, VMware or XenServer, we talk about pools that accumulate nodes. Focusing on the Microsoft-family clusters that are closest to me – each cluster must consist of a minimum of two machines (nodes) and must share resources (e.g. it must have a shared disk available via the iSCSI or SMB protocol). In the case of a cluster based on two nodes, it is also worth considering an additional vote in the voting context – the so-called witness (it can be a file share or a disk, for example). In the event of a network failure, both cluster machines think that they are “the cluster”, so the decision about which one should take over the role must be dictated by access to the witness resource.
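The witness logic boils down to a majority vote; this toy check illustrates the idea (purely illustrative, not the actual Microsoft Failover Clustering algorithm):

```python
def has_quorum(reachable_votes: int, total_votes: int) -> bool:
    """A partition keeps running only if it holds a strict majority."""
    return reachable_votes > total_votes // 2

# Two nodes plus a witness give three votes. After a network split,
# only the node that still reaches the witness holds 2 of 3 votes:
print(has_quorum(2, 3))  # True  -> this side keeps the cluster role
print(has_quorum(1, 3))  # False -> this side stops its services
```

Without the witness (two votes in total), neither isolated node would hold a majority, which is exactly why the extra vote matters.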

So we have two servers with the same or similar structure – each has its own mechanical protections, e.g. RAID, link redundancy or power redundancy. They are joined into the cluster and become its elements, and together they provide the functionality. One of the machines is active and serves the service; the other is hot-standby and will take over the services in the event of the failure of the first one. There are also active-active clusters where both machines serve traffic simultaneously and the load is balanced – however, these face problems with resources and concurrent access to disks, specifically at the file system layer (I/O operations), therefore they are used only for highly scalable solutions – most often databases, e.g. MSSQL. Why should resources on cluster nodes be similar, if not identical? Because, for example, in the case of virtualization, virtual machines are allocated amounts of memory and vCPU – it cannot happen that, in the event of one node’s failure, the other does not have enough memory to run the virtual instances! In the case of processors and their virtualization, the landscape is so diverse that migration support is possible only within one architecture – one vendor. This is due to different processor designs and ways of exposing virtualization features. The hypervisor is able to use different processors from the same manufacturer to migrate machines using a special compatibility option, e.g. in PowerShell:

Set-VMProcessor <VMname> -CompatibilityForMigrationEnabled $true

In a cluster constructed this way, various services can be started – most often virtual machines, but also file servers, highly available DHCP servers, etc. What is the weak element of the cluster? Resources. We may have highly available services on, say, 3 nodes, but all of them use a disk on, for example, an old NAS array that is not replicated. In the event of its failure the cluster will be left without resources – even with 10 nodes it will not work. Therefore it is necessary to ensure high availability of the resources, which is just as important as the nodes themselves.

Thing about servers

For a moment we will break away from the DDI subject and say a few words about server systems and ways of managing them in medium-sized enterprises.

There are many tools for managing systems. They differ in capabilities, configuration, availability and, most importantly, price. Unfortunately, most smaller companies are forced to manage servers manually – without external tools, or using tools based on free licenses.

There is a whole bunch of free software that perfectly fulfils its role, also in large corporations. Particularly noteworthy is bare-metal provisioning software such as Ubuntu MAAS. The Nagios monitoring system, the Hadoop data-lake platform with Hive and Apache Spark, the Docker / Kubernetes containerization stack, and the popular Apache web server and the MySQL and Firebird databases also play a role here, not to mention all the Linux / BSD operating systems.

This software is successfully used and provides a high level of reliability. However, one extremely important thing is missing – support. Some of the projects, of course, make it possible to buy it, but in principle the idea of Open Source software is to provide AS-IS functionality. In large corporations, the division into smaller teams, each responsible for its own plot, is much more granular; therefore great emphasis is also placed on support for the products and integrations offered.

With regard to servers – small companies, of course, use weaker equipment. They usually cannot afford high-end solutions such as blade servers or replicated storage arrays such as NetApp. Most small businesses are satisfied with a single server, backed up but without redundancy. This means that in the event of a failure the service is unavailable, and you must accept downtime as a risk.

At the server level it is of course possible to provide some reliability, e.g. in the form of RAID arrays for disks. We have a choice of different levels: RAID 10 (RAID 0, which combines disk space to increase speed, layered over RAID 1 – a mirror), 3-way mirror (RAID 1 across three drives), RAID 5 (requires a minimum of three disks, of which one can always fail) and RAID 6 (requires a minimum of four disks, of which two can fail). The choice of a RAID level is a compromise between safety and performance: the higher the safety, the lower the performance. Therefore the most commonly used solutions are RAID 1 or RAID 10, where – due to the lack of parity information – it is easier to recover data.
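The capacity side of that compromise can be captured in a few lines (a simplified model that assumes identical disks and ignores controller overhead):

```python
def usable_disks(level: str, disks: int) -> int:
    """How many disks' worth of space each RAID level leaves usable."""
    if level == "0":
        return disks            # pure striping, no redundancy
    if level == "1":
        return 1                # mirror: one copy's worth is usable
    if level == "10":
        return disks // 2       # striped mirrors: half the disks
    if level == "5":
        return disks - 1        # one disk's worth of parity
    if level == "6":
        return disks - 2        # two disks' worth of parity
    raise ValueError(f"unknown RAID level: {level}")

print(usable_disks("5", 4))  # 3 -> survives one disk failure
print(usable_disks("6", 4))  # 2 -> survives two disk failures
```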

Another protection that small companies can afford is redundancy at the power level, along with an appropriate UPS device. Servers allow the installation of redundant n+1 power supplies, which means that even if one of the modules fails, the second one keeps the entire server running.

At this point, it is also worth mentioning redundancy understood in the context of access to the server itself – redundancy of network ports. A very commonly used technique is their aggregation – most often using LACP (Link Aggregation Control Protocol). It allows you to configure an active connection on both sides and increase the bandwidth while simultaneously increasing reliability – both links transfer data as active, and if one of them fails, communication is still maintained. Most self-respecting network equipment provides LACP support (Cisco / Juniper, etc.). The case of operating systems is a bit more complicated, because aggregation depends here, for example, on the card driver in the system and on the solution architecture (e.g. virtualization). Windows Server 2012 and above provides so-called NIC Teaming based on LACP; in the case of Linux systems it is sometimes a matter of loading the kernel bonding module and configuring the /etc/network/interfaces file (this may look different depending on the distribution).

In this situation it is impossible not to mention the interfaces dedicated to server management – the BMC (Baseboard Management Controller). These are dedicated interfaces on the motherboard that enable management of even powered-off servers, software installation, or obtaining configuration information. Depending on the motherboard manufacturer, the solution may be free or paid. For example, the IPMI implementation used on Intel / Supermicro boards is free and has full functionality, while the iDRAC software for DELL servers comes as Basic and Enterprise, and only the paid version has a console for remote system installation and management. HP has its own iLO, Lenovo the IMM, etc.

Unfortunately, redundancy ends here. Components such as the CPU or memory – even when there are multiple processors and the memory is distributed – are connected to a single motherboard, which cuts everything off in case of its failure.

At this point we reach a higher level of redundancy, which can be ensured by building clusters: two or more machines (ideally identical or at least – from the Hyper-V clustering point of view – with processors from the same vendor) performing one function. Clusters in the next article.

Into DNS heart – zone definitions

In previous chapters we familiarized ourselves with the details of the configuration file of an example DNS service based on Linux and the BIND daemon. It’s time to dive deeper into the zones to understand the rules of their operation.

Assuming that we have introduced a zone definition (for the illustrative domain example.com) in the main configuration file:

zone "example.com" {
type master;
file "example.com.zone";
};

in the indicated file we must define structures like:

$TTL 86400
$ORIGIN example.com.

@ IN SOA dns1.example.com. hostmaster.example.com. (
2001062501 ; serial
21600 ; refresh after 6 hours
3600 ; retry after 1 hour
604800 ; expire after 1 week
86400 ) ; minimum TTL of 1 day

 IN NS dns1.example.com.
 IN NS dns2.example.com.
 IN MX 10 mail.example.com.
 IN MX 20 mail2.example.com.
dns1 IN A 10.0.1.1
dns2 IN A 10.0.1.2
server1 IN A 10.0.1.5
server2 IN A 10.0.1.6
ftp IN A 10.0.1.3
mail IN CNAME server1
mail2 IN CNAME server2
www IN CNAME server1

So – what exactly is this? Lines starting with a dollar sign carry the so-called global variables: $TTL, or Time To Live, is the lifetime of the records in the cache memory of other DNS servers, and $ORIGIN is the namespace in which the records live, and which completes the shortened names later in the file. For example, take a single line:

server1 IN A 10.0.1.5

It says that the server1 record resolves to the address 10.0.1.5. The $ORIGIN parameter enables the use of the short name server1 instead of server1.example.com, because the corresponding domain suffix is appended at the end.
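The completion rule that $ORIGIN applies can be mimicked in a few lines (a sketch of the rule only, assuming an illustrative origin of example.com. – not a model of BIND’s parser):

```python
def qualify(name: str, origin: str = "example.com.") -> str:
    """Complete a zone-file name the way $ORIGIN does.

    Names ending in a dot are already fully qualified, '@' stands
    for the origin itself, and anything else gets the origin appended.
    """
    if name == "@":
        return origin
    if name.endswith("."):
        return name
    return f"{name}.{origin}"

print(qualify("server1"))             # server1.example.com.
print(qualify("ftp.elsewhere.net."))  # unchanged: already qualified
```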

Next, we have the @ definition – in its place the $ORIGIN value is inserted – followed by the SOA record (start of authority), in which we specify the lifetimes, the main DNS server for the zone, the hostmaster contact and the serial number.

At this point, it is worth mentioning that there are different types of records, e.g.:

  • A – an address record, mapping a name to an IPv4 address
  • PTR – the reverse record, mapping an address back to a name in reverse lookup zones
  • AAAA – like A, but for IPv6
  • MX – mail exchanger, i.e. an e-mail server
  • CNAME – an alias to another record
  • NS – name server – the DNS server responsible for the zone
  • TXT – a textual record with many uses – often used for SSL verification, for publishing an SPF policy, or simply to comment on something
  • SRV – service record – used especially on Windows systems and in Active Directory domains to identify services, e.g. kerberos, ldap, etc. It has a different syntax:

 _service._proto.name TTL class SRV priority weight port target

There are also many other records, e.g.:

  • DNAME – mapping of an entire domain subtree
  • RRSIG, NSEC, DS, SIG – records related to securing the DNS service with DNSSEC (we will write about it separately)
  • ISDN – obsolete, relating to the ISDN service
  • HINFO – host info – carrying host information (CPU / OS)

In general, the design of a record is as follows:

<record name> IN <record type> <value>

And in our example:

@ IN NS dns1.example.com.

means that the zone @ (example.com) is held on the DNS server dns1, and:

dns1 IN A 10.0.1.1

says that dns1 is at 10.0.1.1. So if, on a test client pointed at the DNS server holding this zone, we ping dns1.example.com, we should receive the answer from 10.0.1.1 (the name resolved to the address).
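The general shape of a record can be captured with a tiny parser (illustrative only – the real zone-file grammar has many more cases, and the 10.0.1.1 address is just an example value):

```python
from typing import NamedTuple

class Record(NamedTuple):
    name: str
    rclass: str
    rtype: str
    value: str

def parse_record(line: str) -> Record:
    """Split a simple '<name> <class> <type> <value>' zone-file line."""
    name, rclass, rtype, value = line.split(maxsplit=3)
    return Record(name, rclass, rtype, value)

rec = parse_record("dns1 IN A 10.0.1.1")
print(rec.rtype, rec.value)  # A 10.0.1.1
```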

And so on, and so forth 😉 Zone files can contain thousands of records, but they should not mix local and public IP addresses. Say we have an internal domain fafik.local, whose DNS server contains mappings for local services (local addresses), e.g.:

service01 IN A 10.0.0.11
privatecloud IN A 10.0.0.12

Usually such a DNS server is called internal, and it is responsible for resolving names for machines inside the enterprise. From the outside, in turn, nobody should know about or be able to resolve names such as privatecloud or service01; the outside world should only resolve the public records, such as www. The server responsible for this is called external, and it contains the mappings of public records to public IP addresses, for example:

www IN A 203.0.113.10

Into DNS – named.conf

Now that we have seen what DNS does, it is worth looking at its configuration.

We have already mentioned the DNS server integrated with Windows; today we will look at the BIND daemon.

Its configuration, generally speaking, can be broken down into the main configuration file – named.conf – and the zone files.

The named.conf file allows you to specify so-called ACLs, or access control lists, which can be referenced later, for example for zone transfers, record updates, or queries for records.

The named.conf file, like other configuration files on Linux, has a block-based structure using curly brackets and semicolons that terminate the lines of code. An access control list can therefore be defined like this (with illustrative addresses):

acl ACL_transfer_allowed {
10.0.1.2;
10.0.1.3;
};

ACLs can be used, although they are not a required configuration element; however, they make managing access to DNS servers much easier.

A key element of the configuration is the options section. This is where the global options of the service are configured – the real configuration of the system. Examples of available options:

  • allow-query – lets us select who may query our server. You can use an ACL here or enter network addresses directly. Keywords such as any, localnets, none and localhost are also allowed.
  • allow-recursion – defines which clients may perform recursive queries, i.e. ask the server to resolve a DNS record from A to Z and return the final result. Typically the local DNS servers in our networks are recursive, while high-level servers do not accept this type of query. Quite understandably so – recursive queries require far more time and resources, and the root and TLD servers (.com, .net, etc.) cannot be loaded this way (DNS is hierarchical). ACLs can also be used in this option.
  • blackhole – the opposite of an answer: queries from the ACLs or addresses listed here are sent into the dark and remain unanswered.
  • directory – the working directory of the DNS daemon – most often /var/named
  • forwarders – queries about the rest of the world have to be forwarded somewhere – this is a list of IP addresses of the servers to which our server passes queries it is not authoritative for.
  • forward – here we need a bit of theory. Our DNS server can be authoritative (i.e. it serves our company’s domain), or it can merely forward queries, e.g. to speed up name resolution in our network. The forward option takes two values: only and first. With first, the server sends the query to the forwarders first and tries to resolve the name itself only if they fail; with only, it relies exclusively on the forwarders.
  • listen-on – here we simply specify on which interfaces the service will listen for queries (by default DNS listens on port 53 tcp/udp).
  • notify – controls whether secondary servers are notified about zone changes; here again a bit of theory about zone types is needed. DNS can have the following zones: Primary (the authoritative primary zone); Secondary (an authoritative backup zone – a copy improving the service’s availability); Delegation – a zone delegated to another DNS server; Stub – a zone containing only the glue NS / A records, a skeleton view of the zone; Forward Lookup Zones converting names to addresses and Reverse Lookup Zones converting addresses to names. We will devote a separate chapter of this DNS story to the types of zones and their operation.
  • pid-file – specifies where the file with the named process ID is located.

In addition to the options, the key element of the configuration file is the definition of zones. For example, a zone can be defined as follows:

zone "example.com" {
type master;
file "example.com.zone";
};

Inside a zone it is also possible to configure parameters that override the global values, for example notify, allow-query, allow-update or allow-transfer. From the zone’s point of view the key parameter is type, which specifies the zone type. It can take the values delegation-only, master, slave, forward, hint, in-view, redirect, static-stub and stub. Master and slave are the primary and backup zones. Stub is a zone containing only the glue records; static-stub is a stub-like zone with the difference that the NS addresses can be configured manually. Hint is the zone that defines the root servers. Forward is a redirecting zone, and delegation-only is a zone type that rejects any answer that is not a delegation, which prevents wildcard synthesis in such zones.

A wildcard is a special record definition using *: e.g. for the zone example.com it is possible to define a wildcard * resolving all otherwise undefined names in the domain to a specific IP address.

For a zone to exist there must also be a pointer to the file with the zone’s resources, defined by the file parameter, and in the case of slave zones also the servers from which the data is pulled, given by the directive masters { x.x.x.x; };

In the next part, we will look at the zone files and DNS resource records.

Introduction to the DNS service

Nobody needs to be convinced of the key role DNS services play in the life of the Internet. These silent heroes perform their functions in the shade, ensuring the proper resolution of names and addresses.

Seemingly, the role and configuration of the service is simple, but insightful administrators will find many more possibilities for configuring and managing it.

At this point, it is worth mentioning that DNS is one of the several key services in computer networks, along with the automatic DHCP addressing service, and that these services are closely related to each other.

What is DNS? In the simplest terms, it is a service that translates between IPv4 and IPv6 addresses and names understandable to the average user. If it were not for DNS, to visit a website we would have to type its IP address into the browser – possibly something like http://[2a00:1450:401b:803::200e]:80 – instead of an honest google.com. It is worth remembering that DNS also works the other way – using the so-called reverse zones it can take an IP address and return the domain name.

Since memorizing websites as IP addresses would be rather arduous, it is worth looking a bit closer at DNS servers. The service itself can be set up on almost any system: it can be, for example, the Linux bind9 daemon called named; it can be the DNS Server service of Windows Server; or a completely separate product (!) deployed as a virtual appliance or even a physical device!

From the point of view of the average user, the role of DNS ends with the above formula; server / DDI administrators have much more on their minds, and not only the configuration of a simple DNS service.

For this is about configuring the named.conf file in the case of BIND and creating in it the zones, e.g. lubieplacki.corp, and the reverse zones with the dizzying name <network-address-written-backwards>.in-addr.arpa, and then creating the corresponding zone files containing the records. Let’s use an example. Our company has its own domain lubieplacki.corp. It also has its own DNS server, to which the domain was delegated after its purchase. Therefore, there must be a configured zone and domain records on that DNS server. But we will deal with the details in the next installment of the DNS story.

To begin with, it is worth knowing that each zone has an identification record constituting a kind of business card for the domain – the SOA record. The SOA record contains various mysterious time values (TTLs) that tell when the domain’s records expire, should be forgotten by clients, and retrieved from scratch. Due to the dynamics of changes in DNS naming, the service cannot be static like a monument from a rightfully bygone era; on the other hand, it cannot be fully dynamic either, because the number of queries per second for a popular name across the whole world would clog up even Google’s DNS servers.

So the name caching mechanism is used wherever possible. Records are cached both on the side of the client that launches the browser (the resolver) and on any DNS servers that mediate in resolving the name. The times in the SOA record help in clearing the name data from all the caches, which is necessary because websites (especially those hosted on balanced web servers and returned to users round-robin or at random) often change their IP address. If there were no expiry, a once-cached name-to-address mapping would exist forever, even if the site died, the domain expired, or the site was relocated to another address.
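The expiry behaviour can be sketched as a toy cache (timestamps are passed explicitly so the example is deterministic; the name and address are invented, and real resolvers are of course far more involved):

```python
import time

class DnsCache:
    """A toy record cache: every entry expires after its TTL (seconds)."""

    def __init__(self):
        self._store = {}

    def put(self, name, address, ttl, now=None):
        now = time.time() if now is None else now
        self._store[name] = (address, now + ttl)

    def get(self, name, now=None):
        now = time.time() if now is None else now
        entry = self._store.get(name)
        if entry is None or now >= entry[1]:
            return None  # missing or expired -> ask upstream again
        return entry[0]

cache = DnsCache()
cache.put("www.example.com", "203.0.113.80", ttl=300, now=0)
print(cache.get("www.example.com", now=100))  # 203.0.113.80 (still fresh)
print(cache.get("www.example.com", now=400))  # None (TTL elapsed)
```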

Okay, but before we get back to our SOA, we have to answer the question: how does name resolution through a browser actually work? A simple matter. Our computer has a DNS server address configured – at home it will most commonly be the ISP’s (provider’s) DNS server, and in a company – the company’s DNS. We launch the browser and enter www.kubraczekdlapsa.net; the browser, acting as the resolver, contacts the nearest DNS server (or the one configured directly in the network adapter settings) in search of the desired page. Our DNS server is supposed to carry out the whole process of resolving the name and return with its shield or on it – this is called a recursive query. Unfortunately, our DNS server knows nothing about this site – no one else in our network has tried to access it, so it is not in the server’s cache. The server must therefore contact other DNS servers to get more information. In this situation, the so-called root servers (simply the highest-level DNS servers in the world) are polled. They break our name into parts and say something like: I have no idea what this is, but I know which DNS server is responsible for the “.net” part – here is its address. Our server contacts the server responsible for the “.net” part with the next question about the whole name. Like the root server, it responds that it does not know the address for the requested name, but it has in its records the name server registered for kubraczekdlapsa.net. Our local DNS server therefore asks the server appropriate for kubraczekdlapsa.net, and this time it obtains the IP address for the www record; this address is cached everywhere along the way and returned to the resolver – the browser, which opens the desired page.

It is worth adding here that while our local DNS server is responsible for resolving the name from A to Z and returning the finished result, the intermediate servers return only partial results (iterative queries). This is due to the need to optimize DNS load – otherwise the high-level servers would be the most heavily loaded.
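The referral walk described above can be simulated with a toy namespace – a dictionary standing in for the root, TLD and authoritative servers (the names follow the kubraczekdlapsa.net example and the address is invented; real resolvers of course speak the DNS wire protocol):

```python
# Each mock "server" answers with a referral ("ask server X")
# or a final address for the zones it knows about.
NAMESPACE = {
    "root":         {"net.": ("referral", "net-tld")},
    "net-tld":      {"kubraczekdlapsa.net.": ("referral", "kubraczek-ns")},
    "kubraczek-ns": {"www.kubraczekdlapsa.net.": ("answer", "203.0.113.80")},
}

def resolve(name: str, server: str = "root") -> str:
    """Follow referrals downwards until a final address is returned."""
    answers = NAMESPACE[server]
    # Match the longest suffix this server has an entry for.
    for zone in sorted(answers, key=len, reverse=True):
        if name.endswith(zone):
            kind, data = answers[zone]
            if kind == "answer":
                return data
            return resolve(name, data)  # chase the referral
    raise LookupError(f"{server} has no referral for {name}")

print(resolve("www.kubraczekdlapsa.net."))  # 203.0.113.80
```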

In the next blog entry, I will present DNS in detail.