The Men & Mice Blog

The ABC's of DNS: a select glossary from the Men & Mice training archives - Part 1

Posted by Men & Mice on 4/26/19 9:43 AM

As you’ve probably discovered by now, we have an honest passion for teaching and training. For the past 20 years, Men & Mice has been offering DNS and BIND courses across the globe. Always updated and always practical, our classes have from the start been built to address real-world challenges and solve problems that our students actually face.


Beyond this series, you can also catch us in person (outside of the training courses): we’re really proud to be sponsoring RIPE78 in Reykjavik next month!

In addition to the diversity programming, we’ll also be giving two talks, presented by Carsten Strotmann, about DNS privacy and Unwind.


And the onslaught of new challenges never stops. Public and private networks. Cloud and on-prem resources. Hybrid and multiclouds. Privacy, security, efficiency.

Being on top of our game means constantly learning.

In this new series, we'd like to give you a small taste of the Men & Mice training courses. Organized alphabetically, we'll cover a glossary of select tips, tricks, and trivia that will deepen your understanding of DNS and BIND.

Without further ado, let's get started - we have a whole alphabet to cover.

A is for "anonymizing IP addresses in logfiles"

Anonymizing IP addresses is a handy trick to know: (DNS) privacy features are requested ever more often, and businesses are becoming increasingly liable for the traffic to and from their servers.

ipv6loganon is a Linux command line tool for anonymizing IP addresses in HTTP server logfiles. By default your webserver (be it Apache, nginx, or something else) logs every connection. This is useful for diagnosing connection issues or finding malicious actors - but during normal operations it's also a liability from a privacy standpoint.

You can type man ipv6loganon in your server terminal to see all the options. Run it as a cron job or automate it some other way.
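
As a minimal sketch - the logfile path below is only an example, so adjust it for your distribution and webserver:

    # ipv6loganon reads log lines on stdin and writes them to stdout,
    # with the client IP addresses anonymized
    ipv6loganon < /var/log/apache2/access.log.1 > /var/log/apache2/access.log.1.anon

The same one-liner can run from a nightly cron job against rotated logfiles.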

B is for "BIND features roundup"

BIND is a fantastic suite of software. Whether you consciously use it or not, it's one of the most fundamental pieces in almost any network puzzle (that's why our most popular training course is titled "DNS and BIND").

A lot of people are surprised by just how many tools BIND offers. For example:

  • dig is the Swiss Army Knife of network tools. So much so, that we'll be giving it its own entry at the letter 'D' in the next post. In the meantime, read man dig in your terminal, and learn to love it.
  • delv can be used to verify the DNSSEC chain of trust. It's as easy as typing delv +v www.domain.com.
  • named-checkconf -z checks your named.conf and test-loads every zone configured in it - handy for verifying manual changes to zonefiles before reloading. (Both delv and named-checkconf are shown in the short sketch after this list.)
  • dnstap is a faster alternative to query logging. (During the training courses we go deep into how to use it.)
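
To illustrate delv and named-checkconf, here's a quick sketch - example.com stands in for your own zone, and the path to named.conf varies by distribution:

    # validate the DNSSEC chain of trust for a name, printing the validation steps
    delv +vtrace www.example.com A

    # check named.conf for syntax errors and test-load every zone it configures
    named-checkconf -z /etc/named.conf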

BIND also comes with a host of security features like DNS cookies, Response Policy Zones, Response Rate Limiting, and more. The DNSB-W and DNSB-A courses cover these in detail.
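
For instance, Response Rate Limiting can be switched on with a short options block like this (a sketch - tune the numbers to your own traffic patterns):

    options {
        # cap how many identical responses per second any single client network
        # receives, which blunts reflection/amplification abuse of the server
        rate-limit {
            responses-per-second 10;
            window 5;
        };
    };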

C is for "catalog zones"

C is not just for cookies, but also: catalog zones. Catalog zones are special DNS zones used to propagate a list of zones from a master to its slave servers. Slave servers read the catalog zone and automatically configure its member zones, and if any zones are added or removed "upstream", the change is synced to every slave through an ordinary transfer of the catalog zone.

Catalog zones also help with redundancy: if a slave server goes out of commission for any reason, you can resume normal operations by quickly spinning up a replacement that picks up the full zone list automatically.
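
Here's a minimal slave-side sketch for BIND 9.11 or newer - catalog.example and 192.0.2.1 are placeholders for your catalog zone name and your master's address:

    options {
        # treat the zone below as a catalog: the member zones it lists are
        # added to this server automatically as they appear upstream
        catalog-zones {
            zone "catalog.example" default-masters { 192.0.2.1; };
        };
    };

    # the catalog zone itself is transferred like any ordinary slave zone
    zone "catalog.example" {
        type slave;
        masters { 192.0.2.1; };
        file "catalog.example.db";
    };

On the master side, adding a member zone record to the catalog zone is enough for every slave that consumes it to start serving that zone after the next transfer.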

Want to learn more?

In this DNS glossary series, we focus on just a handful of concepts in each post. Bite-sized, they're but the tip of the iceberg. Our training program is where all of these concepts come together in the right context - and you get to try your hand at putting newly learned skills into action.

  • If you’re new to DNS, we offer the DNS & BIND Fundamentals (DNSB-F) course. It’s part of the DNS & BIND Week (DNSB-W) and serves as a shorter introduction to the world of DNS and BIND.
  • If you’re already familiar with the basics, the full five-day DNS & BIND Week (DNSB-W) course takes you deeper into DNS, including a heavy emphasis on security, stopping just short of DNSSEC (for which we offer a separate course).
  • And if you're looking for even more, we offer the DNS & BIND Advanced (DNSB-A) program, getting into the deep end of things.

Check out our training calendar for 2019, and reach out to us with any questions. 

Topics: IT best practices, DNS training, RIPE 78

Network Outages, Human Error and What You Can Do About It

Posted by Men & Mice on 12/18/17 7:14 PM

When your route leaks 

Human error. As far as mainstream reporting on network outages goes, it’s the less flamboyant sidekick to DDoS and other cyber attacks. But in terms of consequences, it’s just as effective.

Once again, at the beginning of November, large parts of the US found themselves unable to access the internet due to one small error: a misconfiguration at Level 3, an ISP (Internet Service Provider) whose backbone underpins other, bigger networks.

According to reports, the outage was the result of what is known as a “route leak”. In short, a route leak occurs when internet traffic is routed in inefficient, or simply wrong, directions due to incorrect information provided by one, or multiple, Autonomous Systems (ASes). An AS is a collection of IP prefixes operated under a single routing policy, typically by an ISP or other large network. Packets of data are routed between ASes, which use the Border Gateway Protocol (BGP) to establish and communicate the most efficient routes so you can browse the whole internet, and not just the IP addresses on your particular ISP's network.

Route leaks can be malicious, in which case they’re referred to as “route hijacks” or “BGP hijacks”. But in this case, it seems the cause of the outage was nothing more spectacular than a simple employee blunder: (as speculation goes) a Level 3/CenturyLink engineer made a policy change that was, in error, applied to a single router while configuring BGP for an individual customer. This particular incident constitutes what the IETF defines as a Type 6 route leak, generally occurring when “an offending AS simply leaks its internal prefixes to one or more of its transit-provider ASes and/or ISP peers.”

Route leaks, small and large, are regular occurrences – it’s part and parcel of the internet’s dependency on the basic BGP routing protocol, which is known to be insecure. Other recent high-impact route leaks include the so-called Google/Hathway leak in March 2015 and a misconfiguration at Telekom Malaysia in June 2015 which had a debilitating knock-on effect around the world.

To minimize the possibility of route leaks, ISPs use route filters that are supposed to catch any problems with the routes their peers and customers announce for sending and receiving traffic.
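
Those filters are typically generated from route objects registered in IRR (Internet Routing Registry) databases, which record the prefixes an AS is expected to originate. You can inspect the raw data yourself with a plain whois query - a sketch, using Level 3's AS3356 as the example:

    # list the route objects registered in the RADb IRR with origin AS3356
    whois -h whois.radb.net -- '-i origin AS3356'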

Other ways of combating route leaks include origin validation, NTT’s peer locking and commercial solutions. Additionally, the IETF is in the process of drafting proposals on route leaks.

Factoring in the human element

Tools and solutions aside, Level 3’s unfortunate misconfiguration once again highlights the fact that, despite keeping a low profile in the news, human error still rules when it comes to causing common network outages.

In an industry focused on how to design, build and maintain machines and systems that enable interconnected entities to send and receive millions of packets of data efficiently every second of every day, it’s maybe not all that odd that the humans behind all of this activity become of secondary importance. Though, as technology advances and systems become more automated, small human errors such as misconfiguring a routing prefix are likely to have ever larger knock-on effects. At increasing rates, such incidents will roll out like digital tsunamis across oceans, instead of only flooding a couple of small, inflatable IP pools in your backyard.

Boost IT best practices - focus on humans

So outside of general IT best practices, what can you do to help the humans on your team to avoid human error?

Just as with any network, human interaction is based on established relationships. And just as in any network, a weak link, or a breakdown in the lines of communication, can lead to an outage. Humans who have to operate in an atmosphere of unclear instructions, tasks, responsibilities and communication can become ineffective and anxious. This eats away at employee morale and workflow efficiency and lays the groundwork for institutional inertia and the stalling of progress. At other times, a lack of defined task-setting and clear boundaries can result in employees showing initiative in the wrong places and at the wrong times.

To limit outages due to human error, just distributing a general set of best practices or relying on informally communicated guidelines amongst staff is simply not enough. While networking best practices always apply, the following four steps can be very effective in establishing the kind of human relationships needed to strengthen your network and optimize network availability.

 


1. Define

Draw up, and keep updated, not only a diagram of your network architecture (you do have one, don’t you?), but also a workflow diagram for your teams: who is tasked with which responsibility, and where does their action fit into the overall process? What are the expected outcomes? And what alternative plans and processes are in place if something goes awry? Most importantly, match tasks and responsibilities with well-defined role-based access management.

2. Communicate

Does everyone on your team, and collaborating teams, know who is responsible for what, when and where, and how the processes flow? Is this information centrally accessible and kept up to date? Clarity, structure and effective communication empower your team members to accept responsibility and show initiative within bounds.

3. Train

Does everyone on your team know what’s expected of them, and did they receive appropriate training to complete their assignments properly and responsibly? Do they have the appropriate resources available to do what they need to do efficiently? Without training and tools in place, unintentional accidents are simply so much more likely to occur.

4. Refresh

Don’t wait until team members run into trouble or run out of steam. Check in with each other regularly, and encourage a culture of knowledge sharing where individuals with different skill sets can have ample opportunity to develop new skills and understanding.


Finally

The saying goes, a chain is only as strong as its weakest link. The same goes for networks.

At a time in history when we have more technological checks and balances available than ever before, it turns out the weakest networking link is, too often, a human. While we’re running systems for humans by humans, we may as well put in the extra effort to help humans do what they do, better. Our networking systems will be so much stronger for it.

 


 

Topics: DDI, DDoS, network outages, IT best practices, IP address management

