The Men & Mice Blog

DNS training from A to Z, Part 5

Posted by Men & Mice on 9/20/19 9:25 AM

Continuing our glossary of DNS tips & tricks, we’re covering the letters M, N, and O this time.

M is for “master DNS zone”

A.k.a. the Primary Zone. Informally, The Zone Of All That Is Good and Pure. (May have made that one up.)

Simply put, the master DNS zone resides on the server that is authoritative for the zone’s data. (As opposed to a slave zone; more on that in a bit.) When you make changes to the master DNS zone, such as adding, editing, or deleting a record, those changes are replicated to the slave DNS zones.

Slave (or secondary) DNS zones are read-only copies of the master DNS zone, used to relieve the primary zone of query load or as a backup in case of failure. Data is replicated from the master DNS zone to the slave zone(s) through zone transfers.
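As a rough illustration, here's how the two sides of that relationship might look in BIND's named.conf (a minimal sketch: the zone name, file paths, and addresses are placeholders):

    # on the primary (master) server
    zone "example.com" {
        type master;
        file "/etc/bind/zones/db.example.com";
        allow-transfer { 192.0.2.2; };   # the secondary's address
    };

    # on the secondary (slave) server
    zone "example.com" {
        type slave;
        masters { 192.0.2.1; };          # the primary's address
        file "/var/cache/bind/db.example.com";
    };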

N is for “named-check*”

Namely (🙄) named-checkzone and named-checkconf. These are two helpful commands that ship with BIND (we’ve talked about it before) for checking a zone file’s or configuration file’s validity before pushing it live.

The neat thing about these two commands is that they not only report any errors in their respective files, but also tell you the line number of each error. When dealing with large files, this can save a lot of time and headache.
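For instance, checking a single zone file looks like this (the zone name and path are placeholders); any problem is reported with the offending line number, and a clean run ends with "OK":

    named-checkzone example.com /var/named/zones/example.com.zone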

Use them freely.

O is for “OpCode”

A DNS opcode is a four-bit field in the DNS message header that identifies the kind of operation being requested of the DNS server.

The opcode can be, per IANA’s (the Internet Assigned Numbers Authority, we’ve also talked about them before) designations:

  • 0: Query (see RFC 1035)
  • 1: IQuery (Inverse Query, obsolete; see RFC 3425)
  • 2: Status (see RFC 1035)
  • 3: Unassigned
  • 4: Notify (see RFC 1996)
  • 5: Update (see RFC 2136)
  • 6: DNS Stateful Operations (DSO) (see RFC 8490)
  • 7-15: Unassigned

OpCodes show up in the header when you examine a query (like with dig):
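For instance, the header dig prints back includes the opcode right at the top (output abbreviated here, and the id and flags will differ on your machine):

    $ dig menandmice.com A

    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 23109
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1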

Want to learn more?

This series is byte-sized (that joke just never gets old) — but a lot more can be said and done. To learn more in-depth about DNS specifically, we offer a comprehensive DNS training program.

You can enroll in different groups depending on your skill level:

  • If you’re new to DNS, we offer the DNS & BIND Fundamentals (DNSB-F) course. It’s part of the DNS & BIND Week (DNSB-W) and serves as a shorter introduction to the world of DNS and BIND.
  • If you’re already familiar with the basics, the full five-day DNS & BIND Week (DNSB-W) course takes you deeper into DNS, including a heavy emphasis on security, stopping just short of DNSSEC (for which we offer a separate course).
  • And if you're looking for even more, we offer the DNS & BIND Advanced (DNSB-A) program, getting into the deep end of things.

To see if you can get on board with one of the remaining courses this year, check out our training calendar for 2019, and reach out to us with any questions.

Topics: Men & Mice Suite, DNS, IT best practices, DNS training

VMworld US 2019: all aboard for multicloud

Posted by Men & Mice on 9/12/19 10:46 AM

The guiding word for San Francisco between the 25th and 29th of August was ‘cloud.’ Everything revolved around it: from storage solutions to innovations in computing performance, just about every vendor came set to showcase how their products provide distinct advantages in a cloud environment.

The verdict is clear: cloud adoption in one form or another is not an ‘if’, but a ‘when'. Those coming to VMworld whose companies haven’t yet invested in some kind of cloud offering came prepared to explore all options.

Pitfalls and best practices

Cloud adoption is a complex task. This is especially true in our area of expertise: networks.

The show floor was abuzz with the newest advancements in technologies like storage for big data (in the cloud) and computing performance in service of machine learning (in the cloud).

Meanwhile, the stalwart Men & Mice team had a field day as scores of people came to us to learn how to do cloud better. We chatted with people running multiple data centers, on-prem, in the cloud or hybrid and multicloud, looking for better management solutions. We debated the merits of appliance-based approaches vs. overlays. (Overlays are better, of course). And we had a blast discussing the power of cloud DNS. (If you’re utilizing cloud DNS, you don’t need anything else. You’re already using the best there is. You just need to make it more transparent and compatible with your existing systems and processes.)

Cloud adoption, coupled with migration of data and existing systems, can bring with it a host of pitfalls to avoid, as well as a score of best practices to study and apply. But how do you get your network ready for cloud, or multicloud, adoption? 

On this subject, our North American Director of Sales Operations, Paul Terrill, gave a talk at VMworld's Solutions Exchange Theater in San Francisco on future-ready network best practices. Take a look:

Cloud is a multiple choice question

We’ve arrived in an era where one cloud is not necessarily the best answer. Cloud services and their respective ecosystems have differentiated well beyond simply executing similar processes around the same concept.

The quality of tools and depth of services between different cloud providers can vary considerably, and your needs may be best served by more than one. Every company has to evaluate what works for them. Networking best practices, as discussed by Paul Terrill in the above-mentioned talk, might help you decide what matters most to you. 

In this vibrant and varied landscape of the cloud market, solutions that provide a connective layer between the disparate offerings deliver lasting value and position networks well for a rapidly changing network management landscape.

The Men & Mice Suite is such a solution, developed to provide an abstraction layer for cloud (and on-prem!) networks that can work with any underlying technology or service. From VMware to Azure to AWS, NS1, and Akamai: it doesn’t matter what’s in your networks; what matters is how you see (and manage) it.

And because it’s a software-defined and API-first solution, the Men & Mice Suite can be deployed non-disruptively (no more re-buying appliances every five years) while offering advanced automation and customization tools to save valuable resources across network teams.

In short, with the Men & Mice Suite you don’t need to adapt your network to conform to our solution. You can continue to use the platforms you have, or want, to build the future-ready network you need.

Get connected

We’ve had a great time in San Francisco (as illustrated) and answered a lot of questions from interested parties. We were also delighted to meet up with current customers and hear their success stories with the Men & Mice Suite.

From the latter, we’ll be bringing you deployment studies, white papers, and more technical content on the blog and in our podcast in the coming weeks and months.

For the former, our doors are always open for a chat, or you can delve deeper with a free demo. Feel free to reach out to us and we’ll be happy to answer your questions and show you how we can help you change the way you see, and manage, your networks.

Topics: Men & Mice Suite, IPAM, DNS, DHCP, "cloud dns", vmworld

DNS & DHCP spotlight: BIND 9.14 & Kea

Posted by Men & Mice on 7/4/19 11:33 AM

While we were at RIPE 78 in Reykjavik, we got to catch up with Matthijs Mekking, a software engineer at ISC tasked with working on BIND, DNSSEC and other projects. We made a podcast of our chat, but given just how important BIND is to everyday workflows, a blog post touching on some of the topics also seemed warranted.

BIND 9.14

BIND truly is one of the most fundamental pieces of software for anyone working with DNS. (It’s not for nothing that we call our training program DNS & BIND!)

Changing the BIND release scheme

Starting with BIND 9.13, ISC has changed the release scheme for BIND: odd minor numbers represent development releases, and even minor numbers denote the stable branch (so the 9.13 development series led to the stable 9.14, and 9.15 will lead to 9.16). Users welcomed the opportunity to test the development branch; and since many companies build on BIND's features, these versions offer a chance to strategize. It also allows ISC to gather valuable early feedback and enables them to better focus their resources or course correct where necessary. (Find out which version of BIND 9 suits you best.)

What's new in BIND 9.14 

With BIND 9.14, ISC focused on making BIND a modern nameserver again. In addition to bug fixes, the release responds to privacy and usability requests, including:

  • a lot of modernization and code refactoring
  • a 12% performance increase
  • QNAME minimization (enabled by default in relaxed mode) to enhance privacy
  • mirror zones (serving a transferred copy of a zone’s contents without acting as an authority for it) - see the config sketch after this list
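For reference, a minimal named.conf fragment exercising those last two features might look like this (treat it as a sketch rather than a drop-in config; in BIND 9.14 a mirror zone for the root uses built-in primaries, so none need to be listed):

    options {
        # the 9.14 default; shown explicitly here for clarity
        qname-minimization relaxed;
    };

    # keep a local, verified copy of the root zone without being authoritative for it
    zone "." {
        type mirror;
    };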

What's coming in BIND 9.15

In BIND 9.15, ISC will continue to modernize BIND's codebase, in particular refactoring the networking code. This will allow them to streamline implementations such as DNS-over-TLS and DNS-over-HTTPS and make them easier to deploy.

Making DNSSEC in BIND more intuitive is also a priority. This includes simplifying zone signing as well as providing support for offline and combined signing keys.

These roadmap plans should form a solid base for BIND 9.16, which is scheduled to be the next Extended Support Version (ESV) after BIND 9.11. 

Kea

As mature and robust as ISC DHCP is, it's also old. It was started in 1995, when networks were a lot smaller, network management a lot more straightforward, and perhaps not as integral to the success of business operations as it is today. The ISC DHCP code was extended through the years, but that also made it harder to maintain.

Kea DHCP came about as the natural successor to ISC DHCP, designed for modern mission-critical environments and built to address these issues. It's a more scalable and better performing DHCP server, with a different architecture and a somewhat different feature set: new features arrive through hooks libraries, and there is a rich API for configuring subnets and host reservations, RADIUS integration, and support for several database backends.
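To give a feel for the difference, a stripped-down Kea DHCPv4 configuration is plain JSON. (This is only a sketch: the interface name, subnet, and pool below are placeholders, and a real deployment would add more, such as option data, hooks libraries, and a proper lease database.)

    {
      "Dhcp4": {
        "interfaces-config": { "interfaces": [ "eth0" ] },
        "lease-database": { "type": "memfile" },
        "valid-lifetime": 4000,
        "subnet4": [
          {
            "subnet": "192.0.2.0/24",
            "pools": [ { "pool": "192.0.2.100 - 192.0.2.200" } ]
          }
        ]
      }
    }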

ISC recommends, particularly for new deployments, using Kea instead of ISC DHCP. This is not only because Kea is better adapted to modern environments, but also because support for ISC DHCP will cease in the long term, most likely sometime after 2020.

To learn more about Kea and how to migrate from ISC DHCP, take a look at this webinar from ISC:

Kea's modules vary from open source to paid (freemium and subscription) but the documentation for all modules is freely available for users to look at and evaluate. Beta versions are also freely available.

Where to from here?

As BIND and Kea show, development in the network infrastructure (DNS, DHCP, IPAM) space is not only ongoing but vibrant. RIPE 78 (as with all RIPE meetings) provided a great opportunity for a glimpse at just how vibrant this sector is.

As a company wholly dedicated to DDI, we continuously follow developments at ISC and other major developers, and share what we learn along the way. For example, both our RIPE 78 blog coverage and our newly launched podcast focus on the details and implications of major changes that are happening or are expected to happen. Follow us here on our blog, on social, and subscribe to the podcast to stay in the know.

Topics: DNS, DHCP, BIND 9, ISC, Kea

World IPv6 Day 2019 (plus a podcast!)

Posted by Men & Mice on 6/6/19 9:50 AM

June 6th, 2012 (or “6/6”) saw the World IPv6 Launch Day. Today we celebrate the 7th anniversary.

For those in need of a quick cheat sheet, here’s ours.

(Mind you, this is not ONLY a cheat sheet, but also doubles up as a lens cleaning cloth. Come by our booth at Cisco Live in San Diego to pick one up.)

Beyond that quick reference, what’s all the fuss about this old-new networking technology? What has changed since it first appeared (back in the 1990s)? What hasn’t? And where do (or should) we go from where we are now?

To IPv6 or not to IPv6?

That is the question. For what it’s worth we, and literally everybody we spoke to at RIPE 78, are for IPv6.

That said, there is legitimate criticism against it. More often than not, however, it tends to be rooted in shortcomings of implementation, misunderstandings in adoption strategies, or just general reluctance toward the work involved in the switch.

Large tech companies have adopted IPv6 whole-heartedly. ISPs, cloud providers, and data centers have been offering IPv6 for a while. Microsoft has been at work getting rid of IPv4 addresses in their offices for years. Google even keeps a public chart of IPv6 adoption amongst its users:

[Chart: Google's statistics on IPv6 adoption among its users, June 2019]

Bottom line is: adoption is on the up, but it’s still spotty at best. And it is true: IPv6 isn’t perfect. But then again IPv4 isn’t, either. It will not get any better, though, if we don’t dedicate effort to perfecting it through practice.

Fun fact: IPv6 addresses are free. IPv4 addresses go for $20+ apiece, and that price keeps rising.

It’s evident that, various inventions and initiatives notwithstanding, we’ll likely soon be out of IPv4 addresses. Never before have there been so many connected devices, from smartphones to cars, from smart thermostats to smart toasters. IPv6 is an inevitability.

What can we do?

Introducing ‘resolv.pod’: a DNS podcast

We can, and most definitely should, discuss and evaluate our options regarding anything and everything affecting the future of the networks we depend on. Attend conferences, read papers, draft strategies.

To that end, we’re happy to announce that we are launching a podcast aimed at sharing with you the mindshare we have access to.

Of course, as is clear from the name of our podcast, the focus won’t be on IPv6 exclusively, but rather anything and everything related to DNS, DHCP, and IPAM. Facilitating discussions about IPv6, amongst other things, and giving listeners fuller context from experts in the field, are the DNAME of the game (OK, name - just couldn’t resist).

As luck would have it, we were fortunate enough to grab a conversation with Geoff Huston, Chief Scientist at APNIC (Asia Pacific Network Information Centre), in the lovely Reykjavik sunshine at RIPE 78.

So to celebrate World IPv6 Day, why don’t you sit back and listen to our very first episode featuring Geoff talking networking highs and lows with Men & Mice’s Carsten Strotmann? It’s sure to entertain - and inform - in equal measure. Happy World IPv6 Day!

Find resolv.pod on your favorite podcast platform:

More interviews and discussions coming up in the next weeks! Let us know what you’d like to learn more about via the podcast email, our social media channels, or as a comment below.




Topics: IPv6, DNS, podcast, resolv.pod, IPv6 Day

Men & Mice @ Cisco Live 2019: New Best Practices for Future-ready Hybrid and Multicloud Networks

Posted by Men & Mice on 6/5/19 11:28 AM

 

Cisco Live San Diego: we’re coming! Find us at booth 2234 for all your DNS, DHCP, and IPAM needs, plus sweet swag from Iceland!

Whether you’re attending Cisco Live or not, chances are your enterprise or large organization is well into developing or implementing its cloud strategy. Further, you’re likely capitalizing on a number of cloud services across multiple platforms.

This year at Cisco Live, we’ll have Paul Terrill, our North American Director of Sales Operations, taking the Think Tank stage for a look into what best practices you can adopt today to get your environment ready for the hybrid and multicloud networks of tomorrow.

With more than a decade of experience delivering software solutions that meet the diverse IP infrastructure needs of some of the world’s largest multinational enterprises and government organizations, Paul is an expert in identifying, and solving, large scale network management challenges.

Here’s a sneak peek at Paul’s talk.

Adopting best practices for a future-ready network

Scheduled for Monday, June 10, 3:30-4:00 PM PDT at SDCC, World of Solutions, Think Tank 2, Paul’s talk will focus on the challenges organizations face in a cloud-native world, the solutions that transform networks into a future-ready state, and the pitfalls to avoid along the way.

During the session, Paul will explore new best practices and the advantages of adopting hybrid network strategies that take advantage of service-native features in all IP infrastructure solutions, whether on-premises, cloud, or multicloud. Specific attention will be given to some of the common pain points in adopting hybrid and multicloud network strategies, such as the potential loss of access control assignments, lost time and staff resources during migration processes, and compatibility hurdles between multiple services (and how to overcome them).

Additionally, Paul will describe in detail the advantages of Cisco IOS DHCP over other solutions, as well as where most hybrid and multicloud migration strategies go off the rails. He’ll also be speaking about why IT decision makers need not fear APIs (in fact, why they should embrace them) and why homegrown solutions are no longer acceptable.

Made with your infrastructure in mind

We understand the importance of visibility, control, automation, and security — and also how challenging those can be in complex, hybrid IP infrastructures. Men & Mice provides API-driven DNS, DHCP, and IPAM software solutions to global enterprise, education, and government organizations.

Men & Mice also recognizes the widespread presence and critical importance of Cisco hardware in enterprise networks. With products like Umbrella, Cisco is continuing to bring network infrastructure innovation to larger audiences. By utilizing the Men & Mice Suite with Umbrella, Cisco customers gain the advantage of being able to control internal DNS resolvers, numbering anywhere from dozens to hundreds, in one fell swoop. In addition, proper visibility quickly highlights servers that are not properly configured.

Questions?

While in San Diego next week, come and listen to Paul’s talk, and/or visit us at booth 2234 throughout the event. You’re welcome to fire away with whatever questions come to mind - our experts will be on hand to help you solve your unique enterprise networking pain points.

Topics: DNS, Cisco Live, hybrid network, Cisco IOS, multicloud

The ABCs of DNS: a select glossary from the Men & Mice training archives - Part 2

Posted by Men & Mice on 5/31/19 7:46 AM

Continuing our glossary of DNS tips & tricks, we’re covering the letters D, E, and F this time.

DNS ALERT

Our popular DNS & BIND Week, DNS Fundamentals, and DNS Advanced courses are all scheduled to run June 20th to June 24th in Reston, Virginia, USA. Still want to join in? All the info is on our training page.

D is for “dig”

Dig is the Swiss army knife of network tools. It's got so much functionality, it’d be next to impossible to cover it all, but here’s a taste:

  • find your IP address using: dig @ns3.google.com +short o-o.myaddr.l.google.com txt
  • relatedly, you can make an alias in your .bashrc file: alias myip='dig o-o.myaddr.l.google.com -t txt +short @ns3.google.com'
  • you can use dig +trace <domain-name> to follow all delegation from the root down.

And if dig isn't available, you can use one with a web interface (sometimes called a DNS Looking Glass), such as https://dns.bortzmeyer.org/[URL]/[TYPE] (for example https://dns.bortzmeyer.org/menandmice.com/AAAA).
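Since a looking glass is just a URL, you can also query it straight from the command line, for example with curl (the response format varies by service, typically HTML or JSON depending on the Accept header):

    curl https://dns.bortzmeyer.org/menandmice.com/AAAA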

Remember, friends don’t let friends use nslookup.

E is for “error-free config files”

To err is human. Sometimes a typo sneaks into your configuration files. (Unless you’re using Men & Mice, in which case validation is automatic.)

A quick way to make sure everything’s in order is to run named-checkconf -z, which parses named.conf and then does a test load of all the master zones defined in it. (To check just the configuration file itself, without loading zones, run named-checkconf <path to named.conf>.)
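In practice that looks something like this (the path is an example):

    # syntax-check the configuration file only
    named-checkconf /etc/named.conf

    # also test-load every master zone defined in it
    named-checkconf -z /etc/named.conf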

F is for “FQDN”

FQDN stands for ‘Fully Qualified Domain Name’ and you need it for a number of things. It’s the human-readable address that the DNS resolver translates into its corresponding IP address.

The FQDN is made up of three or more parts (called labels):

  • root (the trailing dot at the end)
  • TLD (such as .com, .net, etc.)
  • domain (such as menandmice)
  • host (such as www, info, etc.)

Each label is a string between 1 and 63 characters (letters, numbers, and dashes), and the total length of the FQDN is capped at 255 characters.

To find the FQDN of your machine:

  • on Windows: Start > Programs > Administrative Tools > Active Directory Domains and Trusts (or echo %COMPUTERNAME%.%USERDNSDOMAIN% in the command line)
  • on Linux & MacOS: hostname -f (on Linux you can also use hostname --fqdn)

Want to learn more?

This series is bite-sized (almost fitting a DNS query) — but it’s just the tip of the iceberg. A lot more is said (and done) in our DNS training program:

  • If you’re new to DNS, we offer the DNS & BIND Fundamentals (DNSB-F) course. It’s part of the DNS & BIND Week (DNSB-W) and serves as a shorter introduction to the world of DNS and BIND.
  • If you’re already familiar with the basics, the full five-day DNS & BIND Week (DNSB-W) course takes you deeper into DNS, including a heavy emphasis on security, stopping just short of DNSSEC (for which we offer a separate course).
  • And if you're looking for even more, we offer the DNS & BIND Advanced (DNSB-A) program, getting into the deep end of things.

Check out our training calendar for 2019, and reach out to us with any questions.

Topics: DNS, networking best practices, dig

The RIPE-javik logs: Day 5

Posted by Carsten Strotmann on 5/26/19 11:06 AM

carsten@menandmice:~$ cat ~/ripe/ripejavik-day5.txt | blog-publish

As RIPE 78 came to a close, it was time to reflect and to forge plans for the future.

The last day of RIPE 78

In the final plenary session of RIPE 78, Theódór Gíslason from Icelandic security company Syndis talked about current threats on the Internet and how many users underestimate the security issues. He underpinned this statement with examples of how attackers can find detailed information on a victim through public information like commits on GitHub or posts on Facebook, and noted that data breaches are getting more frequent and bigger.

One could say that most of the information in the presentation wasn't new to the RIPE audience, and Theódór Gíslason was somewhat surprised when he asked who was using Facebook and only a few hands went up. The RIPE audience is a special case.

Later on, it was Roland van Rijswijk-Deij’s turn again to take the stage. Today he was reporting on historical data on RPKI, the Resource Public Key Infrastructure securing the Internet’s routing system. The RIPE NCC has archived historical RPKI repositories and Roland used the "Routinator" tool to analyze how RPKI usage has changed over time. For example, he found that the average prefix size in RPKI is decreasing over time for both IPv4 and IPv6.

Richard Nelson from the Faucet Foundation presented on the open source OpenFlow controller with the name "Faucet". Faucet is targeted at enterprises that want to move router and switch management away from closed network equipment vendors into OpenFlow Hardware/Software. Richard reported on their real world implementation of the Faucet system at the Super-Computer Conference 2018 in Dallas, TX.

Before RIPE Chair Hans Petter Holen officially closed RIPE 78, there was a challenging online quiz titled "Are you up to the Level of RIPE 78?" which was organized by Fernando Garcia. RIPE meetings are often exhausting, quite challenging, but also lots of fun!

A final note

RIPE 78 was the second largest RIPE meeting ever, and for me personally it was one of the best RIPE meetings I've attended. It had great presentations, a good location (Hotel Nordica), good food, and very nice weather in Reykjavik. I have been told this has been one of the warmest weeks in May for years. Must’ve been the hot topics at RIPE 78.

And then there was the "Group of Secrets" (aka Secret Working Group), but the report from that group is a secret and I'm not allowed to tell you anything about it. If you want to know what is going on in that Working Group, you will have to come in person to RIPE 79 in October in Rotterdam, NL. See you there!

A note from the editors: RIPE-javik may be over, but not done

Thus concludes our RIPE 78 coverage, but not our investigation of the issues raised or our follow-up on the conversations started.

In the coming weeks and months, we’ll be returning to these topics frequently. We’ll deep-dive into issues on the blog, and we’re also preparing a podcast series, starting with interviews (conducted by Carsten) with prominent speakers and attendees at RIPE.

We’ve learned a ton this past week. But we’re also interested to hear your feedback: what did you find the most interesting? What new development are you the most excited for? We’re listening!

Topics: DNS, Open Source, Security, network security

The RIPE-javik logs: Day 4 - Part 2

Posted by Carsten Strotmann on 5/25/19 2:25 PM

carsten@menandmice:~$ cat ~/ripe/ripejavik-day4.txt | blog-publish

Because it was so filled with information, our coverage of Day 4 of RIPE78 has been divided into two parts. Read Part 1 here.  

DNS Working Group

During the DNS Working Group session, I gave two talks: the first on an overview of the DNS privacy software landscape, and then a “lightning” talk on unwind, a validating DNS recursive nameserver.

At the beginning of May 2019, I started a survey of software projects implementing the new DNS privacy protocols: DNS-over-TLS (DoT) and DNS-over-HTTPS (DoH). My questions were:

  • Are there software projects out there that use DoT and/or DoH?
  • In which year did development start?
  • Which programming languages have been used to implement the software?

The results of the survey were presented to the RIPE DNS Working Group and are as follows:

  • Of the 46 projects I found on GitLab and GitHub, 21 implement DoT, 32 support DoH, and 7 projects support both protocols.
  • The majority (29) of the projects implemented the new protocols in 2018 and there are 9 new projects this year. At the moment, I am finding 1-2 new projects every month.
  • A large number of newly created projects use the Go programming language (17), while most established projects that added DoT/DoH to their products use C/C++ (15). There were also projects using Rust, Python, Ruby, Java and JavaScript (via NodeJS).

I was also interested in the liveliness of the projects and checked whether there was any activity in each project over the last half a year, e.g. new code check-ins or issue tracker activity. The majority of projects are active (32) while some are dormant (14). The complete list of projects can be found here.

In the second talk, I provided some information on "unwind", a DNSSEC validating resolver for laptops running OpenBSD. For mobile computers, it is a challenge to get secure DNS name resolution, as most DNS resolvers in wireless networks don't do DNSSEC validation and are not trustworthy. Unwind implements a DNS resolver that runs on the local machine, listens on the loopback IP addresses, and either does direct DNS resolution into the Internet or forwards the request to a trusted resolver via DoT or classic DNS (UDP/TCP).

Many WiFi networks have a captive portal that prevents direct access to the Internet, and therefore to the DNS of the Internet. unwind can be configured to detect such a situation and will switch to the DHCP supplied resolvers, with the sole purpose of getting through the portal. Once the direct access to the Internet is available, unwind switches back to secure DNS communication.

DNSSEC keytags

In the next presentation, Roland van Rijswijk-Deij (NLNetLabs) presented his research on the DNSSEC keytags he found in the OpenINTEL dataset.

Every DNSSEC key has a 16-bit number (between 0 and 65535) that helps DNS resolvers find the correct key in a DNSSEC signed zonefile. The keytags are generated by applying a simple and fast algorithm (first standardized in RFC 2535).

In 2016, Roy Arends from ICANN already noted that the keytag numbers are not evenly distributed. This is because RSA DNSSEC keys have a structure, and some parts of the key are not random. Roland took the large data collection he has in OpenINTEL and looked at the RSA DNSSEC keys seen there.

The real world data confirms the observations from the initial experiments with DNSSEC keytags: for RSA DNSSEC keys, some keytags are never generated, and the numbers follow a structure. Also, the crypto library used to generate the RSA keys influences the keytags, as OpenSSL has some safeguards for weak keys that other libraries don't implement.

Next, Roland looked at whether there are keytag collisions, where the same keytag appears in a zone twice or more for different keys. He found very few collisions, actually fewer than predicted by theoretical probability.

Then he tested what would happen if keytags were generated by a different algorithm that guarantees uniform distribution of the numbers (in this case CRC16). It turns out there would be more collisions (still very few, but more nonetheless). Possibly, and without aiming to, the authors of the DNSSEC RFC chose a better algorithm.

RIPE-javik winding down

This concludes the second to last of our daily reports on RIPE78, but by no means does it signal the end of our RIPE-related coverage. Stay tuned for Day 5 tomorrow, and much more deep industry talk in the weeks to come.

Topics: DNSSEC, DNS, DNS privacy, DNS-over-HTTPS, unwind

The RIPE-javik logs: Day 4 - Part 1

Posted by Carsten Strotmann on 5/24/19 12:03 PM

carsten@menandmice:~$ cat ~/ripe/ripejavik-day4.txt | blog-publish

Day 4 of RIPE78 was so jam-packed, we had to split it in two. Here’s Part 1!

IPv6 Working Group

Day 4 of RIPE 78 started with the IPv6 working group. Geoff Huston discussed his measurement engine, which uses "ads" delivered to browsers to look into the reliability of IPv6 connections. The IPv6 failure rate has gone down from 4% in early 2017 to 1.4% now. Somewhat better, but still pretty bad.

Geoff found out that mobile networks deploying 464XLAT usually have more stable and reliable IPv6 than others using NAT64/DNS64 or other stateful IPv4-to-IPv6 translation mechanisms. IPv6 reliability appears to be exceptionally bad in Vietnam, with a 6-10% failure rate.

Because of the Happy Eyeballs implementations in browsers, end users possibly don't notice the breakage except for a slight delay in establishing the connection. This is both good and bad: while the users are shielded from experiencing the issues in their ISP’s networks, the provider is also not incentivized to fix the issues. (As “it works”). Other countries with non-optimal IPv6 networks are Panama, Venezuela, Morocco, Bangladesh, and Turkey. Even China, with its experience in IPv6 networking, has a higher than average failure rate.

Another artifact Geoff found during his research is the fact that some networks route their IPv6 traffic differently (and often worse) than their IPv4 traffic. For a period between November and December 2016, all IPv6 traffic from and to India was routed via networks in Great Britain.

In the next talk, Enno Rey and Christopher Werny from ENRW shared their experience with IPv6 on WiFi hotspots. They are working on a project to provide IPv6 on up to 3,000 wifi hotspots in supermarkets and shopping malls all across Europe. After their evaluation of IPv6 support in common applications used by customers of these hotspots, they decided to deploy IPv6-only with NAT64/DNS64. (WLAN 100% IPv6, IPv6 to IPv4 translation at the gateway to the Internet.)

MulticastDNS and other multicast IPv6 communication are a problem for wireless networks, as the access point needs to distribute the multicast message to all clients in range, and needs to use the oldest WiFi protocol to be able to reach legacy clients. Using the older wifi protocols blocks the air for other traffic for a longer time. Enno and Christopher recommend tuning and throttling multicast traffic in IPv6 enabled WiFi networks to minimize this effect. The IETF is aware of the issues and is working on adjustments to the IPv6 protocol family to make IPv6 more WiFi-friendly.

IoT

Over in the IoT working-group, Jan Zorz reported on his attempt to build a "smart home" and gave insight into his design choices. Being an engineer, and also because of privacy concerns, he does not want to use "off the shelf" smart home devices that send sensitive data into the cloud and whose functionality is dictated by the vendor.

So Jan started to build his own smart devices. As the central management hub, he first started his experiments with a Raspberry Pi, but will switch to a more powerful 64bit x86 desktop mini-PC for production. Jan reported that he did not initially really know what he wanted and would expect from a "smart home" system: that insight developed over time while experimenting with different smart devices.

He encourages everyone to do some experimentation before deciding on a particular smart home technology. If you want to hear about the differences between Z-Wave vs. Zigbee, or which home automation software might be the best, have a look at the video recording of his talk.

Next up on the stage was Jelte Janssen from SIDN Labs talking about the SPIN ("Security and Privacy for In-home Networks”) project. The project is developing software tools that help end users to get insight into the network communication of IoT devices in the home and enable the user to better protect the home network. One tool from the SPIN project is the traffic monitor that shows DNS queries and data traffic in the local network, showing graphically to whom the IoT devices talk.

The goal is to get the SPIN tools in the default install of CPE (customer premise equipment, such as home routers) devices. In the same talk, Peter Steinhäuser from Swiss CPE Firmware developer Embedd reported on his company’s work on integrating SPIN in OpenWRT (a popular open source home router firmware based on Linux).

SIDN Labs has installation instructions for people who would like to test-drive SPIN on their existing OpenWRT based routers. Please note that SPIN is still in development and has some rough edges. However, the project would be happy to get feedback (and pull requests) from actual users.

Part 2 coming soon

Watch this space for the second part of our day 4 coverage on RIPE 78.

Topics: IPv6, IPv4, DNS, IoT

The RIPE-javik logs: Day 2

Posted by Carsten Strotmann on 5/22/19 8:31 AM


carsten@menandmice:~$ cat ~/ripe/ripejavik-day2.txt | blog-publish

The second day of RIPE 78 started with the plenary, and three presentations on the topic of Distributed Denial of Service (DDoS).

DDoS

DDoS attacks are an increasing risk on the Internet. Mattijs Jonker from the University of Twente explained how DDoS attacks work. His research has revealed that many businesses have all their Internet services (website, mailserver, etc.) in a single network. In case of a DDoS attack, all services are impacted. He counted 31 thousand websites, 3.5 thousand mailservers, and 323 DNS servers that are on a single network and would suffer in case of an attack. An alternative IP address from a different network (autonomous system/AS) would make the services more resilient.

Matthias Wichtlhuber from the German Internet Exchange DE-CIX found that DDoS attackers only use certain protocols for their amplification attacks:

  • unspecified (Port 0)
  • NTP (Port 123)
  • LDAP (Port 389)
  • DNS (Port 53)
  • Chargen (Port 19)
  • Memcache (Port 11211)

Filtering these ports (in transport networks) will stop most DDoS attacks. The problem is that most ISPs cannot do fine-grained filtering. Most can only filter on networks or IP addresses, which blocks all traffic from or to a certain machine. DE-CIX has developed a new fine-grained black-holing system for DDoS attacks that is currently in beta testing.

Koen van Hove, also from the University of Twente, presented the DDOS clearinghouse: a project to collect data of DDoS attacks in a central place. The aim is to be able to research DDoS attacks and develop fast responses to them. The DDoS clearinghouse collects network measurements, identifies DDoS attacks across networks with unique fingerprints, and stores this data in a database (DDoSDB). From the database, attack information and metadata can be retrieved to help users feed fingerprint signatures into their network systems to stop DDoS attacks.

DNS

After the morning break, the main topic was DNS. David Huberman from ICANN discussed the root server system. After talking about the history of the DNS root server system, he explained that there has been no process so far for selecting new root server operators.

With over 1,120 root server instances in the world, 340 of which are in the RIPE region, the root server system is stable and there is currently no need to add additional root server operators beyond the 12 that run the 13 logical root server addresses. ICANN is now working on a defined governance model for the root server system.

OpenINTEL

Next on the stage was Roland van Rijswijk (NLnet Labs) presenting the OpenINTEL project he has contributed to. OpenINTEL is a massive active measurement system that sends 218 million DNS queries per day from several vantage points on the Internet, resolving a defined set of DNS names. The results are collected in a big database (Big Data helps to get research funds these days), which contains 3.1 trillion results since the start of the project in 2015.

The OpenINTEL system allows researchers to search for various kinds of interesting data: parent-child TTL mismatches, distribution of authoritative DNS servers across different AS networks, or even silly things stored in DNS TXT records (like funny IPv6 addresses or private cryptographic keys). The project can be found at https://openintel.nl.

KSK roll

As always, Geoff Huston (APNIC) delivered a highly entertaining talk, this time about the KSK roll in October 2018.

Officially, there was no impact seen for DNSSEC validating resolvers. But some operators, like EIR in Ireland, missed all notices about the roll in the two years leading up to it and failed to change the trust anchor of their DNS resolvers, which led to a full-day outage of their DNS resolver services. Other smaller operators were affected as well, some of which fixed the issue by disabling DNSSEC. All except two have re-enabled DNSSEC after fixing their DNS resolver configurations. Geoff also noted that the DNSSEC trust state signaling (RFC 6975 and RFC 8145) does not work reliably for detecting broken KSK rolls in the root zone.
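As a side note for BIND operators: the usual safeguard against being caught out by a KSK roll is letting the server manage the trust anchor itself, so it tracks RFC 5011 rollovers automatically. A minimal sketch, assuming a reasonably recent BIND 9:

    options {
        # use the built-in root trust anchor and follow RFC 5011 key rollovers
        dnssec-validation auto;
    };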

Migration from IPv4 to IPv6

In "Get Ready for Mixed World: Economic Factors Affecting IPv6 Deployment", Brenden Kürbis and Milton Mueller from the Georgia Institute of Technology talked about the economics behind network migration from IPv4 to IPv6.

The problem with IPv6 is that it is not possible to switch off IPv4 right away. Instead, IPv4 must be kept enabled for some amount of time (dual stack deployment). The cost generated by IPv4 depletion will stay, and the cost of introducing IPv6 comes on top. Only after some years will the cost benefits be visible. Depending on the growth pattern of the company and its networks, the first cost savings can appear after as little as 4 to 10 years. Larger companies will benefit more from IPv6, while smaller companies will not see economic benefits. In the following Q&A session, people from the audience challenged some of the assumptions in the research that generated this report.

DNS flag day

In the last DNS talk of the day, Petr Špaček from CZ.NIC and Ondřej Surý from ISC gave some insight into the DNS flag day in February 2019.

DNS vendors (BIND 9, Knot, PowerDNS, Unbound, and others) and large DNS resolver operators (Google, Cloudflare, Quad9, etc.) disabled workarounds for broken EDNS implementations. The workarounds had been developed to cope with DNS servers on the Internet that had faulty implementations of the DNS protocol. However, because the workarounds existed, the operators of these faulty servers had no motivation to fix their systems, while the cost of developing and maintaining the workarounds fell to the vendors of the DNS products.

For the February 2019 flag day, there was an estimated breakage of 5.68% of all DNS servers. Two large DNS operators were responsible for 66% of this breakage. The flag day was considered a success, as the pressure generated compelled the operators to fix their systems, and no other significant breakage was reported on that day.

Motivated by the success of this first flag day, the DNS server vendors plan another in 2020. No exact date has been set at the moment. On the next flag day, new DNS software releases will change the default settings for EDNS buffer size from today's 4096 bytes to a value around 1220 bytes. The goal is to prevent fragmentation of IP packets, which is known to be broken in some networks and can be a security risk. For this change, authoritative servers and DNS resolvers must be able to operate over TCP in addition to UDP. The main problem is misconfigured firewalls that block DNS over port 53/TCP.
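If you want to get ahead of that change, the knobs already exist today; for example, a BIND operator might cap the EDNS buffer sizes and then verify that queries over TCP make it through the firewalls (the server name and the exact value below are only illustrative):

    options {
        edns-udp-size 1232;   # buffer size advertised in outgoing queries
        max-udp-size 1232;    # largest UDP response this server will send
    };

    # confirm the server also answers over TCP
    dig +tcp @ns1.example.com example.com SOA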

The flag day website will be updated with detailed information about the date and will include online tests so that DNS administrators can test their systems.

More tomorrow!

RIPE 78 is a busy event, with much more going on than we were able to report here. Do visit the session archives to check the other presentations - there are plenty more good talks to dig into. We’ll be back with more RIPE coverage tomorrow!

Topics: IPv6, IPv4, DNS, DDoS, RIPE 78, OpenINTEL, KSK roll

Why follow Men & Mice?

The Men & Mice blog publishes educational, informational, as well as product-related material for everyone and anyone interested in IP Address Management, DNS, DHCP, IPv6, DNSSEC and more.
