Report from IETF 90 in Toronto 2014
The IETF (Internet Engineering Task Force), the body that develops new Internet standards, met in Toronto in July 2014.
Mr. Carsten Strotmann from the Men & Mice Services team, who attended IETF 90 in Toronto online, gives an overview of interesting developments from the working groups inside the IETF.
Hear more on:
- new RFCs that have been published since the last IETF meeting in March 2014
Fetch the slides, webinar recording and link collection.
Report from RIPE 68 in Warsaw, Poland
A RIPE Meeting is a five-day event where Internet Service Providers (ISPs), network operators and other interested parties from all over the world gather.
In this webinar, Carsten Strotmann from the Men & Mice Services team reports about what was new at the RIPE 68 meeting.
Hear what he had to say on:
- Amplification DDoS Attacks – Defenses for Vulnerable Protocols
- News from the DNS-OARC meeting (DNS measurements, open resolver stats)
- Selective Blackholing: Cheap & Effective DDoS Damage Control
- Strengthening the Internet Against Pervasive Monitoring
- What Went Wrong With IPv6?
- IPv6 troubleshooting procedures for helpdesks
- Using DDoS to Trace the Source of a DDoS Attack
- Measuring DNSSEC from the End User Perspective
- Google DNS Hijacking in Turkey
- The Rise and Fall of BIND 10
- Knot DNS Update – DNSSEC and beyond
- Bundy-DNS – the new life of BIND 10
Have a look at the slides and recording from the webinar to learn more.
Men & Mice, a leading provider of DDI solutions, announces the release of the Men & Mice Suite version 6.6 along with added functionality for the Men & Mice Appliances.
The new release focuses on usability enhancements and DNS security features, further ensuring that the Men & Mice Suite retains its position as one of the most reliable and user-friendly solutions available.
Highlights in version 6.6:
Utilization of static subnets displayed in the Management Console and the Web UI
Real-time utilization of static subnets is now displayed in the user interfaces, allowing users and administrators to see at a glance what percentage of each subnet is in use. The utilization information can be copied from the console for easy reporting, and users can also sort and filter by utilization. Utilization filters can be combined with other filters so that a user can, for example, get a list of all subnets of a specific size, or of a specific type, that are more than 85% utilized.
With the addition of Smart Filters, one of the most popular features of the Men & Mice Suite has become even more powerful. Users can save filters and place them in "smart" folders, and can right-click a filter to change its name or its filter statement. Filters created by the "administrator" user are global, i.e. visible to all users.
Support for RPZ (DNS Firewall)
The Men & Mice Suite now supports the Response Policy Zone (RPZ) framework in BIND, which is the underlying mechanism of DNS Firewalls. Administrators can create and define RPZ zones with the Men & Mice Suite or, via the tool, configure the DNS servers to subscribe to RPZ feeds from trusted sources.
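For orientation, a minimal RPZ setup in BIND itself (outside the Suite) looks roughly like the following sketch; the zone name rpz.example.com and file name are placeholders, not part of any product:

```
// named.conf fragment (illustrative): enable a local response-policy zone
options {
    response-policy { zone "rpz.example.com"; };
};

zone "rpz.example.com" {
    type master;
    file "rpz.example.com.db";  // holds the policy records that rewrite answers
};
```

A record inside the policy zone then maps a queried name to an action, for example a CNAME to "." to return NXDOMAIN for a blocked name.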
SNMP Profiles and SNMPv3
Multiple SNMP profiles can now be created. This allows enterprises with complex networks that use the Men & Mice Suite to pull discovery information from routers that belong to different security realms to create a profile for each realm. With the addition of SNMPv3, profiles can be for version 1, 2c or 3 and can contain different settings, such as authentication and community strings.
Appliance diagnostic access
The Men & Mice Appliances, both DNS/DHCP and Caching, now have a read-only diagnostic shell access that can be used to run troubleshooting commands (such as dig, drill, etc.) and to gather information and logs from the appliances. The access is read-only so no changes can be made to the configuration or data on the appliances using the diagnostic access.
Backup and restore of appliances
The Men & Mice Central now stores a backup of the full configuration and data of all managed appliances. Full backups are taken daily and incremental backups are taken each time a change is made on an appliance. If an appliance becomes unavailable for any reason, a new appliance can be configured with the same IP address, and all configuration, including DNS and DHCP data, network configuration and other settings, is restored from the backup, making the new appliance identical to the previous one.
By David Beck, a Men & Mice trainer and course developer
Delivering IP packets within a subnet (on-link) is different from delivering between subnets (off-link). This article examines those differences for unicast, multicast, and anycast destination addresses. The goal is to shine a light on the differences for anycasts and to explain on-link anycasting. Everything is applicable to both IPv4 and IPv6, with one exception that is clearly noted. However, that difference is why this is fundamentally an IPv6 article. The IPv6 terminology "link" will be used in lieu of "subnet." "Node" will be used in lieu of "host."
A unicast address uniquely identifies one interface on one node. It is used to deliver a packet to only one interface. It is the type of address that initially comes to mind when one hears "IP address." A sender with a unicast-destined packet looks up the packet's destination address in its routing table and selects the longest match.
The longest-match entry may indicate that the destination is on-link (OL). For an OL destination the sender uses the data link layer (DLL) protocol, e.g. Ethernet, to deliver the packet to the final destination. Alternatively, the longest-match may indicate that the destination is off-link (XL, external-link). The routing table entry for an XL destination includes an OL router that is closer to the destination. The sender uses the DLL protocol to deliver the packet to the router. The router repeats the same process sending the packet toward the final destination.
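The longest-match selection described above can be sketched in a few lines of Python using the standard ipaddress module; the routing-table entries below are invented purely for illustration:

```python
import ipaddress

# Illustrative routing table: prefix -> (on-link marker, next-hop router or None)
routes = {
    ipaddress.ip_network("192.0.2.0/24"):    ("OL", None),            # on-link
    ipaddress.ip_network("198.51.100.0/24"): ("XL", "192.0.2.1"),     # via a router
    ipaddress.ip_network("0.0.0.0/0"):       ("XL", "192.0.2.254"),   # default route
}

def lookup(dest: str):
    """Return the longest-matching prefix and its entry for a destination address."""
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return best, routes[best]

# 192.0.2.7 matches both 192.0.2.0/24 and 0.0.0.0/0; the /24 is longer, so the
# destination is on-link. 203.0.113.9 only matches the default route, so the
# packet is handed to the router 192.0.2.254.
```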
An internet is two or more links (networks) joined by a router. It is the need to deliver unicast packets to XL nodes that spurred the creation of internet protocols such as IP. If all destinations were OL, packet delivery could be done with only the DLL protocol and DLL addresses. (Note that the Internet is a huge internet with links throughout the world joined by routers.)
A multicast address identifies multiple interfaces on multiple nodes. A packet is delivered to all identified interfaces. In both IPv4 and IPv6, multicast addresses are distinct from unicasts and easily identifiable. IPv4 multicasts are 224.0.0.0/4 (addresses starting with 224-239). IPv6 multicasts are ff00::/8. Multicasting was not always a part of IP; it was added at the end of the 1980s.
Delivery of OL IP multicasts relies on the capability of the underlying DLL protocol to support multicasting, or at least broadcasting. Happily, Ethernet and 802.11 wireless have multicasting functionality, and there are multicast MAC addresses. This makes OL multicast delivery simple. IP multicast addresses are mapped to MAC multicast addresses through a formula, defined for IPv4 in the RFC "Host Extensions for IP Multicasting" (http://www.rfc-editor.org/rfc/rfc1112.txt) and for IPv6 in the RFC "Transmission of IPv6 Packets over Ethernet Networks" (http://www.rfc-editor.org/rfc/rfc2464.txt). A node assigned a multicast IP address configures its network interface to listen for the corresponding MAC multicast address. When a packet is sent to an OL IP multicast address, it is encapsulated in a frame destined for the corresponding MAC multicast. All nodes listening for that MAC address receive and process the frame and the packet. OL multicasting is easily implemented and widely used.
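The two mapping formulas are simple enough to sketch in Python: RFC 1112 copies the low 23 bits of the IPv4 group address behind the fixed prefix 01:00:5e, and RFC 2464 appends the last 32 bits of the IPv6 address to the fixed prefix 33:33.

```python
import ipaddress

def ipv4_multicast_mac(group: str) -> str:
    """RFC 1112 mapping: 01:00:5e + low 23 bits of the IPv4 multicast address."""
    low23 = int(ipaddress.IPv4Address(group)) & 0x7FFFFF
    return "01:00:5e:" + ":".join(f"{b:02x}" for b in low23.to_bytes(3, "big"))

def ipv6_multicast_mac(group: str) -> str:
    """RFC 2464 mapping: 33:33 + last 32 bits of the IPv6 multicast address."""
    low32 = int(ipaddress.IPv6Address(group)) & 0xFFFFFFFF
    return "33:33:" + ":".join(f"{b:02x}" for b in low32.to_bytes(4, "big"))

# The mDNS group 224.0.0.251 maps to 01:00:5e:00:00:fb;
# its IPv6 counterpart ff02::fb maps to 33:33:00:00:00:fb.
```

Because only 23 of the 28 significant IPv4 group bits survive the mapping, 32 different IPv4 multicast addresses share each MAC address, which is why the IP layer still filters received multicasts.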
Both IPv4 and IPv6 routing protocols rely on OL multicasting. A router sends one packet to notify all others of routing protocol information, such as a network topology change. (OL delivery of IP multicasts, with underlying DLL protocols that don't support multicasting or broadcasting, can be very challenging. That little nightmare is happily well outside the scope of this article.)
XL multicasting, with a multicast address assigned to nodes on different links, is far more challenging. A special multicast routing protocol must be implemented. The sender generates one packet; multicast routers must duplicate the packet to deliver it to nodes on different links. XL multicasting is not widely used. The vast majority of organizations don't implement multicast routing protocols, and such protocols are not used in the open Internet.
Now we're reaching the heart of this article.
Like a multicast, an anycast is an address assigned to multiple interfaces on multiple nodes. Unlike a multicast, an anycast packet is delivered to only one node. The sender doesn't care which node receives the packet; all the destinations are equivalent.
Like multicasting, there is a wide chasm between OL and XL anycasting. For multicasting, OL is easy and widely used. XL multicasting is challenging and rare. It is the opposite for anycasting. XL anycasting is easy.
XL anycasts are used for root and TLD (Top Level Domain) DNS servers. For example, the F root server (f.root-servers.net) has the IPv4 address 192.5.5.241. The address is an anycast. It is currently assigned to fifty-five DNS servers throughout the world (http://www.isc.org/f-root/). For DNS, all these servers are identical. When a packet is sent to 192.5.5.241, the natural process of unicast routing delivers it to only one node (one DNS server). Since all the servers have the same information, it is unimportant which is reached. In order for administrators to manage individual servers, each also has a unicast address, but that address is not intended for answering DNS queries.
XL anycast addresses are indistinguishable from unicast addresses. When a unicast address is assigned to a second node, the address becomes an anycast. The various nodes assigned the address don't know that it is an anycast and not a unicast. No special handling is required by nodes assigned an anycast address. No special handling is required by a sender generating a packet to an anycast destination. There is no protocol supporting anycasts. None is needed. Normal unicast routing handles delivery of XL anycasts. The only requirements are that the nodes sharing an address are on different links, and routing is properly configured. There isn't even an RFC defining the technical specifics for XL anycasting. Note that several RFCs discuss anycasting. For example the RFC "Operation of Anycast Services" is a Best Current Practices (BCP) document (http://www.rfc-editor.org/rfc/rfc4786.txt).
While XL anycasting is found in IPv4 and IPv6, OL anycasting is only implemented in IPv6. Nodes on the same link share an address. A packet addressed to an OL anycast is still delivered to only one interface. However, because the nodes are on the same link, routing can't handle delivery. Unlike XL anycasting, OL anycasting requires a technical specification. RFC "IP Version 6 Addressing Architecture" (http://www.rfc-editor.org/rfc/rfc4291.txt) specifies OL anycasting.
A node assigned an OL anycast sends Neighbor Advertisements (NAs). The NAs associate the anycast address with the node's own DLL unicast address. The NAs are fundamentally the same as the node would send for an assigned unicast address. Mapping an OL anycast to the DLL unicast of a node was a fundamental implementation decision taken by the IPv6 designers. Without prejudice as to whether it was a good or bad decision, it is noteworthy that there were other options. For example, First Hop Redundancy Protocols (FHRPs) essentially implement an OL anycast address, but with a completely different approach. The IETF-standardized FHRP is specified in the RFC "Virtual Router Redundancy Protocol (VRRP) Version 3 for IPv4 and IPv6" (http://www.rfc-editor.org/rfc/rfc5798.txt).
By definition, an anycast address is assigned to multiple nodes. Each node assigned an OL anycast sends NAs associating the address with its own DLL unicast. These NAs are essentially competing with each other. Other nodes on the link associate the OL anycast with one DLL unicast address, i.e. with one of the nodes assigned the OL anycast. If three nodes on an Ethernet share an anycast address, another OL node will use the unicast MAC address of one of the three. When a packet is sent to the anycast address, the DLL delivers it to one node. In this way, the anycast requirement of delivery to a single node is met.
It is worth considering why DLL unicasting is used for OL anycast delivery. While it isn't ideal, it is necessary because DLL protocols don't have anycast capabilities. If, for example, Ethernet had anycast addresses and anycast functionality, an OL anycast could be mapped to a MAC anycast. OL anycasting would then be as trivial as OL unicasting and OL multicasting. Since DLL anycasting doesn't exist, OL anycasts must be mapped to what DLLs provide: unicasts, broadcasts, or multicasts. DLL broadcasting and multicasting cannot be used, because they violate the anycast requirement of delivery to only one node. So delivery of an OL anycast is necessarily implemented by a DLL unicast.
XL anycast addressing requires no special handling. OL anycast addressing does. A node assigned an OL anycast must be explicitly informed that the address is an anycast, requiring IPv6 systems to implement a technique to indicate anycasts. The OL anycast indication suppresses Duplicate Address Detection (DAD). When an IPv6 unicast is assigned, DAD tests whether the address is already assigned to another node on the link. The address isn't used if DAD reports duplication. Anycasts are purposely assigned to multiple nodes, so DAD is disabled for OL anycasts. Additionally, OL anycast NAs are sent with a slight delay, and, more importantly, with the Override flag cleared in the NA message.
A set Override flag tells the receiver to replace a previously cached entry for the advertised IPv6 address. For a unicast NA, the Override flag is set, so that a receiver uses the new advertisement. This makes sense since the unicast NA is coming from the same node that sent the older cached information. Newer unicast information is better. For an OL anycast, each node assigned the address sends an NA with its own DLL address. The cleared Override flag in these NAs means a receiver will use the DLL address from the first NA that arrives. Older anycast information is better.
The cleared Override flag prevents oscillating between different DLL addresses for the anycast destination. It also means that, at any given time, different senders on a link may be reaching different nodes that share the anycast address. This can be viewed positively, as it provides load balancing. However, it can't be controlled, and all senders could equally be sending to one DLL address (to one node). Unfortunately, if a cached OL anycast destination becomes unreachable, it can't be replaced until the cached entry times out. So for the unlucky nodes on the link with the unreachable address cached, the OL anycast is unreachable, while for all other nodes on the link the anycast is reachable.
So unlike an XL anycast, an OL anycast requires both special configuration and special handling on the nodes assigned the address. Neither OL nor XL anycasts require special handling by senders.
RFCs define IPv6 OL anycast addresses. Being defined distinguishes them from other anycasts, which are purposely indistinguishable from unicasts. The most common OL anycast is the Subnet-Router anycast (SRA). Every link has an SRA address, identified by all interface-identifier bits being set to 0. For the link 2001:db8:cafe:fee::/64, the SRA is 2001:db8:cafe:fee::. The SRA is defined in RFC 4291. The RFC "Reserved IPv6 Subnet Anycast Addresses" (http://www.rfc-editor.org/rfc/rfc2526.txt) reserves the highest 128 addresses on each link for OL anycasts.
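Since the SRA is simply the link prefix with a zero interface identifier, it is just the network address of the prefix and can be computed mechanically; a small Python sketch:

```python
import ipaddress

def subnet_router_anycast(prefix: str) -> str:
    """Return the Subnet-Router anycast for a link prefix: the address with
    all interface-identifier bits set to 0 (RFC 4291), i.e. the network
    address of the prefix."""
    net = ipaddress.IPv6Network(prefix, strict=True)
    return str(net.network_address)

# For the link 2001:db8:cafe:fee::/64 this yields 2001:db8:cafe:fee::
```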
Initially it may seem that the SRA would be an ideal way to implement an FHRP. However, the widely used FHRPs (HSRP, VRRP, GLBP and CARP) all support IPv6 without using the SRA. Where is the SRA used? Even saying it has not been widely implemented overstates its current use. Depending on feedback, perhaps another article will delve deeper into the SRA.
So it ends on a whimper. IPv6 has added OL anycasting, but it is unused by any significant application, and its usefulness is questionable.
IP unicasting, both OL (on-link) and XL (external-link, off-link), is easy and widely implemented.
OL IP multicasting is easy and common. XL multicasting is more involved and uncommon.
OL IP anycasting is extremely rarely used and limited to IPv6. XL anycasting is easy, and although not common, widely used for DNS.
Texas Woman's University (TWU) is a major multi-campus U.S. public university, primarily for women. Its campuses in Denton, Dallas and Houston are joined by an e-learning campus offering innovative online degree programs in business, education and general studies.
This University needed better control and increased flexibility for a variety of network administration tasks, with the immediate need being a smooth transition from Windows to Open Source DHCP.
Managing the Transition to Open Source DHCP: “A Major Selling Point”
“I am a proponent of open source technology,” said the College’s Lead Network Administrator, “and converting to Linux had been on my list of goals for a long while. I’d built a test Linux DHCP server, but I ran into some difficulties modeling the database migration.” So he did what any network administrator might do: he went looking for help online. “Basically I was trying to convert things cleanly and completely from Windows to Linux, and I was looking for a tool that would help me do that. I posted my needs on message boards and got a recommendation from another network administrator: “Try Men & Mice”. Not only did it help him make a smooth transition to Linux, but he adds, “I was able to get much better control over my Windows DHCP server right away. We are now running all of the DNS servers through the Men & Mice management console, as well. I am very happy with it.”
While helping the University accomplish its transition to Open Source DHCP, Men & Mice also expedited its ROI by helping it get a handle on IP address management, which was still being done in an Excel spreadsheet. “Men & Mice Really Simplifies the IPAM Management Piece.”
In addition, the robust API included with Men & Mice was used to mitigate DNS coding errors and security concerns by automating DNS procedures that were not possible in the home-grown application previously in use. To shorten the ROI further, the University also used the tool to clean up stale PTR records in a minimal amount of time, which was “a huge benefit and time savings”.
“A Significant Benefit for us.”
From a network administrator’s perspective, success can be measured in a number of ways, and for the University Office of Technology, one of the most meaningful measures is user satisfaction. “We are here to facilitate people’s education. Our students are here to better their lives, and we are here to support them. It’s an important mission, and when technology problems interfere with that, people will let you know quickly. Since we installed the Men & Mice Suite, I haven’t heard a thing.”
Read the full case study on how TWU, with the Men & Mice Suite, put their focus on facilitating people’s education instead of mundane network management and troubleshooting tasks.
How DNS wildcards really work & how to prevent that DNS wildcard bite!
The domain name system includes a function called "DNS wildcards". DNS wildcards are created using special domain names in DNS zones, such as "*.example.com.". DNS wildcards look similar to Unix shell globbing, Windows command.com wildcards or regular expressions. However, DNS wildcards have their own rules.
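For illustration, a wildcard record in a zone file might look like this (the names and addresses below are made up for the example):

```
; zone-file fragment for example.com (illustrative)
*.example.com.    3600 IN A 192.0.2.10   ; answers for any otherwise-nonexistent name
www.example.com.  3600 IN A 192.0.2.20   ; an explicit record takes precedence over the wildcard
```

A query for anything.example.com would be answered from the wildcard, while www.example.com is answered from its own record; this precedence rule is one of the ways DNS wildcards differ from shell globbing.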
Mr. Carsten Strotmann from the Men & Mice Services team hosted a 30-minute webinar with a Q&A session at the end, where he explained how DNS wildcards really work and how to prevent that DNS wildcard bite!
Tailored for DNS administrators operating authoritative DNS servers with one or more zone files on Unix or Windows, as well as all those interested in the topic.
Have a look at the slides and recording from the webinar to learn more.
By Mr. Carsten Strotmann, one of the Men & Mice experts.
BIND 9.10 is the new version of the BIND 9 DNS server from ISC (not to be confused with BIND 10, which is a different DNS server product). We will report in a series of articles about the new features in BIND 9.10. The first beta version of BIND 9.10 was released this week and can be found at ftp://ftp.isc.org/isc/bind9/9.10.0b1/.
BIND 9.10 contains a new command-line tool to test DNSSEC installations. The tool is called delve and it works very much like the well-known dig, but with special DNSSEC validation powers.
delve checks the DNSSEC validation chain using the same code that is used by the BIND 9 DNS server itself. Compared with the DNSSEC testing function in dig +sigchase, delve is much closer to what really happens inside a DNS server.
1.1 A simple lookup
Without extra arguments, delve queries the local DNS server (taken from /etc/resolv.conf) for an IPv4 address (A) record at the given domain name. It tries to validate the answer received, and prints the result of the validation, the requested data and the RRSIG record (the DNSSEC signature) used to verify the data.
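An illustrative invocation (assuming the BIND 9.10 beta tools are installed and the domain, used elsewhere in this article, is signed):

```
delve www.dnsworkshop.org
```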
As with dig, resource record types and network classes can be given in almost any order on the command line. The switch +multi (for multiline) enables pretty printing: human-readable output that is neatly formatted for a 78-column screen.
1.3 Tracing DNSSEC validation
delve comes with a set of trace switches that can help troubleshoot DNSSEC validation issues. The first switch, +rtrace, prints the extra DNS lookups delve performs to validate the answer:
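An illustrative invocation (the domain is just an example):

```
delve dnsworkshop.org MX +rtrace
```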
In this example, in addition to the MX (Mail Exchanger) record, the DNSKEY record (DNSSEC public key) and the DS record (Delegation Signer) for dnsworkshop.org, as well as the DNSKEY and DS records for ORG and the DNSKEY for the root zone ".", have been requested. The trust anchor for the Internet root zone is compiled into delve and acts as the starting trust anchor for the validation.
The switch +mtrace prints the content of any additional DNS records that have been fetched for validation.
The switch +vtrace prints out the DNSSEC chain of validation.
delve is a very useful tool, not only for BIND 9 admins, but for everyone who needs to troubleshoot and fix DNS- and DNSSEC-related issues.
Men & Mice, a leading provider of DNS, DHCP and IP Address Management (IPAM) solutions, announces the release of the Men & Mice Suite version 6.5.
The new release focuses on providing operational security and the ability to expand customer infrastructure.
Traditionally the Men & Mice Suite has been deployed as an overlay management solution for core DNS and DHCP services. As more customers become reliant on the Men & Mice Suite for the automation and control of their critical network infrastructure, any potential downtime can affect provisioning systems and other automated processes that must operate without interruption. To address the need for this absolute reliability, version 6.5 of the Men & Mice Suite comes with even more complete High Availability functionality.
Cloud environments have become an important part of the enterprise network, and traditionally the visibility into the DDI component of the cloud has been limited. Version 6.5 of the Men & Mice Suite now enables customers to manage core infrastructure services in the OpenStack cloud environment as seamlessly and easily as they manage their internal networks.
Version 6.5 of the Men & Mice Suite enables customers to configure and run Men & Mice Suite (Central) in a HA mode. This means that multiple copies of the Men & Mice Central can be run simultaneously on the network, and at any given time one of them will be the active instance. If an active instance of the Men & Mice Suite fails or is taken down for any reason, one of the other instances will assume the active role. When that happens all clients, whether they be regular user interfaces or script APIs, will automatically fail over to the new Central. With the new HA setup customers can run their critical automation processes without fear of interruption from possible downtime.
Software Defined Networking (SDN) and Cloud stack solutions that act as an IaaS platform are increasingly becoming a common part of the enterprise infrastructure. The Men & Mice Suite version 6.5 contains integration with OpenStack, an open source project for service providers, enterprises, government agencies and academic institutions that want to build public or private clouds. Multiple teams within an organization, each with their own cloud instances and multiple networks and subnets, are faced with the problem of limited visibility into their cloud environment. Men & Mice integrates the software defined networks with the traditional networks that exist in the enterprise environment, enabling a global view into every aspect of the network infrastructure. The "good citizen" nature of the Men & Mice Suite continues to be preserved: OpenStack networks can be created and configured through the Suite, but the solution will also adapt to changes made outside of the Men & Mice Suite, whether through the Horizon UI or through the OpenStack API.
Additionally, changes to OpenStack networking can be done through the Men & Mice Suite SOAP API, which can utilize the authentication, authorization and activity logging in Men & Mice. The result is gaining the flexibility of a cloud environment while still retaining all the security and control possible through the Men & Mice Suite.
In this new release, the documentation and help have been moved from the operational manual format to a web-based format, ensuring that all users have access to the latest version of the help and documentation.
As in previous releases of the Men & Mice Suite, the new version contains various other enhancements that are intended to improve ease-of-use, stability and performance.
By Mr. Carsten Strotmann, one of the Men & Mice experts.
BIND 9 and how a security issue demonstrates quality
Recently ISC issued a security warning (CVE-2014-0591) for several BIND versions.
The issue is that BIND 9 detects inconsistent data while processing NSEC3 records and, rather than continue with the bad data (which could expose more serious security issues, especially when handling DNSSEC data), opts to terminate itself.
Shane Kerr of ISC described this behavior of BIND in the blog post "BIND 9's Security Record": "The manner in which BIND 9 reacts to software bugs is to terminate. While unpleasant for administrators, the idea is to avoid the system running in an invalid state and causing more damage."
ISC's Michael McNally gave some background information on the security issue on the BIND users mailing list. The issue was caused by a change in a fundamental operating system library, the "libc". The implementation of the memcpy function changed in a recent update of the glibc library used on Linux systems, and this implementation change made the bug visible. So far, the same bug has not been seen on other operating systems or with other libc implementations. However, that does not mean that these systems are safe, just that the security issue does not show (but might still be there).
I'm happy about how BIND 9 handles this issue (terminating instead of ignoring it). This way the administrator notices (one hopes) and updates to a fixed version of BIND 9, which is available as binary installer packages for RedHat, Debian and Solaris from Men & Mice.
What scares me is all the other software out there (open source or commercial) that might be affected by this bug, but does not have the security net that BIND 9 has.
There could be similar security issues lurking in other software products. Stay vigilant! Monitor your servers.
As developers, we should scan our code for this error pattern (memcpy vs. memmove).
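The error pattern is easy to demonstrate in any language. The sketch below mimics it in Python: a naive forward byte-by-byte copy (which a memcpy implementation is free to perform, since memcpy's contract forbids overlapping arguments) corrupts overlapping regions, while a snapshot-first copy (the memmove contract) handles them correctly.

```python
def naive_copy(buf: bytearray, dst: int, src: int, n: int) -> None:
    """memcpy-like: copies forward with no overlap handling. Unsafe when the
    source and destination regions overlap and dst > src, because already
    overwritten bytes get copied again."""
    for i in range(n):
        buf[dst + i] = buf[src + i]

def safe_copy(buf: bytearray, dst: int, src: int, n: int) -> None:
    """memmove-like: snapshots the source region first, so overlapping
    regions are copied correctly."""
    buf[dst:dst + n] = bytes(buf[src:src + n])

# Shifting the first six bytes of "abcdefgh" right by two within the same
# buffer: naive_copy smears the leading bytes through the overlap, while
# safe_copy preserves the original data.
```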
It is in this spirit we say,
simply but sincerely…
Thank you for your business
We wish you a happy holiday season,
and a new year of health,
happiness and prosperity.
Men & Mice staff