The Men & Mice Blog

Everything’s changed in DNS. Nothing’s different in DNS.

Posted by Greg Fazekas on 5/10/18 7:56 AM


The history of DNS (Domain Name System) starts with the earliest of early networked systems: ARPANET. DNS has often been characterized as the “phone book” of the internet. That analogy was, of course, invented in an era when phone books were a thing.

It may be more fitting to liken it to a phone company switchboard. Even in the earliest days of ARPANET, getting a new hostname onto the list required sending an email to the Stanford Research Institute (SRI), where the hosts.txt file was maintained. All internet hosts then updated their copy of hosts.txt twice a week via FTP. Twice a week... by FTP!

Interesting fact: whatever platform you’re using, chances are you can find a hosts file somewhere on your computer. This is a remnant of the early ARPANET days, when a simple static text file controlled the entirety of the network.
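The format hasn’t changed much either: one entry per line, an IP address followed by one or more hostnames, with `#` marking comments. A minimal sketch of the static lookup table that HOSTS.TXT-era resolution amounted to (the sample entries below are illustrative, not historical data):

```python
# A hosts file maps names to IP addresses, one entry per line,
# e.g. "127.0.0.1  localhost". This mirrors what early ARPANET
# resolution did with HOSTS.TXT: a static lookup table.
def parse_hosts(text):
    table = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and padding
        if not line:
            continue
        ip, *names = line.split()
        for name in names:
            table[name.lower()] = ip
    return table

sample = """
127.0.0.1   localhost
# an ARPANET-style entry (illustrative):
10.0.0.51   sri-nic arpa
"""
table = parse_hosts(sample)
print(table["sri-nic"])  # -> 10.0.0.51
```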

The Making of a Network (a.k.a. “everything has changed”)

As the networks grew, the need to wait became cumbersome, if not unbearable. Business was increasingly conducted outside of bank hours, and computers were moving data faster for us. So, why not use computing to handle IP assignments as well?

In 1983, the standard for DNS was accepted by the ARPANET community. By 1984, UC Berkeley’s open source Berkeley Software Distribution (BSD) had ported TCP/IP to Unix under a DARPA grant, making Unix a networked OS, and soon gave rise to the first version of the Berkeley Internet Name Domain (BIND). To this day, BIND serves as the de facto DNS software of the internet.

Thereafter, the Internet Engineering Task Force (IETF) was founded, and with it came new formal processes that have shaped the backend of the internet as we know it today.

Fast Forward To Now (a.k.a. “nothing’s different”)

You may be thinking: that’s interesting and all, but what does that have to do with my DNS network?

Well: DNS hasn’t changed much in the last four decades. Of course, the explosive growth of the internet has changed the ways we map, scale and secure our networks. But the fundamental operating principles of DNS haven’t changed since its inception: it’s still the switchboard of the internet. Instead of humans making requests through email, however, systems can call on DNS services, any time of day, to resolve a multitude of addresses at a time.
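For example, any modern program can resolve a name with a single call to the operating system’s resolver (no email to SRI required):

```python
import socket

# Programmatic name resolution: a process can query the resolver
# directly, at any time of day, with no human intermediary.
addr = socket.gethostbyname("localhost")
print(addr)  # typically 127.0.0.1
```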

The introduction of IPv6 (which has yet to be fully realized), and the dawn of cloud computing and IoT (Internet of Things), brought significantly increased device requests and IP traffic. However, all that has not changed what DNS does; only how it does it.

We’ve Seen It Change and Stay the Same

DNS has changed very little, but the way we utilize it has changed immensely. We’ve seen it firsthand: since the 1990s, Men & Mice has served enterprise companies with DNS, DHCP and IPAM solutions.

We proactively evolve our overlay network management solutions to meet the needs of enterprise customers, and now high growth IoT companies as well. (Note: the two are not mutually exclusive.)

We are working with perhaps the most fundamental building (scaling) block of the internet. Our expertise is focused on the importance of adaptation. Network infrastructures have become hybrid, or have moved to the cloud completely. Multitudes of DNS services and environments have come to market introducing greater choices, but also complexities for network managers.

Men & Mice and the future of DNS


Men & Mice has evolved its DNS, DHCP and IPAM solutions to cater to these changing environments. We adapted to become more flexible, so that our clients’ networks can migrate across network vendors more easily. We created a unified network management console to manage, in one place, all of the diverse platforms that make up a company’s network.

We’ve introduced new services, such as xDNS in 2017, to help companies manage all their external DNS. Likewise, we added deeper functionality with Microsoft Azure and Azure DNS for Microsoft customers with large domain portfolios.

We’ve streamlined our sales and customer journey processes to reflect the same ease of use customers experience in our software solutions. Get a Live Demo directly from our website, for example.

We continue to offer some of the most sought-after training courses for companies and individuals who wish to learn or sharpen their understanding of DNS, enabling them to significantly increase expertise levels across their teams.

Meet our team

Join us in Berlin on May 15th, for a special event with the Embassy of Iceland in Germany. We will discuss the “State of Network Management” and the new challenges of DNS, DHCP and IPAM.

Or, meet us at the Managed Service Hosting Summit, Cisco Live, Microsoft Inspire, VMworld and Microsoft Ignite in the coming months.

Interesting fact: Bob Metcalfe, who invented the Ethernet standard, predicted in 1995 that the internet would collapse within a year. He also envisioned an end to wireless technologies, with computers staying wired. To his credit, he did, as per his promise, eat his words, literally, after none of those things happened.


Topics: TechEd, DDI, DNS

Thinking of doing DNS better?

Posted by Men & Mice on 3/20/18 10:27 AM

I train, therefore I am

Or that’s what Descartes may have said if he’d been thinking his thoughts in 2018.

Mind you, this blog is not about thinking and it’s not about physical training either, like running or wife carrying or stuff like that. It’s more about training as in training for the mind. Learning useful things. Like how to configure BIND, debug DNS, figure out TSIGs or what DNSSEC can do for your network. Basically, the kind of training that helps you build a leaner, stronger, fitter network, and create the system resilience needed to deliver those constantly surging numbers of packets to their right destination, faster and more securely.


Getting DNS skills in sync

Since 1999, Men & Mice has been known for running effective and efficient DNS & BIND training courses worldwide. Previous offerings included open, public courses in a number of locations, as well as private on-site training on request.

Beginning in 2018, we are putting in a little extra effort and logging a few more air miles, making it much easier for you to attend, wherever you are. We’re extending our public offerings to new destinations, with upcoming courses scheduled in California, New York, Switzerland, England and Israel, and additional courses to be added as the year progresses. See the schedule at menandmice.com/training/

To get the hang of running a better network, sign up for the 3-day DNS & BIND Fundamentals, or take our most popular course and spend 5 days sinking your teeth deeper into the subject matter in DNS & BIND Week. A range of on-site training options is also on offer.

Reach out to Men & Mice Training to register for a course, ask questions, log comments, or to recommend additional locations for future public offerings.

In the meantime, check out the dates and feast your eyes on the list of topics covered by our hands-on courses, taught by DNS experts.

Happy training!

Topics: Men & Mice, DNS, BIND, DNS training

Version 8.3 – Faster, Leaner, Fitter DHCP

Posted by Johanna E. Van Schalkwyk on 1/11/18 11:16 AM

Doing DHCP

The beauty of DHCP is the speed at which it functions. Basically, DHCP (Dynamic Host Configuration Protocol) does what administrators can do manually, but DHCP just does it automatically, more efficiently, and in a fraction of the time.

Size can trump speed

Yet the bigger a network gets, the more DHCP servers and scopes are needed to dynamically assign, or lease, IP addresses and related IP information to network clients. The number of servers and scopes and the way the load is distributed and processed affect the speed at which networks can keep DHCP data fresh and IP leases available for use. On large networks, how efficiently DHCP lease data is documented, processed and synchronized becomes just as important as the initial matchmaking between DHCP clients and servers.

The relationship between DHCP client and server

DHCP does the hard work of handling communication between servers on a network and client computers trying to access that network. If the series of messages between a DHCP server and a client computer were illustrated as a conversation, it would probably look something like this.

(Illustration: a DHCP client-server conversation)

Mind you, at any given moment on a large network, hundreds, or even thousands, of such conversations can be occurring simultaneously. On top of that, the client computer sends its DHCPDISCOVER broadcast packet to all available servers, and all available servers can respond with a DHCPOFFER. The client is not programmed to be picky and always accepts the first offer it receives. Once they detect that their offers were not accepted, the other DHCP servers withdraw them. In short, there’s a whole lot of to-and-fro action behind the scenes that is invisible to network administrators and users, but still finds its way into DHCP servers’ lease history.
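That to-and-fro can be sketched in a few lines of Python. This is a simplified model, with plain function calls standing in for UDP broadcast packets, not a real DHCP implementation:

```python
# A minimal sketch of the DHCP "conversation" (discover/offer/request/ack),
# assuming simplified tuples rather than real network packets.
def dhcp_handshake(servers, free_pools):
    offers = []
    # DHCPDISCOVER is broadcast: every server with a free address may offer one
    for server in servers:
        pool = free_pools[server]
        if pool:
            offers.append((server, pool[0]))  # DHCPOFFER
    if not offers:
        return None
    # The client is not picky: it takes the first offer it receives
    chosen_server, ip = offers[0]
    free_pools[chosen_server].remove(ip)      # DHCPREQUEST / DHCPACK
    # The other servers implicitly withdraw their offers
    return chosen_server, ip

pools = {"dhcp1": ["10.0.0.10", "10.0.0.11"], "dhcp2": ["10.0.1.10"]}
print(dhcp_handshake(["dhcp1", "dhcp2"], pools))  # -> ('dhcp1', '10.0.0.10')
```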

To complicate matters – or simplify them – these DHCP client-server relationships, or leases, are mostly temporary arrangements. Both parties know the lease will end. The server will revoke the lease once it expires. The client, on the other hand, can attempt to keep the lease by renewing it, or start looking for another IP address lease once the old one has expired.

Apart from doing matchmaking between clients and servers, DHCP also ensures that each network client has a unique IP address and appropriate subnet masks. If two clients were to try and use the same IP address, neither of them would be able to communicate on the network.

These rotating relationships make the way DHCP lease data is documented, processed and synchronized all the more critical. If this is not done quickly and efficiently, the whole process of dynamically assigning IP addresses slows down, leaving DHCP clients, servers and, ultimately, network users frustrated and ineffective.

Making DHCP management faster, leaner and fitter

Once networks run to hundreds, or thousands of DHCP scopes and servers, one needs to re-assess the way DHCP data is processed, and develop ways to improve speed and efficiency. This is exactly what Men & Mice developers set out to achieve in Version 8.3 of the Men & Mice Suite.

DHCP optimizations in Version 8.3 include:

  • Reduced network traffic, especially between the Central server and a DHCP server controller 
  • Improved database performance when processing data from a DHCP server
  • Reduced load on a DHCP server while it is being synced

Optimizing processes in these areas has resulted in lightening the often heavy load on DHCP servers, making DHCP server management considerably faster and more efficient – and more pleasurable for the people in charge of keeping it all going, all the time.

To dig into the more technical aspects of these enhancements and get the lowdown on what this boost in DHCP performance and scalability could mean for you or your network, get in touch with one of our sales engineers to walk you through the details.


Topics: Men & Mice Suite, IPAM, DHCP, CLOUD, Akamai, Performance

Network Outages, Human Error and What You Can Do About It

Posted by Men & Mice on 12/18/17 7:14 PM

When your route leaks 

Human error. As far as mainstream reporting on network outages goes, it’s the less flamboyant sidekick to DDoS and other cyber attacks. But in terms of consequences, it’s just as effective.

Once again, at the beginning of November, large parts of the US found themselves unable to access the internet due to one small error: a misconfiguration at Level 3, an ISP (Internet Service Provider) whose infrastructure underpins other, bigger networks.

According to reports, the outage was the result of what is known as a “route leak”. In short, a route leak occurs when internet traffic is routed in inefficient, or simply wrong, directions due to incorrect information provided by one, or multiple, Autonomous Systems (ASes). ASes are generally used by ISPs to keep track of IP addresses and their network locations. Packets of data are routed between ASes, which use the Border Gateway Protocol (BGP) to establish and communicate the most efficient routes, so you can browse the whole internet, and not just the IP addresses on your particular ISP’s network.
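A tiny model shows why a leaked route redirects traffic: among matching prefixes, the most specific (longest) one wins. Real BGP best-path selection has many more tie-breakers, and the AS numbers and prefixes below are documentation examples, not data from any real incident:

```python
import ipaddress

# Simplified route selection: longest-prefix match only.
def best_route(dest, table):
    matches = [(net, next_hop) for net, next_hop in table if dest in net]
    # the most specific (longest) matching prefix wins
    return max(matches, key=lambda m: m[0].prefixlen)[1]

table = [(ipaddress.ip_network("203.0.113.0/24"), "AS64500 (legitimate)")]
dest = ipaddress.ip_address("203.0.113.42")
print(best_route(dest, table))  # -> AS64500 (legitimate)

# A leaked, more-specific announcement now attracts the same traffic:
table.append((ipaddress.ip_network("203.0.113.0/25"), "AS64501 (leak)"))
print(best_route(dest, table))  # -> AS64501 (leak)
```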

Route leaks can be malicious, in which case they’re referred to as “route hijacks” or “BGP hijacks”. But in this case, it seems the cause of the outage was nothing more spectacular than a simple employee blunder, when (as speculation goes) a Level 3/CenturyLink engineer made a policy change that was, in error, implemented on a single router while configuring BGP for an individual customer. This particular incident constitutes what the IETF defines as a Type 6 route leak, generally occurring when “an offending AS simply leaks its internal prefixes to one or more of its transit-provider ASes and/or ISP peers.”

Route leaks, small and large, are regular occurrences – part and parcel of the internet’s dependency on the basic BGP routing protocol, which is known to be insecure. Other recent high-impact route leaks include the so-called Google/Hathway leak in March 2015 and a misconfiguration at Telekom Malaysia in June 2015, which had a debilitating knock-on effect around the world.

To minimize the possibility of route leaks, ISPs use route filters that are supposed to catch any problems with the IP routes that peers and customers intend to use for the sending and receiving of packets of data.

Other ways of combating route leaks include origin validation, NTT’s peer locking and commercial solutions. Additionally, the IETF is in the process of drafting proposals on route leaks.

Factoring in the human element

Tools and solutions aside, Level 3’s unfortunate misconfiguration once again highlights the fact that, despite keeping a low profile in the news, human error still rules when it comes to causing common network outages.

In an industry focused on how to design, build and maintain machines and systems that enable interconnected entities to send and receive millions of packets of data efficiently every second of every day, it’s maybe not all that odd that the humans behind all of this activity become of secondary importance. Though, as technology advances and systems become more automated, small human errors such as misconfiguring a server prefix are likely to have ever larger knock-on effects. At increasing rates, such incidents will roll out like digital tsunamis across oceans, instead of only flooding a couple of small, inflatable IP pools in your backyard.

Boost IT best practices - focus on humans

So outside of general IT best practices, what can you do to help the humans on your team to avoid human error?

Just as with any network, human interaction is based on established relationships. And just as in any network, a weak link, or a breakdown in the lines of communication, can lead to an outage. Humans who have to operate in an atmosphere of unclear instructions, tasks, responsibilities and communication can become ineffective and anxious. This eats away at employee morale and workflow efficiency, and lays the groundwork for institutional inertia and the stalling of progress. At other times, a lack of defined task-setting and clear boundaries may result in employees showing initiative in the wrong places and at the wrong times.

To limit outages due to human error, just distributing a general set of best practices or relying on informally communicated guidelines among staff is simply not enough. While networking best practices always apply, the following four steps can be very effective in establishing the kind of human relationships needed to strengthen your network and optimize network availability.

 


1. Define

Draw up, and keep updated, a diagram not only of your network architecture (you do have one, don’t you?), but also make sure you have a workflow diagram for your teams: who is tasked with which responsibility and where does their action fit into the overall process? What are the expected outcomes? And what alternative plans and processes are in place if something goes awry? Most importantly, match tasks and responsibilities with well-defined role-based access management.

2. Communicate

Does everyone on your team, and collaborating teams, know who is responsible for what, when and where, and how the processes flow? Is this information centrally accessible and kept up to date? Clarity, structure and effective communication empower your team members to accept responsibility and show initiative within bounds.

3. Train

Does everyone on your team know what’s expected of them, and did they receive appropriate training to complete their assignments properly and responsibly? Do they have the appropriate resources available to do what they need to do efficiently? Without training and tools in place, unintentional accidents are simply so much more likely to occur.

4. Refresh

Don’t wait until team members run into trouble or run out of steam. Check in with each other regularly, and encourage a culture of knowledge sharing where individuals with different skill sets can have ample opportunity to develop new skills and understanding.


Finally

The saying goes, a chain is only as strong as its weakest link. The same goes for networks.

At a time in history when we have more technological checks and balances available than ever before, it turns out the weakest networking link is, too often, a human. While we’re running systems for humans by humans, we may as well put in the extra effort to help humans do what they do, better. Our networking systems will be so much stronger for it.

 


 

Topics: DDI, DDoS, network outages, IT best practices, IP address management

Secure Your DNS Across Multiple DNS Service Platforms with Men & Mice xDNS Redundancy

Posted by Men & Mice on 7/10/17 12:50 PM

DNS (Domain Name System) is the most critical aspect of any network’s availability. When DNS services are halted, or slowed down significantly, networks become inaccessible, leading to damaging losses in revenue and reputation for enterprises.

To ensure optimal network availability, many enterprises depend on top-tier managed DNS service providers for their external DNS needs. The basic “table stakes” characteristics of an enterprise-class managed DNS service are high reliability, high availability, high performance and traffic management. However, even the most robust DNS infrastructure is not immune to outages.

Outages may be localized, in which case only certain DNS servers in the network are not responding, or, less commonly, system-wide. A system-wide DNS failure can take an entire business offline - the equivalent of a power failure in every one of its data centers.

To prevent this, top-tier managed DNS systems have a great deal of built-in redundancy and fault tolerance, yet the danger of a single point of failure remains for enterprises that rely solely on a single-source DNS service.

If no DNS system is failure-proof, this raises the question: what should an enterprise do about it?

Using multiple DNS service providers for ultimate DNS redundancy

DNS availability statistics for managed DNS providers show that the industry norm exceeds five nines (99.999%) uptime. This is the equivalent of about 5 minutes of downtime per year. However, this top-line number does not provide any detail on the impact of degraded performance, or the cascading effect of a system-wide outage of varying duration, on individual enterprises.
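The five-minute figure is easy to verify:

```python
# Sanity-checking "five nines": 99.999% uptime permits roughly
# five minutes of downtime per year.
minutes_per_year = 365 * 24 * 60                      # 525,600
allowed_downtime = (1 - 0.99999) * minutes_per_year   # minutes per year
print(round(allowed_downtime, 2))                     # -> 5.26
```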

To discover the true impact of a potential loss of DNS availability, enterprises need to properly assess the business risk associated with relying on a sole source provider, and compare that with the cost of a second source DNS service. What would a 30-minute loss of DNS cost the business in terms of revenue loss, reputation damage, support costs and recovery? What does it cost to maintain a second source DNS service?

Research amongst enterprises for whom online services are mission critical generally concludes that the cost ratios are in the range of 10:1 – one order of magnitude. Put another way, the cost of one outage is roughly estimated to be ten times the annual cost of maintaining a second service. A business would have to pay for second-source DNS for ten years to equal the cost of one major DNS outage.

Looking at the odds and costs of outages, many enterprises are opting to bring in a second, or even a third, DNS service to hold copies of critical DNS master zones.

This system of external DNS redundancy boosts DNS availability by:


1. removing the danger of exposure to a single point of DNS failure.

2. reducing traditional master-slave DNS redundancy vulnerabilities, where slave zones can’t be changed if the master becomes unavailable.

3. improving infrastructure resilience by hosting critical zones with multiple providers, ensuring continued service availability and updates of changes if one DNS service provider becomes unavailable.

The risky business of maintaining DNS redundancy across platforms

In theory, DNS redundancy across multiple DNS service provider platforms should be the best solution for optimal DNS high reliability, high availability and high performance. In practice, however, the complexity of tasks and scope for error involved in replicating and maintaining identical DNS zones on multiple platforms pose additional threats to DNS availability. The situation is made worse by:

  • A lack of centralized views
  • A lack of workflow automation
  • The difficulty of coordinating multiple platform APIs

This inability to view, synchronize and update identical zone data simultaneously can, in itself, lead to errors and conflicts in DNS configuration and result in a degradation of network performance, or even a network outage – the very events that multi-provider DNS redundancy is intended to prevent.

Protect your DNS on multiple platforms with Men & Mice xDNS Redundancy

Breaking new ground in the battle against DNS disruption, the Men & Mice xDNS Redundancy feature provides the abstraction level necessary to replicate and synchronize critical DNS master zones across multiple DNS service provider platforms, on-premises, in the cloud, or in hybrid or multi-cloud environments.

Men & Mice xDNS provides a unified view and centralized management of DNS data, regardless of the DNS service provider platform. Network administrators and other authorized users can use xDNS to perform necessary updates to their network’s DNS, as well as benefit from building automation with the powerful Men & Mice API, instead of having to dig around in different DNS platforms and deal with coordinating conflicting APIs.

Combined with the flexibility of building automation on top of the Men & Mice Suite, xDNS offers you the freedom to better distribute your DNS load based on zone priority, performance requirements and accompanying costs. With xDNS, you are better equipped to take advantage of tiered price points for external hosting of, for example, critical high-performance or less essential low-performance zones, and utilize the DNS service best suited to your situation at a given time.

 


How xDNS Redundancy Works

Using the Men & Mice xDNS feature, create a zone redundancy group by selecting critical zones from DNS servers and services such as BIND, Windows DNS, Azure DNS, Amazon Route 53, NS1, Dyn and Akamai Fast DNS.

Once an xDNS zone redundancy group has been created, xDNS assists the administrator in creating identically replicated zone content, resulting in multiple identical master zones. Additional zones can be added or removed from the xDNS group as required.

All changes initiated by the user through Men & Mice, via both the UI and the API, will be applied to all zone instances in the group. All changes made externally to zones in the xDNS group will be synchronized to all zones in that particular xDNS group. However, if DNS record conflicts arise, xDNS will alert the user and offer options for resolving them before the group is re-synchronized.

If an xDNS zone is not available for updating, for instance if one DNS service provider experiences an outage, that zone will be marked as out-of-sync. Once the zone becomes available again, it will be automatically re-synchronized and will receive all updates that were made while the DNS service was unavailable.
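The resynchronization behavior described above can be sketched as a simple loop. The names here (`providers`, `sync_group`) are illustrative, not the Men & Mice API:

```python
# Sketch of xDNS-style resync: zones on an unreachable provider are
# marked out-of-sync, then replayed once the provider returns.
def sync_group(zone_records, providers, out_of_sync):
    for name, provider in providers.items():
        if not provider["available"]:
            out_of_sync.add(name)              # mark for later replay
            continue
        provider["zone"] = dict(zone_records)  # push a full, identical copy
        out_of_sync.discard(name)
    return out_of_sync

providers = {
    "ns1":   {"available": True,  "zone": {}},
    "azure": {"available": False, "zone": {}},
}
pending = sync_group({"www": "203.0.113.10"}, providers, set())
print(pending)  # -> {'azure'}

# Provider comes back online; the next pass re-synchronizes it:
providers["azure"]["available"] = True
print(sync_group({"www": "203.0.113.10"}, providers, pending))  # -> set()
```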

Men & Mice and NS1

NS1, the leading intelligent DNS and traffic management provider, recognizes the growing need for diverse application resiliency. NS1 has joined forces with Men & Mice in improving the efficacy of external DNS redundancy. Kris Beevers, Co-founder and CEO, says:

"Leveraging multiple managed DNS networks is the clear best practice for maintaining 100% uptime in today's rapidly evolving operational environment.  Configuring and operating multiple managed DNS services can be a complex, time-consuming process.  NS1 is excited to partner with Men & Mice to help enterprises minimize management overhead and seamlessly enable redundant DNS. xDNS Redundancy is well-suited to enable multi-network DNS without the usual headaches."

Men & Mice xDNS – making external DNS redundancy truly resilient

DNS redundancy is a great concept on paper, but a daunting challenge in practice. With xDNS, enterprises can seek out second, or even third source DNS services, confident in the knowledge that their DNS, and ultimately their business, will truly be safer that way.

Magnus Bjornsson, Men & Mice CEO, considers xDNS an important step towards providing enterprises with greater, and more reliable, network availability.
“Recent prominent network outages once again illustrate the critical importance of building more effective network resiliency through a powerful and secure system of DNS redundancy. Men & Mice xDNS provides a simple way for companies to manage their DNS on multiple external platforms, with the Men & Mice Suite software automatically taking care of the replication and synchronization of data in a reliable and consistent manner. We are looking forward to cooperating with NS1 on developing xDNS and extending DNS redundancy offerings.”

Men & Mice xDNS takes the ‘daunt’ out of maintaining external DNS redundancy, providing the centralized views and control necessary to reduce the risk of network exposure to a single point of failure, improve network reliability and performance and bolster the successful mitigation of DDoS attacks and other potentially harmful DNS incidents.

To learn more about xDNS Redundancy, check out the xDNS webinar, jointly presented by Men & Mice and NS1.

Check out the video to discover how it all comes together with DDI:

Or try it out in the Men & Mice Suite:


Topics: DNS, Security, High availability, DNS redundancy, DDoS, External DNS, Failover

Men & Mice Breaks New DDI Ground with xDNS Redundancy and Multi-Cloud IPAM

Posted by Men & Mice on 6/29/17 1:30 PM

The joke goes: “How did God create the universe in seven days? No legacy infrastructure.”

Funny (or not) as that may be, how to make the most of legacy infrastructure in the age of accelerating technological disruption and rapid cloud services adoption, is the harsh reality most enterprises face today.

Well-known for its fast, reliable and efficient performance on large enterprise networks, the Men & Mice Suite already has a reputation as the go-to, enterprise-class, software overlay DNS, DHCP and IP Address Management (DDI) solution. With the release of Version 8.2 of the Suite, Men & Mice further solidifies its position as the commercial DDI solution best equipped to help large enterprises capitalize on legacy infrastructure, while adopting cloud services to advance business agility and scalability.

The Men & Mice Suite – IP wherever you are 


Almost three decades of expert innovation in DNS, DHCP and IP Address Management has given Men & Mice unique insight and expertise into creating solutions that confidently mitigate the shocks of technological disruption.

Built as an enterprise-grade, back-end agnostic solution and deployed on top of DNS and DHCP infrastructure, the Men & Mice DDI Suite pulls together critical network data from wherever it is kept, on-premises, in the cloud, hybrid cloud or multi-cloud, and turns a potential hot mess into a comprehensive overview, accessed and controlled from a single pane of glass.

The Men & Mice Suite provides consistent administrative controls on heterogeneous networks, with unparalleled support for Windows DNS and DHCP, BIND, Unbound, PowerDNS, ISC DHCP, Kea DHCP, Cisco IOS, OpenStack and Azure DNS and Amazon Route 53.

Designed to integrate seamlessly with the VMware Orchestrator framework, the Men & Mice Suite VMware vRealize Orchestrator plug-in allows for fast and efficient provisioning of virtual machines.

The first DDI solution to fully integrate with Microsoft Active Directory (AD), the Men & Mice Suite incorporates management of users and groups through AD, while granting access rights and building up roles and responsibilities through the Men & Mice Suite, ensuring advanced and secure granular role-based access management.

Offering you the flexibility to control your network as it suits you best, the Men & Mice Suite provides three powerful interfaces: the Men & Mice management console, the Men & Mice web interface, and the strong and consistent Men & Mice API, communicating in SOAP, JSON-RPC and REST. The Men & Mice API, especially popular with our customers, provides the robust abstraction tools necessary to build and extend automation.

New in Men & Mice Suite Version 8.2

From Version 8.2, the Men & Mice Suite’s back-end agnostic capabilities are extended to include advanced, multi-cloud IP Address Management and integrated support for external DNS service providers.

Building on the flexibility of its architecture, Men & Mice Suite Version 8.2 consolidates on-premises and cloud networks in one view and point of access through support for IPAM in Azure and AWS, and by adding support for DNS service providers NS1 and Dyn to existing Men & Mice support for Azure DNS and Amazon Route 53.

Unique on the DDI market, and new in Version 8.2, the Men & Mice xDNS redundancy feature enables multi-platform DNS redundancy for ultimate network high availability, and successful mitigation of the fallout from DDoS attacks and other DNS failures.

xDNS redundancy provides the abstraction level necessary to replicate and synchronize critical DNS zones across multiple DNS service provider platforms, eliminating the possibility of a single point of failure resulting from dependency on one external DNS service provider.

Men & Mice - Changing the way the world sees networks

As IT matures into a key element for easily scalable business development and product delivery, and ultimately a driver of business growth, the need for high network availability, reliability and performance escalates.

For Magnus Bjornsson, Men & Mice CEO, delivering DDI products that boost business performance by bridging the gap between on-premises, cloud, hybrid cloud and multi-cloud network environments, is a challenge happily accepted. “We live in a world that’s getting more complicated by the minute. Cloud vendors are continuously bringing powerful new services online and enterprises are wrestling with how and when to best utilize them. Men & Mice Suite Version 8.2 is a landmark release, tackling this great challenge with innovative new features. Consolidating hybrid and multi-cloud IP Address Management in a single view and bolstering DNS availability across service provider platforms with xDNS redundancy, are great steps towards strategically improving the most critical of a company’s IT assets – its network. The Men & Mice Suite, used to run some of the largest corporate networks on the planet, is designed to give you the freedom and flexibility to use the back-end platform you want, to build the network you need.”

Looking for more?

Follow these links for more information on the Men & Mice xDNS redundancy feature, or multi-cloud IP Address Management.

To see Men & Mice xDNS redundancy in action, check out the xDNS Redundancy webinar, jointly presented by Men & Mice and NS1.

Curious about how the Men & Mice Suite can benefit your network? Get in touch with one of our Men & Mice Sales Engineers or get your free Version 8.2 license for a complimentary 30-day trial experience.


Topics: IPAM, DNS, Security, CLOUD, High availability, DNS redundancy

Keep IT outages off your network with redundant DNS

Posted by Men & Mice on 5/31/17 11:43 AM

British Airways is still reeling after a weekend IT system outage that affected more than 1,000 flights and stranded approximately 75,000 passengers at Heathrow and Gatwick airports. Some sources speculate that the compensation costs could be similar to, if not considerably more than, the $100 million that last year’s crippling IT failure cost Delta Air Lines.

Statements from British Airways blamed the IT meltdown on a power supply issue at a data center, while ruling out any possibility of a cyber attack. Though it’s far too early to speculate on exactly how a power supply problem could knock a thousand flights off schedule, one thing is certain: British Airways’ Disaster Recovery Plan failed spectacularly - where system redundancy should’ve kicked in, there was none.

British Airways’ woes serve as an unpleasant, but urgent, reminder that the way we back up our systems is sometimes even more critical than how we run them day-to-day. As it goes with life insurance or a last will and testament, there’s no point in waiting until your plane goes down (or fails to go up) before you start getting your house in order.

The most effective way of providing ‘life insurance’ for your network is to make sure that exactly mirrored copies of critical parts, such as DNS, are replicated to other locations away from your own data centers, thereby providing system redundancy. That way, if your data centers are knocked out, due to power failure, human error or malicious cyber activity, this critical service is still active, ensuring service continuity and retaining critical operational data – and keeping your passengers happy in the air, instead of sleeping on yoga mats in conference centers.

So how do you make your DNS redundant?

In a traditional DNS setup, a DNS master-slave deployment is used to maintain network availability, with one DNS server as the single writable source, or the master (see Diagram 1). Other DNS servers, or slaves, serve as back-ups, but rely on the availability of the master for new data. If the master becomes unavailable, critical DNS zones cannot be changed, and as ‘inferior’ entities, slaves can only serve zones temporarily in the absence of their master.


(Diagram 1)
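The master-slave dependency described above can be sketched as a toy model. This is a hypothetical illustration of the failure mode, not real DNS server software: the class names and methods are invented for the example.

```python
# Toy model of a traditional DNS master-slave deployment (illustrative
# only; classes and names are hypothetical, not a real DNS implementation).

class MasterServer:
    def __init__(self):
        self.zone = {}       # record name -> value
        self.serial = 1      # SOA-style serial, bumped on every change
        self.online = True

    def update(self, name, value):
        if not self.online:
            raise RuntimeError("master down: zone cannot be changed")
        self.zone[name] = value
        self.serial += 1

class SlaveServer:
    def __init__(self, master):
        self.master = master
        self.zone = {}
        self.serial = 0

    def refresh(self):
        # Zone-transfer-like copy: only possible while the master is up.
        if self.master.online and self.master.serial > self.serial:
            self.zone = dict(self.master.zone)
            self.serial = self.master.serial

    def query(self, name):
        # Slaves keep answering from their copy even when the master is down...
        return self.zone.get(name)

master = MasterServer()
slave = SlaveServer(master)
master.update("www.example.com", "192.0.2.10")
slave.refresh()

master.online = False                    # data-center outage
print(slave.query("www.example.com"))    # still answers: 192.0.2.10
try:
    master.update("www.example.com", "192.0.2.99")
except RuntimeError as err:
    print(err)                           # ...but no changes are possible
```

The sketch shows exactly the risk described: reads survive the outage, but the zone is frozen until the master returns or a slave is manually promoted.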

Depending exclusively on a master-slave deployment poses a significant risk to a company in the event of any DNS outage. The risk is compounded when automation has been built on top of the DNS infrastructure, as the automation piece will halt until the master has been restored, or a slave has been manually promoted to the status of master. However, manual change, especially on networks serving hundreds of thousands of internal and external customers, is not only very complicated, but carries a huge potential for error. When combined with the time factor and the complexities related to siloed teams and applications, reverting to manual change can too easily lead to disaster.

DNS redundancy is the practice of expanding the pool of available DNS nameservers and distributing them across separate networks – in essence, replicating your DNS data in many places and serving it from many places.

To further limit risk, companies are increasingly turning to storing their critical external DNS zones on-premises, as well as with more than one specialized DNS or cloud provider that possesses the security, equipment and expertise to handle large amounts of DNS traffic from a variety of sources successfully. Ideally, the most effective redundant DNS architecture will have multiple masters, each possessing the advanced functionality to act as a primary server responding to DNS queries (see Diagram 2). Keeping the multiple master DNS records up to date and in sync can prove a challenge, but one that is totally outweighed by the ultimate benefits of continuous high availability.


(Diagram 2)
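The synchronization challenge mentioned above can be illustrated with a small diff-based sketch. This is a hypothetical helper, not a Men & Mice API: given one desired record set, it computes what each provider would need to change to stay in sync.

```python
# Sketch of keeping several independent DNS providers in sync with one
# desired record set (hypothetical helper, not a Men & Mice API).

def sync_plan(desired, provider_zone):
    """Diff a provider's zone against the desired records and return
    (records to add/update, record names to remove)."""
    adds = {name: value for name, value in desired.items()
            if provider_zone.get(name) != value}
    removes = [name for name in provider_zone if name not in desired]
    return adds, removes

desired = {
    "www.example.com": "192.0.2.10",
    "api.example.com": "192.0.2.20",
}

providers = {
    "provider-a": {"www.example.com": "192.0.2.10"},   # missing the api record
    "provider-b": {"www.example.com": "198.51.100.1",  # stale value
                   "old.example.com": "192.0.2.99"},   # leftover record
}

for name, zone in providers.items():
    adds, removes = sync_plan(desired, zone)
    print(name, "->", adds, removes)
```

Running the plan against every provider on each change is what keeps the multiple masters' records "up to date and in sync"; real deployments would of course push the changes through each provider's own API.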

Why make your DNS redundant?

Sensible as it may seem, maintaining DNS redundancy is an IT expense that many enterprises try to avoid in order to keep operational costs down – a bit like putting off getting life insurance because it feels like such a waste to spend on the what ifs of tomorrow when all systems seem to be running just fine today. Yet these kinds of short-term savings can too easily turn into a “save a million, lose a billion” scenario, as (quite possibly) several airline bosses have recently discovered the hard way.

Keeping the running of your DNS diverse and distributed is an essential backup mechanism for any company wishing to stay connected, providing services and generating income 24/7/365.

For more information, learn how to manage redundant DNS complexity from one point of access, gain secure versatility and keep unexpected expenses down.

Topics: Redundant DNS, High availability

Ready for another look at DNSSEC?

Posted by Men & Mice on 4/12/17 8:32 AM

Since the dawn of DNS, the system has regularly experienced phases of increased vulnerability. Yet never before has it been as exposed to large-scale DNS attacks as in recent years, most notably in 2016.

Advice on how to prevent, or at least mitigate, all manner of attacks on DNS proliferates, and every security vendor and his uncle promises heaven and earth if only you buy into their solutions. While you should investigate all options and carefully devise a wholesale security strategy, together with overhauling your network’s architecture design to close unnecessary gaps and eliminate weak links, it is critical that you don’t leave one of the most obvious DNS security stones unturned – DNSSEC.

After Dyn went down so spectacularly last October during the biggest DDoS attack recorded to date, Geoff Huston gave an excellent talk at RIPE 73, speculating on possible ways to mitigate DNS attacks. In the process, he also managed to remind the audience that one of the ways to make DNS (and, by extension, the internet) safer would be to fully implement DNSSEC. Fully deployed, DNSSEC ensures that the end user is connecting to the intended, and verified, website or service corresponding to a specific domain name. In this way, DNSSEC protects the directory lookup and complements other security technologies, such as TLS (https:). DNSSEC is not a magic bullet and won’t solve all internet security issues, but in a world of constantly multiplying mutations of attacks on DNS availability, it sure can’t hurt to add it to your DNS security repertoire.
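One small, self-contained piece of the DNSSEC machinery is the key tag: the short identifier a DS record in the parent zone uses to reference a specific DNSKEY in the child zone during validation. Its calculation is defined in RFC 4034, Appendix B, and is simple enough to sketch in a few lines (the example key parameters below are made up purely to show the arithmetic):

```python
# Key tag calculation for a DNSKEY record, per RFC 4034 Appendix B.
# The key tag lets a DS record in the parent zone point at a specific
# DNSKEY in the child zone during DNSSEC validation.

def key_tag(flags: int, protocol: int, algorithm: int, pubkey: bytes) -> int:
    # Reassemble the DNSKEY RDATA in wire format.
    rdata = flags.to_bytes(2, "big") + bytes([protocol, algorithm]) + pubkey
    acc = 0
    for i, b in enumerate(rdata):
        # Even-indexed bytes are the high half of a 16-bit word.
        acc += (b << 8) if i % 2 == 0 else b
    acc += (acc >> 16) & 0xFFFF
    return acc & 0xFFFF

# Made-up example with an empty public key, just to show the arithmetic:
# the wire bytes are 01 00 03 08, so acc = 256 + 0 + 768 + 8 = 1032.
print(key_tag(256, 3, 8, b""))  # 1032
```

Tools such as `dnssec-dsfromkey` compute the same value; the point here is only that the "tedious" parts of DNSSEC are well-specified and mechanical once you dig in.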

That said, DNSSEC would be a much happier prospect for most of us if it were not so tedious to set up. Still, like all things worthwhile, a little bit of initial effort can take you a long way. To help you get a grip on the ins and outs of DNSSEC, Men & Mice’s DNS expert Carsten Strotmann recently added a DNSSEC zone signing tutorial to our useful selection of DNSSEC resources, all bound to help you take steps towards DNSSEC with greater confidence. The DNSSEC zone signing tutorial follows on from Carsten’s highly rated November 2016 webinar on DNS and DNSSEC monitoring – Strategy and Tools. An added bonus is the set of scripts for 15 essential DNS and DNSSEC monitoring tests, which can come in pretty handy once you’ve set the DNSSEC wheels in motion.

In the greater scheme of dealing with DNS vulnerabilities, it’s reassuring to know that organizations such as the IETF are dedicated to coming up with solutions to better protect the internet at the top levels of design. The DNS PRIVate Exchange Working Group (DPRIVE – a simply brilliant acronym, as acronyms go) is tasked with developing mechanisms to enable the confidentiality of DNS transactions. While DNSSEC revolves around ensuring that data remains unchanged during communication, the data itself remains open, so to speak. DPRIVE is working towards concealing the data, primarily focusing on providing confidentiality between DNS clients and iterative resolvers, but perhaps later on progressing towards providing end-to-end confidentiality of DNS transactions. In practice, these developments mean that somewhere down the road, it will hopefully be possible to:

  1. provide DNS servers with knowledge of how the structure of the internet works, so DNS queries take a straighter and narrower path, asking only for the data that is actually required instead of sending full queries all the way to the root name servers.

  2. encrypt communication between the DNS resolver (usually on the internet provider’s network) and authoritative servers on the internet so that data transmitted can’t be harvested by ill-intentioned entities.

One of the side benefits of this type of encryption is that the underlying transport protocol will likely switch from UDP to TCP, thereby providing the ‘handshake’ required for secure communication and making spoofing so resource-intensive that it will take the easy fun out of the kind of DoS attacks we’ve seen escalating in recent years.
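The "straighter and narrower path" of point 1 resembles what was later standardized as QNAME minimization (RFC 7816): rather than sending the full name to every server in the delegation chain, the resolver reveals only one additional label per step. A sketch of the query sequence a minimizing resolver would generate:

```python
# Sketch of QNAME minimization (the idea behind point 1, standardized as
# RFC 7816): reveal only one more label to each server in the delegation
# chain instead of sending the full name everywhere.

def minimized_queries(fqdn: str) -> list:
    labels = fqdn.rstrip(".").split(".")
    # Build the name up one label at a time, starting from the TLD.
    return [".".join(labels[-i:]) for i in range(1, len(labels) + 1)]

# A full-name resolver would send "mail.corp.example.com" to the root,
# the .com servers, and so on; a minimizing resolver asks each level only:
for q in minimized_queries("mail.corp.example.com"):
    print(q)
# com
# example.com
# corp.example.com
# mail.corp.example.com
```

The root servers thus learn only that someone asked about `.com`, not the full hostname – less data leaked at every level of the hierarchy.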

With all new generic top-level domains, as well as country-code top-level domains, DNSSEC-signed today, the implementation of DNSSEC to make the internet more robust and secure is quickly becoming the rule, rather than the exception. Which begs the question: why wait till tomorrow when you can begin implementing DNSSEC on your domain today?


 

Topics: DNSSEC, DNS, DANE

Men & Mice Suite Version 8.1 – Loving you long time

Posted by Men & Mice on 1/24/17 10:10 AM

It’s January, so it must be time for the annual Men & Mice Suite LTS release, aka long term support release.

A version upgrade of the Men & Mice Suite is scheduled for release three times a year. The versions are differentiated as Long Term Support (LTS) releases, and feature releases.

The first release in January of every year is an LTS release. By LTS we mean this version will be supported for two years after its initial release date. The two feature releases have a shorter support cycle.

While the primary focus of the feature releases is to introduce new functionality and features, the primary focus of the LTS releases is to fine-tune and improve newly introduced features, as well as to improve the stability and performance of the Men & Mice Suite in general. We like to see our annual LTS release as the prime example of our commitment to quality, superior functionality and keeping our solution as fast, simple and stable as our customers have become accustomed to.

For a peek at the features that found their way into the Suite in 2016 and are fine-tuned in Version 8.1, check out the details on our Windows Server 2016 support, REST API and VMware plug-in here. If you want to sink your teeth into the REST API, read our detailed article on the subject. And if you’re curious about support for ISC Kea DHCP and Windows Server 2016 Response Rate Limiting, look no further than here.

Finally, read more on how Men & Mice also made inroads into the cloud in 2016 with support for Azure DNS, developed in close cooperation with the Microsoft Azure Team.

One brand new tidbit added to 8.1 is a beautiful new look for the console. A new, fresher font and some easy-to-follow icons are sure to make the superior Men & Mice Suite ergonomic experience all that much more visually pleasing. Enjoy!

All further information on Men & Mice Suite Version 8.1 is available in the Release Notes documentation.



If you’d like to meet up with Men & Mice in person, please come and visit us at Booth E54 at Cisco Live Berlin at the end of February.

If you can’t make it to Berlin, let Men & Mice come to you - sign up for the BIND 9 Logging Best Practices webinar on February 2nd!

Happy January all the way from a not-so-chilly Iceland,

The Men & Mice Team

 

Topics: Men & Mice Suite, DDI

5 ways to have fun while not doing IPAM this Christmas

Posted by Men & Mice on 12/23/16 7:06 AM

At the end of a year overshadowed by Mirai botnets, leaked emails, late-night Twitter rants and talk of upgrading the dormant Cold War to Version 2.0, perhaps this Christmas is the ideal time to sit back, pop that (nut) roast in the oven and relax with a little something different. Have your pick from this short collection of fun IPAM-like things to enjoy this festive season.

  1. First up, a highly entertaining TED talk by Mikko Hypponen, well-known security specialist from F-Secure. Hilarious anecdotes, most notably when tracing the makers of the first PC virus (Brain A), help to make Mikko’s talk on all things cybercrime just as relevant today as it was when he first delivered it in 2011.
  2. If Mikko’s talk set in motion a nostalgic longing for the good old days of ‘hobby’ viruses, what better place to visit than the Malware Museum? Take a walk on this ‘formerly’ wild side and rediscover the almost cutesy retro viruses of yore. OK, not quite so yore, only the 80s and 90s, but still tech yore, really.

It’s hard to believe Casino and Walker may have paved the way for the massive effects of a Mirai botnet or bizarre developments such as ransomware as a service, but hey, everything’s got to start somewhere, doesn’t it?   

  3. Speaking of Mirai. Not satisfied with taking the DNS out of Dyn in October in the biggest DDoS attack witnessed so far, a new Mirai strain set its sights on routers and modems in November, causing an outage affecting 900,000 Deutsche Telekom users and possibly leaving up to 5 million devices vulnerable. With commercial routers biting the malware dust in such spectacular fashion, perhaps it’s just better to build your own. This handy Ars Technica guide to building a Linux router makes it look easy. Well, at least for some of us.
     
  4. Let’s face it. All things DDI aren’t very funny. Actually, very little is. But that doesn’t stop some of us from trying to make it funny. And the rest of us from trying to explain the trying to everyone who doesn’t get it. If you are really at a loss for fun things to do this Christmas, then maybe this SysAdmin thread will liven it up for you. Or maybe not. Worth a shot for some of the comments on the comments, though!
  5. Last but not least: some useful DDI tips and tricks as described by the 13 Icelandic Yule Lads. Monitoring DNSSEC, doing IPAM subnet discovery or sniffing out rogue IP addresses takes on a whole new meaning when done with the help of the ogress Grýla’s boys. What can possibly go wrong if Doorway Sniffer, Pot Scraper and Sausage Swiper try to find ways to do DDI better? This seasonal eBook compilation makes for easy bed-time reading.
    Be warned: not for the faint of heart!

Merry Christmas from the very merry bunch at Men & Mice! 

Topics: IPAM