
Ship explosion takes ICANN gear out of action

Kevin Murphy, October 3, 2016, Domain Tech

An explosion and fire aboard a cargo ship has caused hardware destined for ICANN’s upcoming meeting in Hyderabad to be impounded.

A welding accident caused the explosion aboard the mega container vessel as it was docked in Hamburg on September 1, according to reports.

The resulting fire took four days for firefighters to put out, according to ICANN.

ICANN had two containers on the ship, a 40-footer and a 20-footer, moving gear from June’s Helsinki meeting to next month’s ICANN 57 in India, the organization said.

The smaller of the two containers was close to the fire and has been “detained” in Germany where it may not be released for months or years.

It held “printers, remote participation computers, camera kits, digital signage equipment, and all network hardware and wireless equipment, including over 5 miles (8 km) of cabling”, ICANN said in a blog post.

While replacements have been secured for much of the equipment — likely at a cost of many thousands of dollars — some of the gear cannot be replaced in time for Hyderabad.

The main impact of this will be that remote meeting hubs will not be able to broadcast live into the Hyderabad venue, according to ICANN.

On-site participants may also experience slower than expected downloads due to the unavailability of the Akamai content delivery network servers the meetings usually use.

ICANN ships about 100 tonnes of kit to each of its meetings.

ICANN 57 will run from November 3 to November 9 at the International Convention Centre.

Are .mail, .home and .corp safe to launch? Applicants think so

Kevin Murphy, August 28, 2016, Domain Tech

ICANN should lift the freeze on new gTLDs .mail, .home and .corp, despite fears they could cause widespread disruption, according to applicants.

Fifteen applicants for the strings wrote to ICANN last week to ask for a risk mitigation plan that would allow them to be delegated.

The three would-be gTLDs were put on hold indefinitely almost three years ago, after studies determined that they were at risk of causing far more “name collision” problems than other strings.

The fear is that, if they were to start resolving on the internet, they would lead to problems ranging from data leakage to systems simply stopping working properly.

Name collisions are something all new TLDs run the risk of creating, but .home, .corp and .mail are believed to be particularly risky due to the sheer number of private networks that use them as internal namespaces.

My own ISP, which has millions of subscribers, uses .home on its home hub devices, for example. Many companies use .corp and .mail on their LANs, due to longstanding advice from Microsoft and the IETF that it was safe to do so.
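To see what such a leak looks like in practice, here’s a minimal sketch using the dnspython package; the internal hostname is hypothetical, not taken from any study:

    # A minimal sketch of an internal-only name leaking to the public DNS.
    # Assumes the dnspython package; "fileserver.corp" is a hypothetical
    # internal hostname used purely for illustration.
    import dns.resolver

    try:
        dns.resolver.resolve("fileserver.corp", "A")
    except dns.resolver.NXDOMAIN:
        # Today the root answers NXDOMAIN, so the lookup fails harmlessly.
        # If .corp were delegated, the same query could instead be answered
        # by whoever registered the matching second-level domain.
        print("NXDOMAIN: .corp is not delegated, so the query fails safely")

Every such query from inside a corporate LAN that escapes to the public DNS is a potential collision.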

A 2013 study (pdf) showed that .home received almost 880 million DNS queries over a 48-hour period, while .corp received over 110 million.

That was vastly more than other non-existent TLDs.

For example, .prod (which some organizations use to mean “production”) got just 5.3 million queries over the same period, and when Google got .prod delegated two years ago it prompted an angry backlash from inconvenienced admins.

While .mail wasn’t quite on the same scale as the other two, third-party studies determined that it posed similar risks to .home and .corp.

All three were put on hold indefinitely. ICANN said it would ask the IETF to consider making them officially reserved strings.

Now the applicants, noting the lack of IETF movement to formally freeze the strings, want ICANN to work on a thawing plan.

“Rather than continued inaction, ICANN owes applicants for .HOME, .CORP, and .MAIL and the public a plan to mitigate any risks and a proper pathway forward for these TLDs,” the applicants told ICANN (pdf) last Wednesday.

A December 2015 study found that name collisions have occurred in new gTLDs, but that no truly serious problems have been caused.

That does not mean .home, .corp and .mail would be safe to delegate, however.

ICANN to flip the secret key to the internet

Kevin Murphy, July 20, 2016, Domain Tech

ICANN is about to embark on a year-long effort to warn the internet that it plans to replace the top-level cryptographic keys used in DNSSEC for the first time.

CTO David Conrad told DI today that ICANN will rotate the so-called Key Signing Key that is used as the “trust anchor” for all DNSSEC queries that happen on the internet.

Due to the complexity of the process, and the risk that something might go wrong, the move is to be announced in the coming days even though the new public key will not replace the existing one until October 2017.

The KSK is a cryptographic key pair used to sign the Zone Signing Keys that in turn sign the DNS root zone. It’s basically at the top of the DNSSEC hierarchy — all trust in DNSSEC flows from it.
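Both key types are visible in the root zone itself. Here’s a minimal sketch, assuming the dnspython package, that fetches the root’s DNSKEY records and labels each by its flags field (257 marks a KSK, 256 a ZSK):

    import dns.resolver

    # Fetch the root zone's DNSKEY RRset and classify each key by flags:
    # 257 = Secure Entry Point set (the KSK), 256 = a ZSK.
    answer = dns.resolver.resolve(".", "DNSKEY")
    for key in answer:
        role = "KSK" if key.flags == 257 else "ZSK"
        print(role, "- algorithm", key.algorithm)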

It’s considered good practice in DNSSEC to rotate keys every so often, largely to reduce the window would-be attackers have to compromise them.

The Zone Signing Key used by ICANN and Verisign to sign the DNS root is rotated quarterly, and individual domain owners can rotate their own keys as and when they choose, but the same KSK has been in place since the root was first signed in 2010.

Conrad said that ICANN is doing the first rollover partly to ensure that the procedures it has in place for changing keys are effective and could be deployed in case of emergency.

That said, this first rotation is going to happen at a snail’s pace.

Key generation is a complex matter, requiring the physical presence of at least three of seven trusted key holders.

These seven individuals possess physical keys to bank-style strong boxes which contain secure smart cards. Three of the seven cards are needed to generate a new key.

Each of the quarterly ZSK signing ceremonies — which are recorded and broadcast live over the internet — takes about five hours.

The first step in the rollover, Conrad said, is to generate the keys at ICANN’s US east coast facility in October this year. A copy will be moved to a facility on the west coast in February.

The new public key will make its first appearance in the DNS on July 11, 2017, when it will be published alongside the current key.

It will finally replace the current key completely on October 11, 2017, by which time the DNS should be well aware of the new key, Conrad said.

There is some risk of things going wrong that could affect DNSSEC-signed domains, which is another reason for the slow pace of the rollover.

If ISPs that support DNSSEC do not start supporting the new KSK before the final switch-over, they’ll fail to correctly resolve DNSSEC-signed domains, which could lead to some sites going dark for some users.

There’s also a risk that the increased DNS packet sizes during the period when both KSKs are in use could cause queries to be dropped by firewalls, Conrad said.
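One way to observe the size concern, sketched with dnspython (the resolver address is an assumption; any validating resolver would do), is to request the root’s DNSKEY RRset with DNSSEC records included and check how big the answer is:

    import dns.message
    import dns.query

    # Ask for the root DNSKEY RRset over UDP with a 4096-byte EDNS buffer.
    # During the dual-KSK period this response grows, which is what risked
    # tripping badly configured firewalls that drop large DNS packets.
    query = dns.message.make_query(".", "DNSKEY", want_dnssec=True, payload=4096)
    response = dns.query.udp(query, "")  # assumed public resolver
    print("response size:", len(response.to_wire()), "bytes")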

“Folks who have things configured the right way won’t actually need to do anything but because DNSSEC is relatively new and this software hasn’t really been tested, we need to get the word out to everyone that this change is going to be occurring,” said Conrad.

ICANN will conduct outreach over the coming 15 months via the media, social media and technology conferences, he said.

It is estimated that about 20% of the internet’s DNS resolvers support DNSSEC, but most of those belong to just two companies — Google and Comcast — he said.

The number of signed domains is tiny as a percentage of the 326 million domains in existence today, but still amounts to millions of names.

Verisign says new gTLDs put millions at risk

Kevin Murphy, May 26, 2016, Domain Tech

Verisign has revived its old name collisions security scare story, publishing this week a weighty research paper claiming millions are at risk of man-in-the-middle attacks.

It’s actually a study into how a well-known type of attack, first documented in the 1990s, might become easier due to the expansion of the DNS at the top level.

According to the paper there might be as many as 238,000 instances per day of query traffic intended for private networks leaking to the public DNS, where attackers could potentially exploit it to do all manner of genuinely nasty things.

But Verisign has seen no evidence of the vulnerability being used by bad guys yet and it might not be as scary as it first appears.

You can read the paper here (pdf), but I’ll attempt to summarize.

The problem concerns a virtually ubiquitous protocol called WPAD, for Web Proxy Auto-Discovery.

It’s used mostly by Windows clients to automatically download a web proxy configuration file that tells their browser how to connect to the web.

Organizations host these files on their local networks. The WPAD protocol tries to find the file using DHCP first, but fails over to DNS.

So, depending on what domain your computer belongs to, your browser might use DNS to look for a wpad.dat file on a hostname beginning with “wpad” under that domain.

The vulnerability arises because companies often use previously undelegated TLDs — such as .prod or .global — on their internal networks. Their PCs could belong to domains ending in .corp, even though .corp isn’t a real TLD in the DNS root.

When these devices are roaming outside of their local network, they will still attempt to use the DNS to find their WPAD file. And if the TLD their company uses internally has actually been delegated by ICANN, their WPAD requests “leak” to the registry or a registrant.
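To make the failure mode concrete, here’s a rough sketch (not Verisign’s code) of the label-stripping, or “devolution”, logic a naive WPAD client applies; the internal domain is hypothetical, and real clients vary in where they stop:

    def wpad_candidates(domain: str):
        """Yield the wpad hostnames a naive client tries, in order."""
        labels = domain.split(".")
        while labels:
            yield "wpad." + ".".join(labels)
            labels = labels[1:]

    # A laptop from the hypothetical internal domain "dept.example.corp":
    print(list(wpad_candidates("dept.example.corp")))
    # ['wpad.dept.example.corp', 'wpad.example.corp', 'wpad.corp']
    # The last candidate, wpad.corp, is the query that leaks to the public
    # DNS, and to an attacker's server if .corp were ever delegated.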

A malicious attacker could register a domain name in a TLD that matches the domain the target company uses internally, allowing him to intercept and respond to the WPAD request and set himself up as the roaming laptop’s web proxy.

That would basically allow the attacker to do pretty much whatever he wanted to the victim’s browsing experience.

Verisign says it saw 20 million WPAD leaks hit its two root servers every single day when it collected its data, and estimates that 6.6 million users are affected.

The paper says that of the 738 new gTLDs it looked at, 65.7% of them saw some degree of WPAD query leakage.

The ones with the most leaks, in order, were .global, .ads, .group, .network, .dev, .office, .prod, .hsbc, .win, .world, .one, .sap and .site.

It’s potentially quite scary, but there are some mitigating factors.

First, the problem is not limited to new gTLDs.

Yesterday I talked to Matt Larson, ICANN’s new vice president of research (who held the same post at Verisign until a few years ago).

He said ICANN has seen the same problem with .int, which was delegated in 1988. ICANN runs one of .int’s authoritative name servers.

“We did a really quick look at 24 hours of traffic and saw a million and a half queries for domain names of the form wpad.[something].int, and that’s just one name server out of several in a 24-hour period,” he said.

“This is not a new problem, and it’s not a problem that’s specific to new gTLDs,” he said.

According to Verisign’s paper, only 2.3% of the WPAD query leaks hitting its root servers were related to new gTLDs. That’s about 238,000 queries every day.

With such a small percentage, you might wonder why new gTLDs are being highlighted as a problem.

I think it’s because organizations typically won’t own the new gTLD domain name that matches their internal domain, something that would eliminate the risk of an attacker exploiting a leak.

Verisign’s report also has limited visibility into the actual degree of risk organizations are experiencing today.

Its research methodology by necessity was limited to observing leaked WPAD queries hitting its two root servers before the new gTLDs in question were delegated.

The company only collected relevant NXDOMAIN traffic to its two root servers — DNS queries with answers typically get resolved closer to the user in the DNS hierarchy — so it has no visibility into whether the same level of leaks happens post-delegation.

Well aware of the name collisions problem, largely due to Verisign’s 11th-hour epiphany on the subject, ICANN forces all new gTLD registries to wildcard their zones for 90 days after they go live.

All collision names are pointed to, a reserved IP address picked in order to catch the attention of network administrators (DNS uses TCP/IP port 53).

Potentially, at-risk organizations could have fixed their collision problems shortly after the colliding gTLD was delegated, reducing the global impact of the vulnerability.

There’s no good data showing how many networks were reconfigured due to name collisions in the new gTLD program, but there is some anecdotal evidence of admins telling Google to go fuck itself when .prod got delegated.

A December 2015 report from JAS Advisors, the consultancy that came up with the controlled interruption idea, said the effects of name collisions have been rather limited.

ICANN’s Larson echoed the advice put out by security watchdog US-CERT this week, which among other things urges admins to use proper domain names that they actually control on their internal networks.

It’s official: new gTLDs didn’t kill anyone

Kevin Murphy, December 2, 2015, Domain Tech

The introduction of new gTLDs posed no risk to human life.

That’s the conclusion of JAS Advisors, the consulting company that has been working with ICANN on the issue of DNS name collisions.

In its final report, “Mitigating the Risk of DNS Namespace Collisions”, published last night, JAS described the response to the “controlled interruption” mechanism it designed as “annoyed but understanding and generally positive”.

New text added since the July first draft says: “ICANN has received fewer than 30 reports of disruptive collisions since the first delegation in October of 2013. None of these reports have reached the threshold of presenting a danger to human life.”

That’s a reference to Verisign’s June 2013 claim that name collisions could disrupt “life-supporting” systems such as those used by emergency response services.

Name collisions, you will recall, are scenarios in which a newly delegated TLD matches a string that is already used widely on internal networks.

Such scenarios could lead (and have led) to problems such as system failure and DNS queries leaking on to the internet.

The applied-for gTLDs .corp and .home have been effectively banned, due to the vast numbers of organizations already using them.

All other gTLDs were obliged, following JAS recommendations, to redirect all non-existent domains to, an IP address chosen to put network administrators in mind of port 53, which is used by the DNS protocol.
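That makes collisions easy to script a check for. Here’s a minimal sketch using only the Python standard library; the internal hostname is hypothetical:

    import socket

    CONTROLLED_INTERRUPTION = ""  # ICANN's collision beacon address

    # "intranet.corp" is a hypothetical internal name, not from the report.
    try:
        if socket.gethostbyname("intranet.corp") == CONTROLLED_INTERRUPTION:
            print("Name collision: this internal name now resolves publicly")
    except socket.gaierror:
        pass  # NXDOMAIN or lookup failure: no collision signal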

As we reported a little over a year ago, many administrators responded swearily to some of the first collisions.

JAS says in its final report:

Over the past year, JAS has monitored technical support/discussion fora in search of posts related to controlled interruption and DNS namespace collisions. As expected, controlled interruption caused some instances of limited operational issues as collision circumstances were encountered with new gTLD delegations. While some system administrators expressed frustration at the difficulties, overall it appears that controlled interruption in many cases is having the hoped-for outcome. Additionally, in private communication with a number of firms impacted by controlled interruption, JAS would characterize the overall response as “annoyed but understanding and generally positive” – some even expressed appreciation as issues unknown to them were brought to their attention.

There are a number of other substantial additions to the report, largely focusing on types of use cases JAS believes are responsible for most name collision traffic.

Oftentimes, as with the random 10-character domains Google’s Chrome browser uses for configuration purposes, the collision has no ill effect. In other cases, local system administrators were forced to reconfigure their software to avoid the collision.

The report also reveals that the domain name, which is owned by long-time ICANN volunteer Mikey O’Connor, receives a “staggering” 30 DNS queries every second.

That works out to almost a billion (946,728,000) queries per year, coming when a misconfigured system or inexperienced user attempts to visit a .corp domain name.
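The arithmetic checks out, assuming a Julian year of 365.25 days:

    # 30 queries/second sustained for a 365.25-day year:
    print(30 * 60 * 60 * 24 * 365.25)  # 946728000.0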