Dotless domains “dangerous”, security study says

Kevin Murphy, August 6, 2013, Domain Tech

An independent security study has given ICANN a couple dozen very good reasons to continue outlawing “dotless” domain names, but stopped short of recommending an outright ban.

The study, conducted by boutique security outfit Carve Systems and published by ICANN this morning, confirms that dotless domains — as the name suggests, a single TLD label with no second-level domain and no dot — are potentially “dangerous”.

If dotless domains were to be allowed by ICANN, internet users could unwittingly send their private data across the internet instead of across a local network, Carve found.
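The mechanism is easy to sketch. Whether a bare label stays on the local network depends entirely on the stub resolver’s configuration: a name with a trailing dot is fully qualified and can only be answered by the public DNS, while a dotless label may (or may not) have an internal search suffix appended first. Here’s a minimal Python illustration using only the standard library; the “mail” label and any search-list behaviour are assumptions about the machine it runs on:

    import socket

    # A sketch of why dotless names are ambiguous. On a machine whose stub
    # resolver has a search list (e.g. "corp.example.com"), the bare label
    # "mail" may be expanded to "mail.corp.example.com" and answered
    # internally. The trailing-dot form "mail." is fully qualified, so only
    # the public DNS can answer it. If a dotless TLD "mail" existed there,
    # traffic meant for the intranet could leak to it instead.
    for name in ("mail", "mail."):
        try:
            addr = socket.getaddrinfo(name, 443)[0][4][0]
            print(f"{name!r} resolved to {addr}")
        except socket.gaierror as exc:
            print(f"{name!r} did not resolve: {exc}")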

That’s basically the same “internal name collision” problem outlined in a separate paper, also published today, by Interisle Consulting (more on that later).

But dotless domains would also open up networks to serious vulnerabilities such as cookie leakage and cross-site scripting attacks, according to the report.

“A bug in a dotless website could be used to target any website a user frequents,” it says.

Internet Explorer, one of the many applications tested by Carve, automatically assumes dotless domains are local network resources and gives them a higher degree of trust, it says.

Such domains also pose risks to users of standard local networking software and residential internet routers, the study found. It’s not just Windows boxes either — MacOS and Unix could also be affected.

These are just a few of the 25 distinct security risks Carve identified, 10 of which are considered serious.

ICANN has a default prohibition on dotless gTLDs in the new gTLD Applicant Guidebook, but it’s allowed would-be registries to specially request the ability to go dotless via Extended Evaluation and the Registry Services Evaluation Process (with no guarantee of success, of course).

So far, Google is the only high-profile new gTLD applicant to say it wants a dotless domain. It wants to turn .search into such a service and expects to make a request for it via RSEP.

Other portfolio applicants, such as Donuts and Uniregistry, have also said they’re in favor of dotless gTLDs.

Given the breadth of the potential problems identified by Carve, you might expect a recommendation that dotless domains should be banned outright. But that didn’t happen.

Instead, the company has recommended that only certain strings likely to have a huge impact on many internet users — such as “mail” and “local” — be permanently prohibited as dotless TLDs.

It also recommends lots of ways ICANN could allow dotless domains while mitigating the risk, such as massive educational outreach to hardware and software vendors and to end users. The report says:

Establish guidelines for software and hardware manufacturers to follow when selecting default dotless names for use on private networks. These organizations should use names from a restricted set of dotless domain names that will never be allowed on the public Internet.
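By way of illustration, that guideline would amount to little more than an allow-list check at the point where a vendor picks a default device name. The set below is hypothetical; only .local has a comparable reservation today, for multicast DNS under RFC 6762:

    # Hypothetical restricted set of dotless names reserved for private
    # networks. Only ".local" has a real reservation today (RFC 6762,
    # multicast DNS); the rest are examples of what such a list might hold.
    PRIVATE_ONLY_NAMES = {"local", "home", "corp", "lan", "internal"}

    def acceptable_default_name(name: str) -> bool:
        """Vendors would pick default device names only from the reserved set."""
        return name.strip(".").lower() in PRIVATE_ONLY_NAMES

    print(acceptable_default_name("home"))    # True: never public, safe default
    print(acceptable_default_name("search"))  # False: could collide publicly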

Given that most people have never heard of ICANN, that internet standards generally take a long time to be adopted, and allowing for regular hardware upgrade cycles, I can’t see ICANN pulling off such a feat for at least five to 10 years.

I can’t see ICANN approving any dotless domains any time soon, but it does appear to have wiggle-room in future. ICANN said:

The ICANN Board New gTLD Program Committee (NGPC) will consider dotless domain names and an appropriate risk mitigation approach at its upcoming meeting in August.

NTIA alarmed as Verisign hints that it will not delegate new gTLDs

Kevin Murphy, August 5, 2013, Domain Tech

Verisign has escalated its war against competition by telling its government masters that it is not ready to add new gTLDs to the DNS root, raising eyebrows at NTIA.

The company told the US National Telecommunications and Information Administration in late May that the lack of uniform monitoring across the 13 root servers means it would put internet security and stability at risk to start delegating new gTLDs now.

In response, the NTIA told Verisign that its recent position on DNS security is “troubling”. It demanded confirmation that Verisign is not planning to block new gTLDs from being delegated.

The letters (pdf and pdf) were published by ICANN over the weekend, more than two months after the first was sent.

Verisign senior VP Pat Kane wrote in the May letter:

we strongly believe certain issues have not been addressed and must be addressed before any root zone managers, including Verisign, are ready to implement the new gTLD Program.

We want to be clearly on record as reporting out this critical information to NTIA unequivocally as we believe a complete assessment of the critical issues remain unaddressed which left unremediated could jeopardize the security and stability of the DNS.

we strongly recommend that the previous advice related to this topic be implemented and the capability for root server system monitoring, instrumentation, and management capabilities be developed and operationalized prior to beginning delegations.

Kane’s concerns were first outlined by Verisign in its March 2013 open letter to ICANN, which also expressed serious worries about issues such as internal name collisions.

Verisign is so far the only root server operator to publicly express concerns about the lack of coordinated monitoring, and many people believe that the company is simply desperately trying to delay competition for its $800 million .com business for as long as possible.

These people note that in early November 2012, Verisign signed a joint letter with ICANN and NTIA that said:

the Root Zone Partners are able to process at least 100 new TLDs per week and will commit the necessary resources to meet all root zone management volume increases associated with the new gTLD program

That letter was signed before NTIA stripped Verisign of its right to increase .com prices every year, depriving it of tens or hundreds of millions of dollars of additional revenue.

Some say that Verisign is raising spurious security concerns now purely because it’s worried about its bottom line.

NTIA is beginning to sound like one of these critics. In its response to the May 30 letter, published by ICANN on Saturday, deputy associate administrator Vernita Harris wrote:

NTIA and VeriSign have historically had a strong working relationship, but inconsistencies in VeriSign’s position in recent months are troubling… NTIA fully expects VeriSign to process change requests when it receives an authorization to delegate a new gTLD. So that there will be no doubt on this point, please provide me a written confirmation no later than August 16, 2013 that VeriSign will process change requests for the new gTLD program when authorized to delegate a new gTLD.

Harris said that a system is already in place that would allow the emergency rollback of the root zone, basically ‘un-delegating’ any gTLD that proves to cause a security or stability problem.

This would be “sufficient for the delegation of new gTLDs”, she wrote.

Could Verisign block new gTLDs?

It’s worth a reminder at this point that ICANN’s power over the DNS root is something of a facade.

Verisign, as operator of the master A root server, holds the technical keys to the kingdom. Under its NTIA contract, it only processes changes to the root — such as adding a TLD — when NTIA tells it to.

NTIA in practice merely passes on the recommendations of IANA, the department within ICANN that has the power to ask for changes to the root zone, also under contract with NTIA.

In theory, either Verisign or NTIA could refuse to delegate new gTLDs — recall that when .xxx was heading to the root, the European Union asked NTIA to delay the delegation.

In practice, it seems unlikely that either party would stand in the way of new gTLDs at the root, but the Verisign rhetoric in recent months suggests that it is in no mood to play nicely.

To refuse to delegate gTLDs out of commercial self-interest would be seen as irresponsible, however, and would likely put Verisign’s role as custodian of the root at risk.

That said, if Verisign turns out to be the lone voice of sanity when it comes to DNS security, it is ICANN and NTIA that will ultimately look like they’re the irresponsible parties.

What’s next?

Verisign now has until August 16 to confirm that it will not make trouble. I expect it to do so under protest.

According to the NTIA, ICANN’s Root Server System Advisory Committee is currently working on two documents — RSSAC001 and RSSAC002 — that will outline “the parameters of the basis of an early warning system” to address Verisign’s concerns about root server management.

These documents are likely to be published within weeks, according to the NTIA letter.

Meanwhile, we’re also waiting for the publication of Interisle Consulting’s independent report into the internal name collision issue, which is expected to recommend that gTLDs such as .corp and .home be put on hold. I expect it to be published any day now.

Two banks go down after forgetting to renew domains

Kevin Murphy, July 31, 2013, Domain Tech

Two UK banks suffered downtime over the weekend after apparently failing to renew their domain name registrations.

Clydesdale Bank and Yorkshire Bank, which offer online banking services at cbonline.co.uk and ybonline.co.uk respectively, both blamed a “systems update” for the downtime.

But some customers reported seeing a registrar’s renewal page when they attempted to access the sites, and others are reportedly still seeing difficulties consistent with DNS propagation delays.

Both domain names have expiry dates of July 26, according to Whois records.
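For the record, checking that date yourself is trivial: whois is a plain-text protocol (RFC 3912) that runs over TCP port 43. A minimal Python sketch, querying Nominet’s .uk whois server and printing the raw record:

    import socket

    def whois(domain: str, server: str = "whois.nic.uk") -> str:
        """Minimal RFC 3912 whois query: send the name plus CRLF, read to EOF."""
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall((domain + "\r\n").encode("ascii"))
            chunks = []
            while data := sock.recv(4096):
                chunks.append(data)
        return b"".join(chunks).decode(errors="replace")

    # The expiry date appears on its own line in Nominet's response.
    print(whois("cbonline.co.uk"))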

Thankfully, the banks, both of which are owned by National Australia Bank, managed to retain control of their domains. If they’d fallen into third party hands things could have been a lot worse.

Combined, the banks have revenue of a couple of billion pounds.

“Risky” gTLDs could be sacrificed to avoid delay

Kevin Murphy, July 20, 2013, Domain Tech

Google and other members of the New gTLD Applicant Group are happy to let ICANN put their applications on hold in response to security concerns raised by Verisign.

During the ICANN 47 Public Forum in Durban on Thursday, NTAG’s Alex Stamos — CTO of .secure applicant Artemis — said that agreement had been reached that about half a dozen applications could be delayed:

NTAG has consensus that we are willing to allow these small numbers of TLDs that have a significant real risk to be delayed until technical implementations can be put in place. There’s going to be no objection from the NTAG on that.

While he didn’t name the strings, he was referring to gTLDs such as .home and .corp, which were highlighted earlier in the week as having large amounts of error traffic at the DNS root.

There’s a worry, originally expressed by Verisign in April and echoed by independent consultancy Interisle this week, that collisions between new gTLDs and widely used internal network names will lead to data leakage and other security problems.

Google’s Jordyn Buchanan also took the mic at the Public Forum to say that Google will gladly put its uncontested application for .ads — which Interisle says gets over 5 million root queries a day — on hold until any security problems are mitigated.

Two members of the board described Stamos’ proposal as “reasonable”.

Both Stamos and ICANN CEO Fadi Chehade indirectly criticised Verisign for the PR campaign it has recently built around its new gTLD security concerns, which has led to somewhat one-sided articles in the tech press and mainstream media such as the Washington Post.

Stamos said:

What we do object to is the use of the risk posed by a small, tiny, tiny fraction — my personal guess would be six, seven, eight possible name spaces that have any real impact — to then tar the entire project with a big brush. For contracted parties to go out to the Washington Post and plant stories about the 911 system not working because new TLDs are turned on is completely irresponsible and is clearly not about fixing the internet but is about undermining the internet and undermining new gTLDs.

Later, in response to comments on the same topic from the Association of National Advertisers, which suggested that emergency services could fail if new gTLDs go live, Chehade said:

Creating an unnecessary alarm is equally irresponsible… as publicly responsible members of one community, let’s measure how much alarm we raise. And in the trademark case, with all due respect it ended up, frankly, not looking good for anyone at the end.

That’s a reference to the ANA’s original campaign against new gTLDs, which wound up producing not much more than a lot of column inches about an utterly pointless Congressional hearing in late 2011.

Chehade and the ANA representative this time agreed publicly to work together on better terms.

.home gets half a billion hits a day. Could this put new gTLDs at risk?

Kevin Murphy, July 17, 2013, Domain Tech

New gTLDs could be in jeopardy following the results of a study into the security risks they may pose.

ICANN is likely to be told to put in place measures to mitigate the risk of new gTLDs causing problems, and chief security officer Jeff Moss said “deadlines will have to move” if global DNS resolution is put at risk.

His comments referred to the potential for clashes between applied-for new gTLD strings and non-existent TLDs that are nevertheless already widely used on internal networks.

That’s a problem that has been increasingly highlighted by Verisign in recent months. The difference here is that the study’s author does not have a .com monopoly to protect.

Interisle Consulting, which has been hired by ICANN to look into the problem, today released some of its preliminary findings during a session at the ICANN 47 meeting in Durban, South Africa.

The company looked at domain name look-up data collected from one of the DNS root servers over a 48-hour period, in an attempt to measure the potential scope of the clash problem.

Some of its findings are surprising:

  • Of the 1,408 strings originally applied for in the current new gTLD round, only 14 do not currently have any root traffic.
  • Three percent of all requests were for strings that have been applied for in the current round.
  • A further 19% of requests were for strings that could potentially be applied for in future rounds (that is, the TLD was syntactically well-formed and not a banned string such as .local).
  • .home, the most frequently requested invalid TLD, received over a billion queries during the 48-hour period. That’s compared to 8.5 billion for .com (a quick sanity check on those figures follows below).
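That sanity check: assuming the 8.5 billion .com figure covers the same 48-hour window, a billion queries in 48 hours works out to the headline’s half a billion a day, or roughly 12% of .com’s volume.

    # Back-of-the-envelope arithmetic from the Interisle figures above.
    home_queries_48h = 1_000_000_000   # ".home": over a billion in 48 hours
    com_queries_48h = 8_500_000_000    # ".com", assumed to be the same window

    print(f".home per day: {home_queries_48h / 2:,.0f}")              # ~500 million
    print(f"share of .com: {home_queries_48h / com_queries_48h:.0%}")  # ~12%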

Here’s a list of the top 17 invalid TLDs by traffic, taken from Interisle’s presentation (pdf) today.

[Chart: most-queried invalid TLDs, from Interisle’s presentation]

Had the list covered the top 100 most-requested TLDs, 13 of them would have been strings applied for in the current round, Interisle CEO Lyman Chapin said in the session.

Here are the most-queried applied-for strings:

[Chart: most-queried applied-for strings]

Chapin was quick to point out that big numbers do not necessarily equate to big security problems.

“Just occurrence doesn’t tell you a lot about whether that’s a good thing, a bad thing, a neutral thing, it just tells you how often the string appears,” he said.

“An event that occurs very frequently but has no negative side effects is one thing, an event that occurs very infrequently but has a really serious side effect, like a meteor strike — it’s always a product of those two factors that leads you to an assessment of risk,” he said.

For example, the reason .ice appears prominently on the list appears to be solely due to an electricity producer in Costa Rica, which “for some reason is blasting .ice requests out to the root”, Chapin said.

If the bad requests are only coming from a small number of sources, that’s a relatively simple problem to sort out — you just call up the guy responsible and tell him to sort out his network.

In cases like .home, where much of the traffic is believed to be coming from millions of residential DSL routers, that’s a much trickier problem.

The reverse is also true, however: a small number of requests doesn’t necessarily mean a low-impact risk.

There may be a relatively small number of requests for .hospital, for example, but if the impact is even a single life-support machine blinking off… probably best not to delegate that gTLD.

Chapin said that the full report, which ICANN said could be published in about two weeks, does contain data on the number of sources of requests for each invalid TLD. Today’s presentation did not, however.

As well as the source of a request, the second-level domain being requested is also an important factor, but that does not seem to have been addressed by this study.

For example, .home may be getting half a billion requests a day, but if all of those requests are for bthomehub.home — used today by the British ISP BT in its residential routers — the .home registry might be able to eliminate the risk of data leakage by simply giving BT that domain.
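Producing that per-label breakdown would be simple if the raw query data were shared. Here’s a rough sketch of the kind of tally involved, in Python; the log file name and its one-queried-name-per-line format are made up for illustration:

    from collections import Counter

    # Hypothetical tally of second-level labels queried under ".home".
    # Assumes a plain-text log with one queried name per line.
    counts = Counter()
    with open("root-queries.log") as log:
        for line in log:
            labels = line.strip().rstrip(".").lower().split(".")
            if len(labels) >= 2 and labels[-1] == "home":
                counts[labels[-2] + ".home"] += 1

    # If one name (e.g. "bthomehub.home") dominates, a targeted fix like
    # handing BT that single domain becomes plausible.
    for name, n in counts.most_common(10):
        print(f"{name}\t{n}")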

Likewise, while .hsbc appears on the list, it’s actually been applied for by HSBC as a single-registrant gTLD, so the risk of delegating it to the DNS root may be minimal.

There was no data on second-level domains in today’s presentation and it does not appear that the full Interisle report contains it either. More study may be needed.

Donuts CEO Paul Stahura also took to the mic to ask Chapin whether he’d compared the invalid TLD requests to requests for invalid second-level domains in, say, .com. He had not.

One of Stahura’s arguments, expounded at length in the comment thread on this DI blog post, is that delegating TLDs with existing traffic is little different from allowing people to register .com domains with existing traffic.

So what are Interisle’s recommendations likely to be?

Judging by today’s presentation, the company is going to present a list of risk-mitigation options that are pretty similar to what Verisign has previously recommended.

For example, some strings could be permanently banned, or there could be a “trial run” — what Verisign called an “ephemeral delegation” — for each new gTLD to test for impact before full delegation.

It seems to me that if the second-level request data were available, more mitigation options would open up.

ICANN chief security officer Jeff Moss, who was on today’s panel, was asked what he would recommend to ICANN CEO Fadi Chehade in light of the report’s conclusions.

“I am not going to recommend we do anything that has any substantial SSR impact,” said Moss. “If we find any show-stoppers, if we find anything that suggests impact for global DNS, we won’t do it. It’s not worth the risk.”

Without prompting, he addressed the risk of delay to the new gTLD program.

“People sometimes get hung up on the deadline, ‘How will you know before the deadline?’,” he said. “Well, deadlines can move. If there’s something we find that is a show-stopper, deadlines will have to move.”

The full report, expected to be published in two weeks, will be opened for public comment, ICANN confirmed.

Assuming the report is published on time and has a 30-day comment period, that brings us up to the beginning of September, coincidentally the same time ICANN expects the first new gTLD to be delegated.

ICANN certainly likes to play things close to the wire.