Verisign has escalated its war against competition by telling its government masters that it is not ready to add new gTLDs to the DNS root, raising eyebrows at NTIA.
The company told the US National Telecommunications and Information Administration in late May that the lack of uniform monitoring across the 13 root servers means it would put internet security and stability at risk to start delegating new gTLDs now.
In response, the NTIA told Verisign that its recent position on DNS security is “troubling”. It demanded confirmation that Verisign is not planning to block new gTLDs from being delegated.
Verisign senior VP Pat Kane wrote in the May letter:
we strongly believe certain issues have not been addressed and must be addressed before any root zone managers, including Verisign, are ready to implement the new gTLD Program.
We want to be clearly on record as reporting out this critical information to NTIA unequivocally as we believe a complete assessment of the critical issues remain unaddressed which left unremediated could jeopardize the security and stability of the DNS.
we strongly recommend that the previous advice related to this topic be implemented and the capability for root server system monitoring, instrumentation, and management capabilities be developed and operationalized prior to beginning delegations.
Verisign is so far the only root server operator to publicly express concerns about the lack of coordinated monitoring, and many people believe that the company is simply trying desperately to delay competition for its $800 million .com business for as long as possible.
These people note that in early November 2012, Verisign signed a joint letter with ICANN and NTIA that said:
the Root Zone Partners are able to process at least 100 new TLDs per week and will commit the necessary resources to meet all root zone management volume increases associated with the new gTLD program
That letter was signed before NTIA stripped Verisign of its right to increase .com prices every year, depriving it of tens or hundreds of millions of dollars of additional revenue.
Some say that Verisign is raising spurious security concerns now purely because it’s worried about its bottom line.
NTIA is beginning to sound like one of these critics. In its response to the May 30 letter, published by ICANN on Saturday, deputy associate administrator Vernita Harris wrote:
NTIA and VeriSign have historically had a strong working relationship, but inconsistencies in VeriSign’s position in recent months are troubling… NTIA fully expects VeriSign to process change requests when it receives an authorization to delegate a new gTLD. So that there will be no doubt on this point, please provide me a written confirmation no later than August 16, 2013 that VeriSign will process change requests for the new gTLD program when authorized to delegate a new gTLD.
Harris said that a system is already in place that would allow the emergency rollback of the root zone, basically ‘un-delegating’ any gTLD that proves to cause a security or stability problem.
This would be “sufficient for the delegation of new gTLDs”, she wrote.
Could Verisign block new gTLDs?
It’s worth a reminder at this point that ICANN’s power over the DNS root is something of a facade.
Verisign, as operator of the master A root server, holds the technical keys to the kingdom. Under its NTIA contract, it only processes changes to the root — such as adding a TLD — when NTIA tells it to.
In practice, NTIA merely passes on the recommendations of IANA, the department within ICANN that has the power to request changes to the root zone, also under contract with NTIA.
In theory, either Verisign or NTIA could refuse to delegate new gTLDs — recall that when .xxx was heading to the root, the European Union asked NTIA to delay the delegation.
In practice, it seems unlikely that either party would stand in the way of new gTLDs at the root, but the Verisign rhetoric in recent months suggests that it is in no mood to play nicely.
For Verisign to refuse to delegate gTLDs out of commercial self-interest would be seen as irresponsible, however, and would likely put its role as custodian of the root at risk.
That said, if Verisign turns out to be the lone voice of sanity when it comes to DNS security, it is ICANN and NTIA that will ultimately look like they’re the irresponsible parties.
Verisign now has until August 16 to confirm that it will not make trouble. I expect it to do so under protest.
According to the NTIA, ICANN’s Root Server System Advisory Committee is currently working on two documents — RSSAC001 and RSSAC002 — that will outline “the parameters of the basis of an early warning system” to address Verisign’s concerns about root server management.
These documents are likely to be published within weeks, according to the NTIA letter.
Meanwhile, we’re also waiting for the publication of Interisle Consulting’s independent report into the internal name collision issue, which is expected to recommend that gTLDs such as .corp and .home be put on hold. I’m expecting it to be published any day now.
Two UK banks suffered downtime over the weekend after apparently failing to renew their domain name registrations.
Clydesdale Bank and Yorkshire Bank, which offer online banking services at cbonline.co.uk and ybonline.co.uk respectively, both blamed a “systems update” for the downtime.
But some customers reported seeing a registrar’s renewal page when they attempted to access the sites, and others are reportedly still seeing difficulties consistent with DNS propagation delays.
Both domain names have expiry dates of July 26, according to Whois records.
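Expiry dates like that are easy to sanity-check yourself. Here’s a minimal sketch, assuming the standard whois command-line tool is installed and that the registry’s output labels the relevant field with the word “expiry” (Nominet’s .uk output does; other registries vary):

```python
# Hedged sketch: shells out to the "whois" CLI and greps for an expiry field.
import subprocess

for domain in ("cbonline.co.uk", "ybonline.co.uk"):
    output = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    for line in output.splitlines():
        if "expiry" in line.lower():
            print(domain, "->", line.strip())
```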
Thankfully, the banks, both of which are owned by National Australia Bank, managed to retain control of their domains. If they’d fallen into third-party hands, things could have been a lot worse.
Combined, the banks have revenue of a couple of billion pounds.
Google and other members of the New gTLD Applicant Group are happy to let ICANN put their applications on hold in response to security concerns raised by Verisign.
During the ICANN 47 Public Forum in Durban on Thursday, NTAG’s Alex Stamos — CTO of .secure applicant Artemis — said that agreement had been reached that about half a dozen applications could be delayed:
NTAG has consensus that we are willing to allow these small numbers of TLDs that have a significant real risk to be delayed until technical implementations can be put in place. There’s going to be no objection from the NTAG on that.
While he didn’t name the strings, he was referring to gTLDs such as .home and .corp, which were highlighted earlier in the week as having large amounts of error traffic at the DNS root.
There’s a worry, originally expressed by Verisign in April and echoed by independent consultancy Interisle this week, that collisions between new gTLDs and widely used internal network names will lead to data leakage and other security problems.
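The mechanics are easy to demonstrate. Here’s a minimal sketch using the dnspython package and a hypothetical internal-style name: on a network that uses .corp internally the name resolves locally, while from anywhere else the query falls through to the public root servers.

```python
# Hedged sketch, assuming dnspython (pip install dnspython) and a made-up name.
import dns.resolver

try:
    answer = dns.resolver.resolve("fileserver.corp", "A")  # hypothetical internal name
    print("resolved locally:", [rr.address for rr in answer])
except dns.resolver.NXDOMAIN:
    print("query escaped to the public DNS and got NXDOMAIN from the root")
```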
Google’s Jordyn Buchanan also took the mic at the Public Forum to say that Google will gladly put its uncontested application for .ads — which Interisle says gets over 5 million root queries a day — on hold until any security problems are mitigated.
Two members of the board described Stamos’ proposal as “reasonable”.
Both Stamos and ICANN CEO Fadi Chehade indirectly criticised Verisign for the PR campaign it has recently built around its new gTLD security concerns, which has led to somewhat one-sided articles in the tech press and mainstream media such as the Washington Post. Stamos said:
What we do object to is the use of the risk posed by a small, tiny, tiny fraction — my personal guess would be six, seven, eight possible name spaces that have any real impact — to then tar the entire project with a big brush. For contracted parties to go out to the Washington Post and plant stories about the 911 system not working because new TLDs are turned on is completely irresponsible and is clearly not about fixing the internet but is about undermining the internet and undermining new gTLDs.
Later, in response to comments on the same topic from the Association of National Advertisers, which suggested that emergency services could fail if new gTLDs go live, Chehade said:
Creating an unnecessary alarm is equally irresponsible… as publicly responsible members of one community, let’s measure how much alarm we raise. And in the trademark case, with all due respect it ended up, frankly, not looking good for anyone at the end.
That’s a reference to the ANA’s original campaign against new gTLDs, which wound up producing not much more than a lot of column inches about an utterly pointless Congressional hearing in late 2011.
Chehade and the ANA representative this time agreed publicly to work together on better terms.
New gTLDs could be in jeopardy following the results of a study into the security risks they may pose.
ICANN is likely to be told to put in place measures to mitigate the risk of new gTLDs causing problems, and chief security officer Jeff Moss said “deadlines will have to move” if global DNS resolution is put at risk.
His comments referred to the potential for clashes between applied-for new gTLD strings and non-existent TLDs that are nevertheless already widely used on internal networks.
That’s a problem that has been increasingly highlighted by Verisign in recent months. The difference here is that the study’s author does not have a .com monopoly to protect.
Interisle Consulting, which has been hired by ICANN to look into the problem, today released some of its preliminary findings during a session at the ICANN 47 meeting in Durban, South Africa.
The company looked at domain name look-up data collected from one of the DNS root servers over a 48-hour period, in an attempt to measure the potential scope of the clash problem.
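For a rough idea of what that kind of measurement involves, here’s a minimal sketch that tallies rightmost labels, assuming a hypothetical plain-text log with one queried name per line (real root server captures are packet data with a much richer layout):

```python
# Hedged sketch: counts queries per TLD from a hypothetical query log.
from collections import Counter

tld_counts = Counter()
with open("root-queries.log") as log:  # hypothetical input file
    for line in log:
        name = line.strip().rstrip(".").lower()
        if name:
            tld_counts[name.rsplit(".", 1)[-1]] += 1  # rightmost label is the TLD

for tld, count in tld_counts.most_common(17):
    print(tld, count)
```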
Some of its findings are surprising:
- Of the 1,408 strings originally applied for in the current new gTLD round, only 14 do not currently have any root traffic.
- Three percent of all requests were for strings that have been applied for in the current round.
- A further 19% of requests were for strings that could potentially be applied for in future rounds (that is, the TLD was syntactically well-formed and not a banned string such as .local).
- .home, the most frequently requested invalid TLD, received over a billion queries over the 48-hour period. That’s compared to 8.5 billion for .com.
Here’s a list of the top 17 invalid TLDs by traffic, taken from Interisle’s presentation (pdf) today.
If the list had been of the top 100 requested TLDs, 13 of them would have been strings that have been applied for in the current round, Interisle CEO Lyman Chapin said in the session.
Here are the most-queried applied-for strings:
Chapin was quick to point out that big numbers do not necessarily equate to big security problems.
“Just occurrence doesn’t tell you a lot about whether that’s a good thing, a bad thing, a neutral thing, it just tells you how often the string appears,” he said.
“An event that occurs very frequently but has no negative side effects is one thing, an event that occurs very infrequently but has a really serious side effect, like a meteor strike — it’s always a product of those two factors that leads you to an assessment of risk,” he said.
For example, the reason .ice appears prominently on the list appears to be solely due to an electricity producer in Costa Rica, which “for some reason is blasting .ice requests out to the root”, Chapin said.
If the bad requests are only coming from a small number of sources, that’s a relatively simple problem to sort out — you just call up the guy responsible and tell him to sort out his network.
In cases like .home, where much of the traffic is believed to be coming from millions of residential DSL routers, that’s a much trickier problem.
The reverse is also true, however: a small number of requests doesn’t necessarily mean a low-impact risk.
There may be a relatively small number of requests for .hospital, for example, but if the impact is even a single life support machine blinking off… probably best not to delegate that gTLD.
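To put that in toy terms (the numbers below are entirely made up, not Interisle’s): risk is the product of frequency and impact, so a rare query with a severe consequence can dwarf a flood of harmless ones.

```python
# Toy illustration of frequency times impact, with invented numbers.
scenarios = {
    ".home":     {"queries_per_day": 500_000_000, "impact": 0.000001},
    ".hospital": {"queries_per_day": 1_000,       "impact": 10_000},
}
for tld, s in scenarios.items():
    print(tld, "risk score:", s["queries_per_day"] * s["impact"])
# .hospital comes out far riskier despite vastly fewer queries.
```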
Chapin said that the full report, which ICANN said could be published in about two weeks, does contain data on the number of sources of requests for each invalid TLD. Today’s presentation did not, however.
As well as the source of the requests, the second-level domains being requested are also an important factor, but that does not seem to have been addressed by this study.
For example, .home may be getting half a billion requests a day, but if all of those requests are for bthomehub.home — used today by the British ISP BT in its residential routers — the .home registry might be able to eliminate the risk of data leakage by simply giving BT that domain.
Likewise, while .hsbc appears on the list it’s actually been applied for by HSBC as a single-registrant gTLD, so the risk of delegating it to the DNS root may be minimal.
There was no data on second-level domains in today’s presentation and it does not appear that the full Interisle report contains it either. More study may be needed.
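If that data does become available, the analysis would be straightforward. Here’s a sketch against the same hypothetical query log as above, showing whether .home’s traffic is one name or millions:

```python
# Hedged sketch: tallies second-level names under .home from the same
# hypothetical one-name-per-line query log used earlier.
from collections import Counter

sld_counts = Counter()
with open("root-queries.log") as log:
    for line in log:
        labels = line.strip().rstrip(".").lower().split(".")
        if len(labels) >= 2 and labels[-1] == "home":
            sld_counts[".".join(labels[-2:])] += 1

# If bthomehub.home dominates, the mitigation could be as simple as one domain.
print(sld_counts.most_common(10))
```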
Donuts CEO Paul Stahura also took to the mic to ask Chapin whether he’d compared the invalid TLD requests to requests for invalid second-level domains in, say, .com. He had not.
One of Stahura’s arguments, expounded at length in the comment thread on this DI blog post, is that delegating TLDs with existing traffic is little different to allowing people to register .com domains with existing traffic.
So what are Interisle’s recommendations likely to be?
Judging by today’s presentation, the company is going to present a list of risk-mitigation options that are pretty similar to what Verisign has previously recommended.
For example, some strings could be permanently banned, or there could be a “trial run” — what Verisign called an “ephemeral delegation” — for each new gTLD to test for impact before full delegation.
It seems to me that if the second-level request data were available, more mitigation options would open up.
ICANN chief security officer Jeff Moss, who was on today’s panel, was asked what he would recommend to ICANN CEO Fadi Chehade today in light of the report’s conclusions.
“I am not going to recommend we do anything that has any substantial SSR impact,” said Moss. “If we find any show-stoppers, if we find anything that suggests impact for global DNS, we won’t do it. It’s not worth the risk.”
Without prompting, he addressed the risk of delay to the new gTLD program.
“People sometimes get hung up on the deadline, ‘How will you know before the deadline?’,” he said. “Well, deadlines can move. If there’s something we find that is a show-stopper, deadlines will have to move.”
The full report, expected to be published in two weeks, will be opened for public comment, ICANN confirmed.
Assuming the report is published on time and has a 30-day comment period, that brings us up to the beginning of September, coincidentally the same time ICANN expects the first new gTLD to be delegated.
ICANN certainly likes to play things close to the wire.
The Internet Architecture Board believes dotless domain names would be “inherently harmful to Internet security.”
The IAB, the oversight committee that is to internet technical standards what ICANN is to domain names, weighed in on the debate with an article apparently published yesterday.
In it, the committee states that over time dotless domains have evolved to be used only on local networks, rather than the internet, and that to start delegating them at the top level of the DNS would be dangerous:
most users entering single-label names want them to be resolved in a local context, and they do not expect a single name to refer to a TLD. The behavior is specified within a succession of standards track documents developed over several decades, and is now implemented by hundreds of millions of Internet hosts.
By attempting to change expected behavior, dotless domains introduce potential security vulnerabilities. These include causing traffic intended for local services to be directed onto the global Internet (and vice-versa), which can enable a number of attacks, including theft of credentials and cookies, cross-site scripting attacks, etc. As a result, the deployment of dotless domains has the potential to cause significant harm to the security of the Internet
The article also says (if I understand correctly) that it’s okay for browsers to interpret words entered into address bars without dots as local resources and/or search terms rather than domain names.
It’s pretty unequivocal that dotless domains would be Bad.
The article was written because there’s currently a lot of talk about new gTLD applicants — such as Google, Donuts and Uniregistry — asking ICANN to allow them to run their TLDs without dots.
There’s a ban in the Applicant Guidebook on the “apex A records” that would be required to make dotless TLDs work, but it’s been suggested that applicants could apply to have the ban lifted on a case by case basis.
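The apex A record in question is just an A record at the top of the TLD zone itself. Here’s a sketch of the lookup a dotless domain depends on, again assuming dnspython and using “search” purely as a hypothetical stand-in for a dotless gTLD:

```python
# Hedged sketch: queries for an A record at a TLD apex ("search." is made up).
import dns.resolver

try:
    answer = dns.resolver.resolve("search.", "A")
    print("apex A record found:", [rr.address for rr in answer])
except dns.resolver.NXDOMAIN:
    print("no apex A record: http://search/ would have nowhere to go")
```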
More recently, ICANN’s Security and Stability Advisory Committee has stated almost as unequivocally as the IAB that dotless domains should not be allowed.
But for some reason ICANN recently commissioned a security company to look into the issue.
This seems to have made some people, such as the At Large Advisory Committee, worried that ICANN is looking for some wiggle room to give its new gTLD paymasters what they want.
Alternatively, ICANN may just be looking for a second opinion to wave in the faces of new gTLD registries when it tells them to take a hike. It was quite vague about its motives.
It’s not just a technical issue, of course. Dotless TLDs would shake up the web search market in a big way, and not necessarily for the better.
Donuts CEO Paul Stahura today published an article on CircleID that makes the case that it is the browser makers, specifically Microsoft, that are implementing DNS all wrong, and that they’re objecting to dotless domains for competitive reasons. The IAB apparently disagrees, but it’s an interesting counterpoint nevertheless.