ICANN’s name collision plan “creates risk of abuse”

Kevin Murphy, August 27, 2013, Domain Services

One of ICANN’s proposed methods of reducing the risk of name collisions in new gTLDs may actually create its own “significant risk for abuse”, according to RIPE NCC.
Asking registry operators to send a notification to the owner of IP address blocks that have done look-ups of their TLD before it is delegated risks creating a “backlash” against ICANN and registry operators, RIPE said.
Earlier this month, ICANN said that for the 80% of applied-for strings that are categorized as low risk, “the registry operator will notify the point of contacts of the IP addresses that issue DNS requests for an un-delegated TLD or names under it.”
The proposal is intended to reduce the risk of harms caused by the collision of new gTLDs and matching names that are already in use on internal networks.
For example, if the company given .web discovers that .web already receives queries from 100 different IP blocks, it will have to look up the owners of those blocks with the Regional Internet Registries and send them each an email telling them that .web is about to hit the internet.
RIPE is the RIR for Europe, responsible for allocating IP addresses in the region, so its view on how effective a mitigation plan this is cannot be easily shrugged off.
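As an illustration of what that notification requirement entails, here is a minimal sketch of the kind of lookup a registry would have to script: querying the public RDAP redirector at rdap.org for an IP address and pulling out the contact entities registered for its block. The helper name and sample address are placeholders; real notification tooling would parse the vCard data, pick the abuse or admin contact, and handle errors and rate limits.

    import requests

    def block_contact_handles(ip: str) -> list[str]:
        """Return the handles of the contact entities registered for the
        IP block containing `ip`, via the public RDAP redirector."""
        resp = requests.get(f"https://rdap.org/ip/{ip}", timeout=10)
        resp.raise_for_status()
        # RDAP IP responses list the block's registered contacts as "entities".
        return [entity.get("handle", "?") for entity in resp.json().get("entities", [])]

    # Example: an address inside RIPE NCC's own allocation.
    print(block_contact_handles("193.0.6.139"))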
Chief scientist Daniel Karrenberg told ICANN today that the complexity of the DNS, with its layers of recursive name servers and such, makes the approach pointless:

The notifications will not be effective because they will typically not reach the party that is potentially at risk.

In addition, it will be trivial for mischief-makers to create floods of useless notifications by conducting deliberately erroneous DNS queries for target TLDs, he said:

anyone can cause the registry operator to send an arbitrary amount of mandatory notifications to any holder of IP address space. It will be highly impractical to detect such attacks or find their source by technical means. On the other hand there are quite a number of motivations for such an attack directed at the recipient or the sender of the notifications. The backlash towards the registry operator, ICANN and other parties in the chain will be even more severe once the volume increases and when it turns out that the notifications are for “non-existing” queries.

With a suitably large botnet, it’s easy to see how an attacker could generate the need for many thousands of mandatory notifications.
If the registry has a manual notification process, such a flood would effectively DDoS the registry’s ability to send the notices, potentially delaying the gTLD.
Even if the process were automated, you can imagine how IP address block owners (network admins at ISPs and hosting companies, for example) would respond to receiving notifications, each of which creates work, from hundreds of affected gTLD operators.
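To see why even an automated pipeline would not make that burden disappear, here is a hypothetical sketch of the registry side of the job — the /24 grouping mirrors how query sources are typically summarised, and the per-contact cap is invented — showing that a spoofed query flood still turns into a pile of outbound notices.

    import ipaddress
    from collections import defaultdict

    MAX_NOTICES_PER_CONTACT = 5   # invented cap, to blunt deliberate notification floods

    def plan_notifications(source_ips, contact_for_prefix):
        """Collapse query source addresses into /24 blocks, group the blocks by
        their registered contact, and cap how many notices any one contact gets."""
        prefixes = {ipaddress.ip_network(f"{ip}/24", strict=False) for ip in source_ips}
        per_contact = defaultdict(list)
        for prefix in prefixes:
            per_contact[contact_for_prefix(prefix)].append(prefix)
        return {contact: blocks[:MAX_NOTICES_PER_CONTACT]
                for contact, blocks in per_contact.items()}

Even with deduplication like this, every unique contact still represents an email that somebody has to send and somebody else has to act on.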
It’s an interesting view, and one that affected new gTLD applicants (which is most of them) will no doubt point to in their own comments on the name collisions mitigation plan.

Bank takes blame for gTLD name collision

Kevin Murphy, August 23, 2013, Domain Registries

The Commonwealth Bank of Australia, which has applied for the new gTLD .cba, has told ICANN that its own systems are to blame for most of the error traffic the string sees at the DNS root.
The company wants ICANN to downgrade its gTLD application to “low risk” from its current delay-laden “uncalculated” status, saying that it can remediate the problem itself.
Since the publication of Interisle Consulting’s name collisions report, CBA said it has discovered that its own systems “make extensive use of ‘.cba’ as a strictly internal domain.”
Leakage is the reason Interisle’s analysis of root error traffic saw so many occurrences of .cba, the bank claims:

As the cause of the name collision is primarily from CBA internal systems and associated certificate use, it is within the CBA realm of control to detect and remediate said systems and internal certificate use.

One has to wonder how CBA can be so confident based merely on an “internal investigation”, apparently without access to the same extensive and highly restricted data set Interisle used.
There are many uses of the string CBA and there can be no guarantees that CBA is the only organization spewing internal DNS queries out onto the internet.
CBA’s comment is, however, notable as an example of a bank so unconcerned about the potential risks of name collision that it’s happy to let ICANN delegate its dot-brand without additional review.
This will surely help those who are skeptical about Interisle’s report and ICANN’s response to it.

Artemis plans name collision conference next week

Kevin Murphy, August 16, 2013, Domain Tech

Artemis Internet, the NCC Group subsidiary applying for .secure, is to run a day-long conference devoted to the topic of new gTLD name collisions in San Francisco next week.
Google, PayPal and DigiCert are already lined up to speak at the event, and Artemis says it expects 60 to 70 people, many of them from major new gTLD applicants, to show up.
The free-to-attend TLD Security Forum will discuss the recent Interisle Consulting report into name collisions, which compared the problem in some cases to the Millennium Bug and recommended extreme caution when approving new gTLDs.
Brad Hill, head of ecosystem security at PayPal, will speak to “Paypal’s Concerns and Recommendations on new TLDs”, according to the agenda.
That’s notable because PayPal is usually positioned as being aligned with the other side of the debate — it’s the only company to date Verisign has been able to quote from when it tries to show support for its own concerns about name collisions.
The Interisle report led to ICANN recommending months of delay for hundreds of new gTLD strings — basically every string that already gets more daily root server error traffic than legitimate queries for .sj, the existing TLD with the fewest look-ups.
The New TLD Applicants Group issued its own commentary on these recommendations, apparently drafted by Artemis CTO Alex Stamos, earlier this week, calling for all strings except .home and .corp to be treated as low risk.
NTAG also said in its report that it has been discussing with SSL certificate authorities ways to potentially speed up risk-mitigation for the related problem of internal name certificate collisions, so it’s also notable that DigiCert’s Dan Timpson is slated to speak at the Forum.
The event may be webcast for those unable to attend in person, according to Artemis. If it is, DI will be “there”.

On the same topic, ICANN yesterday published a video interview with DNS inventor Paul Mockapetris, in which he recounted some name collision anecdotes from the Mesolithic period of the internet. It’s well worth a watch.

NTAG rubbishes new gTLD collision risk report

Kevin Murphy, August 15, 2013, Domain Policy

The New gTLD Applicants Group has slated Interisle Consulting’s report into the risk of new gTLDs causing security problems on the internet, saying the problem is “overstated”.
The group, which represents applicants for hundreds of gTLDs and has a non-voting role in ICANN’s GNSO, called on ICANN to reclassify hundreds of “Uncalculated” risk strings as “Low” risk, meaning they would not face such a substantial delay before, or such uncertainty about, their eventual delegation.
But NTAG said it “agreed” that the high-risk .corp and .home “should be delayed while further studies are conducted”. The current ICANN proposal is actually to reject both of these strings.
NTAG was responding to ICANN’s proposal earlier this month to delay 523 applications (for 279 strings) by three to six months while further studies are carried out.
The proposal was based on Interisle’s study of DNS root server logs, which showed many millions of daily queries for gTLDs that currently do not exist but have been applied for.
The worry is that delegating those strings would cause problems such as downtime or data leakage, where sensitive information intended for a recipient on the same local network would be sent instead to a new gTLD registry or one of its (possibly malicious) registrants.
NTAG reckons the risk presented by Interisle has been overblown, and it presented a point-by-point analysis of its own. It called for everything except .corp and .home to be categorized “Low” risk, saying:

We recognize that a small number of applied for names may possibly pose a risk to current operations, but we believe very strongly that there is no quantitative basis for holding back strings that pose less measurable threat than almost all existing TLDs today. This is why we urge the board to proceed with the applications classified as “Unknown Risk” using the mitigations recommended by staff for “Low Risk” strings. We believe the 80% of strings classified as “Low Risk” should proceed immediately with no additional mitigations.

The group pointed to a recent analysis by Verisign (which, contrarily, was trying to show that new gTLDs should be delayed) which included data about previous new gTLD delegations.
That report (pdf) said that .xxx was seeing 4,018 look-ups per million queries at the DNS root (PPM) before it was delegated. The number for .asia was 2,708.
If you exclude .corp and .home, both of those PPM numbers are multiples larger than the equivalent measures of query volume for every applied-for gTLD today, also according to Verisign’s data.
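For reference, the per-million figure is simply a string’s share of total root traffic scaled to a million queries. A quick sketch, using made-up totals (only the 4,018 PPM figure for .xxx comes from Verisign’s report):

    def queries_per_million(string_queries: int, total_root_queries: int) -> float:
        """Express one string's query count as a share of all root queries, per million."""
        return string_queries / total_root_queries * 1_000_000

    # Illustrative denominators only: 40.18 million look-ups against 10 billion
    # total root queries works out to the ~4,018 PPM cited for pre-delegation .xxx.
    print(queries_per_million(40_180_000, 10_000_000_000))   # -> 4018.0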
NTAG said:

None of these strings pose any more risk than .xxx, .asia and other currently operating TLDs.

the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.

Verisign’s report, which sought to provide a more qualitative risk analysis based on some data-supported guesses about where the error traffic is coming from and why, anticipated this interpretation.
Verisign said:

This could indicate that there is nothing to worry about when adding new TLDs, because there was no global failure of DNS when this was done before. Alternately, one might conclude that traffic volumes are not the only indicator of risk, and the semantic meaning of strings might also play a role. We posit that in some cases, those strings with semantic meanings, and which are in common use (such as in speech, writing, etc.) pose a greater risk for naming collision.

The company spent most of its report making somewhat tenuous correlations between its data (such as a relatively large number of requests for .medical from Japanese IP addresses) and speculative impacts (such as “undiagnosed system failures” at “a healthcare provider in Japan”).
NTAG, by contrast, is playing down the potential for negative outcomes, saying that in many cases the risks introduced by new gTLDs are no different from collision risks at the second level in existing TLDs.

Just as the NTAG would not ask ICANN to halt .com registrations while a twelve month study is performed on these problems, we believe there is no reason to introduce a delay in diversifying the Internet’s namespace due to these concerns.

While it stopped short of alleging shenanigans this time around, NTAG also suggested that future studies of root server error traffic could be gamed if botnets were engaged to crapflood the roots.
Its own mitigation plan, which addresses Interisle’s specific concerns, says that most of the reasons that non-existent TLDs are being looked up are either not a problem or can be easily mitigated.
For example, it says that queries for .youtube that arrived in the form of a request for “www.youtube” are probably browser typos and that there’s no risk for users if they’re taken to the YouTube dot-brand instead of youtube.com.
In another example, it points out that requests for “.cisco” or “.toshiba” without any second-level domains won’t resolve anyway, if dotless domains are banned in those TLDs. (NTAG, which has influential members in favor of dotless domains, stopped short of asking for a blanket ban.)
The Interisle report, and ICANN’s proposal to deal with it, are open for public comment until September 17. NTAG’s response is remarkably quick off the mark, for guessable reasons.

Verisign confirms .gov downtime, blames algorithm

Kevin Murphy, August 15, 2013, Domain Tech

Verisign this morning confirmed yesterday’s reports that the .gov top-level domain went down for some internet users due to a DNSSEC problem, which it said was related to an algorithm change.
In a posting to various mailing lists, Verisign principal engineer Duane Wessels said:

On the morning of August 14, a relatively small number of networks may have experienced an operational disruption related to the signing of the .gov zone. In preparation for a previously announced algorithm rollover, a software defect resulted in publishing the .gov zone signed only with DNSSEC algorithm 8 keys rather than with both algorithm 7 and 8. As a result .gov name resolution may have failed for validating recursive name servers. Upon discovery of the issue, Verisign took prompt action to restore the valid zone.
Verisign plans to proceed with the previously announced .gov algorithm rollover at the end of the month with the zone being signed with both algorithms for a period of approximately 10 days.

This clarifies that the problem was slightly different to what had been assumed yesterday.
It was related to a change of the cryptographic algorithm used to create .gov’s DNSSEC keys, a relatively rare event, rather than a scheduled key rollover, which is a rather more frequent occurrence.
The problem would only have made .gov domains (and consequently web sites, email, etc) inaccessible for users of networks where DNSSEC validation is strictly enforced, which is still a relatively small set.
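For the curious, checking which algorithms a zone is signed with takes a few lines with the dnspython library. This is a sketch only — it shows whatever DNSKEYs are published at the time you run it, not the state of the zone during the incident:

    import dns.resolver   # pip install dnspython
    import dns.dnssec

    # Print the flags and signing algorithm of each DNSKEY published for .gov.
    # During the incident, only algorithm 8 (RSA/SHA-256) keys were published
    # instead of both 7 and 8, so some validating resolvers rejected the zone.
    for key in dns.resolver.resolve("gov.", "DNSKEY"):
        print(key.flags, dns.dnssec.algorithm_to_text(key.algorithm))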
The US ISP with the strongest support for DNSSEC is Comcast. Since turning on its validators it has reported dozens of instances of DNSSEC failing — mostly in second-level .gov domains, where DNSSEC is mandated by US policy.
On two other occasions Comcast has blogged about the whole .gov TLD failing DNSSEC validation due to problems keeping keys up to date.
The general problem is widespread enough, and the impact severe enough, that Comcast has had to create an entirely new technology to prevent borked key rollovers making web sites go dark for its customers.
Called Negative Trust Anchors, it’s basically a Band-Aid that allows the ISP to deliberately ignore DNSSEC on a given domain while it waits for that domain’s owner to sort out its key problem.
The technology was created following the widely reported nasa.gov outage last year.
It’s really little wonder that so few organizations are interested in deploying DNSSEC today.
Yesterday’s .gov problem may have been minor, lasting only an hour or two, but had the affected TLD been .com, and had DNSSEC deployment been more widespread, everyone on the planet would have noticed.
Under ICANN contract, DNSSEC is mandatory for new gTLDs at the top level, but not the second level.

Donuts, Uniregistry and Famous Four respond to ICANN’s new gTLD security bombshell

Kevin Murphy, August 6, 2013, Domain Registries

Following the shock news this morning that ICANN wants to delay hundreds of new gTLD applications due to potential security risks, we pinged a few of the biggest applicants for their initial reactions.
Donuts, Uniregistry and Famous Four Media, which combined are responsible for over a fifth of all applications, have all responded so far, so we’re printing their statements here in full.
As a reminder, two reports published by ICANN today a) strongly warn against delegating so-called “dotless” domains and b) present significant evidence that “internal name collisions” are a real and present danger to the security and stability of many private networks.
ICANN, in response to the internal name collision issue, proposed to delay 20% of all new gTLD applications for three to six more months while more research is carried out.
It also wants to ask new gTLD registries to conduct outreach to internet users potentially affected by their delegated gTLD strings.
Of the three, Donuts seems most upset. It sent us the following statement:

One has to wonder about the timing of these reports and the motivations behind them. Donuts believes, and our own research confirms satisfactorily to us, that dotless domains and name collision are not threatening to the stability and security of the domain name system.
Name collisions, such as the NxD (in the technical parlance) collisions studied in this report, happen every day in .com, yet the study did not quantify those and Verisign does not block those names from being registered.
We’re concerned about false impressions being deliberately created and believe the reports are commercially or competitively motivated.
There is little reason to pre-empt dotless domains now when there are ICANN processes in place to evaluate them in due course. We don’t believe that ICANN resources need to be deployed at this point on understanding the potential innovations of possible uses nor any security harms.
We also think that name collision is an overstated issue. Rather than take the overdone step of halting or delaying these TLDs, if the issue really is such a concern, it would be wiser to focus on the second-level names where a conflict could occur.
As the NTIA recently wrote, Verisign’s inconsistencies on technical issues are very troubling. These issues have been thoroughly studied for some time. It’s far past due to conclude this eight-year process and move to delegation.

As I haven’t previously heard any reason to doubt Interisle Consulting’s impartiality or question its motivation in writing the name collisions report I asked Donuts for clarification, but the company declined to elaborate.
Interisle has been working with ICANN for some time on various technical studies and is also one of the new gTLD program’s independent evaluators, responsible for registry services evaluations.
Uniregistry CEO Frank Schilling was also unhappy with the report. He sent the following statement:

We are deeply dismayed by this new report, both by its substance and its timing. On the substance, the concerns addressed by the report relate, primarily if not solely, to solvable problems created by third-parties using the DNS in non-standard ways. We expect that any problems will be addressed quickly by the companies and individuals that caused them in the first place.
On ICANN’s timing, it is, coming just as the first new gTLDs are prepared to launch, very late and, quite obviously, highly disruptive to the long-standing business plans of the companies that relied on ICANN’s guidebook and stated timelines. Uniregistry believes that the best approach is to move forward with the launch of all new gTLDs on the existing schedule.

Finally, Famous Four Media is slightly more relaxed about the situation, judging by the statement it sent us:

Famous Four Media’s primary concern is the security and stability of the Internet. Since this is in the interest of all parties involved in the new gTLD program from registries to registrants and all in between Famous Four Media welcomes these proposals.
Whilst the latest report, and the consequent ICANN proposals, will inevitably cause delays and additional costs in the launches of new gTLDs, Famous Four Media does not believe it will impact its go-to-market plans significantly. The majority of our TLD strings are considered “low risk” and see this in a very positive light although other applicants might not afford to be as sanguine.

According to the DI PRO New gTLD Application Tracker, which has been updated with the risk levels ICANN says each applied-for gTLD poses, 18 of Famous Four’s 60 original applications are in the riskiest two categories, compared to 23 of Uniregistry’s 54 and 102 of Donuts’ 307.

New gTLDs are the new Y2K: .corp and .home are doomed and everything else is delayed

Kevin Murphy, August 6, 2013, Domain Registries

The proposed gTLDs .home and .corp create risks to the internet comparable to the Millennium Bug, which terrorized a burgeoning internet at the turn of the century, and should be rejected.
Meanwhile, every other gTLD that has been applied for in the current round could be delayed by months in order to mitigate the risks they pose to internet users.
These are the conclusions ICANN has drawn from Interisle Consulting’s independent study into the problems that could be caused when new gTLDs clash with widely-used internal naming systems.
The extensive study, which drew on 8TB of traffic data provided by 11 of the 13 DNS root server operators, is 197 pages long and absolutely fascinating. It was published by ICANN today.
As Interisle CEO Lyman Chapin reported at the ICANN meeting in Durban a few weeks ago, the large majority of TLDs that have been applied for in the current round already receive large amounts of error traffic:

Of the 1,409 distinct applied-for TLD strings, 1,367 appeared at least once in the 2013 DITL [Day In the Life of the Internet] data with the string at the TLD position.

We’ve previously reported on the volume of queries new gTLDs get, such as the fact that .home gets half a billion hits a day and that 3% of all requests were for strings that have been applied for in the current round.
The extra value in Interisle’s report comes when it starts to figure out how many end points are making these requests, and how many second-level domains they’re looking for.
These are vitally important factors for assessing the scale of the risk of each TLD.
Again, .home and .corp appear to be the most dangerous.
Interisle capped the number of second-level domains it counted in the 2013 data at 100,000 per TLD per root server — 1,100,000 domains in total — and .home was the only TLD string to hit this cap.
Cisco Systems’ proposed .cisco TLD came close, failing to hit the cap in only one of the 11 root servers providing data, while .box and .iinet (both also used widely on home routers) hit the cap on at least one root server.
The lowest count of second-level domains of the 35 listed in the report came from .hsbc, the bank brand, but even that number was a not-inconsiderable 2,000.
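A rough sketch of that counting methodology as described — the input here is just an iterable of queried names, whereas the real study worked over parsed DITL packet captures from each root server:

    from collections import defaultdict

    SLD_CAP = 100_000   # per TLD per root server, as in the Interisle report

    def count_second_level_domains(query_names):
        """Count distinct second-level labels seen under each queried TLD,
        stopping once a TLD hits the cap."""
        seen = defaultdict(set)
        for qname in query_names:
            labels = qname.rstrip(".").lower().split(".")
            if len(labels) < 2:
                continue
            tld, sld = labels[-1], labels[-2]
            if len(seen[tld]) < SLD_CAP:
                seen[tld].add(sld)
        return {tld: len(slds) for tld, slds in seen.items()}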
Why are these requests being made?
Surprisingly, interactions between a security feature in Google’s own Chrome browser and common residential routers appear to be the biggest cause of queries for non-existent TLDs.
That issue, which impacts mainly .home, accounts for about 46% of the requests counted, according to the report.
In second place, with 15% of the queries, are requests for real domain names that appear to have had a non-existent TLD — again, usually .home — appended by a residential router or cable modem.
Apparent typos — where a user enters a URL but forgets to type the TLD — were a relatively small percentage of requests, coming in at under 1% of queries.
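A toy classifier makes that breakdown easier to picture. Everything below is illustrative — the TLD list is a tiny stand-in and the random-probe test is only a shape check — and is not Interisle’s actual method:

    import re

    DELEGATED_TLDS = {"com", "net", "org", "uk", "de", "jp"}   # tiny illustrative subset

    def guess_cause(qname: str) -> str:
        """Take a rough guess at why a query for an undelegated TLD reached the root."""
        labels = qname.rstrip(".").lower().split(".")
        if labels[-1] in DELEGATED_TLDS:
            return "delegated TLD, not a collision candidate"
        if len(labels) >= 3 and labels[-2] in DELEGATED_TLDS:
            # e.g. www.example.com.home: a real name with a search suffix bolted on
            return "real domain with a suffix appended"
        if len(labels) == 2 and re.fullmatch(r"[a-z]{7,15}", labels[0]):
            # e.g. xqjzkvwpta.home: shaped like a Chrome probe plus a router suffix
            return "possible random probe with a suffix appended"
        # e.g. www.youtube or intranet.corp: a typo or a bare internal name
        return "typo or internal name"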
The study also found that bad requests come from many thousands of sources. This table compares the number of requests to the number of sources.
[Table: root query counts and /24 source prefix counts for selected applied-for strings]
The “Count” column is the number, in thousands, of requests for each TLD string. The “Prefix Count” column refers to the number of sources providing this traffic, counted by the /24 IP address block (each of which covers up to 256 potential hosts).
As you can see, there’s not necessarily a correlation between the number of requests a TLD gets and the number of people making the requests — .google gets queried by more sources than the others, but it’s only ranked 24th in terms of overall query volume, for example.
Interisle concluded from all this that .corp and .home are simply too dangerous to delegate, comparing the problem to the year 2000 bug, where a global effort was required to make sure software could support the four-digit dating scheme required by the turn of the century.
Here’s what the report says about .corp:

users could be taken to the wrong web site (and possibly be exposed to phishing attacks) or told that web sites do not exist when they do, depending on how the .corp TLD is resolved. A corporate mail system might attempt to deliver email to the wrong server, and this could expose sensitive or confidential information to someone who was not supposed to receive it. In essence, everything deployed in the private network would need to be checked.
There are no easy solutions to these problems. In an ideal world, the operators of these private networks would get a timely notification of the new TLD’s delegation and then take action to address these issues. That seems very improbable. Even if ICANN generated sufficient publicity about the new TLD’s delegation, there is no guarantee that this will come to the attention of the management or operators of the private networks that could be jeopardized by the delegation.

It seems reasonable to estimate that the amount of effort involved might be comparable to a wholesale renumbering of the internal network or the Y2K problem.

It notes that applied-for TLDs such as .site, .office, .group and .inc appear to be used in similar ways to .home and .corp, but do not appear to present as broad a risk.
To be clear, the risk we’re talking about here isn’t just people typing the wrong things into browsers, it’s about the infrastructure on many thousands of private networks starting to make the wrong security assumptions about domain names.
ICANN, in response, has outlined a series of measures sure to infuriate many gTLD applicants, but which are consistent with its goal to protect the security and stability of the internet.
They’re also consistent with some of the recommendations put forward by Verisign over the last few months in its campaign to show that new gTLDs pose huge risks.
First, .corp and .home are dead. These two strings have been categorized “high risk” by ICANN, which said:

Given the risk level presented by these strings, ICANN proposes not to delegate either one until such time that an applicant can demonstrate that its proposed string should be classified as low risk

Given the Y2K-scale effort required to mitigate the risks, and the fact that the eventual pay-off wouldn’t compensate for the work, I feel fairly confident in saying the two strings will never be delegated.
Another 80% of the applied-for strings have been categorized “low risk”. ICANN has published a spreadsheet explaining which string falls into which category. Low risk does not mean they get off scot-free, however.
First, all registries for low-risk strings will not be allowed to activate any domain names in their gTLD for 120 days after contract signing.
Second, for 30 days after a gTLD is delegated the new registries will have to reach out to the owners of each IP address that attempts to query names in that gTLD, to try to mitigate the risk of internal name collisions.
This, as applicants will no doubt quickly argue, is going to place them under a massive cost burden.
But their outlook is considerably brighter than that of the remaining 20% of applications, which are categorized as “uncalculated risk” and face a further three to six months of delay while ICANN conducts further studies into whether they’re each “high” or “low” risk strings.
In other words, the new gTLD program is about to see its biggest shake-up since the GAC delivered its Advice in Beijing, adding potentially millions in costs and delays for applicants.
ICANN’s proposed mitigation efforts are now open for public comment.
One has to wonder why the hell ICANN didn’t do this study two years ago.

NTIA alarmed as Verisign hints that it will not delegate new gTLDs

Kevin Murphy, August 5, 2013, Domain Tech

Verisign has escalated its war against competition by telling its government masters that it is not ready to add new gTLDs to the DNS root, raising eyebrows at NTIA.
The company told the US National Telecommunications and Information Administration in late May that the lack of uniform monitoring across the 13 root servers means it would put internet security and stability at risk to start delegating new gTLDs now.
In response, the NTIA told Verisign that its recent position on DNS security is “troubling”. It demanded confirmation that Verisign is not planning to block new gTLDs from being delegated.
The letters (pdf and pdf) were published by ICANN over the weekend, over two months after the first was sent.
Verisign senior VP Pat Kane wrote in the May letter:

we strongly believe certain issues have not been addressed and must be addressed before any root zone managers, including Verisign, are ready to implement the new gTLD Program.
We want to be clearly on record as reporting out this critical information to NTIA unequivocally as we believe a complete assessment of the critical issues remain unaddressed which left unremediated could jeopardize the security and stability of the DNS.

we strongly recommend that the previous advice related to this topic be implemented and the capability for root server system monitoring, instrumentation, and management capabilities be developed and operationalized prior to beginning delegations.

Kane’s concerns were first outlined by Verisign in its March 2013 open letter to ICANN, which also expressed serious worries about issues such as internal name collisions.
Verisign is so far the only root server operator to publicly express concerns about the lack of coordinated monitoring, and many people believe that the company is simply desperately trying to delay competition for its $800 million .com business for as long as possible.
These people note that in early November 2012, Verisign signed a joint letter with ICANN and NTIA that said:

the Root Zone Partners are able to process at least 100 new TLDs per week and will commit the necessary resources to meet all root zone management volume increases associated with the new gTLD program

That letter was signed before NTIA stripped Verisign of its right to increase .com prices every year, depriving it of tens or hundreds of millions of dollars of additional revenue.
Some say that Verisign is raising spurious security concerns now purely because it’s worried about its bottom line.
NTIA is beginning to sound like one of these critics. In its response to the May 30 letter, sent by NTIA and published by ICANN on Saturday, deputy associate administrator Vernita Harris wrote:

NTIA and VeriSign have historically had a strong working relationship, but inconsistencies in VeriSign’s position in recent months are troubling… NTIA fully expects VeriSign to process change requests when it receives an authorization to delegate a new gTLD. So that there will be no doubt on this point, please provide me a written confirmation no later than August 16, 2013 that VeriSign will process change requests for the new gTLD program when authorized to delegate a new gTLD.

Harris said that a system is already in place that would allow the emergency rollback of the root zone, basically ‘un-delegating’ any gTLD that proves to cause a security or stability problem.
This would be “sufficient for the delegation of new gTLDs”, she wrote.
Could Verisign block new gTLDs?
It’s worth a reminder at this point that ICANN’s power over the DNS root is something of a facade.
Verisign, as operator of the master A root server, holds the technical keys to the kingdom. Under its NTIA contract, it only processes changes to the root — such as adding a TLD — when NTIA tells it to.
NTIA in practice merely passes on the recommendations of IANA, the department within ICANN that has the power to ask for changes to the root zone, also under contract with NTIA.
Verisign or NTIA in theory could refuse to delegate new gTLDs — recall that when .xxx was heading to the root the European Union asked NTIA to delay the delegation.
In practice, it seems unlikely that either party would stand in the way of new gTLDs at the root, but the Verisign rhetoric in recent months suggests that it is in no mood to play nicely.
To refuse to delegate gTLDs out of commercial self-interest would be seen as irresponsible, however, and would likely put Verisign’s role as custodian of the root at risk.
That said, if Verisign turns out to be the lone voice of sanity when it comes to DNS security, it is ICANN and NTIA that will ultimately look like they’re the irresponsible parties.
What’s next?
Verisign now has until August 16 to confirm that it will not make trouble. I expect it to do so under protest.
According to the NTIA, ICANN’s Root Server System Advisory Committee is currently working on two documents — RSSAC001 and RSSAC002 — that will outline “the parameters of the basis of an early warning system” addressing Verisign’s concerns about root server management.
These documents are likely to be published within weeks, according to the NTIA letter.
Meanwhile, we’re also waiting for the publication of Interisle Consulting’s independent report into the internal name collision issue, which is expected to recommend that gTLDs such as .corp and .home are put on hold. I’m expecting this to be published any day now.

“Risky” gTLDs could be sacrificed to avoid delay

Kevin Murphy, July 20, 2013, Domain Tech

Google and other members of the New gTLD Applicant Group are happy to let ICANN put their applications on hold in response to security concerns raised by Verisign.
During the ICANN 47 Public Forum in Durban on Thursday, NTAG’s Alex Stamos — CTO of .secure applicant Artemis — said that agreement had been reached that about half a dozen applications could be delayed:

NTAG has consensus that we are willing to allow these small numbers of TLDs that have a significant real risk to be delayed until technical implementations can be put in place. There’s going to be no objection from the NTAG on that.

While he didn’t name the strings, he was referring to gTLDs such as .home and .corp, which were highlighted earlier in the week as having large amounts of error traffic at the DNS root.
There’s a worry, originally expressed by Verisign in April and echoed by independent consultant Interisle this week, that collisions between new gTLDs and widely-used internal network names will lead to data leakage and other security problems.
Google’s Jordyn Buchanan also took the mic at the Public Forum to say that Google will gladly put its uncontested application for .ads — which Interisle says gets over 5 million root queries a day — on hold until any security problems are mitigated.
Two members of the board described Stamos’ proposal as “reasonable”.
Both Stamos and ICANN CEO Fadi Chehade indirectly criticised Verisign for the PR campaign it has recently built around its new gTLD security concerns, which has led to somewhat one-sided articles in the tech press and mainstream media such as the Washington Post.
Stamos said:

What we do object to is the use of the risk posed by a small, tiny, tiny fraction — my personal guess would be six, seven, eight possible name spaces that have any real impact — to then tar the entire project with a big brush. For contracted parties to go out to the Washington Post and plant stories about the 911 system not working because new TLDs are turned on is completely irresponsible and is clearly not about fixing the internet but is about undermining the internet and undermining new gTLDs.

Later, in response to comments on the same topic from the Association of National Advertisers, which suggested that emergency services could fail if new gTLDs go live, Chehade said:

Creating an unnecessary alarm is equally irresponsible… as publicly responsible members of one community, let’s measure how much alarm we raise. And in the trademark case, with all due respect it ended up, frankly, not looking good for anyone at the end.

That’s a reference to the ANA’s original campaign against new gTLDs, which wound up producing not much more than a lot of column inches about an utterly pointless Congressional hearing in late 2011.
Chehade and the ANA representative this time agreed publicly to work together on better terms.

.home gets half a billion hits a day. Could this put new gTLDs at risk?

Kevin Murphy, July 17, 2013, Domain Tech

New gTLDs could be in jeopardy following the results of a study into the security risks they may pose.
ICANN is likely to be told to put in place measures to mitigate the risk of new gTLDs causing problems, and chief security officer Jeff Moss said “deadlines will have to move” if global DNS resolution is put at risk.
His comments referred to the potential for clashes between applied-for new gTLD strings and non-existent TLDs that are nevertheless already widely used on internal networks.
That’s a problem that has been increasingly highlighted by Verisign in recent months. The difference here is that the study’s author does not have a .com monopoly to protect.
Interisle Consulting, which has been hired by ICANN to look into the problem, today released some of its preliminary findings during a session at the ICANN 47 meeting in Durban, South Africa.
The company looked at domain name look-up data collected from one of the DNS root servers over a 48-hour period, in an attempt to measure the potential scope of the clash problem.
Some of its findings are surprising:

  • Of the 1,408 strings originally applied for in the current new gTLD round, only 14 do not currently have any root traffic.
  • Three percent of all requests were for strings that have been applied for in the current round.
  • A further 19% of requests were for strings that could potentially be applied for in future rounds (that is, the TLD was syntactically well-formed and not a banned string such as .local).
  • .home, the most frequently requested invalid TLD, received over a billion queries over the 48-hour period. That’s compared to 8.5 billion for .com.
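To put those 48-hour totals in daily terms, simple arithmetic on the figures above:

    home_48h = 1_000_000_000   # "over a billion" queries for .home across 48 hours
    com_48h = 8_500_000_000    # 8.5 billion queries for .com over the same window

    print(home_48h / 2)          # ~500 million a day: the half-billion-hits-a-day headline
    print(home_48h / com_48h)    # ~0.12: .home alone draws roughly 12% of .com's root query volume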

Here’s a list of the top 17 invalid TLDs by traffic, taken from Interisle’s presentation (pdf) today.
[Chart: most-queried invalid TLD strings]
If the list had been of the top 100 requested TLDs, 13 of them would have been strings that have been applied for in the current round, Interisle CEO Lyman Chapin said in the session.
Here’s the most-queried applied-for strings:
[Chart: most-queried applied-for strings]
Chapin was quick to point out that big numbers do not necessarily equate to big security problems.
“Just occurrence doesn’t tell you a lot about whether that’s a good thing, a bad thing, a neutral thing, it just tells you how often the string appears,” he said.
“An event that occurs very frequently but has no negative side effects is one thing, an event that occurs very infrequently but has a really serious side effect, like a meteor strike — it’s always a product of those two factors that leads you to an assessment of risk,” he said.
For example, the reason .ice appears so prominently on the list seems to be a single electricity producer in Costa Rica, which “for some reason is blasting .ice requests out to the root”, Chapin said.
If the bad requests are only coming from a small number of sources, that’s a relatively simple problem to sort out — you just call up the guy responsible and tell him to sort out his network.
In cases like .home, where much of the traffic is believed to be coming from millions of residential DSL routers, that’s a much trickier problem.
The reverse is also true, however: a small number of requests doesn’t necessarily mean a low-impact risk.
There may be a relatively small number of requests for .hospital, for example, but if the impact is even a single life support machine blinking off… it’s probably best not to delegate that gTLD.
Chapin said that the full report, which ICANN said could be published in about two weeks, does contain data on the number of sources of requests for each invalid TLD. Today’s presentation did not, however.
As well as the source of the request, the second-level domains being requested are an important factor too, but this does not seem to have been addressed by the study.
For example, .home may be getting half a billion requests a day, but if all of those requests are for bthomehub.home — used today by the British ISP BT in its residential routers — the .home registry might be able to eliminate the risk of data leakage by simply giving BT that domain.
Likewise, while .hsbc appears on the list it’s actually been applied for by HSBC as a single-registrant gTLD, so the risk of delegating it to the DNS root may be minimal.
There was no data on second-level domains in today’s presentation and it does not appear that the full Interisle report contains it either. More study may be needed.
Donuts CEO Paul Stahura also took to the mic to ask Chapin whether he’d compared the invalid TLD requests to requests for invalid second-level domains in, say, .com. He had not.
One of Stahura’s arguments, expounded at length in the comment thread on this DI blog post, is that delegating TLDs with existing traffic is little different to allowing people to register .com domains with existing traffic.
So what are Interisle’s recommendations likely to be?
Judging by today’s presentation, the company is going to present a list of risk-mitigation options that are pretty similar to what Verisign has previously recommended.
For example, some strings could be permanently banned, or there could be a “trial run” — what Verisign called an “ephemeral delegation” — for each new gTLD to test for impact before full delegation.
It seems to me that if the second-level request data was available, more mitigation options would be opened up.
ICANN chief security officer Jeff Moss, who was on today’s panel, was asked what he would recommend to ICANN CEO Fadi Chehade today in light of the report’s conclusions.
“I am not going to recommend we do anything that has any substantial SSR impact,” said Moss. “If we find any show-stoppers, if we find anything that suggests impact for global DNS, we won’t do it. It’s not worth the risk.”
Without prompting, he addressed the risk of delay to the new gTLD program.
“People sometimes get hung up on the deadline, ‘How will you know before the deadline?’,” he said. “Well, deadlines can move. If there’s something we find that is a show-stopper, deadlines will have to move.”
The full report, expected to be published in two weeks, will be opened for public comment, ICANN confirmed.
Assuming the report is published on time and has a 30-day comment period, that brings us up to the beginning of September, coincidentally the same time ICANN expects the first new gTLD to be delegated.
ICANN certainly likes to play things close to the wire.