New gTLD applicants get a way to avoid name collision delay

Kevin Murphy, October 9, 2013, Domain Tech

ICANN has given blessed relief to many new gTLD applicants by wiping potentially months off their path to delegation.

Its New gTLD Program Committee this week adopted a new “New gTLD Collision Occurrence Management Plan” which aims to tackle the problem of clashes between new gTLDs and names used on private networks.

The good news is that the previous categorization of strings according to risk, which would have delayed “uncalculated risk” gTLDs by months pending further study, has been scrapped.

The two “high risk” strings — .home and .corp — don’t catch a break, however. ICANN says it will continue to refuse to delegate them “indefinitely”.

For everyone else, ICANN said it will conduct additional studies into the risk of name collisions, above and beyond what Interisle Consulting already produced.

The study will take into account not only the frequency with which new gTLDs currently generate NXDOMAIN traffic at the DNS root, but also the number of second-level domains queried, the diversity of requesting sources, and other factors.
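To make those factors concrete, here’s a minimal sketch of the kind of per-TLD aggregation such a study implies, assuming a log of (source IP, queried name) pairs drawn from root server NXDOMAIN traffic; the input format and field names are illustrative guesses, not anything from ICANN’s actual methodology.

```python
# Illustrative only: aggregate hypothetical root NXDOMAIN logs into the
# three factors the study weighs -- query volume, distinct SLDs queried,
# and diversity of requesting sources -- per applied-for TLD.
from collections import defaultdict

def collision_metrics(log):
    """log: iterable of (source_ip, qname) tuples."""
    queries = defaultdict(int)   # total NXDOMAIN queries per TLD
    slds = defaultdict(set)      # distinct second-level labels per TLD
    sources = defaultdict(set)   # distinct requesting source IPs per TLD
    for ip, qname in log:
        labels = qname.rstrip(".").lower().split(".")
        tld = labels[-1]
        queries[tld] += 1
        if len(labels) >= 2:
            slds[tld].add(labels[-2])
        sources[tld].add(ip)
    return {t: (queries[t], len(slds[t]), len(sources[t])) for t in queries}

sample = [("192.0.2.1", "mail.corp."), ("192.0.2.1", "wpad.corp."),
          ("198.51.100.7", "printer.home.")]
print(collision_metrics(sample))  # {'corp': (2, 2, 1), 'home': (1, 1, 1)}
```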

Any new gTLD applicant that does not wish to wait for this study will be able to proceed to delegation without delay, but only if they block huge numbers of second-level domains at launch.

The registries will have to block every SLD that was queried in their gTLD according to the Day in the Life of the Internet (DITL) data that Interisle used in its study.

This list will vary by TLD; in the most severe cases it is likely to extend to tens of thousands of names, though in many cases a few thousand is more likely.

Fortunately, studies conducted by the likes of Donuts and Neustar indicate that many of these SLDs — maybe even the majority — are likely to be invalid strings, such as those containing an underscore or other non-DNS character, or the random 10-character strings of gibberish that Google Chrome generates.

In other words, the actual number of potentially salable domains that registries will have to block may turn out to be much lower than it appears at first glance.
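Neither company has published its filtering code, but the weeding-out they describe is easy to sketch. Here’s a hedged Python approximation, assuming two heuristics taken from the descriptions above: labels containing characters invalid in DNS hostnames, and 10-character all-letter strings of the kind Chrome generates.

```python
# A rough sketch of filtering a DITL-derived SLD block list; the rules
# are my own guesses at the heuristics, not either company's code.
import re

# Valid hostname-style "LDH" label: letters, digits and hyphens only,
# with no leading or trailing hyphen.
LDH = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$")

def looks_like_chrome_probe(label):
    # Treat 10-character all-letter labels as Chrome's random probes --
    # a crude heuristic based on the description above.
    return len(label) == 10 and label.isalpha()

def potentially_salable(slds):
    """Keep only labels a registry might actually have to withhold."""
    return [s for s in slds
            if LDH.match(s) and not looks_like_chrome_probe(s)]

print(potentially_salable(["mail", "_ldap", "qxkzvbnmwt", "intranet"]))
# ['mail', 'intranet']
```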

Each SLD will have to be blocked in such a way that it continues to return NXDOMAIN responses, as they all do today.
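Verifying that behavior is simple enough. Here’s a minimal check using the dnspython library, with hypothetical labels under the reserved .example TLD standing in for a real new gTLD.

```python
# Confirm a blocked SLD still returns NXDOMAIN; names are hypothetical.
import dns.resolver

def still_nxdomain(name):
    try:
        dns.resolver.resolve(name, "A")
        return False                    # the name resolved: block failed
    except dns.resolver.NXDOMAIN:
        return True                     # correct: the name does not exist
    except (dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return False                    # name exists in some form, or broken

for sld in ["printer", "intranet"]:
    print(sld, still_nxdomain(f"{sld}.example"))
```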

Because the DITL data represented a 48-hour snapshot in May 2013, and may not include every potentially affected string, ICANN is also proposing to give organizations a way to:

report and request the blocking of a domain name (SLD) that causes demonstrably severe harm as a consequence of name collision occurrences.

The process will allow the deactivation (SLD removal from the TLD zone) of the name for a period of up to two (2) years in order to allow the affected party to effect changes to its network to eliminate the DNS request leakage that causes collisions, or mitigate the harmful impact.

One has to wonder if any trademark lawyers reading this will think: “Ooh, free defensive registration!” It will be interesting to see if any of them give it a cheeky shot.

I’ve got a feeling that most new gTLD applicants will want to take ICANN up on its offer. It’s not an ideal solution for them, but it does give them a way to get into the root relatively quickly.

There’s no telling what ICANN’s additional studies will find, but there’s a chance it could be negative for their string(s) — getting delegated at least mitigates the risk of never getting delegated.

The new ICANN proposal may in some cases interfere with their plans to market and use their TLDs, however.

Take a dot-brand such as .cisco, which the networking company has applied for. Its block list is likely to have about 100,000 strings on it, increasing the chances that useful, brandable SLDs are going to be taken out of circulation for a while.

ICANN is also proposing to conduct an awareness-raising campaign, using the media, to let network operators know about the risks that new gTLDs may present to their networks.

Depending on how effective this is, new registries may be able to forget about getting positive column inches for their launch — if a journalist is handed a negative angle for a story on a plate, they’ll take it.

Verisign targets bank claims in name collisions fight

Kevin Murphy, September 15, 2013, Domain Tech

Verisign has rubbished the Commonwealth Bank of Australia’s claim that its dot-brand gTLD, .cba, is safe.

In a lengthy letter to ICANN today, Verisign senior vice president Pat Kane said that, contrary to CBA’s claims, the bank is only responsible for about 6% of the traffic .cba sees at the root.

It’s the latest volley in the ongoing fight about the security risks of name collisions — the scenario where an applied-for gTLD string is already in broad use on internal networks.

CBA’s application for .cba has been categorized as “uncalculated risk” by ICANN, meaning it faces more reviews and three to six months of delay while its risk profile is assessed.

But in a letter to ICANN last month, CBA said “the cause of the name collision is primarily from CBA internal systems” and “it is within the CBA realm of control to detect and remediate said systems”.

The bank was basically claiming that its own computers use DNS requests for .cba already, and that leakage of those requests onto the internet was responsible for its relatively high risk profile.

At the time we doubted that CBA had access to the data needed to draw this conclusion and Verisign said today that a new study of its own “shows without a doubt that CBA’s initial conclusions are incorrect”.

Since the publication of Interisle Consulting’s independent review into root server error traffic — which led to all applied-for strings being split into risk categories — Verisign has evidently been carrying out its own study.

While Interisle used data collected from almost all of the DNS root servers, Verisign’s seven-week study only looked at data gathered from the A-root and J-root, which it manages.

According to Verisign, .cba gets roughly 10,000 root server queries per day — 504,000 in total over the study window — and hardly any of them come from the bank itself.

Most appear to be from residential apartment complexes in Chiba, Japan, where network admins seem to have borrowed the local airport code — also CBA — to address local devices.

About 80% of the requests seen come from devices using DNS Service Discovery services such as Bonjour, Verisign said.

Bonjour is an Apple-created technology that allows computers to use DNS to automatically discover other LAN-connected devices such as printers and cameras, making home networking a bit simpler.
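The leakage mechanism is easy to picture. Wide-area DNS-SD (RFC 6763), which Bonjour implements, builds its browsing queries from the network’s configured DNS search domain, so a network using “cba” as its search domain emits PTR queries like those sketched below. The service types shown are standard ones; the .cba suffix is the hypothetical leak.

```python
# Illustrative DNS-SD query names that would escape to the root from a
# network whose search domain is "cba"; all of them return NXDOMAIN
# today, which is exactly the behavior delegation of .cba would change.
search_domain = "cba"

dns_sd_queries = [
    f"b._dns-sd._udp.{search_domain}",          # enumerate browsing domains
    f"_services._dns-sd._udp.{search_domain}",  # enumerate service types
    f"_ipp._tcp.{search_domain}",               # discover network printers
    f"_airplay._tcp.{search_domain}",           # discover AirPlay devices
]

for q in dns_sd_queries:
    print("PTR?", q)
```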

Another source of the .cba traffic is antivirus software from McAfee, an Intel subsidiary, which Verisign said uses DNS to check whether code is virus-free before executing it.

While error traffic for .cba was seen from 170 countries, Verisign said that Japan — notable for not being Australia — was the biggest source, with almost 400,000 queries (79% of the total). It said:

Our measurement study reveals evidence of a substantial Internet-connected infrastructure in Japan that lies beneath the surface of the public-facing internet, which appears to rely on the non-resolution of the string .CBA.

This infrastructure appears hierarchical and seems to include municipal and private administrative and service networks associated with electronic resource management for office and residential building facilities, as well as consumer devices.

One apartment block in Chiba is responsible for almost 5% of the daily .cba queries — about 500 per day on average — according to Verisign’s letter, though there were 63 notable sources in total.

ICANN’s proposal for reducing the risk of these name collisions causing problems would require CBA, as the registry, to hunt down and warn organizations of .cba’s impending delegation.

Verisign reiterates the point made by RIPE NCC last month: this would be quite difficult to carry out.

But it does seem that Verisign has done a pretty good job tracking down the organizations that would be affected by .cba being delegated.

The question that Verisign’s letter and presentation do not address is: what would happen to these networks if .cba were delegated?

If .cba is delegated, what will McAfee’s antivirus software do? Will it crash the user’s computer? Will it allow unsafe code to run? Will it cause false positives, blocking users from legitimate content?

Or will it simply fail gracefully, causing no security problems whatsoever?

Likewise, what happens when Bonjour expects .cba to not exist and it suddenly does? Do Apple computers start leaking data about the devices on their local network to unintended third parties?

Or does it, again, cause no security problems whatsoever?

Without satisfactory answers to those questions, there’s no way to know whether the “risk” is really a risk at all; for all anyone knows, ICANN could delegate these strings with little to no ill effect.

Answering those questions will of course take time, which means delay, which is not something most applicants want to hear right now.

Verisign’s study targeted CBA because CBA singled itself out by claiming to be responsible for the .cba error traffic, not because CBA is a client of rival registry Afilias.

The bank can probably thank Verisign for its study, which may turn out to be quite handy.

Still, it would be interesting to see Verisign conduct a similar study on, say, .windows (Microsoft), .cloud (Symantec) or .bank (Financial Services Roundtable), which are among the 35 gTLDs with “uncalculated” risk profiles that Verisign promised to provide back-end registry services for before it decided that new gTLDs were dangerous.

You can read Verisign’s letter and presentation here. I’ve rotated the PDF to make the presentation more readable here.

Artemis plans name collision conference next week

Kevin Murphy, August 16, 2013, Domain Tech

Artemis Internet, the NCC Group subsidiary applying for .secure, is to run a day-long conference devoted to the topic of new gTLD name collisions in San Francisco next week.

Google, PayPal and DigiCert are already lined up to speak at the event, and Artemis says it expects 60 to 70 people, many of them from major new gTLD applicants, to show up.

The free-to-attend TLD Security Forum will discuss the recent Interisle Consulting report into name collisions, which compared the problem in some cases to the Millennium Bug and recommended extreme caution when approving new gTLDs.

Brad Hill, head of ecosystem security at PayPal, will speak to “Paypal’s Concerns and Recommendations on new TLDs”, according to the agenda.

That’s notable because PayPal is usually positioned as being aligned with the other side of the debate — it’s the only company to date that Verisign has been able to quote in support of its own concerns about name collisions.

The Interisle report led to ICANN recommending months of delay for hundreds of new gTLD strings — basically every string that already gets more daily root server error traffic than legitimate queries for .sj, the existing TLD with the fewest look-ups.

The New TLD Applicants Group issued its own commentary on these recommendations, apparently drafted by Artemis CTO Alex Stamos, earlier this week, calling for all strings except .home and .corp to be treated as low risk.

NTAG also said in its report that it has been discussing with SSL certificate authorities ways to potentially speed up risk-mitigation for the related problem of internal name certificate collisions, so it’s also notable that DigiCert’s Dan Timpson is slated to speak at the Forum.

The event may be webcast for those unable to attend in person, according to Artemis. If it is, DI will be “there”.

On the same topic, ICANN yesterday published a video interview with DNS inventor Paul Mockapetris, in which he recounted some name collision anecdotes from the Mesolithic period of the internet. It’s well worth a watch.

NTAG rubbishes new gTLD collision risk report

Kevin Murphy, August 15, 2013, Domain Policy

The New gTLD Applicants Group has slated Interisle Consulting’s report into the risk of new gTLDs causing security problems on the internet, saying the problem is “overstated”.

The group, which represents applicants for hundreds of gTLDs and has a non-voting role in ICANN’s GNSO, called on ICANN to reclassify hundreds of “Uncalculated” risk strings as “Low” risk, meaning they would not face substantial delays to, or uncertainty about, their eventual delegation.

But NTAG said it “agreed” that the high-risk .corp and .home “should be delayed while further studies are conducted”. The current ICANN proposal is actually to reject both of these strings.

NTAG was responding to ICANN’s proposal earlier this month to delay 523 applications (for 279 strings) by three to six months while further studies are carried out.

The proposal was based on Interisle’s study of DNS root server logs, which showed many millions of daily queries for gTLDs that currently do not exist but have been applied for.

The worry is that delegating those strings would cause problems such as downtime or data leakage, where sensitive information intended for a recipient on the same local network would be sent instead to a new gTLD registry or one of its (possibly malicious) registrants.

NTAG reckons the risk presented by Interisle has been overblown, and it presented a point-by-point analysis of its own. It called for everything except .corp and .home to be categorized “Low” risk, saying:

We recognize that a small number of applied for names may possibly pose a risk to current operations, but we believe very strongly that there is no quantitative basis for holding back strings that pose less measurable threat than almost all existing TLDs today. This is why we urge the board to proceed with the applications classified as “Unknown Risk” using the mitigations recommended by staff for “Low Risk” strings. We believe the 80% of strings classified as “Low Risk” should proceed immediately with no additional mitigations.

The group pointed to a recent analysis by Verisign (which, contrarily, was trying to show that new gTLDs should be delayed) which included data about previous new gTLD delegations.

That report (pdf) said that .xxx was seeing 4,018 look-ups per million queries at the DNS root (PPM) before it was delegated. The number for .asia was 2,708.

If you exclude .corp and .home, both of those PPM numbers are multiples larger than the equivalent measures of query volume for every applied-for gTLD today, also according to Verisign’s data.

NTAG said:

None of these strings pose any more risk than .xxx, .asia and other currently operating TLDs.

the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.
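For anyone unfamiliar with the metric, PPM is just the share of all root-server queries asking for a given TLD, scaled to a million. A quick worked example, with the total query volume invented purely for illustration:

```python
# PPM: queries for one TLD per million total root-server queries.
def ppm(tld_queries, total_queries):
    return tld_queries / total_queries * 1_000_000

# If a root saw an assumed 2 billion queries in a day, of which
# 8,036,000 asked for .xxx names, the pre-delegation figure quoted
# above falls out directly:
print(ppm(8_036_000, 2_000_000_000))  # 4018.0
```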

Verisign’s report, which sought to provide a more qualitative risk analysis based on some data-supported guesses about where the error traffic is coming from and why, anticipated this interpretation.

Verisign said:

This could indicate that there is nothing to worry about when adding new TLDs, because there was no global failure of DNS when this was done before. Alternately, one might conclude that traffic volumes are not the only indicator of risk, and the semantic meaning of strings might also play a role. We posit that in some cases, those strings with semantic meanings, and which are in common use (such as in speech, writing, etc.) pose a greater risk for naming collision.

The company spent most of its report making somewhat tenuous correlations between its data (such as a relatively large number of requests for .medical from Japanese IP addresses) and speculative impacts (such as “undiagnosed system failures” at “a healthcare provider in Japan”).

NTAG, by contrast, is playing down the potential for negative outcomes, saying that in many cases the risks introduced by new gTLDs are no different from collision risks at the second level in existing TLDs.

Just as the NTAG would not ask ICANN to halt .com registrations while a twelve month study is performed on these problems, we believe there is no reason to introduce a delay in diversifying the Internet’s namespace due to these concerns.

While it stopped short of alleging shenanigans this time around, NTAG also suggested that future studies of root server error traffic could be gamed if botnets were engaged to crapflood the roots.

Its own mitigation plan, which addresses Interisle’s specific concerns, says that most of the reasons that non-existent TLDs are being looked up are either not a problem or can be easily mitigated.

For example, it says that queries for .youtube that arrived in the form of a request for “www.youtube” are probably browser typos and that there’s no risk for users if they’re taken to the YouTube dot-brand instead of youtube.com.

In another example, it points out that requests for “.cisco” or “.toshiba” without any second-level domains won’t resolve anyway, if dotless domains are banned in those TLDs. (NTAG, which has influential members in favor of dotless domains, stopped short of asking for a blanket ban.)
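NTAG’s bucketing of error traffic can be sketched as a toy classifier; the bucket names and rules below are my own illustration of the reasoning, not anything from the group’s actual mitigation plan.

```python
# Sort root NXDOMAIN queries for an applied-for string into the kinds
# of buckets NTAG describes; rules are illustrative only.
def classify(qname, tld):
    labels = qname.rstrip(".").lower().split(".")
    if labels == [tld]:
        return "dotless"        # won't resolve if dotless domains are banned
    if labels == ["www", tld]:
        return "browser-typo"   # e.g. "www.youtube" meant as youtube.com
    return "other"              # needs case-by-case review

for q in ["youtube.", "www.youtube.", "mail.youtube."]:
    print(q, "->", classify(q, "youtube"))
```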

The Interisle report, and ICANN’s proposal to deal with it, are open for public comment until September 17. NTAG’s response is remarkably quick off the mark, for guessable reasons.

Donuts, Uniregistry and Famous Four respond to ICANN’s new gTLD security bombshell

Kevin Murphy, August 6, 2013, Domain Registries

Following the shock news this morning that ICANN wants to delay hundreds of new gTLD applications due to potential security risks, we pinged a few of the biggest applicants for their initial reactions.

Donuts, Uniregistry and Famous Four Media, which combined are responsible for over a fifth of all applications, have all responded so far, so we’re printing their statements here in full.

As a reminder, two reports published by ICANN today a) strongly warn against delegating so-called “dotless” domains and b) present significant evidence that “internal name collisions” are a real and present danger to the security and stability of many private networks.

ICANN, in response to the internal name collision issue, proposed to delay 20% of all new gTLD applications for three to six more months while more research is carried out.

It also wants to ask new gTLD registries to conduct outreach to internet users potentially affected by their delegated gTLD strings.

Of the three, Donuts seems most upset. It sent us the following statement:

One has to wonder about the timing of these reports and the motivations behind them. Donuts believes, and our own research confirms satisfactorily to us, that dotless domains and name collision are not threatening to the stability and security of the domain name system.

Name collisions, such as the NxD (in the technical parlance) collisions studied in this report, happen every day in .com, yet the study did not quantify those and Verisign does not block those names from being registered.

We’re concerned about false impressions being deliberately created and believe the reports are commercially or competitively motivated.

There is little reason to pre-empt dotless domains now when there are ICANN processes in place to evaluate them in due course. We don’t believe that ICANN resources need to be deployed at this point on understanding the potential innovations of possible uses nor any security harms.

We also think that name collision is an overstated issue. Rather than take the overdone step of halting or delaying these TLDs, if the issue really is such a concern, it would be wiser to focus on the second-level names where a conflict could occur.

As the NTIA recently wrote, Verisign’s inconsistencies on technical issues are very troubling. These issues have been thoroughly studied for some time. It’s far past due to conclude this eight-year process and move to delegation.

As I haven’t previously heard any reason to doubt Interisle Consulting’s impartiality, or to question its motivation in writing the name collisions report, I asked Donuts for clarification, but the company declined to elaborate.

Interisle has been working with ICANN for some time on various technical studies and is also one of the new gTLD program’s independent evaluators, responsible for registry services evaluations.

Uniregistry CEO Frank Schilling was also unhappy with the report. He sent the following statement:

We are deeply dismayed by this new report, both by its substance and its timing. On the substance, the concerns addressed by the report relate, primarily if not solely, to solvable problems created by third-parties using the DNS in non-standard ways. We expect that any problems will be addressed quickly by the companies and individuals that caused them in the first place.

On ICANN’s timing, it is, coming just as the first new gTLDs are prepared to launch, very late and, quite obviously, highly disruptive to the long-standing business plans of the companies that relied on ICANN’s guidebook and stated timelines. Uniregistry believes that the best approach is to move forward with the launch of all new gTLDs on the existing schedule.

Finally, Famous Four Media is slightly more relaxed about the situation, judging by the statement it sent us:

Famous Four Media’s primary concern is the security and stability of the Internet. Since this is in the interest of all parties involved in the new gTLD program from registries to registrants and all in between Famous Four Media welcomes these proposals.

Whilst the latest report, and the consequent ICANN proposals, will inevitably cause delays and additional costs in the launches of new gTLDs, Famous Four Media does not believe it will impact its go-to-market plans significantly. The majority of our TLD strings are considered “low risk” and we see this in a very positive light, although other applicants might not be able to afford to be as sanguine.

According to the DI PRO New gTLD Application Tracker, which has been updated with the risk levels ICANN says each applied-for gTLD poses, 18 of Famous Four’s 60 original applications are in the riskiest two categories, compared to 23 of Uniregistry’s 54 and 102 of Donuts’ 307.
