Angry gTLD applicants lay into ANA and Verisign “bullshit”

Kevin Murphy, October 2, 2013, Domain Services

They’re as mad as hell and they’re not going to take it any more.

New gTLD applicants yesterday laid into the Association of National Advertisers and Verisign with gusto, accusing them of seeking to delay the program for commercial reasons using security as a smokescreen.

The second TLD Security Forum in Washington DC was marked by a heated public argument between applicants and their back-end providers on one side and the ANA’s representatives on the other.

The question was, of course, name collisions: will new gTLDs cause unacceptable security risks — maybe even threatening life — when they are delegated?

ANA vice president Dan Jaffe and outside counsel Amy Mushahwar had walked into the lion’s den, to their credit, to put forth the view that enterprises may face catastrophic IT failures if new gTLDs show up in the DNS root.

What they got instead was a predictably hostile audience and a barrage of criticism from event organizer Alex Stamos, CTO of .secure applicant Artemis Internet, and Neustar VP Jeff Neuman.

Stamos was evidently already having a Bad Day before the ANA showed up for the afternoon sessions.

During his morning presentation, he laid the blame for certain types of name collision risk squarely on the “dumb” enterprises that configure their internal name servers in insecure ways. He said:

Any company that is using any of these domains, they’re all screwing up. Anyone who’s admitting these collisions is making a mistake. It’s a bad mistake, it’s a common mistake, but that doesn’t make it right. They’re opening themselves up to possible horrible security flaws that have nothing to do with the new gTLD program.

There is a mechanism by which you can split DNS resolution in a secure manner on Windows. But unless you do that, you’re in trouble, you’re creating a security hole for yourself. So stop complaining and delaying the whole new gTLD program, because you’re dumb, honestly. These are people who are going to have a problem whether new gTLDs exist or not. Let’s be realistic about this: it’s not about security, it’s about other commercial interests.

That’s of course a reference to Verisign, which is suspected of pressing the name collisions issue in order to prevent or delay competition to .com, and the ANA, which tried to get the program delayed on trademark grounds before it discovered collisions earlier this year.

Executives from Verisign, which put the ANA onto the name collision scent in the first place, apparently lacked the cojones to show up and defend the company’s position in person.
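The “mechanism” Stamos referred to is, in essence, split-horizon DNS: internal names live under a suffix the company controls and are answered by an internal resolver, so queries for them never leak to the public root. Here is a rough sketch of the routing decision in Python (the suffix and resolver addresses are hypothetical, and this is a general illustration rather than the specific Windows feature he had in mind):

```python
# A rough sketch of split-horizon resolution (hypothetical suffix and
# addresses; not the specific Windows mechanism Stamos mentioned).
# Requires dnspython: pip install dnspython
import dns.resolver

INTERNAL_SUFFIX = ".corp.example.com"   # internal names under a domain you own
INTERNAL_NS = ["10.0.0.53"]             # internal resolver for that suffix
PUBLIC_NS = ["8.8.8.8"]                 # everything else goes to a public resolver

def resolve_a(name: str) -> list[str]:
    resolver = dns.resolver.Resolver(configure=False)
    # Internal queries never reach the public root, so a newly delegated
    # gTLD cannot collide with them; all other lookups behave normally.
    if name.rstrip(".").endswith(INTERNAL_SUFFIX):
        resolver.nameservers = INTERNAL_NS
    else:
        resolver.nameservers = PUBLIC_NS
    return [rr.address for rr in resolver.resolve(name, "A")]
```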

Stamos was preaching mainly to the choir at this point. The fireworks didn’t start until Jaffe and Mushahwar arrived for their panel a few hours later.

The ANA’s point of view, which they both made pretty clearly, is that there seems to be a risk that things could go badly wrong for enterprises if they’re running internal names that clash with applied-for gTLDs.

They’ve got beef with ICANN for running a “not long enough” comment period on the topic primarily during the vacation month of August, which didn’t give big companies enough time to figure out whether they’re at risk and obtain the necessary sign-off on disclosing this fact.

In short, the ANA wants more time — many more months — for its members and others to look at the issue before new gTLDs are delegated.

Mushahwar dismissed the argument that the event-free launches of .asia, .xxx and others showed that gTLD delegations don’t cause any problems, saying:

Let me admit right now: DNS collision is not new, it’s been around since the beginning of the internet… what is new is the velocity of change expected within the next year to 18 months.

I really dismiss the arguments that people are making on the public record saying we’ve dealt with this issue before, we’ve dealt with these issues, view the past TLDs as your test runs. We have never had this velocity of change happening.

The ANA seems to believe that the risk and the consequences are substantial, talking about people dying because their voice over IP fails or their electricity supply gets cut off.

But other speakers weren’t buying it.

Stamos was first to the mic to challenge Mushahwar and Jaffe, saying their concerns are “mostly about IP and other commercial interests”, rather than sound technical analysis.

He pointed to letters sent to ICANN’s comment periods in support of the ANA’s position that were largely signed by IP lawyers. Security guys at these companies were not even aware of the letters, he said.

The internet is this crazy messy place where all kinds of weird things happen… if this is the mode that the internet goes forward — you have to prove everything you do has absolutely no risk of impacting anyone connected to the internet — then that’s it, we might as well call it done. We might as well freeze the internet as it is right now.

If you want to stall the program because you have a problem with IP rights or whatever I think that’s fine, but don’t try to grab hold of this thing and blow it up under a microscope and say “needs more study, needs more study”. For anything we do on the internet we can make that argument.

Any call for “we need to study every single possible impact for all several billion devices connected to the internet” is honestly kinda bullshit… it really smacks to me of lawyers coming in and telling engineers how to do their job.

Mushahwar pointed out in response that she’s a “security attorney, not an IP attorney” and that her primary concern is business continuity for large businesses, not trademark protection.

A few minutes later Neustar’s Neuman was equally passionate at the mic, clashing with Mushahwar more than once.

It all got a bit Fox News, with frequent crosstalk and “if you’d let me continue” and “I’ll let you finish” raising tempers. Neuman at one point accused Mushahwar of “condescending to the entire audience”.

His position, like Stamos’s before him, was that new gTLD applicants have looked at the same data as Interisle Consulting in its original report, and found that with the exception of .home, .corp and .mail, the risks posed by new gTLDs are minor and can be easily mitigated.

He asked the ANA to present some concrete examples of things that could go wrong.

“You guys have come to the table with a bunch of rhetoric, not supported by facts,” Neuman said.

He pointed to Neustar’s own research into name collisions, which used the same data (more or less) as Interisle and Verisign and concluded that the risk of damaging effects is low.

The two sides of the debate were never going to come to any agreements yesterday, and they didn’t. But in many respects the ANA and applicants are on the same page.

Stamos, Neuman and others demanded examples of real-world problems that will be encountered when specific gTLDs are delegated and the ANA said basically: “Sure, but we need more time to do that”.

But more time means more delay, of course, which isn’t what the domain name industry wants to hear.

Donuts’ trademark block list goes live, pricing revealed

Kevin Murphy, September 25, 2013, Domain Registries

Donuts’ Domain Protected Marks List, which gives trademark owners the ability to defensively block their marks across the company’s whole portfolio of gTLDs, has gone live.

The service goes above and beyond what new gTLD registries are obliged to offer by ICANN.

As a “block” service, in which names will not resolve, it’s reminiscent of the Sunrise B service offered by ICM Registry at .xxx’s launch, which was praised and cursed in equal measure.

But with DPML, trademark owners also have the ability to block “trademark+keyword” names, so Pepsi, for example, could block “drinkpepsi” or “pepsisucks”.

It’s not a wildcard, however. Companies would have to pay for each trademark+keyword string they wanted blocking.

DPML covers all of the gTLDs that Donuts plans to launch, which could be as many as 300. It currently has 28 registry agreements with ICANN and 272 applications remaining in various stages of evaluation.

Trademark owners will only be able to sign up to DPML if their marks are registered with the Trademark Clearinghouse under the “use” standard required to participate in Sunrise periods.

Donuts is also excluding an unspecified number of strings it regards as “premium”, so the owners of marks matching those strings will be out of luck, it seems.

Blocks will be available for a minimum of five years and a maximum of 10 years. After expiration, they can be renewed with minimum terms of one year.

The company has not disclosed its wholesale pricing, but registrars we’ve found listing the service on their web sites so far (101domain and EnCirca) price it between $2,895 and $2,995 for a five-year registration.

It looks pricey, but it’s likely to be extraordinarily good value compared to the alternative of Sunrise periods.

If Donuts winds up with 200 gTLDs in its portfolio, a $3,000 price tag ($600 per year) works out to a defensive registration cost of $3 per domain per gTLD per year.

If it winds up with all 300, the price would be $2.
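The arithmetic behind those figures, for anyone who wants to check it:

```python
# Per-domain cost of a five-year DPML block at the ~$3,000 retail
# prices quoted above, spread across Donuts' possible portfolio sizes.
price_usd, years = 3_000, 5
for gtlds in (200, 300):
    per_domain = price_usd / years / gtlds
    print(f"{gtlds} gTLDs: ${per_domain:.2f} per domain per gTLD per year")
# 200 gTLDs: $3.00 per domain per gTLD per year
# 300 gTLDs: $2.00 per domain per gTLD per year
```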

That’s in line (assuming non-budget pricing comparisons and registrars’ DPML markup) with Donuts co-founder Richard Tindal’s statement earlier this year that DPML would be 5% to 10% the cost of a regular registration.

Tindal also spoke then about a way for rival trademark owners to “unblock” matching names, so Apple the record company could unblock a DPML on apple.music obtained by Apple the computer company, for example.

Donuts is encouraging trademark owners to participate before its first gTLDs go live, which it expects to happen later this year.

.sex and two other gTLDs pass evaluation

Kevin Murphy, September 14, 2013, Domain Registries

Three new gTLD applications passed Initial Evaluation this week, including one of the two applications for .sex.

The approved .sex bid belongs to Internet Marketing Solutions, which is competing with .xxx operator ICM Registry.

The other applications passing IE this week are .leclerc, a French dot-brand, and .aquitaine, a French geographic region.

There are only 20 applications left without results, almost all of which — apart from a generic bid for .bar and Google’s controversial “dotless” .search — appear to be dot-brands.

.xxx sales spike 1,000% during discount

Kevin Murphy, September 5, 2013, Domain Registries

ICM Registry saw an over 1,000% spike in .xxx domain name registrations in May, during which it offered new registrations at a steep discount to its regular price.

The numbers were still relatively small. The registry saw 13,136 adds during the period, compared to 1,131 in April and 1,836 in May 2012, according to ICANN reports published today.

Average add-years rose sequentially from 1.34 to 1.88 (compared to a gTLD industry average of 1.23), according to TLD Health Check, with total add-years up over 1,500% to 24,663.

[Chart: TLD Health Check]
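Those figures are easy to verify: the average is just total add-years divided by adds, and April’s implied total gives the size of the spike.

```python
# Sanity-checking the May figures reported above.
may_adds, may_add_years = 13_136, 24_663
print(round(may_add_years / may_adds, 2))    # 1.88 average add-years

# April's total add-years isn't reported directly; it's implied by
# 1,131 adds at an average of 1.34.
april_add_years = 1_131 * 1.34               # ~1,516
print(f"{may_add_years / april_add_years - 1:.0%} increase")  # ~1527%, i.e. "over 1,500%"
```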

For the whole of May, ICM offered .xxx domains — which usually carry a registry fee of $62 — for the same price as .com domains. The promotion applied to any length of registration, from one to 10 years.

There were 747 10-year registrations in May. A small number, but exactly 100 more than ICM saw during its first month of general availability in December 2011. There were similar numbers of three- and five-year sales, and over 1,100 two-year registrations.

The company ended the month with 120,409 domains under management.

Name collisions comments call for more gTLD delay

Kevin Murphy, August 29, 2013, Domain Registries

The first tranche of responses to Interisle Consulting’s study into the security risks of new gTLDs, and ICANN’s proposal to delay a few hundred strings pending more study, is in.

Comments filed with ICANN before the public comment deadline yesterday fall basically into two camps:

  • Non-applicants (mostly) urging ICANN to proceed with extreme caution. Many are asking for more time to study their own networks so they can get a better handle on their own risk profiles.
  • Applicants shooting holes in Interisle’s study and ICANN’s remediation plan. They want ICANN to reclassify everything except .home and .corp as low risk, removing delays to delegation and go-live.

They were responding to ICANN’s decision to delay 521 “uncalculated risk” new gTLD applications by three to six months while further research into the risk of name collisions — where a new gTLD could conflict with a TLD already used by internet users in a non-standard way — is carried out.
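To make the failure mode concrete: a network using an internal suffix such as .corp relies on the public root answering NXDOMAIN for that string, so lookups fall through to local resolution. Delegation changes the root’s answer, and that change is the collision. A minimal probe of the root’s current view, assuming the dnspython library, might look like this:

```python
# Is a given TLD delegated in the public DNS? Internal names under an
# undelegated string depend on the NXDOMAIN answer this checks for.
import dns.resolver

def is_delegated(tld: str) -> bool:
    try:
        dns.resolver.resolve(tld.rstrip(".") + ".", "NS")
        return True
    except dns.resolver.NXDOMAIN:
        return False

print(is_delegated("corp"))  # False while .corp remains undelegated
```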

Proceed with caution

Many commenters stated that more time is needed to analyse the risks posed by name collisions, noting that Interisle studied primarily the volume of queries for non-existent domains, rather than looking deeply into the consequences of delegating colliding gTLDs.

That was a point raised by applicants too, but while applicants conclude that this lack of data should lead ICANN to lift the current delays, others believe that it means more delays are needed.

Two ICANN constituencies seem to generally agree with the findings of the Interisle report.

The Internet Service Providers and Connectivity Providers constituency asked for the public comment period to be put on hold until further research is carried out, or for at least 60 days. It noted:

corporations, ISPs and connectivity providers may bear the brunt of the security and customer-experience issues resulting from adverse (as yet un-analyzed) impacts from name collision

these issues, due to their security and customer-experience aspects, fall outside the remit of people who normally participate in the ICANN process, requiring extensive wide-ranging briefings even in corporations that do participate actively in the ICANN process

The At-Large Advisory Committee concurred that the Interisle study does not currently provide enough information to fully gauge the risk of name collisions causing harm.

ALAC said it was “in general concurrence with the proposed risk mitigation actions for the three defined risk categories” anyway, adding:

ICANN must assure that such residual risk is not transferred to third parties such as current registry operators, new gTLD applicants, registrants, consumers and individual end users. In particular, the direct and indirect costs associated with proposed mitigation actions should not have to be borne by registrants, consumers and individual end users. The Board must err on the side of caution

Several individual stakeholders agreed with the ISPCP that they need more time to look at their own networks. The Association of National Advertisers said:

Our member companies are working diligently to determine if DNS Clash issues are present within their respective networks. However the ANA had to communicate these issues to hundreds of companies, after which these companies must generate new data to determine the potential service failures on their respective networks.

The ANA wants the public comment period extended until November 22 to give its members more time to gather data.

While the ANA can always be relied upon to ask for new gTLDs to be delayed, its request was echoed by others.

General Electric called for three types of additional research:

  • Additional studies of traffic beyond the initial DITL sample.
  • Information and analysis of “use cases” — particular types of queries and traffic — and the consequences of the failure of particular use cases to resolve as intended (particular use cases could have severe consequences even if they might occur infrequently — like hurricanes), and
  • Studies of the time and costs of mitigation.

GE said more time is needed for companies such as itself to conduct impact analyses on their own internal networks and asked ICANN to not delegate any gTLD until the risk is “fully understood”.

Verizon, Heinz and the American Insurers Association have asked for comment deadline extensions for the same reasons.

The Association of Competitive Technology (which has Verisign as a member) said:

ICANN should slow or temporarily suspend the process of delegating TLDs at risk of causing problems due to their frequency of appearance in queries to the root. While we appreciate the designation of .home and .corp as high risk, there are many other TLDs which will also have a significant destructive effect.

Numerically, there were far more comments criticizing ICANN’s mitigation proposal. All were filed by new gTLD applicants, whose interests are aligned, however.

Most of these comments, which are far more focused on the details and the data, target perceived deficiencies in Interisle’s report and ICANN’s response to it.

Several very good arguments are made.

The Svalbard problem

First, there is criticism of the cut-off point between “low risk” and “uncalculated risk” strings, which some applicants say is “arbitrary”.

That’s mostly true.

ICANN basically took the list of applied-for strings, ordered by the frequency Interisle found they generate NXDOMAIN responses at the root, and drew a line across it at the 49,842 queries mark.

That’s because 49,842 queries is what .sj, the least-frequently-queried real TLD, received over the same period.

If your string, despite not yet existing as a gTLD, already gets more traffic than .sj, it’s classed as “uncalculated risk” and faces more delays, according to ICANN’s plan.

As Directi said in its comments:

The result of this arbitrary selection is that .bio (Rank 281) with 50,000 queries (rounded to the nearest thousand) is part of the “uncategorized risk” list, and is delayed by 3 to 6 months, whereas .engineering (Rank 282) with 49,000 queries (rounded to the nearest thousand) is part of the “low risk” list, and can proceed without any significant delays.
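Expressed as code, the rule Directi is criticizing amounts to a single threshold. This sketch uses the 49,842-query cut-off and the .home/.corp carve-out from ICANN’s plan, with the rounded query counts from Directi’s example:

```python
# ICANN's line in the sand, as described above: .sj's query count is
# the boundary between "low risk" and "uncalculated risk".
SJ_QUERIES = 49_842          # least-queried existing TLD in the DITL sample
HIGH_RISK = {"home", "corp"}

def classify(string: str, nxdomain_queries: int) -> str:
    if string in HIGH_RISK:
        return "high risk"
    if nxdomain_queries > SJ_QUERIES:
        return "uncalculated risk"  # delayed 3-6 months for further study
    return "low risk"

print(classify("bio", 50_000))          # uncalculated risk, per Directi's example
print(classify("engineering", 49_000))  # low risk, about 1,000 queries fewer
```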

What neither ICANN nor Interisle explained is why this is an appropriate place to draw a line in the sand.

This graphic from DotCLUB Domains illustrates the scale of the problem nicely:

[Graphic: DotCLUB Domains]

.sj is the ccTLD for Svalbard, a Norwegian territory in the Arctic Circle with fewer than 3,000 inhabitants. The TLD is administered by .no registry Norid, but it’s not possible to register domains there.

Does having more traffic than .sj mean a gTLD is automatically more risky? Does having less mean a gTLD is safe? The ICANN proposal assumes “yes” to both questions, but it doesn’t explain why.

Many applicants say that having more traffic than existing gTLDs does not automatically mean your gTLD poses a risk.

They pointed to Verisign data from 2006, which shows that gTLDs such as .xxx and .asia were already receiving large amounts of traffic prior to their delegation. When they were delegated, the sky did not fall. Indeed, there were no reports of significant security and stability problems.

The New gTLD Applicants Group said:

In fact, the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.

These successful delegations alone demonstrate that there is no need to delay any more than the two most risky strings.

Donuts said:

There is no factual basis in the study recommending halting delegation process of 20% of applied-for strings. As the paper itself says, “The Study did not find enough information to properly classify these strings given the short timeline.” Without evidence of actual harm, the TLDs should proceed to delegation. Such was the case with other TLDs such as .XXX and .ASIA, which were delegated without delay and with no problems post-delegation.

Applicants also believe that the release in June 2012 of the list of all 1,930 applied-for strings may have skewed the data set that Interisle used in its study.

Uniregistry, for example, said:

The sole fact that queries are being received at the root level does not itself present a security risk, especially after the release to the public of the list of applied-for strings.

The argument seems to be that a lot of the NXDOMAIN traffic seen in 2013 is due to people and software querying applied-for TLDs to see if they’re live yet.

It’s quite a speculative argument, but it’s somewhat supported by the fact that many applied-for strings received more queries in 2013 than they did in the equivalent 2012 sampling.

Second-level domains

Some applicants pointed out that there may not be a correlation between the volume of traffic a string receives and the number of second-level domains being queried.

A string might get a bazillion queries for a single second-level domain name. If that domain name is reserved by the registry, the risk of a name collision might be completely eliminated.
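Here’s a toy illustration of that point, using an entirely hypothetical query log: reserving any second-level label that dominates a string’s traffic collapses the measured risk.

```python
# Hypothetical NXDOMAIN log for one applied-for string: query volume
# is concentrated in a single second-level label.
from collections import Counter

sld_queries = Counter({"wpad": 9_500, "mail": 400, "intranet": 100})
total = sum(sld_queries.values())

# Reserve any label that accounts for more than 5% of the traffic.
reserved = {sld for sld, n in sld_queries.items() if n / total > 0.05}
residual = sum(n for sld, n in sld_queries.items() if sld not in reserved)

print(reserved)                                               # {'wpad'}
print(f"{residual / total:.0%} of queries left unmitigated")  # 5%
```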

The Interisle report did show that the number of SLDs and the volume of traffic do not correlate.

For example, .hsbc is ranked 14th in terms of traffic volume but saw requests for just 2,000 domains, whereas .inc, which ranked 15th, saw requests for 73,000 domains.

Unfortunately, the Interisle report only published the SLD numbers for the top 35 strings by query volume, leaving most applicants none the wiser about the possible impact of their own strings.

And ICANN did not factor the number of SLDs into its decision about where to draw the line between “low” and “uncalculated” risk.

Conspiracy theories

Some applicants questioned whether the Interisle data itself was reliable, but I find these arguments poorly supported and largely speculative.

They propose that someone (meaning presumably Verisign, which stands to lose market share when new gTLDs go live, and which kicked off the name collisions debate in the first place) could have gamed the study by generating spurious requests for applied-for gTLDs during the period Interisle’s data was being captured.

Some applicants put forth this view, while others limited their comments to a request that future studies rely only on data collected before now, to avoid tampering at the point of collection in future.

NTAG said:

Query counts are very easily gamed by any Internet connected system, allowing for malicious actors to create the appearance of risk for any string that they may object to in the future. It would be very easy to create the impression of a widespread string collision problem with a home Internet connection and the abuse of the thousands of available open resolvers.

While this kind of mischief is a hypothetical possibility, nobody has supplied any evidence that Interisle’s data was manipulated by anyone.

Some people have privately pointed DI to the fact that Verisign made a substantial donation to the DNS-OARC — the group that collected the data that Interisle used in its study — in July.

The implication is that Verisign was somehow able to manipulate the data after it was captured by DNS-OARC.

I don’t buy this either. We’re talking about a highly complex 8TB data set that took Interisle’s computers a week to process on each pass. The data, under the OARC’s deal with the root server operators, is not allowed to leave its premises. It would not be easily manipulated.

Additionally, DNS-OARC is managed by Internet Systems Consortium — which runs the F-root and is Uniregistry’s back-end registry provider — from its own premises in California.

In short, in the absence of any evidence supporting this conspiracy theory, I find the idea that the Interisle data was hacked after it was collected highly improbable.

What next?

I’ve presented only a summary of some key points here. The full list of comments can be found here. The reply period for comments closes September 17.

Several ICANN constituencies that can usually be relied upon to comment on everything (registrars, intellectual property, business and non-commercial) have not yet commented.

Will ICANN extend the deadline? I suppose it depends on how cautious it wants to be, whether it believes the companies requesting the extension really are conducting their own internal collision studies, and how useful it thinks those studies will be.
