Angry gTLD applicants lay into ANA and Verisign “bullshit”

Kevin Murphy, October 2, 2013, Domain Services

They’re as mad as hell and they’re not going to take it any more.
New gTLD applicants yesterday laid into the Association of National Advertisers and Verisign with gusto, accusing them of seeking to delay the program for commercial reasons using security as a smokescreen.
The second TLD Security Forum in Washington DC was marked by a heated public argument between applicants and their back-end providers and the ANA’s representatives at the event.
The question was, of course, name collisions: will new gTLDs cause unacceptable security risks — maybe even threatening life — when they are delegated?
ANA vice president Dan Jaffe and outside counsel Amy Mushahwar had walked into the lion’s den, to their credit, to put forth the view that enterprises may face catastrophic IT failures if new gTLDs show up in the DNS root.
What they got instead was a predictably hostile audience and a barrage of criticism from event organizer Alex Stamos, CTO of .secure applicant Artemis Internet, and Neustar VP Jeff Neuman.
Stamos was evidently already having a Bad Day before the ANA showed up for the afternoon sessions.
During his morning presentation, he laid the blame for certain types of name collision risks squarely with the “dumb” enterprises that are configuring their internal name servers in insecure ways. He said:

Any company that is using any of these domains, they’re all screwing up. Anyone who’s admitting these collisions is making a mistake. It’s a bad mistake, it’s a common mistake, but that doesn’t make it right. They’re opening themselves up to possible horrible security flaws that have nothing to do with the new gTLD program.

There is a mechanism by which you can split DNS resolution in a secure manner on Windows. But unless you do that, you’re in trouble, you’re creating a security hole for yourself. So stop complaining and delaying the whole new gTLD program, because you’re dumb, honestly. These are people who are going to have a problem whether new gTLDs exist or not. Let’s be realistic about this: it’s not about security, it’s about other commercial interests.
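
To make the failure mode Stamos is describing concrete, here’s a minimal sketch of the DNS “search list” behaviour at issue. It’s purely illustrative (it uses the dnspython library, and the suffix and hostname are invented):

import dns.resolver

def resolve_with_search_list(name, suffixes):
    # Try each internal suffix first, the way stub resolvers qualify
    # short names against a configured search list.
    for suffix in suffixes:
        fqdn = f"{name}.{suffix}"
        try:
            return fqdn, dns.resolver.resolve(fqdn, "A")
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            continue
    # The dangerous fallback: query the name as-is against public DNS.
    # Today "mail.corp" fails safely with NXDOMAIN; the day .corp is
    # delegated, it resolves to somebody else's server instead.
    return name, dns.resolver.resolve(name, "A")

# e.g. resolve_with_search_list("mail.corp", ["internal.example.com"])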

Stamos’ remark about “other commercial interests” was of course a reference to Verisign, which is suspected of pressing the name collisions issue in order to prevent or delay competition to .com, and to the ANA, which tried to get the program delayed on trademark grounds before it discovered collisions earlier this year.
Executives from Verisign, which put the ANA onto the name collision scent in the first place, apparently lacked the cojones to show up and defend the company’s position in person.
Stamos was preaching mainly to the choir at this point. The fireworks didn’t start until Jaffe and Mushahwar arrived for their panel a few hours later.
The ANA’s point of view, which they both made pretty clearly, is that there seems to be a risk that things could go badly wrong for enterprises if they’re running internal names that clash with applied-for gTLDs.
They’ve got beef with ICANN for running a “not long enough” comment period on the topic, primarily during the vacation month of August, which didn’t give big companies enough time to figure out whether they’re at risk and obtain the necessary sign-off on disclosing this fact.
In short, the ANA wants more time — many more months — for its members and others to look at the issue before new gTLDs are delegated.
Mushahwar dismissed the argument that the event-free launches of .asia, .xxx and others showed that gTLD delegations don’t cause any problems, saying:

Let me admit right now: DNS collision is not new, it’s been around since the beginning of the internet… what is new is the velocity of change expected within the next year to 18 months.
I really dismiss the arguments that people are making on the public record saying we’ve dealt with this issue before, we’ve dealt with these issues, view the past TLDs as your test runs. We have never had this velocity of change happening.

The ANA seems to believe that the risk and the consequences are substantial, talking about people dying because their voice over IP fails or electricity supply gets cut off.
But other speakers weren’t buying it.
Stamos was first to the mic to challenge Mushahwar and Jaffe, saying their concerns are “mostly about IP and other commercial interests”, rather than sound technical analysis.
He pointed to letters sent to ICANN’s comment periods in support of the ANA’s position that were largely signed by IP lawyers. Security guys at these companies were not even aware of the letters, he said.

The internet is this crazy messy place where all kinds of weird things happen… if this is the mode that the internet goes forward — you have to prove everything you do has absolutely no risk of impacting anyone connected to the internet — then that’s it, we might as well call it done. We might as well freeze the internet as it is right now.

If you want to stall the program because you have a problem with IP rights or whatever I think that’s fine, but don’t try to grab hold of this thing and blow it up under a microscope and say “needs more study, needs more study”. For anything we do on the internet we can make that argument.

Any call for “we need to study every single possible impact for all several billion devices connected to the internet” is honestly kinda bullshit… it really smacks to me of lawyers coming in and telling engineers how to do their job.

Mushahwar pointed out in response that she’s a “security attorney, not an IP attorney” and that her primary concern is business continuity for large businesses, not trademark protection.
A few minutes later Neustar’s Neuman was equally passionate at the mic, clashing with Mushahwar more than once.
It all got a bit Fox News, with frequent crosstalk and “if you’d let me continue” and “I’ll let you finish” raising tempers. Neuman at one point accused Mushahwar of “condescending to the entire audience”.
His position, like Stamos’s before him, was that new gTLD applicants have looked at the same data as Interisle Consulting used in its original report, and found that, with the exception of .home, .corp and .mail, the risks posed by new gTLDs are minor and can be easily mitigated.
He asked the ANA to present some concrete examples of things that could go wrong.
“You guys have come to the table with a bunch of rhetoric, not supported by facts,” Neuman said.
He pointed to Neustar’s own research into name collisions, which used (more or less) the same data as Interisle and Verisign and concluded that the risk of damaging effects is low.
The two sides of the debate were never going to come to any agreements yesterday, and they didn’t. But in many respects the ANA and applicants are on the same page.
Stamos, Neuman and others demanded examples of real-world problems that will be encountered when specific gTLDs are delegated, and the ANA said, basically: “Sure, but we need more time to do that”.
But more time means more delay, of course, which isn’t what the domain name industry wants to hear.

Donuts’ trademark block list goes live, pricing revealed

Kevin Murphy, September 25, 2013, Domain Registries

Donuts’ Domain Protected Marks List, which gives trademark owners the ability to defensively block their marks across the company’s whole portfolio of gTLDs, has gone live.
The service goes above and beyond what new gTLD registries are obliged to offer by ICANN.
As a “block” service, in which names will not resolve, it’s reminiscent of the Sunrise B service offered by ICM Registry at .xxx’s launch, which was praised and cursed in equal measure.
But with DPML, trademark owners also have the ability to block “trademark+keyword” names, for example, so Pepsi could block “drinkpepsi” or “pepsisucks”.
It’s not a wildcard, however. Companies would have to pay for each trademark+keyword string they wanted blocking.
DPML covers all of the gTLDs that Donuts plans to launch, which could be as many as 300. It currently has 28 registry agreements with ICANN and 272 applications remaining in various stages of evaluation.
Trademark owners will only be able to sign up to DPML if their marks are registered with the Trademark Clearinghouse under the “use” standard required to participate in Sunrise periods.
Donuts is also excluding an unspecified number of strings it regards as “premium”, so the owners of marks matching those strings will be out of luck, it seems.
Blocks will be available for a minimum of five years and a maximum of 10 years. After expiration, they can be renewed with minimum terms of one year.
The company has not disclosed its wholesale pricing, but registrars we’ve found listing the service on their web sites so far (101domain and EnCirca) price it between $2,895 and $2,995 for a five-year registration.
It looks pricey, but it’s likely to be extraordinarily good value compared to the alternative of Sunrise periods.
If Donuts winds up with 200 gTLDs in its portfolio, a $3,000 price tag ($600 per year) works out to a defensive registration cost of $3 per domain per gTLD per year.
If it winds up with all 300, the price would be $2.
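For anyone checking the arithmetic, here it is in a few lines of Python, using the roughly $3,000 five-year retail price quoted above:

price_five_years = 3000.0  # approximate retail price via registrars, USD
per_year = price_five_years / 5  # $600 a year
for portfolio in (200, 300):
    cost = per_year / portfolio
    print(f"{portfolio} gTLDs: ${cost:.2f} per domain per gTLD per year")
# 200 gTLDs: $3.00; 300 gTLDs: $2.00
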
That’s in line (assuming non-budget pricing comparisons and registrars’ DPML markup) with Donuts co-founder Richard Tindal’s statement earlier this year that DPML would cost 5% to 10% of a regular registration.
Tindal also spoke then about a way for rival trademark owners to “unblock” matching names, so Apple the record company could unblock a DPML on apple.music obtained by Apple the computer company, for example.
Donuts is encouraging trademark owners to participate before its first gTLDs go live, which it expects to happen later this year.

Phishing domains double in 2013

Kevin Murphy, September 20, 2013, Domain Tech

The number of domain names registered for phishing attacks doubled in the first half of the year, according to the latest data from the Anti-Phishing Working Group.
The APWG identified 53,685 phishing domains, of which 12,173 are believed to have been registered by phishers. The remainder were hosted on compromised web servers.
This 12,173 number — up from 5,835 in the year-ago period — is the important one for the domain name industry, as it is there that registries and registrars have the ability to make a difference.
“The increase is due to a sudden uptick in domain registrations by Chinese phishers,” the APWG said in its Domain Name Use and Trends 1H2013 report (pdf). Chinese targets accounted for 8,240 (68%) of the registered domains.
This works out to about 66 maliciously registered domains per day on average, or less than half a percent of the total number of domains registered across all TLDs daily.
According to the APWG, the number of phishing domains that actually contain a brand or a variation of a brand is smaller still, at 1,244. That’s flat on the second half of 2012.
It works out to about seven new trademark-infringing phishing domain names per day that a brand owner somewhere in the world (though probably China) has to deal with.
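For the record, here’s the back-of-the-envelope math (treating the half-year as 181 days; the first average lands at 66-67 depending on rounding):

days = 181  # 1 January to 30 June 2013
print(round(12_173 / days))  # ~67 maliciously registered domains a day
print(round(1_244 / days))   # ~7 brand-infringing phishing domains a day
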
APWG reiterated what it has said in previous reports:

most maliciously registered domain names offered nothing to confuse a potential victim. Placing brand names or variations thereof in the domain name itself is not a favored tactic, since brand owners are proactively scanning Internet zone files for their brand names. As we have observed in the past, the domain name itself usually does not matter to phishers, and a domain name of any meaning, or no meaning at all, in any TLD, will usually do. Instead, phishers often place brand names in subdomains or subdirectories.

dotShabaka Diary — Day 11, 2013 RAA issues

Kevin Murphy, September 17, 2013, Domain Registries

The eleventh installment of dotShabaka Registry’s journal, charting its progress towards becoming one of the first new gTLDs to go live, written by general manager Yasmin Omer.

Tuesday 17 September 2013
As شبكة. gets closer to launch, signing up Registrars becomes ever more critical and we have started discussions with potential partners across three continents. To participate in Sunrise, Registrars must have already completed two steps: 1) Signed the 2013 RAA; and 2) Completed TMDB Accreditation.
To date, around 20 Registrars have signed the 2013 RAA according to the InterNIC website.
However, because Registrars cannot access the TMDB environment until they have signed the 2013 RAA, even fewer have started TMDB accreditation. Many of those that have signed the RAA have been frustrated by TMDB OTE access problems.
Is there any official ICANN database where Registries can confirm Registrar TMDB Accreditation?
Registries are locked out of the TMDB environment until the 2013 RAA is signed. Why not let Registries access the TMDB as needed (now) to accelerate readiness for the launch of IDN gTLDs?
And why aren’t ICANN or Deloitte publishing TMCH numbers for non-English trademarks? How can we decide whether the Sunrise Phase should be 30 days or more if we don’t know the numbers? Why not publish the forecasts and let the Registries decide how to optimise launch phases for their business models?

Read previous and future diary entries here.

Name collisions comments call for more gTLD delay

Kevin Murphy, August 29, 2013, Domain Registries

The first tranche of responses to Interisle Consulting’s study into the security risks of new gTLDs, and ICANN’s proposal to delay a few hundred strings pending more study, is in.
Comments filed with ICANN before the public comment deadline yesterday fall basically into two camps:

  • Non-applicants (mostly) urging ICANN to proceed with extreme caution. Many are asking for more time to study their own networks so they can get a better handle on their own risk profiles.
  • Applicants shooting holes in Interisle’s study and ICANN’s remediation plan. They want ICANN to reclassify everything except .home and .corp as low risk, removing delays to delegation and go-live.

They were responding to ICANN’s decision to delay 521 “uncalculated risk” new gTLD applications by three to six months while further research into the risk of name collisions — where a new gTLD could conflict with a TLD already used by internet users in a non-standard way — is carried out.
Proceed with caution
Many commenters stated that more time is needed to analyse the risks posed by name collisions, noting that Interisle studied primarily the volume of queries for non-existent domains, rather than looking deeply into the consequences of delegating colliding gTLDs.
That was a point raised by applicants too, but while applicants conclude that this lack of data should lead ICANN to lift the current delays, others believe that it means more delays are needed.
Two ICANN constituencies seem to generally agree with the findings of the Interisle report.
The Internet Service Providers and Connectivity Providers constituency asked for the public comment period to be put on hold until further research is carried out, or for at least 60 days. It noted:

corporations, ISPs and connectivity providers may bear the brunt of the security and customer-experience issues resulting from adverse (as yet un-analyzed) impacts from name collision

these issues, due to their security and customer-experience aspects, fall outside the remit of people who normally participate in the ICANN process, requiring extensive wide-ranging briefings even in corporations that do participate actively in the ICANN process

The At-Large Advisory Committee concurred that the Interisle study does not currently provide enough information to fully gauge the risk of name collisions causing harm.
ALAC said it was “in general concurrence with the proposed risk mitigation actions for the three defined risk categories” anyway, adding:

ICANN must assure that such residual risk is not transferred to third parties such as current registry operators, new gTLD applicants, registrants, consumers and individual end users. In particular, the direct and indirect costs associated with proposed mitigation actions should not have to be borne by registrants, consumers and individual end users. The Board must err on the side of caution

Several individual stakeholders agreed with the ISPCP that they need more time to look at their own networks. The Association of National Advertisers said:

Our member companies are working diligently to determine if DNS Clash issues are present within their respective networks. However the ANA had to communicate these issues to hundreds of companies, after which these companies must generate new data to determine the potential service failures on their respective networks.

The ANA wants the public comment period extended until November 22 to give its members more time to gather data.
While the ANA can always be relied upon to ask for new gTLDs to be delayed, its request was echoed by others.
General Electric called for three types of additional research:

  • Additional studies of traffic beyond the initial DITL sample.
  • Information and analysis of “use cases” — particular types of queries and traffic — and the consequences of the failure of particular use cases to resolve as intended (particular use cases could have severe consequences even if they might occur infrequently — like hurricanes), and
  • Studies of the time and costs of mitigation.

GE said more time is needed for companies such as itself to conduct impact analyses on their own internal networks and asked ICANN to not delegate any gTLD until the risk is “fully understood”.
Verizon, Heinz and the American Insurers Association have asked for comment deadline extensions for the same reasons.
The Association of Competitive Technology (which has Verisign as a member) said:

ICANN should slow or temporarily suspend the process of delegating TLDs at risk of causing problems due to their frequency of appearance in queries to the root. While we appreciate the designation of .home and .corp as high risk, there are many other TLDs which will also have a significant destructive effect.

Numerically, there were far more comments criticizing ICANN’s mitigation proposal. All were filed by new gTLD applicants, whose interests are aligned, however.
Most of these comments, which are far more focused on the details and the data, target perceived deficiencies in Interisle’s report and ICANN’s response to it.
Several very good arguments are made.
The Svalbard problem
First, there is criticism of the cut-off point between “low risk” and “uncalculated risk” strings, which some applicants say is “arbitrary”.
That’s mostly true.
ICANN basically took the list of applied-for strings, ordered by the frequency Interisle found they generate NXDOMAIN responses at the root, and drew a line across it at the 49,842 queries mark.
That’s because 49,842 queries is what .sj, the least-frequently-queried real TLD, received over the same period.
If your string, despite not yet existing as a gTLD, already gets more traffic than .sj, it’s classed as “uncalculated risk” and faces more delays, according to ICANN’s plan.
As Directi said in its comments:

The result of this arbitrary selection is that .bio (Rank 281) with 50,000 queries (rounded to the nearest thousand) is part of the “uncategorized risk” list, and is delayed by 3 to 6 months, whereas .engineering (Rank 282) with 49,000 queries (rounded to the nearest thousand) is part of the “low risk” list, and can proceed without any significant delays.

What neither ICANN nor Interisle explained is why this is an appropriate place to draw a line in the sand.
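The rule, as described, boils down to a single threshold comparison. Here’s a sketch (mine, not ICANN’s), using Directi’s rounded query counts:

SJ_THRESHOLD = 49_842  # root NXDOMAIN queries for .sj over the sample period

def risk_class(nxdomain_queries):
    return "uncalculated risk" if nxdomain_queries > SJ_THRESHOLD else "low risk"

print(risk_class(50_000))  # .bio (rank 281): delayed three to six months
print(risk_class(49_000))  # .engineering (rank 282): proceeds
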
A graphic from DotCLUB Domains (not reproduced here) illustrated the scale of the problem nicely.
.sj is the ccTLD for Svalbard, a Norwegian territory in the Arctic Circle with fewer than 3,000 inhabitants. The TLD is administered by .no registry Norid, but it’s not possible to register domains there.
Does having more traffic than .sj mean a gTLD is automatically more risky? Does having less mean a gTLD is safe? The ICANN proposal assumes “yes” to both questions, but it doesn’t explain why.
Many applicants say that having more traffic than existing gTLDs does not automatically mean your gTLD poses a risk.
They pointed to Verisign data from 2006, which shows that gTLDs such as .xxx and .asia were already receiving large amounts of traffic prior to their delegation. When they were delegated, the sky did not fall. Indeed, there were no reports of significant security and stability problems.
The New gTLD Applicants Group said:

In fact, the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.
These successful delegations alone demonstrate that there is no need to delay any more than the two most risky strings.

Donuts said:

There is no factual basis in the study recommending halting delegation process of 20% of applied-for strings. As the paper itself says, “The Study did not find enough information to properly classify these strings given the short timeline.” Without evidence of actual harm, the TLDs should proceed to delegation. Such was the case with other TLDs such as .XXX and .ASIA, which were delegated without delay and with no problems post-delegation.

Applicants also believe that the release in June 2012 of the list of all 1,930 applied-for strings may have skewed the data set that Interisle used in its study.
Uniregistry, for example, said:

The sole fact that queries are being received at the root level does not itself present a security risk, especially after the release to the public of the list of applied-for strings.

The argument seems to be that a lot of the NXDOMAIN traffic seen in 2013 is due to people and software querying applied-for TLDs to see if they’re live yet.
It’s quite a speculative argument, but it’s somewhat supported by the fact that many applied-for strings received more queries in 2013 than they did in the equivalent 2012 sampling.
Second-level domains
Some applicants pointed out that there may not be a correlation between the volume of traffic a string receives and the number of second-level domains being queried.
A string might get a bazillion queries for a single second-level domain name. If that domain name is reserved by the registry, the risk of a name collision might be completely eliminated.
The Interisle report did show that the number of SLDs and the volume of traffic do not correlate.
For example, .hsbc is ranked 14th in terms of traffic volume but saw requests for just 2,000 domains, whereas .inc, which ranked 15th, saw requests for 73,000 domains.
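Counting unique SLDs from query data is trivial, which makes the omission noted below all the more frustrating. A toy example (the “log” is invented to mirror the .hsbc/.inc pattern):

from collections import Counter, defaultdict

# An invented toy query log: one busy name under .hsbc, three distinct
# names under .inc.
queries = ["mail.hsbc"] * 5 + ["a.inc", "b.inc", "c.inc"]

volume = Counter(q.split(".")[-1] for q in queries)
slds = defaultdict(set)
for q in queries:
    labels = q.split(".")
    slds[labels[-1]].add(labels[-2])

for tld in volume:
    print(tld, "volume:", volume[tld], "unique SLDs:", len(slds[tld]))
# hsbc volume: 5 unique SLDs: 1 (one reservable name kills the risk)
# inc volume: 3 unique SLDs: 3 (less traffic, more names to worry about)
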
Unfortunately, the Interisle report only published the SLD numbers for the top 35 strings by query volume, leaving most applicants none the wiser about the possible impact of their own strings.
And ICANN did not factor the number of SLDs into its decision about where to draw the line between “low” and “uncalculated” risk.
Conspiracy theories
Some applicants questioned whether the Interisle data itself was reliable, but I find these arguments poorly supported and largely speculative.
They propose that someone (meaning presumably Verisign, which stands to lose market share when new gTLDs go live, and which kicked off the name collisions debate in the first place) could have gamed the study by generating spurious requests for applied-for gTLDs during the period Interisle’s data was being captured.
Some applicants put forth this view, while others limited their comments to a request that future studies rely only on data collected before now, to avoid tampering at the point of collection in future.
NTAG said:

Query counts are very easily gamed by any Internet connected system, allowing for malicious actors to create the appearance of risk for any string that they may object to in the future. It would be very easy to create the impression of a widespread string collision problem with a home Internet connection and the abuse of the thousands of available open resolvers.

While this kind of mischief is a hypothetical possibility, nobody has supplied any evidence that Interisle’s data was manipulated by anyone.
Some people have privately pointed DI to the fact that Verisign made a substantial donation to the DNS-OARC — the group that collected the data that Interisle used in its study — in July.
The implication is that Verisign was somehow able to manipulate the data after it was captured by DNS-OARC.
I don’t buy this either. We’re talking about a highly complex 8TB data set that took Interisle’s computers a week to process on each pass. The data, under the OARC’s deal with the root server operators, is not allowed to leave its premises. It would not be easily manipulated.
Additionally, DNS-OARC is managed by Internet Systems Consortium — which runs the F-root and is Uniregistry’s back-end registry provider — from its own premises in California.
In short, in the absence of any evidence supporting this conspiracy theory, I find the idea that the Interisle data was hacked after it was collected highly improbable.
What next?
I’ve presented only a summary of some key points here. The full list of comments can be found here. The reply period for comments closes September 17.
Several ICANN constituencies that can usually be relied upon to comment on everything (registrars, intellectual property, business and non-commercial) have not yet commented.
Will ICANN extend the deadline? I suppose it depends on how cautious it wants to be, whether it believes the companies requesting the extension really are conducting their own internal collision studies, and how useful it thinks those studies will be.

dotShabaka Diary — Day 5

Kevin Murphy, August 20, 2013, Domain Registries

Today, the fifth installment of dotShabaka Registry’s journal, charting its progress towards becoming one of the first new gTLDs to go live, written by general manager Yasmin Omer.

Tuesday 20 August 2013
We thought it would be timely to offer a recap of our Pre-Delegation Testing (PDT) experience to date.
Prior to PDT
We entered the PDT process with confidence due to our positive experience during beta testing. While we had lots of questions and concerns before entering the PDT beta program, particularly with documentation and test cases, we remained positive because there was constant communication between ourselves and the PDT Service Provider (IIS).
Prior to entering the first production PDT, we found all issues were resolved efficiently. We were also able to speak with ICANN to clarify some of the more vexing issues we’d faced during the PDT Beta program. We were also thankful that Patrik Hildingsson, the Production Manager for PDT (at IIS), even reached out personally to warn us of some documentation issues they’d not yet had time to resolve.
Our experience with the PDT Helpdesk had improved significantly through the process, which is a credit to all those involved.
During PDT
When we finally entered the PDT testing window two weeks ago, there was a noticeable drop in dialogue between ourselves and the PDT Service Provider. This was not necessarily a concern as everything appeared to be going fine, although our technical logs were not showing a great deal of activity. We assume that no news is good news. However, one suggested area for improvement would be an increase in communication during the testing phase, even if it’s just an email to say everything is fine and there are no concerns. The lack of communication had our technical team biting their nails every day while they nervously watched the logs.
By the Wednesday of the second week, we sent an email to ask if there were any problems and if the third week of PDT (described by ICANN as the remedy period) would be required. The response was a little vague, but we think we’re in the clear and the testing is complete. While the system status has not changed, there has been no activity in our logs since last week, suggesting the third week is not required. Fingers crossed.
Overall, we are happy with the process and wish other applicants the best of luck with their PDT. One small tip we can offer is that the data submission window closes at 11:59 UTC on the Friday before your PDT appointment. Don’t mistake this for 23:59 UTC, or you’ll miss out. We uploaded our documents well in advance, but some of our staff almost got caught out when discussing when to hit the “submit” button. Luckily there were keen observers on our internal mailing list and no mistake was made.
Good luck!

Read previous and future diary entries here.

dotShabaka Diary — Day 4

Kevin Murphy, August 16, 2013, Domain Registries

Here’s the fourth installment of dotShabaka Registry’s journal, charting its progress towards becoming one of the first new gTLDs to go live, written by general manager Yasmin Omer.

Friday 16 August 2013
The IBM TMDB webinar was disappointing. We had hoped to gain some much needed insight into the TMDB system, but instead we left with more questions and concerns. Let’s hope IBM can lift their game for next week’s webinar and the integration and testing process is clarified.
In other news, it has been a week since the teleconference to discuss the URS Technical Requirements Document and we are still unclear on when the requirements will be finalised, posted and whether they stand on the critical path to our Sunrise. If the discussions during the teleconference are anything to go by, significant work is required by both parties to finalise the document. Implementing the requirements in the URS Technical Requirements Document isn’t as simple as flicking a switch – development efforts will be required. This work needs to start now.
Finally, there are now only a couple of days left in our Pre-Delegation Testing window and so far we have not heard anything; we hope that no news is good news. Following this we expect the PDT service provider will take a couple of weeks to review our results. Fingers crossed!
Still no welcome package.

Read previous and future diary entries here.

dotShabaka Diary — Day 3

Kevin Murphy, August 14, 2013, Domain Registries

Here’s the third installment of dotShabaka Registry’s journal, charting its progress towards becoming one of the first new gTLDs to go live, written by general manager Yasmin Omer.

Wednesday 14 August 2013
Our Pre-Delegation Testing (PDT) continues. The latest ICANN-published timeframe shows a 30-day duration, running to 30 August. Previous communications indicated it would take 14 days plus rectification (if required), and the PDT ‘clock’ is counting down 21 days. When will it end?
We now have access to the TMDB and have received the initial Registration Token. We have run some internal tests and it all looks OK. So what next? We will attend the TMDB webinar today and hopefully the TMDB integration and testing process will be defined. Stay tuned.
According to ICANN we will receive a ‘new Registry’ Welcome Pack soon. I suspect we are ‘ahead of the curve’ in terms of the timing of this pack and other applicants will receive this information once the Agreement is signed.
In other news, ICANN have published IOC, Red Cross and Red Crescent reserved lists in multiple languages, but the IGO list has not been defined. Is ICANN going to publish a list of countries (in six official United Nations languages) or is every Registry going to generate their own list with their own rules? I guess we’ll have to wait and see.

Read previous and future diary entries here.

Dotless domains “dangerous”, security study says

Kevin Murphy, August 6, 2013, Domain Tech

An independent security study has given ICANN a couple dozen very good reasons to continue to outlaw “dotless” domain names, but stopped short of recommending an outright ban.
The study, conducted by boutique security outfit Carve Systems and published by ICANN this morning, confirms that dotless domains — as it sounds, a single TLD label with no second-level domain and no dot — are potentially “dangerous”.
If dotless domains were to be allowed by ICANN, internet users may unwittingly send their private data across the internet instead of a local network, Carve found.
That’s basically the same “internal name collision” problem outlined in a separate paper, also published today, by Interisle Consulting (more on that later).
But dotless domains would also open up networks to serious vulnerabilities such as cookie leakage and cross-site scripting attacks, according to the report.
“A bug in a dotless website could be used to target any website a user frequents,” it says.
Internet Explorer, one of the many applications tested by Carve, automatically assumes dotless domains are local network resources and gives them a higher degree of trust, it says.
Such domains also pose risks to users of standard local networking software and residential internet routers, the study found. It’s not just Windows boxes either — MacOS and Unix could also be affected.
These are just a few of the 25 distinct security risks Carve identified, 10 of which are considered serious.
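To see why that default is dangerous, here’s a toy version of the zone-mapping heuristic (my sketch of the general pattern, not Internet Explorer’s actual logic):

def security_zone(hostname):
    # Many clients treat any dotless hostname as a local/intranet
    # resource and grant it extra trust; a delegated dotless gTLD
    # would inherit that trust from the public internet.
    return "intranet (elevated trust)" if "." not in hostname else "internet"

print(security_zone("search"))           # intranet (elevated trust)
print(security_zone("www.example.com"))  # internet
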
ICANN has a default prohibition on dotless gTLDs in the new gTLD Applicant Guidebook, but it’s allowed would-be registries to specially request the ability to go dotless via Extended Evaluation and the Registry Services Evaluation Process (with no guarantee of success, of course).
So far, Google is the only high-profile new gTLD applicant to say it wants a dotless domain. It wants to turn .search into such a service and expects to make a request for it via RSEP.
Other portfolio applicants, such as Donuts and Uniregistry, have also said they’re in favor of dotless gTLDs.
Given the breadth of the potential problems identified by Carve, you might expect a recommendation that dotless domains should be banned outright. But that didn’t happen.
Instead, the company has recommended that only certain strings likely to have a huge impact on many internet users — such as “mail” and “local” — be permanently prohibited as dotless TLDs.
It also recommends lots of ways ICANN could allow dotless domains and mitigate the risk. For example, it suggests massive educational outreach to hardware and software vendors and to end users.

Establish guidelines for software and hardware manufacturers to follow when selecting default dotless names for use on private networks. These organizations should use names from a restricted set of dotless domain names that will never be allowed on the public Internet.

Given that most people have never heard of ICANN, that internet standards generally take a long time to adopt, and allowing for regular hardware upgrade cycles, I couldn’t see ICANN pulling off such a feat for at least five to 10 years.
I can’t see ICANN approving any dotless domains any time soon, but it does appear to have wiggle-room in future. ICANN said:

The ICANN Board New gTLD Program Committee (NGPC) will consider dotless domain names and an appropriate risk mitigation approach at its upcoming meeting in August.

NTIA alarmed as Verisign hints that it will not delegate new gTLDs

Kevin Murphy, August 5, 2013, Domain Tech

Verisign has escalated its war against competition by telling its government masters that it is not ready to add new gTLDs to the DNS root, raising eyebrows at NTIA.
The company told the US National Telecommunications and Information Administration in late May that the lack of uniform monitoring across the 13 root servers means it would put internet security and stability at risk to start delegating new gTLDs now.
In response, the NTIA told Verisign that its recent position on DNS security is “troubling”. It demanded confirmation that Verisign is not planning to block new gTLDs from being delegated.
The letters (pdf and pdf) were published by ICANN over the weekend, over two months after the first was sent.
Verisign senior VP Pat Kane wrote in the May letter:

we strongly believe certain issues have not been addressed and must be addressed before any root zone managers, including Verisign, are ready to implement the new gTLD Program.
We want to be clearly on record as reporting out this critical information to NTIA unequivocally as we believe a complete assessment of the critical issues remain unaddressed which left unremediated could jeopardize the security and stability of the DNS.

we strongly recommend that the previous advice related to this topic be implemented and the capability for root server system monitoring, instrumentation, and management capabilities be developed and operationalized prior to beginning delegations.

Kane’s concerns were first outlined by Verisign in its March 2013 open letter to ICANN, which also expressed serious worries about issues such as internal name collisions.
Verisign is so far the only root server operator to publicly express concerns about the lack of coordinated monitoring, and many people believe that the company is simply trying desperately to delay competition for its $800 million .com business for as long as possible.
These people note that in early November 2012, Verisign signed a joint letter with ICANN and NTIA that said:

the Root Zone Partners are able to process at least 100 new TLDs per week and will commit the necessary resources to meet all root zone management volume increases associated with the new gTLD program

That letter was signed before NTIA stripped Verisign of its right to increase .com prices every year, depriving it of tens or hundreds of millions of dollars of additional revenue.
Some say that Verisign is raising spurious security concerns now purely because it’s worried about its bottom line.
NTIA is beginning to sound like one of these critics. In its response to the May 30 letter, sent by NTIA and published by ICANN on Saturday, deputy associate administrator Vernita Harris wrote:

NTIA and VeriSign have historically had a strong working relationship, but inconsistencies in VeriSign’s position in recent months are troubling… NTIA fully expects VeriSign to process change requests when it receives an authorization to delegate a new gTLD. So that there will be no doubt on this point, please provide me a written confirmation no later than August 16, 2013 that VeriSign will process change requests for the new gTLD program when authorized to delegate a new gTLD.

Harris said that a system is already in place that would allow the emergency rollback of the root zone, basically ‘un-delegating’ any gTLD that proves to cause a security or stability problem.
This would be “sufficient for the delegation of new gTLDs”, she wrote.
Could Verisign block new gTLDs?
It’s worth a reminder at this point that ICANN’s power over the DNS root is something of a facade.
Verisign, as operator of the master A root server, holds the technical keys to the kingdom. Under its NTIA contract, it only processes changes to the root — such as adding a TLD — when NTIA tells it to.
NTIA in practice merely passes on the recommendations of IANA, the department within ICANN that has the power to ask for changes to the root zone, also under contract with NTIA.
Verisign or NTIA in theory could refuse to delegate new gTLDs — recall that when .xxx was heading to the root the European Union asked NTIA to delay the delegation.
In practice, it seems unlikely that either party would stand in the way of new gTLDs at the root, but the Verisign rhetoric in recent months suggests that it is in no mood to play nicely.
For Verisign to refuse to delegate gTLDs out of commercial self-interest would be seen as irresponsible, however, and would likely put its role as custodian of the root at risk.
That said, if Verisign turns out to be the lone voice of sanity when it comes to DNS security, it is ICANN and NTIA that will ultimately look like they’re the irresponsible parties.
What’s next?
Verisign now has until August 16 to confirm that it will not make trouble. I expect it to do so under protest.
According to the NTIA, ICANN’s Root Server System Advisory Committee is currently working on two documents — RSSAC001 and RSSAC002 — that will outline “the parameters of the basis of an early warning system” that will address Verisign’s concerns about root server management.
These documents are likely to be published within weeks, according to the NTIA letter.
Meanwhile, we’re also waiting for the publication of Interisle Consulting’s independent report into the internal name collision issue, which is expected to recommend that gTLDs such as .corp and .home are put on hold. I’m expecting this to be published any day now.