Two banks go down after forgetting to renew domains

Kevin Murphy, July 31, 2013, Domain Tech

Two UK banks suffered downtime over the weekend after apparently failing to renew their domain name registrations.

Clydesdale Bank and Yorkshire Bank, which offer online banking services at cbonline.co.uk and ybonline.co.uk respectively, both blamed a “systems update” for the downtime.

But some customers reported seeing a registrar’s renewal page when they attempted to access the sites, and others were reportedly still experiencing problems consistent with DNS propagation delays.

Both domain names have expiry dates of July 26, according to Whois records.
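
Expiry dates like this are trivial to monitor programmatically, which makes the lapse all the more surprising. Here’s a rough sketch of a check against Nominet’s WHOIS server, using the standard WHOIS protocol (a plain-text query over TCP port 43); the “Expiry date:” field name matches Nominet’s current output format, though that’s subject to change:

```python
import socket

def whois_uk(domain, server="whois.nic.uk"):
    """Fetch the raw WHOIS record for a .uk domain (TCP port 43)."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

# Print any line mentioning expiry; Nominet records carry an "Expiry date:" field.
for line in whois_uk("cbonline.co.uk").splitlines():
    if "expiry" in line.lower():
        print(line.strip())
```

A cron job doing nothing more than this would have flagged the problem days in advance.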

Thankfully, the banks, both of which are owned by National Australia Bank, managed to retain control of their domains. If they’d fallen into third-party hands, things could have been a lot worse.

Combined, the banks have revenue of a couple of billion pounds.

“Risky” gTLDs could be sacrificed to avoid delay

Kevin Murphy, July 20, 2013, Domain Tech

Google and other members of the New gTLD Applicant Group (NTAG) are happy to let ICANN put their applications on hold in response to security concerns raised by Verisign.

During the ICANN 47 Public Forum in Durban on Thursday, NTAG’s Alex Stamos — CTO of .secure applicant Artemis — said that agreement had been reached that about half a dozen applications could be delayed:

NTAG has consensus that we are willing to allow these small numbers of TLDs that have a significant real risk to be delayed until technical implementations can be put in place. There’s going to be no objection from the NTAG on that.

While he didn’t name the strings, he was referring to gTLDs such as .home and .corp, which were highlighted earlier in the week as having large amounts of error traffic at the DNS root.

There’s a worry, originally expressed by Verisign in April and echoed by independent consultancy Interisle this week, that collisions between new gTLDs and widely used internal network names will lead to data leakage and other security problems.
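
The failure mode is simple to picture: a corporate laptop that normally resolves a name like mail.corp against an internal DNS server will, when it roams onto a public network, send the very same query out to the root. A minimal sketch of the lookup that does the leaking (the hostname is hypothetical):

```python
import socket

# On the corporate LAN, an internal resolver answers for "mail.corp".
# On a public network, the same query escapes to the global DNS root.
# Today the root returns NXDOMAIN; if .corp were delegated, the name could
# resolve to a stranger's server and the client would happily connect,
# potentially sending credentials or cookies along with the request.
try:
    info = socket.getaddrinfo("mail.corp", 443)
    print("resolved:", info[0][4][0])
except socket.gaierror as err:
    print("lookup failed (the safe outcome, for now):", err)
```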

Google’s Jordyn Buchanan also took the mic at the Public Forum to say that Google will gladly put its uncontested application for .ads — which Interisle says gets over 5 million root queries a day — on hold until any security problems are mitigated.

Two members of the board described Stamos’ proposal as “reasonable”.

Both Stamos and ICANN CEO Fadi Chehade indirectly criticised Verisign for the PR campaign it has recently built around its new gTLD security concerns, which has led to somewhat one-sided articles in the tech press and mainstream media such as the Washington Post.

Stamos said:

What we do object to is the use of the risk posed by a small, tiny, tiny fraction — my personal guess would be six, seven, eight possible name spaces that have any real impact — to then tar the entire project with a big brush. For contracted parties to go out to the Washington Post and plant stories about the 911 system not working because new TLDs are turned on is completely irresponsible and is clearly not about fixing the internet but is about undermining the internet and undermining new gTLDs.

Later, in response to comments on the same topic from the Association of National Advertisers, which suggested that emergency services could fail if new gTLDs go live, Chehade said:

Creating an unnecessary alarm is equally irresponsible… as publicly responsible members of one community, let’s measure how much alarm we raise. And in the trademark case, with all due respect it ended up, frankly, not looking good for anyone at the end.

That’s a reference to the ANA’s original campaign against new gTLDs, which wound up producing not much more than a lot of column inches about an utterly pointless Congressional hearing in late 2011.

Chehade and the ANA representative this time agreed publicly to work together on better terms.

.home gets half a billion hits a day. Could this put new gTLDs at risk?

Kevin Murphy, July 17, 2013, Domain Tech

New gTLDs could be in jeopardy following the results of a study into the security risks they may pose.

ICANN is likely to be told to put in place measures to mitigate the risk of new gTLDs causing problems, and chief security officer Jeff Moss said “deadlines will have to move” if global DNS resolution is put at risk.

His comments referred to the potential for clashes between applied-for new gTLD strings and non-existent TLDs that are nevertheless already widely used on internal networks.

That’s a problem that has been increasingly highlighted by Verisign in recent months. The difference here is that the study’s author does not have a .com monopoly to protect.

Interisle Consulting, which has been hired by ICANN to look into the problem, today released some of its preliminary findings during a session at the ICANN 47 meeting in Durban, South Africa.

The company looked at domain name look-up data collected from one of the DNS root servers over a 48-hour period, in an attempt to measure the potential scope of the clash problem.

Some of its findings are surprising:

  • Of the 1,408 strings originally applied for in the current new gTLD round, only 14 do not currently have any root traffic.
  • Three percent of all requests were for strings that have been applied for in the current round.
  • A further 19% of requests were for strings that could potentially be applied for in future rounds (that is, the TLD was syntactically well-formed and not a banned string such as .local).
  • .home, the most frequently requested invalid TLD, received over a billion queries over the 48-hour period. That’s compared to 8.5 billion for .com.
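
Classifying query names like this is mostly string inspection over a log. Here’s a rough sketch of how such a tally might work, assuming a plain-text log with one queried name per line and heavily abbreviated string lists (real root-server data comes as packet captures, not tidy text files):

```python
import re
from collections import Counter

delegated = {"com", "net", "org", "uk"}          # heavily abbreviated
applied_for = {"home", "corp", "ads", "search"}  # heavily abbreviated
banned = {"local", "localhost", "example"}
valid_label = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$")

counts = Counter()
with open("root-queries.log") as log:
    for line in log:
        tld = line.strip().rstrip(".").rsplit(".", 1)[-1].lower()
        if tld in delegated:
            counts["delegated"] += 1
        elif tld in applied_for:
            counts["applied-for"] += 1
        elif tld in banned or not valid_label.match(tld):
            counts["invalid"] += 1
        else:
            counts["potentially applicable in future rounds"] += 1

print(counts)
```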

Here’s a list of the top 17 invalid TLDs by traffic, taken from Interisle’s presentation (pdf) today.

[Chart: Most Queried TLDs]

Had the list covered the top 100 most-requested TLDs, 13 of them would have been strings applied for in the current round, Interisle CEO Lyman Chapin said in the session.

Here are the most-queried applied-for strings:

[Chart: Most Queried TLDs (applied-for strings)]

Chapin was quick to point out that big numbers do not necessarily equate to big security problems.

“Just occurrence doesn’t tell you a lot about whether that’s a good thing, a bad thing, a neutral thing, it just tells you how often the string appears,” he said.

“An event that occurs very frequently but has no negative side effects is one thing, an event that occurs very infrequently but has a really serious side effect, like a meteor strike — it’s always a product of those two factors that leads you to an assessment of risk,” he said.

For example, the reason .ice appears prominently on the list appears to be solely due to an electricity producer in Costa Rica, which “for some reason is blasting .ice requests out to the root”, Chapin said.

If the bad requests are only coming from a small number of sources, that’s a relatively simple problem to sort out — you just call up the guy responsible and tell him to sort out his network.

In cases like .home, where much of the traffic is believed to be coming from millions of residential DSL routers, that’s a much trickier problem.

The reverse is also true, however: a small number of requests doesn’t necessarily mean a low-impact risk.

There may be a relatively small number of requests for .hospital, for example, but if the impact is even a single life-support machine blinking off… probably best not to delegate that gTLD.
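
Chapin’s framing boils down to expected-loss arithmetic: risk is frequency multiplied by impact. The numbers in this sketch are entirely invented, purely to show how a rarely queried string can still top the risk ranking (the study itself measured only frequency):

```python
# Invented figures for illustration only.
tlds = {
    ".home":     {"queries_per_day": 500_000_000, "harm_per_query": 0.000001},
    ".hospital": {"queries_per_day": 1_000,       "harm_per_query": 10_000.0},
}

def risk(t):
    return t["queries_per_day"] * t["harm_per_query"]

for name in sorted(tlds, key=lambda n: risk(tlds[n]), reverse=True):
    print(f"{name}: expected harm per day = {risk(tlds[name]):,.0f}")
# .hospital comes out some four orders of magnitude riskier than .home,
# despite getting half a million times fewer queries.
```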

Chapin said that the full report, which ICANN said could be published in about two weeks, does contain data on the number of sources of requests for each invalid TLD. Today’s presentation did not, however.

As well as the source of the requests, the second-level domains being requested are also an important factor, but they do not seem to have been addressed by this study.

For example, .home may be getting half a billion requests a day, but if all of those requests are for bthomehub.home — used today by the British ISP BT in its residential routers — the .home registry might be able to eliminate the risk of data leakage by simply giving BT that domain.
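
Producing that kind of evidence would be a simple extension of the same log analysis: filter the queries for one TLD and count the second-level labels. A sketch, again assuming a hypothetical one-name-per-line query log:

```python
from collections import Counter

# If one label (say, bthomehub) accounts for nearly all .home traffic,
# a single defensive registration could neutralise most of the risk.
sld_counts = Counter()
with open("root-queries.log") as log:
    for line in log:
        labels = line.strip().rstrip(".").lower().split(".")
        if len(labels) >= 2 and labels[-1] == "home":
            sld_counts[labels[-2]] += 1

for sld, hits in sld_counts.most_common(10):
    print(f"{sld}.home: {hits}")
```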

Likewise, while .hsbc appears on the list it’s actually been applied for by HSBC as a single-registrant gTLD, so the risk of delegating it to the DNS root may be minimal.

There was no data on second-level domains in today’s presentation and it does not appear that the full Interisle report contains it either. More study may be needed.

Donuts CEO Paul Stahura also took to the mic to ask Chapin whether he’d compared the invalid TLD requests to requests for invalid second-level domains in, say, .com. He had not.

One of Stahura’s arguments, expounded at length in the comment thread on this DI blog post, is that delegating TLDs with existing traffic is little different to allowing people to register .com domains with existing traffic.

So what are Interisle’s recommendations likely to be?

Judging by today’s presentation, the company is going to present a list of risk-mitigation options that are pretty similar to what Verisign has previously recommended.

For example, some strings could be permanently banned, or there could be a “trial run” — what Verisign called an “ephemeral delegation” — for each new gTLD to test for impact before full delegation.

It seems to me that if the second-level request data was available, more mitigation options would be opened up.

ICANN chief security officer Jeff Moss, who was on today’s panel, was asked what he would recommend to ICANN CEO Fadi Chehade today in light of the report’s conclusions.

“I am not going to recommend we do anything that has any substantial SSR impact,” said Moss. “If we find any show-stoppers, if we find anything that suggests impact for global DNS, we won’t do it. It’s not worth the risk.”

Without prompting, he addressed the risk of delay to the new gTLD program.

“People sometimes get hung up on the deadline, ‘How will you know before the deadline?’,” he said. “Well, deadlines can move. If there’s something we find that is a show-stopper, deadlines will have to move.”

The full report, expected to be published in two weeks, will be opened for public comment, ICANN confirmed.

Assuming the report is published on time and has a 30-day comment period, that brings us up to the beginning of September, coincidentally the same time ICANN expects the first new gTLD to be delegated.

ICANN certainly likes to play things close to the wire.

IAB gives dotless domains the thumbs down

Kevin Murphy, July 11, 2013, Domain Tech

The Internet Architecture Board believes dotless domain names would be “inherently harmful to Internet security.”

The IAB, the oversight committee which is to internet technical standards what ICANN is to domain names, weighed in on the debate with an article apparently published yesterday.

In it, the committee states that over time dotless domains have evolved to be used only on local networks, rather than the internet, and that to start delegating them at the top level of the DNS would be dangerous:

most users entering single-label names want them to be resolved in a local context, and they do not expect a single name to refer to a TLD. The behavior is specified within a succession of standards track documents developed over several decades, and is now implemented by hundreds of millions of Internet hosts.

By attempting to change expected behavior, dotless domains introduce potential security vulnerabilities. These include causing traffic intended for local services to be directed onto the global Internet (and vice-versa), which can enable a number of attacks, including theft of credentials and cookies, cross-site scripting attacks, etc. As a result, the deployment of dotless domains has the potential to cause significant harm to the security of the Internet.

The article also says (if I understand correctly) that it’s okay for browsers to interpret words entered into address bars without dots as local resources and/or search terms rather than domain names.
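
The behaviour the IAB describes is easy to see on a typical Unix-like stub resolver: a bare label gets completed using the local search list, while a trailing dot makes the name absolute and sends it to the global DNS. A small sketch (the output depends entirely on your local resolver configuration):

```python
import socket

# "intranet" is normally expanded via the resolver's search list (e.g. to
# intranet.corp.example); "intranet." is absolute, so it would only resolve
# if a dotless TLD actually existed in the global DNS.
for name in ("intranet", "intranet."):
    try:
        addr = socket.getaddrinfo(name, 80)[0][4][0]
        print(f"{name!r} -> {addr}")
    except socket.gaierror as err:
        print(f"{name!r} -> no answer ({err})")
```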

It’s pretty unequivocal that dotless domains would be Bad.

The article was written because there’s currently a lot of talk about new gTLD applicants — such as Google, Donuts and Uniregistry — asking ICANN to allow them to run their TLDs without dots.

There’s a ban in the Applicant Guidebook on the “apex A records” that would be required to make dotless TLDs work, but it’s been suggested that applicants could apply to have the ban lifted on a case-by-case basis.
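
For the avoidance of doubt, the ban covers ordinary address records at the apex of a TLD zone. A hypothetical fragment of what a dotless .search zone would need (the name servers are made up and the address is a documentation placeholder):

```
; Hypothetical zone fragment for a dotless "search" TLD
search.      3600  IN  SOA  ns1.nic.search. hostmaster.nic.search. (
                            1 7200 900 1209600 3600 )
search.      3600  IN  NS   ns1.nic.search.
search.      3600  IN  A    192.0.2.1  ; the banned "apex A record" that
                                       ; would make http://search/ work
www.search.  3600  IN  A    192.0.2.1
```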

More recently, ICANN’s Security and Stability Advisory Committee has stated almost as unequivocally as the IAB that dotless domains should not be allowed.

But for some reason ICANN recently commissioned a security company to look into the issue.

This seems to have made some people, such as the At Large Advisory Committee, worried that ICANN is looking for some wiggle room to give its new gTLD paymasters what they want.

Alternatively, ICANN may just be looking for a second opinion to wave in the faces of new gTLD registries when it tells them to take a hike. It was quite vague about its motives.

It’s not just a technical issue, of course. Dotless TLDs would shake up the web search market in a big way, and not necessarily for the better.

Donuts CEO Paul Stahura today published an article on CircleID that makes the case that it is the browser makers, specifically Microsoft, that are implementing DNS all wrong, and that they’re objecting to dotless domains for competitive reasons. The IAB apparently disagrees, but it’s an interesting counterpoint nevertheless.

Microsoft objects to Google’s dotless domains plan

Kevin Murphy, June 11, 2013, Domain Tech

Microsoft has strongly urged ICANN to reject Google’s plan for a “dotless” .search gTLD.

In a letter sent a couple of weeks ago and published last night, the company says that Google would put the security and stability of the internet at risk if its .search idea goes ahead.

David Tennenhouse, corporate vice president of technology policy, wrote:

Dotless domains are currently used as intranet addresses controlled by private networks for internal use. Google’s proposed amendment would interfere with that private space, creating security vulnerabilities and impacting enterprise network and systems infrastructure around the globe.

It’s a parallel argument to the one going on between Verisign and everyone else with regards to gTLD strings that may conflict with naming schemes on internal corporate networks.

While they’re subtly different problems, ICANN recently commissioned a security study into dotless domains (announced 11 days after Microsoft’s letter was sent) that links the two.

As Tennenhouse says in his letter, ICANN’s Security and Stability Advisory Committee, which has Google employees on it, has already warned about the dotless name problem in SAC053 (pdf).

He also claims that Google had submitted follow-up comments to SAC053 saying dotless domains would be “actively harmful”, but this is slightly misleading.

One Google engineer did submit such a comment, but it limited itself to talking about clashes with internal name certificates, a slightly different issue, and it’s not clear it was an official Google Inc comment.

The new gTLD Applicant Guidebook currently outlaws dotless domains through its ban on “apex A records”, but that ban can be circumvented if applicants can convince a registry services evaluation panel that their dotless domain plans don’t pose a stability risk.

While Google’s original .search application envisaged a single-registrant “closed generic”, it later amended the proposal to make it “open” and include the dotless domain proposal.

This is the relevant bit of the amended application:

Charleston Road Registry will operate a service that allows users to easily perform searches using the search functionality of their choice. This service will operate on the “dotless” search domain name (http://search/) and provide a simple web interface. This interface operates in two modes:

1) When the user has not set a preference for a search engine, they will be prompted to select one. The user will be provided with a simple web form that will allow them to designate a search engine by entering the second level label for any second level domain registered within the TLD (e.g., if “foo.search” was a valid second level domain name, the user could indicate that their preferred search engine was “foo”). The user can also elect to save this preference, in which case a cookie will be set in the user’s browser. This cookie will be used in the second mode, as described below. If the user enters an invalid name, they will be prompted again to provide a valid response.

2) If the user has already set a preferred search engine, the redirect service will redirect the initial query to the second level domain name indicated by the user’s preference, including any query string provided by the user. For example, if the user had previously selected the “foo” search engine and had issued a query for http://search/?q=bar, the server would issue a redirect to http://foo.search/?q=bar. In this manner, the user’s query will be consistently redirected to the search engine of their choice.
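
Stripped of the legalese, the whole service is a cookie check and an HTTP redirect. Here’s a minimal sketch of the described logic using Python’s standard library; the cookie name, port and lack of input validation are my own shortcuts, and Google’s actual implementation is of course unknown:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from http.cookies import SimpleCookie
from urllib.parse import urlparse, parse_qs

class SearchRedirect(BaseHTTPRequestHandler):
    """Toy version of the two-mode redirect service described above."""

    def do_GET(self):
        url = urlparse(self.path)
        params = parse_qs(url.query)
        cookies = SimpleCookie(self.headers.get("Cookie", ""))

        if "engine" in params:
            # The user just picked an engine: remember it in a cookie.
            # (A real service would validate this against registered SLDs.)
            self.send_response(302)
            self.send_header("Set-Cookie", f"engine={params['engine'][0]}")
            self.send_header("Location", "/")
        elif "engine" in cookies:
            # Mode 2: redirect to the preferred second-level domain,
            # passing the query string through untouched.
            target = f"http://{cookies['engine'].value}.search/?{url.query}"
            self.send_response(302)
            self.send_header("Location", target)
        else:
            # Mode 1: no preference set, so serve the selection form.
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b'<form>Search engine: <input name="engine"></form>')
            return
        self.end_headers()

HTTPServer(("", 8080), SearchRedirect).serve_forever()
```

So a query for http://search/?q=bar from a browser carrying an engine=foo cookie would bounce to http://foo.search/?q=bar, exactly as the application describes.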

While Google seems to have preempted some concerns about monopolistic practices in the search engine market, approval of its dotless search feature would nevertheless have huge implications.

Make no mistake, dotless domains are a Big Deal and it would be a huge mistake for ICANN to treat them only as a security and stability issue.

What’s weird about Google’s proposal is that by asking ICANN to open up the floodgates for dotless domains, it risks inviting the domain name industry to eat its breakfast, lunch and dinner.

If ICANN lets registries offer TLDs without dots, the new gTLD program will no longer be about delegating domain names, it will be about auctioning exclusive rights to search terms.

Today, if you type “beer” into your browser’s address bar (which in all the cases I’m aware of are also search bars) you’ll be directed to a page of search results for the term “beer”.

In future, if “beer” is a domain name, what happens? Do you get search or do you get a web page, owned by the .beer registry? Would that page have value, or would it be little better than a parking page?

If browser makers decided to implement dotless domains — and of course there are plenty of reasons why they wouldn’t — every borderline useful dictionary word gTLD would be sold off in a single round.

Would that be good for the internet? I’d lean toward “no”.