New gTLDs are the new Y2K: .corp and .home are doomed and everything else is delayed
The proposed gTLDs .home and .corp create risks to the internet comparable to the Millennium Bug, which terrorized a burgeoning internet at the turn of the century, and should be rejected.
Meanwhile, every other gTLD that has been applied for in the current round could be delayed by months in order to mitigate the risks they pose to internet users.
These are the conclusions ICANN has drawn from Interisle Consulting’s independent study into the problems that could be caused when new gTLDs clash with widely-used internal naming systems.
The extensive study, which drew on 8TB of traffic data provided by 11 of the 13 DNS root server operators, is 197 pages long and absolutely fascinating. It was published by ICANN today.
As Interisle CEO Lyman Chapin reported at the ICANN meeting in Durban a few weeks ago, the vast majority of TLDs that have been applied for in the current round already receive large amounts of error traffic:
Of the 1,409 distinct applied-for TLD strings, 1,367 appeared at least once in the 2013 DITL [Day In the Life of the Internet] data with the string at the TLD position.
We’ve previously reported on the volume of queries new gTLDs get, such as the fact that .home gets half a billion hits a day and that 3% of all requests were for strings that have been applied for in the current round.
The extra value in Interisle’s report comes when it starts to figure out how many end points are making these requests, and how many second-level domains they’re looking for.
These are vitally important factors for assessing the scale of the risk of each TLD.
Again, .home and .corp appear to be the most dangerous.
Interisle capped the number of second-level domains it counted in the 2013 data at 100,000 per TLD per root server — 1,100,000 domains in total — and .home was the only TLD string to hit this cap.
Cisco Systems’ proposed .cisco TLD came close, failing to hit the cap in only one of the 11 root servers providing data, while .box and .iinet (both also used widely on home routers) hit the cap on at least one root server.
The lowest count of second-level domains of the 35 listed in the report came from .hsbc, the bank brand, but even that number was a not-inconsiderable 2,000.
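To make the arithmetic concrete, here is a minimal sketch of that capped counting, assuming the DITL data has been reduced to (root_server, tld, sld) tuples — the record format is hypothetical, not Interisle's:

```python
from collections import defaultdict

CAP = 100_000  # the per-TLD, per-root-server cap Interisle applied

def count_slds(records):
    """Count distinct second-level domains per TLD, capped per root server.

    `records` is an iterable of (root_server, tld, sld) tuples -- a
    hypothetical reduction of the DITL query data, not Interisle's format.
    """
    seen = defaultdict(set)  # (root_server, tld) -> distinct SLDs seen
    for root, tld, sld in records:
        bucket = seen[(root, tld)]
        if len(bucket) < CAP:    # stop counting once the cap is hit
            bucket.add(sld)
    totals = defaultdict(int)    # tld -> capped total across root servers
    for (root, tld), slds in seen.items():
        totals[tld] += len(slds)
    return totals  # a TLD seen at all 11 roots maxes out at 1,100,000
```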
Why are these requests being made?
Surprisingly, interactions between a security feature in Google’s own Chrome browser and common residential routers appear to be the biggest cause of queries for non-existent TLDs.
That issue, which impacts mainly .home, accounts for about 46% of the requests counted, according to the report.
In second place, with 15% of the queries, are requests for real domain names that appear to have had a non-existent TLD — again, usually .home — appended by a residential router or cable modem.
Apparent typos — where a user enters a URL but forgets to type the TLD — were a relatively small percentage of requests, coming in at under 1% of queries.
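Mechanically, the suffix-appending failure looks something like this — a minimal sketch, not code from the report, with "home" standing in for the common default suffix on the affected routers:

```python
def apply_search_suffix(qname: str, suffix: str = "home") -> str:
    """Mimic a stub resolver or router appending a DNS search suffix.

    When a lookup fails, many resolvers retry with the local search
    suffix appended -- so "mail.example.com" can be re-queried as
    "mail.example.com.home", and that retry can leak to the root
    servers as a request in a non-existent TLD.
    """
    if qname.endswith("."):      # fully qualified: never gets a suffix
        return qname.rstrip(".")
    return f"{qname}.{suffix}"

print(apply_search_suffix("mail.example.com"))  # mail.example.com.home
```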
The study also found that bad requests come from many thousands of sources. This table compares the number of requests to the number of sources.
[table id=14 /]
The “Count” column is the number of requests, in thousands, for each TLD string. The “Prefix Count” column is the number of sources providing this traffic, counted by /24 IP address block (each of which covers up to 256 potential hosts).
As you can see, there’s not necessarily a correlation between the number of requests a TLD gets and the number of people making the requests — .google gets queried by more sources than the others, but it’s only ranked 24th in terms of overall query volume, for example.
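Counting sources by /24 block is straightforward to reproduce in outline. A sketch, assuming IPv4 sources and a hypothetical (tld, source_ip) reduction of the traffic data:

```python
import ipaddress
from collections import defaultdict

def prefix_counts(queries):
    """Count the distinct /24 blocks querying each TLD string.

    `queries` is an iterable of (tld, source_ip) pairs; each /24 block
    covers up to 256 potential hosts, so this is a rough lower bound on
    the number of networks affected, not a count of machines.
    """
    blocks = defaultdict(set)  # tld -> set of /24 networks
    for tld, ip in queries:
        blocks[tld].add(ipaddress.ip_network(f"{ip}/24", strict=False))
    return {tld: len(nets) for tld, nets in blocks.items()}

print(prefix_counts([("home", "203.0.113.7"), ("home", "203.0.113.99"),
                     ("corp", "198.51.100.1")]))
# {'home': 1, 'corp': 1} -- the two .home sources share one /24
```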
Interisle concluded from all this that .corp and .home are simply too dangerous to delegate, comparing the problem to the year 2000 bug, when a global effort was required to make sure software could handle four-digit dates at the turn of the century.
Here’s what the report says about .corp:
users could be taken to the wrong web site (and possibly be exposed to phishing attacks) or told that web sites do not exist when they do, depending on how the .corp TLD is resolved. A corporate mail system might attempt to deliver email to the wrong server, and this could expose sensitive or confidential information to someone who was not supposed to receive it. In essence, everything deployed in the private network would need to be checked.
There are no easy solutions to these problems. In an ideal world, the operators of these private networks would get a timely notification of the new TLD’s delegation and then take action to address these issues. That seems very improbable. Even if ICANN generated sufficient publicity about the new TLD’s delegation, there is no guarantee that this will come to the attention of the management or operators of the private networks that could be jeopardized by the delegation.
…
It seems reasonable to estimate that the amount of effort involved might be comparable to a wholesale renumbering of the internal network or the Y2K problem.
It notes that applied-for TLDs such as .site, .office, .group and .inc appear to be used in similar ways to .home and .corp, but do not appear to present as broad a risk.
To be clear, the risk we’re talking about here isn’t just people typing the wrong things into browsers; it’s the infrastructure on many thousands of private networks starting to make the wrong security assumptions about domain names.
ICANN, in response, has outlined a series of measures sure to infuriate many gTLD applicants, but which are consistent with its goal to protect the security and stability of the internet.
They’re also consistent with some of the recommendations put forward by Verisign over the last few months in its campaign to show that new gTLDs pose huge risks.
First, .corp and .home are dead. These two strings have been categorized “high risk” by ICANN, which said:
Given the risk level presented by these strings, ICANN proposes not to delegate either one until such time that an applicant can demonstrate that its proposed string should be classified as low risk
Given the Y2K-scale effort required to mitigate the risks, and the fact that the eventual pay-off wouldn’t compensate for the work, I feel fairly confident in saying the two strings will never be delegated.
Another 80% of the applied-for strings have been categorized “low risk”. ICANN has published a spreadsheet explaining which string falls into which category. Low risk does not mean they get off scot-free, however.
First, registries for low-risk strings will not be allowed to activate any domain names in their gTLD for 120 days after contract signing.
Second, for 30 days after a gTLD is delegated the new registries will have to reach out to the owners of each IP address that attempts to query names in that gTLD, to try to mitigate the risk of internal name collisions.
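The proposal doesn’t prescribe how registries should do that outreach, but mechanically it implies something like the following — a hypothetical sketch, where the log format is assumed and resolving a /24 block to an actual contact would require a WHOIS/RDAP lookup, not shown:

```python
import ipaddress
from collections import defaultdict

def collision_outreach_list(query_log):
    """Group the internal-looking names queried in a new gTLD by source.

    `query_log` is assumed to be an iterable of (source_ip, qname)
    pairs from the registry's name servers during the 30-day window.
    The result maps each /24 source block to the names it asked for --
    the raw material for notifying that block's owner.
    """
    report = defaultdict(set)
    for src, qname in query_log:
        block = ipaddress.ip_network(f"{src}/24", strict=False)
        report[block].add(qname.lower())
    return report
```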
That outreach requirement, as applicants will no doubt quickly argue, is going to place them under a massive cost burden.
But their outlook is considerably brighter than that of the remaining 20% of applications, which are categorized as “uncalculated risk” and face a further three to six months of delay while ICANN conducts further studies into whether they’re each “high” or “low” risk strings.
In other words, the new gTLD program is about to see its biggest shake-up since the GAC delivered its Advice in Beijing, adding potentially millions in costs and delays for applicants.
ICANN’s proposed mitigation efforts are now open for public comment.
One has to wonder why the hell ICANN didn’t do this study two years ago.
If you find this post or this blog useful or interesting, please support Domain Incite, the independent source of news, analysis and opinion for the domain name industry and ICANN community.
Exactly:
One has to wonder why the hell ICANN didn’t do this study two years ago.
Answer:
Why should ICANN lose all the application money for all the TLDs that are not going to be delegated?
The real danger with the new gTLD program is ICANN and all its related committees. This whole new gTLD program is getting worse by the day. Only one title is fit for it: “TO HELL”.
Verisign wins
+1. VeriSign Inc. have played this very well.
Fahd A. Batayneh
Executive Director
.موقع (.site in Arabic) IDN gTLD
Hard to disagree.
Interesting that GoDaddy pulled their .home application a couple of months ago under the pretense of not wanting to compete with applicants and now .home is dead in the water.
Actually, they said they would withdraw, but those applications are still active.
GoDaddy withdrew both its .home and .casa applications shortly after it announced it would.
It didn’t have anything to do with foresight into the name collision issue. More of a new CEO, different objectives sort of thing.
Thanks for clarifying, Andrew. It’s easy to jump to conspiracy theories in an industry that is so close-knit, but your explanation makes a lot of sense.
In theory the first new gTLD domain name could go live November 11th (120 days after the first contracts were signed).
And sunrises could even still start October 5th, as long as domain names registered during sunrise aren’t activated before November 11th.
But then again, November 11th is only the possible start date of domain delegations for the four TLDs whose contracts were signed in mid-July.
Is there a list somewhere of who else has actually already signed and on which date?
Bart, the ICANN webpage that shows the various TLD contracts signed between their respective Registry and ICANN is a good source. Link at http://www.icann.org/en/about/agreements/registries.
On another note, the .موقع (.site in Arabic) IDN gTLD has been invited to sign the Registry contract with ICANN.
Fahd
Some of the issues in the report, and in the dotless domain report, are of a software implementation nature or can be more effectively dealt with in software.
As of two months ago, Chrome already does not show internal names or dotless names as secure:
https://plus.google.com/105761279104103278252/posts/9HZyiaht8L8?e=-RedirectToSandbox
Thanks for the very good report, Kevin — adds to my understanding of the real world and marketplace implications of all this.
IMHO, we should pay heed to the very strong words ICANN has for those who use DNS in a way that relies on TLDs not in the authoritative root:
http://www.icann.org/en/about/unique-authoritative-root
Some private organizations have established DNS roots as alternates to the authoritative root. Some uses of these alternate roots do not jeopardize the stability of the DNS. For example, many are purely private roots operating inside institutions and are carefully insulated from the DNS.
…
Alternate roots inherently endanger DNS stability – that is, they create the real risk of name resolvers being unable to determine to which numeric address a given name should point. This violates the fundamental design of the DNS and impairs the Internet’s utility as a ubiquitous global communications medium. Some of these alternate systems also employ special technologies that – ingenious as they may be – may conflict with future generations of community-established Internet standards. Indeed, can there be any guarantee that these proprietary technologies can or will be adapted to future changes in Internet standards?
ICANN’s mandate to preserve stability of the DNS requires that it avoid encouraging the proliferation of these alternate roots that could cause conflicts and instability. This means that ICANN continues to adhere to community-based processes in its decisions regarding the content of the authoritative root. Within its current policy framework, ICANN can give no preference to those who choose to work outside of these processes and outside of the policies engendered by this public trust.
Are you trying to compare network administrators’ vendor-recommended use of internal TLDs to the various money-grubbing alt-root scams we’ve seen over the years?
Doesn’t seem like a fair comparison to me.
I don’t think DNS looks at motivation when resolving a query, or that the point was to promote spiritual purity. I thought ICANN was in the technical coordination business. ICP-3 appears to indicate that use of non-authoritative TLDs is an “at your risk” proposition for those who do so, whatever the reason.
Now, if Google is publishing software which uses non-authoritative TLDs, we can, yes, assume it is in furtherance of the vow of poverty in Google’s corporate mission statement, but the upshot of ICP-3 seems to be that it is not ICANN’s job to make accommodations for it.
The new gTLD program has faced a lot of criticism and is under massive pressure to deliver. I question whether short-cuts are being taken and due process flouted. My example is Regtime, which has flouted the rules by running multiple alternate roots, yet a director of the company has been accepted into the ICANN community, gamed the system by applying for a Cyrillic gTLD, and at the same time abused the LRO process by “objecting” to two new Cyrillic gTLDs on the grounds that they collide with his alternate-root TLDs. All this has done is delay legitimate Cyrillic gTLD applications from PIR and Verisign. This surely cannot be tolerated. Where are the safeguards? Does ICANN accept anyone into its community regardless of their background, morals or agenda?
The objection process itself is a safeguard, not an abuse. People might try to abuse it, as many tried with all types of objections, but the small delay of the objection process is tiny compared to the overall length of the program.
If the objections have no merit, the loser-pays model means the applicants spend only a limited amount of money (admin fee plus legal fees) to defend themselves.
We can regret people trying to abuse the process, but hindering objection avenues would make the process worse.
What was learnt from the earlier rounds, for example .info, .post, .tel and .asia?
Clearly nothing 🙂
One could, however, assume that, since no precautions were taken with those launches and nothing went very badly wrong (at least as far as we know), there was actually nothing to learn but the fact that adding extra TLDs to the root does not break stuff.
So IDNs get a fast track despite known issues that emails sent to/from them may not work, while all the strong generics are put into cold storage. Someone needs to stand up to this.
The difference is that if IDN email doesn’t work, that’s going to affect the new registrants of IDN domains. As far as I know it’s not going to break anyone else’s existing systems.
Curious how the City TLDs will be risk rated under this supposed cloud? What’s a few more months when compared to the past several years spent on getting to this point. The time will pass quickly and so will this issue (like so many before it)!
The funny thing is that most of the .home applications have already passed Initial Evaluation. Remember that one of the two main elements of the IE is:
“String reviews (concerning the applied-for gTLD string). String reviews include a determination that the applied-for gTLD string is not likely to cause security or stability problems in the DNS, including problems caused by similarity to existing TLDs or reserved names.”
Applicants paid part of the $185,000 fee to get an independent evaluation, which suddenly is not valid anymore?
And if I recall correctly, done by the same company that did the studies…
I don’t think you recall correctly this time. Interisle is doing the registry services evaluation, the technical evaluation was Ernst & Young, JAS Global Advisors, and KPMG.
http://newgtlds.icann.org/en/blog/preparing-evaluators-22nov11-en
Perhaps they should have used Deloitte Consulting. 🙂
All this time, I’ve assumed this issue was researched by the RSSAC and the SSAC. I just reviewed the 31 August 2009 RSSAC paper, “Scaling the Root,” and now realize: it doesn’t address dotless TLDs, nor automated routines that rely on valid domain names. It mostly concerns itself with impacts on IANA and anchor DNS operators.
Can it really be true that the Interisle study is ICANN’s first serious look at these issues? I’m still in denial.