While the universe of new gTLDs is growing at a rapid clip, DI research shows that at least one in 10 individual new gTLDs is shrinking.
Using zone file data, I’ve also established that almost a third of new gTLDs were smaller June 1 than they were 90 days earlier, and that more than one in five shrank over a 12-month period.
There’s been a lot written recently, here and elsewhere, about the volume boom at the top-end of the new gTLD league tables, driven by the inexplicable hunger in China for worthless domain names, so I thought I’d try to balance it out by looking at those not benefiting from the budget land-grab madness.
It’s been about two and a half years since the first new gTLDs of the 2012 round were delegated. A few hundred were in general availability by the end of 2014.
These are the ones I chose to look at for this article.
Taking the full list of delegated 2012-round gTLDs, I first disregarded any dot-brands. For me, that’s any gTLD that has Specifications 9 or 13 in its ICANN Registry Agreement.
Volume is not a measure of success for dot-brands in general, where only the registry can own names, so we’re not interested in their growth rates.
Then I disregarded any gTLD that had a general availability date after March 14, 2015.
That date was selected because it’s 445 days before June 1, 2016 — enough time for a gTLD to go through its first renewal/deletion cycle.
There’s no point looking at TLDs less than a year old as they can only be growing.
This whittling process left me with 334 gTLDs.
Counting the domains in those gTLDs’ zone files, I found that:
- 96 (28.7%) were smaller June 1 than they were 30 days earlier.
- 104 (31.1%) were smaller June 1 than they were 90 days earlier.
- 76 (22.7%) were smaller June 1 than they were 366 days earlier.
- 35 (10.4%) were smaller on a monthly, quarterly and annual basis.
Zone files don’t include all registered domains, of course, but the proportion of those excluded tends to be broadly similar between gTLDs. Apples-to-apples comparisons are, I believe, fair.
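For the curious, the counting method can be sketched in a few lines. This is an illustrative simplification, not my actual tooling: it counts the unique owner names of NS records in a zone file, which approximates the number of delegated domains. Real zone files would need a proper DNS zone parser; the line format and the `example` TLD here are assumptions.

```python
# Minimal sketch: count delegated second-level domains in a gTLD zone
# file by collecting unique owner names of NS records. Assumes a
# plain-text zone with one "name ttl class type rdata" record per line.
def count_delegations(zone_lines, tld="example"):
    names = set()
    for line in zone_lines:
        parts = line.lower().split()
        if len(parts) >= 4 and parts[3] == "ns":
            name = parts[0].rstrip(".")
            if name != tld:  # skip the TLD's own apex NS records
                names.add(name)
    return len(names)

zone = [
    "example. 86400 in ns a.nic.example.",
    "foo.example. 86400 in ns ns1.registrar.example.",
    "foo.example. 86400 in ns ns2.registrar.example.",
    "bar.example. 86400 in ns ns1.registrar.example.",
]
print(count_delegations(zone))  # -> 2
```

Comparing two such counts 30, 90 or 366 days apart gives the monthly, quarterly and annual deltas used below.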
And I think it’s fair to say that if a gTLD has gotten smaller over the previous month, quarter and year, that gTLD is “shrinking”.
Here are the TLDs:
| TLD | Registry | Domains | Annual Change | Quarterly Change | Monthly Change |
|-----|----------|---------|---------------|------------------|----------------|
| .在线 (xn--3ds443g) | TLD Registry | 34,800 | -1,161 | -1,183 | -1,124 |
| .شبكة (xn--ngbc5azd) | International Domain Registry | 1,103 | -379 | -150 | -84 |
| .ОНЛАЙН (xn--80asehdb) | CORE Association | 2,350 | -128 | -157 | -215 |
| .САЙТ (xn--80aswg) | CORE Association | 1,072 | -8 | -46 | -65 |
Concerning those 35 shrinking gTLDs:
- The average size of the zones, as of June 1, was 17,299 domains.
- Combined, they accounted for 605,472 domains, down 34,412 on the year. That’s a small portion of the new gTLD universe, which currently stands at over 20 million domains.
- The smallest was .wed, with 144 domains and annual shrinkage of 12. The largest was .网址 (Chinese for “.website”) which had 330,554 domains and annual shrinkage of 7,487.
- The mean shrinkage over the year was 983 domains per gTLD. Over the quarter it was 1,025. Over the month it was 400.
Sixteen of the 35 gTLDs belong to Donuts, which is perhaps to be expected given that it has the largest stable and was the most aggressive early mover.
Of its first batch of seven gTLDs to go to GA, way back in February 2014, only three — .guru, .singles and .plumbing — are on our list of shrinkers.
A Donuts spokesperson told DI today that its overall number of registrations is on the increase and that “too much focus on individual TLDs doesn’t accurately indicate the overall health of the TLD program in general and of our portfolio specifically.”
He pointed out that Donuts has not pursued the domainer market with aggressive promotions, targeting instead small and medium businesses that are more likely to actually use their domains.
“As initial domainer investors shake out, you’re likely to see some degradation in the size of the zone,” he said.
He added that Donuts has seen second-year renewal rates of 72%, which were higher than the first year.
“That indicates that there’s more steadiness in the registration base today than there was when first-year renewals were due,” he said.
The US National Telecommunications and Information Administration has formally thrown its weight behind the community-led proposal that would remove the US government (in effect, NTIA itself) from DNS root oversight.
Assistant secretary Larry Strickling held a press conference this afternoon to confirm the hardly surprising development, but dodged questions about a Republican move to scupper the plan in Congress.
The IANA transition plan, which was developed by the ICANN community over about two years, meets all the criteria NTIA had set out in its surprise 2014 announcement, Strickling confirmed.
Namely, NTIA said in a press release, the plan would:
- Support and enhance the multistakeholder model;
- Maintain the security, stability, and resiliency of the Internet DNS;
- Meet the needs and expectations of the global customers and partners of the IANA services; and
- Maintain the openness of the Internet.
Probably more importantly, NTIA agrees with everyone else that the plan does not replace NTIA’s role with more government meddling.
US Sen. Ted Cruz and Rep. Sean Duffy see things differently. They yesterday introduced the Protecting Internet Freedom Act, which would stop the transition going ahead.
Strickling said that NTIA has been talking to Congress members about the transition, but declined to “speculate” about the new bill’s likelihood of success.
“We’ve been up on the Hill doing briefings and will continue to do so with any member that wants to talk to us,” he said.
Currently, NTIA is forbidden by law from spending any money on the transition, but that prohibition expires (unless it is renewed) at the end of the current federal budget cycle.
The plan is to carry out the transition after that, Strickling said.
The current IANA contract expires September 30. It may be extended, depending on how quickly ICANN and Verisign proceed on their implementation tasks.
The world’s third-largest mobile phone company, worth some $14 billion a year, is the first new gTLD registry operator to refuse to pay ICANN fees.
That’s according to ICANN’s compliance department, which last night slapped Bharti Airtel with the new gTLD program’s first public contract breach notices.
The notices, which apply to .bharti and .airtel, claim that the Indian company has been ignoring demands to pay past due fees since February.
The ICANN quarterly fee for registries is $6,250. Given .airtel and .bharti were delegated 11 months ago, the company, which has assets of $33 billion, can’t owe any more than $37,500.
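The arithmetic behind that ceiling is simple. This sketch assumes, per the figures above, a $6,250 quarterly fee and at most three billable quarters accrued per gTLD in the 11 months since delegation:

```python
# Back-of-the-envelope maximum owed by the registry, using the figures
# cited in the article. The three-quarter assumption is mine, inferred
# from the $37,500 ceiling; ICANN's actual billing cycle may differ.
QUARTERLY_FEE = 6250
gtlds = 2      # .airtel and .bharti
quarters = 3   # billable quarters assumed elapsed since delegation
max_owed = QUARTERLY_FEE * gtlds * quarters
print(max_owed)  # -> 37500
```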
Bharti Airtel is, according to Wikipedia, the third largest mobile network operator in the world and the largest in India, with 325 million subscribers.
Yet ICANN also claims it has had terrible difficulty getting in touch with staff there, saying:
ICANN notes that Bharti Airtel exhibits a pattern of non-response to ICANN Contractual Compliance matters and, when responses are provided to ICANN, they are often untimely and incomplete.
The compliance notices show that ICANN has also communicated with Verisign, the registry back-end operator for both gTLDs, to try to get the matters resolved.
According to ICANN, the registry is also in breach of terms that require it to publish links to its Whois service, abuse contacts and DNSSEC practice statements on its web site.
The sites nic.airtel and nic.bharti don’t resolve (for me at least) with or without a www., but the Whois services at whois.nic.airtel and whois.nic.bharti appear to work.
These are the first two registries of any flavor emerging from the 2012 application round to receive public breach notices. Only one pre-2012 gTLD, .jobs, has the same honor.
ICANN has given Bharti Airtel 30 days from yesterday to come back into compliance or risk losing its Registry Agreements.
Given that both gTLDs are almost a year old and the nic. sites still don’t resolve, one wonders if the company will bother.
Verisign has revived its old name collisions security scare story, publishing this week a weighty research paper claiming millions are at risk of man-in-the-middle attacks.
It’s actually a study into how a well-known type of attack, first documented in the 1990s, might become easier due to the expansion of the DNS at the top level.
According to the paper there might be as many as 238,000 instances per day of query traffic intended for private networks leaking to the public DNS, where attackers could potentially exploit it to do all manner of genuinely nasty things.
But Verisign has seen no evidence of the vulnerability being used by bad guys yet and it might not be as scary as it first appears.
You can read the paper here (pdf), but I’ll attempt to summarize.
The problem concerns a virtually ubiquitous protocol called WPAD, for Web Proxy Auto-Discovery.
It’s used mostly by Windows clients to automatically download a web proxy configuration file that tells their browser how to connect to the web.
Organizations host these files on their local networks. The WPAD protocol tries to find the file using DHCP first, but fails over to DNS.
So, your browser might look for a wpad.dat file on wpad.example.com, depending on what domain your computer belongs to, using DNS.
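The lookup sequence can be sketched as a "devolution" walk over the machine's domain suffix. This is an illustration of the behavior the paper describes, not a reimplementation of any particular client; the exact stopping rule varies between WPAD implementations:

```python
# Illustrative WPAD DNS devolution: a client whose machine is named
# laptop.dept.example.corp strips labels left to right, trying
# wpad.<suffix> at each step. Stopping before the bare TLD is an
# assumption; some older clients queried all the way down, which is
# exactly how queries like wpad.corp leak to the public root.
def wpad_candidates(fqdn):
    labels = fqdn.lower().strip(".").split(".")[1:]  # drop the hostname
    candidates = []
    while len(labels) >= 2:
        candidates.append("wpad." + ".".join(labels))
        labels = labels[1:]
    return candidates

print(wpad_candidates("laptop.dept.example.corp"))
# -> ['wpad.dept.example.corp', 'wpad.example.corp']
```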
The vulnerability arises because companies often use previously undelegated TLDs — such as .prod or .global — on their internal networks. Their PCs could belong to domains ending in .corp, even though .corp isn’t a real TLD in the DNS root.
When these devices are roaming outside of their local network, they will still attempt to use the DNS to find their WPAD file. And if the TLD their company uses internally has actually been delegated by ICANN, their WPAD requests “leak” to the registry or a registrant.
A malicious attacker could register a domain name in a TLD that matches the domain the target company uses internally, allowing him to intercept and respond to the WPAD request and setting himself up as the roaming laptop’s web proxy.
That would basically allow the attacker to do pretty much whatever he wanted to the victim’s browsing experience.
Verisign says it saw 20 million WPAD leaks hit its two root servers every single day when it collected its data, and estimates that 6.6 million users are affected.
The paper says that of the 738 new gTLDs it looked at, 65.7% of them saw some degree of WPAD query leakage.
The ones with the most leaks, in order, were .global, .ads, .group, .network, .dev, .office, .prod, .hsbc, .win, .world, .one, .sap and .site.
It’s potentially quite scary, but there are some mitigating factors.
First, the problem is not limited to new gTLDs.
Yesterday I talked to Matt Larson, ICANN’s new vice president of research (who held the same post at Verisign until a few years ago).
He said ICANN has seen the same problem with .int, which was delegated in 1988. ICANN runs one of .int’s authoritative name servers.
“We did a really quick look at 24 hours of traffic and saw a million and a half queries for domain names of the form wpad.something.int, and that’s just one name server out of several in a 24-hour period,” he said.
“This is not a new problem, and it’s not a problem that’s specific to new gTLDs,” he said.
According to Verisign’s paper, only 2.3% of the WPAD query leaks hitting its root servers were related to new gTLDs. That’s about 238,000 queries every day.
With such a small percentage, you might wonder why new gTLDs are being highlighted as a problem.
I think it’s because organizations typically won’t own the new gTLD domain name that matches their internal domain, something that would eliminate the risk of an attacker exploiting a leak.
Verisign’s report also has limited visibility into the actual degree of risk organizations are experiencing today.
Its research methodology by necessity was limited to observing leaked WPAD queries hitting its two root servers before the new gTLDs in question were delegated.
The company only collected relevant NXDOMAIN traffic to its two root servers — DNS queries with answers typically get resolved closer to the user in the DNS hierarchy — so it has no visibility into whether the same level of leaks happens post-delegation.
Well aware of the name collisions problem, largely due to Verisign’s 11th-hour epiphany on the subject, ICANN forces all new gTLD registries to wildcard their zones for 90 days after they go live.
All collision names are pointed to 127.0.53.53, a reserved IP address picked in order to catch the attention of network administrators (DNS uses TCP/IP port 53).
Potentially, at-risk organizations could have fixed their collision problems shortly after the colliding gTLD was delegated, reducing the global impact of the vulnerability.
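A collision check along these lines is easy for an admin to script. This is a minimal sketch, not official ICANN tooling: resolve an internal name from outside the network and see whether it comes back as the controlled-interruption address.

```python
# Detect ICANN's controlled-interruption signal: collision names in a
# newly delegated gTLD resolve to 127.0.53.53 for the first 90 days.
# check_name() does a live DNS lookup, so it is only a sketch here;
# is_collision_ip() is the pure test on the returned address.
import socket

COLLISION_IP = "127.0.53.53"

def is_collision_ip(addr):
    return addr == COLLISION_IP

def check_name(hostname):
    try:
        addr = socket.gethostbyname(hostname)
    except OSError:
        return False  # name doesn't resolve at all
    return is_collision_ip(addr)

print(is_collision_ip("127.0.53.53"))  # -> True
print(is_collision_ip("127.0.0.1"))    # -> False
```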
There’s no good data showing how many networks were reconfigured due to name collisions in the new gTLD program, but there is some anecdotal evidence of admins telling Google to go fuck itself when .prod got delegated.
A December 2015 report from JAS Advisors, which came up with the 127.0.53.53 idea, said the effects of name collisions have been rather limited.
ICANN’s Larson echoed the advice put out by security watchdog US-CERT this week, which among other things urges admins to use proper domain names that they actually control on their internal networks.
The 1,000th new gTLD from the 2012 application round was delegated yesterday.
It was either .shop or .realestate, appropriately enough, which both appear to have been added to the DNS root zone at about the same time.
Right now, there are actually only 999 new gTLDs live in the DNS. That’s because the unwanted .doosan was retired in February.
During its pre-launch planning for the new gTLD program, ICANN based its root zone stability planning on the assumption that fewer than 1,000 TLDs would be added to the root per year.
In reality, it’s taken much longer to reach that threshold. The first few new gTLDs were added in late October 2013, 945 days ago.
On average, in other words, a new gTLD has been added to the root slightly more than once per day.
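The arithmetic behind that rate, using the figures above:

```python
# Delegation rate: 1,000 new gTLDs added over the 945 days since the
# first delegations in late October 2013.
delegated = 1000
days = 945
rate = delegated / days
print(round(rate, 2))  # -> 1.06
```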
Over that same period, nine ccTLDs — internationalized domain names applied for via a separate ICANN program — have also gone live.
By a different count, the 1,000th new gTLD to be added to the IANA database was .blog.
There are 1,314 TLDs in the root all told.