ICANN chair paid $114,000 last year

Kevin Murphy, July 13, 2017, Domain Policy

ICANN chair Steve Crocker was paid $114,203.24 in the organization’s last tax year.
The number was released today (pdf) in response to a request by domain blogger John Poole of DomainMondo.com.
Poole had requested the figures because Crocker is paid via his company, Shinkuro, rather than directly, so his compensation does not show up on ICANN’s published tax returns.
It was already known that ICANN’s chair is eligible for $75,000 a year in salary, but today’s letter, from CFO Xavier Calvez, states that he also received $39,203.24 for office rent (about $3,267 per month) in the year ended June 30, 2016.
This does not include his travel reimbursements and such, which came to well over $100,000 in the same fiscal year according to ICANN disclosures.
If Crocker were on ICANN staff, he would be the 18th most costly employee, even with the extra reimbursements included.
Other ICANN directors receive $45,000 per year.
Calvez said ICANN will update its disclosure process to make it clearer how much Crocker is paid via Shinkuro.

Could the next new gTLD round last 25 years? Or 70 years?

Kevin Murphy, July 13, 2017, Domain Policy

Will the next new gTLD round see 25,000 applications? If so, how long will it take for them all to go live?
The 25,000 figure is one that I’ve heard touted a few times, most recently during public sessions at ICANN’s meeting in Johannesburg last month.
The problem is that, judging by ICANN’s previous performance, such a huge number of applications would take anywhere from 25 to 70 years to process.
It’s unclear to me where the 25,000 application estimate comes from originally, but it does not strike me as laughably implausible.
There were 1,930 applications, covering 1,409 unique strings, in the most recent round.
There could have been so many more.
ICANN’s outreach campaign is generally considered to have been a bit lackluster, particularly in developing markets, so many potential applicants were not aware of the opportunity.
In addition, some major portfolio applicants chose to rein in their ambitions.
Larry Page, then-CEO of Google, is known to have wanted to apply for many, many more than the 101 Google wound up applying for, but was talked down by staff.
There’s talk of pent-up demand for dot-brands among those companies that missed the 2012 window, but it’s impossible to know the scale of that demand with any precision.
Although a handful of dot-brands with ICANN registry agreements and delegations have since cancelled their contracts, there’s no reason they could not reapply for defensive purposes in subsequent rounds.
There are also thousands of towns and cities with populations comparable to cities that applied in 2012 that could apply next time around.
And there’s a possibility that the cost of applying — set at $185,000 on a highly redundant “cost recovery” basis — may come down in the next round.
Lots of other factors will play a role in how many applications we see, but in general it doesn’t seem impossible that there could be as many as 25,000.
Assuming for a moment that there are 25,000, how long will that take to process?
In the 2012 round, ICANN said it would delegate TLDs at a rate of no more than 1,000 per year. So that’s at least 25 years for a 25,000-app round.
That rate was set somewhat arbitrarily during discussions about root zone scaling, before anyone knew how many gTLDs would be applied for and when estimates hovered around the 500 mark.
Essentially, the 1,000-per-year number was floated as a sort of straw man (or “straw person” as some ICANNers have it nowadays) so the technical folk had a basis to figure out whether the root system could withstand such an influx.
Of course, this limit will have to be revised significantly if ICANN has any hope of processing 25,000 applications in under a generation.
Discussions at the time indicated that the rate of change, not the size of the root zone, was what represented the stability threat.
In reality, the rate of delegation has been significantly slower than 1,000 per year.
It took until May 2016 for the 1,000th new gTLD to go live, 945 days after the first batch were delegated in late October 2013.
That means that during the relative “rush-hour” of new gTLD delegations, there was still only a little over one per day on average.
And that’s counting from the date of the first delegation, which was actually 18 months after the application window was closed.
If that pattern held in subsequent rounds, we would be looking at about 70 years for a batch of 25,000 to make their way through the system.
You could apply for a vanity gTLD matching your family name and leave the delegation as a gift to your great-grandchildren, long after your death.
Clearly, with 25,000 applications some significant process efficiencies — including, I fancy, much more automation — would be in order.
Currently, IANA’s process for making changes to root zone records (including delegations) is somewhat complex and has multiple manual steps. And that’s before Verisign makes the actual change to the master root zone file.
But the act of delegation is only the final stage of processing a gTLD application.
First, applications that typically run into tens of thousands of words have to undergo Initial Evaluation by several teams of knowledgeable consultants.
It took a little over two years from Reveal Day in 2012 to the publication of the final IE results in 2014, an average of about 2.5 applications per day.
Again, we’re looking at over a quarter of a century just to conduct IE on 25,000 applications.
Then there’s contracting — ICANN’s lawyers would have to sign off on about a dozen Registry Agreements per day if it wanted to process 25,000 delegations in just five years.
Not to mention there’s also pre-delegation testing, contention resolution, auctions, change requests, objections…
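For what it’s worth, the back-of-envelope arithmetic above can be sketched in a few lines of Python. The 770-day span for Initial Evaluation is my own approximation of “a little over two years”; everything else comes straight from the figures in this post.

```python
APPS = 25_000  # the touted application count for the next round

# Delegation cap from the 2012 round: no more than 1,000 per year.
years_at_cap = APPS / 1_000                   # 25 years

# Observed delegation rate: the 1,000th new gTLD went live 945 days
# after the first batch was delegated in late October 2013.
per_day = 1_000 / 945                         # a little over one per day
years_observed = APPS / per_day / 365.25      # in the 65-to-70-year range

# Initial Evaluation: ~1,930 applications in a little over two years.
ie_per_day = 1_930 / 770                      # ~2.5 applications per day
years_ie = APPS / ie_per_day / 365.25         # over a quarter of a century

# Contracting: Registry Agreements signed per day to finish in five years.
ras_per_day = APPS / (5 * 365)                # about a dozen per day

print(f"{years_at_cap:.0f} years at the cap, {years_observed:.0f} years "
      f"at the observed rate, {years_ie:.0f} years of IE alone, "
      f"{ras_per_day:.1f} RAs per day")
```

Note that the observed-rate figure doesn’t even count the 18 months between the close of the application window and the first delegation.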
There’s a limited window to file objections and there were many complaints, largely from governments, that this period was far too short to read through just 1,930 applications.
A 25,000-string round could take forever, and ICANN’s policies and processes would have to be significantly revised to handle them in a reasonable timeframe.
Then again, potential applicants might view the 2012 round as a bust and the next round could be hugely under-subscribed.
There’s no way of knowing for sure, unfortunately.

ICANN expects to lose 750 registrars in the next year

ICANN is predicting that about 750 accredited registrars will close over the next 12 months due to the over-saturation of the drop-catching market.
ICANN VP Cyrus Namazi made the estimate while explaining ICANN’s fiscal 2018 budget, which is where the projection originated, at the organization’s public meeting in South Africa last week.
He said that ICANN ended its fiscal 2017 last week with 2,989 accredited registrars, and that it expects to lose about 250 per quarter from October until this time next year.
These almost 3,000 registrars belong to about 400 registrar families, he said.
By my estimate, roughly two thirds of the registrars are shell accreditations under the ownership of just three companies — Web.com (Namejet and SnapNames), Pheenix, and TurnCommerce (DropCatch.com).
These companies lay out millions of dollars on accreditation fees in order to game ICANN rules and get more connections to registries — mainly Verisign’s .com.
More connections give them a greater chance of registering potentially valuable domains milliseconds after they are deleted. Drop-catching, in other words.
But Namazi indicated that ICANN’s cautious “best estimate” is that there’s not enough good stuff dropping to justify the number of accreditations these three companies own.
“With the model we have, I believe at the moment the total available market for these sought-after domains that these multifamily registrars are after is not able to withstand the thousands of accreditations that are there,” he said. “Each accreditation costs quite a bit of money.”
Having a registrar accreditation costs $4,000 a year, not including ICANN’s variable and transaction fees.
“We think the market has probably gone beyond what the available market is,” he said.
He cautioned that the situation was “fluid” and that ICANN was keeping an eye on it because these accreditations fees have become material to its budget in the last few years.
If the three drop-catchers do start dumping registrars, it would reveal an extremely short shelf life for their accreditations.
Pheenix upped its registrar count by 300 and DropCatch added 500 to its already huge stable as recently as December 2016.
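To put rough numbers on the money involved: the two-thirds shell-accreditation share is my own estimate from above, while the registrar count, the $4,000 base fee and the 750-registrar projection are ICANN’s figures.

```python
# Figures from the post: 2,989 accredited registrars at the end of
# ICANN's fiscal 2017, and a $4,000 annual base accreditation fee.
total_registrars = 2_989
base_fee = 4_000

# My rough estimate: about two thirds are drop-catching shell accreditations.
shell_accreditations = round(total_registrars * 2 / 3)   # ~1,993

annual_shell_fees = shell_accreditations * base_fee      # roughly $8 million
lost_fees = 750 * base_fee                               # $3 million per year

print(f"~{shell_accreditations:,} shells paying ~${annual_shell_fees:,} a year")
print(f"750 closures would cost ICANN ${lost_fees:,} a year in base fees")
```

That $3 million a year in vanishing base fees is presumably why Namazi says the accreditation income has become material to ICANN’s budget.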

ICANN shuffles regional bosses, drops “hub” concept

Kevin Murphy, June 29, 2017, Domain Policy

ICANN has made a couple of changes to its senior management team and abandoned the Chehade-era concept of “hub” offices.
Rather than having three offices it calls “hubs” in different parts of the world — Los Angeles, Istanbul and Singapore — it will now have five of what it calls “regional offices”.
As well as the three former hubs, one will be in Brussels, Belgium and the other in Montevideo, Uruguay.
A few vice presidents are being shuffled around to head up each of these offices.
Senior policy VP David Olive is being replaced as managing director of the Istanbul office by Nick Tomasso, who’s also VP in charge of ICANN’s public meetings. Olive will carry on in his VP role, but back in Washington DC, from August.
Fellow policy VP and veteran GAC relations guy Olof Nordling is retiring from ICANN at the end of July. His role as MD of the Brussels office will be filled by Jean-Jacques Sahel, VP of stakeholder engagement for Europe.
Rodrigo de la Parra, VP of stakeholder engagement for the Latin America region, will be MD of the Montevideo office. Jia-Rong Low runs the Singapore office. ICANN CEO Goran Marby of course is top dog in LA.
The difference in nomenclature — “hub” versus “regional office” — looks to me like it’s quite minor.
Early in his stint at ICANN, former CEO Fadi Chehade professed a desire to pursue a strategy of aggressive internationalization, with hub offices of equal importance. But I don’t think the idea ever really took off to the extent he expected, and he didn’t stick around long enough to see it through.
In addition, the IANA transition last year, which separated ICANN from its US government oversight, pretty much carved ICANN’s California roots into stone for the time being.

.net price increases approved

Verisign has been given the right to continue to raise the wholesale price of .net domains.
It now seems likely the price charged to registrars will top $15 by 2023.
ICANN’s board of directors at the weekend approved the renewal of the .net Registry Agreement, which gives Verisign the right to increase its prices by 10% per year over the six years of the contract.
Assuming the company exercises all six options — and there’s no reason to assume it will not — the price of a .net would be $15.27 by the time the contract expires, $0.75 of which would be paid to ICANN in fees.
There was some negative public comment (pdf) about the increases, largely from domainers and those representing domainers, but the ICANN board saw nothing to persuade it to change the terms of the contract.
In notes appended to its resolution, the board stated:

the Board understands that the current price cap provisions in Verisign’s Registry Agreements, including in the .NET Registry Agreement, evolved historically to address various market factors in cooperation with constituencies beyond ICANN including the Department of Commerce. During the negotiations for the renewal, Verisign did not request to alter the pricing cap provisions, the parties did not negotiate these provisions and the provisions remain changed from the previous agreement. The historical 10% price cap was arguably included to allow the Registry Operator to increase prices to account for inflation and increased costs/investments and to take into account other market forces but were not dictated solely by ICANN.

(I assume the word “changed” in that quote should have read “unchanged”.)
Unlike contract renewals for other pre-2012 gTLDs, the .net contract does not include any of the new gTLD program’s rights protection mechanisms, such as the Uniform Rapid Suspension policy.
ICANN explained this disparity by saying these mechanisms are not consensus policies and that it has no right to impose them on legacy gTLD registry operators.

India to have SIXTEEN ccTLDs

While most countries are content to operate using a single ccTLD, India is to up its count to an unprecedented 16.
It already has eight, but ICANN’s board of directors at the weekend approved the delegation of an additional eight.
The new ccTLDs, which have yet to hit the root, are .ಭಾರತ, .ഭാരതം, .ভাৰত, .ଭାରତ, .بارت, .भारतम्, .भारोत, and .ڀارت.
If Google Translate and Wikipedia can be trusted, these words all mean “India” in, respectively, Kannada, Malayalam, Bengali, Odia, Arabic, Nepali, Hindi and Sindhi.
They were all approved under ICANN’s IDN ccTLD Fast Track program and will not operate under ICANN contract.
India already has seven internationalized domain name versions of its ccTLD in seven other scripts, along with its vanilla ASCII .in.
National Internet Exchange of India (NIXI) will be ccTLD manager for the whole lot.
India may have as many as 122 languages, according to Wikipedia, with 30 spoken by more than a million people.

ICANN heading to Japan and Canada in 2019

Kevin Murphy, June 28, 2017, Domain Policy

ICANN has named two of the host cities for its 2019 public meetings.
The community will descend upon Kobe, Japan in March 2019 for the first meeting of the year and will head to Montreal, Canada, for the annual general meeting in November.
Both locations were approved by the ICANN board of directors at a meeting this weekend.
The location of the middle “policy forum” meeting for 2019 has not yet been identified.
ICANN is currently meeting in Johannesburg, South Africa. Later this year it will convene in Abu Dhabi, UAE.
Spanish speakers can rejoice next year, when the locations, in order, are Barcelona, Panama City and San Juan (the Puerto Rican one).

As .boots self-terminates, ICANN will not redelegate it

The dot-brand .boots may become the first single-dictionary-word gTLD to be taken off the market, as The Boots Company told ICANN it no longer wishes to be a registry.
Boots, the 168-year-old British pharmacy chain, told ICANN in April that it is unilaterally terminating its Registry Agreement for .boots and ICANN opened it up for comment this week.
As with the 22 self-terminating dot-brands before it, .boots was unloved and unused, with just the solitary, ICANN-mandated nic.boots in its zone file.
Boots, as well as being a universally known brand name in the UK and Ireland, is of course a generic dictionary word representing an unrelated class of goods (ie footwear).
It’s the first dying dot-brand to have this kind of dual use, making it potentially modestly attractive as a true generic TLD.
However, because it’s currently a dot-brand with no third-party users, it will not be redelegated to another registry.
Under Specification 13 of the Registry Agreement, which gives dot-brands special rights, ICANN has the ability to redelegate dot-brands, but only if it’s in the public interest to do so. That’s clearly not the case in this instance.
These rules also state that ICANN is not allowed to delegate .boots to any other company for a period of two years after the contract ends.
Given that there’s no chance of ICANN delegating any gTLDs in the next two years, this has no real impact. Perhaps, if the ICANN community settles on a rolling gTLD application process in future, this kind of termination may be of more interest.

Zero registrars pass ICANN audit

Some of the biggest names in the registrar game were among a bewildering 100% that failed an ICANN first-pass audit in the latest round of random compliance checks.
Of the 55 registrars picked to participate in the audit, a resounding 0 passed the initial audit, according to data released today.
Among them were recognizable names including Tucows, Register.com, 1&1, Google and Xin Net.
ICANN found 86% of the registrars had three or more “deficiencies” in their compliance with the 2013 Registrar Accreditation Agreement.
By far the most problematic area was compliance with sections 3.7.7.1 to 3.7.7.12 of the RAA, which specify what terms registrars must put in their registration agreements and how they must verify the contact details of their customers.
A full three quarters of audited registrars failed on that count, according to ICANN’s report (pdf).
More than half of tested registrars failed to live up to their commitments to respond to reports of abuse, where they’re obliged among other things to have a 24/7 contact number available.
There was one breach notice to a registrar as a result of the audit, but none of the failures were serious enough for ICANN to terminate the deficient registrar’s contract. Two registrars self-terminated during the process.
ICANN’s audit program is ongoing and operates in rounds.
In the current round, registrars were selected from those which either hadn’t had an audit in a couple of years, were found lacking in previous rounds, or had veered dangerously close to formal breach notices.
The round kicked off last September with requests for documents. The initial audit, which all registrars failed, was followed by a remediation phase from January to May.
Over the remediation phase, only one third of the registrars successfully resolved all the issues highlighted by the audit. The remainder submitted remediation plans and will be followed up on in future rounds.
The 0% pass rate is not unprecedented. It’s the same as the immediately prior audit (pdf), which ran from May to October 2016.

Zone file access is crap, security panel confirms

Kevin Murphy, June 20, 2017, Domain Policy

ICANN’s Centralized Zone Data Service has some serious shortcomings and needs an overhaul, according to the Security and Stability Advisory Committee.
The panel of DNS security experts has confirmed what CZDS subscribers, including your humble correspondent, have known since 2014: the system has had a major design flaw baked in from day one for no readily apparent reason.
CZDS is the centralized repository of gTLD zone files. It’s hosted by ICANN and aggregates zones from all 2012-round, and some older, gTLDs on a daily basis.
Signing up for it is fairly simple. You simply fill out your contact information, agree to the terms of service, select which zones you want and hit “submit”.
The purpose of the service is to allow researchers to receive zone files without having to enter into separate agreements with each of the 1,200+ gTLDs currently online.
The major problem, as subscribers know and SSAC has confirmed, is that the default subscription period is 90 days.
Unless the gTLD registry extends the period, at its own discretion, each subscription ends after three months, cutting off access, and the subscriber must reapply.
Many of the larger registries exercise this option, but many — particularly dot-brands — do not.
The constant need to reapply and re-approve creates a recurring arse-ache for subscribers and, registry staff have told me, the registries themselves.
The approval process itself is highly unpredictable. Some of the major registries process requests within 24 hours — I’ve found Afilias is the fastest — but I’ve been waiting for approval for Valuetainment’s .voting since September 2016.
Some dot-brands even attempt to insert extra terms of service into the deal before approving requests, which defeats the entire purpose of having a centralized service in the first place.
Usually, a polite email to the person handling the requests can produce results. Other times, it’s necessary to report them to ICANN Compliance.
The SSAC has evidently interviewed many people who share my concerns, as well as looking at data from Compliance (where CZDS reliably generates the most complaints, wasting the time of Compliance staff).

This situation makes zone file access unreliable and subject to unnecessary interruptions. The missing data introduces “blind spots” in security coverage and research projects, and the reliability of software – such as security and analytics applications – that relies upon zone files is reduced. Lastly, the introduced inefficiency creates additional work for both registry operators and subscribers.

The SSAC has no idea why the need to reapply every 90 days was introduced, figuring it must have happened during implementation.
But it recommends that access agreements should automatically renew once they expire, eliminating the busywork of reapplying and closing the holes in researchers’ data sets.
As I’m not objective on this issue, I agree with that recommendation wholeheartedly.
I’m less keen on the SSAC’s recommendation that registries should be able to opt out of the auto-renewals on a per-subscriber basis. This will certainly be abused by the precious snowflake dot-brands that have already shown their reluctance to abide by their contractual obligations.
The SSAC report can be read here (pdf).