Kafka turns in grave as ICANN crowbars “useless” Greek TLD into the root

Kevin Murphy, September 9, 2019, Domain Policy

ICANN has finally approved a version of .eu in Greek script, but it’s already been criticized as “useless”.

Yesterday, ICANN’s board of directors rubber-stamped .ευ, the second internationalized domain name version of the European Union’s .eu, which will be represented in the DNS as .xn--qxa6a.
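For the curious, the “xn--” form is just Punycode, the ASCII-compatible encoding the DNS uses for all internationalized names. A minimal sketch of the round trip in Python, using the standard library’s IDNA codec:

```python
# Convert the Greek label to its ASCII-compatible ("xn--") form and back,
# using Python's built-in IDNA codec.
label = "ευ"
ace = label.encode("idna")   # ASCII-compatible encoding
print(ace)                   # b'xn--qxa6a'
print(ace.decode("idna"))    # back to "ευ"
```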

There’s a lot of history behind .ευ, much of it maddeningly illustrative of ICANN’s Kafkaesque obsession with procedure.

The first amusing thing to point out is that .ευ is technically being approved under ICANN’s IDN ccTLD Fast Track Process, a mere NINE YEARS after EURid first submitted its application.

The “Fast Track” has been used so far to approve 61 IDN ccTLDs. Often, the requested string is merely the name of the country in question, written in one of the local scripts, and the TLD is approved fairly quickly.

But in some cases, especially where the desired string is a two-character code, a string review will find the possibility of confusion with another TLD. This runs the risk of broadening the scope of domain homograph attacks sometimes used in phishing.
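The classic illustration of a homograph attack, not specific to the TLDs discussed here, is the Cyrillic/Latin lookalike pair: characters that render near-identically but are entirely different code points. A quick Python demonstration:

```python
import unicodedata

# Latin "a" and Cyrillic "а" look almost identical in most fonts, but
# they are different code points, so domains using them are different domains.
for ch in ("a", "а"):
    print(hex(ord(ch)), unicodedata.name(ch))
# 0x61  LATIN SMALL LETTER A
# 0x430 CYRILLIC SMALL LETTER A
```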

That’s what happened to .ευ, along with Bulgaria’s Cyrillic .бг and Greece’s own .ελ, which were rejected on string confusion grounds back in 2010 and 2011.

Under pressure from the Governmental Advisory Committee, ICANN then implemented an Extended Process Similarity Review Panel, essentially an appeals process designed to give unsuccessful Fast Track applicants a second bite at the apple.

That process led to Bulgaria being told that .бг was not too similar to Brazil’s .br, and Greece being told that .ελ did not look too much like .EA, a non-existent ccTLD that may or may not be delegated in future, after all.

But the EU’s .ευ failed at the same time, in 2014. The appeals review panel found that the string was confusable with upper-case .EY and .EV.

Again, these are not ccTLDs, just strings of two characters that have the potential to become ccTLDs in future should a new country or territory emerge and be assigned those codes by the International Organization for Standardization, a low-probability event.

I reported at the time that .ευ was probably as good as dead. It seemed pretty clear based on the rules at the time that if a string was confusable in uppercase OR lowercase, it would be rejected.

But I was quickly informed by ICANN that I was incorrect, and that ICANN top brass needed to discuss the results.

That seems to have led to ICANN tweaking the rules yet again in order to crowbar .ευ into the root.

In 2015, the board of directors reached out to the GAC, the ccNSO and the Security and Stability Advisory Committee for advice.

They dutifully returned two years later with proposed changes (pdf) that seemed tailor-made for the European Union’s predicament.

A requested IDN ccTLD that caused confusion with other strings in only uppercase, but not lowercase (just like .ευ!!!) could still get delegated, provided it had a comprehensive risk mitigation strategy in place, they recommended.

The recommendation was quickly approved by ICANN, which then sent its implementation guidelines (again, tailor-made for EURid (pdf)) back to the ccNSO/SSAC.

It was not until February this year that the ccNSO/SSAC group got back to ICANN (pdf) to approve its implementation plan and to say that it had already tested it against EURid’s proposed risk-mitigation plan (pdf).

Basically, the process in 2009 didn’t produce the desired result, so ICANN changed the process. It didn’t produce the desired result again in 2014, so the process was changed again.

But at least Greek-speaking EU citizens are finally going to get a meaningful ccTLD that allows them to express their EUishness in their native script, right?

WRONG!

I recently read with interest and surprise a blog post by domainer-blogger Konstantinos Zournas, in which he referred to .ευ as the “worst domain extension ever”.

Zournas, who is Greek, opened my eyes to the fact that “.ευ” is meaningless in his native tongue. It’s just two Greek letters that visually resemble “EU” in Latin script. It’s confusing by design, but only with .eu itself, a ccTLD that EURid already manages.

While not for a moment doubting Zournas’ familiarity with his own language, I had to confirm this on the EU’s Greek-language web site.

He’s right: the Greek for “European Union” is “Ευρωπαϊκή Ένωση”, so the sensible two-letter IDN ccTLD would be .ΕΈ (those are Greek characters that look a bit like Latin E).

That would have almost certainly failed the ICANN string similarity process, however, as .ee/EE is the current, extant ccTLD for Estonia.

In short (too late), it seems to have taken ICANN the best part of a decade, and Jesus H Christ knows how many person-hours, to hack its own procedures multiple times in order to force through an application for a TLD that doesn’t mean anything, can’t be confused with anything that currently exists on the internet, and probably won’t be widely used anyway.

Gratz to all involved!

More than 1,000 new gTLDs a year? Sure!

Kevin Murphy, September 5, 2019, Domain Tech

There’s no particular reason ICANN shouldn’t be able to add more than 1,000 new gTLDs to the DNS every year, according to security experts.

The Security and Stability Advisory Committee has informed ICANN (pdf) that the cap, which was in place for the 2012 application round, “has no relevance for the security of the root zone”.

Back then, ICANN had picked the 1,000-a-year upper limit for delegations more or less out of thin air, as a straw man for SSAC, the root server operators, and those who were opposed to new gTLDs in general to shake their sticks at. It was concluded that 1,000 should present no issues.

As it turned out, it took two and a half years for ICANN to add the first 1,000 new gTLDs, largely due to the manual elements of the application process.

SSAC is now reiterating its previous advice that monitoring the rate of change at the root is more important than how many TLDs are added, and that there needs to be a way to slam the brakes on delegations if things go titsup.

The committee is also far more concerned that some of the 2012 new gTLDs are being quite badly abused by spammers and the like, and that ICANN is not doing enough to address this problem.

Major registries posting “fabricated” Whois data

One or more of the major gTLD registries are publishing Whois query data that may be “fabricated”, according to some of ICANN’s top security minds.

The Security and Stability Advisory Committee recently wrote to ICANN’s top brass to complain about inconsistent and possibly outright bogus reporting of Whois port 43 query volumes.

SSAC said (pdf):

it appears that the WHOIS query statistics provided to ICANN by registry operators as part of their monthly reporting obligations are generally not reliable. Some operators are using different methods to count queries, some are interpreting the registry contract differently, and some may be reporting numbers that are fabricated or otherwise not reflective of reality. Reliable reporting is essential to the ICANN community, especially to inform policy-making.

SSAC says that the inconsistency of the data makes it very difficult to make informed decisions about the future of Whois access and to determine the impact of GDPR.

While the letter does not name names, I’ve replicated some of SSAC’s research and I think I’m in a position to point fingers.

In my opinion, Google, Verisign, Afilias and Donuts appear to be the causes of the greatest concern for SSAC, but several others exhibit behavior SSAC is not happy about.

I reached out to these four registries on Wednesday and have published their responses, if I received any, below.

SSAC’s concerns relate to the monthly data dumps that gTLD registries new and old are contractually obliged to provide ICANN, which publishes the data three months later.

Some of these stats concern billable transactions such as registrations and renewals. Others are used to measure uptime obligations. Others are largely of academic interest.

One such stat is “Whois port 43 queries”, defined in gTLD contracts as “number of WHOIS (port-43) queries responded during the reporting period”.

According to SSAC, and confirmed by my look at the data, there appears to be a wide divergence in how registries and back-end registry services providers calculate this number.

The most obvious example of bogosity is that some registries are reporting identical numbers for each of their TLDs. SSAC chair Rod Rasmussen told DI:

The largest issue we saw at various registries was the reporting of the exact or near exact same number of queries for many or all of their supported TLDs, regardless of how many registered domain names are in those zones. That result is a statistical improbability so vanishingly small that it seems clear that they were reporting some sort of aggregate number for all their TLDs, either as a whole or divided amongst them.

While Rasmussen would not name the registries concerned, my research shows that the main culprit here appears to be Google.

In its December data dumps, it reported exactly 68,031,882 port 43 queries for each of its 45 gTLDs.

If these numbers are to be believed, .app with its 385,000 domains received precisely the same amount of port 43 interest as .gbiz, which has no registrations.

As SSAC points out, this is simply not plausible.
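Replicating that check against ICANN’s published monthly reports is straightforward once the per-TLD counts have been parsed out. A minimal sketch, assuming you have already reduced one operator’s reports to a {tld: queries} mapping (the downloading and parsing are left out):

```python
from collections import Counter

def duplicated_counts(counts: dict) -> dict:
    """Return any query totals reported verbatim for more than one TLD."""
    tally = Counter(counts.values())
    return {value: n for value, n in tally.items() if n > 1}

# Two of Google's 45 gTLDs, with December's figures as reported:
google = {"app": 68_031_882, "gbiz": 68_031_882}
print(duplicated_counts(google))  # {68031882: 2}
```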

A Google spokesperson has not yet responded to DI’s request for comment.

Similarly, Afilias appears to have reported identical data for a subset of its dot-brand clients’ gTLDs, 16 of which purportedly had exactly 1,071,939 port 43 lookups in December.

Afilias has many more TLDs that did not report identical data.

An Afilias spokesperson told DI: “Afilias has submitted data to ICANN that addresses the anomaly and the update should be posted shortly.”

SSAC’s second beef is that one particular operator may have reported numbers that “were altered or synthesized”. SSAC said in its letter:

In a given month, the number of reported WHOIS queries for each of the operator’s TLDs is different. While some of the TLDs are much larger than others, the WHOIS query totals for them are close to each other. Further statistical analysis on the number of WHOIS queries per TLD revealed an abnormal distribution. For one month of data for one of the registries, the WHOIS query counts per TLD differed from the mean by about +/- 1%, nearly linearly. This appeared to be highly unusual, especially with TLDs that have different usage patterns and domain counts. There is a chance that the numbers were altered or synthesized.

I think SSAC could be referring here to either Donuts or Verisign.

Looking again at December’s data, all but one of Donuts’ gTLDs reported port 43 query counts between 99.3% and 100.7% of the mean of 458,658,327 queries.

Is it plausible that .gripe, with 1,200 registrations, is getting almost as much Whois traffic as .live, with 343,000? Seems unlikely.
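The statistical test for this second pattern is just as simple. Here’s a sketch under the same assumption of a pre-parsed {tld: queries} mapping, with made-up numbers that exhibit the suspicious clustering:

```python
from statistics import mean

def within_pct_of_mean(counts: dict, tol: float = 0.01) -> bool:
    """True if every TLD's total falls within ±tol of the operator's mean."""
    m = mean(counts.values())
    return all(abs(v - m) <= tol * m for v in counts.values())

# Illustrative numbers only, showing the ±1% clustering described above:
operator = {"gripe": 455_000_000, "live": 460_000_000, "guru": 458_000_000}
print(within_pct_of_mean(operator))  # True
```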

Donuts has yet to provide DI with its comments on the SSAC letter. I’ll update this post and tweet the link if I receive any new information.

All of the gTLDs Verisign manages on behalf of dot-brand clients, and some of its own non-.com gTLDs, exhibit the same pattern as Donuts in terms of all queries falling within +/- 1% of the mean, which is around 431 million per month.

So, as I put to Verisign, .realtor (~40k regs) purportedly has roughly the same number of port 43 queries as .comsec (which hasn’t launched).

Verisign explained this by saying that almost all of the port 43 queries it reports come from its own systems. A spokesperson told DI:

The .realtor and .comsec query responses are almost all responses to our own monitoring tools. After explaining to SSAC how Verisign continuously monitors its systems and services (which may be active in tens or even hundreds of locations at any given time) we are confident that the accuracy of the data Verisign reports is not in question. The reporting requirement calls for all query responses to be counted and does not draw a distinction between responses to monitoring and non-monitoring queries. If ICANN would prefer that all registries distinguish between the two, then it is up to ICANN to discuss that with registry operators.

It appears from the reported numbers that Verisign polls its own Whois servers more than 160 times per second. Donuts’ numbers are even larger.
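That rate is simple arithmetic from the reported monthly totals:

```python
# ~431M port 43 queries a month per TLD works out to ~166 per second.
monthly_queries = 431_000_000
seconds_per_month = 30 * 24 * 3600   # 2,592,000
print(round(monthly_queries / seconds_per_month))  # 166
```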

I would guess, based on the huge volumes of queries being reported by other registries, that this is common (but not universal) practice.

SSAC said that it approves of the practice of monitoring port 43 responses, but it does not think that registries should aggregate their own internal queries with those that come from real Whois consumers when reporting traffic to ICANN.

Either way, it thinks that all registries should calculate their totals in the same way, to make apples-to-apples comparisons possible.

Afilias’ spokesperson said: “Afilias agrees that everyone should report the data the same way.”

As far as ICANN goes, its standard registry contract is open to interpretation. It doesn’t really say why registries are expected to collect and supply this data, merely that they are obliged to do so.

The contracts do not specify whether registries are supposed to report these numbers to show off the load their servers are bearing, or to quantify demand for Whois services.

SSAC thinks it should be the latter.

You may be thinking that if it took a decade or more for anyone to notice the data is basically useless, it’s probably not all that important.

But SSAC thinks the poor data quality interferes with research on important policy and practical issues.

It’s rendered SSAC’s attempt to figure out whether GDPR and ICANN’s Temp Spec have had an effect on Whois queries pretty much futile, for example.

The meaningful research in question also includes work leading to the replacement of Whois with RDAP, the Registration Data Access Protocol.

Finally, there’s the looming possibility that ICANN may before long start acting as a clearinghouse for access to unredacted Whois records. If it has no idea how often Whois is actually used, that’s going to make planning its infrastructure very difficult, which in turn could lead to downtime.

Rasmussen told DI: “Our impression is that all involved want to get the numbers right, but there are inconsistent approaches to reporting between registry operators that lead to data that cannot be utilized for meaningful research.”

Set buttocks to clench! ICANN approves risky KSK rollover

Kevin Murphy, September 17, 2018, Domain Policy

ICANN has approved the first rollover of the domain name system’s master security key, setting the clock ticking on a change that could cause internet access issues for millions.

The so-called KSK rollover, in which ICANN deletes the key-signing key that has been used as the trust anchor for the DNSSEC ecosystem since 2010 and replaces it with a new one, will now go ahead as planned on October 11.

The decision was made yesterday at the ICANN board of directors’ retreat in Brussels.

ICANN chief technology officer David Conrad posted this to an ICANN mailing list this morning:

The Board voted to approve the resolution for ICANN org to move forward with the revised KSK rollover plan. So barring unforeseen circumstances, the KSK-2017-signed ZSK will be used to sign the root zone on 11 October 2018.

The rollover was due to happen October 11 last year, but ICANN delayed it when it emerged that many DNS resolvers weren’t yet configured to use the new key.

That’s still a problem, and nobody knows for sure how many endpoints will stop functioning properly when the new KSK goes solo.
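For those who want to look for themselves, the root zone’s published keys are a standard DNS query away. A minimal sketch using the third-party dnspython library (the key tags are public: KSK-2010 is 19036, KSK-2017 is 20326):

```python
import dns.resolver  # pip install dnspython (2.x)
import dns.dnssec

# Fetch the root zone's DNSKEY RRset and print the key tag of each KSK.
# A flags value of 257 marks a key-signing key; 256 is a zone-signing key.
for key in dns.resolver.resolve(".", "DNSKEY"):
    if key.flags == 257:
        print("root KSK, key tag:", dns.dnssec.key_id(key))
# Expect 20326 (KSK-2017); during the overlap period, 19036 (KSK-2010) too.
```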

While most experts weighing in on the rollover, including Conrad, agreed that the risk of more delay outweighed the risk of rolling now, that feeling was not unanimous.

Five members of the 22-member Security and Stability Advisory Committee — including top guys from Google and Verisign — last month dissented from the majority view and said ICANN should delay again.

The question now is not whether internet users will see a disruption in the days following October 11, but how many users will be affected and how serious their disruptions will be.

Based on current information, as many as two million internet users could be affected.

ICANN is likely to take flak for even relatively minor disruptions, but the alternative was to continue with the delays and risk an even bigger impact, and even more flak, in future.

The text of ICANN’s resolution and the rationale behind it will be published in the next day or so.

ICANN faces critical choice as security experts warn against key rollover

Kevin Murphy, August 23, 2018, Domain Tech

Members of ICANN’s top security body have advised the organization to further delay plans to change the domain name system’s top cryptographic key.

Five dissenting members of the influential, 22-member Security and Stability Advisory Committee said they believe “the risks of rolling in accordance with the current schedule are larger than the risks of postponing”.

Their comments relate to the so-called KSK rollover, which would see ICANN for the first time ever change the key-signing key that acts as the trust anchor for all DNSSEC queries on the internet.

ICANN is fairly certain rolling the key will cause DNS resolution problems for some — possibly as much as 0.05% of the internet or a couple million people — but it currently lacks the data to be absolutely certain of the scale of the impact.
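A back-of-the-envelope reading of that estimate, assuming roughly four billion internet users worldwide:

```python
# 0.05% of ~4 billion internet users:
print(int(0.0005 * 4_000_000_000))  # 2,000,000
```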

What it does know — explained fairly succinctly in this newly published guide (pdf) — is that within 48 hours of the roll, a certain small percentage of internet users will start to see DNS resolution fail.

But there’s a prevailing school of thought that believes the longer the rollover is postponed, the bigger that number of affected users will become.

The rollover is currently penciled in for October 11, but the ultimate decision on whether to go ahead rests with the ICANN board of directors.

David Conrad, the organization’s CTO, told us last week that his office has already decided to recommend that the roll should proceed as planned. At the time, he noted that SSAC was a few days late in delivering its own verdict.

Now, after some apparently divisive discussions, that verdict is in (pdf).

SSAC’s majority consensus is that it “has not identified any reason within the SSAC’s scope why the rollover should not proceed as currently planned.”

That’s in line with what Conrad and the Root Server System Advisory Committee have said. But SSAC noted:

The assessment of risk in this particular area has some uncertainty and therefore includes a component of subjective judgement. Individuals (including some members of the SSAC) have different assessments of the overall balance of risk of the resumption of this plan.

It added that it’s up to the ICANN board (composed largely of non-security people) to make the final call on what the acceptable level of risk is.

The minority, dissenting opinion gets into slightly more detail:

The decision to proceed with the keyroll is a complex tradeoff of technical and non-technical risks. While there is risk in proceeding with the currently planned roll, we understand that there is also risk in further delay, including loss of confidence in DNSSEC operational planning, potential for more at-risk users as more DNSSEC validation is deployed, etc.

While evaluating these risks, the consensus within the SSAC is that proceeding is preferable to delay. We personally evaluate the tradeoffs differently, and we believe that the risks of rolling in accordance with the current schedule are larger than the risks of postponing and focusing heavily on additional research and outreach, and in particular leveraging newly developed techniques that provide better signal and fidelity into potentially impacted parties.

We would like to reiterate that we understand our colleagues’ position, but evaluate the risks and associated mitigation prospects differently. We believe that the ultimate decision lies with the ICANN Board, and do not envy them with this decision.

SSAC members are no slouches when it comes to security expertise, and the dissenting members are no exception. They are:

  • Lyman Chapin, co-owner of Interisle Consulting, a regular ICANN contractor perhaps best-known to DI readers for carrying out a study into new gTLD name collisions five years ago.
  • Kimberly “kc claffy” Claffy, head of the Center for Applied Internet Data Analysis at the University of California in San Diego. CAIDA does nothing but map and measure the internet.
  • Jay Daley, a registry executive with a technical background whose career includes senior stints at .uk and .nz. He’s currently keeping the CEO’s chair warm at .org manager Public Interest Registry.
  • Warren Kumari, a senior network security engineer at Google, which is probably the largest early adopter of DNSSEC on the resolution side.
  • Danny McPherson, Verisign’s chief security officer. As well as .com, Verisign runs two of the 13 root servers, including the master A-root. It’s running the boxes that sit at the top of the DNSSEC hierarchy.

It may be the first time SSAC has failed to reach a full-consensus opinion on a security matter. If it has ever published a dissenting opinion before, I certainly cannot recall it.

The big decision about whether to proceed or delay is expected to be made by the ICANN board during its retreat in Brussels, a three-day meeting that starts September 14.

Given that ICANN’s primary mission is “to ensure the stable and secure operation of the Internet’s unique identifier systems”, it could turn out to be one of ICANN’s biggest decisions to date.