Amazon and Google among .internal TLD ban backers

Kevin Murphy, March 20, 2024, Domain Tech

Google and Amazon have publicly backed ICANN’s plan to reserve the top-level domain .internal for private behind-the-firewall uses.

ICANN picked the string “internal” as the one it will promise never to delegate to the DNS root, allowing network administrators and software developers to use it confidently, without the risk of data leakage they would otherwise face should the string ever be delegated to a registry in future.
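
What the reservation buys is predictability: a query for any .internal name that escapes to the public DNS should always come back NXDOMAIN, forever. Here’s a minimal sketch of that check using the dnspython library; the hostname is invented for illustration, and 8.8.8.8 (Google’s public resolver) stands in for any public recursive server:

    import dns.resolver  # pip install dnspython

    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]  # any public resolver will do

    try:
        # "vpn.gateway.internal" is a made-up behind-the-firewall name
        resolver.resolve("vpn.gateway.internal", "A")
    except dns.resolver.NXDOMAIN:
        print(".internal is absent from the public DNS, as promised")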

The public comment period on its choice closes tomorrow, with a generally supportive vibe from the 30-odd comments submitted so far.

Notably, tech giants Amazon and Google have both filed comments backing .internal. Both companies say they already use the TLD extensively for internal purposes (Google in its Cloud services) and that allowing it to be delegated in future would cause big problems.

Some commenters niggled that .internal is too long, and that something like .local or .lan, both already reserved, might be better. Others wondered why strings such as .corp or .home, which are already effectively banned due to the high risk of name collisions, were not chosen instead.

ICANN chair reins in new gTLD timeline hopes

Kevin Murphy, May 24, 2021, Domain Policy

Don’t get excited about the next round of new gTLDs launching any time soon.

That’s my takeaway from recent correspondence between ICANN’s chair and brand-owners who are apparently champing at the bit to get their teeth into some serious dot-brand action.

Maarten Botterman warned Brand Registry Group chair Cole Quinn that “significant work lies ahead” before the org can start accepting applications once more.

Quinn had urged ICANN to get a move on last month, saying in a letter that there was “significant demand” from trademark owners.

The last three-month application window ended in March 2012, governed by an Applicant Guidebook that said: “The goal is for the next application round to begin within one year of the close of the application submission period for the initial round.”

That plainly never happened, as ICANN proceeded to tie itself in bureaucratic knots and recursive cycles of review and analysis.

Any company that missed the boat or was founded in the meantime has been unable to even get a sniff of operating its own dot-brand, or indeed any other type of gTLD.

Spelling out some of the steps that need to be accomplished before the next window opens, Botterman wrote:

the 2012 Applicant Guidebook must be updated with more than 100 outputs from the SubPro PDP WG; we will need to apply lessons learned from the previous round, many of which are documented in the 2016 Program Implementation Review, and appropriate resources for implementing and conducting subsequent rounds must be put in place. At present it appears that WG recommendations will benefit from an Operational Design Phase (ODP) to provide the Board with information on the operational implications of implementing the recommendations. As part of such an ODP, the Board may also task ICANN org to provide an assessment of some of the issues of concern that the Board raised in its comments on the Draft Final Report, as well as those topics that did not reach consensus and were thus not adopted by the GNSO Council. The outcome of such an assessment could also add to the work that would be required before launching subsequent rounds.

The Board notes your views regarding SAC114. We are aware of discussions that took place during ICANN70 and the Board is in communication with the Security and Stability Committee (SSAC) and its leadership, as per the ‘Understand’ phase of the Board Advice Process. As with all advice items received, the Board will treat SAC114 in accordance with that process.

Breaking that down for your convenience…

The reference to “more than 100 outputs from the SubPro PDP WG” points to the now six-year-old Policy Development Process for New gTLD Subsequent Procedures working group of the GNSO.

SubPro delivered its final report in January and it was adopted by the GNSO Council in February.

ICANN asked the Governmental Advisory Committee for its formal input a few weeks ago, has opened the report for a public comment period that ends June 1, and will accept or reject the report at some point in the future.

SubPro’s more significant recommendations include the creation of a new accreditation mechanism for registry back-end service providers and a gaming-preventing overhaul of the contention resolution process.

The “2016 Program Implementation Review” is a reference to a self-assessment of the 2012 round that ICANN staff carried out six years ago, producing a 215-page report (pdf).

That report contains about 50 recommendations covering areas where staff thought the system of actually processing new gTLD applications could possibly be improved or streamlined in subsequent rounds.

The Operational Design Phase (ODP) Botterman refers to is a brand-new phase of ICANN bureaucracy that is currently untested. It fits between GNSO Council approval of recommendations and ICANN board consideration.

The ODP is basically a way for ICANN staff to insert itself into the process, between community policy-making and community policy-approval, to make sure the GNSO’s tenuous consensus-building exercise has not produced something too crazily complicated, ineffective or expensive to implement.

Staff denies this is a power-grab.

The ODP is currently being deployed to assess proposed changes to Whois privacy policy, and ICANN has already stated multiple times that it will also be used to vet SubPro’s work.

Botterman’s reference to “issues of concern that the Board raised in its comments on the Draft Final Report” seems to mean this September 2020 letter (pdf) to SubPro’s chairs, in which the ICANN board outlined some of its initial concerns with SubPro’s proposed policies.

One fairly important concern was whether ICANN has the power under its bylaws (which have changed since 2012) to enforce Public Interest Commitments (now called Registry Voluntary Commitments) that SubPro thinks could be used to make some sensitive gTLDs more trustworthy.

The reference to SAC114 may turn out to be a big stumbling block too.

SAC114 is the bombshell document (pdf) submitted by the Security and Stability Advisory Committee in February, in which ICANN’s top security community members openly questioned whether allowing more new gTLDs is consistent with ICANN’s commitment to keep the internet secure.

While SAC114 seems to reluctantly acknowledge that the program will likely go ahead regardless, it asks that ICANN do more to address so-called “DNS abuse” before proceeding.

Given that the various factions within the ICANN community can’t even agree on what “DNS abuse” is, how ICANN chooses to “understand” SAC114 will have a serious impact on how much further the runway to the next round gets extended.

In short, Botterman is warning brand owners not to hold their breath anticipating the next application window. I think I even detect some serious skepticism as to whether demand is really as high as Quinn claims.

And quite beyond the stuff Botterman outlines in his letter, there’s presumably going to be at least one round of review and revision on the next Applicant Guidebook, as well as the time needed for ICANN to build or upgrade the systems it needs to process the applications, to hire evaluators and resolution providers, and to make sure it conducts a sufficiently long and broad global marketing program so that potential applicants in the developing world don’t feel left out. And that’s a non-exhaustive list.

Introducing competition into the registry space is of course one of ICANN’s foundational raisons d’être.

After the org was founded in September 1998, it took less than two years before it opened up the first new gTLD application round.

It was another three years before the second round launched.

It then took eight and a half years for the 2012 window to open.

It will be well over a decade from then before anyone next gets the opportunity to apply for a new gTLD. It’s entirely feasible that we’ll see an applicant in the next round headed by somebody who wasn’t even born when the first window opened.

Kafka turns in grave as ICANN crowbars “useless” Greek TLD into the root

Kevin Murphy, September 9, 2019, Domain Policy

ICANN has finally approved a version of .eu in Greek script, but it’s already been criticized as “useless”.
Yesterday, ICANN’s board of directors rubber-stamped .ευ, the second internationalized domain name version of the European Union’s .eu, which will be represented in the DNS as .xn--qxa6a.
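For anyone who wants to check that encoding, Python’s built-in codecs reproduce it; a quick sketch (the punycode codec gives the bare ASCII form, while the idna codec adds the xn-- prefix):

    label = "ευ"
    print(label.encode("punycode"))  # b'qxa6a', the bare Punycode
    print(label.encode("idna"))      # b'xn--qxa6a', the DNS label form
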
There’s a lot of history behind .ευ, much of it maddeningly illustrative of ICANN’s Kafkaesque obsession with procedure.
The first amusing thing to point out is that .ευ is technically being approved under ICANN’s IDN ccTLD Fast Track Process, a mere NINE YEARS after EURid first submitted its application.
The “Fast Track” has been used so far to approve 61 IDN ccTLDs. Often, the requested string is merely the name of the country in question, written in one of the local scripts, and the TLD is approved fairly quickly.
But in some cases, especially where the desired string is a two-character code, a string review will find the possibility of confusion with another TLD. This runs the risk of broadening the scope of domain homograph attacks sometimes used in phishing.
That’s what happened to .ευ, along with Bulgaria’s Cyrillic .бг and Greece’s own .ελ, which were rejected on string confusion grounds back in 2010 and 2011.
Under pressure from the Governmental Advisory Committee, ICANN then implemented an Extended Process Similarity Review Panel, essentially an appeals process designed to give unsuccessful Fast Track applicants a second bite at the apple.
That process led to Bulgaria being told that .бг was not too similar to Brazil’s .br, and Greece being told that .ελ did not look too much like .EA, a non-existent ccTLD that may or may not be delegated in future, after all.
But the EU’s .ευ failed at the same time, in 2014. The appeals review panel found that the string was confusable with upper-case .EY and .EV.
Again, these are not ccTLDs, just strings of two characters that have the potential to become ccTLDs in future should a new country or territory emerge and be assigned those codes by the International Organization for Standardization, a low-probability event.
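For a sense of what the review panels were worried about, compare the Greek capitals in .ΕΥ with Latin .EY: four distinct code points that many fonts render near-identically. A quick illustration using Python’s standard unicodedata module:

    import unicodedata

    for ch in "ΕΥEY":  # Greek capital epsilon, upsilon; then Latin E, Y
        print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
    # U+0395 GREEK CAPITAL LETTER EPSILON
    # U+03A5 GREEK CAPITAL LETTER UPSILON
    # U+0045 LATIN CAPITAL LETTER E
    # U+0059 LATIN CAPITAL LETTER Y
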
I reported at the time that .ευ was probably as good as dead. It seemed pretty clear based on the rules at the time that if a string was confusable in uppercase OR lowercase, it would be rejected.
But I was quickly informed by ICANN that I was incorrect, and that ICANN top brass needed to discuss the results.
That seems to have led to ICANN tweaking the rules yet again in order to crowbar .ευ into the root.
In 2015, the board of directors reached out to the GAC, the ccNSO and the Security and Stability Advisory Committee for advice.
They dutifully returned two years later with proposed changes (pdf) that seemed tailor-made for the European Union’s predicament.
A requested IDN ccTLD that caused confusion with other strings in only uppercase, but not lowercase (just like .ευ!!!) could still get delegated, provided it had a comprehensive risk mitigation strategy in place, they recommended.
The recommendation was quickly approved by ICANN, which then sent its implementation guidelines (again, tailor-made for EURid (pdf)) back to the ccNSO/SSAC.
It was not until February this year that the ccNSO/SSAC group got back to ICANN (pdf) to approve its implementation plan and to say that it had already tested it against EURid’s proposed risk-mitigation plan (pdf).
Basically, the process in 2009 didn’t produce the desired result, so ICANN changed the process. It didn’t produce the desired result again in 2014, so the process was changed again.
But at least Greek-speaking EU citizens are finally going to get a meaningful ccTLD that allows them to express their EUishness in their native script, right?
WRONG!
I recently read with interest and surprise a blog post by domainer-blogger Konstantinos Zournas, in which he referred to .ευ as the “worst domain extension ever”.
Zournas, who is Greek, opened my eyes to the fact that “.ευ” is meaningless in his native tongue. It’s just two Greek letters that visually resemble “EU” in Latin script, making it confusable by design with .eu, a ccTLD that EURid already manages.
While not for a moment doubting Zournas’ familiarity with his own language, I had to confirm this on the EU’s Greek-language web site.
He’s right: the Greek for “European Union” is “Ευρωπαϊκή Ένωση”, so the sensible two-letter IDN ccTLD would be .ΕΈ (those are Greek characters that look a bit like Latin E).
That would have almost certainly failed the ICANN string similarity process, however, as .ee/EE is the current, extant ccTLD for Estonia.
In short (too late), it seems to have taken ICANN the best part of a decade, and Jesus H Christ knows how many person-hours, to hack its own procedures multiple times in order to force through an application for a TLD that doesn’t mean anything, can’t be confused with anything that currently exists on the internet, and probably won’t be widely used anyway.
Gratz to all involved!

More than 1,000 new gTLDs a year? Sure!

Kevin Murphy, September 5, 2019, Domain Tech

There’s no particular reason ICANN shouldn’t be able to add more than 1,000 new gTLDs to the DNS every year, according to security experts.
The Security and Stability Advisory Committee has informed ICANN (pdf) that the cap, which was in place for the 2012 application round, “has no relevance for the security of the root zone”.
Back then, ICANN had picked the 1,000-a-year upper limit for delegations more or less out of thin air, as a straw man for SSAC, the root server operators, and those who were opposed to new gTLDs in general to shake their sticks at. It was concluded that 1,000 should present no issues.
As it turned out, it took two and a half years for ICANN to add the first 1,000 new gTLDs, largely due to the manual elements of the application process.
SSAC is now reiterating its previous advice that monitoring the rate of change at the root is more important than how many TLDs are added, and that there needs to be a way to slam the brakes on delegations if things go titsup.
The committee is also far more concerned that some of the 2012 new gTLDs are being quite badly abused by spammers and the like, and that ICANN is not doing enough to address this problem.

Major registries posting “fabricated” Whois data

One or more of the major gTLD registries are publishing Whois query data that may be “fabricated”, according to some of ICANN’s top security minds.
The Security and Stability Advisory Committee recently wrote to ICANN’s top brass to complain about inconsistent and possibly outright bogus reporting of Whois port 43 query volumes.
SSAC said (pdf):

it appears that the WHOIS query statistics provided to ICANN by registry operators as part of their monthly reporting obligations are generally not reliable. Some operators are using different methods to count queries, some are interpreting the registry contract differently, and some may be reporting numbers that are fabricated or otherwise not reflective of reality. Reliable reporting is essential to the ICANN community, especially to inform policy-making.

SSAC says that the inconsistency of the data makes it very difficult to make informed decisions about the future of Whois access and to determine the impact of GDPR.
While the letter does not name names, I’ve replicated some of SSAC’s research and I think I’m in a position to point fingers.
In my opinion, Google, Verisign, Afilias and Donuts appear to be the causes of the greatest concern for SSAC, but several others exhibit behavior SSAC is not happy about.
I reached out to these four registries on Wednesday and have published their responses, if I received any, below.
SSAC’s concerns relate to the monthly data dumps that gTLD registries new and old are contractually obliged to provide ICANN, which publishes the data three months later.
Some of these stats concern billable transactions such as registrations and renewals. Others are used to measure uptime obligations. Others are largely of academic interest.
One such stat is “Whois port 43 queries”, defined in gTLD contracts as “number of WHOIS (port-43) queries responded during the reporting period”.
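For context, the protocol being counted is about as simple as they come: open a TCP connection to port 43, send the query string and a CRLF, and read until the server closes the connection (RFC 3912). A minimal sketch in Python; whois.verisign-grs.com is the .com Whois server, so substitute whichever registry you want to query:

    import socket

    def whois_query(domain, server):
        # RFC 3912: send the query plus CRLF, read the response to EOF.
        with socket.create_connection((server, 43), timeout=10) as sock:
            sock.sendall((domain + "\r\n").encode("ascii"))
            chunks = []
            while chunk := sock.recv(4096):
                chunks.append(chunk)
        return b"".join(chunks).decode("utf-8", errors="replace")

    print(whois_query("example.com", "whois.verisign-grs.com"))

Every connection like that one is supposed to increment the registry’s reported counter.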
According to SSAC, and confirmed by my look at the data, there appears to be a wide divergence in how registries and back-end registry services providers calculate this number.
The most obvious example of bogosity is that some registries are reporting identical numbers for each of their TLDs. SSAC chair Rod Rasmussen told DI:

The largest issue we saw at various registries was the reporting of the exact or near exact same number of queries for many or all of their supported TLDs, regardless of how many registered domain names are in those zones. That result is a statistical improbability so vanishingly small that it seems clear that they were reporting some sort of aggregate number for all their TLDs, either as a whole or divided amongst them.

While Rasmussen would not name the registries concerned, my research shows that the main culprit here appears to be Google.
In its December data dumps, it reported exactly 68,031,882 port 43 queries for each of its 45 gTLDs.
If these numbers are to be believed, .app with its 385,000 domains received precisely the same amount of port 43 interest as .gbiz, which has no registrations.
As SSAC points out, this is simply not plausible.
A Google spokesperson has not yet responded to DI’s request for comment.
Similarly, Afilias appears to have reported identical data for a subset of its dot-brand clients’ gTLDs, 16 of which purportedly had exactly 1,071,939 port 43 lookups in December.
Afilias has many more TLDs that did not report identical data.
An Afilias spokesperson told DI: “Afilias has submitted data to ICANN that addresses the anomaly and the update should be posted shortly.”
SSAC’s second beef is that one particular operator may have reported numbers that “were altered or synthesized”. SSAC said in its letter:

In a given month, the number of reported WHOIS queries for each of the operator’s TLDs is different. While some of the TLDs are much larger than others, the WHOIS query totals for them are close to each other. Further statistical analysis on the number of WHOIS queries per TLD revealed an abnormal distribution. For one month of data for one of the registries, the WHOIS query counts per TLD differed from the mean by about +/- 1%, nearly linearly. This appeared to be highly unusual, especially with TLDs that have different usage patterns and domain counts. There is a chance that the numbers were altered or synthesized.

I think SSAC could be referring here to either Donuts or Verisign.
Looking again at December’s data, all but one of Donuts’ gTLDs reported port 43 queries between 99.3% and 100.7% of the mean average of 458,658,327 queries.
Is it plausible that .gripe, with 1,200 registrations, is getting almost as much Whois traffic as .live, with 343,000? Seems unlikely.
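Replicating this kind of sanity check is straightforward, for what it’s worth. A minimal sketch, assuming a month of ICANN’s registry reports has already been parsed into a dict mapping each TLD to its reported query count; the two figures below are the December numbers reported for Google’s .app and .gbiz:

    from statistics import mean

    def flag_whois_counts(counts, tolerance=0.01):
        # counts: dict of TLD -> reported port 43 queries for one month.
        values = list(counts.values())
        if len(values) < 2:
            return None
        if len(set(values)) == 1:
            return "identical counts across all TLDs"
        avg = mean(values)
        if all(abs(v - avg) <= tolerance * avg for v in values):
            return "all counts within +/- {:.0%} of the mean".format(tolerance)
        return None

    print(flag_whois_counts({".app": 68_031_882, ".gbiz": 68_031_882}))
    # -> identical counts across all TLDs
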
Donuts has yet to provide DI with its comments on the SSAC letter. I’ll update this post and tweet the link if I receive any new information.
All of the gTLDs Verisign manages on behalf of dot-brand clients, and some of its own non-.com gTLDs, exhibit the same pattern as Donuts in terms of all queries falling within +/- 1% of the mean, which is around 431 million per month.
So, as I put to Verisign, .realtor (~40k regs) purportedly has roughly the same number of port 43 queries as .comsec (which hasn’t launched).
Verisign explained this by saying that almost all of the port 43 queries it reports come from its own systems. A spokesperson told DI:

The .realtor and .comsec query responses are almost all responses to our own monitoring tools. After explaining to SSAC how Verisign continuously monitors its systems and services (which may be active in tens or even hundreds of locations at any given time) we are confident that the accuracy of the data Verisign reports is not in question. The reporting requirement calls for all query responses to be counted and does not draw a distinction between responses to monitoring and non-monitoring queries. If ICANN would prefer that all registries distinguish between the two, then it is up to ICANN to discuss that with registry operators.

It appears from the reported numbers that Verisign polls its own Whois servers more than 160 times per second: 431 million queries over a 30-day month works out to about 166 per second. Donuts’ numbers are even larger.
I would guess, based on the huge volumes of queries being reported by other registries, that this is common (but not universal) practice.
SSAC said that it approves of the practice of monitoring port 43 responses, but it does not think that registries should aggregate their own internal queries with those that come from real Whois consumers when reporting traffic to ICANN.
Either way, it thinks that all registries should calculate their totals in the same way, to make apples-to-apples comparisons possible.
Afilias’ spokesperson said: “Afilias agrees that everyone should report the data the same way.”
As far as ICANN goes, its standard registry contract is open to interpretation. It doesn’t really say why registries are expected to collect and supply this data, merely that they are obliged to do so.
The contracts do not specify whether registries are supposed to report these numbers to show off the load their servers are bearing, or to quantify demand for Whois services.
SSAC thinks it should be the latter.
You may be thinking that the fact that it’s taken a decade or more for anyone to notice that the data is basically useless means that it’s probably not all that important.
But SSAC thinks the poor data quality interferes with research on important policy and practical issues.
It’s rendered SSAC’s attempt to figure out whether GDPR and ICANN’s Temp Spec have had an effect on Whois queries pretty much futile, for example.
The meaningful research in question also includes work leading to the replacement of Whois with RDAP, the Registration Data Access Protocol.
Finally, there’s the looming possibility that ICANN may before long start acting as a clearinghouse for access to unredacted Whois records. If it has no idea how often Whois is actually used, that’s going to make planning its infrastructure very difficult, which in turn could lead to downtime.
Rasmussen told DI: “Our impression is that all involved want to get the numbers right, but there are inconsistent approaches to reporting between registry operators that lead to data that cannot be utilized for meaningful research.”

Set buttocks to clench! ICANN approves risky KSK rollover

Kevin Murphy, September 17, 2018, Domain Policy

ICANN has approved the first rollover of the domain name system’s master security key, setting the clock ticking on a change that could cause internet access issues for millions.
The so-called KSK rollover, in which ICANN deletes the key-signing key that has been used as the trust anchor for the DNSSEC ecosystem since 2010 and replaces it with the new one, will now go ahead as planned on October 11.
The decision was made yesterday at the ICANN board of directors’ retreat in Brussels.
ICANN chief technology officer David Conrad posted this to an ICANN mailing list this morning:

The Board voted to approve the resolution for ICANN org to move forward with the revised KSK rollover plan. So barring unforeseen circumstances, the KSK-2017-signed ZSK will be used to sign the root zone on 11 October 2018.

The rollover was due to happen October 11 last year, but ICANN delayed it when it emerged that many DNS resolvers weren’t yet configured to use the new key.
That’s still a problem, and nobody knows for sure how many endpoints will stop functioning properly when the new KSK goes solo.
While most experts weighing in on the rollover, including Conrad, agreed that the risk of more delay outweighed the risk of rolling now, that feeling was not unanimous.
Five members of the 22-member Security and Stability Advisory Committee — including top guys from Google and Verisign — last month dissented from the majority view and said ICANN should delay again.
The question now is not whether internet users will see a disruption in the days following October 11, but how many users will be affected and how serious their disruptions will be.
Based on current information, as many as two million internet users could be affected.
ICANN is likely to take flak for even relatively minor disruptions, but the alternative was to continue with the delays and risk an even bigger impact, and even more flak, in future.
The text of ICANN’s resolution and the rationale behind it will be published in the next day or so.

ICANN faces critical choice as security experts warn against key rollover

Kevin Murphy, August 23, 2018, Domain Tech

Members of ICANN’s top security body have advised the organization to further delay plans to change the domain name system’s top cryptographic key.
Five dissenting members of the influential, 22-member Security and Stability Advisory Committee said they believe “the risks of rolling in accordance with the current schedule are larger than the risks of postponing”.
Their comments relate to the so-called KSK rollover, which would see ICANN for the first time ever change the key-signing key that acts as the trust anchor for all DNSSEC queries on the internet.
ICANN is fairly certain rolling the key will cause DNS resolution problems for some — possibly as much as 0.05% of the internet or a couple million people — but it currently lacks the data to be absolutely certain of the scale of the impact.
What it does know — explained fairly succinctly in this newly published guide (pdf) — is that within 48 hours of the roll, a certain small percentage of internet users will start to see DNS resolution fail.
But there’s a prevailing school of thought that believes the longer the rollover is postponed, the bigger that number of affected users will become.
The rollover is currently penciled in for October 11, but the ultimate decision on whether to go ahead rests with the ICANN board of directors.
David Conrad, the organization’s CTO, told us last week that his office has already decided to recommend that the roll should proceed as planned. At the time, he noted that SSAC was a few days late in delivering its own verdict.
Now, after some apparently divisive discussions, that verdict is in (pdf).
SSAC’s majority consensus is that it “has not identified any reason within the SSAC’s scope why the rollover should not proceed as currently planned.”
That’s in line with what Conrad and the Root Server System Advisory Committee have said. But SSAC noted:

The assessment of risk in this particular area has some uncertainty and therefore includes a component of subjective judgement. Individuals (including some members of the SSAC) have different assessments of the overall balance of risk of the resumption of this plan.

It added that it’s up to the ICANN board (composed largely of non-security people) to make the final call on what the acceptable level of risk is.
The minority, dissenting opinion gets into slightly more detail:

The decision to proceed with the keyroll is a complex tradeoff of technical and non-technical risks. While there is risk in proceeding with the currently planned roll, we understand that there is also risk in further delay, including loss of confidence in DNSSEC operational planning, potential for more at-risk users as more DNSSEC validation is deployed, etc.
While evaluating these risks, the consensus within the SSAC is that proceeding is preferable to delay. We personally evaluate the tradeoffs differently, and we believe that the risks of rolling in accordance with the current schedule are larger than the risks of postponing and focusing heavily on additional research and outreach, and in particular leveraging newly developed techniques that provide better signal and fidelity into potentially impacted parties.
We would like to reiterate that we understand our colleagues’ position, but evaluate the risks and associated mitigation prospects differently. We believe that the ultimate decision lies with the ICANN Board, and do not envy them with this decision.

SSAC members are no slouches when it comes to security expertise, and the dissenting members are no exception. They are:

  • Lyman Chapin, co-owner of Interisle Consulting, a regular ICANN contractor perhaps best-known to DI readers for carrying out a study into new gTLD name collisions five years ago.
  • Kimberly “kc claffy” Claffy, head of the Center for Applied Internet Data Analysis at the University of California, San Diego. CAIDA does nothing but map and measure the internet.
  • Jay Daley, a registry executive with a technical background whose career includes senior stints at .uk and .nz. He’s currently keeping the CEO’s chair warm at .org manager Public Interest Registry.
  • Warren Kumari, a senior network security engineer at Google, which is probably the largest early adopter of DNSSEC on the resolution side.
  • Danny McPherson, Verisign’s chief security officer. As well as .com, Verisign runs two of the 13 root servers, including the master A-root. It’s running the boxes that sit at the top of the DNSSEC hierarchy.

It may be the first time SSAC has failed to reach a full-consensus opinion on a security matter. If it has ever published a dissenting opinion before, I certainly cannot recall it.
The big decision about whether to proceed or delay is expected to be made by the ICANN board during its retreat in Brussels, a three-day meeting that starts September 14.
Given that ICANN’s primary mission is “to ensure the stable and secure operation of the Internet’s unique identifier systems”, it could turn out to be one of ICANN’s biggest decisions to date.

ICANN CTO: no reason to delay KSK rollover

Kevin Murphy, August 15, 2018, Domain Tech

ICANN’s board of directors will be advised to go ahead with a key security change at the DNS root, the so-called KSK rollover, this October, according to the organization’s CTO.
“We don’t see any reason to postpone again,” David Conrad told DI on Monday.
If it does go ahead as planned, the rollover will see ICANN change the key-signing key that acts as the trust anchor for the whole DNSSEC-using internet, for the first time since DNSSEC came online in 2010.
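Anyone curious about which keys are in play can pull the root zone’s DNSKEY RRset and list the key tags. A sketch using the dnspython library; 19036 is the well-known tag of KSK-2010, the outgoing trust anchor, and 20326 is KSK-2017, the incoming one:

    import dns.resolver
    import dns.dnssec  # both modules ship with dnspython

    # Keys with the SEP bit set (flags 257) are KSKs; 256 marks ZSKs.
    for rdata in dns.resolver.resolve(".", "DNSKEY"):
        role = "KSK" if rdata.flags & 0x0001 else "ZSK"
        print(role, dns.dnssec.key_id(rdata))
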
It’s been delayed since last October, after it emerged that misconfigurations elsewhere in the DNS could cause potentially millions of internet users to see glitches when the key is rolled.
Ever since then, ICANN and others have been trying to figure out how many people could be adversely affected by the change, and to reduce that number to the greatest extent possible.
The impact has been tricky to estimate due to patchy data.
While it’s been possible to determine a number of resolvers — about 8,000 — that definitely are poorly configured, that only represents a subset of the total number. It’s also been hard to map that to endpoints due to “resolvers behind resolvers behind resolvers”, Conrad said.
“The problem here is that it’s sort of a subjective evaluation,” he said. “We can’t rely on the data we’re seeing. We’re seeing the resolvers but we’re not seeing the users behind the resolvers.”
Some say that the roll is still too risky to carry out without better visibility into the potential impact, but others say that more delays would lead to more networks and devices becoming DNSSEC-compatible, potentially leading to even greater problems after the eventual rollover.
ICANN knows of about 8,000 resolver IP addresses that are likely to stop working properly after the rollover, because they only support the current KSK, but that’s only counting resolvers that automatically report their status to the root using a relatively new internet standard, RFC 8145 trust anchor signaling. There’s a blind spot concerning resolvers that do not have that feature turned on.
ICANN has also had difficulty reaching out to the network operators behind these resolvers, with good contact information apparently only available for about a quarter of the affected IP addresses, Conrad said.
Right now, the best data available suggests that 0.05% of the internet’s population could see access issues after the October 11 rollover, according to Conrad.
That’s about two million people (0.05% of an internet population of roughly four billion), but it’s a tenth of the 0.5% acceptable collateral damage threshold outlined in ICANN’s rollover plan.
The 0.05% number comes from research by APNIC, which used Google’s advertising system to place “zero-pixel ads” to check whether individual user endpoints were using compatible resolvers or not.
If problems do emerge October 11 the temporary solution is apparently quite quick to implement — network operators can simply turn off DNSSEC, assuming they know that’s what they’re supposed to do.
But still, if a million or two internet users could have their day ruined by the rollover, why do it at all?
It’s not as if the KSK is in any danger of being cracked any time soon. Conrad explained that a successful brute-force attack on the 2048-bit RSA key would take longer than the lifetime of the universe using current technology.
Rather, the practice of rolling the key every five years is to get network operators and developers accustomed to the idea that the KSK is not a permanent fixture that can be hard-coded into their systems, Conrad said.
It’s a problem comparable to new gTLD name collisions or the Y2K problem, instances where developers respectively hard-coded assumptions about valid TLDs or the century into their software.
ICANN has already been reaching out to the managers of open-source projects on repositories such as GitHub that have been seen to hard-code the current KSK into their software, Conrad said.
Separately, Wes Hardaker at the University of Southern California Information Sciences Institute discovered that a popular VPN client was misconfigured. Outreach to the developer saw the problem fixed, reducing the number of users who will be affected by the roll.
“What we’re trying to avoid is having these keys hardwired into firmware, so that it would never be changeable,” he said. “The idea is if you exercise the infrastructure frequently enough, people will know that the key is not permanent configuration, it’s not something embedded in concrete.”
One change that ICANN may want to make in future is to change the algorithm used to generate the KSK.
Right now it’s using RSA, but Conrad said it has downsides such as rather large signature size, which leads to heavier DNSSEC traffic. By switching to elliptic curve cryptography, signatures could be reduced by “orders of magnitude”, leading to a more efficient and slimline DNS infrastructure, Conrad said.
Last week, ICANN’s Root Server System Advisory Committee issued an advisory (pdf) that essentially gave ICANN the all-clear to go ahead with the roll.
The influential Security and Stability Advisory Committee has yet to issue its own advisory, however, despite being asked to do so by August 10.
Could SSAC be more cautious in its advice? We’ll have to wait and see, but perhaps not too long; the current plan is for the ICANN board to consider whether to go ahead with the roll during its three-day Brussels retreat, which starts September 14.

Zone file access is crap, security panel confirms

Kevin Murphy, June 20, 2017, Domain Policy

ICANN’s Centralized Zone Data Service has some serious shortcomings and needs an overhaul, according to the Security and Stability Advisory Committee.
The panel of DNS security experts has confirmed what CZDS subscribers, including your humble correspondent, have known since 2014 — the system had a major design flaw baked in from day one for no readily apparent reason.
CZDS is the centralized repository of gTLD zone files. It’s hosted by ICANN and aggregates zones from all 2012-round, and some older, gTLDs on a daily basis.
Signing up for it is fairly simple. You simply fill out your contact information, agree to the terms of service, select which zones you want and hit “submit”.
The purpose of the service is to allow researchers to receive zone files without having to enter into separate agreements with each of the 1,200+ gTLDs currently online.
The major problem, as subscribers know and SSAC has confirmed, is that the default subscription period is 90 days.
Unless the gTLD registry extends the period at its end, at its own discretion, each subscription ends after three months — cutting off access — and the subscriber must reapply.
Many of the larger registries exercise this option, but many — particularly dot-brands — do not.
The constant need to reapply and re-approve creates a recurring arse-ache for subscribers and, registry staff have told me, the registries themselves.
The approval process itself is highly unpredictable. Some of the major registries process requests within 24 hours — I’ve found Afilias is the fastest — but I’ve been waiting for approval for Valuetainment’s .voting since September 2016.
Some dot-brands even attempt to insert extra terms of service into the deal before approving requests, which defeats the entire purpose of having a centralized service in the first place.
Usually, a polite email to the person handling the requests can produce results. Other times, it’s necessary to report them to ICANN Compliance.
The SSAC has evidently interviewed many people who share my concerns, as well as looking at data from Compliance (where CZDS reliably generates the most complaints, wasting the time of Compliance staff).

This situation makes zone file access unreliable and subject to unnecessary interruptions. The missing data introduces “blind spots” in security coverage and research projects, and the reliability of software – such as security and analytics applications – that relies upon zone files is reduced. Lastly, the introduced inefficiency creates additional work for both registry operators and subscribers.

The SSAC has no idea why the need to reapply every 90 days was introduced, figuring it must have happened during implementation.
But it recommends that access agreements should automatically renew once they expire, eliminating the busywork of reapplying and closing the holes in researchers’ data sets.
As I’m not objective on this issue, I agree with that recommendation wholeheartedly.
I’m less keen on the SSAC’s recommendation that registries should be able to opt out of the auto-renewals on a per-subscriber basis. This will certainly be abused by the precious snowflake dot-brands that have already shown their reluctance to abide by their contractual obligations.
The SSAC report can be read here (pdf).

Emoji domains get a 👎 from security panel

Kevin Murphy, May 30, 2017, Domain Tech

The use of emojis in domain names has been discouraged by ICANN’s Security and Stability Advisory Committee.
In a paper late last week, SSAC told ICANN that emojis — aka emoticons or smileys — lack standardization, are barred by the relevant domain name technical standards, and could cause user confusion.
Emoji domains, while technically possible, are not particularly prevalent on the internet right now.
They’re implicitly banned in gTLDs due to the contractual requirement to adhere to the IDNA2008 standard, which restricts internationalized domain names to actual spoken human languages, and the only ccTLD I’m aware of actively marketing the names is Samoa’s .ws.
There was a notable example of Coca Cola registering 😀.ws (xn--h28h.ws) for a billboard marketing campaign in Puerto Rico a couple of years ago, but that name has since expired and been registered by an Australian photographer.
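Both halves of that are easy to demonstrate: bare Punycode will happily encode an emoji, while an IDNA2008 implementation refuses it. A sketch using the Python idna package; U+1F603 is the smiley that the xn--h28h label above encodes:

    import idna  # PyPI package implementing IDNA2008

    label = "\U0001F603"
    print(label.encode("punycode"))  # b'h28h', bare Punycode has no rules

    try:
        idna.encode(label)
    except idna.IDNAError as err:
        print("IDNA2008 rejects it:", err)
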
The SSAC said that emoji use should be banned in TLDs and discouraged at the second level for several reasons.
Mainly, the problem is that while emojis are described in the Unicode standards, there’s no standardization across devices and applications as to how they are displayed.
A certain degree of creative flair is permitted, meaning a smiling face in one app may look quite different from the technically identical emoji in another app. On smaller screens and with smaller fonts, technically different emojis may look alike.
This could lead to confusion, which could lead to security problems, SSAC warns:

It is generally difficult for people to figure out how to specify exactly what happy face they are trying to produce, and different systems represent the same emoji with different code points. The shape and color of emoji can change while a user is viewing them, and the user has no way of knowing whether what they are seeing is what the sender intended. As a result, the user is less likely to reach the intended resource and may instead be tricked by a phishing site or other intentional misrepresentation.

SSAC added that it:

strongly discourages the registration of any domain name that includes emoji in any of its labels. The SSAC also advises registrants of domain names with emoji that such domains may not function consistently or may not be universally accessible as expected

The brief paper can be read here (pdf).