Major registries posting “fabricated” Whois data

One or more of the major gTLD registries are publishing Whois query data that may be “fabricated”, according to some of ICANN’s top security minds.

The Security and Stability Advisory Committee recently wrote to ICANN’s top brass to complain about inconsistent and possibly outright bogus reporting of Whois port 43 query volumes.

SSAC said (pdf):

it appears that the WHOIS query statistics provided to ICANN by registry operators as part of their monthly reporting obligations are generally not reliable. Some operators are using different methods to count queries, some are interpreting the registry contract differently, and some may be reporting numbers that are fabricated or otherwise not reflective of reality. Reliable reporting is essential to the ICANN community, especially to inform policy-making.

SSAC says that the inconsistency of the data makes it very difficult to make informed decisions about the future of Whois access and to determine the impact of GDPR.

While the letter does not name names, I’ve replicated some of SSAC’s research and I think I’m in a position to point fingers.

In my opinion, Google, Verisign, Afilias and Donuts appear to be the causes of the greatest concern for SSAC, but several others exhibit behavior SSAC is not happy about.

I reached out to these four registries on Wednesday and have published their responses, if I received any, below.

SSAC’s concerns relate to the monthly data dumps that gTLD registries new and old are contractually obliged to provide ICANN, which publishes the data three months later.

Some of these stats concern billable transactions such as registrations and renewals. Others are used to measure uptime obligations. Others are largely of academic interest.

One such stat is “Whois port 43 queries”, defined in gTLD contracts as “number of WHOIS (port-43) queries responded during the reporting period”.
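For anyone unfamiliar with the mechanics, a port 43 query is just a raw TCP exchange: the client connects to the registry's Whois server on port 43, sends a domain name terminated with CRLF, and reads back plain text until the server closes the connection. A minimal Python sketch of such a lookup, using Verisign's public .com Whois host as an example (substitute the relevant registry's server for other TLDs), looks something like this:

```python
# Minimal port 43 Whois lookup using only the standard library.
# whois.verisign-grs.com is Verisign's public Whois server for .com;
# other registries run their own servers on the same port.
import socket

def port43_query(domain, server="whois.verisign-grs.com"):
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall(domain.encode("ascii") + b"\r\n")   # query is the name plus CRLF
        chunks = []
        while chunk := sock.recv(4096):                  # server closes when done
            chunks.append(chunk)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(port43_query("example.com"))
```

Every lookup of this kind that a registry answers is, in principle, one tick on the counter it reports to ICANN each month.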

According to SSAC, and confirmed by my look at the data, there appears to be a wide divergence in how registries and back-end registry services providers calculate this number.

The most obvious example of bogosity is that some registries are reporting identical numbers for each of their TLDs. SSAC chair Rod Rasmussen told DI:

The largest issue we saw at various registries was the reporting of the exact or near exact same number of queries for many or all of their supported TLDs, regardless of how many registered domain names are in those zones. That result is a statistical improbability so vanishingly small that it seems clear that they were reporting some sort of aggregate number for all their TLDs, either as a whole or divided amongst them.

While Rasmussen would not name the registries concerned, my research shows that the main culprit here appears to be Google.

In its December data dumps, it reported exactly 68,031,882 port 43 queries for each of its 45 gTLDs.

If these numbers are to be believed, .app with its 385,000 domains received precisely the same amount of port 43 interest as .gbiz, which has no registrations.

As SSAC points out, this is simply not plausible.

A Google spokesperson has not yet responded to DI’s request for comment.

Similarly, Afilias appears to have reported identical data for a subset of its dot-brand clients’ gTLDs, 16 of which purportedly had exactly 1,071,939 port 43 lookups in December.

Afilias has many more TLDs that did not report identical data.

An Afilias spokesperson told DI: “Afilias has submitted data to ICANN that addresses the anomaly and the update should be posted shortly.”

SSAC’s second beef is that one particular operator may have reported numbers that “were altered or synthesized”. SSAC said in its letter:

In a given month, the number of reported WHOIS queries for each of the operator’s TLDs is different. While some of the TLDs are much larger than others, the WHOIS query totals for them are close to each other. Further statistical analysis on the number of WHOIS queries per TLD revealed that an abnormal distribution. For one month of data for one of the registries, the WHOIS query counts per TLD differed from the mean by about +/- 1%, nearly linearly. This appeared to be highly unusual, especially with TLDs that have different usage patterns and domain counts. There is a chance that the numbers were altered or synthesized.

I think SSAC could be referring here to either Donuts or Verisign.

Looking again at December’s data, all but one of Donuts’ gTLDs reported port 43 queries between 99.3% and 100.7% of the mean average of 458,658,327 queries.

Is it plausible that .gripe, with 1,200 registrations, is getting almost as much Whois traffic as .live, with 343,000? Seems unlikely.
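The pattern is easy enough to check for yourself against ICANN's published monthly reports. Here's a rough Python sketch of the sort of test SSAC describes: feed it per-TLD query counts and it flags anything sitting within 1% of the mean. The numbers below are purely illustrative stand-ins, not real registry data:

```python
# Sketch of the uniformity check SSAC describes: flag TLDs whose reported
# port 43 query counts fall within +/- 1% of the mean across the portfolio.
from statistics import mean

def flag_suspicious(counts, tolerance=0.01):
    """counts maps TLD -> reported monthly port 43 queries."""
    avg = mean(counts.values())
    for tld, queries in sorted(counts.items()):
        deviation = (queries - avg) / avg
        if abs(deviation) <= tolerance:
            print(f".{tld}: {queries:,} ({deviation:+.2%} from the mean)")

# Illustrative numbers only, loosely modelled on the December pattern.
flag_suspicious({
    "example-one": 458_700_000,
    "example-two": 457_900_000,
    "example-three": 460_100_000,
})
```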

Donuts has yet to provide DI with its comments on the SSAC letter. I’ll update this post and tweet the link if I receive any new information.

All of the gTLDs Verisign manages on behalf of dot-brand clients, and some of its own non-.com gTLDs, exhibit the same pattern as Donuts in terms of all queries falling within +/- 1% of the mean, which is around 431 million per month.

So, as I put to Verisign, .realtor (~40k regs) purportedly has roughly the same number of port 43 queries as .comsec (which hasn’t launched).

Verisign explained this by saying that almost all of the port 43 queries it reports come from its own systems. A spokesperson told DI:

The .realtor and .comsec query responses are almost all responses to our own monitoring tools. After explaining to SSAC how Verisign continuously monitors its systems and services (which may be active in tens or even hundreds of locations at any given time) we are confident that the accuracy of the data Verisign reports is not in question. The reporting requirement calls for all query responses to be counted and does not draw a distinction between responses to monitoring and non-monitoring queries. If ICANN would prefer that all registries distinguish between the two, then it is up to ICANN to discuss that with registry operators.

It appears from the reported numbers that Verisign polls its own Whois servers more than 160 times per second. Donuts’ numbers are even larger.
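That figure is just back-of-the-envelope arithmetic from the reported monthly totals; roughly 431 million queries spread over a 30-day month works out as follows:

```python
# Rough conversion of Verisign's reported monthly total to a per-second rate.
queries_per_month = 431_000_000
seconds_per_month = 30 * 24 * 60 * 60          # 2,592,000 seconds in a 30-day month
print(queries_per_month / seconds_per_month)   # about 166 queries per second
```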

I would guess, based on the huge volumes of queries being reported by other registries, that this is common (but not universal) practice.

SSAC said that it approves of the practice of monitoring port 43 responses, but it does not think that registries should aggregate their own internal queries with those that come from real Whois consumers when reporting traffic to ICANN.

Either way, it thinks that all registries should calculate their totals in the same way, to make apples-to-apples comparisons possible.

Afilias’ spokesperson said: “Afilias agrees that everyone should report the data the same way.”

As far as ICANN goes, its standard registry contract is open to interpretation. It doesn’t really say why registries are expected to collect and supply this data, merely that they are obliged to do so.

The contracts do not specify whether registries are supposed to report these numbers to show off the load their servers are bearing, or to quantify demand for Whois services.

SSAC thinks it should be the latter.

You may be thinking that if it has taken a decade or more for anyone to notice the data is basically useless, it probably isn't all that important.

But SSAC thinks the poor data quality interferes with research on important policy and practical issues.

It’s rendered SSAC’s attempt to figure out whether GDPR and ICANN’s Temp Spec have had an effect on Whois queries pretty much futile, for example.

The meaningful research in question also includes work leading to the replacement of Whois with RDAP, the Registration Data Access Protocol.

Finally, there’s the looming possibility that ICANN may before long start acting as a clearinghouse for access to unredacted Whois records. If it has no idea how often Whois is actually used, that’s going to make planning its infrastructure very difficult, which in turn could lead to downtime.

Rasmussen told DI: “Our impression is that all involved want to get the numbers right, but there are inconsistent approaches to reporting between registry operators that lead to data that cannot be utilized for meaningful research.”

Set buttocks to clench! ICANN approves risky KSK rollover

Kevin Murphy, September 17, 2018, Domain Policy

ICANN has approved the first rollover of the domain name system’s master security key, setting the clock ticking on a change that could cause internet access issues for millions.

The so-called KSK rollover, in which ICANN deletes the key-signing key that has been used as the trust anchor for the DNSSEC ecosystem since 2010 and replaces it with a new one, will now go ahead as planned on October 11.

The decision was made yesterday at the ICANN board of directors’ retreat in Brussels.

ICANN chief technology officer David Conrad posted this to an ICANN mailing list this morning:

The Board voted to approve the resolution for ICANN org to move forward with the revised KSK rollover plan. So barring unforeseen circumstances, the KSK-2017-signed ZSK will be used to sign the root zone on 11 October 2018.

The rollover was due to happen October 11 last year, but ICANN delayed it when it emerged that many DNS resolvers weren’t yet configured to use the new key.

That’s still a problem, and nobody knows for sure how many endpoints will stop functioning properly when the new KSK goes solo.

While most experts weighing in on the rollover, including Conrad, agreed that the risk of more delay outweighed the risk of rolling now, that feeling was not unanimous.

Five members of the 22-member Security and Stability Advisory Committee — including top guys from Google and Verisign — last month dissented from the majority view and said ICANN should delay again.

The question now is not whether internet users will see a disruption in the days following October 11, but how many users will be affected and how serious their disruptions will be.

Based on current information, as many as two million internet users could be affected.

ICANN is likely to take flak for even relatively minor disruptions, but the alternative was to continue with the delays and risk an even bigger impact, and even more flak, in future.

The text of ICANN’s resolution and the rationale behind it will be published in the next day or so.

ICANN faces critical choice as security experts warn against key rollover

Kevin Murphy, August 23, 2018, Domain Tech

Members of ICANN’s top security body have advised the organization to further delay plans to change the domain name system’s top cryptographic key.

Five dissenting members of the influential, 22-member Security and Stability Advisory Committee said they believe “the risks of rolling in accordance with the current schedule are larger than the risks of postponing”.

Their comments relate to the so-called KSK rollover, which would see ICANN for the first time ever change the key-signing key that acts as the trust anchor for all DNSSEC queries on the internet.

ICANN is fairly certain rolling the key will cause DNS resolution problems for some — possibly as much as 0.05% of the internet or a couple million people — but it currently lacks the data to be absolutely certain of the scale of the impact.

What it does know — explained fairly succinctly in this newly published guide (pdf) — is that within 48 hours of the roll, a certain small percentage of internet users will start to see DNS resolution fail.

But there’s a prevailing school of thought that believes the longer the rollover is postponed, the bigger that number of affected users will become.

The rollover is currently penciled in for October 11, but the ultimate decision on whether to go ahead rests with the ICANN board of directors.

David Conrad, the organization’s CTO, told us last week that his office has already decided to recommend that the roll should proceed as planned. At the time, he noted that SSAC was a few days late in delivering its own verdict.

Now, after some apparently divisive discussions, that verdict is in (pdf).

SSAC’s majority consensus is that it “has not identified any reason within the SSAC’s scope why the rollover should not proceed as currently planned.”

That’s in line with what Conrad and the Root Server System Advisory Committee have said. But SSAC noted:

The assessment of risk in this particular area has some uncertainty and therefore includes a component of subjective judgement. Individuals (including some members of the SSAC) have different assessments of the overall balance of risk of the resumption of this plan.

It added that it’s up to the ICANN board (comprised largely of non-security people) to make the final call on what the acceptable level of risk is.

The minority, dissenting opinion gets into slightly more detail:

The decision to proceed with the keyroll is a complex tradeoff of technical and non-technical risks. While there is risk in proceeding with the currently planned roll, we understand that there is also risk in further delay, including loss of confidence in DNSSEC operational planning, potential for more at-risk users as more DNSSEC validation is deployed, etc.

While evaluating these risks, the consensus within the SSAC is that proceeding is preferable to delay. We personally evaluate the tradeoffs differently, and we believe that the risks of rolling in accordance with the current schedule are larger than the risks of postponing and focusing heavily on additional research and outreach, and in particular leveraging newly developed techniques that provide better signal and fidelity into potentially impacted parties.

We would like to reiterate that we understand our colleagues’ position, but evaluate the risks and associated mitigation prospects differently. We believe that the ultimate decision lies with the ICANN Board, and do not envy them with this decision.

SSAC members are no slouches when it comes to security expertise, and the dissenting members are no exception. They are:

  • Lyman Chapin, co-owner of Interisle Consulting, a regular ICANN contractor perhaps best-known to DI readers for carrying out a study into new gTLD name collisions five years ago.
  • Kimberly “kc claffy” Claffy, head of the Center for Applied Internet Data Analysis at the University of California, San Diego. CAIDA does nothing but map and measure the internet.
  • Jay Daley, a registry executive with a technical background whose career includes senior stints at .uk and .nz. He’s currently keeping the CEO’s chair warm at .org manager Public Interest Registry.
  • Warren Kumari, a senior network security engineer at Google, which is probably the largest early adopter of DNSSEC on the resolution side.
  • Danny McPherson, Verisign’s chief security officer. As well as .com, Verisign runs two of the 13 root servers, including the master A-root. It’s running the boxes that sit at the top of the DNSSEC hierarchy.

It may be the first time SSAC has failed to reach a full-consensus opinion on a security matter. If it has ever published a dissenting opinion before, I certainly cannot recall it.

The big decision about whether to proceed or delay is expected to be made by the ICANN board during its retreat in Brussels, a three-day meeting that starts September 14.

Given that ICANN’s primary mission is “to ensure the stable and secure operation of the Internet’s unique identifier systems”, it could turn out to be one of ICANN’s biggest decisions to date.

ICANN CTO: no reason to delay KSK rollover

Kevin Murphy, August 15, 2018, Domain Tech

ICANN’s board of directors will be advised to go ahead with a key security change at the DNS root, the so-called KSK rollover, this October, according to the organization’s CTO.

“We don’t see any reason to postpone again,” David Conrad told DI on Monday.

If it does go ahead as planned, the rollover will see ICANN change the key-signing key that acts as the trust anchor for the whole DNSSEC-using internet, for the first time since DNSSEC came online in 2010.

It’s been delayed since last October, after it emerged that misconfigurations elsewhere in the DNS could mean potentially millions of internet users see glitches when the key is rolled.

Ever since then, ICANN and others have been trying to figure out how many people could be adversely affected by the change, and to reduce that number to the greatest extent possible.

The impact has been tricky to estimate due to patchy data.

While it’s been possible to determine a number of resolvers — about 8,000 — that definitely are poorly configured, that only represents a subset of the total number. It’s also been hard to map that to endpoints due to “resolvers behind resolvers behind resolvers”, Conrad said.

“The problem here is that it’s sort of a subjective evaluation,” he said. “We can’t rely on the data we’re seeing. We’re seeing the resolvers but we’re not seeing the users behind the resolvers.”

Some say the roll is still too risky to carry out without better visibility into the potential impact, but others say that further delay would see even more networks and devices deploy DNSSEC validation, potentially leading to even greater problems after the eventual rollover.

ICANN knows of about 8,000 resolver IP addresses that are likely to stop working properly after the rollover, because they only support the current KSK, but that’s only counting resolvers that automatically report their status to the root using a relatively new internet standard. There’s a blind spot concerning resolvers that do not have that feature turned on.
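That standard is, as far as I can tell, the trust anchor signaling mechanism defined in RFC 8145, under which validating resolvers periodically tell the root which key tags they trust. There is no way to remotely inspect a resolver's configured trust anchors, but if you're curious which key-signing keys the root zone is publishing as seen through your own resolver, a quick check with the dnspython library looks roughly like this (KSK-2010 has key tag 19036, the incoming KSK-2017 has key tag 20326):

```python
# List the key tags of the key-signing keys in the root zone's DNSKEY RRset,
# as seen through whatever resolver this machine is configured to use.
# Assumes the third-party dnspython package (pip install dnspython).
import dns.resolver
import dns.dnssec

resolver = dns.resolver.Resolver()            # uses the system's default resolver
answer = resolver.resolve(".", "DNSKEY")      # fetch the root DNSKEY RRset

for rdata in answer:
    if rdata.flags & 0x0001:                  # SEP bit set: this is a key-signing key
        print("KSK key tag:", dns.dnssec.key_id(rdata))
```

Seeing both 19036 and 20326 in the output only tells you the new key is published in the root zone; it says nothing about whether the resolver itself will trust 20326 once the old key goes away, which is precisely the blind spot Conrad describes.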

ICANN has also had difficulty reaching out to the network operators behind these resolvers, with good contact information apparently only available for about a quarter of the affected IP addresses, Conrad said.

Right now, the best data available suggests that 0.05% of the internet’s population could see access issues after the October 11 rollover, according to Conrad.

That’s about two million people, but it’s 10 times fewer people than the 0.5% acceptable collateral damage threshold outlined in ICANN’s rollover plan.

The 0.05% number comes from research by APNIC, which used Google’s advertising system to place “zero-pixel ads” to check whether individual user endpoints were using compatible resolvers or not.

If problems do emerge October 11 the temporary solution is apparently quite quick to implement — network operators can simply turn off DNSSEC, assuming they know that’s what they’re supposed to do.

But still, if a million or two internet users could have their day ruined by the rollover, why do it at all?

It’s not as if the KSK is in any danger of being cracked any time soon. Conrad explained that a successful brute-force attack on the 2048-bit RSA key would take longer than the lifetime of the universe using current technology.

Rather, the practice of rolling the key every five years is to get network operators and developers accustomed to the idea that the KSK is not a permanent fixture that can be hard-coded into their systems, Conrad said.

It’s a problem comparable to new gTLD name collisions or the Y2K problem, instances where developers respectively hard-coded assumptions about valid TLDs or the century into their software.

ICANN has already been reaching out to the managers of open-source projects on repositories such as GitHub that have been seen to hard-code the current KSK into their software, Conrad said.

Separately, Wes Hardaker at the University of Southern California Information Sciences Institute discovered that a popular VPN client was misconfigured. Outreach to the developer saw the problem fixed, reducing the number of users who will be affected by the roll.

“What we’re trying to avoid is having these keys hardwired into firmware, so that it would never be changeable,” he said. “The idea is if you exercise the infrastructure frequently enough, people will know that the key is not permanent configuration, it’s not something embedded in concrete.”

One change that ICANN may want to make in future is to change the algorithm used to generate the KSK.

Right now it’s using RSA, but Conrad said it has downsides such as a rather large signature size, which leads to heavier DNSSEC traffic. By switching to elliptic curve cryptography, signatures could be reduced by “orders of magnitude”, leading to a more efficient and slimline DNS infrastructure, Conrad said.

Last week, ICANN’s Root Server System Advisory Committee issued an advisory (pdf) that essentially gave ICANN the all-clear to go ahead with the roll.

The influential Security and Stability Advisory Committee has yet to issue its own advisory, however, despite being asked to do so by August 10.

Could SSAC be more cautious in its advice? We’ll have to wait and see, but perhaps not too long; the current plan is for the ICANN board to consider whether to go ahead with the roll during its three-day Brussels retreat, which starts September 14.

Zone file access is crap, security panel confirms

Kevin Murphy, June 20, 2017, Domain Policy

ICANN’s Centralized Zone Data Service has some serious shortcomings and needs an overhaul, according to the Security and Stability Advisory Committee.

The panel of DNS security experts has confirmed what CZDS subscribers, including your humble correspondent, have known since 2014 — the system had a major design flaw baked in from day one for no readily apparent reason.

CZDS is the centralized repository of gTLD zone files. It’s hosted by ICANN and aggregates zones from all 2012-round, and some older, gTLDs on a daily basis.

Signing up for it is fairly simple. You simply fill out your contact information, agree to the terms of service, select which zones you want and hit “submit”.

The purpose of the service is to allow researchers to receive zone files without having to enter into separate agreements with each of the 1,200+ gTLDs currently online.

The major problem, as subscribers know and SSAC has confirmed, is that the default subscription period is 90 days.

Unless the gTLD registry extends the period at its end, at its own discretion, each subscription ends after three months — cutting off access — and the subscriber must reapply.

Many of the larger registries exercise this option, but many — particularly dot-brands — do not.

The constant need to reapply and re-approve creates a recurring arse-ache for subscribers and, registry staff have told me, the registries themselves.

The approval process itself is highly unpredictable. Some of the major registries process requests within 24 hours — I’ve found Afilias is the fastest — but I’ve been waiting for approval for Valuetainment’s .voting since September 2016.

Some dot-brands even attempt to insert extra terms of service into the deal before approving requests, which defeats the entire purpose of having a centralized service in the first place.

Usually, a polite email to the person handling the requests can produce results. Other times, it’s necessary to report them to ICANN Compliance.

The SSAC has evidently interviewed many people who share my concerns, as well as looked at data from Compliance (where CZDS reliably generates the most complaints, wasting the time of Compliance staff). Its report says:

This situation makes zone file access unreliable and subject to unnecessary interruptions. The missing data introduces “blind spots” in security coverage and research projects, and the reliability of software – such as security and analytics applications – that relies upon zone files is reduced. Lastly, the introduced inefficiency creates additional work for both registry operators and subscribers.

The SSAC has no idea why the need to reapply every 90 days was introduced, figuring it must have happened during implementation.

But it recommends that access agreements should automatically renew once they expire, eliminating the busywork of reapplying and closing the holes in researchers’ data sets.

As I’m not objective on this issue, I agree with that recommendation wholeheartedly.

I’m less keen on the SSAC’s recommendation that registries should be able to opt out of the auto-renewals on a per-subscriber basis. This will certainly be abused by the precious snowflake dot-brands that have already shown their reluctance to abide by their contractual obligations.

The SSAC report can be read here (pdf).
