One of ICANN’s proposed methods of reducing the risk of name collisions in new gTLDs may actually create its own “significant risk for abuse”, according to the RIPE NCC.
Asking registry operators to send notifications to the owners of IP address blocks that have done look-ups of their TLD before it is delegated risks creating a “backlash” against ICANN and registry operators, RIPE said.
Earlier this month, ICANN said that for the 80% of applied-for strings that are categorized as low risk, “the registry operator will notify the point of contacts of the IP addresses that issue DNS requests for an un-delegated TLD or names under it.”
The proposal is intended to reduce the risk of harms caused by the collision of new gTLDs and matching names that are already in use on internal networks.
For example, if the company given .web discovers that .web already receives queries from 100 different IP blocks, it will have to look up the owners of those blocks with the Regional Internet Registries and send them each an email telling them that .web is about to hit the internet.
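To get a sense of the work involved: the registrant contacts for an IP block are published by the RIRs and can be fetched programmatically. Here is a minimal sketch in Python, assuming the RDAP protocol (the machine-readable successor to whois) and the rdap.org redirector; the address queried and the field handling are purely illustrative, not part of any registry’s actual tooling:

```python
# Hedged sketch: look up the contact handles RDAP publishes for the
# IP block containing a given address. Real notification tooling would
# need proper error handling and contact-role filtering.
import json
import urllib.request

def rdap_contacts(ip):
    """Return contact handles for the RIR-registered block covering ip."""
    # rdap.org redirects to the authoritative RIR's RDAP server
    with urllib.request.urlopen("https://rdap.org/ip/" + ip) as resp:
        record = json.load(resp)
    # Each "entity" is a registrant, admin or abuse contact for the block
    return [entity.get("handle") for entity in record.get("entities", [])]

print(rdap_contacts("193.0.6.139"))  # an address in RIPE-managed space
```

Multiply that lookup, plus a hand-written email, by a hundred IP blocks per string and the operational burden becomes clear.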
The RIPE NCC is the RIR for Europe, responsible for allocating IP addresses in the region, so its view on how effective this mitigation plan would be cannot easily be shrugged off.
Chief scientist Daniel Karrenberg told ICANN today that the complexity of the DNS, with its layers of recursive name servers and such, makes the approach pointless:
The notifications will not be effective because they will typically not reach the party that is potentially at risk.
In addition, it will be trivial for mischief-makers to create floods of useless notifications by conducting deliberately erroneous DNS queries for target TLDs, he said:
anyone can cause the registry operator to send an arbitrary amount of mandatory notifications to any holder of IP address space. It will be highly impractical to detect such attacks or find their source by technical means. On the other hand there are quite a number of motivations for such an attack directed at the recipient or the sender of the notifications. The backlash towards the registry operator, ICANN and other parties in the chain will be even more severe once the volume increases and when it turns out that the notifications are for “non-existing” queries.
With a suitably large botnet, it’s easy to see how an attacker could generate the need for many thousands of mandatory notifications.
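The attack requires nothing more sophisticated than ordinary DNS look-ups. A minimal sketch, assuming the third-party dnspython package and an illustrative undelegated TLD:

```python
# Hedged sketch: one deliberately bogus query for a name under an
# undelegated TLD. The root ends up seeing a query for it even though
# the answer is NXDOMAIN, which is all the notification trigger needs;
# a botnet could repeat this from thousands of unwitting networks.
import dns.resolver

try:
    dns.resolver.resolve("deliberately-bogus.web.", "A")
except dns.resolver.NXDOMAIN:
    print("query logged upstream; another IP block to notify")
```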
If the registry has a manual notification process, such a flood would effectively DDoS the registry’s ability to send the notices, potentially delaying the gTLD.
Even if the process were automated, you can imagine how IP address block owners (network admins at ISPs and hosting companies, for example) would respond to receiving work-creating notifications from hundreds of affected gTLD operators.
It’s an interesting view, and one that affected new gTLD applicants (which is most of them) will no doubt point to in their own comments on the name collisions mitigation plan.
The Commonwealth Bank of Australia, which has applied for the new gTLD .cba, has told ICANN that its own systems are to blame for most of the error traffic the string sees at the DNS root.
The company wants ICANN to downgrade its gTLD application to “low risk” from its current delay-laden “uncalculated” status, saying that it can remediate the problem itself.
Since the publication of Interisle Consulting’s name collisions report, CBA said it has discovered that its own systems “make extensive use of ‘.cba’ as a strictly internal domain.”
Leakage is the reason Interisle’s analysis of root error traffic saw so many occurrences of .cba, the bank claims:
As the cause of the name collision is primarily from CBA internal systems and associated certificate use, it is within the CBA realm of control to detect and remediate said systems and internal certificate use.
One has to wonder how CBA can be so confident based merely on an “internal investigation”, apparently without access to the same extensive and highly restricted data set Interisle used.
There are many uses of the string “CBA”, and there can be no guarantee that the bank is the only organization spewing internal DNS queries out onto the internet.
CBA’s comment is notable, however, as an example of a bank so unconcerned about the potential risks of name collision that it’s happy to let ICANN delegate its dot-brand without additional review.
This will surely help those who are skeptical about Interisle’s report and ICANN’s response to it.
Artemis Internet, the NCC Group subsidiary applying for .secure, is to run a day-long conference devoted to the topic of new gTLD name collisions in San Francisco next week.
Google, PayPal and DigiCert are already lined up to speak at the event, and Artemis says it expects 60 to 70 people, many of them from major new gTLD applicants, to show up.
The free-to-attend TLD Security Forum will discuss the recent Interisle Consulting report into name collisions, which compared the problem in some cases to the Millennium Bug and recommended extreme caution when approving new gTLDs.
Brad Hill, head of ecosystem security at PayPal, will speak to “Paypal’s Concerns and Recommendations on new TLDs”, according to the agenda.
That’s notable because PayPal is usually positioned on the other side of the debate: it’s the only company Verisign has been able to quote to date when trying to show support for its own concerns about name collisions.
The Interisle report led to ICANN recommending months of delay for hundreds of new gTLD strings — basically every string that already gets more daily root server error traffic than legitimate queries for .sj, the existing TLD with the fewest look-ups.
The New gTLD Applicants Group issued its own commentary on these recommendations, apparently drafted by Artemis CTO Alex Stamos, earlier this week, calling for all strings except .home and .corp to be treated as low risk.
NTAG also said in its report that it has been discussing with SSL certificate authorities ways to potentially speed up risk-mitigation for the related problem of internal name certificate collisions, so it’s also notable that DigiCert’s Dan Timpson is slated to speak at the Forum.
The event may be webcast for those unable to attend in person, according to Artemis. If it is, DI will be “there”.
On the same topic, ICANN yesterday published a video interview with DNS inventor Paul Mockapetris, in which he recounted some name collision anecdotes from the Mesolithic period of the internet. It’s well worth a watch.
The New gTLD Applicants Group has slated Interisle Consulting’s report into the risk of new gTLDs causing security problems on the internet, saying the problem is “overstated”.
The group, which represents applicants for hundreds of gTLDs and has a non-voting role in ICANN’s GNSO, called on ICANN to reclassify hundreds of “Uncalculated” risk strings as “Low” risk, meaning they would not face substantial delays to, or uncertainty about, their eventual delegation.
But NTAG said it “agreed” that the high-risk .corp and .home “should be delayed while further studies are conducted”. The current ICANN proposal is actually to reject both of these strings.
NTAG was responding to ICANN’s proposal earlier this month to delay 523 applications (for 279 strings) by three to six months while further studies are carried out.
The proposal was based on Interisle’s study of DNS root server logs, which showed many millions of daily queries for gTLDs that currently do not exist but have been applied for.
The worry is that delegating those strings would cause problems such as downtime or data leakage, where sensitive information intended for a recipient on the same local network would be sent instead to a new gTLD registry or one of its (possibly malicious) registrants.
NTAG reckons the risks identified by Interisle have been overblown, and it offered a point-by-point analysis of its own. It called for everything except .corp and .home to be categorized “Low” risk, saying:
We recognize that a small number of applied for names may possibly pose a risk to current operations, but we believe very strongly that there is no quantitative basis for holding back strings that pose less measurable threat than almost all existing TLDs today. This is why we urge the board to proceed with the applications classified as “Unknown Risk” using the mitigations recommended by staff for “Low Risk” strings. We believe the 80% of strings classified as “Low Risk” should proceed immediately with no additional mitigations.
The group pointed to a recent analysis by Verisign (which, contrarily, was trying to show that new gTLDs should be delayed) which included data about previous new gTLD delegations.
That report (pdf) said that .xxx was seeing 4,018 look-ups per million queries at the DNS root (PPM) before it was delegated. The number for .asia was 2,708.
If you exclude .corp and .home, both of those PPM numbers are several times larger than the equivalent query volumes for every applied-for gTLD today, also according to Verisign’s data.
None of these strings pose any more risk than .xxx, .asia and other currently operating TLDs.
the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.
Verisign’s report, which sought to provide a more qualitative risk analysis based on some data-supported guesses about where the error traffic is coming from and why, anticipated this interpretation.
This could indicate that there is nothing to worry about when adding new TLDs, because there was no global failure of DNS when this was done before. Alternately, one might conclude that traffic volumes are not the only indicator of risk, and the semantic meaning of strings might also play a role. We posit that in some cases, those strings with semantic meanings, and which are in common use (such as in speech, writing, etc.) pose a greater risk for naming collision.
The company spent most of its report making somewhat tenuous correlations between its data (such as a relatively large number of requests for .medical from Japanese IP addresses) and speculative impacts (such as “undiagnosed system failures” at “a healthcare provider in Japan”).
NTAG, by contrast, is playing down the potential for negative outcomes, saying that in many cases the risks introduced by new gTLDs are no different from collision risks at the second level in existing TLDs.
Just as the NTAG would not ask ICANN to halt .com registrations while a twelve month study is performed on these problems, we believe there is no reason to introduce a delay in diversifying the Internet’s namespace due to these concerns.
While it stopped short of alleging shenanigans this time around, NTAG also suggested that future studies of root server error traffic could be gamed if botnets were engaged to crapflood the roots.
Its own mitigation plan, which addresses Interisle’s specific concerns, says that most of the reasons that non-existent TLDs are being looked up are either not a problem or can be easily mitigated.
For example, it says that queries for .youtube that arrived in the form of a request for “www.youtube” are probably browser typos and that there’s no risk for users if they’re taken to the YouTube dot-brand instead of youtube.com.
In another example, it points out that requests for “.cisco” or “.toshiba” without any second-level domains won’t resolve anyway, if dotless domains are banned in those TLDs. (NTAG, which has influential members in favor of dotless domains, stopped short of asking for a blanket ban.)
The Interisle report, and ICANN’s proposal to deal with it, are open for public comment until September 17. NTAG’s response is remarkably quick off the mark, for guessable reasons.
Verisign this morning confirmed yesterday’s reports that the .gov top-level domain went down for some internet users due to a DNSSEC problem, which it said was related to an algorithm change.
In a posting to various mailing lists, Verisign principal engineer Duane Wessels said:
On the morning of August 14, a relatively small number of networks may have experienced an operational disruption related to the signing of the .gov zone. In preparation for a previously announced algorithm rollover, a software defect resulted in publishing the .gov zone signed only with DNSSEC algorithm 8 keys rather than with both algorithm 7 and 8. As a result .gov name resolution may have failed for validating recursive name servers. Upon discovery of the issue, Verisign took prompt action to restore the valid zone.
Verisign plans to proceed with the previously announced .gov algorithm rollover at the end of the month with the zone being signed with both algorithms for a period of approximately 10 days.
This clarifies that the problem was slightly different to what had been assumed yesterday.
It was related to a change of the cryptographic algorithm used to create .gov’s DNSSEC keys, a relatively rare event, rather than a scheduled key rollover, which is a much more frequent occurrence.
The problem would only have made .gov domains (and consequently web sites, email, etc.) inaccessible for users of networks where DNSSEC validation is strictly enforced, which is still a fairly small population.
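The signing state that caused the trouble was externally observable while it lasted. A minimal sketch, again assuming the third-party dnspython package, that lists the algorithms of a zone’s published DNSKEYs; during the incident it would have shown only algorithm 8 keys for .gov rather than the expected mix of 7 and 8:

```python
# Hedged sketch: print the DNSSEC algorithm of each DNSKEY in a zone
# (7 = RSASHA1-NSEC3-SHA1, 8 = RSA/SHA-256).
import dns.resolver

for key in dns.resolver.resolve("gov.", "DNSKEY"):
    print("flags={} algorithm={}".format(key.flags, key.algorithm))
```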
The US ISP with the strongest support for DNSSEC is Comcast. Since turning on its validators it has reported dozens of instances of DNSSEC failing — mostly in second-level .gov domains, where DNSSEC is mandated by US policy.
On two other occasions Comcast has blogged about the whole .gov TLD failing DNSSEC validation due to problems keeping keys up to date.
The general problem is widespread enough, and the impact severe enough, that Comcast has had to create an entirely new technology to prevent borked key rollovers making web sites go dark for its customers.
Called Negative Trust Anchors, it’s basically a Band-Aid that allows the ISP to deliberately ignore DNSSEC on a given domain while it waits for that domain’s owner to sort out its key problem.
The technology was created following the widely reported nasa.gov outage last year.
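Comcast has described the mechanism only in general terms, but as an illustration, common validating resolvers expose a similar escape hatch. These commands are indicative only; names and flags vary by resolver and version:

```
# BIND: temporarily treat example.gov as insecure (an NTA) for one hour
rndc nta -lifetime 3600 example.gov

# Unbound: mark the zone insecure at runtime until its keys are fixed
unbound-control insecure_add example.gov
```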
It’s really little wonder that so few organizations are interested in deploying DNSSEC today.
Yesterday’s .gov problem may have been minor, lasting only an hour or two, but had the affected TLD been .com, and had DNSSEC deployment been more widespread, everyone on the planet would have noticed.
Under ICANN contract, DNSSEC is mandatory for new gTLDs at the top level, but not the second level.