New gTLDs are set to be added to the widely used Public Suffix List within a month of signing an ICANN registry agreement, according to PSL volunteer Jothan Frakes.
This is pretty good news for new gTLD registries.
The PSL, maintained by volunteers under the Mozilla banner, is used in browsers including Firefox and Chrome, and will be a vital part of making sure new gTLDs “work” out of the box.
If a TLD doesn’t have an entry on the PSL, browsers tend to handle it badly.
For example, after .sx launched last year, Google’s Chrome browser returned search results instead of the intended web site when .sx domain names were typed into the address/search bar.
The PSL also provides a critical security function, telling browsers at which level they should allow domains to set cookies.
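To illustrate, here’s a minimal sketch of a PSL lookup in Python, using the third-party tldextract library, which bundles a snapshot of the list (the domain names are just examples):

```python
# A minimal sketch of PSL-based name parsing, using the
# third-party tldextract library (pip install tldextract),
# which ships with a snapshot of the Public Suffix List.
import tldextract

# A suffix that is on the PSL is recognized, so the registrable
# domain and the subdomain are split out correctly.
result = tldextract.extract("www.example.co.uk")
print(result.suffix)     # "co.uk"   (the public suffix)
print(result.domain)     # "example" (the registrable label)
print(result.subdomain)  # "www"

# A brand-new gTLD absent from the bundled snapshot cannot be
# recognized as a suffix, which is roughly why browsers mishandle
# un-listed TLDs: cookie scoping and address-bar heuristics have
# nothing to work with.
result = tldextract.extract("example.newtld")
print(result.suffix)     # "" until "newtld" reaches the list
```

The same gap is plausibly what bit .sx in Chrome: with no entry, the browser had no basis to treat the input as a navigable domain.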
According to Frakes, who has been working behind the scenes with other PSL volunteers and ICANN staff to get this process working, new gTLDs will usually hit the PSL within 30 days of an ICANN contract.
Due to the mandatory pre-delegation testing period, new gTLDs should be on the PSL before or at roughly the same time as they are delegated, with plenty of time to spare before they launch.
The process of being added to the PSL should be fairly quick for TLDs that intend to run flat second-level spaces, according to Frakes, but may be more complex for those planning something less standard, such as selling third-level domains.
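For context, PSL entries are simple one-rule-per-line text, so the difference is visible in the file itself. A hypothetical sketch of the syntax (the .example rules are invented; `//`, `*` and `!` are the list’s real comment, wildcard and exception markers):

```
// A flat second-level registry needs a single rule:
example

// A registry selling third-level names needs more rules,
// e.g. a wildcard plus an exception for its own site:
*.example
!registry.example
```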
Browser makers may take some time to fold PSL updates into their own software, but Google, with its own huge portfolio of applications, will presumably be incentivized to stay on the ball.
The first tranche of responses to Interisle Consulting’s study into the security risks of new gTLDs, and ICANN’s proposal to delay a few hundred strings pending more study, is in.
Comments filed with ICANN before yesterday’s public comment deadline fall into two basic camps:
- Non-applicants (mostly) urging ICANN to proceed with extreme caution. Many are asking for more time to study their own networks so they can get a better handle on their own risk profiles.
- Applicants shooting holes in Interisle’s study and ICANN’s remediation plan. They want ICANN to reclassify everything except .home and .corp as low risk, removing delays to delegation and go-live.
They were responding to ICANN’s decision to delay 521 “uncalculated risk” new gTLD applications by three to six months while further research into the risk of name collisions — where a new gTLD could conflict with a TLD already used by internet users in a non-standard way — is carried out.
Proceed with caution
Many commenters stated that more time is needed to analyse the risks posed by name collisions, noting that Interisle studied primarily the volume of queries for non-existent domains, rather than looking deeply into the consequences of delegating colliding gTLDs.
That was a point raised by applicants too, but while applicants conclude that this lack of data should lead ICANN to lift the current delays, others believe that it means more delays are needed.
Two ICANN constituencies seem to generally agree with the findings of the Interisle report.
The Internet Service Providers and Connectivity Providers constituency asked for the public comment period to be put on hold until further research is carried out, or for at least 60 days. It noted:
corporations, ISPs and connectivity providers may bear the brunt of the security and customer-experience issues resulting from adverse (as yet un-analyzed) impacts from name collision
these issues, due to their security and customer-experience aspects, fall outside the remit of people who normally participate in the ICANN process, requiring extensive wide-ranging briefings even in corporations that do participate actively in the ICANN process
The At-Large Advisory Committee concurred that the Interisle study does not currently provide enough information to fully gauge the risk of name collisions causing harm.
ALAC said it was “in general concurrence with the proposed risk mitigation actions for the three defined risk categories” anyway, adding:
ICANN must assure that such residual risk is not transferred to third parties such as current registry operators, new gTLD applicants, registrants, consumers and individual end users. In particular, the direct and indirect costs associated with proposed mitigation actions should not have to be borne by registrants, consumers and individual end users. The Board must err on the side of caution
Several individual stakeholders agreed with the ISPCP that they need more time to look at their own networks. The Association of National Advertisers said:
Our member companies are working diligently to determine if DNS Clash issues are present within their respective networks. However the ANA had to communicate these issues to hundreds of companies, after which these companies must generate new data to determine the potential service failures on their respective networks.
The ANA wants the public comment period extended until November 22 to give its members more time to gather data.
While the ANA can always be relied upon to ask for new gTLDs to be delayed, its request was echoed by others.
General Electric called for three types of additional research:
- Additional studies of traffic beyond the initial DITL sample.
- Information and analysis of “use cases” — particular types of queries and traffic — and the consequences of the failure of particular use cases to resolve as intended (particular use cases could have severe consequences even if they might occur infrequently — like hurricanes), and
- Studies of the time and costs of mitigation.
GE said more time is needed for companies like itself to conduct impact analyses on their own internal networks, and asked ICANN not to delegate any gTLD until the risk is “fully understood”.
The Association of Competitive Technology (which has Verisign as a member) said:
ICANN should slow or temporarily suspend the process of delegating TLDs at risk of causing problems due to their frequency of appearance in queries to the root. While we appreciate the designation of .home and .corp as high risk, there are many other TLDs which will also have a significant destructive effect.
Numerically, there were far more comments criticizing ICANN’s mitigation proposal, though all of them were filed by new gTLD applicants, whose interests are aligned.
Most of these comments, which are far more focused on the details and the data, target perceived deficiencies in Interisle’s report and ICANN’s response to it.
Several very good arguments are made.
The Svalbard problem
First, there is criticism of the cut-off point between “low risk” and “uncalculated risk” strings, which some applicants say is “arbitrary”.
That’s mostly true.
ICANN basically took the list of applied-for strings, ordered by the frequency Interisle found they generate NXDOMAIN responses at the root, and drew a line across it at the 49,842 queries mark.
That’s because 49,842 queries is what .sj, the least-frequently-queried real TLD, received over the same period.
If your string, despite not yet existing as a gTLD, already gets more traffic than .sj, it’s classed as “uncalculated risk” and faces more delays, according to ICANN’s plan.
As Directi said in its comments:
The result of this arbitrary selection is that .bio (Rank 281) with 50,000 queries (rounded to the nearest thousand) is part of the “uncategorized risk” list, and is delayed by 3 to 6 months, whereas .engineering (Rank 282) with 49,000 queries (rounded to the nearest thousand) is part of the “low risk” list, and can proceed without any significant delays.
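Mechanically, the classification amounts to a sort and a single threshold. Here’s a rough Python sketch of that line-drawing, using the rounded .bio and .engineering figures from Directi’s comment; the .home and .corp counts are invented for illustration:

```python
# A rough sketch of ICANN's classification: hard-code the two
# high-risk strings, then split everything else at the number of
# root queries .sj received over the same sampling period.
SJ_THRESHOLD = 49_842  # .sj's query count, per the report

applied_for_queries = {
    "home": 1_000_000_000,  # invented figure
    "corp": 100_000_000,    # invented figure
    "bio": 50_000,          # rounded, per Directi
    "engineering": 49_000,  # rounded, per Directi
}

def classify(string, queries):
    if string in ("home", "corp"):
        return "high risk"
    if queries > SJ_THRESHOLD:
        return "uncalculated risk"  # delayed three to six months
    return "low risk"               # proceeds on schedule

for s, q in sorted(applied_for_queries.items(), key=lambda kv: -kv[1]):
    print(f".{s}: {classify(s, q)}")
```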
What neither ICANN nor Interisle explained is why this is an appropriate place to draw a line in the sand.
.sj is the ccTLD for Svalbard, a Norwegian territory in the Arctic Circle with fewer than 3,000 inhabitants. The TLD is administered by .no registry Norid, but it’s not possible to register domains there.
Does having more traffic than .sj mean a gTLD is automatically more risky? Does having less mean a gTLD is safe? The ICANN proposal assumes “yes” to both questions, but it doesn’t explain why.
Many applicants say that having more traffic than existing gTLDs does not automatically mean your gTLD poses a risk.
They pointed to Verisign data from 2006, which shows that gTLDs such as .xxx and .asia were already receiving large amounts of traffic prior to their delegation. When they were delegated, the sky did not fall. Indeed, there were no reports of significant security and stability problems.
The New gTLD Applicants Group said:
In fact, the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.
These successful delegations alone demonstrate that there is no need to delay any more than the two most risky strings.
There is no factual basis in the study recommending halting delegation process of 20% of applied-for strings. As the paper itself says, “The Study did not find enough information to properly classify these strings given the short timeline.” Without evidence of actual harm, the TLDs should proceed to delegation. Such was the case with other TLDs such as .XXX and .ASIA, which were delegated without delay and with no problems post-delegation.
Applicants also believe that the release in June 2012 of the list of all 1,930 applied-for strings may have skewed the data set that Interisle used in its study.
Uniregistry, for example, said:
The sole fact that queries are being received at the root level does not itself present a security risk, especially after the release to the public of the list of applied-for strings.
The argument seems to be that a lot of the NXDOMAIN traffic seen in 2013 is due to people and software querying applied-for TLDs to see if they’re live yet.
It’s quite a speculative argument, but it’s somewhat supported by the fact that many applied-for strings received more queries in 2013 than they did in the equivalent 2012 sampling.
Some applicants pointed out that there may not be a correlation between the volume of traffic a string receives and the number of second-level domains being queried.
A string might get a bazillion queries for a single second-level domain name. If that domain name is reserved by the registry, the risk of a name collision might be completely eliminated.
The Interisle report did show that the number of SLDs and the volume of traffic do not correlate.
For example, .hsbc is ranked 14th in terms of traffic volume but saw requests for just 2,000 domains, whereas .inc, which ranked 15th, saw requests for 73,000 domains.
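Deriving that per-string measure from raw logs is not hard, which makes its absence from the published data all the more frustrating. A minimal sketch, assuming a simple one-queried-name-per-line log format (the sample names are invented):

```python
# Count unique second-level labels per TLD from a query log: the
# measure that separates ".hsbc-style" traffic (huge volume, few
# names) from ".inc-style" traffic (many distinct names).
from collections import defaultdict

def sld_counts(log_lines):
    slds = defaultdict(set)
    for line in log_lines:
        labels = line.strip().rstrip(".").lower().split(".")
        if len(labels) >= 2:
            tld, sld = labels[-1], labels[-2]
            slds[tld].add(sld)
    return {tld: len(names) for tld, names in slds.items()}

sample = ["host1.corp.hsbc.", "mail.hsbc.",
          "api.widgets.inc.", "db.acme.inc."]
print(sld_counts(sample))  # {'hsbc': 2, 'inc': 2}
```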
Unfortunately, the Interisle report only published the SLD numbers for the top 35 strings by query volume, leaving most applicants none the wiser about the possible impact of their own strings.
And ICANN did not factor the number of SLDs into its decision about where to draw the line between “low” and “uncalculated” risk.
Some applicants questioned whether the Interisle data itself was reliable, but I find these arguments poorly supported and largely speculative.
They propose that someone (meaning presumably Verisign, which stands to lose market share when new gTLDs go live, and which kicked off the name collisions debate in the first place) could have gamed the study by generating spurious requests for applied-for gTLDs during the period Interisle’s data was being captured.
Some applicants put forth this view, while others limited their comments to a request that future studies rely only on data collected before now, to avoid tampering at the point of collection. As one put it:
Query counts are very easily gamed by any Internet connected system, allowing for malicious actors to create the appearance of risk for any string that they may object to in the future. It would be very easy to create the impression of a widespread string collision problem with a home Internet connection and the abuse of the thousands of available open resolvers.
While this kind of mischief is a hypothetical possibility, nobody has supplied any evidence that Interisle’s data was manipulated by anyone.
Some people have privately pointed DI to the fact that Verisign made a substantial donation to the DNS-OARC — the group that collected the data that Interisle used in its study — in July.
The implication is that Verisign was somehow able to manipulate the data after it was captured by DNS-OARC.
I don’t buy this either. We’re talking about a highly complex 8TB data set that took Interisle’s computers a week to process on each pass. The data, under the OARC’s deal with the root server operators, is not allowed to leave its premises. It would not be easily manipulated.
Additionally, DNS-OARC is managed by Internet Systems Consortium — which runs the F-root and is Uniregistry’s back-end registry provider — from its own premises in California.
In short, in the absence of any evidence supporting this conspiracy theory, I find the idea that the Interisle data was hacked after it was collected highly improbable.
Several ICANN constituencies that can usually be relied upon to comment on everything (registrars, intellectual property, business and non-commercial) have not yet commented.
Will ICANN extend the deadline? I suppose it depends on how cautious it wants to be, whether it believes the companies requesting the extension really are conducting their own internal collision studies, and how useful it thinks those studies will be.
One of ICANN’s proposed methods of reducing the risk of name collisions in new gTLDs may actually create its own “significant risk for abuse”, according to RIPE NCC.
Asking registry operators to send a notification to the owner of IP address blocks that have done look-ups of their TLD before it is delegated risks creating a “backlash” against ICANN and registry operators, RIPE said.
Earlier this month, ICANN said that for the 80% of applied-for strings that are categorized as low risk, “the registry operator will notify the point of contacts of the IP addresses that issue DNS requests for an un-delegated TLD or names under it.”
The proposal is intended to reduce the risk of harms caused by the collision of new gTLDs and matching names that are already in use on internal networks.
For example, if the company given .web discovers that .web already receives queries from 100 different IP blocks, it will have to look up the owners of those blocks with the Regional Internet Registries and send them each an email telling them that .web is about to hit the internet.
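The lookup half of that duty could plausibly be automated against the RIRs’ registration data. A rough sketch using Python’s requests library and the rdap.org bootstrap service, with a documentation address standing in for real query data:

```python
# A rough sketch: given an IP address seen querying the
# un-delegated TLD, ask the RIRs' RDAP service who holds the
# enclosing block. rdap.org redirects to the responsible RIR.
import requests

def block_holder(ip):
    resp = requests.get(f"https://rdap.org/ip/{ip}", timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # RDAP IP responses carry a network name plus entities
    # (registrant, abuse contact and so on) with role labels.
    roles = [e.get("roles", []) for e in data.get("entities", [])]
    return data.get("name"), roles

# 192.0.2.1 is a documentation address, not real query data.
print(block_holder("192.0.2.1"))
```

Whether a notice sent to that contact ever reaches the party actually at risk is, as RIPE NCC argues, another matter.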
RIPE NCC is the RIR for Europe, responsible for allocating IP addresses in the region, so its view on the effectiveness of this mitigation plan cannot easily be shrugged off.
Chief scientist Daniel Karrenberg told ICANN today that the complexity of the DNS, with its layers of recursive name servers and such, makes the approach pointless:
The notifications will not be effective because they will typically not reach the party that is potentially at risk.
In addition, it will be trivial for mischief-makers to create floods of useless notifications by conducting deliberately erroneous DNS queries for target TLDs, he said:
anyone can cause the registry operator to send an arbitrary amount of mandatory notifications to any holder of IP address space. It will be highly impractical to detect such attacks or find their source by technical means. On the other hand there are quite a number of motivations for such an attack directed at the recipient or the sender of the notifications. The backlash towards the registry operator, ICANN and other parties in the chain will be even more severe once the volume increases and when it turns out that the notifications are for “non-existing” queries.
With a suitably large botnet, it’s easy to see how an attacker could generate the need for many thousands of mandatory notifications.
If the registry has a manual notification process, such a flood would effectively DDoS the registry’s ability to send the notices, potentially delaying the gTLD.
Even if the process were automated, you can imagine how IP address block owners (network admins at ISPs and hosting companies, for example) would respond to receiving notifications, each of which creates work, from hundreds of affected gTLD operators.
It’s an interesting view, and one that affected new gTLD applicants (which is most of them) will no doubt point to in their own comments on the name collisions mitigation plan.
The Commonwealth Bank of Australia, which has applied for the new gTLD .cba, has told ICANN that its own systems are to blame for most of the error traffic the string sees at the DNS root.
The company wants ICANN to downgrade its gTLD application to “low risk” from its current delay-laden “uncalculated” status, saying that it can remediate the problem itself.
Since the publication of Interisle Consulting’s name collisions report, CBA said it has discovered that its own systems “make extensive use of ‘.cba’ as a strictly internal domain.”
Leakage is the reason Interisle’s analysis of root error traffic saw so many occurrences of .cba, the bank claims:
As the cause of the name collision is primarily from CBA internal systems and associated certificate use, it is within the CBA realm of control to detect and remediate said systems and internal certificate use.
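What that remediation might look like in practice is an exercise in log analysis. A minimal sketch, assuming CBA can pull outbound query logs from its border resolvers (the log format and file name are invented):

```python
# A minimal sketch of internal-leak detection: scan outbound
# resolver logs for names under the internal TLD that escaped
# toward the public root instead of being answered locally.
INTERNAL_TLD = "cba"

def leaked_queries(log_path):
    leaks = {}
    with open(log_path) as log:
        for line in log:
            # assumed format: "<client-ip> <queried-name>"
            client, name = line.split()
            if name.rstrip(".").lower().endswith("." + INTERNAL_TLD):
                leaks.setdefault(client, set()).add(name)
    return leaks

for client, names in leaked_queries("outbound-dns.log").items():
    print(f"{client} leaked {len(names)} .{INTERNAL_TLD} names")
```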
One has to wonder how CBA can be so confident based merely on an “internal investigation”, apparently without access to the same extensive and highly restricted data set Interisle used.
There are many uses of the string “cba”, and there can be no guarantee that the bank is the only organization spewing internal DNS queries out onto the internet.
CBA’s comment is notable, however, as an example of a bank so unconcerned about the potential risks of name collision that it’s happy to let ICANN delegate its dot-brand without additional review.
This will surely help those who are skeptical about Interisle’s report and ICANN’s response to it.
Artemis Internet, the NCC Group subsidiary applying for .secure, is to run a day-long conference devoted to the topic of new gTLD name collisions in San Francisco next week.
Google, PayPal and DigiCert are already lined up to speak at the event, and Artemis says it expects 60 to 70 people, many of them from major new gTLD applicants, to show up.
The free-to-attend TLD Security Forum will discuss the recent Interisle Consulting report into name collisions, which compared the problem in some cases to the Millennium Bug and recommended extreme caution when approving new gTLDs.
Brad Hill, head of ecosystem security at PayPal, will speak to “Paypal’s Concerns and Recommendations on new TLDs”, according to the agenda.
That’s notable because PayPal is usually positioned on the other side of the debate — to date it’s the only company Verisign has been able to quote when trying to show support for its own concerns about name collisions.
The Interisle report led to ICANN recommending months of delay for hundreds of new gTLD strings — basically every string that already gets more daily root server error traffic than legitimate queries for .sj, the existing TLD with the fewest look-ups.
The New gTLD Applicants Group (NTAG) issued its own commentary on these recommendations, apparently drafted by Artemis CTO Alex Stamos, earlier this week, calling for all strings except .home and .corp to be treated as low risk.
NTAG also said in its report that it has been discussing with SSL certificate authorities ways to potentially speed up risk-mitigation for the related problem of internal name certificate collisions, so it’s also notable that DigiCert’s Dan Timpson is slated to speak at the Forum.
The event may be webcast for those unable to attend in person, according to Artemis. If it is, DI will be “there”.
On the same topic, ICANN yesterday published a video interview with DNS inventor Paul Mockapetris, in which he recounted some name collision anecdotes from the Mesolithic period of the internet. It’s well worth a watch.