Latest news of the domain name industry

Recent Posts

ICANN must do more to fight internet security threats [Guest Post]

ICANN and its contracted parties need to do more to tackle security threats, write Dave Piscitello and Lyman Chapin of Interisle Consulting.
The ICANN Registry and Registrar constituencies insist that ICANN’s role with respect to DNS abuse is limited by its Mission “to ensure the stable and secure operation of the internet’s unique identifier systems”, thereby limiting ICANN’s remit to abuse of the identifier systems themselves and specifically excluding any harms that arise from the content to which the identifiers point.
In their view, if the harm arises not from the identifier, but from the thing identified, it is outside of ICANN’s remit.
This convenient formulation relieves ICANN and its constituencies of responsibility for the way in which identifiers are used to inflict harm on internet users. However convenient it may be, it is fundamentally wrong.
ICANN’s obligation to operate “for the benefit of the Internet community as a whole” (see Bylaws, “Commitments”) demands that its remit extend broadly to how a domain name (or other Internet identifier) is misused to point to or lure a user or application to content that is harmful, or to host content that is harmful.
Harmful content itself is not ICANN’s concern; the way in which internet identifiers are used to weaponize harmful content most certainly is.
Rather than confront these obligations, however, ICANN is conducting a distracting debate about the kinds of events that should be described as “DNS abuse”. This is tedious and pointless; the persistent overloading of the term “abuse” has rendered it meaningless, ensuring that any attempt to reach consensus on a definition will fail.
ICANN should stop using the term “DNS abuse” and instead use the term “security threat”.
The ICANN Domain Abuse Activity Reporting (DAAR) project and the Governmental Advisory Committee (GAC) use this term, which is also a term of reference for new gTLD program obligations (Spec 11) and related reporting activities. It is also widely used in the operational and cybersecurity communities.
Most importantly, the GAC and the DAAR project currently identify and seek to measure an initial set of security threats that are a subset of a larger set of threats that are recognized as criminal acts in jurisdictions in which a majority of domain names are registered.
ICANN should acknowledge the GAC’s reassertion in its Hyderabad Communique that the set of security threats identified in its Beijing correspondence to the ICANN Board was not exhaustive but merely illustrative. The GAC smartly recognized that the threat landscape is constantly evolving.
ICANN should not attempt to artificially narrow the scope of the term “security threat” by crafting its own definition.
It should instead make use of an existing, internationally recognized criminal justice treaty: the Council of Europe’s Convention on Cybercrime, which ICANN could use as a reference for identifying security threats that the Treaty recognizes as criminal acts.
The Convention is recognized by countries in which a sufficiently large percentage of domain names are registered that it can serve the community and Internet users more effectively and fairly than any definition that ICANN might concoct.
ICANN should also acknowledge that the set of threats that fall within its remit must include all security events (“realized security threats”) in which a domain name is used during the execution of an attack for purposes of deception, for infringement on copyrights, for attacks that threaten democracies, or for operation of criminal infrastructures that are operated for the purpose of launching attacks or facilitating criminal (often felony) acts.
What is that remit?
ICANN policy and contracts must ensure that contracted parties (registrars and registries) collaborate with public and private sector authorities to disrupt or mitigate:

  • illegal interception or computer-related forgery,
  • attacks against computer systems or devices,
  • illegal access, data interference, or system interference,
  • infringement of intellectual property and related rights,
  • violation of laws intended to ensure fair and free elections, or acts that undermine democracies, and
  • child abuse and human trafficking.

We note that the Convention on Cybercrime identifies, or provides Guidance Notes for, these most prevalent attacks and criminal acts:

  • Spam,
  • Fraud, including the forms that use domain names in criminal messaging: business email compromise, advance fee fraud, phishing and other identity theft,
  • Botnet operation,
  • DDoS attacks, in particular redirection and amplification attacks that exploit the DNS,
  • Identity theft and phishing in relation to fraud,
  • Attacks against critical infrastructures,
  • Malware,
  • Terrorism, and
  • Election interference.

In all these cases, the misuse of internet identifiers to pursue the attack or criminal activity is squarely within ICANN’s remit.
Registries or registrars should be contractually obliged to take actions that are necessary to mitigate these misuses, including suspension of name resolution, termination of domain name registrations, “unfiltered and unmasked” reporting of security threat activity for both registries and registrars, and publication or disclosure of information that is relevant to mitigating misuses or disrupting cyberattacks.
No one is asking ICANN to be the Internet Police.
The “ask” is to create policy and contractual obligations to ensure that registries and registrars collaborate in a timely and uniform manner. Simply put, the “ask” is to oblige all of the parties to play on the same team and to adhere to the same rules.
This is unachievable in the current self-regulating environment, in which a relatively small number of outlier registries and registrars are the persistent loci of extraordinary percentages of domain names associated with cyberattacks or cybercrimes, and in which the current contracts offer no provisions to suspend or terminate their operations.
This is a guest editorial written by Dave Piscitello and Lyman Chapin, of security consultancy Interisle Consulting Group. Interisle has been an occasional ICANN security contractor, and Piscitello until last year was employed as vice president of security and ICT coordination on ICANN staff. The views expressed in this piece do not necessarily reflect DI’s own.

New gTLD applicants get a way to avoid name collision delay

Kevin Murphy, October 9, 2013, Domain Tech

ICANN has given blessed relief to many new gTLD applicants by wiping potentially months off their path to delegation.
Its New gTLD Program Committee this week adopted a new “New gTLD Collision Occurrence Management Plan” which aims to tackle the problem of clashes between new gTLDs and names used on private networks.
The good news is that the previous categorization of strings according to risk, which would have delayed “uncalculated risk” gTLDs by months pending further study, has been scrapped.
The two “high risk” strings — .home and .corp — don’t catch a break, however. ICANN says it will continue to refuse to delegate them “indefinitely”.
For everyone else, ICANN said it will conduct additional studies into the risk of name collisions, above and beyond what Interisle Consulting already produced.
The study will take into account not only the frequency that new gTLDs currently generate NXDOMAIN traffic in the DNS root, but also the number of second-level domains queried, the diversity of requesting sources, and other factors.
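To make that concrete, here is a minimal sketch of the kind of bookkeeping such a study involves, assuming a pre-processed text log with one "source-ip queried-name" pair per line (the filename and format here are hypothetical; real DITL data is raw packet captures, so treat this as illustrative only):

    from collections import defaultdict

    queries = defaultdict(int)   # total lookups per TLD
    slds = defaultdict(set)      # distinct second-level labels per TLD
    sources = defaultdict(set)   # distinct requesting sources per TLD

    with open("root-queries.txt") as f:  # hypothetical pre-processed log
        for line in f:
            src, qname = line.split()
            labels = qname.rstrip(".").lower().split(".")
            tld = labels[-1]
            queries[tld] += 1
            sources[tld].add(src)
            if len(labels) >= 2:
                slds[tld].add(labels[-2])

    for tld in sorted(queries, key=queries.get, reverse=True)[:10]:
        print(tld, queries[tld], len(slds[tld]), len(sources[tld]))
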
Any new gTLD applicant that does not wish to wait for this study will be able to proceed to delegation without delay, but only if they block huge numbers of second-level domains at launch.
The registries will have to block every SLD that was queried in their gTLD according to the Day in the Life of the Internet data that Interisle used in its study.
This list will vary by TLD, but in the most severe cases is likely to extend to tens of thousands of names. In many cases, it’s likely to be a few thousand names.
Fortunately, studies conducted by the likes of Donuts and Neustar indicate that many of these SLDs — maybe even the majority — are likely to be invalid strings, such as those containing an underscore or other non-DNS character, or the randomly generated 10-character strings of gibberish that Google Chrome emits.
In other words, the actual number of potentially salable domains that registries will have to block may turn out to be much lower than it appears at first glance.
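For illustration, here is a rough syntactic filter along those lines, using the standard LDH ("letters, digits, hyphen") rule for registrable labels. Note that it only catches the underscore-type cases; Chrome's random probes are syntactically valid and would need a different heuristic. This is a sketch, not a complete validity check (it ignores IDN rules, for example):

    import re

    # LDH rule of thumb: 1-63 letters, digits or hyphens,
    # with no leading or trailing hyphen.
    LDH = re.compile(r"(?!-)[a-z0-9-]{1,63}(?<!-)")

    def potentially_salable(label: str) -> bool:
        return LDH.fullmatch(label.lower()) is not None

    print(potentially_salable("mail"))        # True
    print(potentially_salable("_dmarc"))      # False: underscore label
    print(potentially_salable("qjzxvokwpd"))  # True: gibberish, but valid LDH
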
Each SLD will have to be blocked in such a way that it continues to return NXDOMAIN responses, as they all do today.
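From the outside, that behaviour is easy to sanity-check. A minimal sketch, assuming the dnspython library (the blocked name below is hypothetical):

    import dns.resolver  # pip install dnspython

    def still_nxdomain(name: str) -> bool:
        # A blocked SLD must keep answering NXDOMAIN, exactly as it
        # did before the TLD was delegated.
        try:
            dns.resolver.resolve(name, "A")
        except dns.resolver.NXDOMAIN:
            return True
        except (dns.resolver.NoAnswer, dns.resolver.NoNameservers):
            pass  # the name exists in some form: not NXDOMAIN
        return False

    print(still_nxdomain("blocked-example.cisco."))  # hypothetical blocked name
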
Because the DITL data represented a 48-hour snapshot in May 2013, and may not include every potentially affected string, ICANN is also proposing to give organizations a way to:

report and request the blocking of a domain name (SLD) that causes demonstrably severe harm as a consequence of name collision occurrences.

The process will allow the deactivation (SLD removal from the TLD zone) of the name for a period of up to two (2) years in order to allow the affected party to effect changes to its network to eliminate the DNS request leakage that causes collisions, or mitigate the harmful impact.

One has to wonder if any trademark lawyers reading this will think: “Ooh, free defensive registration!” It will be interesting to see if any of them give it a cheeky shot.
I’ve got a feeling that most new gTLD applicants will want to take ICANN up on its offer. It’s not an ideal solution for them, but it does give them a way to get into the root relatively quickly.
There’s no telling what ICANN’s additional studies will find, but there’s a chance it could be negative for their string(s) — getting delegated at least mitigates the risk of never getting delegated.
The new ICANN proposal may in some cases interfere with their plans to market and use their TLDs, however.
Take a dot-brand such as .cisco, which the networking company has applied for. Its block list is likely to have about 100,000 strings on it, increasing the chances that useful, brandable SLDs are going to be taken out of circulation for a while.
ICANN is also proposing to conduct an awareness-raising campaign, using the media, to let network operators know about the risks that new gTLDs may present to their networks.
Depending on how effective this is, new registries may be able to forget about getting positive column inches for their launch — if a journalist is handed a negative angle for a story on a plate, they’ll take it.

Verisign targets bank claims in name collisions fight

Kevin Murphy, September 15, 2013, Domain Tech

Verisign has rubbished the Commonwealth Bank of Australia’s claim that its dot-brand gTLD, .cba, is safe.
In a lengthy letter to ICANN today, Verisign senior vice president Pat Kane said that, contrary to CBA’s claims, the bank is only responsible for about 6% of the traffic .cba sees at the root.
It’s the latest volley in the ongoing fight about the security risks of name collisions — the scenario where an applied-for gTLD string is already in broad use on internal networks.
CBA’s application for .cba has been categorized as “uncalculated risk” by ICANN, meaning it faces more reviews and three to six months of delay while its risk profile is assessed.
But in a letter to ICANN last month, CBA said “the cause of the name collision is primarily from CBA internal systems” and “it is within the CBA realm of control to detect and remediate said systems”.
The bank was basically claiming that its own computers use DNS requests for .cba already, and that leakage of those requests onto the internet was responsible for its relatively high risk profile.
At the time we doubted that CBA had access to the data needed to draw this conclusion and Verisign said today that a new study of its own “shows without a doubt that CBA’s initial conclusions are incorrect”.
Since the publication of Interisle Consulting’s independent review into root server error traffic — which led to all applied-for strings being split into risk categories — Verisign has evidently been carrying out its own study.
While Interisle used data collected from almost all of the DNS root servers, Verisign’s seven-week study only looked at data gathered from the A-root and J-root, which it manages.
According to Verisign, .cba gets roughly 10,000 root server queries per day — 504,000 in total over the study window — and hardly any of them come from the bank itself.
Most appear to be from residential apartment complexes in Chiba, Japan, where network admins seem to have borrowed the local airport code — also CBA — to address local devices.
About 80% of the requests seen come from devices using DNS Service Discovery services such as Bonjour, Verisign said.
Bonjour is an Apple-created technology that allows computers to use DNS to automatically discover other LAN-connected devices such as printers and cameras, making home networking a bit simpler.
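To give a flavour of what that leakage looks like, here is a sketch (again assuming dnspython; the cba search domain is purely illustrative) of the DNS-SD service-enumeration query, defined in RFC 6763, that a device configured with a bogus search domain would end up sending to the public DNS:

    import dns.resolver  # pip install dnspython

    # DNS-SD (RFC 6763): a device browsing for services in the domain
    # "cba" enumerates them with a PTR query for this magic name. While
    # .cba is undelegated, the root answers NXDOMAIN -- and queries
    # like this one are what turn up in the root server logs.
    qname = "_services._dns-sd._udp.cba."
    try:
        dns.resolver.resolve(qname, "PTR")
    except dns.resolver.NXDOMAIN:
        print(f"{qname} does not exist (NXDOMAIN) -- for now")
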
Another source of the .cba traffic is McAfee’s antivirus software, made by Intel, which Verisign said uses DNS to check whether code is virus-free before executing it.
While error traffic for .cba was seen from 170 countries, Verisign said that Japan — notable for not being Australia — was the biggest source, with almost 400,000 queries (79% of the total). It said:

Our measurement study reveals evidence of a substantial Internet-connected infrastructure in Japan that lies beneath the surface of the public-facing internet, which appears to rely on the non-resolution of the string .CBA.
This infrastructure appears hierarchical and seems to include municipal and private administrative and service networks associated with electronic resource management for office and residential building facilities, as well as consumer devices.

One apartment block in Chiba is responsible for almost 5% of the daily .cba queries — about 500 per day on average — according to Verisign’s letter, though there were 63 notable sources in total.
ICANN’s proposal for reducing the risk of these name collisions causing problems would require CBA, as the registry, to hunt down and warn organizations of .cba’s impending delegation.
Verisign reiterates the point made by RIPE NCC last month: this would be quite difficult to carry out.
But it does seem that Verisign has done a pretty good job tracking down the organizations that would be affected by .cba being delegated.
The question that Verisign’s letter and presentation do not address is: what would happen to these networks if .cba were delegated?
If .cba is delegated, what will McAfee’s antivirus software do? Will it crash the user’s computer? Will it allow unsafe code to run? Will it cause false positives, blocking users from legitimate content?
Or will it simply fail gracefully, causing no security problems whatsoever?
Likewise, what happens when Bonjour expects .cba to not exist and it suddenly does? Do Apple computers start leaking data about the devices on their local network to unintended third parties?
Or does it, again, cause no security problems whatsoever?
Without satisfactory answers to those questions, it remains possible that name collisions could be introduced by ICANN with little to no ill effect, meaning the “risk” isn’t really a risk at all.
Answering those questions will of course take time, which means delay, which is not something most applicants want to hear right now.
Verisign’s study targeted CBA because CBA singled itself out by claiming to be responsible for the .cba error traffic, not because CBA is a client of rival registry Afilias.
The bank can probably thank Verisign for its study, which may turn out to be quite handy.
Still, it would be interesting to see Verisign conduct a similar study on, say, .windows (Microsoft), .cloud (Symantec) or .bank (Financial Services Roundtable), which are among the 35 gTLDs with “uncalculated” risk profiles that Verisign promised to provide back-end registry services for before it decided that new gTLDs were dangerous.
You can read Verisign’s letter and presentation here. I’ve rotated the PDF to make the presentation more readable here.

Artemis plans name collision conference next week

Kevin Murphy, August 16, 2013, Domain Tech

Artemis Internet, the NCC Group subsidiary applying for .secure, is to run a day-long conference devoted to the topic of new gTLD name collisions in San Francisco next week.
Google, PayPal and DigiCert are already lined up to speak at the event, and Artemis says it expects 60 to 70 people, many of them from major new gTLD applicants, to show up.
The free-to-attend TLD Security Forum will discuss the recent Interisle Consulting report into name collisions, which compared the problem in some cases to the Millennium Bug and recommended extreme caution when approving new gTLDs.
Brad Hill, head of ecosystem security at PayPal, will speak to “Paypal’s Concerns and Recommendations on new TLDs”, according to the agenda.
That’s notable because PayPal is usually positioned as being aligned with the other side of the debate — to date, it’s the only company Verisign has been able to quote in support of its own concerns about name collisions.
The Interisle report led to ICANN recommending months of delay for hundreds of new gTLD strings — basically every string that already gets more daily root server error traffic than legitimate queries for .sj, the existing TLD with the fewest look-ups.
The New TLD Applicants Group issued its own commentary on these recommendations, apparently drafted by Artemis CTO Alex Stamos, earlier this week, calling for all strings except .home and .corp to be treated as low risk.
NTAG also said in its report that it has been discussing with SSL certificate authorities ways to potentially speed up risk-mitigation for the related problem of internal name certificate collisions, so it’s also notable that DigiCert’s Dan Timpson is slated to speak at the Forum.
The event may be webcast for those unable to attend in person, according to Artemis. If it is, DI will be “there”.

On the same topic, ICANN yesterday published a video interview with DNS inventor Paul Mockapetris, in which he recounted some name collision anecdotes from the Mesolithic period of the internet. It’s well worth a watch.

NTAG rubbishes new gTLD collision risk report

Kevin Murphy, August 15, 2013, Domain Policy

The New gTLD Applicants Group has slated Interisle Consulting’s report into the risk of new gTLDs causing security problems on the internet, saying the problem is “overstated”.
The group, which represents applicants for hundreds of gTLDs and has a non-voting role in ICANN’s GNSO, called on ICANN to reclassify hundreds of “Uncalculated” risk strings as “Low” risk, meaning they would face neither a substantial delay before, nor uncertainty about, their eventual delegation.
But NTAG said it “agreed” that the high-risk .corp and .home “should be delayed while further studies are conducted”. The current ICANN proposal is actually to reject both of these strings.
NTAG was responding to ICANN’s proposal earlier this month to delay 523 applications (for 279 strings) by three to six months while further studies are carried out.
The proposal was based on Interisle’s study of DNS root server logs, which showed many millions of daily queries for gTLDs that currently do not exist but have been applied for.
The worry is that delegating those strings would cause problems such as downtime or data leakage, where sensitive information intended for a recipient on the same local network would be sent instead to a new gTLD registry or one of its (possibly malicious) registrants.
NTAG reckons the risk presented by Interisle has been overblown, and it presented a point-by-point analysis of its own. It called for everything except .corp and .home to be categorized “Low” risk, saying:

We recognize that a small number of applied for names may possibly pose a risk to current operations, but we believe very strongly that there is no quantitative basis for holding back strings that pose less measurable threat than almost all existing TLDs today. This is why we urge the board to proceed with the applications classified as “Unknown Risk” using the mitigations recommended by staff for “Low Risk” strings. We believe the 80% of strings classified as “Low Risk” should proceed immediately with no additional mitigations.

The group pointed to a recent analysis by Verisign (which, contrarily, was trying to show that new gTLDs should be delayed) which included data about previous new gTLD delegations.
That report (pdf) said that .xxx was seeing 4,018 look-ups per million queries at the DNS root (PPM) before it was delegated. The number for .asia was 2,708.
If you exclude .corp and .home, both of those PPM numbers are several times larger than the equivalent measures of query volume for every applied-for gTLD today, also according to Verisign’s data.
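For readers unfamiliar with the metric, PPM is simply lookups for one string per million total queries observed at the root, which makes the comparison easy to reproduce:

    def ppm(string_queries: int, total_queries: int) -> float:
        # Lookups for one string per million total root queries.
        return string_queries / total_queries * 1_000_000

    # Reading the metric backwards (the 10 billion daily root queries
    # here is a hypothetical round number, not a figure from either
    # report): 4,018 PPM for .xxx would have meant roughly 40 million
    # lookups per day.
    print(4_018 / 1_000_000 * 10_000_000_000)  # 40180000.0
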
NTAG said:

None of these strings pose any more risk than .xxx, .asia and other currently operating TLDs.

the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.

Verisign’s report, which sought to provide a more qualitative risk analysis based on some data-supported guesses about where the error traffic is coming from and why, anticipated this interpretation.
Verisign said:

This could indicate that there is nothing to worry about when adding new TLDs, because there was no global failure of DNS when this was done before. Alternately, one might conclude that traffic volumes are not the only indicator of risk, and the semantic meaning of strings might also play a role. We posit that in some cases, those strings with semantic meanings, and which are in common use (such as in speech, writing, etc.) pose a greater risk for naming collision.

The company spent most of its report making somewhat tenuous correlations between its data (such as a relatively large number of requests for .medical from Japanese IP addresses) and speculative impacts (such as “undiagnosed system failures” at “a healthcare provider in Japan”).
NTAG, by contrast, is playing down the potential for negative outcomes, saying that in many cases the risks introduced by new gTLDs are no different from collision risks at the second level in existing TLDs.

Just as the NTAG would not ask ICANN to halt .com registrations while a twelve month study is performed on these problems, we believe there is no reason to introduce a delay in diversifying the Internet’s namespace due to these concerns.

While it stopped short of alleging shenanigans this time around, NTAG also suggested that future studies of root server error traffic could be gamed if botnets were engaged to crapflood the roots.
Its own mitigation plan, which addresses Interisle’s specific concerns, says that most of the reasons that non-existent TLDs are being looked up are either not a problem or can be easily mitigated.
For example, it says that queries for .youtube that arrived in the form of a request for “www.youtube” are probably browser typos and that there’s no risk for users if they’re taken to the YouTube dot-brand instead of youtube.com.
In another example, it points out that requests for “.cisco” or “.toshiba” without any second-level domains won’t resolve anyway, if dotless domains are banned in those TLDs. (NTAG, which has influential members in favor of dotless domains, stopped short of asking for a blanket ban.)
The Interisle report, and ICANN’s proposal to deal with it, are open for public comment until September 17. NTAG’s response is remarkably quick off the mark, for guessable reasons.

Donuts, Uniregistry and Famous Four respond to ICANN’s new gTLD security bombshell

Kevin Murphy, August 6, 2013, Domain Registries

Following the shock news this morning that ICANN wants to delay hundreds of new gTLD applications due to potential security risks, we pinged a few of the biggest applicants for their initial reactions.
Donuts, Uniregistry and Famous Four Media, which combined are responsible for over a fifth of all applications, have all responded so far, so we’re printing their statements here in full.
As a reminder, two reports published by ICANN today a) strongly warn against delegating so-called “dotless” domains and b) present significant evidence that “internal name collisions” are a real and present danger to the security and stability of many private networks.
ICANN, in response to the internal name collision issue, proposed to delay 20% of all new gTLD applications for three to six more months while more research is carried out.
It also wants to ask new gTLD registries to conduct outreach to internet users potentially affected by their delegated gTLD strings.
Of the three, Donuts seems most upset. It sent us the following statement:

One has to wonder about the timing of these reports and the motivations behind them. Donuts believes, and our own research confirms satisfactorily to us, that dotless domains and name collision are not threatening to the stability and security of the domain name system.
Name collisions, such as the NxD (in the technical parlance) collisions studied in this report, happen every day in .com, yet the study did not quantify those and Verisign does not block those names from being registered.
We’re concerned about false impressions being deliberately created and believe the reports are commercially or competitively motivated.
There is little reason to pre-empt dotless domains now when there are ICANN processes in place to evaluate them in due course. We don’t believe that ICANN resources need to be deployed at this point on understanding the potential innovations of possible uses nor any security harms.
We also think that name collision is an overstated issue. Rather than take the overdone step of halting or delaying these TLDs, if the issue really is such a concern, it would be wiser to focus on the second-level names where a conflict could occur.
As the NTIA recently wrote, Verisign’s inconsistencies on technical issues are very troubling. These issues have been thoroughly studied for some time. It’s far past due to conclude this eight-year process and move to delegation.

As I haven’t previously heard any reason to doubt Interisle Consulting’s impartiality or question its motivation in writing the name collisions report, I asked Donuts for clarification, but the company declined to elaborate.
Interisle has been working with ICANN for some time on various technical studies and is also one of the new gTLD program’s independent evaluators, responsible for registry services evaluations.
Uniregistry CEO Frank Schilling was also unhappy with the report. He sent the following statement:

We are deeply dismayed by this new report, both by its substance and its timing. On the substance, the concerns addressed by the report relate, primarily if not solely, to solvable problems created by third-parties using the DNS in non-standard ways. We expect that any problems will be addressed quickly by the companies and individuals that caused them in the first place.
On ICANN’s timing, it is, come just as the first new gTLDs are prepared to launch, very late and, quite obviously, highly disruptive to the long-standing business plans of the companies that relied on ICANN’s guidebook and stated timelines. Uniregistry believes that the best approach is to move forward with the launch of all new gTLDs on the existing schedule.

Finally, Famous Four Media is slightly more relaxed about the situation, judging by the statement it sent us:

Famous Four Media’s primary concern is the security and stability of the Internet. Since this is in the interest of all parties involved in the new gTLD program from registries to registrants and all in between Famous Four Media welcomes these proposals.
Whilst the latest report, and the consequent ICANN proposals, will inevitably cause delays and additional costs in the launches of new gTLDs, Famous Four Media does not believe it will impact its go-to-market plans significantly. The majority of our TLD strings are considered “low risk” and we see this in a very positive light, although other applicants might not afford to be as sanguine.

According to the DI PRO New gTLD Application Tracker, which has been updated with the risk levels ICANN says each applied-for gTLD poses, 18 of Famous Four’s 60 original applications are in the riskiest two categories, compared to 23 of Uniregistry’s 54 and 102 of Donuts’ 307.

New gTLDs are the new Y2K: .corp and .home are doomed and everything else is delayed

Kevin Murphy, August 6, 2013, Domain Registries

The proposed gTLDs .home and .corp create risks to the internet comparable to the Millennium Bug, which terrorized a burgeoning internet at the turn of the century, and should be rejected.
Meanwhile, every other gTLD that has been applied for in the current round could be delayed by months in order to mitigate the risks they pose to internet users.
These are the conclusions ICANN has drawn from Interisle Consulting’s independent study into the problems that could be caused when new gTLDs clash with widely-used internal naming systems.
The extensive study, which drew on 8TB of traffic data provided by 11 of the 13 DNS root server operators, is 197 pages long and absolutely fascinating. It was published by ICANN today.
As Interisle CEO Lyman Chapin reported at the ICANN meeting in Durban a few weeks ago, the large majority of TLDs that have been applied for in the current round already receive large amounts of error traffic:

Of the 1,409 distinct applied-for TLD strings, 1,367 appeared at least once in the 2013 DITL [Day In the Life of the Internet] data with the string at the TLD position.

We’ve previously reported on the volume of queries new gTLDs get, such as the fact that .home gets half a billion hits a day and that 3% of all requests were for strings that have been applied for in the current round.
The extra value in Interisle’s report comes when it starts to figure out how many end points are making these requests, and how many second-level domains they’re looking for.
These are vitally important factors for assessing the scale of the risk of each TLD.
Again, .home and .corp appear to be the most dangerous.
Interisle capped the number of second-level domains it counted in the 2013 data at 100,000 per TLD per root server — 1,100,000 domains in total — and .home was the only TLD string to hit this cap.
Cisco Systems’ proposed .cisco TLD came close, failing to hit the cap in only one of the 11 root servers providing data, while .box and .iinet (both also used widely on home routers) hit the cap on at least one root server.
The lowest count of second-level domains of the 35 listed in the report came from .hsbc, the bank brand, but even that number was a not-inconsiderable 2,000.
Why are these requests being made?
Surprisingly, interactions between a security feature in Google’s own Chrome browser and common residential routers appear to be the biggest cause of queries for non-existent TLDs.
That issue, which impacts mainly .home, accounts for about 46% of the requests counted, according to the report.
In second place, with 15% of the queries, are requests for real domain names that appear to have had a non-existent TLD — again, usually .home — appended by a residential router or cable modem.
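The mechanism is easy to model. Here is a toy sketch of resolv.conf-style search-list expansion, which is roughly what a router that pushes "home" as a DNS search domain triggers (ndots behaviour varies between resolvers, so treat this as illustrative):

    def candidate_queries(name: str, search_list: list[str], ndots: int = 1) -> list[str]:
        # Names with at least `ndots` dots are tried as-is first, with
        # the search suffixes as fallbacks; shorter names get the
        # suffixes appended first.
        bare = [name + "."]
        expanded = [f"{name}.{s}." for s in search_list]
        return bare + expanded if name.count(".") >= ndots else expanded + bare

    # A failed or retried lookup of a real name becomes root traffic
    # for a non-existent TLD:
    print(candidate_queries("example.com", ["home"]))
    # ['example.com.', 'example.com.home.']
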
Apparent typos — where a user enters a URL but forgets to type the TLD — were a relatively small percentage of requests, coming in at under 1% of queries.
The study also found that bad requests come from many thousands of sources. This table compares the number of requests to the number of sources.
[Table: root query counts (“Count”) and /24 source-prefix counts (“Prefix Count”) for selected applied-for strings, not reproduced here]
The “Count” column is the number, in thousands, of requests for each TLD string. The “Prefix Count” column refers to the number of sources providing this traffic, counted by /24 IP address block (each of which covers up to 256 potential hosts).
As you can see, there’s not necessarily a correlation between the number of requests a TLD gets and the number of people making the requests — .google gets queried by more sources than the others, but it’s only ranked 24 in terms of overall query volume, for example.
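Counting sources by /24 is straightforward; a minimal sketch using Python's standard library (the addresses are documentation examples):

    import ipaddress

    def prefix24(addr: str) -> str:
        # Collapse a source IPv4 address to its /24 block,
        # e.g. 192.0.2.57 -> 192.0.2.0/24 (up to 256 hosts).
        return str(ipaddress.ip_network(f"{addr}/24", strict=False))

    sources = ["192.0.2.57", "192.0.2.200", "198.51.100.7"]
    print(len({prefix24(a) for a in sources}))  # 2 distinct /24 blocks
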
Interisle concluded from all this that .corp and .home are simply too dangerous to delegate, comparing the problem to the year 2000 bug, where a global effort was required to make sure software could support the four-digit dating scheme required by the turn of the century.
Here’s what the report says about .corp:

users could be taken to the wrong web site (and possibly be exposed to phishing attacks) or told that web sites do not exist when they do, depending on how the .corp TLD is resolved. A corporate mail system might attempt to deliver email to the wrong server, and this could expose sensitive or confidential information to someone who was not supposed to receive it. In essence, everything deployed in the private network would need to be checked.
There are no easy solutions to these problems. In an ideal world, the operators of these private networks would get a timely notification of the new TLD’s delegation and then take action to address these issues. That seems very improbable. Even if ICANN generated sufficient publicity about the new TLD’s delegation, there is no guarantee that this will come to the attention of the management or operators of the private networks that could be jeopardized by the delegation.

It seems reasonable to estimate that the amount of effort involved might be comparable to a wholesale renumbering of the internal network or the Y2K problem.

It notes that applied-for TLDs such as .site, .office, .group and .inc appear to be used in similar ways to .home and .corp, but do not appear to present as broad a risk.
To be clear, the risk we’re talking about here isn’t just people typing the wrong things into browsers, it’s about the infrastructure on many thousands of private networks starting to make the wrong security assumptions about domain names.
ICANN, in response, has outlined a series of measures sure to infuriate many gTLD applicants, but which are consistent with its goal to protect the security and stability of the internet.
They’re also consistent with some of the recommendations put forward by Verisign over the last few months in its campaign to show that new gTLDs pose huge risks.
First, .corp and .home are dead. These two strings have been categorized “high risk” by ICANN, which said:

Given the risk level presented by these strings, ICANN proposes not to delegate either one until such time that an applicant can demonstrate that its proposed string should be classified as low risk

Given the Y2K-scale effort required to mitigate the risks, and the fact that the eventual pay-off wouldn’t compensate for the work, I feel fairly confident in saying the two strings will never be delegated.
Another 80% of the applied-for strings have been categorized “low risk”. ICANN has published a spreadsheet explaining which string falls into which category. Low risk does not mean they get off scot-free, however.
First, registries for low-risk strings will not be allowed to activate any domain names in their gTLD for 120 days after contract signing.
Second, for 30 days after a gTLD is delegated the new registries will have to reach out to the owners of each IP address that attempts to query names in that gTLD, to try to mitigate the risk of internal name collisions.
This, as applicants will no doubt quickly argue, is going to place them under a massive cost burden.
But their outlook is considerably brighter than that of the remaining 20% of applications, which are categorized as “uncalculated risk” and face a further three to six months of delay while ICANN conducts further studies into whether they’re each “high” or “low” risk strings.
In other words, the new gTLD program is about to see its biggest shake-up since the GAC delivered its Advice in Beijing, adding potentially millions in costs and delays for applicants.
ICANN’s proposed mitigation efforts are now open for public comment.
One has to wonder why the hell ICANN didn’t do this study two years ago.

.home gets half a billion hits a day. Could this put new gTLDs at risk?

Kevin Murphy, July 17, 2013, Domain Tech

New gTLDs could be in jeopardy following the results of a study into the security risks they may pose.
ICANN is likely to be told to put in place measures to mitigate the risk of new gTLDs causing problems, and chief security officer Jeff Moss said “deadlines will have to move” if global DNS resolution is put at risk.
His comments referred to the potential for clashes between applied-for new gTLD strings and non-existent TLDs that are nevertheless already widely used on internal networks.
That’s a problem that has been increasingly highlighted by Verisign in recent months. The difference here is that the study’s author does not have a .com monopoly to protect.
Interisle Consulting, which has been hired by ICANN to look into the problem, today released some of its preliminary findings during a session at the ICANN 47 meeting in Durban, South Africa.
The company looked at domain name look-up data collected from one of the DNS root servers over a 48-hour period, in an attempt to measure the potential scope of the clash problem.
Some of its findings are surprising:

  • Of the 1,408 strings originally applied for in the current new gTLD round, only 14 do not currently have any root traffic.
  • Three percent of all requests were for strings that have been applied for in the current round.
  • A further 19% of requests were for strings that could potentially be applied for in future rounds (that is, the TLD was syntactically well-formed and not a banned string such as .local).
  • .home, the most frequently requested invalid TLD, received over a billion queries over the 48-hour period. That’s compared to 8.5 billion for .com.

Here’s a list of the top 17 invalid TLDs by traffic, taken from Interisle’s presentation (pdf) today.
[Chart: most-queried invalid TLDs]
If the list had been of the top 100 requested TLDs, 13 of them would have been strings that have been applied for in the current round, Interisle CEO Lyman Chapin said in the session.
Here are the most-queried applied-for strings:
[Chart: most-queried applied-for strings]
Chapin was quick to point out that big numbers do not necessarily equate to big security problems.
“Just occurrence doesn’t tell you a lot about whether that’s a good thing, a bad thing, a neutral thing, it just tells you how often the string appears,” he said.
“An event that occurs very frequently but has no negative side effects is one thing, an event that occurs very infrequently but has a really serious side effect, like a meteor strike — it’s always a product of those two factors that leads you to an assessment of risk,” he said.
For example, the reason .ice appears prominently on the list appears to be solely due to an electricity producer in Costa Rica, which “for some reason is blasting .ice requests out to the root”, Chapin said.
If the bad requests are only coming from a small number of sources, that’s a relatively simple problem to sort out — you just call up the guy responsible and tell him to sort out his network.
In cases like .home, where much of the traffic is believed to be coming from millions of residential DSL routers, that’s a much trickier problem.
The reverse is also true, however: a small number of requests doesn’t necessarily mean a low-impact risk.
There may be a relatively small number of requests for .hospital, for example, but if the impact is even a single life support machine blinking off… probably best not to delegate that gTLD.
Chapin said that the full report, which ICANN said could be published in about two weeks, does contain data on the number of sources of requests for each invalid TLD. Today’s presentation did not, however.
As well as the source of the requests, the second-level domains being requested are also an important factor, but one that does not seem to have been addressed by this study.
For example, .home may be getting half a billion requests a day, but if all of those requests are for bthomehub.home — used today by the British ISP BT in its residential routers — the .home registry might be able to eliminate the risk of data leakage by simply giving BT that domain.
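If that per-TLD second-level data existed, the relevant measure would be simple: the share of a TLD's traffic accounted for by its busiest second-level label. A sketch, with made-up numbers:

    from collections import Counter

    def top_sld_share(sld_counts: Counter) -> float:
        # Share of a TLD's query traffic going to its busiest SLD. If
        # one label dominates, a single reservation (or grant, as with
        # bthomehub.home) might remove most of the collision risk.
        total = sum(sld_counts.values())
        _, top_count = sld_counts.most_common(1)[0]
        return top_count / total

    # Hypothetical split of .home's daily traffic:
    print(top_sld_share(Counter({"bthomehub": 450_000_000,
                                 "other-labels": 50_000_000})))  # 0.9
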
Likewise, while .hsbc appears on the list it’s actually been applied for by HSBC as a single-registrant gTLD, so the risk of delegating it to the DNS root may be minimal.
There was no data on second-level domains in today’s presentation and it does not appear that the full Interisle report contains it either. More study may be needed.
Donuts CEO Paul Stahura also took to the mic to ask Chapin whether he’d compared the invalid TLD requests to requests for invalid second-level domains in, say, .com. He had not.
One of Stahura’s arguments, expounded at length in the comment thread on this DI blog post, is that delegating TLDs with existing traffic is little different to allowing people to register .com domains with existing traffic.
So what are Interisle’s recommendations likely to be?
Judging by today’s presentation, the company is going to present a list of risk-mitigation options that are pretty similar to what Verisign has previously recommended.
For example, some strings could be permanently banned, or there could be a “trial run” — what Verisign called an “ephemeral delegation” — for each new gTLD to test for impact before full delegation.
It seems to me that if the second-level request data was available, more mitigation options would be opened up.
ICANN chief security officer Jeff Moss, who was on today’s panel, was asked what he would recommend to ICANN CEO Fadi Chehade today in light of the report’s conclusions.
“I am not going to recommend we do anything that has any substantial SSR impact,” said Moss. “If we find any show-stoppers, if we find anything that suggests impact for global DNS, we won’t do it. It’s not worth the risk.”
Without prompting, he addressed the risk of delay to the new gTLD program.
“People sometimes get hung up on the deadline, ‘How will you know before the deadline?’,” he said. “Well, deadlines can move. If there’s something we find that is a show-stopper, deadlines will have to move.”
The full report, expected to be published in two weeks, will be opened for public comment, ICANN confirmed.
Assuming the report is published on time and has a 30-day comment period, that brings us up to the beginning of September, coincidentally the same time ICANN expects the first new gTLD to be delegated.
ICANN certainly likes to play things close to the whistle.