Russian hackers breaking into NameCheap accounts

Kevin Murphy, September 2, 2014, Domain Registrars

If you have an account at NameCheap, now might be a good time to think about changing your password.
According to the registrar, hackers based in Russia are using a haul of a reported 4.5 billion username/password combinations to attempt to break into its customers’ accounts.
Some attempts have been successful, NameCheap warned.
The attackers are using credentials stolen from third-party sources in a large-scale, automated attempt to log in to user accounts, disguised as regular users, the company said in a blog post.
NameCheap said:

The vast majority of these login attempts have been unsuccessful as the data is incorrect or old and passwords have been changed. As a precaution, we are aggressively blocking the IP addresses that appear to be logging in with the stolen password data. We are also logging these IP addresses and will be exporting blocking rules across our network to completely eliminate access to any Namecheap system or service, as well as making this data available to law enforcement.
While the vast majority of these logins are unsuccessful, some have been successful. To combat this, we’ve temporarily secured the Namecheap accounts that have been affected and are currently contacting customers involved requesting they improve the security for these accounts.

Affected users have been emailed, the company said.
NameCheap suspects the attack is linked to a reported cache of 1.2 billion unique username/password combinations amassed by a hacker group from databases vulnerable to SQL injection.
The registrar pointed out that its own systems haven’t been hacked. Customers should only be vulnerable if they use the same username and password at NameCheap as they use on other sites.

RADAR to be down at least two weeks after hack

ICANN expects its RADAR registrar database to be offline for “at least two weeks” following the discovery of a security vulnerability that exposed users’ login names and encrypted passwords.
ICANN seems to have been quick to act and to disclose the hack.
The attack happened last weekend and ICANN was informed about it by an “internet user” on Tuesday May 27, according to an ICANN spokesperson. RADAR was taken offline and the problem disclosed late May 28.
The spokesperson added that “we do not believe the user is affiliated with a current or previously accredited registrar.”
ICANN isn’t disclosing the nature of the vulnerability, but said RADAR will be offline for some time for a security audit. The spokesperson told DI in an email:

It will be at least two weeks. It is more important to complete a thorough security assessment of the site than to rush this process. First of all, we’re keeping the system offline until we complete a thorough audit of the system. We are also currently engaged in a security review of all systems and procedures at ICANN to assess and implement ongoing improvements as appropriate.

RADAR is a database used by registrars to coordinate stuff like emergency contacts and IP address whitelisting for bulk Whois access.
The downtime is not expected to impact registrants, according to ICANN. The spokesperson said: “Nothing that occurred has raised any concerns that registrants could or would be adversely affected.”

ICANN registrar database hacked

ICANN’s database of registrar contact information has been hacked and user data has been stolen.
The organization announced this morning that the database, known as RADAR, has been taken offline while ICANN conducts a “thorough review” of its security.
ICANN said:

This action was taken as a precautionary measure after it was learned that an unauthorized party viewed data in the system. ICANN has found no evidence of any unauthorized changes to the data in the system. Although the vulnerability has been corrected, RADAR will remain offline until a thorough review of the system is completed.

Users of the system — all registrars — have had their usernames, email addresses and encrypted passwords compromised, ICANN added.
ICANN noted that hashed passwords can potentially be recovered by brute force, so it’s enforcing a password reset on all users, though it has no evidence of any user accounts being accessed.
RADAR users may want to think about whether they have the same username/password combinations at other sites.
RADAR is a database used by registrars in critical functions such as domain name transfers.
Registrars can use it, for example, to white-list the IP addresses of rival registrars, enabling them to execute large amounts of Whois queries that would usually be throttled.
The news follows hot on the heels of a screwup in the Centralized Zone Data Service, which enabled any new gTLD registry to view data belonging to rival registries and other CZDS users.

Controlled interruption as a means to prevent name collisions [Guest Post]

Jeff Schmidt, January 8, 2014, Domain Tech

This is a guest post written by Jeff Schmidt, CEO of JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.
One of JAS’ commitments during this process was to “float” ideas and solicit feedback. This set of thoughts poses an alternative to the “trial delegation” proposals in SAC062. The idea springs from past DNS-related experiences and has an effect we have named “controlled interruption.”
Learning from the Expired Registration Recovery Policy
Many are familiar with the infamous Microsoft Hotmail domain expiration in 1999. In short, a Microsoft registration for passport.com (Microsoft’s then-unified identity service) expired Christmas Eve 1999, denying millions of users access to the Hotmail email service (and several other Microsoft services) for roughly 20 hours.
Fortunately, a well-intended technology consultant recognized the problem and renewed the registration on Microsoft’s behalf, yielding a nice “thank you” from Microsoft and Network Solutions. Had a bad actor realized the situation, the outcome could have been far different.
The Microsoft Hotmail case and others like it led to the current Expired Registration Recovery Policy.
More recently, Regions Bank made news when its domains expired, and countless others go unreported. In the case of Regions Bank, the Expired Registration Recovery Policy seemed to work exactly as intended – the interruption inspired immediate action and the problem was solved, resulting in only a bit of embarrassment.
Importantly, there was no opportunity for malicious activity.
For the most part, the Expired Registration Recovery Policy is effective at preventing unintended expirations. Why? We call it the application of “controlled interruption.”
The Expired Registration Recovery Policy calls for extensive notification before the expiration, then a period when “the existing DNS resolution path specified by the Registrant at Expiration (“RAE”) must be interrupted” – as a last-ditch effort to inspire the registrant to take action.
Nothing inspires urgent action more effectively than service interruption.
But critically, in the case of the Expired Registration Recovery Policy, the interruption is immediately corrected if the registrant takes the required action — renewing the registration.
It’s nothing more than another notification attempt – just a more aggressive round after all of the passive notifications failed. In the case of a registration in active use, the interruption will be recognized immediately, inspiring urgent action. Problem solved.
What does this have to do with collisions?
A Trial Delegation Implementing Controlled Interruption
There has been a lot of talk about various “trial delegations” as a technical mechanism to gather additional data regarding collisions and/or attempt to notify offending parties and provide self-help information. SAC062 touched on the technical models for trial delegations and the related issues.
Ideally, the approach should achieve these objectives:

  • Notifies systems administrators of possible improper use of the global DNS;
  • Protects these systems from malicious actors during a “cure period”;
  • Doesn’t direct potentially sensitive traffic to Registries, Registrars, or other third parties;
  • Inspires urgent remediation action; and
  • Is easy to implement and deterministic for all parties.

Like unintended expirations, collisions are largely a notification problem. The offending system administrator must be notified and take action to preserve the security and stability of their system.
One approach to consider as an alternative trial delegation concept would be an application of controlled interruption to help solve this notification problem. The approach draws on the effectiveness of the Expired Registration Recovery Policy, with the implementation looking like a modified “Application and Service Testing and Notification (Type II)” trial delegation as proposed in SAC062.
But instead of responding with pointers to application layer listeners, the authoritative nameserver would respond with an address inside 127/8 — the range reserved for localhost. This approach could be applied to A queries directly and MX queries via an intermediary A record (the vast majority of collision behavior observed in DITL data stems from A and MX queries).
Responding with an address inside 127/8 will likely break any application depending on an NXDOMAIN or some other response, but importantly it also prevents traffic from leaving the requestor’s network and blocks a malicious actor’s ability to intercede.
In the same way as the Expired Registration Recovery Policy calls for “the existing DNS resolution path specified by the RAE [to] be interrupted”, responding with localhost will hopefully inspire immediate action by the offending party while not exposing them to new malicious activity.
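As a purely illustrative sketch — assuming the third-party Python dnslib package, and standing in for what a registry would actually do in its production nameservers (for example with wildcard records) — a toy authoritative responder implementing controlled interruption might look like this:

```python
# Toy "controlled interruption" responder, assuming the third-party dnslib
# package (pip install dnslib). A registry would implement this behaviour in
# its authoritative DNS (e.g. via wildcard records) rather than a script;
# this only illustrates the responses described above.
from dnslib import RR, QTYPE, A
from dnslib.server import BaseResolver, DNSServer

FLAG_IP = "127.0.53.53"  # localhost-range "flag" address

class ControlledInterruptionResolver(BaseResolver):
    def resolve(self, request, handler):
        reply = request.reply()
        if request.q.qtype == QTYPE.A:
            # Every A query gets the flag address, so traffic stays on the
            # requestor's own host and never reaches a third party.
            reply.add_answer(RR(request.q.qname, QTYPE.A, rdata=A(FLAG_IP), ttl=300))
        # MX queries would point at an intermediary name whose A record also
        # resolves to FLAG_IP (omitted here for brevity).
        return reply

if __name__ == "__main__":
    # Unprivileged port for experimentation; real nameservers listen on 53.
    DNSServer(ControlledInterruptionResolver(), port=5353, address="0.0.0.0").start()
```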
If legacy/unintended use of a DNS name is present, one could think of controlled interruption as a “buffer” prior to use by a legitimate new registrant. This is similar to the CA Revocation Period as proposed in the New gTLD Collision Occurrence Management Plan which “buffers” the legacy use of certificates in internal namespaces from new use in the global DNS. Like the CA Revocation Period approach, a set period of controlled interruption is deterministic for all parties.
Moreover, instead of using the typical 127.0.0.1 address for localhost, we could use a “flag” IP like 127.0.53.53.
Why? While troubleshooting the problem, the administrator will likely at some point notice the strange IP address and search the Internet for assistance. Making it known that new TLDs may behave in this fashion and publicizing the “flag” IP (along with self-help materials) may help administrators isolate the problem more quickly than just using the common 127.0.0.1.
We could also suggest that systems administrators proactively search their logs for this flag IP as a possible indicator of problems.
Why the repeated 53? Preserving the 127.0/16 seems prudent to make sure the IP is treated as localhost by a wide range of systems; the repeated 53 will hopefully draw attention to the IP and provide another hint that the issue is DNS related.
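A proactive log search for the flag IP could be as simple as the sketch below; the log file path and format are whatever the administrator’s own systems produce.

```python
# Toy sketch: scan an arbitrary text log for the 127.0.53.53 "flag" address
# as a possible early indicator of a name collision. The log path and format
# are placeholders for whatever the local environment produces.
import sys

FLAG_IP = "127.0.53.53"

def scan(path):
    with open(path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if FLAG_IP in line:
                yield lineno, line.rstrip()

if __name__ == "__main__":
    for lineno, line in scan(sys.argv[1]):
        print(f"{lineno}: {line}")
```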
Two controlled interruption periods could even be used — one phase returning 127.0.53.53 for some period of time, and a second slightly more aggressive phase returning 127.0.0.1. Such an approach may cover more failure modes of a wide variety of requestors while still providing helpful hints for troubleshooting.
A period of controlled interruption could be implemented before individual registrations are activated, or for an entire TLD zone using a wildcard. In the case of the latter, this could occur simultaneously with the CA Revocation Period as described in the New gTLD Collision Occurrence Management Plan.
The ability to “schedule” the controlled interruption would further mitigate possible effects.
One concern in dealing with collisions is the reality that a potentially harmful collision may not be identified until months or years after a TLD goes live — when a particular second level string is registered.
A key advantage to applying controlled interruption to all second level strings in a given TLD in advance and at once via wildcard is that most failure modes will be identified during a scheduled time and before a registration takes place.
This has many positive features, including easier troubleshooting and the ability to execute a far less intrusive rollback if a problem does occur. From a practical perspective, avoiding a complex string-by-string approach is also valuable.
If there were to be a catastrophic impact, a rollback could be implemented relatively quickly, easily, and with low risk while the impacted parties worked on a long-term solution. Because the interruption happens before registration, no new registrant or associated dependencies would yet be adding complexity at that point.
Request for Feedback
As stated above, one of JAS’ commitments during this process was to “float” ideas and solicit feedback early in the process. Please consider these questions:

  • What unintended consequences may surface if localhost IPs are served in this fashion?
  • Will serving localhost IPs cause the kind of visibility required to inspire action?
  • What are the pros and cons of a “TLD-at-once” wildcard approach running simultaneously with the CA Revocation Period?
  • Is there a better IP (or set of IPs) to use?
  • Should the controlled interruption plan described here be included as part of the mitigation plan? Why or why not?
  • To what extent would this methodology effectively address the perceived problem?
  • Other feedback?

We anxiously await your feedback — in comments to this blog, on the DNS-OARC Collisions list, or directly. Thank you and Happy New Year!

DNS Namespace Collisions: Detection and Response [Guest Post]

Jeff Schmidt, November 28, 2013, Domain Tech

Those tracking the namespace collision issue in Buenos Aires heard a lot regarding the potential response scenarios and capabilities. Because this is an important, deep, and potentially controversial topic, we wanted to get some ideas out early on potential solutions to start the conversation.
Since risk can almost never be driven to zero, a comprehensive approach to risk management contains some level of a priori risk mitigation combined with investment in detection and response capabilities.
In my city of Chicago, we tend to be particularly sensitive about fires. In Chicago, like in most cities, we have a priori protection in the form of building codes, detection in the form of smoke/fire alarms, and response in the form of 9-1-1, sprinklers, and the very capable Chicago Fire Department.
Let’s think a little about what the detection and response capabilities might look like for DNS namespace collisions.
Detection: How do we know there is a problem?
Rapid detection and diagnosis of problems helps to both reduce damage and reduce the time to recovery. Physical security practitioners invest considerably in detection, typically in the form of guards and sensors.
Most meteorological events are detected (with some advance warning) through the use of radars and predictive modeling. Information security practitioners are notoriously light with respect to systematic detection, but we’re getting better!
If there are problematic DNS namespace collisions, the initial symptoms will almost certainly appear through various IT support mechanisms, namely corporate IT departments and the support channels offered by hardware/software/service vendors and Internet Service Providers.
When presented with a new and non-obvious problem, professional and non-professional IT practitioners alike will turn to Internet search engines for answers. This suggests that a good detection/response investment would be to “seed” support vendors/fora with information/documentation about this issue in advance and in a way that will surface when IT folks begin troubleshooting.
We collectively refer to such documentation as “self-help” information. ICANN has already begun developing documentation designed to assist IT support professionals with namespace-related issues.
In the same way that radar gives us some idea where a meteorological storm might hit, we can make reasonable predictions about where issues related to DNS namespace collisions are most likely to first appear.
While problems could appear anywhere, we believe it is most likely that scenarios involving remote (“road warrior”) use cases, branch offices/locations, and Virtual Private Networks are the best places to focus advance preparation.
This educated guess is based on the observation that DNS configurations in these use cases are often brittle due to complexities associated with dynamic and/or location-dependent parameters. Issues may also appear in Small and Medium-sized Enterprises (SMEs) with limited IT sophistication.
This suggests that proactively reaching out to vendors and support mechanisms with a footprint in those areas would also be a wise investment.
Response: Options, Roles, and Responsibilities
In the vast majority of expected cases, the IT professional “detectors” will also be the “responders” and the issue will be resolved without involving other parties. However, let’s consider the situations where other parties may be expected to have a role in response.
For the sake of this discussion, let’s assume that an Internet user is experiencing a problem related to a DNS namespace collision. I use the term “Internet user” broadly as any “consumer” of the global Internet DNS.
At this point in the thought experiment, let’s disregard the severity of the problem. The affected party (or parties) will likely exercise the full range of typical IT support options available to them – vendors, professional support, IT savvy friends and family, and Internet search.
If any of these support vectors are aware of ICANN, they may choose to contact ICANN at any point. Let’s further assume the affected party is unable and/or unwilling to correct the technical problem themselves and ICANN is contacted – directly or indirectly.
There is a critical fork in the road here: Is the expectation that ICANN provide technical “self-help” information or that ICANN will go further and “do something” to technically remedy the issue for the user? The scope of both paths needs substantial consideration.
For the rest of this blog, I want to focus on the various “do something” options. I see a few options; they aren’t mutually exclusive (one could imagine an escalation through these and potentially other options). The options are enumerated for discussion only and order is not meaningful.

  • Option 1: ICANN provides technical support above and beyond “self-help” information to the impacted parties directly, including the provision of services/experts. Stated differently, ICANN becomes an extension of the impacted party’s IT support structure and provides customized/specific troubleshooting and assistance.
  • Option 2: The Registry provides technical support above and beyond “self-help” information to the impacted parties directly, including the provision of services/experts. Stated differently, the Registry becomes an extension of the impacted party’s IT support structure and provides customized/specific troubleshooting and assistance.
  • Option 3: ICANN forwards the issue to the Registry with a specific request to remedy. In this option, assuming all attempts to provide “self-help” are not successful, ICANN would request that the Registry make changes to their zone to technically remedy the issue. This could include temporary or permanent removal of second level names and/or other technical measures that constitute a “registry-level rollback” to a “last known good” configuration.
  • Option 4: ICANN initiates a “root-level rollback” procedure to revert the state of the root zone to a “last known good” configuration, thus (presumably) de-delegating the impacted TLD. In this case, ICANN would attempt – on an emergency basis – to revert the root zone to a state that is not causing harm to the impacted party/parties. Root-level rollback is an impactful and potentially controversial topic and will be the subject of a follow-up blog.

One could imagine all sorts of variations on these options, but I think these are the basic high-level degrees of freedom. We note that ICANN’s New gTLD Collision Occurrence Management Plan and SAC062 contemplate some of these options in a broad sense.
Some key considerations:

  • In the broader sense, what are the appropriate roles and responsibilities for all parties?
  • What are the likely sources to receive complaints when a collision has a deleterious effect?
  • What might the Service Level Agreements look like in the above options? How are they monitored and enforced?
  • How do we avoid the “cure is worse than the disease” problem – limiting the harm without increasing risk of creating new harms and unintended consequences?
  • How do we craft the triggering criteria for each of the above options?
  • How are the “last known good” configurations determined quickly, deterministically, and with low risk?
  • Do we give equal consideration to actors that are following the technical standards vs. those depending on technical happenstance for proper functionality?
  • Are there other options we’re missing?

On Severity of the Harm
Obviously, the severity of the harm can’t be ignored. Short of situations where there is a clear and present danger of bodily harm, severity will almost certainly be measured economically and from multiple points of view. Any party expected to “do something” will be forced to choose between two or more economically motivated actors: users, Registrants, Registrars, and/or Registries experiencing harm.
We must also consider that just as there may be users negatively impacted by new DNS behavior, there may also be users that are depending on the new DNS behavior. A fair and deterministic way to factor severity into the response equation is needed, and the mechanism must be compatible with emergency invocation and the need for rapid action.
Request for Feedback
There is a lot here, which is why we’ve published this early in the process. We eagerly await your ideas, feedback, pushback, corrections, and augmentations.
This is a guest post written by Jeff Schmidt, CEO of JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.

These are the top 50 name collisions

Kevin Murphy, November 19, 2013, Domain Tech

Having spent the last 36 hours crunching ICANN’s lists of almost 10 million new gTLD name collisions, we’ve brought the DI PRO collisions database back online, and we can start reporting some interesting facts.
First, while we reported yesterday that 1,318 new gTLD applicants will be asked to block a total of 9.8 million unique domain names, the number of distinct second-level strings involved is somewhat smaller.
It’s 6,806,050, according to our calculations, still a bewilderingly high number.
The most commonly blocked string, as expected, is “www”. It’s on the block-lists for 1,195 gTLDs, over 90% of the total.
Second is “2010”. I currently have no explanation for this, but I’m wondering if it’s an artifact of the years of Day In The Life data upon which ICANN based its lists.
Protocol-related strings such as “wpad” and “isatap” also rank highly, as do strings matching popular TLDs such as “com”, “org”, “uk” and “de”. Single-character strings are also very popular.
The brand with the most blocks (free trademark protection?) is unsurprisingly Google.
The string “google” appears as an exact match on 930 gTLDs’ lists. It appears as a substring of 1,235 additional blocked strings, such as “google-toolbar” and “googlemaps”.
Facebook, Yahoo, Gmail, YouTube and Hotmail also feature in the top 100 blocked brands.
DI PRO subscribers can search for strings that interest them, discovering how many and which gTLDs they’re blocked in, using the database.
Here’s a table of the top 50 blocked strings.
[Table: the top 50 blocked strings]

Demystifying DITL Data [Guest Post]

Kevin White, November 16, 2013, Domain Tech

With all the talk recently about DNS Namespace Collisions, the heretofore relatively obscure Day In The Life (“DITL”) datasets maintained by the DNS-OARC have been getting a lot of attention.
While these datasets are well known to researchers, I’d like to take the opportunity to provide some background and talk a little about how these datasets are being used to research the DNS Namespace Collision issue.
The Domain Name System Operations Analysis and Research Center (“DNS-OARC”) began working with the root server operators to collect data in 2006. The effort was coined “Day In The Life of the Internet (DITL).”
Root server participation in the DITL collection is voluntary and the number of contributing operators has steadily increased; in 2010, all of the 13 root server letters participated. DITL data collection occurs on an annual basis and covers approximately 50 contiguous hours.
DNS-OARC’s DITL datasets are attractive for researching the DNS Namespace Collision issue because:

  • DITL contains data from multiple root operators;
  • The robust annual sampling methodology (with samples dating back to 2006) allows trending; and
  • It’s available to all DNS-OARC Members.

More information on the DITL collection is available on DNS-OARC’s site at https://www.dns-oarc.net/oarc/data/ditl.
Terabytes and terabytes of data
The data consists of the raw network “packets” destined for each root server. Contained within the network packets are the DNS queries. The raw data consists of many terabytes of compressed network capture files and processing the raw data is very time-consuming and resource-intensive.
While several researchers have looked at DITL datasets over the years, the current collisions-oriented research started with Roy Hooper of Demand Media. Roy created a process to iterate through this data and convert it into intermediate forms that are much more usable for researching the proposed new TLDs.
We started with his process and continued working with it; our code is available on GitHub for others to review.
Finding needles in DITL haystacks
The first problem faced by researchers interested in new TLDs is isolating the relatively few queries of interest among many terabytes of traffic that are not of interest.
Each root operator contributes several hundred – or several thousand – files full of captured packets in time-sequential order. These packets contain every DNS query reaching the root that requests information about DNS names falling within delegated and undelegated TLDs.
The first step is to search these packets for DNS queries involving the TLDs of interest. The result is one file per TLD containing all queries from all roots involving that TLD. If the input packet is considered a “horizontal” slice of root DNS traffic, then this intermediary work product is a “vertical” slice per TLD.
These intermediary files are much more manageable, ranging from just a few records to 3 GB. To support additional investigation and debugging, the intermediary files that JAS produces are fully “traceable” such that a record in the intermediary file can be traced back to the source raw network packet.
The DITL data contain quite a bit of noise, primarily DNS traffic that was not actually destined for the root. Our process filters the data by destination IP address so that the only remaining data is that which was originally destined for the root name servers.
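As a simplified illustration of those two filters — a sketch assuming the third-party scapy package, not the actual pipeline linked above, with a placeholder root address and TLD list — the extraction step might look like:

```python
# Simplified sketch of the per-TLD "vertical slice" extraction, assuming the
# third-party scapy package (pip install scapy). It applies the two filters
# described above: keep only packets actually destined for the root server,
# and only queries whose rightmost label is a TLD of interest.
from scapy.all import PcapReader, IP, DNS, DNSQR

ROOT_SERVER_IPS = {"198.41.0.4"}            # placeholder: one root address
TLDS_OF_INTEREST = {"home", "corp", "cba"}  # placeholder applied-for strings

def extract_queries(pcap_path):
    for pkt in PcapReader(pcap_path):
        if IP not in pkt or not pkt.haslayer(DNS) or not pkt[DNS].qd:
            continue
        if pkt[IP].dst not in ROOT_SERVER_IPS:
            continue  # drop noise that was never destined for the root
        qname = pkt[DNSQR].qname.decode(errors="replace").rstrip(".")
        tld = qname.rsplit(".", 1)[-1].lower()
        if tld in TLDS_OF_INTEREST:
            yield tld, qname

if __name__ == "__main__":
    import sys
    for tld, qname in extract_queries(sys.argv[1]):
        print(tld, qname)
```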
JAS has made these intermediary per-TLD files available to DNS-OARC members for further analysis.
Then what?
The intermediary files are comparatively small and easy to parse, opening the door to more elaborate research. For example, JAS has written various “second passes” that classify queries, separate queries that use valid syntax at the second level from those that don’t, detect “randomness,” fit regular expressions to the queries, and more.
We have also checked to confirm that second level queries that look like Punycode IDNs (start with “xn--”) are valid Punycode. It is interesting to note the tremendous volume of erroneous, technically invalid, and/or nonsensical DNS queries that make it to the root.
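For the curious, a minimal sketch of that sanity check using only Python’s built-in “punycode” codec might look like this (full IDNA processing has additional rules this ignores):

```python
# Minimal sketch of the "is this xn-- label actually valid Punycode?" check,
# using only the standard library's 'punycode' codec. Full IDNA validation
# applies further rules; this only tests that the label decodes at all.
def is_valid_punycode_label(label: str) -> bool:
    if not label.lower().startswith("xn--"):
        return False
    try:
        decoded = label[4:].encode("ascii").decode("punycode")
    except UnicodeError:
        return False
    return len(decoded) > 0

# Examples:
# is_valid_punycode_label("xn--bcher-kva")  ->  True   (decodes to "bücher")
# is_valid_punycode_label("xn--zzzzzz!!")   ->  False
```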
Also of interest is that the datasets are dominated by query strings that appear random and/or machine-generated.
Google’s Chrome browser generates three random 10-character queries upon startup in an effort to detect network properties. Those “Chrome 10” queries together with a relatively small number of other common patterns comprise a significant proportion of the entire dataset.
Research is being done in order to better understand the source of these machine-generated queries.
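As an example of that kind of classification, a crude heuristic for flagging likely Chrome probes — a sketch, not the classifier JAS actually runs — could be as simple as:

```python
# Crude heuristic sketch (not JAS's actual classifier): flag queries whose
# leftmost label looks like one of Chrome's ten-character random startup
# probes, whether it arrives bare or with a search suffix appended.
def looks_like_chrome_probe(qname: str) -> bool:
    first = qname.rstrip(".").split(".")[0]
    return len(first) == 10 and first.isascii() and first.isalpha() and first.islower()

# Examples:
# looks_like_chrome_probe("qwmlcnshfd")        ->  True
# looks_like_chrome_probe("qwmlcnshfd.corp.")  ->  True
# looks_like_chrome_probe("www.example.home")  ->  False
```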
More technical details and information on running the process is available on the DNS-OARC web site.

This is a guest post written by Kevin White, VP Technology, JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.

Verisign targets bank claims in name collisions fight

Kevin Murphy, September 15, 2013, Domain Tech

Verisign has rubbished the Commonwealth Bank of Australia’s claim that its dot-brand gTLD, .cba, is safe.
In a lengthy letter to ICANN today, Verisign senior vice president Pat Kane said that, contrary to CBA’s claims, the bank is only responsible for about 6% of the traffic .cba sees at the root.
It’s the latest volley in the ongoing fight about the security risks of name collisions — the scenario where an applied-for gTLD string is already in broad use on internal networks.
CBA’s application for .cba has been categorized as “uncalculated risk” by ICANN, meaning it faces more reviews and three to six months of delay while its risk profile is assessed.
But in a letter to ICANN last month, CBA said “the cause of the name collision is primarily from CBA internal systems” and “it is within the CBA realm of control to detect and remediate said systems”.
The bank was basically claiming that its own computers use DNS requests for .cba already, and that leakage of those requests onto the internet was responsible for its relatively high risk profile.
At the time we doubted that CBA had access to the data needed to draw this conclusion and Verisign said today that a new study of its own “shows without a doubt that CBA’s initial conclusions are incorrect”.
Since the publication of Interisle Consulting’s independent review into root server error traffic — which led to all applied-for strings being split into risk categories — Verisign has evidently been carrying out its own study.
While Interisle used data collected from almost all of the DNS root servers, Verisign’s seven-week study only looked at data gathered from the A-root and J-root, which it manages.
According to Verisign, .cba gets roughly 10,000 root server queries per day — 504,000 in total over the study window — and hardly any of them come from the bank itself.
Most appear to be from residential apartment complexes in Chiba, Japan, where network admins seem to have borrowed the local airport code — also CBA — to address local devices.
About 80% of the requests seen come from devices using DNS Service Discovery services such as Bonjour, Verisign said.
Bonjour is an Apple-created technology that allows computers to use DNS to automatically discover other LAN-connected devices such as printers and cameras, making home networking a bit simpler.
Another source of the .cba traffic is McAfee’s antivirus software, made by Intel, which Verisign said uses DNS to check whether code is virus-free before executing it.
While error traffic for .cba was seen from 170 countries, Verisign said that Japan — notable for not being Australia — was the biggest source, with almost 400,000 queries (79% of the total). It said:

Our measurement study reveals evidence of a substantial Internet-connected infrastructure in Japan that lies beneath the surface of the public-facing internet, which appears to rely on the non-resolution of the string .CBA.
This infrastructure appear hierarchical and seems to include municipal and private administrative and service networks associated with electronic resource management for office and residential building facilities, as well as consumer devices.

One apartment block in Chiba is responsible for almost 5% of the daily .cba queries — about 500 per day on average — according to Verisign’s letter, though there were 63 notable sources in total.
ICANN’s proposal for reducing the risk of these name collisions causing problems would require CBA, as the registry, to hunt down and warn organizations of .cba’s impending delegation.
Verisign reiterates the point made by RIPE NCC last month: this would be quite difficult to carry out.
But it does seem that Verisign has done a pretty good job tracking down the organizations that would be affected by .cba being delegated.
The question that Verisign’s letter and presentation do not address is: what would happen to these networks if .cba was delegated?
If .cba is delegated, what will McAfee’s antivirus software do? Will it crash the user’s computer? Will it allow unsafe code to run? Will it cause false positives, blocking users from legitimate content?
Or will it simply fail gracefully, causing no security problems whatsoever?
Likewise, what happens when Bonjour expects .cba to not exist and it suddenly does? Do Apple computers start leaking data about the devices on their local network to unintended third parties?
Or does it, again, cause no security problems whatsoever?
Without satisfactory answers to those questions, maybe name collisions could be introduced by ICANN with little to no effect, meaning the “risk” isn’t really a risk at all.
Answering those questions will of course take time, which means delay, which is not something most applicants want to hear right now.
Verisign’s study targeted CBA because CBA singled itself out by claiming to be responsible for the .cba error traffic, not because CBA is a client of rival registry Afilias.
The bank can probably thank Verisign for its study, which may turn out to be quite handy.
Still, it would be interesting to see Verisign conduct a similar study on, say, .windows (Microsoft), .cloud (Symantec) or .bank (Financial Services Roundtable), which are among the 35 gTLDs with “uncalculated” risk profiles that Verisign promised to provide back-end registry services for before it decided that new gTLDs were dangerous.
You can read Verisign’s letter and presentation here. I’ve rotated the PDF to make the presentation more readable here.

Public Suffix List to get monthly new gTLD updates

Kevin Murphy, September 3, 2013, Domain Registries

New gTLDs are set to be added to the widely used Public Suffix List within a month of signing an ICANN registry agreement, according to PSL volunteer Jothan Frakes.
This is pretty good news for new gTLD registries.
The PSL, maintained by volunteers under the Mozilla banner, is used in browsers including Firefox and Chrome, and will be a vital part of making sure new gTLDs “work” out of the box.
If a TLD doesn’t have an entry on the PSL, browsers tend to handle them badly.
For example, after .sx launched last year, Google’s Chrome browser returned search results instead of the intended web site when .sx domain names were typed into the address/search bar.
It also provides a critical security function, telling browsers at which level they should allow domains to set cookies.
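To illustrate how software typically consumes the list, here is a hedged sketch using the third-party tldextract library, one of many PSL consumers (the hostnames are made up):

```python
# Hedged sketch using the third-party tldextract library (pip install
# tldextract), one of many consumers of the Public Suffix List; the
# hostnames below are invented. Until a TLD appears on the PSL, its suffix
# is not recognised and the registrable domain cannot be split out.
import tldextract

for host in ("www.example.co.uk", "blog.example.sx", "shop.example.unlistedtld"):
    ext = tldextract.extract(host)
    print(host, "->", ext.subdomain, "|", ext.domain, "|", ext.suffix)
```

With an up-to-date list the first two split cleanly into subdomain, registrable domain and suffix; a string missing from the list comes back with an empty suffix, which is roughly the failure mode that makes browsers mishandle unlisted TLDs.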
According to Frakes, who has been working behind the scenes with other PSL volunteers and ICANN staff to get this process working, new gTLDs will usually hit the PSL within 30 days of an ICANN contract.
Due to the mandatory pre-delegation testing period, new gTLDs should be on the PSL before or at roughly the same time as they are delegated, with plenty of time to spare before they launch.
The process of being added to the PSL should be fairly quick for TLDs that intend to run flat second-level spaces, according to Frakes, but may be more complex if they plan to do something less standard, such as selling third-level domains, for example.
Browser makers may take some time to update their own lists with the PSL updates. Google, with its own huge portfolio of applications, will presumably be incentivized to stay on the ball.

Name collisions comments call for more gTLD delay

Kevin Murphy, August 29, 2013, Domain Registries

The first tranche of responses to Interisle Consulting’s study into the security risks of new gTLDs, and ICANN’s proposal to delay a few hundred strings pending more study, is in.
Comments filed with ICANN before the public comment deadline yesterday fall basically into two camps:

  • Non-applicants (mostly) urging ICANN to proceed with extreme caution. Many are asking for more time to study their own networks so they can get a better handle on their own risk profiles.
  • Applicants shooting holes in Interisle’s study and ICANN’s remediation plan. They want ICANN to reclassify everything except .home and .corp as low risk, removing delays to delegation and go-live.

They were responding to ICANN’s decision to delay 521 “uncalculated risk” new gTLD applications by three to six months while further research into the risk of name collisions — where a new gTLD could conflict with a TLD already used by internet users in a non-standard way — is carried out.
Proceed with caution
Many commenters stated that more time is needed to analyse the risks posed by name collisions, noting that Interisle studied primarily the volume of queries for non-existent domains, rather than looking deeply into the consequences of delegating colliding gTLDs.
That was a point raised by applicants too, but while applicants conclude that this lack of data should lead ICANN to lift the current delays, others believe that it means more delays are needed.
Two ICANN constituencies seem to generally agree with the findings of the Interisle report.
The Internet Service Providers and Connectivity Providers constituency asked for the public comment period to be put on hold until further research is carried out, or for at least 60 days. It noted:

corporations, ISPs and connectivity providers may bear the brunt of the security and customer-experience issues resulting from adverse (as yet un-analyzed) impacts from name collision

these issues, due to their security and customer-experience aspects, fall outside the remit of people who normally participate in the ICANN process, requiring extensive wide-ranging briefings even in corporations that do participate actively in the ICANN process

The At-Large Advisory Committee concurred that the Interisle study does not currently provide enough information to fully gauge the risk of name collisions causing harm.
ALAC said it was “in general concurrence with the proposed risk mitigation actions for the three defined risk categories” anyway, adding:

ICANN must assure that such residual risk is not transferred to third parties such as current registry operators, new gTLD applicants, registrants, consumers and individual end users. In particular, the direct and indirect costs associated with proposed mitigation actions should not have to be borne by registrants, consumers and individual end users. The Board must err on the side of caution

Several individual stakeholders agreed with the ISPCP that they need more time to look at their own networks. The Association of National Advertisers said:

Our member companies are working diligently to determine if DNS Clash issues are present within their respective networks. However the ANA had to communicate these issues to hundreds of companies, after which these companies must generate new data to determine the potential service failures on their respective networks.

The ANA wants the public comment period extended until November 22 to give its members more time to gather data.
While the ANA can always be relied upon to ask for new gTLDs to be delayed, its request was echoed by others.
General Electric called for three types of additional research:

  • Additional studies of traffic beyond the initial DITL sample.
  • Information and analysis of “use cases” — particular types of queries and traffic — and the consequences of the failure of particular use cases to resolve as intended (particular use cases could have severe consequences even if they might occur infrequently — like hurricanes), and
  • Studies of the time and costs of mitigation.

GE said more time is needed for companies such as itself to conduct impact analyses on their own internal networks and asked ICANN to not delegate any gTLD until the risk is “fully understood”.
Verizon, Heinz and the American Insurers Association have asked for comment deadline extensions for the same reasons.
The Association for Competitive Technology (which has Verisign as a member) said:

ICANN should slow or temporarily suspend the process of delegating TLDs at risk of causing problems due to their frequency of appearance in queries to the root. While we appreciate the designation of .home and .corp as high risk, there are many other TLDs which will also have a significant destructive effect.

Numerically, there were far more comments criticizing ICANN’s mitigation proposal. All were filed by new gTLD applicants, whose interests are aligned, however.
Most of these comments, which are far more focused on the details and the data, target perceived deficiencies in Interisle’s report and ICANN’s response to it.
Several very good arguments are made.
The Svalbard problem
First, there is criticism of the cut-off point between “low risk” and “uncalculated risk” strings, which some applicants say is “arbitrary”.
That’s mostly true.
ICANN basically took the list of applied-for strings, ordered by the frequency Interisle found they generate NXDOMAIN responses at the root, and drew a line across it at the 49,842 queries mark.
That’s because 49,842 queries is what .sj, the least-frequently-queried real TLD, received over the same period.
If your string, despite not yet existing as a gTLD, already gets more traffic than .sj, it’s classed as “uncalculated risk” and faces more delays, according to ICANN’s plan.
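Reduced to code, the rule ICANN applied amounts to a simple threshold comparison. The sketch below is based only on what the published documents describe, with .home and .corp special-cased as high risk; the example counts are the rounded figures Directi quotes below.

```python
# Sketch of ICANN's published classification rule as described above:
# .home and .corp are "high risk"; any other applied-for string that drew
# more root queries than .sj (49,842 over the study window) is
# "uncalculated risk"; everything else is "low risk".
SJ_THRESHOLD = 49_842  # queries received by .sj over the same period

def classify(string: str, query_count: int) -> str:
    if string in ("home", "corp"):
        return "high risk"
    if query_count > SJ_THRESHOLD:
        return "uncalculated risk"
    return "low risk"

# Using the rounded counts Directi cites below:
# classify("bio", 50_000)          -> "uncalculated risk"
# classify("engineering", 49_000)  -> "low risk"
```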
As Directi said in its comments:

The result of this arbitrary selection is that .bio (Rank 281) with 50,000 queries (rounded to the nearest thousand) is part of the “uncategorized risk” list, and is delayed by 3 to 6 months, whereas .engineering (Rank 282) with 49,000 queries (rounded to the nearest thousand) is part of the “low risk” list, and can proceed without any significant delays.

What neither ICANN nor Interisle explained is why this is an appropriate place to draw a line in the sand.
This graphic from DotCLUB Domains illustrates the scale of the problem nicely:
[Graphic not reproduced]
.sj is the ccTLD for Svalbard, a Norwegian territory in the Arctic Circle with fewer than 3,000 inhabitants. The TLD is administered by .no registry Norid, but it’s not possible to register domains there.
Does having more traffic than .sj mean a gTLD is automatically more risky? Does having less mean a gTLD is safe? The ICANN proposal assumes “yes” to both questions, but it doesn’t explain why.
Many applicants say that having more traffic than existing gTLDs does not automatically mean your gTLD poses a risk.
They pointed to Verisign data from 2006, which shows that gTLDs such as .xxx and .asia were already receiving large amounts of traffic prior to their delegation. When they were delegated, the sky did not fall. Indeed, there were no reports of significant security and stability problems.
The New gTLD Applicants Group said:

In fact, the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.
These successful delegations alone demonstrate that there is no need to delay any more than the two most risky strings.

Donuts said:

There is no factual basis in the study recommending halting delegation process of 20% of applied-for strings. As the paper itself says, “The Study did not find enough information to properly classify these strings given the short timeline.” Without evidence of actual harm, the TLDs should proceed to delegation. Such was the case with other TLDs such as .XXX and .ASIA, which were delegated without delay and with no problems post-delegation.

Applicants also believe that the release in June 2012 of the list of all 1,930 applied-for strings may have skewed the data set that Interisle used in its study.
Uniregistry, for example, said:

The sole fact that queries are being received at the root level does not itself present a security risk, especially after the release to the public of the list of applied-for strings.

The argument seems to be that a lot of the NXDOMAIN traffic seen in 2013 is due to people and software querying applied-for TLDs to see if they’re live yet.
It’s quite a speculative argument, but it’s somewhat supported by the fact that many applied-for strings received more queries in 2013 than they did in the equivalent 2012 sampling.
Second-level domains
Some applicants pointed out that there may not be a correlation between the volume of traffic a string receives and the number of second-level domains being queried.
A string might get a bazillion queries for a single second-level domain name. If that domain name is reserved by the registry, the risk of a name collision might be completely eliminated.
The Interisle report did show that the number of SLDs and the volume of traffic do not correlate.
For example, .hsbc is ranked 14th in terms of traffic volume but saw requests for just 2,000 domains, whereas .inc, which ranked 15th, saw requests for 73,000 domains.
Unfortunately, the Interisle report only published the SLD numbers for the top 35 strings by query volume, leaving most applicants none the wiser about the possible impact of their own strings.
And ICANN did not factor the number of SLDs into its decision about where to draw the line between “low” and “uncalculated” risk.
Conspiracy theories
Some applicants questioned whether the Interisle data itself was reliable, but I find these arguments poorly supported and largely speculative.
They propose that someone (meaning presumably Verisign, which stands to lose market share when new gTLDs go live, and which kicked off the name collisions debate in the first place) could have gamed the study by generating spurious requests for applied-for gTLDs during the period Interisle’s data was being captured.
Some applicants put forth this view, while others limited their comments to a request that future studies rely only on data collected before now, to avoid tampering at the point of collection in future.
NTAG said:

Query counts are very easily gamed by any Internet connected system, allowing for malicious actors to create the appearance of risk for any string that they may object to in the future. It would be very easy to create the impression of a widespread string collision problem with a home Internet connection and the abuse of the thousands of available open resolvers.

While this kind of mischief is a hypothetical possibility, nobody has supplied any evidence that Interisle’s data was manipulated by anyone.
Some people have privately pointed DI to the fact that Verisign made a substantial donation to the DNS-OARC — the group that collected the data that Interisle used in its study — in July.
The implication is that Verisign was somehow able to manipulate the data after it was captured by DNS-OARC.
I don’t buy this either. We’re talking about a highly complex 8TB data set that took Interisle’s computers a week to process on each pass. The data, under the OARC’s deal with the root server operators, is not allowed to leave its premises. It would not be easily manipulated.
Additionally, DNS-OARC is managed by Internet Systems Consortium — which runs the F-root and is Uniregistry’s back-end registry provider — from its own premises in California.
In short, in the absence of any evidence supporting this conspiracy theory, I find the idea that the Interisle data was hacked after it was collected highly improbable.
What next?
I’ve presented only a summary of some key points here. The full list of comments can be found here. The reply period for comments closes September 17.
Several ICANN constituencies that can usually be relied upon to comment on everything (registrars, intellectual property, business and non-commercial) have not yet commented.
Will ICANN extend the deadline? I suppose it depends on how cautious it wants to be, whether it believes the companies requesting the extension really are conducting their own internal collision studies, and how useful it thinks those studies will be.