
ICANN must do more to fight internet security threats [Guest Post]

ICANN and its contracted parties need to do more to tackle security threats, write Dave Piscitello and Lyman Chapin of Interisle Consulting.

The ICANN Registry and Registrar constituencies insist that ICANN’s role with respect to DNS abuse is limited by its Mission “to ensure the stable and secure operation of the internet’s unique identifier systems”. On this reading, ICANN’s remit covers only abuse of the identifier systems themselves, and specifically excludes any harms that arise from the content to which the identifiers point.

In their view, if the harm arises not from the identifier, but from the thing identified, it is outside of ICANN’s remit.

This convenient formulation relieves ICANN and its constituencies of responsibility for the way in which identifiers are used to inflict harm on internet users. However convenient it may be, it is fundamentally wrong.

ICANN’s obligation to operate “for the benefit of the Internet community as a whole” (see Bylaws, “Commitments”) demands that its remit extend broadly to how a domain name (or other internet identifier) is misused to point or lure a user or application to harmful content, or to host that content.

Harmful content itself is not ICANN’s concern; the way in which internet identifiers are used to weaponize harmful content most certainly is.

Rather than confront these obligations, however, ICANN is conducting a distracting debate about the kinds of events that should be described as “DNS abuse”. This is tedious and pointless; the persistent overloading of the term “abuse” has rendered it meaningless, ensuring that any attempt to reach consensus on a definition will fail.

ICANN should stop using the term “DNS abuse” and instead use the term “security threat”.

The ICANN Domain Abuse Activity Reporting (DAAR) project and the Governmental Advisory Committee (GAC) both use this term, which is also a term of reference for new gTLD program obligations (Spec 11) and related reporting activities. It is also widely used in the operational and cybersecurity communities.

Most importantly, the GAC and the DAAR project currently identify and seek to measure an initial set of security threats — a subset of a larger set of threats recognized as criminal acts in the jurisdictions where the majority of domain names are registered.

ICANN should acknowledge the GAC’s reassertion in its Hyderabad Communiqué that the set of security threats identified in its Beijing correspondence to the ICANN Board was not an exhaustive list but merely a set of examples. The GAC smartly recognized that the threat landscape is constantly evolving.

ICANN should not attempt to artificially narrow the scope of the term “security threat” by crafting its own definition.

It should instead make use of an existing, internationally recognized criminal justice treaty: the Council of Europe’s Convention on Cybercrime, which ICANN could use as a reference for identifying the security threats that the treaty recognizes as criminal acts.

The Convention is recognized by countries in which a sufficiently large percentage of domain names are registered that it can serve the community and internet users more effectively and fairly than any definition that ICANN might concoct.

ICANN should also acknowledge that the set of threats within its remit must include all security events (“realized security threats”) in which a domain name is used in the execution of an attack: for deception, for infringement of copyrights, for attacks that threaten democracies, or for the operation of criminal infrastructures built to launch attacks or facilitate criminal (often felony) acts.

What is that remit?

ICANN policy and contracts must ensure that contracted parties (registrars and registries) collaborate with public and private sector authorities to disrupt or mitigate:

  • illegal interception or computer-related forgery,
  • attacks against computer systems or devices,
  • illegal access, data interference, or system interference,
  • infringement of intellectual property and related rights,
  • violation of laws intended to ensure fair and free elections, or acts that undermine democracies, and
  • child abuse and human trafficking.

We note that the Convention on Cybercrime identifies, or provides Guidance Notes for, the most prevalent attacks and criminal acts:

  • Spam,
  • Fraud, including the forms that use domain names in criminal messaging: business email compromise, advance fee fraud, phishing and other identity theft,
  • Botnet operation,
  • DDoS attacks, in particular redirection and amplification attacks that exploit the DNS,
  • Identity theft and phishing in relation to fraud,
  • Attacks against critical infrastructures,
  • Malware,
  • Terrorism, and
  • Election interference.

In all these cases, the misuse of internet identifiers to pursue the attack or criminal activity is squarely within ICANN’s remit.

Registries or registrars should be contractually obliged to take actions that are necessary to mitigate these misuses, including suspension of name resolution, termination of domain name registrations, “unfiltered and unmasked” reporting of security threat activity for both registries and registrars, and publication or disclosure of information that is relevant to mitigating misuses or disrupting cyberattacks.

No one is asking ICANN to be the Internet Police.

The “ask” is to create policy and contractual obligations to ensure that registries and registrars collaborate in a timely and uniform manner. Simply put, the “ask” is to oblige all of the parties to play on the same team and to adhere to the same rules.

This is unachievable in the current self-regulating environment, in which a relatively small number of outlier registries and registrars are the persistent loci of extraordinary percentages of domain names associated with cyberattacks or cybercrimes, and in which the current contracts offer no provisions to suspend or terminate their operations.

This is a guest editorial written by Dave Piscitello and Lyman Chapin, of security consultancy Interisle Consulting Group. Interisle has been an occasional ICANN security contractor, and Piscitello until last year was employed as vice president of security and ICT coordination on ICANN staff. The views expressed in this piece do not necessarily reflect DI’s own.

New gTLD applicants get a way to avoid name collision delay

Kevin Murphy, October 9, 2013, Domain Tech

ICANN has given blessed relief to many new gTLD applicants by wiping potentially months off their path to delegation.

Its New gTLD Program Committee this week adopted a new “New gTLD Collision Occurrence Management Plan” which aims to tackle the problem of clashes between new gTLDs and names used on private networks.

The good news is that the previous categorization of strings according to risk, which would have delayed “uncalculated risk” gTLDs by months pending further study, has been scrapped.

The two “high risk” strings — .home and .corp — don’t catch a break, however. ICANN says it will continue to refuse to delegate them “indefinitely”.

For everyone else, ICANN said it will conduct additional studies into the risk of name collisions, above and beyond what Interisle Consulting already produced.

The study will take into account not only the frequency with which new gTLDs currently generate NXDOMAIN traffic at the DNS root, but also the number of second-level domains queried, the diversity of requesting sources, and other factors.
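
For the curious, those measurements boil down to something like this — a minimal sketch, assuming root query logs already parsed into (name, source) pairs; the field layout is an assumption, since DITL data isn’t actually distributed this way:

```python
# Rough sketch of the collision metrics described above: raw query volume,
# breadth of second-level labels, and diversity of query sources.
def collision_metrics(log, gtld):
    total_queries = 0
    slds = set()
    sources = set()
    for qname, source_ip in log:
        labels = qname.rstrip(".").lower().split(".")
        if labels[-1] != gtld:
            continue                   # query is for some other TLD
        total_queries += 1             # volume of would-be NXDOMAIN traffic
        if len(labels) >= 2:
            slds.add(labels[-2])       # breadth of second-level names queried
        sources.add(source_ip)         # diversity of requesting sources
    return total_queries, len(slds), len(sources)

# collision_metrics([("mail.corp.", "192.0.2.1")], "corp") -> (1, 1, 1)
```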

Any new gTLD applicant that does not wish to wait for this study will be able to proceed to delegation without delay, but only if they block huge numbers of second-level domains at launch.

The registries will have to block every SLD that was queried in their gTLD according to the Day in the Life of the Internet (DITL) data that Interisle used in its study.

This list will vary by TLD, but in the most severe cases is likely to extend to tens of thousands of names. In many cases, it’s likely to be a few thousand names.

Fortunately, studies conducted by the likes of Donuts and Neustar indicate that many of these SLDs — maybe even the majority — are likely to be invalid strings, such as those containing an underscore or another non-DNS character, or the randomly generated 10-character strings of gibberish that Google Chrome produces.

In other words, the actual number of potentially salable domains that registries will have to block may turn out to be much lower than it appears at first glance.
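
To give a flavor of that weeding-out process, here’s a hedged sketch; the heuristics are my assumptions, not the registries’ published methodology:

```python
# Partition DITL-observed second-level labels into plausibly salable names
# versus junk. The Chrome heuristic (any ten-letter alphabetic label) is a
# deliberately crude stand-in.
import re

LDH = re.compile(r"^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$")  # letter-digit-hyphen rule

def looks_like_chrome_probe(label):
    # Chrome probes resolvers with ~10 random lowercase letters to detect
    # NXDOMAIN rewriting; real detection would need better statistics.
    return len(label) == 10 and label.isalpha()

def partition_block_list(labels):
    salable, junk = [], []
    for label in (l.lower() for l in labels):
        if not LDH.match(label) or looks_like_chrome_probe(label):
            junk.append(label)
        else:
            salable.append(label)
    return salable, junk

# partition_block_list(["printer_1", "mail", "qxzvbnmklp", "intranet"])
# -> (["mail", "intranet"], ["printer_1", "qxzvbnmklp"])
```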

Each SLD will have to be blocked in such a way that it continues to return NXDOMAIN responses, as they all do today.
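
Checking compliance with that requirement would be straightforward — a minimal sketch using the dnspython library, with placeholder names:

```python
# Post-delegation check: every blocked SLD should still answer NXDOMAIN.
import dns.resolver

def find_unblocked(tld, blocked_slds):
    leaks = []
    for sld in blocked_slds:
        try:
            dns.resolver.resolve(f"{sld}.{tld}", "A")
            leaks.append(sld)              # it resolved: not blocked properly
        except dns.resolver.NXDOMAIN:
            pass                           # still NXDOMAIN, as required
    return leaks
```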

Because the DITL data represented a 48-hour snapshot in May 2013, and may not include every potentially affected string, ICANN is also proposing to give organizations a way to:

report and request the blocking of a domain name (SLD) that causes demonstrably severe harm as a consequence of name collision occurrences.

The process will allow the deactivation (SLD removal from the TLD zone) of the name for a period of up to two (2) years in order to allow the affected party to effect changes to its network to eliminate the DNS request leakage that causes collisions, or mitigate the harmful impact.

One has to wonder if any trademark lawyers reading this will think: “Ooh, free defensive registration!” It will be interesting to see if any of them give it a cheeky shot.

I’ve got a feeling that most new gTLD applicants will want to take ICANN up on its offer. It’s not an ideal solution for them, but it does give them a way to get into the root relatively quickly.

There’s no telling what ICANN’s additional studies will find, but there’s a chance it could be negative for their string(s) — getting delegated at least mitigates the risk of never getting delegated.

The new ICANN proposal may in some cases interfere with their plans to market and use their TLDs, however.

Take a dot-brand such as .cisco, which the networking company has applied for. Its block list is likely to have about 100,000 strings on it, increasing the chances that useful, brandable SLDs are going to be taken out of circulation for a while.

ICANN is also proposing to conduct an awareness-raising campaign, using the media, to let network operators know about the risks that new gTLDs may present to their networks.

Depending on how effective this is, new registries may be able to forget about getting positive column inches for their launch — if a journalist is handed a negative angle for a story on a plate, they’ll take it.

Verisign targets bank claims in name collisions fight

Kevin Murphy, September 15, 2013, Domain Tech

Verisign has rubbished the Commonwealth Bank of Australia’s claim that its dot-brand gTLD, .cba, is safe.

In a lengthy letter to ICANN today, Verisign senior vice president Pat Kane said that, contrary to CBA’s claims, the bank is only responsible for about 6% of the traffic .cba sees at the root.

It’s the latest volley in the ongoing fight about the security risks of name collisions — the scenario where an applied-for gTLD string is already in broad use on internal networks.

CBA’s application for .cba has been categorized as “uncalculated risk” by ICANN, meaning it faces more reviews and three to six months of delay while its risk profile is assessed.

But in a letter to ICANN last month, CBA said “the cause of the name collision is primarily from CBA internal systems” and “it is within the CBA realm of control to detect and remediate said systems”.

The bank was basically claiming that its own computers use DNS requests for .cba already, and that leakage of those requests onto the internet was responsible for its relatively high risk profile.

At the time we doubted that CBA had access to the data needed to draw this conclusion, and Verisign said today that a new study of its own “shows without a doubt that CBA’s initial conclusions are incorrect”.

Since the publication of Interisle Consulting’s independent review into root server error traffic — which led to all applied-for strings being split into risk categories — Verisign has evidently been carrying out its own study.

While Interisle used data collected from almost all of the DNS root servers, Verisign’s seven-week study only looked at data gathered from the A-root and J-root, which it manages.

According to Verisign, .cba gets roughly 10,000 root server queries per day — 504,000 in total over the study window — and hardly any of them come from the bank itself.

Most appear to be from residential apartment complexes in Chiba, Japan, where network admins seem to have borrowed the local airport code — also CBA — to address local devices.

About 80% of the requests seen come from devices using DNS Service Discovery services such as Bonjour, Verisign said.

Bonjour is an Apple-created technology that allows computers to use DNS to automatically discover other LAN-connected devices such as printers and cameras, making home networking a bit simpler.
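
For those wondering how such a query ends up at the root, here’s an illustrative sketch (not Apple’s code, and the service type is just an example) of a wide-area DNS-SD browse: looking for printers under a configured domain means a PTR query for _ipp._tcp.<domain>, and with “cba” as the domain that query escapes to the public root.

```python
# Illustrative wide-area DNS-SD browse, assuming dnspython.
import dns.resolver

def browse_printers(domain):
    qname = f"_ipp._tcp.{domain}"      # DNS-SD browse name for IPP printers
    try:
        answers = dns.resolver.resolve(qname, "PTR")
        return [str(rr.target) for rr in answers]
    except dns.resolver.NXDOMAIN:
        return []                      # today's outcome for .cba: nothing there

browse_printers("cba")                 # leaks a .cba query toward the root
```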

Another source of the .cba traffic is McAfee’s antivirus software, owned by Intel, which Verisign said uses DNS to check whether code is virus-free before executing it.
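
Verisign’s letter doesn’t describe the mechanism, but DNS-based reputation checks of this sort generally look something like the following hypothetical sketch — the zone name and encoding are invented for illustration and are not McAfee’s actual scheme:

```python
# Generic DNS-based file reputation lookup: encode a fingerprint as a DNS
# label under a vendor zone and treat the answer (or its absence) as a verdict.
import hashlib
import dns.resolver

def reputation_verdict(file_bytes, zone="av-lookup.example.net"):
    fingerprint = hashlib.sha256(file_bytes).hexdigest()[:32]
    try:
        answer = dns.resolver.resolve(f"{fingerprint}.{zone}", "TXT")
        return str(answer[0])          # vendor-defined verdict record
    except dns.resolver.NXDOMAIN:
        return "unknown"               # absent record: decide locally
```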

While error traffic for .cba was seen from 170 countries, Verisign said that Japan — notable for not being Australia — was the biggest source, with almost 400,000 queries (79% of the total). It said:

Our measurement study reveals evidence of a substantial Internet-connected infrastructure in Japan that lies beneath the surface of the public-facing internet, which appears to rely on the non-resolution of the string .CBA.

This infrastructure appears hierarchical and seems to include municipal and private administrative and service networks associated with electronic resource management for office and residential building facilities, as well as consumer devices.

One apartment block in Chiba is responsible for almost 5% of the daily .cba queries — about 500 per day on average — according to Verisign’s letter, though there were 63 notable sources in total.

ICANN’s proposal for reducing the risk of these name collisions causing problems would require CBA, as the registry, to hunt down and warn organizations of .cba’s impending delegation.

Verisign reiterates the point made by RIPE NCC last month: this would be quite difficult to carry out.

But it does seem that Verisign has done a pretty good job tracking down the organizations that would be affected by .cba being delegated.

The question that Verisign’s letter and presentation do not address is: what would happen to these networks if .cba were delegated?

If .cba is delegated, what will McAfee’s antivirus software do? Will it crash the user’s computer? Will it allow unsafe code to run? Will it cause false positives, blocking users from legitimate content?

Or will it simply fail gracefully, causing no security problems whatsoever?

Likewise, what happens when Bonjour expects .cba to not exist and it suddenly does? Do Apple computers start leaking data about the devices on their local network to unintended third parties?

Or does it, again, cause no security problems whatsoever?

For all we know, the answers may be entirely benign — name collisions could be introduced by ICANN with little to no effect, meaning the “risk” isn’t really a risk at all.

Answering those questions will of course take time, which means delay, which is not something most applicants want to hear right now.

Verisign’s study targeted CBA because CBA singled itself out by claiming to be responsible for the .cba error traffic, not because CBA is a client of rival registry Afilias.

The bank can probably thank Verisign for its study, which may turn out to be quite handy.

Still, it would be interesting to see Verisign conduct a similar study on, say, .windows (Microsoft), .cloud (Symantec) or .bank (Financial Services Roundtable), which are among the 35 gTLDs with “uncalculated” risk profiles that Verisign promised to provide back-end registry services for before it decided that new gTLDs were dangerous.

You can read Verisign’s letter and presentation here. I’ve rotated the PDF to make the presentation more readable here.

Artemis plans name collision conference next week

Kevin Murphy, August 16, 2013, Domain Tech

Artemis Internet, the NCC Group subsidiary applying for .secure, is to run a day-long conference devoted to the topic of new gTLD name collisions in San Francisco next week.

Google, PayPal and DigiCert are already lined up to speak at the event, and Artemis says it expects 60 to 70 people, many of them from major new gTLD applicants, to show up.

The free-to-attend TLD Security Forum will discuss the recent Interisle Consulting report into name collisions, which compared the problem in some cases to the Millennium Bug and recommended extreme caution when approving new gTLDs.

Brad Hill, head of ecosystem security at PayPal, will speak to “Paypal’s Concerns and Recommendations on new TLDs”, according to the agenda.

That’s notable because PayPal is usually positioned on the other side of the debate — it’s the only company to date that Verisign has been able to quote when trying to show support for its own concerns about name collisions.

The Interisle report led to ICANN recommending months of delay for hundreds of new gTLD strings — basically every string that already gets more daily root server error traffic than legitimate queries for .sj, the existing TLD with the fewest look-ups.

The New TLD Applicants Group issued its own commentary on these recommendations, apparently drafted by Artemis CTO Alex Stamos, earlier this week, calling for all strings except .home and .corp to be treated as low risk.

NTAG also said in its report that it has been discussing with SSL certificate authorities ways to potentially speed up risk-mitigation for the related problem of internal name certificate collisions, so it’s also notable that DigiCert’s Dan Timpson is slated to speak at the Forum.

The event may be webcast for those unable to attend in person, according to Artemis. If it is, DI will be “there”.

On the same topic, ICANN yesterday published a video interview with DNS inventor Paul Mockapetris, in which he recounted some name collision anecdotes from the Mesolithic period of the internet. It’s well worth a watch.

NTAG rubbishes new gTLD collision risk report

Kevin Murphy, August 15, 2013, Domain Policy

The New gTLD Applicants Group has slated Interisle Consulting’s report into the risk of new gTLDs causing security problems on the internet, saying the problem is “overstated”.

The group, which represents applicants for hundreds of gTLDs and has a non-voting role in ICANN’s GNSO, called on ICANN to reclassify hundreds of “Uncalculated” risk strings as “Low” risk, meaning they would not face substantial delays or uncertainty about their eventual delegation.

But NTAG said it “agreed” that the high-risk .corp and .home “should be delayed while further studies are conducted”. The current ICANN proposal is actually to reject both of these strings.

NTAG was responding to ICANN’s proposal earlier this month to delay 523 applications (for 279 strings) by three to six months while further studies are carried out.

The proposal was based on Interisle’s study of DNS root server logs, which showed many millions of daily queries for gTLDs that currently do not exist but have been applied for.

The worry is that delegating those strings would cause problems such as downtime or data leakage, where sensitive information intended for a recipient on the same local network would be sent instead to a new gTLD registry or one of its (possibly malicious) registrants.
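
To illustrate the mechanism — a sketch with invented names, assuming dnspython. A host configured with the DNS search suffix “corp” turns the unqualified name “payroll” into the query “payroll.corp.”; on the corporate LAN an internal server answers, but from outside, the query goes to the public root.

```python
# How an internal name leaks: search-suffix qualification of a relative name.
import dns.name
import dns.resolver

search_suffix = dns.name.from_text("corp")                 # internal suffix
unqualified = dns.name.from_text("payroll", origin=None)   # relative name
leaked_qname = unqualified.concatenate(search_suffix)      # payroll.corp.

try:
    dns.resolver.resolve(leaked_qname, "A")
    print(f"{leaked_qname} resolves publicly: collision!")
except dns.resolver.NXDOMAIN:
    print(f"{leaked_qname} still dies at the root, as it does today")
```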

NTAG reckons the risk presented by Interisle has been overblown, and it presented a point-by-point analysis of its own. It called for everything except .corp and .home to be categorized “Low” risk, saying:

We recognize that a small number of applied for names may possibly pose a risk to current operations, but we believe very strongly that there is no quantitative basis for holding back strings that pose less measurable threat than almost all existing TLDs today. This is why we urge the board to proceed with the applications classified as “Unknown Risk” using the mitigations recommended by staff for “Low Risk” strings. We believe the 80% of strings classified as “Low Risk” should proceed immediately with no additional mitigations.

The group pointed to a recent analysis by Verisign (which, contrarily, was trying to show that new gTLDs should be delayed) which included data about previous new gTLD delegations.

That report (pdf) said that .xxx was seeing 4,018 look-ups per million queries at the DNS root (PPM) before it was delegated. The number for .asia was 2,708.

If you exclude .corp and .home, both of those PPM numbers are multiples larger than the equivalent measures of query volume for every applied-for gTLD today, also according to Verisign’s data.
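
For those keeping score, the arithmetic behind that comparison looks like this; the .xxx and .asia figures are Verisign’s reported pre-delegation numbers, while the applied-for example is a made-up placeholder:

```python
# PPM is queries for a string per million total queries observed at the root.
def ppm(string_queries, total_root_queries):
    return string_queries / total_root_queries * 1_000_000

applied_for = ppm(3_000, 10_000_000)   # hypothetical string: 300 PPM

for tld, baseline in {".xxx": 4018, ".asia": 2708}.items():
    print(f"{tld} saw {baseline / applied_for:.1f}x this string's pre-delegation traffic")
```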

NTAG said:

None of these strings pose any more risk than .xxx, .asia and other currently operating TLDs.

the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.

Verisign’s report, which sought to provide a more qualitative risk analysis based on some data-supported guesses about where the error traffic is coming from and why, anticipated this interpretation.

Verisign said:

This could indicate that there is nothing to worry about when adding new TLDs, because there was no global failure of DNS when this was done before. Alternately, one might conclude that traffic volumes are not the only indicator of risk, and the semantic meaning of strings might also play a role. We posit that in some cases, those strings with semantic meanings, and which are in common use (such as in speech, writing, etc.) pose a greater risk for naming collision.

The company spent most of its report making somewhat tenuous correlations between its data (such as a relatively large number of requests for .medical from Japanese IP addresses) and speculative impacts (such as “undiagnosed system failures” at “a healthcare provider in Japan”).

NTAG, by contrast, is playing down the potential for negative outcomes, saying that in many cases the risks introduced by new gTLDs are no different from collision risks at the second level in existing TLDs.

Just as the NTAG would not ask ICANN to halt .com registrations while a twelve month study is performed on these problems, we believe there is no reason to introduce a delay in diversifying the Internet’s namespace due to these concerns.

While it stopped short of alleging shenanigans this time around, NTAG also suggested that future studies of root server error traffic could be gamed if botnets were engaged to crapflood the roots.

Its own mitigation plan, which addresses Interisle’s specific concerns, says that most of the reasons that non-existent TLDs are being looked up are either not a problem or can be easily mitigated.

For example, it says that queries for .youtube that arrived in the form of a request for “www.youtube” are probably browser typos and that there’s no risk for users if they’re taken to the YouTube dot-brand instead of youtube.com.

In another example, it points out that requests for “.cisco” or “.toshiba” without any second-level domains won’t resolve anyway, if dotless domains are banned in those TLDs. (NTAG, which has influential members in favor of dotless domains, stopped short of asking for a blanket ban.)

The Interisle report, and ICANN’s proposal to deal with it, are open for public comment until September 17. NTAG’s response is remarkably quick off the mark, for guessable reasons.
