Latest news of the domain name industry

ICANN reveals gTLD objections appeals process

Kevin Murphy, February 12, 2014, Domain Policy

Two new gTLD applicants would get the opportunity to formally appeal String Confusion Objection decisions that went against them, under plans laid out by ICANN today.
DERCars and United TLD (Rightside), which lost SCOs for their .cars and .cam applications respectively, would be the only parties able to appeal “inconsistent” objection rulings.
DERCars was told that its .cars was too similar to Google’s .car, forcing the two bids into a contention set. But Google lost similar SCO cases against two other .cars applicants.
Likewise, Rightside’s .cam application was killed off by a Verisign SCO that stated .cam and .com were too similar, despite two other .cam applicants prevailing in virtually identical cases.
Now ICANN plans to give both losing applicants the right to appeal these decisions to a three-person panel of “Last Resort” operated by the International Centre for Dispute Resolution.
ICDR was the body overseeing the original SCO process too.
Notably, ICANN’s new plan would not give Verisign and Google the right to appeal the two .cars/.cam cases they lost.

Only the applicant for the application that was objected to in the underlying SCO and lost (“Losing Applicant”) would have the option of whether to have the Expert Determination from that SCO reviewed.

There seems to be a presumption by ICANN already that what you might call the “minority” decision — ie, the one decision that disagreed with the other two — was the inconsistent one.
I wonder if that’s fair on Verisign.
Verisign lost two .cam SCO cases but won one, and only the one it won is open for appeal. But the two cases it lost were both decided by the same ICDR panelist, Murray Lorne Smith, on the same grounds. The decisions on .cam were really more 50-50 than they look.
According to the ICANN plan, there are two ways an appeal could go: the panel could decide that the original ruling should be reversed, or not. The standard of the review is:

Could the Expert Panel have reasonably come to the decision reached on the underlying SCO through an appropriate application of the standard of review as set forth in the Applicant Guidebook and procedural rules?

The appeals panelists would basically be asked to decide whether the original panelists are competent or not.
If the rulings were not reversed, the inconsistency would remain in place, leaving the contention sets for .car, .cars and .cam in a rather confusing state.
ICANN said it would pay the appeals panel’s costs.
The plan (pdf) is now open for public comment.

MarkMonitor infiltrated by Syrian hackers targeting Facebook

Kevin Murphy, February 6, 2014, Domain Registrars

The corporate brand protection registrar MarkMonitor was reportedly hacked yesterday by the group calling itself the Syrian Electronic Army, in an unsuccessful attempt to take out Facebook.
While MarkMonitor refused to confirm or deny the claims, the SEA, which has been conducting a campaign against high-profile western web sites for the last couple of years, tweeted several revealing screenshots.
One was a screen capture of a DomainTools Whois lookup for facebook.com, which does not appear to have been cached by DomainTools.


Another purported to be a cap of Facebook’s control panel at the registrar.


The SEA tweeted more caps purporting to show it had access to domains belonging to Amazon and Yahoo!.
In response to an inquiry, MarkMonitor rather amusingly told DI “we do not comment on our clients — including neither confirming nor denying whether or not a company is a client.”
This despite the fact that the company publishes a searchable database of its clients on its web site.
The attackers were unable to take down Facebook itself because the company has rather wisely chosen to set its domain to use Verisign’s Registry Lock anti-hijacking service.
Registry Lock prevents domains’ DNS settings being changed automatically via registrar control panels. Instead, registrants need to provide a security pass phrase over the phone.
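As an aside, Registry Lock manifests as server-side EPP statuses (serverUpdateProhibited and friends) that only the registry itself can remove. A minimal Python sketch for checking whether a domain carries them via RDAP might look like this (the endpoint URL here is my assumption for .com):

```python
# Minimal sketch: check whether a .com domain carries the server-side EPP locks
# that a registry-lock service sets, by reading its public RDAP record.
# The RDAP base URL is an assumption; substitute the right endpoint as needed.
import json
import urllib.request

RDAP_URL = "https://rdap.verisign.com/com/v1/domain/{}"  # assumed .com RDAP endpoint

def server_lock_statuses(domain: str) -> list[str]:
    """Return any 'server ... prohibited' statuses reported for the domain."""
    with urllib.request.urlopen(RDAP_URL.format(domain)) as resp:
        data = json.load(resp)
    # RDAP lists EPP statuses in human-readable form, e.g. "server update prohibited"
    return [s for s in data.get("status", []) if s.startswith("server") and "prohibited" in s]

if __name__ == "__main__":
    print(server_lock_statuses("facebook.com"))
```

A registry-locked name should report the update, delete and transfer prohibitions; a typical unlocked name will not.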

It’s official: Verisign has balls of steel

Kevin Murphy, October 18, 2013, Domain Registries

Verisign has spent the last six months telling anyone who will listen that new gTLDs will kill Japanese people and cause electricity grids to fail, so you’d expect the company to be a little coy about its own activities that (applying Verisign logic) endanger life and the global economy.
But apparently not.
Verisign today decided to use the same blog it has been using to play up the risks indicated by NXDOMAIN traffic in new gTLDs to plug its own service that actively encourages people to register error-traffic domains.
The company has launched DomainScope, which combines several older “domain discovery” tools — DomainFinder, DomainScore and DomainCountdown — under one roof.
According to an unsigned corporate blog post, with my emphasis:

DomainScope enables users to discover domain name registration opportunities through learning about the recent history of a domain name, understanding a domain name’s DNS traffic patterns, and knowing which domains are available that are receiving traffic.

That’s right, Verisign is giving malicious hackers the ability, for free, to find out which .com, .net and .tv domains currently receive NXDOMAIN traffic, so that the hackers can pay Verisign to register them and cause mayhem.
I used the service today to see what mischief might be possible, and hit paydirt on my first query.
Typing in “mail” as the search query, ordering the results by “Traffic Score” — a 1 to 10 measure of how much error traffic a domain already gets — I got these results:

You’ll notice (click to enlarge if you don’t) that the third result, with a 9.9 out of 10 score, is netsoolmail.net.
That caught my attention for obvious reasons, and a little Googling seems to confirm that it’s a typo of netsolmail.net, a domain Network Solutions uses for its mail servers (or possibly a spam filter).
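For what it's worth, spotting this kind of near-miss doesn't require anything fancy. A rough Python sketch using plain edit distance (the names here are just the ones discussed above) would flag it:

```python
# Rough sketch: flag NXDOMAIN-traffic names that sit within one edit of a known
# operational domain, a classic sign of typo/leakage traffic.
def edit_distance(a: str, b: str) -> int:
    """Plain dynamic-programming Levenshtein distance."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

known = ["netsolmail.net"]
candidates = ["netsoolmail.net"]

for name in candidates:
    for real in known:
        if edit_distance(name, real) <= 1:
            print(f"{name} looks like a typo of {real}")
```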
Network Solutions is of course a top-ten registrar with millions of mostly high-end customers.
So what?
Well, if Verisign’s arguments are to be believed, this poses a huge risk of information leakage — something that should be avoided at all costs in new gTLDs but which is apparently just fine in .com and .net.
Emails sent to netsoolmail.net will fail today due to an NXDOMAIN response. But what happens when somebody registers that domain (which is likely to happen about 10 minutes after this post is published)?
Do they suddenly start receiving thousands of sensitive emails intended for NetSol’s customers?
Could NetSol’s spam filters all start to fail, causing SOMEBODY TO DIE! from a dodgy Viagra?
I don’t know. No clue. Probably not.
But there’s a risk, right? Even if it’s a very small risk (as Verisign argues), shouldn’t ICANN be preventing Verisign from promoting these domains, maybe using some kind of massive block-list?
Data leakage is important enough to Verisign that it was the headline risk in a recent report aimed at getting new gTLDs delayed.
In an August “technical report” entitled “New gTLD Security, Stability, Resiliency Update: Exploratory Consumer Impact Analysis”, somebody from Verisign wrote (pdf):

once delegated, the registrants under new gTLDs have the ability to register specific domains for targeted collisions

This form of information leakage can violate privacy of users, provide a competitive advantage between business rivals, expose details of corporate network infrastructures, or even be used to infer details about geographical locations of network assets or users

What the report fails to mention is that registrants today have this ability, and that Verisign is actively encouraging the practice.
In Yiddish they call what Verisign has done today chutzpah.
In British English, we call it taking the piss.

Verisign targets bank claims in name collisions fight

Kevin Murphy, September 15, 2013, Domain Tech

Verisign has rubbished the Commonwealth Bank of Australia’s claim that its dot-brand gTLD, .cba, is safe.
In a lengthy letter to ICANN today, Verisign senior vice president Pat Kane said that, contrary to CBA’s claims, the bank is only responsible for about 6% of the traffic .cba sees at the root.
It’s the latest volley in the ongoing fight about the security risks of name collisions — the scenario where an applied-for gTLD string is already in broad use on internal networks.
CBA’s application for .cba has been categorized as “uncalculated risk” by ICANN, meaning it faces more reviews and three to six months of delay while its risk profile is assessed.
But in a letter to ICANN last month, CBA said “the cause of the name collision is primarily from CBA internal systems” and “it is within the CBA realm of control to detect and remediate said systems”.
The bank was basically claiming that its own computers use DNS requests for .cba already, and that leakage of those requests onto the internet was responsible for its relatively high risk profile.
At the time we doubted that CBA had access to the data needed to draw this conclusion and Verisign said today that a new study of its own “shows without a doubt that CBA’s initial conclusions are incorrect”.
Since the publication of Interisle Consulting’s independent review into root server error traffic — which led to all applied-for strings being split into risk categories — Verisign has evidently been carrying out its own study.
While Interisle used data collected from almost all of the DNS root servers, Verisign’s seven-week study only looked at data gathered from the A-root and J-root, which it manages.
According to Verisign, .cba gets roughly 10,000 root server queries per day — 504,000 in total over the study window — and hardly any of them come from the bank itself.
Most appear to be from residential apartment complexes in Chiba, Japan, where network admins seem to have borrowed the local airport code — also CBA — to address local devices.
About 80% of the requests seen come from devices using DNS Service Discovery services such as Bonjour, Verisign said.
Bonjour is an Apple-created technology that allows computers to use DNS to automatically discover other LAN-connected devices such as printers and cameras, making home networking a bit simpler.
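For the curious, this is roughly how DNS-SD (the protocol underlying Bonjour) constructs its unicast query names per RFC 6763. The browse domain here is a hypothetical one under .cba, to show how such lookups end up as NXDOMAIN traffic at the root:

```python
# How DNS-SD (the protocol behind Bonjour) builds its unicast query names,
# per RFC 6763. If a network's browse/search domain sits under an undelegated
# string like "cba", every one of these lookups leaks to the root as NXDOMAIN.
# The browse domain below is hypothetical.
SERVICE_TYPES = ["_ipp._tcp", "_printer._tcp", "_airplay._tcp"]

def dns_sd_query_names(browse_domain: str) -> list[str]:
    names = [f"_services._dns-sd._udp.{browse_domain}"]           # service-type enumeration
    names += [f"{svc}.{browse_domain}" for svc in SERVICE_TYPES]  # per-type browse (PTR) names
    return names

for qname in dns_sd_query_names("building7.cba"):
    print(qname)  # each is an NXDOMAIN at the root while .cba remains undelegated
```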
Another source of the .cba traffic is McAfee’s antivirus software, made by Intel, which Verisign said uses DNS to check whether code is virus-free before executing it.
While error traffic for .cba was seen from 170 countries, Verisign said that Japan — notable for not being Australia — was the biggest source, with almost 400,000 queries (79% of the total). It said:

Our measurement study reveals evidence of a substantial Internet-connected infrastructure in Japan that lies beneath the surface of the public-facing internet, which appears to rely on the non-resolution of the string .CBA.
This infrastructure appear hierarchical and seems to include municipal and private administrative and service networks associated with electronic resource management for office and residential building facilities, as well as consumer devices.

One apartment block in Chiba is responsible for almost 5% of the daily .cba queries — about 500 per day on average — according to Verisign’s letter, though there were 63 notable sources in total.
ICANN’s proposal for reducing the risk of these name collisions causing problems would require CBA, as the registry, to hunt down and warn organizations of .cba’s impending delegation.
Verisign reiterates the point made by RIPE NCC last month: this would be quite difficult to carry out.
But it does seem that Verisign has done a pretty good job tracking down the organizations that would be affected by .cba being delegated.
The question that Verisign’s letter and presentation do not address is: what would happen to these networks if .cba was delegated?
If .cba is delegated, what will McAfee’s antivirus software do? Will it crash the user’s computer? Will it allow unsafe code to run? Will it cause false positives, blocking users from legitimate content?
Or will it simply fail gracefully, causing no security problems whatsoever?
Likewise, what happens when Bonjour expects .cba to not exist and it suddenly does? Do Apple computers start leaking data about the devices on their local network to unintended third parties?
Or does it, again, cause no security problems whatsoever?
Without satisfactory answers to those questions, it remains entirely possible that name collisions could be introduced by ICANN with little to no effect, meaning the “risk” isn’t really a risk at all.
Answering those questions will of course take time, which means delay, which is not something most applicants want to hear right now.
Verisign’s study targeted CBA because CBA singled itself out by claiming to be responsible for the .cba error traffic, not because CBA is a client of rival registry Afilias.
The bank can probably thank Verisign for its study, which may turn out to be quite handy.
Still, it would be interesting to see Verisign conduct a similar study on, say, .windows (Microsoft), .cloud (Symantec) or .bank (Financial Services Roundtable), which are among the 35 gTLDs with “uncalculated” risk profiles that Verisign promised to provide back-end registry services for before it decided that new gTLDs were dangerous.
You can read Verisign’s letter and presentation here. I’ve rotated the PDF to make the presentation more readable here.

Famous Four says that Demand Media’s .cam should be rejected

Kevin Murphy, September 6, 2013, Domain Policy

Demand Media’s application for .cam should be rejected because it lost a String Confusion Objection filed by .com registry Verisign, according to rival applicant Famous Four Media.
“The process in the applicant guidebook is now clear: AC Webconnecting and dot Agency Limited proceed to resolve the contention set, and United TLD’s application cannot proceed,” chief legal officer Peter Young told DI.
dot Agency is Famous Four’s applicant for .cam, which along with AC Webconnecting survived identical challenges filed by Verisign. United TLD is the applicant subsidiary of Demand Media.
Serious questions were raised about the SCO process after two International Centre for Dispute Resolution panelists reached opposite conclusions in the three .cam/.com cases last month.
Demand Media subsequently called for an ICANN investigation into the process, with vice president Statton Hammock writing:

String confusion objections are meant to be applicant agnostic and have nothing to do with the registration or use of the new gTLD.

However, Famous Four thinks it has found a gotcha in a letter (pdf) written by a lawyer representing Demand which opposed consolidation of the three .cam cases, which stated:

Consolidation has the potential to prejudice the Applicants if all Applicants’ arguments are evaluated collectively, without regard to each Applicant’s unique plan for the .cam gTLD and their arguments articulating why such plans would not cause confusion.

In other words, Demand argued that the proposed usage of the TLD should be taken into account before the ICDR panel ruled against it, and now it is saying usage should not have been taken into account.
Famous Four’s Young said:

Whether or not one ascribes to the view that usage should not be taken into account, and we believe that it should (otherwise we would not have argued it), the fact is that United TLD were very explicit prior to the publication that usage should indeed be taken into account.

The SCO debate expanded yesterday when the GNSO Council spent some time discussing .cam and other SCO discrepancies during its regular monthly meeting.
Concerns are such that the Council intends to inform the ICANN board of directors and its New gTLD Program Committee that it is looking into the issue.
The NGPC has “Update on String Similarity” on its agenda for a meeting on Tuesday, which will no doubt try to figure out what, if anything, needs to be done.

Name collisions comments call for more gTLD delay

Kevin Murphy, August 29, 2013, Domain Registries

The first tranche of responses to Interisle Consulting’s study into the security risks of new gTLDs, and ICANN’s proposal to delay a few hundred strings pending more study, is in.
Comments filed with ICANN before the public comment deadline yesterday fall basically into two camps:

  • Non-applicants (mostly) urging ICANN to proceed with extreme caution. Many are asking for more time to study their own networks so they can get a better handle on their own risk profiles.
  • Applicants shooting holes in Interisle’s study and ICANN’s remediation plan. They want ICANN to reclassify everything except .home and .corp as low risk, removing delays to delegation and go-live.

They were responding to ICANN’s decision to delay 521 “uncalculated risk” new gTLD applications by three to six months while further research into the risk of name collisions — where a new gTLD could conflict with a TLD already used by internet users in a non-standard way — is carried out.
Proceed with caution
Many commenters stated that more time is needed to analyse the risks posed by name collisions, noting that Interisle studied primarily the volume of queries for non-existent domains, rather than looking deeply into the consequences of delegating colliding gTLDs.
That was a point raised by applicants too, but while applicants conclude that this lack of data should lead ICANN to lift the current delays, others believe that it means more delays are needed.
Two ICANN constituencies seem to generally agree with the findings of the Interisle report.
The Internet Service Providers and Connectivity Providers constituency asked for the public comment period to be put on hold until further research is carried out, or for at least 60 days. It noted:

corporations, ISPs and connectivity providers may bear the brunt of the security and customer-experience issues resulting from adverse (as yet un-analyzed) impacts from name collision

these issues, due to their security and customer-experience aspects, fall outside the remit of people who normally participate in the ICANN process, requiring extensive wide-ranging briefings even in corporations that do participate actively in the ICANN process

The At-Large Advisory Committee concurred that the Interisle study does not currently provide enough information to fully gauge the risk of name collisions causing harm.
ALAC said it was “in general concurrence with the proposed risk mitigation actions for the three defined risk categories” anyway, adding:

ICANN must assure that such residual risk is not transferred to third parties such as current registry operators, new gTLD applicants, registrants, consumers and individual end users. In particular, the direct and indirect costs associated with proposed mitigation actions should not have to be borne by registrants, consumers and individual end users. The Board must err on the side of caution

Several individual stakeholders agreed with the ISPCP that they need more time to look at their own networks. The Association of National Advertisers said:

Our member companies are working diligently to determine if DNS Clash issues are present within their respective networks. However the ANA had to communicate these issues to hundreds of companies, after which these companies must generate new data to determine the potential service failures on their respective networks.

The ANA wants the public comment period extended until November 22 to give its members more time to gather data.
While the ANA can always be relied upon to ask for new gTLDs to be delayed, its request was echoed by others.
General Electric called for three types of additional research:

  • Additional studies of traffic beyond the initial DITL sample.
  • Information and analysis of “use cases” — particular types of queries and traffic — and the consequences of the failure of particular use cases to resolve as intended (particular use cases could have severe consequences even if they might occur infrequently — like hurricanes), and
  • Studies of the time and costs of mitigation.

GE said more time is needed for companies such as itself to conduct impact analyses on their own internal networks and asked ICANN to not delegate any gTLD until the risk is “fully understood”.
Verizon, Heinz and the American Insurers Association have asked for comment deadline extensions for the same reasons.
The Association of Competitive Technology (which has Verisign as a member) said:

ICANN should slow or temporarily suspend the process of delegating TLDs at risk of causing problems due to their frequency of appearance in queries to the root. While we appreciate the designation of .home and .corp as high risk, there are many other TLDs which will also have a significant destructive effect.

Numerically, there were far more comments criticizing ICANN’s mitigation proposal. All were filed by new gTLD applicants, whose interests are aligned, however.
Most of these comments, which are far more focused on the details and the data, target perceived deficiencies in Interisle’s report and ICANN’s response to it.
Several very good arguments are made.
The Svalbard problem
First, there is criticism of the cut-off point between “low risk” and “uncalculated risk” strings, which some applicants say is “arbitrary”.
That’s mostly true.
ICANN basically took the list of applied-for strings, ordered by the frequency Interisle found they generate NXDOMAIN responses at the root, and drew a line across it at the 49,842 queries mark.
That’s because 49,842 queries is what .sj, the least-frequently-queried real TLD, received over the same period.
If your string, despite not yet existing as a gTLD, already gets more traffic than .sj, it’s classed as “uncalculated risk” and faces more delays, according to ICANN’s plan.
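To make the rule concrete, here's a minimal Python sketch of the classification as described, using the rounded query counts Directi cites below purely for illustration:

```python
# The classification rule as described: anything that generated more root
# NXDOMAIN queries than .sj (49,842 over the sample window) is "uncalculated
# risk"; anything at or below that line is "low risk". The counts are the
# rounded figures quoted by Directi, used only for illustration.
SJ_THRESHOLD = 49_842

def classify(query_count: int) -> str:
    return "uncalculated risk" if query_count > SJ_THRESHOLD else "low risk"

for string, count in {".bio": 50_000, ".engineering": 49_000}.items():
    print(string, classify(count))  # .bio gets delayed, .engineering proceeds
```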
As Directi said in its comments:

The result of this arbitrary selection is that .bio (Rank 281) with 50,000 queries (rounded to the nearest thousand) is part of the “uncategorized risk” list, and is delayed by 3 to 6 months, whereas .engineering (Rank 282) with 49,000 queries (rounded to the nearest thousand) is part of the “low risk” list, and can proceed without any significant delays.

What neither ICANN nor Interisle explained is why this is an appropriate place to draw a line in the sand.
This graphic from DotCLUB Domains illustrates the scale of the problem nicely:
.sj is the ccTLD for Svalbard, a Norwegian territory in the Arctic Circle with fewer than 3,000 inhabitants. The TLD is administered by .no registry Norid, but it’s not possible to register domains there.
Does having more traffic than .sj mean a gTLD is automatically more risky? Does having less mean a gTLD is safe? The ICANN proposal assumes “yes” to both questions, but it doesn’t explain why.
Many applicants say that having more traffic than existing gTLDs does not automatically mean your gTLD poses a risk.
They pointed to Verisign data from 2006, which shows that gTLDs such as .xxx and .asia were already receiving large amounts of traffic prior to their delegation. When they were delegated, the sky did not fall. Indeed, there were no reports of significant security and stability problems.
The New gTLD Applicants Group said:

In fact, the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. 4 Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.
These successful delegations alone demonstrate that there is no need to delay any more than the two most risky strings.

Donuts said:

There is no factual basis in the study recommending halting delegation process of 20% of applied-for strings. As the paper itself says, “The Study did not find enough information to properly classify these strings given the short timeline.” Without evidence of actual harm, the TLDs should proceed to delegation. Such was the case with other TLDs such as .XXX and .ASIA, which were delegated without delay and with no problems post-delegation.

Applicants also believe that the release in June 2012 of the list of all 1,930 applied-for strings may have skewed the data set that Interisle used in its study.
Uniregistry, for example, said:

The sole fact that queries are being received at the root level does not itself present a security risk, especially after the release to the public of the list of applied-for strings.

The argument seems to be that a lot of the NXDOMAIN traffic seen in 2013 is due to people and software querying applied-for TLDs to see if they’re live yet.
It’s quite a speculative argument, but it’s somewhat supported by the fact that many applied-for strings received more queries in 2013 than they did in the equivalent 2012 sampling.
Second-level domains
Some applicants pointed out that there may not be a correlation between the volume of traffic a string receives and the number of second-level domains being queried.
A string might get a bazillion queries for a single second-level domain name. If that domain name is reserved by the registry, the risk of a name collision might be completely eliminated.
The Interisle report did show that the number of SLDs and the volume of traffic do not correlate.
For example, .hsbc is ranked 14th in terms of traffic volume but saw requests for just 2,000 domains, whereas .inc, which ranked 15th, saw requests for 73,000 domains.
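As a rough sketch of the kind of concentration check applicants are implying, with made-up query counts:

```python
# Illustrative sketch: how concentrated is a string's collision traffic across
# second-level labels? If a handful of SLDs dominate, reserving those labels at
# the registry removes most of the measured risk. The counts here are invented.
from collections import Counter

queries = Counter({"mail": 95_000, "www": 3_000, "vpn": 1_500, "printer": 500})

total = sum(queries.values())
top = queries.most_common(3)
covered = sum(count for _, count in top)
print(f"Top {len(top)} SLDs account for {covered / total:.1%} of all queries")
print("Candidate reserved labels:", [label for label, _ in top])
```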
Unfortunately, the Interisle report only published the SLD numbers for the top 35 strings by query volume, leaving most applicants none the wiser about the possible impact of their own strings.
And ICANN did not factor the number of SLDs into its decision about where to draw the line between “low” and “uncalculated” risk.
Conspiracy theories
Some applicants questioned whether the Interisle data itself was reliable, but I find these arguments poorly supported and largely speculative.
They propose that someone (meaning presumably Verisign, which stands to lose market share when new gTLDs go live, and which kicked off the name collisions debate in the first place) could have gamed the study by generating spurious requests for applied-for gTLDs during the period Interisle’s data was being captured.
Some applicants put forth this view, while others limited their comments to a request that future studies rely only on data collected before now, to avoid tampering at the point of collection in future.
NTAG said:

Query counts are very easily gamed by any Internet connected system, allowing for malicious actors to create the appearance of risk for any string that they may object to in the future. It would be very easy to create the impression of a widespread string collision problem with a home Internet connection and the abuse of the thousands of available open resolvers.

While this kind of mischief is a hypothetical possibility, nobody has supplied any evidence that Interisle’s data was manipulated by anyone.
Some people have privately pointed DI to the fact that Verisign made a substantial donation to the DNS-OARC — the group that collected the data that Interisle used in its study — in July.
The implication is that Verisign was somehow able to manipulate the data after it was captured by DNS-OARC.
I don’t buy this either. We’re talking about a highly complex 8TB data set that took Interisle’s computers a week to process on each pass. The data, under the OARC’s deal with the root server operators, is not allowed to leave its premises. It would not be easily manipulated.
Additionally, DNS-OARC is managed by Internet Systems Consortium — which runs the F-root and is Uniregistry’s back-end registry provider — from its own premises in California.
In short, in the absence of any evidence supporting this conspiracy theory, I find the idea that the Interisle data was hacked after it was collected highly improbable.
What next?
I’ve presented only a summary of some key points here. The full list of comments can be found here. The reply period for comments closes September 17.
Several ICANN constituencies that can usually be relied upon to comment on everything (registrars, intellectual property, business and non-commercial) have not yet commented.
Will ICANN extend the deadline? I suppose it depends on how cautious it wants to be, whether it believes the companies requesting the extension really are conducting their own internal collision studies, and how useful it thinks those studies will be.

String confusion in disarray as Demand’s .cam loses against Verisign’s .com

Kevin Murphy, August 20, 2013, Domain Policy

Demand Media is demanding an ICANN review of its objections policy, after its applied-for new gTLD .cam was beaten in a String Confusion Objection by .com registry Verisign.
An International Centre for Dispute Resolution panelist has ruled (pdf) that .cam and .com are too confusingly similar to coexist, meaning Demand’s bid for .cam must be rejected by ICANN.
But the ruling by Urs Laeuchli conflicts with two other ICDR panel decisions on .cam, which both found that the string is NOT confusingly similar to .com and therefore can be delegated.
So while Demand’s .cam bid, under a strict reading of the rules, is now supposed to be rejected, applications for identical strings filed by AC Webconnecting and dotAgency can go ahead.
ICANN has been thrown a curve ball it is not yet fully prepared to deal with.
As Akram Atallah, president of ICANN’s Generic Domains Division, told DI last week, it’s possible that the policy or the implementation of that policy may need to be revisited by ICANN and the community.
United TLD, the Demand Media subsidiary that applied for .cam, is now calling for precisely that, with vice president of business and legal affairs Statton Hammock writing today:

String confusion objections are meant to be applicant agnostic and have nothing to do with the registration or use of the new gTLD. What matters in string confusion objections is whether a string is visually, aurally or, according to ICANN’s Applicant Guidebook, otherwise “so nearly resembles another that it is likely to deceive or cause confusion.” Individuals may disagree on whether .CAM and .COM are similarly confusing, but there can be no mistake that United TLD’s .CAM string, AC Webhosting’s .CAM string, and dotAgency Limited’s .CAM string are all identical. Either all three applications should move forward or none should move forward.

The .cam cases are not alone in presenting ICANN with SCO problems.
Last week, Donuts’ bid for .pets was ruled confusingly similar to Google’s .pet, despite previous ICDR cases finding that plurals and singulars are not too confusing to coexist.
Where the .cam panelists disagreed
While there were three .cam cases, two of them were decided by the same panelist. It seems that both panelists were provided with very similar sets of evidence in all three cases.
It’s relevant to note that neither panelist — unlike some of their colleagues in other cases — thought it was appropriate to apply trademark law such as the DuPont factors in their decisions.
They did, however, consider the expected use cases of .cam.
All three applicants take .cam as short for “webcam” or “camera” and would target registrants interested in those fields (a lot of the use will likely be pornographic — AC Webconnecting is a porn firm after all).
But all three applicants also want to run “open” gTLDs, with no registration restrictions.
ICDR panelist Murray Smith was in charge of both the AC Webconnecting and dotAgency cases. He addressed expected usage explicitly in dotAgency, and explained why:

It is not just the visual, phonetic and conceptual similarity between the words that must be taken into account. In my view the greater emphasis should be focused on the use of the disputed extensions in the context of modern Internet usage. It is this context that compels the conclusion that an average Internet user would not be confused and would know that a .com website is probably a commercial website while a .cam websites would be something more focused in a particular field.

In AC Webconnecting, he wrote:

I agree that a consumer would quickly realize that a .cam website is likely associated with photography or camera use and is different than a .com website in use generally by a myriad of commercial entities.

So he’s putting the “greater emphasis” on usage — a factor that is not explicitly mentioned in the Applicant Guidebook’s description of the SCO and which may quite often differ between applicants.
Right there, in Smith’s interpretation of his task, we have a reason why SCOs will produce different results for identical strings.
I find Smith’s thinking baffling for a couple of reasons.
First, “a consumer would quickly realize that a .cam website is likely associated with photography” seems to ignore the existence of a bazillion .com web sites that are also associated with photography.
When did “commercial entities” and “photography or camera use” become mutually exclusive? Is photographyblog.com not confusingly similar to photographyblog.cam?
Second, he ignores the fact that basically anyone will be able to register a .cam web site for basically any purpose. None of the applicants want to restrict the gTLD to camera-related stuff.
ICDR panelist Laeuchli, in the Demand Media .cam case, raised this precise point, saying:

“.com” and “.cam” would use the same channels appealing to a broad audience. Even though according to Applicant, its envisioned TLD will “likely appeal” to a specific audience, it plans to operate “.cam” as an open gTLD. This would lead to extensive overlap.

Panelist Smith has some other notions about confusion that seem to defy common sense. He wrote in the AC Webconnecting case:

The .com TLD is the most widely recognized string in the Internet world. No reasonable Internet user would fail to recognize the .com TLD. The very reputation of the .com name serves to limit the potential for an average Internet user to be confused by the proposed .cam TLD. It is indeed unlikely that an online consumer would confuse a .com website with a .cam website.

Does this not strike anyone else as bad thinking?
It seems to me to be a little like saying that it’s perfectly okay to market a brand of carbonated beverage called Cuke, because Coke is so famous that nobody could possibly be confused. I don’t know where the law stands on that issue, but I’m pretty sure Coke wouldn’t be happy about it.
There’s also some weirdness in Laeuchli’s decision in the Demand case.
He puts some weight on the similarity scores produced by the controversial Sword algorithm in his decision, but apparently without doing even the basic research. He writes in his findings:

No matter what the standards and purpose the ICANN SWORD algorithm includes, it has comparative value.

Since pairs such as “God” and “dog” (85%) reach similarity scores of 84% and higher, how much more similar would “cxm” and “cxm” be (x being replaced with a vowel)!

The answer is that, according to Sword, they’re less similar. Sword scores “cam” v “com” at 63%.
Laeuchli knows it’s 63%, because he makes reference to that fact in his summary of Verisign’s evidence. He doesn’t need to speculate about the number based on what “god” v “dog” scores (and if he did the “dog” v “god” query himself, why on earth didn’t he just query “com” v “cam” too?)
His finding that .cam and .com will cause probable confusion seems to be based largely on expert witness testimony provided by both Verisign and Demand, in which he found Verisign’s more persuasive.
This evidence seems to have largely comprised the opinions of linguists, examining mouth shapes and acoustic frequencies, and market research looking into internet user behavior. As none of it has been published, it’s difficult to judge which side had the better arguments.
But it’s undeniably about the similarity of the strings, rather than the proposed usage, which makes Demand Media’s statement today — that SCOs “are meant to be applicant agnostic and have nothing to do with the registration or use of the new gTLD” — quite confusing.
Demand lost its case based on the string similarity, whereas the other two applicants won theirs based on the usage.
Perhaps Demand senses that its .cam application will not be immediately rejected if ICANN reopens the debate about string similarity. I think it’s probably correct.

Artemis plans name collision conference next week

Kevin Murphy, August 16, 2013, Domain Tech

Artemis Internet, the NCC Group subsidiary applying for .secure, is to run a day-long conference devoted to the topic of new gTLD name collisions in San Francisco next week.
Google, PayPal and DigiCert are already lined up to speak at the event, and Artemis says it expects 60 to 70 people, many of them from major new gTLD applicants, to show up.
The free-to-attend TLD Security Forum will discuss the recent Interisle Consulting report into name collisions, which compared the problem in some cases to the Millennium Bug and recommended extreme caution when approving new gTLDs.
Brad Hill, head of ecosystem security at PayPal, will speak to “Paypal’s Concerns and Recommendations on new TLDs”, according to the agenda.
That’s notable because PayPal is usually positioned as being aligned with the other side of the debate — it’s the only company to date Verisign has been able to quote from when it tries to show support for its own concerns about name collisions.
The Interisle report led to ICANN recommending months of delay for hundreds of new gTLD strings — basically every string that already gets more daily root server error traffic than legitimate queries for .sj, the existing TLD with the fewest look-ups.
The New TLD Applicants Group issued its own commentary on these recommendations, apparently drafted by Artemis CTO Alex Stamos, earlier this week, calling for all strings except .home and .corp to be treated as low risk.
NTAG also said in its report that it has been discussing with SSL certificate authorities ways to potentially speed up risk-mitigation for the related problem of internal name certificate collisions, so it’s also notable that DigiCert’s Dan Timpson is slated to speak at the Forum.
The event may be webcast for those unable to attend in person, according to Artemis. If it is, DI will be “there”.

On the same topic, ICANN yesterday published a video interview with DNS inventor Paul Mockapetris, in which he recounted some name collision anecdotes from the Mesolithic period of the internet. It’s well worth a watch.

NTAG rubbishes new gTLD collision risk report

Kevin Murphy, August 15, 2013, Domain Policy

The New gTLD Applicants Group has slated Interisle Consulting’s report into the risk of new gTLDs causing security problems on the internet, saying the problem is “overstated”.
The group, which represents applicants for hundreds of gTLDs and has a non-voting role in ICANN’s GNSO, called on ICANN to reclassify hundreds of “Uncalculated” risk strings as “Low” risk, meaning they would not face a substantial delay before, or uncertainty about, their eventual delegation.
But NTAG said it “agreed” that the high-risk .corp and .home “should be delayed while further studies are conducted”. The current ICANN proposal is actually to reject both of these strings.
NTAG was responding to ICANN’s proposal earlier this month to delay 523 applications (for 279 strings) by three to six months while further studies are carried out.
The proposal was based on Interisle’s study of DNS root server logs, which showed many millions of daily queries for gTLDs that currently do not exist but have been applied for.
The worry is that delegating those strings would cause problems such as downtime or data leakage, where sensitive information intended for a recipient on the same local network would be sent instead to a new gTLD registry or one of its (possibly malicious) registrants.
NTAG reckons the risk presented by Interisle has been overblown, and it presented a point-by-point analysis of its own. It called for everything except .corp and .home to be categorized “Low” risk, saying:

We recognize that a small number of applied for names may possibly pose a risk to current operations, but we believe very strongly that there is no quantitative basis for holding back strings that pose less measurable threat than almost all existing TLDs today. This is why we urge the board to proceed with the applications classified as “Unknown Risk” using the mitigations recommended by staff for “Low Risk” strings. We believe the 80% of strings classified as “Low Risk” should proceed immediately with no additional mitigations.

The group pointed to a recent analysis by Verisign (which, contrarily, was trying to show that new gTLDs should be delayed) which included data about previous new gTLD delegations.
That report (pdf) said that .xxx was seeing 4,018 look-ups per million queries at the DNS root (PPM) before it was delegated. The number for .asia was 2,708.
If you exclude .corp and .home, both of those PPM numbers are multiples larger than the equivalent measures of query volume for every applied-for gTLD today, also according to Verisign’s data.
NTAG said:

None of these strings pose any more risk than .xxx, .asia and other currently operating TLDs.

the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. 4 Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.

Verisign’s report, which sought to provide a more qualitative risk analysis based on some data-supported guesses about where the error traffic is coming from and why, anticipated this interpretation.
Verisign said:

This could indicate that there is nothing to worry about when adding new TLDs, because there was no global failure of DNS when this was done before. Alternately, one might conclude that traffic volumes are not the only indicator of risk, and the semantic meaning of strings might also play a role. We posit that in some cases, those strings with semantic meanings, and which are in common use (such as in speech, writing, etc.) pose a greater risk for naming collision.

The company spent most of its report making somewhat tenuous correlations between its data (such as a relatively large number of requests for .medical from Japanese IP addresses) and speculative impacts (such as “undiagnosed system failures” at “a healthcare provider in Japan”).
NTAG, by contrast, is playing down the potential for negative outcomes, saying that in many cases the risks introduced by new gTLDs are no different from collision risks at the second level in existing TLDs.

Just as the NTAG would not ask ICANN to halt .com registrations while a twelve month study is performed on these problems, we believe there is no reason to introduce a delay in diversifying the Internet’s namespace due to these concerns.

While it stopped short of alleging shenanigans this time around, NTAG also suggested that future studies of root server error traffic could be gamed if botnets were engaged to crapflood the roots.
Its own mitigation plan, which addresses Interisle’s specific concerns, says that most of the reasons that non-existent TLDs are being looked up are either not a problem or can be easily mitigated.
For example, it says that queries for .youtube that arrived in the form of a request for “www.youtube” are probably browser typos and that there’s no risk for users if they’re taken to the YouTube dot-brand instead of youtube.com.
In another example, it points out that requests for “.cisco” or “.toshiba” without any second-level domains won’t resolve anyway, if dotless domains are banned in those TLDs. (NTAG, which has influential members in favor of dotless domains, stopped short of asking for a blanket ban.)
The Interisle report, and ICANN’s proposal to deal with it, are open for public comment until September 17. NTAG’s response is remarkably quick off the mark, for guessable reasons.

Verisign confirms .gov downtime, blames algorithm

Kevin Murphy, August 15, 2013, Domain Tech

Verisign this morning confirmed yesterday’s reports that the .gov top-level domain went down for some internet users due to a DNSSEC problem, which it said was related to an algorithm change.
In a posting to various mailing lists, Verisign principal engineer Duane Wessels said:

On the morning of August 14, a relatively small number of networks may have experienced an operational disruption related to the signing of the .gov zone. In preparation for a previously announced algorithm rollover, a software defect resulted in publishing the .gov zone signed only with DNSSEC algorithm 8 keys rather than with both algorithm 7 and 8. As a result .gov name resolution may have failed for validating recursive name servers. Upon discovery of the issue, Verisign took prompt action to restore the valid zone.
Verisign plans to proceed with the previously announced .gov algorithm rollover at the end of the month with the zone being signed with both algorithms for a period of approximately 10 days.

This clarifies that the problem was slightly different to what had been assumed yesterday.
It was related to a change of the cryptographic algorithm used to create .gov’s DNSSEC keys, a relatively rare event, rather than a scheduled key rollover, which is a rather more frequent occurrence.
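For those who want to poke at this themselves, here's a rough sketch (using the third-party dnspython package) that compares the algorithms advertised in a zone's DS records with those present in its DNSKEY RRset, a simplified version of the consistency that validating resolvers expect to hold during a rollover:

```python
# Simplified check (requires the third-party dnspython package): during an
# algorithm rollover, the algorithms in the parent's DS records and the zone's
# DNSKEY RRset need to stay consistent, or strictly validating resolvers will
# reject the zone (roughly what briefly happened to .gov).
import dns.resolver

def dnssec_algorithms(zone: str) -> tuple[set[int], set[int]]:
    ds_algs = {rr.algorithm for rr in dns.resolver.resolve(zone, "DS")}
    key_algs = {rr.algorithm for rr in dns.resolver.resolve(zone, "DNSKEY")}
    return ds_algs, key_algs

ds_algs, key_algs = dnssec_algorithms("gov.")
print("DS algorithms:    ", sorted(ds_algs))
print("DNSKEY algorithms:", sorted(key_algs))
if not ds_algs & key_algs:
    print("Mismatch: strictly validating resolvers would fail this zone")
```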
The problem would only have made .gov domains (and consequently web sites, email, etc) inaccessible for users of networks where DNSSEC validation is strictly enforced, a population that is still quite small.
The US ISP with the strongest support for DNSSEC is Comcast. Since turning on its validators it has reported dozens of instances of DNSSEC failing — mostly in second-level .gov domains, where DNSSEC is mandated by US policy.
On two other occasions Comcast has blogged about the whole .gov TLD failing DNSSEC validation due to problems keeping keys up to date.
The general problem is widespread enough, and the impact severe enough, that Comcast has had to create an entirely new technology to prevent borked key rollovers making web sites go dark for its customers.
Called Negative Trust Anchors, it’s basically a Band-Aid that allows the ISP to deliberately ignore DNSSEC on a given domain while it waits for that domain’s owner to sort out its key problem.
The technology was created following the widely reported nasa.gov outage last year.
It’s really little wonder that so few organizations are interested in deploying DNSSEC today.
Yesterday’s .gov problem may have been minor, lasting only an hour or two, but had the affected TLD been .com, and had DNSSEC deployment been more widespread, everyone on the planet would have noticed.
Under ICANN contract, DNSSEC is mandatory for new gTLDs at the top level, but not the second level.