
.xxx sales spike 1,000% during discount

Kevin Murphy, September 5, 2013, Domain Registries

ICM Registry saw a spike of over 1,000% in .xxx domain name registrations in May, during which it offered new registrations at a steep discount to its regular price.
The numbers were still relatively small. The registry saw 13,136 adds during the period, compared to 1,131 in April and 1,836 in May 2012, according to ICANN reports published today.
Average add-years rose sequentially from 1.34 to 1.88 (compared to a gTLD industry average of 1.23), according to TLD Health Check, with total add-years up over 1,500% to 24,663.
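Those figures are easy to sanity-check (a sketch; the monthly totals come from the ICANN reports and TLD Health Check numbers cited above):

```python
# Sanity-check the growth figures quoted above.
# Monthly "adds" totals are from the ICANN registry reports cited in the text.
adds_april = 1_131
adds_may = 13_136
avg_add_years_may = 1.88  # average registration length, per TLD Health Check

# Month-over-month percentage increase in adds.
spike_pct = (adds_may - adds_april) / adds_april * 100
print(f"May spike: {spike_pct:.0f}%")  # roughly 1,061%, i.e. "over 1,000%"

# Total add-years implied by the average; close to the 24,663 reported
# (the small gap is rounding in the published 1.88 average).
print(f"Implied add-years: {adds_may * avg_add_years_may:.0f}")
```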
TLD Health Check
For the whole of May, ICM offered .xxx domains — which usually carry a registry fee of $62 — for the same price as .com domains. The promotion applied to any length of registration, from one to 10 years.
There were 747 10-year registrations in May. A small number, but exactly 100 more than ICM saw during its first month of general availability in December 2011. There were similar numbers of three and five-year sales, and over 1,100 two-year registrations.
The company ended the month with 120,409 domains under management.

Name collisions comments call for more gTLD delay

Kevin Murphy, August 29, 2013, Domain Registries

The first tranche of responses to Interisle Consulting’s study into the security risks of new gTLDs, and ICANN’s proposal to delay a few hundred strings pending more study, is in.
Comments filed with ICANN before the public comment deadline yesterday fall basically into two camps:

  • Non-applicants (mostly) urging ICANN to proceed with extreme caution. Many are asking for more time to study their own networks so they can get a better handle on their own risk profiles.
  • Applicants shooting holes in Interisle’s study and ICANN’s remediation plan. They want ICANN to reclassify everything except .home and .corp as low risk, removing delays to delegation and go-live.

They were responding to ICANN’s decision to delay 521 “uncalculated risk” new gTLD applications by three to six months while further research into the risk of name collisions — where a new gTLD could conflict with a TLD already used by internet users in a non-standard way — is carried out.
Proceed with caution
Many commenters stated that more time is needed to analyse the risks posed by name collisions, noting that Interisle studied primarily the volume of queries for non-existent domains, rather than looking deeply into the consequences of delegating colliding gTLDs.
That was a point raised by applicants too, but while applicants conclude that this lack of data should lead ICANN to lift the current delays, others believe that it means more delays are needed.
Two ICANN constituencies seem to generally agree with the findings of the Interisle report.
The Internet Service Providers and Connectivity Providers constituency asked for the public comment period to be put on hold until further research is carried out, or to be extended by at least 60 days. It noted:

corporations, ISPs and connectivity providers may bear the brunt of the security and customer-experience issues resulting from adverse (as yet un-analyzed) impacts from name collision

these issues, due to their security and customer-experience aspects, fall outside the remit of people who normally participate in the ICANN process, requiring extensive wide-ranging briefings even in corporations that do participate actively in the ICANN process

The At-Large Advisory Committee concurred that the Interisle study does not currently provide enough information to fully gauge the risk of name collisions causing harm.
ALAC said it was “in general concurrence with the proposed risk mitigation actions for the three defined risk categories” anyway, adding:

ICANN must assure that such residual risk is not transferred to third parties such as current registry operators, new gTLD applicants, registrants, consumers and individual end users. In particular, the direct and indirect costs associated with proposed mitigation actions should not have to be borne by registrants, consumers and individual end users. The Board must err on the side of caution

Several individual stakeholders agreed with the ISPCP that they need more time to look at their own networks. The Association of National Advertisers said:

Our member companies are working diligently to determine if DNS Clash issues are present within their respective networks. However the ANA had to communicate these issues to hundreds of companies, after which these companies must generate new data to determine the potential service failures on their respective networks.

The ANA wants the public comment period extended until November 22 to give its members more time to gather data.
While the ANA can always be relied upon to ask for new gTLDs to be delayed, its request was echoed by others.
General Electric called for three types of additional research:

  • Additional studies of traffic beyond the initial DITL sample.
  • Information and analysis of “use cases” — particular types of queries and traffic — and the consequences of the failure of particular use cases to resolve as intended (particular use cases could have severe consequences even if they might occur infrequently — like hurricanes), and
  • Studies of the time and costs of mitigation.

GE said more time is needed for companies such as itself to conduct impact analyses on their own internal networks and asked ICANN to not delegate any gTLD until the risk is “fully understood”.
Verizon, Heinz and the American Insurers Association have asked for comment deadline extensions for the same reasons.
The Association of Competitive Technology (which has Verisign as a member) said:

ICANN should slow or temporarily suspend the process of delegating TLDs at risk of causing problems due to their frequency of appearance in queries to the root. While we appreciate the designation of .home and .corp as high risk, there are many other TLDs which will also have a significant destructive effect.

Numerically, there were far more comments criticizing ICANN’s mitigation proposal. All were filed by new gTLD applicants, whose interests are aligned, however.
Most of these comments, which are far more focused on the details and the data, target perceived deficiencies in Interisle’s report and ICANN’s response to it.
Several very good arguments are made.
The Svalbard problem
First, there is criticism of the cut-off point between “low risk” and “uncalculated risk” strings, which some applicants say is “arbitrary”.
That’s mostly true.
ICANN basically took the list of applied-for strings, ordered by the frequency Interisle found they generate NXDOMAIN responses at the root, and drew a line across it at the 49,842 queries mark.
That’s because 49,842 queries is what .sj, the least-frequently-queried real TLD, received over the same period.
If your string, despite not yet existing as a gTLD, already gets more traffic than .sj, it’s classed as “uncalculated risk” and faces more delays, according to ICANN’s plan.
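The mechanics of that bucketing are simple enough to sketch (a hypothetical illustration; the 49,842 threshold is the .sj figure from the Interisle data, while the sample query counts are invented):

```python
# Sketch of ICANN's proposed risk bucketing, as described above.
# SJ_THRESHOLD is the NXDOMAIN query count Interisle measured for .sj,
# the least-frequently-queried existing TLD.
SJ_THRESHOLD = 49_842

def classify(nxdomain_queries: int) -> str:
    """Bucket an applied-for string by its NXDOMAIN query count."""
    if nxdomain_queries <= SJ_THRESHOLD:
        return "low risk"        # proceeds without significant delay
    return "uncalculated risk"   # delayed 3-6 months pending further study

# Illustrative counts either side of the line (rounded to the nearest
# thousand, as in applicant comments on the plan):
print(classify(50_000))  # uncalculated risk
print(classify(49_000))  # low risk
```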
As Directi said in its comments:

The result of this arbitrary selection is that .bio (Rank 281) with 50,000 queries (rounded to the nearest thousand) is part of the “uncategorized risk” list, and is delayed by 3 to 6 months, whereas .engineering (Rank 282) with 49,000 queries (rounded to the nearest thousand) is part of the “low risk” list, and can proceed without any significant delays.

What neither ICANN nor Interisle explained is why this is an appropriate place to draw a line in the sand.
This graphic from DotCLUB Domains illustrates the scale of the problem nicely:
.sj is the ccTLD for Svalbard, a Norwegian territory in the Arctic Circle with fewer than 3,000 inhabitants. The TLD is administered by .no registry Norid, but it’s not possible to register domains there.
Does having more traffic than .sj mean a gTLD is automatically more risky? Does having less mean a gTLD is safe? The ICANN proposal assumes “yes” to both questions, but it doesn’t explain why.
Many applicants say that having more traffic than existing gTLDs does not automatically mean your gTLD poses a risk.
They pointed to Verisign data from 2006, which shows that gTLDs such as .xxx and .asia were already receiving large amounts of traffic prior to their delegation. When they were delegated, the sky did not fall. Indeed, there were no reports of significant security and stability problems.
The New gTLD Applicants Group said:

In fact, the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.
These successful delegations alone demonstrate that there is no need to delay any more than the two most risky strings.

Donuts said:

There is no factual basis in the study recommending halting delegation process of 20% of applied-for strings. As the paper itself says, “The Study did not find enough information to properly classify these strings given the short timeline.” Without evidence of actual harm, the TLDs should proceed to delegation. Such was the case with other TLDs such as .XXX and .ASIA, which were delegated without delay and with no problems post-delegation.

Applicants also believe that the release in June 2012 of the list of all 1,930 applied-for strings may have skewed the data set that Interisle used in its study.
Uniregistry, for example, said:

The sole fact that queries are being received at the root level does not itself present a security risk, especially after the release to the public of the list of applied-for strings.

The argument seems to be that a lot of the NXDOMAIN traffic seen in 2013 is due to people and software querying applied-for TLDs to see if they’re live yet.
It’s quite a speculative argument, but it’s somewhat supported by the fact that many applied-for strings received more queries in 2013 than they did in the equivalent 2012 sampling.
Second-level domains
Some applicants pointed out that there may not be a correlation between the volume of traffic a string receives and the number of second-level domains being queried.
A string might get a bazillion queries for a single second-level domain name. If that domain name is reserved by the registry, the risk of a name collision might be completely eliminated.
The Interisle report did show that the number of SLDs and the volume of traffic do not correlate.
For example, .hsbc is ranked 14th in terms of traffic volume but saw requests for just 2,000 domains, whereas .inc, which ranked 15th, saw requests for 73,000 domains.
Unfortunately, the Interisle report only published the SLD numbers for the top 35 strings by query volume, leaving most applicants none the wiser about the possible impact of their own strings.
And ICANN did not factor the number of SLDs into its decision about where to draw the line between “low” and “uncalculated” risk.
Conspiracy theories
Some applicants questioned whether the Interisle data itself was reliable, but I find these arguments poorly supported and largely speculative.
They propose that someone (meaning presumably Verisign, which stands to lose market share when new gTLDs go live, and which kicked off the name collisions debate in the first place) could have gamed the study by generating spurious requests for applied-for gTLDs during the period Interisle’s data was being captured.
Some applicants put forth this view, while others limited their comments to a request that future studies rely only on data collected before now, to avoid tampering at the point of collection in future.
NTAG said:

Query counts are very easily gamed by any Internet connected system, allowing for malicious actors to create the appearance of risk for any string that they may object to in the future. It would be very easy to create the impression of a widespread string collision problem with a home Internet connection and the abuse of the thousands of available open resolvers.

While this kind of mischief is a hypothetical possibility, nobody has supplied any evidence that Interisle’s data was manipulated by anyone.
Some people have privately pointed DI to the fact that Verisign made a substantial donation to the DNS-OARC — the group that collected the data that Interisle used in its study — in July.
The implication is that Verisign was somehow able to manipulate the data after it was captured by DNS-OARC.
I don’t buy this either. We’re talking about a highly complex 8TB data set that took Interisle’s computers a week to process on each pass. The data, under the OARC’s deal with the root server operators, is not allowed to leave its premises. It would not be easily manipulated.
Additionally, DNS-OARC is managed by Internet Systems Consortium — which runs the F-root and is Uniregistry’s back-end registry provider — from its own premises in California.
In short, in the absence of any evidence supporting this conspiracy theory, I find the idea that the Interisle data was hacked after it was collected highly improbable.
What next?
I’ve presented only a summary of some key points here. The full list of comments can be found here. The reply period for comments closes September 17.
Several ICANN constituencies that can usually be relied upon to comment on everything (registrars, intellectual property, business and non-commercial) have not yet commented.
Will ICANN extend the deadline? I suppose it depends on how cautious it wants to be, whether it believes the companies requesting the extension really are conducting their own internal collision studies, and how useful it thinks those studies will be.

NTAG rubbishes new gTLD collision risk report

Kevin Murphy, August 15, 2013, Domain Policy

The New gTLD Applicants Group has slated Interisle Consulting’s report into the risk of new gTLDs causing security problems on the internet, saying the problem is “overstated”.
The group, which represents applicants for hundreds of gTLDs and has a non-voting role in ICANN’s GNSO, called on ICANN to reclassify hundreds of “Uncalculated” risk strings as “Low” risk, meaning they would face less delay before, and less uncertainty about, their eventual delegation.
But NTAG said it “agreed” that the high-risk .corp and .home “should be delayed while further studies are conducted”. The current ICANN proposal is actually to reject both of these strings.
NTAG was responding to ICANN’s proposal earlier this month to delay 523 applications (for 279 strings) by three to six months while further studies are carried out.
The proposal was based on Interisle’s study of DNS root server logs, which showed many millions of daily queries for gTLDs that currently do not exist but have been applied for.
The worry is that delegating those strings would cause problems such as downtime or data leakage, where sensitive information intended for a recipient on the same local network would be sent instead to a new gTLD registry or one of its (possibly malicious) registrants.
NTAG reckons the risk presented by Interisle has been overblown, and it presented a point-by-point analysis of its own. It called for everything except .corp and .home to be categorized “Low” risk, saying:

We recognize that a small number of applied for names may possibly pose a risk to current operations, but we believe very strongly that there is no quantitative basis for holding back strings that pose less measurable threat than almost all existing TLDs today. This is why we urge the board to proceed with the applications classified as “Unknown Risk” using the mitigations recommended by staff for “Low Risk” strings. We believe the 80% of strings classified as “Low Risk” should proceed immediately with no additional mitigations.

The group pointed to a recent analysis by Verisign (which, contrarily, was trying to show that new gTLDs should be delayed) which included data about previous new gTLD delegations.
That report (pdf) said that .xxx was seeing 4,018 look-ups per million queries at the DNS root (PPM) before it was delegated. The number for .asia was 2,708.
If you exclude .corp and .home, both of those PPM numbers are multiples larger than the equivalent measures of query volume for every applied-for gTLD today, also according to Verisign’s data.
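The PPM metric is just a normalization, so that samples of different sizes can be compared (a sketch; the raw counts below are invented, and only the PPM figures above come from Verisign’s report):

```python
# Queries-per-million (PPM) normalizes a string's root-query count
# against the total query volume in the sample, so measurements taken
# from different-sized samples are comparable.
def ppm(string_queries: int, total_queries: int) -> float:
    return string_queries / total_queries * 1_000_000

# Invented example: 4,018 hits in a hypothetical million-query sample
# reproduces .xxx's reported pre-delegation density.
print(ppm(4_018, 1_000_000))  # 4018.0
```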
NTAG said:

None of these strings pose any more risk than .xxx, .asia and other currently operating TLDs.

the least “dangerous” current gTLD on the chart, .sx, had 331 queries per million in 2006. This is a higher density of NXDOMAIN queries than all but five proposed new TLDs. Again, .sx was launched successfully in 2012 with none of the problems predicted in these reports.

Verisign’s report, which sought to provide a more qualitative risk analysis based on some data-supported guesses about where the error traffic is coming from and why, anticipated this interpretation.
Verisign said:

This could indicate that there is nothing to worry about when adding new TLDs, because there was no global failure of DNS when this was done before. Alternately, one might conclude that traffic volumes are not the only indicator of risk, and the semantic meaning of strings might also play a role. We posit that in some cases, those strings with semantic meanings, and which are in common use (such as in speech, writing, etc.) pose a greater risk for naming collision.

The company spent most of its report making somewhat tenuous correlations between its data (such as a relatively large number of requests for .medical from Japanese IP addresses) and speculative impacts (such as “undiagnosed system failures” at “a healthcare provider in Japan”).
NTAG, by contrast, is playing down the potential for negative outcomes, saying that in many cases the risks introduced by new gTLDs are no different from collision risks at the second level in existing TLDs.

Just as the NTAG would not ask ICANN to halt .com registrations while a twelve month study is performed on these problems, we believe there is no reason to introduce a delay in diversifying the Internet’s namespace due to these concerns.

While it stopped short of alleging shenanigans this time around, NTAG also suggested that future studies of root server error traffic could be gamed if botnets were engaged to crapflood the roots.
Its own mitigation plan, which addresses Interisle’s specific concerns, says that most of the reasons that non-existent TLDs are being looked up are either not a problem or can be easily mitigated.
For example, it says that queries for .youtube that arrived in the form of a request for “www.youtube” are probably browser typos and that there’s no risk for users if they’re taken to the YouTube dot-brand instead of youtube.com.
In another example, it points out that requests for “.cisco” or “.toshiba” without any second-level domains won’t resolve anyway, if dotless domains are banned in those TLDs. (NTAG, which has influential members in favor of dotless domains, stopped short of asking for a blanket ban.)
The Interisle report, and ICANN’s proposal to deal with it, are open for public comment until September 17. NTAG’s response is remarkably quick off the mark, for guessable reasons.

The UK is going nuts about porn and Go Daddy and Nominet are helping

Kevin Murphy, August 9, 2013, Domain Policy

In recent months the unhinged right of the British press has been steadily cajoling the UK government into “doing something about internet porn”, and the government has been responding.
I’ve been itching to write about the sheer level of badly informed claptrap being aired in the media and halls of power, but until recently the story wasn’t really in my beat.
Then, this week, the domain name industry got targeted. To its shame, it responded too.
Go Daddy has started banning certain domains from its registration path and Nominet is launching a policy consultation to determine whether it should ban some strings outright from its .uk registry.
It’s my beat now. I can rant.
For the avoidance of doubt, you’re reading an op-ed, written with a whisky glass in one hand and the other being used to periodically wipe flecks of foam from the corner of my mouth.
It also uses terminology DI’s more sensitive readers may not wish to read. Best click away now if that’s you.
The current political flap surrounding internet regulation seems to have emerged from the confluence of a few high-profile sexually motivated murders and a sudden awareness by the mainstream media — now beyond the point of dipping its toes in the murky social media waters of Twitter — of trolls.
(“Troll” is the term, rightly or wrongly, the mainstream media has co-opted for its headlines. Basically, they’re referring to the kind of obnoxious assholes who relentlessly bully others, sometimes vulnerable individuals and sometimes to the point of suicide, online.)
In May, a guy called Mark Bridger was convicted of abducting and murdering a five-year-old girl called April Jones. It was broadly believed — including by the judge — that the abduction was sexually motivated.
It was widely reported that Bridger had spent the hours leading up to the murder looking at child abuse imagery online.
It was also reported — though far less frequently — that during the same period he had watched a loop of a rape scene from the 2009 cinematic-release horror movie Last House On The Left.
He’d recorded the scene on a VHS tape when it was shown on free-to-air British TV last year.
Of the two technologies he used to get his rocks off before committing his appalling crime, which do you think the media zeroed in on: the amusingly obsolete VHS or the golly-it’s-all-so-new-and-confusing internet?
Around about the same time, another consumer of child abuse material named Stuart Hazell was convicted of the murder of 12-year-old Tia Sharp. Again it was believed that the motive was sexual.
While the government had been talking about a porn crackdown since 2011, it wasn’t until last month that the prime minister, David Cameron, sensed the time was right to announce a two-pronged attack.
First, Cameron said he wants to make it harder for people to access child abuse imagery online. A noble objective.
His speech is worth reading in full, as it contains some pretty decent ideas about helping law enforcement catch abusers and producers of abuse material that weren’t well-reported.
But it also contained a call for search engines such as Bing and Google to maintain a blacklist of CAM-related search terms. People searching for these terms will never get results, but they might get a police warning.
This has been roundly criticized as unworkable and amounting to censorship. If the government’s other initiatives are any guide, it’s likely to produce false positives more often than not.
Second, Cameron said he wants to make internet porn opt-in in the UK. When you sign up for a broadband account, you’ll have to check a box confirming that you want to have access to legal pornography.
This is about “protecting the children” in the other sense — helping to make sure young minds are not corrupted by exposure to complex sexual ideas they’re almost certainly not ready for.
The Open Rights Group has established that the opt-in process will look a little like this:

Notice how there are 10 categories and only one of them is related to pornography? As someone who writes about ICANN on a daily basis, I’m pretty worried about “esoteric materials” being blocked.
As a related part of this move, the government has already arranged with the six largest Wi-Fi hot-spot operators in the country to have porn filters turned on by default.
I haven’t personally tested these networks, but they’re apparently using the kind of lazy keyword filters that are already blocking access to newspaper reports about Cameron’s speech.
Censorship, in the name of “protecting the children” is already happening here in the UK.
Which brings me to Nominet and Go Daddy
Last Sunday, a guy called John Carr wrote a blog post about internet porn in the UK.
I can’t pretend I’ve ever heard of Carr, and he seems to have done a remarkably good job of staying out of Google, but apparently he’s a former board member of the commendable CAM-takedown charity the Internet Watch Foundation and a government adviser on online child safety.
He’d been given a preview of some headline-grabbing research conducted by MetaCert — a web content categorization company best known before now for working with .xxx operator ICM Registry — breaking down internet porn by the countries it is hosted in.
Because the British rank was surprisingly high, the data was widely reported in the British press on Monday. The Daily Mail — a right-wing “quality” tabloid whose bread and butter is bikini shots of D-list teenage celebrities — on Monday quoted Carr as saying:

Nominet should have a policy that websites registered under the national domain name do not contain depraved or disgusting words. People should not be able to register websites that bring disgrace to this country under the national domain name.

Now, assuming you’re a regular DI reader and have more than a passing interest in the domain name industry, you already know how ludicrous a thing to say this is.
Network Solutions, when it had a monopoly on .com domains, had a “seven dirty words” ban for a long time, until growers of shiitake mushrooms and Scunthorpe Council pointed out that it was stupid.
You don’t even need to be a domain name aficionado to have been forwarded the hilarious “penisland.net” and “therapistfinder.com” memes — they’re as old as the hills, in internet terms.
Assuming he was not misquoted, a purported long-time expert in internet filtering such as Carr should be profoundly, deeply embarrassed to have made such a pronouncement to a national newspaper.
If he really is a government adviser on matters related to the internet, he’s self-evidently the wrong man for the job.
Nevertheless, other newspapers picked up the quotes and the story and ran with it, and now Ed Vaizey, the UK’s minister for culture, communications and creative industries, is “taking it seriously”.
Vaizey is the minister most directly responsible for pretending to understand the domain name system. As a result, he has quite a bit of pull with Nominet, the .uk registry.
Because Vaizey for some reason believes Carr is to be taken seriously, Nominet, which already has an uncomfortably cozy relationship with the government, has decided to “review our approach to registrations”.
It’s going to launch “an independently-chaired policy review” next month, which will invite contributions from “stakeholders”.
The move is explicitly in response to “concerns” about its open-doors registration policy “raised by an internet safety commentator and subsequently reported in the media.”
Carr’s blog post, in other words.
Nominet — whose staff are not stupid — already knows that what Carr is asking for is pointless and unworkable. It said:

It is important to take into account that the majority of concerns related to illegality online are related to a website’s content – something that is not known at the point of registration of a domain name.

But the company is playing along anyway, allowing a badly informed blogger and a credulous politician to waste its and its community’s time with a policy review that will end in either nothing or censorship.
What makes the claims of Carr and the Sunday Times all the more extraordinary is that the example domain names put forward to prove their points are utterly stupid.
Carr published on his blog a screenshot of Go Daddy’s storefront informing him that the domain rapeher.co.uk is available for registration, and wrote:

www.rapeher.co.uk is a theoretical possibility, as are the other ones shown. However, I checked. Nominet did not dispute that I could have completed the sale and used that domain.

Why has it not occurred to Nominet to disallow names of that sort? Nominet needs to institute an urgent review of its naming policies

To be clear, rapeher.co.uk did not exist at the time Carr wrote his blog. He’s complaining about an unregistered domain name.
A look-up reveals that kill-all-jews.co.uk isn’t registered either. Does that mean Nominet has an anti-Semitic registration policy?
As a vegetarian, I’m shocked and appalled to discover that vegetarians-smell-of-cabbage.co.uk is unregistered too. Something must be done!
Since Carr’s post was published and the Sunday Times and Daily Mail in turn reported its availability, five days ago, nobody has registered rapeher.co.uk, despite the potential traffic the publicity could garner.
Nobody is interested in rapeher.co.uk except John Carr, the Sunday Times and the Daily Mail. Not even a domainer with a skewed moral compass.
And yet Go Daddy has taken it upon itself, apparently in response to a call from the Sunday Times, to preemptively ban rapeher.co.uk, telling the newspaper:

We are withdrawing the name while we carry out a review. We have not done this before.

This is what you see if you try to buy rapeher.co.uk today:

Is that all it takes to get a domain name censored from the market-leading registrar? A call from a journalist?
If so, then I demand the immediate “withdrawal” of rapehim.co.uk, which is this morning available for registration.

Does Go Daddy not take male rape seriously? Is Go Daddy institutionally sexist? Is Go Daddy actively encouraging male rape?
These would apparently be legitimate questions, if I was a clueless government adviser or right-leaning tabloid hack under orders to stir the shit in Middle England.
Of the other two domains cited by the Sunday Times — it’s not clear if they were suggested by Carr or MetaCert or neither — one of them isn’t even a .co.uk domain name; it’s the fourth-level subdomain incestrape.neuken.co.uk.
There’s absolutely nothing Nominet, Go Daddy, or anyone else could do, at the point of sale, to stop that domain name being created. They don’t sell fourth-level registrations.
The page itself is a link farm, probably auto-generated, written in Dutch, containing a single 200×150-pixel pornographic image — one picture! — that does not overtly imply either incest or rape.
The links themselves all lead to .com or .nl web sites that, while certainly pornographic, do not appear on cursory review to contain any obviously illegal content.
The other domain cited by the Daily Mail is asian-rape.co.uk. Judging by searches on several Whois services, Google and Archive.org, it’s never been registered. Not ever. Not even after the Mail’s article was published.
It seems that the parasitic Daily Mail really, really doesn’t understand domain names and thought it wouldn’t make a difference if it added a hyphen to the domain that the Sunday Times originally reported, which was asianrape.co.uk.
I can report that asianrape.co.uk is in fact registered, but it’s been parked at Sedo for a long time and contains no pornographic content whatsoever, legal or otherwise.
It’s possible that these are just idiotic examples picked by a clueless reporter, and Carr did allude in his post to the existence of .uk “rape” domains that are registered, so I decided to go looking for them.
First, I undertook a series of “rape”-related Google searches that will probably be enough to get me arrested in a few years’ time, if the people apparently guiding policy right now get their way.
I couldn’t find any porn sites using .uk domain names containing the string “rape” in the first 200 results, no matter how tightly I refined my query.
So I domain-dipped for a while, testing out a couple dozen “rape”-suggestive .co.uk domains conjured up by my own diseased mind. All I found were unregistered names and parked pages.
I Googled up some rape-themed porn sites that use .com addresses — these appear to exist in abundance, though few appear to contain the offending string in the domain itself — and couldn’t find any that have bothered to even defensively register their matching .co.uk.
So I turned to Alexa’s list of the top one million most-popular domains. Parsing the .csv, I counted 277 domains containing the string “rape”, only 32 of which (11%) could loosely be said to be using it in the sense of a sexual assault.
Whether those 32 sites contain legal or illegal pornographic content, I couldn’t say. I didn’t check. None of them were .uk addresses anyway.
Most of the non-rapey ones were about grapes.
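A quick-and-dirty count like mine is easy to reproduce. The sketch below assumes the old Alexa top-1m.csv layout (two columns, rank then domain, no header row) and uses a tiny made-up sample in place of the real file:

```python
import csv
import io

# The Alexa top-1m.csv had two columns: rank,domain (no header).
# A tiny inline sample stands in for the real file here.
SAMPLE = """1,google.com
2,grapevine.co.uk
3,rapecrisis.org.uk
4,terapeak.com
5,example.com
"""

def count_matches(csv_text, needle="rape"):
    """Return the domains whose name contains the given substring."""
    hits = []
    for rank, domain in csv.reader(io.StringIO(csv_text)):
        if needle in domain:
            hits.append(domain)
    return hits

hits = count_matches(SAMPLE)
print(len(hits), hits)
```

Note that even this five-row sample produces three substring matches, all of them innocuous, which is exactly the false-positive problem a naive keyword ban would run into.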
I’m not going to pretend that my research was scientific, and I’m not saying there are no rape-themed .co.uk porn sites out there; I’m just saying that I tried for a few hours to find one and couldn’t.
What I did find were dozens of legitimate uses of the string.
So if Nominet bans the word “rape” from domain name registrations under .uk — which is what Carr seems to want to happen — what happens to rapecrisis.org.uk?
Does the Post Office have to give up grapevine.co.uk, which it uses to help prevent crime? Does the eBay tools provider Terapeak have to drop its UK presence? Are “skyscrapers” too phallic now? Is the Donald Draper Fan Club doomed?
And what about the fine fellows at yorkshirerapeseedoil.co.uk or chilterncoldpressedrapeseedoil.co.uk?
If these examples don’t convince you that a policy of preemptive censorship would be damaging and futile, allow me to put the question in terms the Daily Mail might understand: why does Ed Vaizey hate farmers?

NTIA alarmed as Verisign hints that it will not delegate new gTLDs

Kevin Murphy, August 5, 2013, Domain Tech

Verisign has escalated its war against competition by telling its government masters that it is not ready to add new gTLDs to the DNS root, raising eyebrows at NTIA.
The company told the US National Telecommunications and Information Administration in late May that the lack of uniform monitoring across the 13 root servers means it would put internet security and stability at risk to start delegating new gTLDs now.
In response, the NTIA told Verisign that its recent position on DNS security is “troubling”. It demanded confirmation that Verisign is not planning to block new gTLDs from being delegated.
The letters (pdf and pdf) were published by ICANN over the weekend, over two months after the first was sent.
Verisign senior VP Pat Kane wrote in the May letter:

we strongly believe certain issues have not been addressed and must be addressed before any root zone managers, including Verisign, are ready to implement the new gTLD Program.
We want to be clearly on record as reporting out this critical information to NTIA unequivocally as we believe a complete assessment of the critical issues remain unaddressed which left unremediated could jeopardize the security and stability of the DNS.

we strongly recommend that the previous advice related to this topic be implemented and the capability for root server system monitoring, instrumentation, and management capabilities be developed and operationalized prior to beginning delegations.

Kane’s concerns were first outlined by Verisign in its March 2013 open letter to ICANN, which also expressed serious worries about issues such as internal name collisions.
Verisign is so far the only root server operator to publicly express concerns about the lack of coordinated monitoring, and many people believe that the company is simply desperately trying to delay competition for its $800 million .com business for as long as possible.
These people note that in early November 2012, Verisign signed a joint letter with ICANN and NTIA that said:

the Root Zone Partners are able to process at least 100 new TLDs per week and will commit the necessary resources to meet all root zone management volume increases associated with the new gTLD program

That letter was signed before NTIA stripped Verisign of its right to increase .com prices every year, depriving it of tens or hundreds of millions of dollars of additional revenue.
Some say that Verisign is raising spurious security concerns now purely because it’s worried about its bottom line.
NTIA is beginning to sound like one of these critics. In its response to the May 30 letter, sent by NTIA and published by ICANN on Saturday, deputy associate administrator Vernita Harris wrote:

NTIA and VeriSign have historically had a strong working relationship, but inconsistencies in VeriSign’s position in recent months are troubling… NTIA fully expects VeriSign to process change requests when it receives an authorization to delegate a new gTLD. So that there will be no doubt on this point, please provide me a written confirmation no later than August 16, 2013 that VeriSign will process change requests for the new gTLD program when authorized to delegate a new gTLD.

Harris said that a system is already in place that would allow the emergency rollback of the root zone, basically ‘un-delegating’ any gTLD that proves to cause a security or stability problem.
This would be “sufficient for the delegation of new gTLDs”, she wrote.
Could Verisign block new gTLDs?
It’s worth a reminder at this point that ICANN’s power over the DNS root is something of a facade.
Verisign, as operator of the master A root server, holds the technical keys to the kingdom. Under its NTIA contract, it only processes changes to the root — such as adding a TLD — when NTIA tells it to.
NTIA in practice merely passes on the recommendations of IANA, the department within ICANN that has the power to ask for changes to the root zone, also under contract with NTIA.
Verisign or NTIA in theory could refuse to delegate new gTLDs — recall that when .xxx was heading to the root the European Union asked NTIA to delay the delegation.
In practice, it seems unlikely that either party would stand in the way of new gTLDs at the root, but the Verisign rhetoric in recent months suggests that it is in no mood to play nicely.
To refuse to delegate gTLDs out of commercial self-interest would be seen as irresponsible, however, and would likely put Verisign’s role as custodian of the root at risk.
That said, if Verisign turns out to be the lone voice of sanity when it comes to DNS security, it is ICANN and NTIA that will ultimately look like they’re the irresponsible parties.
What’s next?
Verisign now has until August 16 to confirm that it will not make trouble. I expect it to do so under protest.
According to the NTIA, ICANN’s Root Server System Advisory Committee is currently working on two documents, RSSAC001 and RSSAC002, that will outline “the parameters of the basis of an early warning system” to address Verisign’s concerns about root server management.
These documents are likely to be published within weeks, according to the NTIA letter.
Meanwhile, we’re also waiting for the publication of Interisle Consulting’s independent report into the internal name collision issue, which is expected to recommend that gTLDs such as .corp and .home are put on hold. I’m expecting this to be published any day now.

Report names and shames most-abused TLDs

Kevin Murphy, July 11, 2013, Domain Services

Newish gTLDs .tel and .xxx are among the most secure top-level domains, while .cn and .pw are the most risky.
That’s according to new gTLD services provider Architelos, which today published a report analyzing the prevalence of abuse in each TLD.
Assigning an “abuse per million domains” score to each TLD, the company found .tel the safest with 0 and .cn the riskiest, with a score of 30,406.
Recently relaunched .pw, which has had serious problems with spammers, came in just behind .cn, with a score of 30,151.
Generally, the results seem to confirm that the more tightly controlled the registration process and the more expensive the domain, the less likely it is to see abuse.
Norway’s .no and ICM Registry’s .xxx scored 17 and 27, for example.
Surprisingly, the free ccTLD for Tokelau, .tk, which is now the second-largest TLD in the world, had only 224 abusive domains per million under management, according to the report.
Today’s report ranked TLDs with over 100,000 names under management. Over 90% of the abusive domains used to calculate the scores were related to spam, rather than anything more nefarious.
The data was compiled from Architelos’ NameSentry service, which aggregates abusive URLs from numerous third-party sources and tallies up the number of times each TLD appears.
The methodology is very similar to the one DI PRO uses in TLD Health Check, but Architelos uses more data sources. NameSentry is also designed to automate the remediation workflow for registries.
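Architelos hasn’t published its exact methodology, but the basic “abuse per million domains” arithmetic is straightforward: tally the TLD of each abusive URL in your feeds, then divide by that TLD’s domains under management, in millions. A minimal sketch, with entirely made-up URLs and DUM figures:

```python
from urllib.parse import urlsplit
from collections import Counter

# Hypothetical feed of abusive URLs. Real services like NameSentry
# aggregate these from numerous third-party blocklists.
ABUSE_URLS = [
    "http://spam-example.cn/path",
    "http://phish.example.pw/login",
    "http://malware.example.cn/x",
]

# Hypothetical domains-under-management figures per TLD.
DUM = {"cn": 7_800_000, "pw": 50_000}

def tld_of(url):
    """Return the last label of the URL's hostname (a crude TLD parse)."""
    host = urlsplit(url).hostname
    return host.rsplit(".", 1)[-1]

counts = Counter(tld_of(u) for u in ABUSE_URLS)

# Abuse per million domains under management.
scores = {tld: counts[tld] / (DUM[tld] / 1_000_000) for tld in DUM}
print(scores)
```

The sketch shows why small zones can score badly: one abusive domain in a 50,000-name TLD counts for far more, per million, than two in a multi-million-name TLD.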

ICANN offers to split the cost of GAC “safeguards” with new gTLD registries

Kevin Murphy, June 28, 2013, Domain Policy

All new gTLD applicants will have to abide by stricter rules on security and Whois accuracy under government-mandated changes to their contracts approved by the ICANN board.
At least one of the new obligations is likely to saddle new gTLD registries with additional ongoing costs. In another case, ICANN appears ready to shoulder the financial burden instead.
The changes are coming as a result of ICANN’s New gTLD Program Committee, which on Tuesday voted to adopt six more pieces of the Governmental Advisory Committee’s advice from March.
This chunk of advice, which deals exclusively with security-related issues, was found in the GAC’s Beijing communique (pdf) under the heading “Safeguards Applicable to all New gTLDs”.
Here’s what ICANN has decided to do about it.
Mandatory Whois checks
The GAC wanted all registries to conduct mandatory checks of Whois data at least twice a year, notifying registrars about any “inaccurate or incomplete records” found.
Many new gTLD applicants already offered to do something similar in their applications.
But ICANN, in response to the GAC advice, has volunteered to do these checks itself. The NGPC said:

ICANN is concluding its development of a WHOIS tool that gives it the ability to check false, incomplete or inaccurate WHOIS data

Given these ongoing activities, ICANN (instead of Registry Operators) is well positioned to implement the GAC’s advice that checks identifying registrations in a gTLD with deliberately false, inaccurate or incomplete WHOIS data be conducted at least twice a year. To achieve this, ICANN will perform a periodic sampling of WHOIS data across registries in an effort to identify potentially inaccurate records.

While the resolution is light on detail, it appears that new gTLD registries may well be taken out of the loop completely, with ICANN notifying their registrars instead about inaccurate Whois records.
It’s not the first time ICANN has offered to shoulder potentially costly burdens that would otherwise encumber registry operators. It doesn’t get nearly enough credit from new gTLD applicants for this.
Contractually banning abuse
The GAC wanted new gTLD registrants contractually forbidden from doing bad stuff like phishing, pharming, operating botnets, distributing malware and from infringing intellectual property rights.
These obligations should be passed to the registrants by the registries via their contracts with registrars, the GAC said.
ICANN’s NGPC has agreed with this bit of advice entirely. The base new gTLD Registry Agreement is therefore going to be amended to include a new mandatory Public Interest Commitment reading:

Registry Operator will include a provision in its Registry-Registrar Agreement that requires Registrars to include in their Registration Agreements a provision prohibiting Registered Name Holders from distributing malware, abusively operating botnets, phishing, piracy, trademark or copyright infringement, fraudulent or deceptive practices, counterfeiting or otherwise engaging in activity contrary to applicable law, and providing (consistent with applicable law and any related procedures) consequences for such activities including suspension of the domain name.

The decision to include it as a Public Interest Commitment, rather than building it into the contract proper, is noteworthy.
PICs will be subject to a Public Interest Commitment Dispute Resolution Process (PICDRP) which allows basically anyone to file a complaint about a registry suspected of breaking its commitments.
ICANN would act as the enforcer of the ruling, rather than the complainant. Registries that lose PICDRP cases face consequences up to and including the termination of their contracts.
In theory, by including the GAC’s advice as a PIC, ICANN is handing a loaded gun to anyone who might want to shoot down a new gTLD registry in future.
However, the proposed PIC language seems to be worded in such a way that the registry would only have to include the anti-abuse provisions in its contract in order to be in compliance.
Right now, the way the PIC is worded, I can’t see a registry getting terminated or otherwise sanctioned due to a dispute about an instance of copyright infringement by a registrant, for example.
I don’t think there’s much else to get excited about here. Every registry or registrar worth a damn already prohibits its customers from doing bad stuff, if only to cover their own asses legally and keep their networks clean; ICANN merely wants to formalize these provisions in its chain of contracts.
Actually fighting abuse
The third through sixth pieces of GAC advice approved by ICANN this week are the ones that will almost certainly add to the cost of running a new gTLD registry.
The GAC wants registries to “periodically conduct a technical analysis to assess whether domains in its gTLD are being used to perpetrate security threats such as pharming, phishing, malware, and botnets.”
It also wants registries to keep records of what they find in these analyses, to maintain a complaints mechanism, and to shut down any domains found to be perpetrating abusive behavior.
ICANN has again gone the route of adding a new mandatory PIC to the base Registry Agreement. It reads:

Registry Operator will periodically conduct a technical analysis to assess whether domains in the TLD are being used to perpetrate security threats, such as pharming, phishing, malware, and botnets. Registry Operator will maintain statistical reports on the number of security threats identified and the actions taken as a result of the periodic security checks. Registry Operator will maintain these reports for the term of the Agreement unless a shorter period is required by law or approved by ICANN, and will provide them to ICANN upon request.

You’ll notice that the language is purposefully vague on how registries should carry out these checks.
ICANN said it will figure out the precise details later, via a task force or a GNSO policy development process, enabling new gTLD applicants to enter into contracts as soon as possible.
It means, of course, that applicants could wind up signing contracts without being fully apprised of the cost implications. Fighting abuse costs money.
There are dozens of ways to scan TLDs for abusive behavior, but the most comprehensive ones are commercial services.
ICM Registry, for example, decided to pay Intel/McAfee millions of dollars — a dollar or two per domain, I believe — for it to run daily malware scans of the entire .xxx zone.
More recently, Directi’s .PW Registry chose to sign up to Architelos’ NameSentry service to monitor abuse in its newly relaunched ccTLD.
There’s going to be a fight about the implementation details, but one way or the other the PIC would make registries scan their zones for abuse.
What the PIC does not state, and where it may face queries from the GAC as a result, is what registries must do when they find abusive behavior in their gTLDs. There’s no mention of mandatory domain name suspension, for example.
But in an annex to Tuesday’s resolution, ICANN’s NGPC said the “consequences” part of the GAC advice would be addressed as part of the same future technical implementation discussions.
In summary, the NGPC wants registries to be contractually obliged to contractually oblige their registrars to contractually oblige their registrants to not do bad stuff, but there are not yet any obligations relating to the consequences, to registrants, of ignoring these rules.
This week’s resolutions are the second big batch of decisions ICANN has taken regarding the GAC’s Beijing communique.
Earlier this month, it accepted some of the GAC’s direct advice related to certain specific gTLDs it has a problem with, the RAA, and intergovernmental organizations, and pretended to accept other advice related to community objections.
The NGPC has yet to address the egregiously incompetent “Category 1” GAC advice, which was the subject of a public comment period.

ICM price cut sees 10 times more .xxx sales

ICM Registry saw 13,348 newly added .xxx domain name registrations in May, a period during which it and its registrars were offering the names at .com prices.
That’s more than 10 times the volume it shifted in January, the last month for which official numbers are available.
ICM dropped the registry fee for .xxx from $62 to $7.85 for the whole month, ostensibly (though not actually, I suspect) as part of its antitrust settlement with PornTube owner Manwin Licensing.
Registrants could register names for periods of up to 10 years at the promotional pricing, and many appear to have taken advantage.
The number of add-years for May was 25,733, according to ICM, an average of 1.9 years per name. That’s up from its January average of 1.37; the .com average at the time was 1.24.
About a quarter of the newly added names had been previously registered at full price and later allowed to drop.
The .xxx namespace now holds over 122,000 domains, still off its December 2012 peak of 142,000, according to the company.
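For anyone wanting to sanity-check the add-years figures, the arithmetic is just total registration-years divided by the number of adds, using ICM’s own reported numbers:

```python
# ICM's reported May figures for .xxx.
adds = 13_348       # newly added domain names
add_years = 25_733  # total registration-years purchased

avg = add_years / adds
print(round(avg, 2))  # rounds to ICM's quoted 1.9 average
```

The higher the average, the more the promotion pulled in multi-year (and therefore stickier) registrations rather than one-year speculative punts.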

Porn.com owner buys porn.xxx

Kevin Murphy, May 22, 2013, Domain Sales

PimpRoll, a pornography publisher and owner of porn.com, has bought the domain name porn.xxx from registry manager ICM Registry, it has just been announced.
The domain is already live. The site appears to be distinct from porn.com, but PimpRoll said it plans to build another “tube” site there.
The price of the domain was not disclosed, but PimpRoll is known to have paid $9.5 million for its .com address.
I’d guess we’re talking about low six figures for the .xxx, which was reserved by ICM as a “premium” name.
ICM said in a press release that the buyer will also automatically qualify for porn.sex, porn.porn and porn.adult under ICM’s Grandfathering Program, should it be awarded those gTLDs by ICANN.

New gTLDs applicants should brace for GAC delays

Kevin Murphy, May 12, 2013, Domain Policy

New gTLD applicants affected by Governmental Advisory Committee advice may be about to find that their launch runway is quite a bit longer than they hoped.
That’s the message that seems to be coming through subtly from ICANN and the GAC itself — via last week’s applicant update webinar and GAC chair Heather Dryden — right now.
Dryden made it clear in an official ICANN interview, recorded early last week, that the GAC expects its Beijing communique to be “fully taken into account”, lest governments abandon ICANN altogether.
But at the same time she seemed to suggest that the rest of the community may have misunderstood the GAC’s intentions, due in part to the fact that its deliberations were held in private.
Here’s a slice of the interview with Brad White, ICANN’s media relations chief:

WHITE: Suppose the [ICANN] board in the end says “thank you very much for the advice, we’ve looked at it, but we’re moving on” and basically ignores a lot of that advice?
DRYDEN: I think it would be a very immediate reaction, questioning the value of participating in the Governmental Advisory Committee. If it is going to be the place for governments to come and raise their concern and influence the decision making that occurs at ICANN then we have to be able to demonstrate that the advice generated is fully taken into account or to the maximum extent appropriate taken in and in this way governments understand that the GAC is useful mechanism for them.

WHITE: What you seem to be saying is there is concern about whether or not some governments might pull out from that multi-stakeholder model?
DRYDEN: Right, right why would they come? How would they justify coming to the GAC meetings? Why would they support this model if in fact it’s there aren’t channels available to them and appropriate to their role and perspective as a government?

Under ICANN’s bylaws, the board of directors does not have to adopt GAC advice wholesale.
It is able to disagree with, and essentially overrule, the GAC, but only after they’ve tried “in good faith and in a timely and efficient manner, to find a mutually acceptable solution”.
The only time this has happened before was in February 2011, when discussions covered the final details of the new gTLD program and the imminent approval of the .xxx gTLD.
Then, the ICANN board and the GAC gathered in Brussels for two days of intense face-to-face discussions, which was followed by multiple “scorecard” drafts and follow-up talks.
It seems very likely that we’re going to see something similar for the Beijing advice, if for no other reason than the communique is vague enough that ICANN will need a lot of clarification before it acts.
So does this mean delay for new gTLD applicants? Probably.
Dryden, asked about the GAC’s agenda for the ICANN public meeting in Durban this July, said:

There may well also be aspects of safeguard advice that we would discuss further with the board or with the community or would need to, particularly the implementation aspects of some of the new safeguards that the GAC identified.

The “safeguard” advice is the large section of the Beijing communique that attempts to impose broad new obligations on over 500 new gTLDs in “regulated or professional sectors”.
Dryden appeared to acknowledge the criticism that much of the advice appears unworkable to many, saying:

The intent behind this was to provide a reminder or to reinforce the importance of preexisting obligations and the applicability of national laws and really not to impose new burdens on applicants or registrants.
However, there are measures proposed in that safeguard advice where there are real implementation questions and so we think this is a very good focus for discussions now in the community with the GAC and with the board around that particular aspect of the advice.

The safeguard advice is currently open for public comment. I outline some of the many implementation questions in this post.
White put to Dryden DI’s criticism that the communique was a “perplexing, frustrating mess” aimed at using the DNS to solve wider problems with the internet.
For example, the GAC appears to want to use ICANN contracts to introduce new ways to enforce copyrights and data security regulations, something perhaps better addressed by legislation.
She responded:

It’s really not intended to impose a new global regulatory regime. It is intended to be consistent with ICANN’s existing role and serve as a reminder to those that have applied of what is really involved with implementing if they are successful a string globally as well as really wanting to emphasize that some of those strings raise particular sensitivities for governments

So have we misunderstood the GAC’s intentions? That seems to be the message.
Watch the whole Dryden interview here:

Based on current evidence, I’d say that any applicant covered by the Beijing communique that still believes it has a chance of signing a contract before July is kidding itself.
The ICANN board’s new gTLD program committee met on Wednesday to discuss its response to the Beijing communique. The results of this meeting should be published in the next few days.
But there’s little doubt in my mind that ICANN doesn’t have enough time before Durban to pick through the advice, consult with the GAC, and come up with a mutually acceptable solution.
Quite apart from the complexity of and lack of detail in the GAC’s requests, there’s the simple matter of logistics.
Getting a representative quorum of GAC members in the same room as the ICANN board for a day or two at some point in the next 60 days would be challenging, based on past performance.
I think it’s much more likely that a day or two will be added to the Durban meeting (before its official start) to give the board and GAC the kind of time they need to thrash this stuff out.
ICANN’s latest program timetable, discussed during a webinar on Thursday night, extended the deadline for the ICANN board’s response to the GAC from the first week of June to the end of June.
On the call, program manager Christine Willett confirmed that this date assumes the board adopts all of the advice — it does not take into account so-called “bylaws consultations”.
While it seems clear that all 518 applications (or more) affected by the “safeguards” advice won’t be signing anything before Durban, it’s less clear whether the remaining applicants will feel an impact too.