New gTLDs are the new Y2K: .corp and .home are doomed and everything else is delayed

Kevin Murphy, August 6, 2013, Domain Registries

The proposed gTLDs .home and .corp create risks to the internet comparable to the Millennium Bug, which terrorized a burgeoning internet at the turn of the century, and should be rejected.
Meanwhile, every other gTLD that has been applied for in the current round could be delayed by months in order to mitigate the risks they pose to internet users.
These are the conclusions ICANN has drawn from Interisle Consulting’s independent study into the problems that could be caused when new gTLDs clash with widely-used internal naming systems.
The extensive study, which drew on 8TB of traffic data provided by 11 of the 13 DNS root servers, is 197 pages long and absolutely fascinating. It was published by ICANN today.
As Interisle CEO Lyman Chapin reported at the ICANN meeting in Durban a few weeks ago, the vast majority of TLDs that have been applied for in the current round already receive significant amounts of error traffic:

Of the 1,409 distinct applied-for TLD strings, 1,367 appeared at least once in the 2013 DITL [Day In the Life of the Internet] data with the string at the TLD position.

We’ve previously reported on the volume of queries new gTLDs get, such as the fact that .home gets half a billion hits a day and that 3% of all requests were for strings that have been applied for in the current round.
The extra value in Interisle’s report comes when it starts to figure out how many end points are making these requests, and how many second-level domains they’re looking for.
These are vitally important factors for assessing the scale of the risk of each TLD.
Again, .home and .corp appear to be the most dangerous.
Interisle capped the number of second-level domains it counted in the 2013 data at 100,000 per TLD per root server — 1,100,000 domains in total — and .home was the only TLD string to hit this cap.
Cisco Systems’ proposed .cisco TLD came close, failing to hit the cap in only one of the 11 root servers providing data, while .box and .iinet (both also used widely on home routers) hit the cap on at least one root server.
The lowest count of second-level domains of the 35 listed in the report came from .hsbc, the bank brand, but even that number was a not-inconsiderable 2,000.
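As a rough illustration of that counting method — my own sketch, not Interisle's code, assuming query names of the form label.label.tld — the snippet below tallies distinct second-level labels per queried TLD and stops adding new ones once a TLD hits the cap:

```python
from collections import defaultdict

CAP = 100_000  # Interisle's cap: 100,000 distinct SLDs per TLD per root server

def count_slds(qnames, cap=CAP):
    """Count distinct second-level labels per queried TLD, up to `cap` each."""
    slds = defaultdict(set)
    for qname in qnames:
        labels = qname.rstrip(".").lower().split(".")
        if len(labels) < 2:
            continue  # bare TLD query, no second-level label to count
        tld, sld = labels[-1], labels[-2]
        if len(slds[tld]) < cap:
            slds[tld].add(sld)
    return {tld: len(s) for tld, s in slds.items()}

# Made-up example data
print(count_slds(["bthomehub.home.", "nas.home.", "mail.corp.", "corp."]))
# {'home': 2, 'corp': 1}
```
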
Why are these requests being made?
Surprisingly, interactions between a security feature in Google’s own Chrome browser and common residential routers appear to be the biggest cause of queries for non-existent TLDs.
That issue, which impacts mainly .home, accounts for about 46% of the requests counted, according to the report.
In second place, with 15% of the queries, are requests for real domain names that appear to have had a non-existent TLD — again, usually .home — appended by a residential router or cable modem.
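To make that mechanism concrete, here is a minimal Python sketch — not taken from the report — simulating how a stub resolver with a DHCP-assigned "home" search suffix turns Chrome-style random single-label probes into root queries for a non-existent TLD; the probe format and the suffix are illustrative assumptions:

```python
import random
import string

SEARCH_SUFFIX = "home"  # suffix handed out by many residential routers via DHCP (illustrative)

def qualify(name, suffix=SEARCH_SUFFIX):
    """Mimic a stub resolver: append the search suffix to unqualified (dot-less) names."""
    return name if "." in name else f"{name}.{suffix}"

def chrome_style_probes(n=3):
    """Random single-label probes, similar in spirit to Chrome's NXDOMAIN-hijack detection."""
    return ["".join(random.choices(string.ascii_lowercase, k=10)) for _ in range(n)]

for label in chrome_style_probes():
    # Each probe leaks toward the public root as a query for a name ending in ".home"
    print(qualify(label))
```
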
Apparent typos — where a user enters a URL but forgets to type the TLD — were a relatively small percentage of requests, coming in at under 1% of queries.
The study also found that bad requests come from many thousands of sources. This table compares the number of requests to the number of sources.
[Table: query counts and /24 source prefix counts per applied-for TLD string]
The “Count” column is the number, in thousands, of requests for each TLD string. The “Prefix Count” column is the number of sources providing this traffic, counted by /24 IP address block (each of which covers up to 256 potential hosts).
As you can see, there’s not necessarily a correlation between the number of requests a TLD gets and the number of sources making them — .google gets queried by more sources than the others, but it’s only ranked 24th in terms of overall query volume, for example.
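For readers who want to reproduce the “Prefix Count” idea, this is a small sketch of my own — not Interisle’s methodology verbatim — that tallies distinct /24 source prefixes per queried TLD from simplified (source IP, query name) pairs; the sample data is made up:

```python
import ipaddress
from collections import defaultdict

def tld_of(qname):
    """Return the rightmost label of a query name, lower-cased."""
    return qname.rstrip(".").rsplit(".", 1)[-1].lower()

def prefix_counts(log):
    """Map each queried TLD to the number of distinct /24 source prefixes seen."""
    sources = defaultdict(set)
    for src_ip, qname in log:
        prefix = ipaddress.ip_network(f"{src_ip}/24", strict=False)
        sources[tld_of(qname)].add(prefix)
    return {tld: len(prefixes) for tld, prefixes in sources.items()}

# Made-up sample: two /24s querying .home, one querying .google
sample = [
    ("198.51.100.10", "router.home."),
    ("198.51.100.99", "nas.home."),     # same /24 as the line above
    ("203.0.113.5", "printer.home."),
    ("192.0.2.7", "www.google."),
]
print(prefix_counts(sample))  # {'home': 2, 'google': 1}
```
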
Interisle concluded from all this that .corp and .home are simply too dangerous to delegate, comparing the problem to the year 2000 bug, when a global effort was required to make sure software could handle four-digit years before the turn of the century.
Here’s what the report says about .corp:

users could be taken to the wrong web site (and possibly be exposed to phishing attacks) or told that web sites do not exist when they do, depending on how the .corp TLD is resolved. A corporate mail system might attempt to deliver email to the wrong server, and this could expose sensitive or confidential information to someone who was not supposed to receive it. In essence, everything deployed in the private network would need to be checked.
There are no easy solutions to these problems. In an ideal world, the operators of these private networks would get a timely notification of the new TLD’s delegation and then take action to address these issues. That seems very improbable. Even if ICANN generated sufficient publicity about the new TLD’s delegation, there is no guarantee that this will come to the attention of the management or operators of the private networks that could be jeopardized by the delegation.

It seems reasonable to estimate that the amount of effort involved might be comparable to a wholesale renumbering of the internal network or the Y2K problem.

It notes that applied-for TLDs such as .site, .office, .group and .inc appear to be used in similar ways to .home and .corp, but do not appear to present as broad a risk.
To be clear, the risk we’re talking about here isn’t just people typing the wrong things into browsers, it’s about the infrastructure on many thousands of private networks starting to make the wrong security assumptions about domain names.
ICANN, in response, has outlined a series of measures sure to infuriate many gTLD applicants, but which are consistent with its goal to protect the security and stability of the internet.
They’re also consistent with some of the recommendations put forward by Verisign over the last few months in its campaign to show that new gTLDs pose huge risks.
First, .corp and .home are dead. These two strings have been categorized “high risk” by ICANN, which said:

Given the risk level presented by these strings, ICANN proposes not to delegate either one until such time that an applicant can demonstrate that its proposed string should be classified as low risk

Given the Y2K-scale effort required to mitigate the risks, and the fact that the eventual pay-off wouldn’t compensate for the work, I feel fairly confident in saying the two strings will never be delegated.
Another 80% of the applied-for strings have been categorized “low risk”. ICANN has published a spreadsheet explaining which string falls into which category. Low risk does not mean they get off scot-free, however.
First, registries for low-risk strings will not be allowed to activate any domain names in their gTLDs until 120 days after contract signing.
Second, for 30 days after a gTLD is delegated the new registries will have to reach out to the owners of each IP address that attempts to query names in that gTLD, to try to mitigate the risk of internal name collisions.
This, as applicants will no doubt quickly argue, is going to place them under a massive cost burden.
But their outlook is considerably brighter than that of the remaining 20% of applications, which are categorized as “uncalculated risk” and face a further three to six months of delay while ICANN conducts further studies into whether they’re each “high” or “low” risk strings.
In other words, the new gTLD program is about to see its biggest shake-up since the GAC delivered its Advice in Beijing, potentially adding millions of dollars in costs, and months of delay, for applicants.
ICANN’s proposed mitigation efforts are now open for public comment.
One has to wonder why the hell ICANN didn’t do this study two years ago.

NTIA alarmed as Verisign hints that it will not delegate new gTLDs

Kevin Murphy, August 5, 2013, Domain Tech

Verisign has escalated its war against competition by telling its government masters that it is not ready to add new gTLDs to the DNS root, raising eyebrows at NTIA.
The company told the US National Telecommunications and Information Administration in late May that the lack of uniform monitoring across the 13 root servers means it would put internet security and stability at risk to start delegating new gTLDs now.
In response, the NTIA told Verisign that its recent position on DNS security is “troubling”. It demanded confirmation that Verisign is not planning to block new gTLDs from being delegated.
The letters (pdf and pdf) were published by ICANN over the weekend, over two months after the first was sent.
Verisign senior VP Pat Kane wrote in the May letter:

we strongly believe certain issues have not been addressed and must be addressed before any root zone managers, including Verisign, are ready to implement the new gTLD Program.
We want to be clearly on record as reporting out this critical information to NTIA unequivocally as we believe a complete assessment of the critical issues remain unaddressed which left unremediated could jeopardize the security and stability of the DNS.

we strongly recommend that the previous advice related to this topic be implemented and the capability for root server system monitoring, instrumentation, and management capabilities be developed and operationalized prior to beginning delegations.

Kane’s concerns were first outlined by Verisign in its March 2013 open letter to ICANN, which also expressed serious worries about issues such as internal name collisions.
Verisign is so far the only root server operator to publicly express concerns about the lack of coordinated monitoring, and many people believe that the company is simply desperately trying to delay competition for its $800 million .com business for as long as possible.
These people note that in early November 2012, Verisign signed a joint letter with ICANN and NTIA that said:

the Root Zone Partners are able to process at least 100 new TLDs per week and will commit the necessary resources to meet all root zone management volume increases associated with the new gTLD program

That letter was signed before NTIA stripped Verisign of its right to increase .com prices every year, depriving it of tens or hundreds of millions of dollars of additional revenue.
Some say that Verisign is raising spurious security concerns now purely because it’s worried about its bottom line.
NTIA is beginning to sound like one of these critics. In its response to the May 30 letter, sent by NTIA and published by ICANN on Saturday, deputy associate administrator Vernita Harris wrote:

NTIA and VeriSign have historically had a strong working relationship, but inconsistencies in VeriSign’s position in recent months are troubling… NTIA fully expects VeriSign to process change requests when it receives an authorization to delegate a new gTLD. So that there will be no doubt on this point, please provide me a written confirmation no later than August 16, 2013 that VeriSign will process change requests for the new gTLD program when authorized to delegate a new gTLD.

Harris said that a system is already in place that would allow the emergency rollback of the root zone, basically ‘un-delegating’ any gTLD that proves to cause a security or stability problem.
This would be “sufficient for the delegation of new gTLDs”, she wrote.
Could Verisign block new gTLDs?
It’s worth a reminder at this point that ICANN’s power over the DNS root is something of a facade.
Verisign, as operator of the master A root server, holds the technical keys to the kingdom. Under its NTIA contract, it only processes changes to the root — such as adding a TLD — when NTIA tells it to.
NTIA in practice merely passes on the recommendations of IANA, the department within ICANN that has the power to ask for changes to the root zone, also under contract with NTIA.
Verisign or NTIA in theory could refuse to delegate new gTLDs — recall that when .xxx was heading to the root the European Union asked NTIA to delay the delegation.
In practice, it seems unlikely that either party would stand in the way of new gTLDs at the root, but the Verisign rhetoric in recent months suggests that it is in no mood to play nicely.
To refuse to delegate gTLDs out of commercial self-interest would be seen as irresponsible, however, and would likely put Verisign’s role as custodian of the root at risk.
That said, if Verisign turns out to be the lone voice of sanity when it comes to DNS security, it is ICANN and NTIA that will ultimately look like they’re the irresponsible parties.
What’s next?
Verisign now has until August 16 to confirm that it will not make trouble. I expect it to do so under protest.
According to the NTIA, ICANN’s Root Server System Advisory Committee is currently working on two documents — RSSAC001 and RSSAC002 — that will outline “the parameters of the basis of an early warning system” that will address Verisign’s concerns about root server management.
These documents are likely to be published within weeks, according to the NTIA letter.
Meanwhile, we’re also waiting for the publication of Interisle Consulting’s independent report into the internal name collision issue, which is expected to recommend that gTLDs such as .corp and .home be put on hold. I’m expecting this to be published any day now.

“Risky” gTLDs could be sacrificed to avoid delay

Kevin Murphy, July 20, 2013, Domain Tech

Google and other members of the New gTLD Applicant Group are happy to let ICANN put their applications on hold in response to security concerns raised by Verisign.
During the ICANN 47 Public Forum in Durban on Thursday, NTAG’s Alex Stamos — CTO of .secure applicant Artemis — said that agreement had been reached that about half a dozen applications could be delayed:

NTAG has consensus that we are willing to allow these small numbers of TLDs that have a significant real risk to be delayed until technical implementations can be put in place. There’s going to be no objection from the NTAG on that.

While he didn’t name the strings, he was referring to gTLDs such as .home and .corp, which were highlighted earlier in the week as having large amounts of error traffic at the DNS root.
There’s a worry, expressed by Verisign in April and again by independent consultant Interisle this week, that collisions between new gTLDs and widely-used internal network names will lead to data leakage and other security problems.
Google’s Jordyn Buchanan also took the mic at the Public Forum to say that Google will gladly put its uncontested application for .ads — which Interisle says gets over 5 million root queries a day — on hold until any security problems are mitigated.
Two members of the board described Stamos’ proposal as “reasonable”.
Both Stamos and ICANN CEO Fadi Chehade indirectly criticised Verisign for the PR campaign it has recently built around its new gTLD security concerns, which has led to somewhat one-sided articles in the tech press and mainstream media such as the Washington Post.
Stamos said:

What we do object to is the use of the risk posed by a small, tiny, tiny fraction — my personal guess would be six, seven, eight possible name spaces that have any real impact — to then tar the entire project with a big brush. For contracted parties to go out to the Washington Post and plant stories about the 911 system not working because new TLDs are turned on is completely irresponsible and is clearly not about fixing the internet but is about undermining the internet and undermining new gTLDs.

Later, in response to comments on the same topic from the Association of National Advertisers, which suggested that emergency services could fail if new gTLDs go live, Chehade said:

Creating an unnecessary alarm is equally irresponsible… as publicly responsible members of one community, let’s measure how much alarm we raise. And in the trademark case, with all due respect it ended up, frankly, not looking good for anyone at the end.

That’s a reference to the ANA’s original campaign against new gTLDs, which wound up producing not much more than a lot of column inches about an utterly pointless Congressional hearing in late 2011.
Chehade and the ANA representative this time agreed publicly to work together on better terms.

.home gets half a billion hits a day. Could this put new gTLDs at risk?

Kevin Murphy, July 17, 2013, Domain Tech

New gTLDs could be in jeopardy following the results of a study into the security risks they may pose.
ICANN is likely to be told to put in place measures to mitigate the risk of new gTLDs causing problems, and chief security officer Jeff Moss said “deadlines will have to move” if global DNS resolution is put at risk.
His comments referred to the potential for clashes between applied-for new gTLD strings and non-existent TLDs that are nevertheless already widely used on internal networks.
That’s a problem that has been increasingly highlighted by Verisign in recent months. The difference here is that the study’s author does not have a .com monopoly to protect.
Interisle Consulting, which has been hired by ICANN to look into the problem, today released some of its preliminary findings during a session at the ICANN 47 meeting in Durban, South Africa.
The company looked at domain name look-up data collected from one of the DNS root servers over a 48-hour period, in an attempt to measure the potential scope of the clash problem.
Some of its findings are surprising:

  • Of the 1,408 strings originally applied for in the current new gTLD round, only 14 do not currently have any root traffic.
  • Three percent of all requests were for strings that have been applied for in the current round.
  • A further 19% of requests were for strings that could potentially be applied for in future rounds (that is, the TLD was syntactically well-formed and not a banned string such as .local); a rough validity check of this kind is sketched after this list.
  • .home, the most frequently requested invalid TLD, received over a billion queries over the 48-hour period. That’s compared to 8.5 billion for .com.
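
The report doesn’t publish its exact well-formedness test; the sketch below is a rough approximation of my own, based on the usual letters-digits-hyphens rules, the three-character minimum for ASCII gTLDs and a handful of reserved names such as .local:

```python
import re

# Reserved/special-use strings that could never be delegated (illustrative, not exhaustive)
BANNED = {"local", "localhost", "example", "invalid", "test"}

# Rough LDH rule: starts with a letter, letters/digits/hyphens inside, no trailing hyphen, max 63 chars
TLD_RE = re.compile(r"^[a-z](?:[a-z0-9-]{0,61}[a-z0-9])?$")

def could_be_future_tld(s):
    s = s.lower()
    # Two-letter ASCII strings are reserved for country codes, so require three characters or more
    return len(s) >= 3 and bool(TLD_RE.match(s)) and s not in BANNED

for candidate in ["home", "local", "corp", "big-bank", "123", "tv"]:
    print(candidate, could_be_future_tld(candidate))
# home True, local False, corp True, big-bank True, 123 False, tv False
```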

Here’s a list of the top 17 invalid TLDs by traffic, taken from Interisle’s presentation (pdf) today.
[Chart: top 17 invalid TLDs by query volume]
If the list had been of the top 100 requested TLDs, 13 of them would have been strings that have been applied for in the current round, Interisle CEO Lyman Chapin said in the session.
Here are the most-queried applied-for strings:
[Chart: most-queried applied-for gTLD strings]
Chapin was quick to point out that big numbers do not necessarily equate to big security problems.
“Just occurrence doesn’t tell you a lot about whether that’s a good thing, a bad thing, a neutral thing, it just tells you how often the string appears,” he said.
“An event that occurs very frequently but has no negative side effects is one thing, an event that occurs very infrequently but has a really serious side effect, like a meteor strike — it’s always a product of those two factors that leads you to an assessment of risk,” he said.
For example, the reason .ice appears prominently on the list appears to be solely due to an electricity producer in Costa Rica, which “for some reason is blasting .ice requests out to the root”, Chapin said.
If the bad requests are only coming from a small number of sources, that’s a relatively simple problem to sort out — you just call up the guy responsible and tell him to sort out his network.
In cases like .home, where much of the traffic is believed to be coming from millions of residential DSL routers, that’s a much trickier problem.
The reverse is also true, however: a small number of requests doesn’t necessarily mean a low-impact risk.
There may be a relatively small number of requests for .hospital, for example, but if the impact is even a single life support machine blinking off… probably best not to delegate that gTLD.
Chapin said that the full report, which ICANN said could be published in about two weeks, does contain data on the number of sources of requests for each invalid TLD. Today’s presentation did not, however.
As well as the source of the requests, the second-level domains being requested are also an important factor, but this does not seem to have been addressed by the study.
For example, .home may be getting half a billion requests a day, but if all of those requests are for bthomehub.home — used today by the British ISP BT in its residential routers — the .home registry might be able to eliminate the risk of data leakage by simply giving BT that domain.
Likewise, while .hsbc appears on the list it’s actually been applied for by HSBC as a single-registrant gTLD, so the risk of delegating it to the DNS root may be minimal.
There was no data on second-level domains in today’s presentation and it does not appear that the full Interisle report contains it either. More study may be needed.
Donuts CEO Paul Stahura also took to the mic to ask Chapin whether he’d compared the invalid TLD requests to requests for invalid second-level domains in, say, .com. He had not.
One of Stahura’s arguments, expounded at length in the comment thread on this DI blog post, is that delegating TLDs with existing traffic is little different to allowing people to register .com domains with existing traffic.
So what are Interisle’s recommendations likely to be?
Judging by today’s presentation, the company is going to present a list of risk-mitigation options that are pretty similar to what Verisign has previously recommended.
For example, some strings could be permanently banned, or there could be a “trial run” — what Verisign called an “ephemeral delegation” — for each new gTLD to test for impact before full delegation.
It seems to me that if the second-level request data was available, more mitigation options would be opened up.
ICANN chief security officer Jeff Moss, who was on today’s panel, was asked what he would recommend to ICANN CEO Fadi Chehade today in light of the report’s conclusions.
“I am not going to recommend we do anything that has any substantial SSR impact,” said Moss. “If we find any show-stoppers, if we find anything that suggests impact for global DNS, we won’t do it. It’s not worth the risk.”
Without prompting, he addressed the risk of delay to the new gTLD program.
“People sometimes get hung up on the deadline, ‘How will you know before the deadline?’,” he said. “Well, deadlines can move. If there’s something we find that is a show-stopper, deadlines will have to move.”
The full report, expected to be published in two weeks, will be opened for public comment, ICANN confirmed.
Assuming the report is published on time and has a 30-day comment period, that brings us up to the beginning of September, coincidentally the same time ICANN expects the first new gTLD to be delegated.
ICANN certainly likes to play things close to the wire.

Report names and shames most-abused TLDs

Kevin Murphy, July 11, 2013, Domain Services

Newish gTLDs .tel and .xxx are among the most secure top-level domains, while .cn and .pw are the most risky.
That’s according to new gTLD services provider Architelos, which today published a report analyzing the prevalence of abuse in each TLD.
Assigning an “abuse per million domains” score to each TLD, the company found .tel the safest with 0 and .cn the riskiest, with a score of 30,406.
Recently relaunched .pw, which has had serious problems with spammers, came in just behind .cn, with a score of 30,151.
Generally, the results seem to confirm that the more tightly controlled the registration process and the more expensive the domain, the less likely it is to see abuse.
Norway’s .no and ICM Registry’s .xxx scored 17 and 27, for example.
Surprisingly, the free ccTLD for Tokelau, .tk, which is now the second-largest TLD in the world, had only 224 abusive domains per million under management, according to the report.
Today’s report ranked TLDs with over 100,000 names under management. Over 90% of the abusive domains used to calculate the scores were related to spam, rather than anything more nefarious.
The data was compiled from Architelos’ NameSentry service, which aggregates abusive URLs from numerous third-party sources and tallies up the number of times each TLD appears.
The methodology is very similar to the one DI PRO uses in TLD Health Check, but Architelos uses more data sources. NameSentry is also designed to automate the remediation workflow for registries.
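Architelos hasn’t published its exact formula, but the “abuse per million domains” metric described above is simple to reproduce in outline: tally abusive domains per TLD from the aggregated feeds and normalize by domains under management. A minimal sketch, with made-up inputs:

```python
def abuse_per_million(abusive_domains, domains_under_management):
    """Abusive domains per million domains under management."""
    return abusive_domains / domains_under_management * 1_000_000

# Made-up inputs, roughly shaped like the scores quoted above
examples = {
    ".tel": (0, 300_000),
    ".xxx": (3, 110_000),
    ".pw": (7_500, 250_000),
}
for tld, (abusive, total) in examples.items():
    print(f"{tld}: {abuse_per_million(abusive, total):,.0f}")
# .tel: 0   .xxx: 27   .pw: 30,000
```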

Artemis signs 30 anchor tenants for .secure gTLD

Artemis, the NCC Group subsidiary applying for .secure, says it has signed up 30 big-name customers for its expensive, high-security new gTLD offering.
CTO Alex Stamos said that the list includes three “too big to fail” banks and three of the four largest social networking companies. They’ve all signed letters of intent to use .secure domains, he said.
He was speaking at a small gathering of customers and potential customers in London yesterday, to which DI was invited on the condition that we not report the name of anyone else in attendance.
Artemis is doing this outreach despite the fact that a) .secure is still in a two-way contention set and b) deep-pocketed online retailer Amazon is the other applicant.
Stamos told DI he’s confident that Artemis will win .secure one way or the other — hopefully Amazon’s single-registrant bid will run afoul of ICANN’s current rethink of “closed generics”.
He expects to launch .secure in the second or third quarter of next year with a few dozen registrants live from pretty much the start.
The London event yesterday, which was also attended by executives from a few household names, was the second of three the company has planned. New York was the first and there’ll soon be one in California.
I’ve been hearing so many stories recently about new gTLD applicants that still haven’t figured out their go-to-market strategies that it was refreshing to see one that seems to be on the ball.
Artemis’ vision for .secure is also probably the most technologically innovative of any proposed gTLD I’m currently aware of.
As the name suggests, security is the order of the day. Registrants would be vetted during the lengthy registration process and the domain names themselves would be manually approved.
Not only will there not be any typosquatting, but there’s even talk of registering common typos on behalf of registrants.
Registrants would also be expected to adhere to levels of security on their web sites (mandatory HTTPS, for example) and email systems (mandatory TLS). Domains would be scanned daily for malware and would have manual penetration testing at least annually.
Emerging security standards would be deployed to make sure that browsers would only trust SSL certificates provided by Artemis (or, more likely, its CA partner) when handling connections to .secure sites.
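The article doesn’t say which standards Artemis has in mind — DANE and public-key pinning were the obvious candidates at the time — so purely as an illustration of the pinning idea, the sketch below fetches a server’s certificate and compares its SHA-256 fingerprint against a pinned value; the host and fingerprint are placeholders, not anything Artemis has published:

```python
import hashlib
import ssl

def cert_fingerprint(host, port=443):
    """SHA-256 fingerprint of the certificate presented by a server."""
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

# Placeholder pin for a hypothetical .secure site; a real deployment would ship the expected value
PINNED_FINGERPRINT = "0" * 64

def connection_is_trusted(host):
    """Trust the connection only if the presented certificate matches the pinned fingerprint."""
    return cert_fingerprint(host) == PINNED_FINGERPRINT

print(connection_is_trusted("example.com"))  # False with the placeholder pin
```
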
Many of the policies are still being worked out, sometimes in conversation with an emerging “community” of the aforementioned anchor tenants, but there’s one thing that’s pretty clear:
This is not a domain name play.
If you buy a .secure domain name, you’re really buying an NCC managed security service that allows you to use a domain name, as opposed to an easily-copied image, as your “trust mark”.
Success for .secure, if it goes live as planned, won’t be measured in registration volume. I wouldn’t expect it to be much bigger than .museum, the tiniest TLD today, within its first few years.
Prices for .secure have not yet been disclosed, but I’m expecting them to be measured in the tens of thousands of dollars. If “a domain” costs $50,000 a year, don’t be surprised.
Artemis’ .secure would however be available to any enterprise that can afford it and can pass its stringent security tests, which makes it more “open” than Amazon’s vaguely worded closed generic bid.
Other ICANN accredited registrars will technically be allowed to sell .secure domains, but the Registry-Registrar Agreement will be written in such a way as to make it economically non-viable for them to do so.
Overall, the company has a bold strategy with some significant challenges.
I wonder how enthusiastic enterprises will be about using .secure if their customers start to assume that their regular domain name (which may even be a dot-brand) is implicitly insecure.
Artemis is also planning to expose some information about how well its registrants are complying with their security obligations to end users, which may make some potential registrants nervous.
Even without this exposure, simply complying appears to be quite a resource-intensive ongoing process and not for the faint-hearted.
However, that’s in keeping with the fact that it’s a managed security service — companies buy these things in order to help secure their systems, not cover up problems.
Stamos also said that its eligibility guidelines are being crafted with its customers in such a way that registrants will only ever be kicked out of .secure if they’re genuinely bad actors.
Artemis’ .secure is a completely new concept for the gTLD industry, and I wouldn’t like to predict whether it will work or not, but the company seems to be going about its pre-sales marketing and outreach in entirely the correct way.

ICANN offers to split the cost of GAC “safeguards” with new gTLD registries

Kevin Murphy, June 28, 2013, Domain Policy

All new gTLD applicants will have to abide by stricter rules on security and Whois accuracy under government-mandated changes to their contracts approved by the ICANN board.
At least one of the new obligations is likely to saddle new gTLD registries with additional ongoing costs. In another case, ICANN appears ready to shoulder the financial burden instead.
The changes are coming as a result of ICANN’s New gTLD Program Committee, which on Tuesday voted to adopt six more pieces of the Governmental Advisory Committee’s advice from March.
This chunk of advice, which deals exclusively with security-related issues, was found in the GAC’s Beijing communique (pdf) under the heading “Safeguards Applicable to all New gTLDs”.
Here’s what ICANN has decided to do about it.
Mandatory Whois checks
The GAC wanted all registries to conduct mandatory checks of Whois data at least twice a year, notifying registrars about any “inaccurate or incomplete records” found.
Many new gTLD applicants already offered to do something similar in their applications.
But ICANN, in response to the GAC advice, has volunteered to do these checks itself. The NGPC said:

ICANN is concluding its development of a WHOIS tool that gives it the ability to check false, incomplete or inaccurate WHOIS data

Given these ongoing activities, ICANN (instead of Registry Operators) is well positioned to implement the GAC’s advice that checks identifying registrations in a gTLD with deliberately false, inaccurate or incomplete WHOIS data be conducted at least twice a year. To achieve this, ICANN will perform a periodic sampling of WHOIS data across registries in an effort to identify potentially inaccurate records.

While the resolution is light on detail, it appears that new gTLD registries may well be taken out of the loop completely, with ICANN notifying their registrars instead about inaccurate Whois records.
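The resolution doesn’t say how the sampling will work; as a loose illustration only — the field names and checks are my assumptions, not ICANN’s tool — one could randomly sample records and flag those with missing or malformed contact data:

```python
import random
import re

REQUIRED_FIELDS = ("registrant_name", "registrant_email", "registrant_phone")  # assumed field names
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def looks_inaccurate(record):
    """Flag records with missing required fields or a malformed email address."""
    if any(not record.get(field, "").strip() for field in REQUIRED_FIELDS):
        return True
    return not EMAIL_RE.match(record["registrant_email"])

def sample_and_flag(records, sample_size=100):
    """Randomly sample Whois records and return the ones that look inaccurate."""
    sample = random.sample(records, min(sample_size, len(records)))
    return [r for r in sample if looks_inaccurate(r)]

# Made-up records
records = [
    {"registrant_name": "Jane Doe", "registrant_email": "jane@example.com", "registrant_phone": "+44.2070000000"},
    {"registrant_name": "", "registrant_email": "not-an-email", "registrant_phone": ""},
]
print(sample_and_flag(records))  # flags only the second record
```
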
It’s not the first time ICANN has offered to shoulder potentially costly burdens that would otherwise encumber registry operators. It doesn’t get nearly enough credit from new gTLD applicants for this.
Contractually banning abuse
The GAC wanted new gTLD registrants contractually forbidden from doing bad stuff like phishing, pharming, operating botnets, distributing malware and from infringing intellectual property rights.
These obligations should be passed to the registrants by the registries via their contracts with registrars, the GAC said.
ICANN’s NGPC has agreed with this bit of advice entirely. The base new gTLD Registry Agreement is therefore going to be amended to include a new mandatory Public Interest Commitment reading:

Registry Operator will include a provision in its Registry-Registrar Agreement that requires Registrars to include in their Registration Agreements a provision prohibiting Registered Name Holders from distributing malware, abusively operating botnets, phishing, piracy, trademark or copyright infringement, fraudulent or deceptive practices, counterfeiting or otherwise engaging in activity contrary to applicable law, and providing (consistent with applicable law and any related procedures) consequences for such activities including suspension of the domain name.

The decision to include it as a Public Interest Commitment, rather than building it into the contract proper, is noteworthy.
PICs will be subject to a Public Interest Commitment Dispute Resolution Process (PICDRP) which allows basically anyone to file a complaint about a registry suspected of breaking its commitments.
ICANN would act as the enforcer of the ruling, rather than the complainant. Registries that lose PICDRP cases face consequences up to and including the termination of their contracts.
In theory, by including the GAC’s advice as a PIC, ICANN is handing a loaded gun to anyone who might want to shoot down a new gTLD registry in future.
However, the proposed PIC language seems to be worded in such a way that the registry would only have to include the anti-abuse provisions in its contract in order to be in compliance.
Right now, the way the PIC is worded, I can’t see a registry getting terminated or otherwise sanctioned due to a dispute about an instance of copyright infringement by a registrant, for example.
I don’t think there’s much else to get excited about here. Every registry or registrar worth a damn already prohibits its customers from doing bad stuff, if only to cover their own asses legally and keep their networks clean; ICANN merely wants to formalize these provisions in its chain of contracts.
Actually fighting abuse
The third through sixth pieces of GAC advice approved by ICANN this week are the ones that will almost certainly add to the cost of running a new gTLD registry.
The GAC wants registries to “periodically conduct a technical analysis to assess whether domains in its gTLD are being used to perpetrate security threats such as pharming, phishing, malware, and botnets.”
It also wants registries to keep records of what they find in these analyses, to maintain a complaints mechanism, and to shut down any domains found to be perpetrating abusive behavior.
ICANN has again gone the route of adding a new mandatory PIC to the base Registry Agreement. It reads:

Registry Operator will periodically conduct a technical analysis to assess whether domains in the TLD are being used to perpetrate security threats, such as pharming, phishing, malware, and botnets. Registry Operator will maintain statistical reports on the number of security threats identified and the actions taken as a result of the periodic security checks. Registry Operator will maintain these reports for the term of the Agreement unless a shorter period is required by law or approved by ICANN, and will provide them to ICANN upon request.

You’ll notice that the language is purposefully vague on how registries should carry out these checks.
ICANN said it will convene a task force or GNSO policy development process to figure out the precise details, enabling new gTLD applicants to enter into contracts as soon as possible.
It means, of course, that applicants could wind up signing contracts without being fully apprised of the cost implications. Fighting abuse costs money.
There are dozens of ways to scan TLDs for abusive behavior, but the most comprehensive ones are commercial services.
ICM Registry, for example, decided to pay Intel/McAfee millions of dollars — a dollar or two per domain, I believe — for it to run daily malware scans of the entire .xxx zone.
More recently, Directi’s .PW Registry chose to sign up to Architelos’ NameSentry service to monitor abuse in its newly relaunched ccTLD.
There’s going to be a fight about the implementation details, but one way or the other the PIC would make registries scan their zones for abuse.
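Whatever the implementation ends up looking like, the basic shape of such a scan is straightforward. Here’s a rough sketch of my own — using a hypothetical local copy of an abuse feed, not any particular commercial service — that checks a zone’s domains against known-bad entries and keeps the kind of statistics the PIC asks registries to retain:

```python
from collections import Counter

def scan_zone(zone_domains, abuse_feed, threat_types):
    """Check each domain in the zone against an abuse feed and tally threats by type.

    `abuse_feed` maps known-bad domains to a threat type (phishing, malware, botnet, pharming);
    in practice both inputs would come from the zone file and a third-party feed.
    """
    stats = Counter()
    flagged = []
    for domain in zone_domains:
        threat = abuse_feed.get(domain.lower())
        if threat in threat_types:
            stats[threat] += 1
            flagged.append((domain, threat))
    return stats, flagged

# Made-up zone and feed data
zone = ["good.example", "bad1.example", "bad2.example"]
feed = {"bad1.example": "phishing", "bad2.example": "malware"}
stats, flagged = scan_zone(zone, feed, {"phishing", "malware", "botnet", "pharming"})
print(stats)    # Counter({'phishing': 1, 'malware': 1})
print(flagged)  # [('bad1.example', 'phishing'), ('bad2.example', 'malware')]
```
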
What the PIC does not state, and where it may face queries from the GAC as a result, is what registries must do when they find abusive behavior in their gTLDs. There’s no mention of mandatory domain name suspension, for example.
But in an annex to Tuesday’s resolution, ICANN’s NGPC said the “consequences” part of the GAC advice would be addressed as part of the same future technical implementation discussions.
In summary, the NGPC wants registries to be contractually obliged to contractually oblige their registrars to contractually oblige their registrants to not do bad stuff, but there are not yet any obligations relating to the consequences, to registrants, of ignoring these rules.
This week’s resolutions are the second big batch of decisions ICANN has taken regarding the GAC’s Beijing communique.
Earlier this month, it accepted some of the GAC’s direct advice related to certain specific gTLDs it has a problem with, the RAA and intergovernmental organizations, and pretended to accept other advice related to community objections.
The NGPC has yet to address the egregiously incompetent “Category 1” GAC advice, which was the subject of a public comment period.

Verisign steps up anti-gTLD campaign with attack on ICANN’s war chest

Verisign wants ICANN to publish a list of all the reasons it might be sued over the new gTLD program, claiming security and stability risks might be one of them.
In the latest salvo fired in its war against new gTLDs, the company now suggests that the $115 million “risk fund” surplus that ICANN has accumulated is for fending off lawsuits when it breaks the internet.
In a letter (pdf) sent Friday, Verisign asks ICANN to justify the existence of this war chest in light of the fact that it has managed to secure legal indemnities from pretty much everyone involved in the program.
It attempts to link the risk fund to the possible security risks of introducing new gTLDs to the internet, which Verisign has been haranguing ICANN about for the last few months.
“We believe ICANN should be forthcoming about the risks it is shifting and the need for the substantial risk reserve fund, in particular,” the letter, signed by general counsel Richard Goshorn, says.
It’s been well known for a few years that $60,000 of each $185,000 new gTLD application fee was to be allocated to a risk fund created to cover unexpected extra program costs.
The reserve was designed to cover things like underestimating the costs or time needed to evaluate applications, but also, crucially, the lawsuits that ICANN expected but has not yet received.
The cash pile is often referred to, usually with black humor, as the “legal defense fund”.
Now Verisign seems to be saying that the legal risks are not limited to trademark disputes or the usual antitrust nonsense, but to the security risks ICANN is “transferring” to others.
As we’ve been reporting for the last few months, Verisign has suddenly decided that new gTLDs pose a risk to the internet, largely due to the potential for clashes between newly delegated strings and the unofficial domains that many organizations already use on their intranets.
For a great discussion on the merits of this argument check out this DI article and comment thread.
With the latest letter, Verisign suggests that ICANN knows it might be sued for messing up corporate intranets, but is keeping that fact quiet.
Referring to a report it issued in March, when its security concerns first emerged, it says:

We believe that ICANN may have established and be maintaining the Risk Reserve in such a high amount in anticipation of significant claims relating to one or more risks identified in the Verisign Report.

If ICANN does get sued on these grounds, the defense cost will effectively have been covered by new gTLD applicants (and therefore their customers, assuming the costs are passed on), Verisign says.
It’s therefore asking for ICANN to disclose the reasons why its risk fund is so big, “in particular, the details regarding what ‘possible litigation’ factored into ICANN’s decisions”.
In other words, Verisign is asking ICANN to publish a list of reasons people might sue it, something I can’t imagine its general counsel agreeing to any time soon.
Is this an effort to shame ICANN into taking its security concerns more seriously, or just more FUD designed to disrupt the new gTLD program and protect its .com dominance?
Opinions, no doubt, will be split.

Verisign says people might die if new gTLDs are delegated

Kevin Murphy, June 2, 2013, Domain Policy

If there was any doubt in your mind that Verisign is trying to delay the launch of new gTLDs, its latest letter to ICANN, responding to the Governmental Advisory Committee’s advice, should settle it.
The company has ramped up its anti-expansion rhetoric, calling on the GAC to support its view that launching new gTLDs now will put the security and stability of the internet at risk.
People might die if some strings are delegated, Verisign says.
Among other things, Verisign is now asking for:

  • Each new gTLD to be individually vetted for its possible security impact, with particular reference to TLDs that clash with widely-used internal network domains (eg, .corp).
  • A procedure put in place to throttle the addition of new gTLDs, should a security problem arise.
  • A trial period for each string ICANN adds to the root, so that new gTLDs can be tested for security impact before launching properly.
  • A new process for removing delegated gTLDs from the root if they cause problems.

In short, the company is asking for much more than it has to date — and much more that is likely to send its rivals into a frenzy — in its ongoing security-based campaign against new gTLDs.
The demands came in Verisign’s response to the GAC’s Beijing communique, which detailed government concerns about hundreds of applied-for gTLDs and provided frustratingly vague remediation advice.
Verisign has provided one of the most detailed responses to the GAC advice of any ICANN has received to date, discussing how each item could be resolved and/or clarified.
In general, it seems to support the view that the advice should be implemented, but that work is needed to figure out the details.
In many cases, it’s proposing ICANN community working groups. In others, it says each affected registry should negotiate individual contract terms with ICANN.
But much of the 12-page letter talks about the security problems that Verisign suddenly found itself massively concerned about in March, a week after ICANN started publishing Initial Evaluation results.
The letter reiterates the potential problem that when a gTLD is delegated that is already widely used on internal networks, security problems such as spoofing could arise.
Verisign says there needs to be an “in-depth study” at the DNS root to figure out which strings are risky, even if the volume of traffic they receive today is quite low.
It also says each string should be phased in with an “ephemeral root delegation” — basically a test-bed period for each new gTLD — and that already-delegated strings should be removed if they cause problems:

A policy framework is needed in order to codify a method for braking or throttling new delegations (if and when these issues occur) either in the DNS or in dependent systems that provides some considerations as to when removing an impacting string from the root will occur.

While it’s well-known that strings such as .home and .corp may cause issues due to internal name clashes and their already high volume of root traffic, Verisign seems to want every string to be treated with the same degree of caution.
Lives may be on the line, Verisign said:

The problem is not just with obvious strings like .corp, but strings that have even small query volumes at the root may be problematic, such as those discussed in SAC045. These “outlier” strings with very low query rates may actually pose the most risks because they could support critical devices including emergency communication systems or other such life-supporting networked devices.

We believe the GAC, and its member governments, would undoubtedly share our fundamental concern.

The impact of pretty much every recommendation made in the letter would be to delay or prevent the delegation of new gTLDs.
A not unreasonable interpretation of this is that Verisign is merely trying to protect its $800 million .com business by keeping competitors out of the market for as long as possible.
Remember, Verisign adds roughly 2.5 million new .com domains every month, at $7.85 a pop.
New gTLDs may well put a big dent in that growth, and Verisign doesn’t have anything to replace it yet. It can’t raise prices any more, and the patent licensing program it has discussed has yet to bear fruit.
But because the company also operates the primary DNS root server, it has a plausible smokescreen for shutting down competition under the guise of security and stability.
If that is what is happening, one could easily make the argument that it is abusing its position.
If, on the other hand, Verisign’s concerns are legitimate, ICANN would be foolhardy to ignore its advice.
ICANN CEO Fadi Chehade has made it clear publicly, several times, that new gTLDs will not be delegated if there’s a good reason to believe they will destabilize the internet.
The chair of the SSAC has stated that the internal name problem is largely dealt with, at least as far as SSL certificates go.
The question now for ICANN — the organization and the community — is whether Verisign is talking nonsense or not.

Is the .home new gTLD doomed? ICANN proposes study of security risks

Kevin Murphy, May 22, 2013, Domain Tech

ICANN has set up a study into whether certain applied-for new gTLD strings pose a security risk to the internet, admitting that some gTLDs may be rejected as a result.
Its board of directors on Saturday approved new research into the risk of new gTLD clashes with “internal name certificates”, saying that the results could kill off some gTLD applications.
In its rationale, the board stated:

it is possible that study might uncover risks that result in the requirement to place special safeguards for gTLDs that have conflicts. It is also possible that some new gTLDs may not be eligible for delegation.

Internal name certificates are the same digital certificates used in secure, web-based SSL transactions, but assigned to domain names in private, non-standard namespaces.
Many companies have long used non-existent TLDs such as .corp, .mail and .home on their private networks and quite often they obtain SSL certs from the usual certificate authorities in order to enable encryption between corporate resources and their internal users.
The problem is that browsers and other applications on laptops and other mobile devices can attempt to access these private namespaces from anywhere, not only from the local network.
If ICANN should set these TLD strings live in the authoritative DNS root, registrants of clashing domain names might be able to hijack traffic intended for secure resources and, for example, steal passwords.
That’s obviously a worry, but it’s one that did not occur to ICANN’s Security and Stability Advisory Committee until late last year, when it immediately sought out the help of the CA/Browser Forum.
It turned out that the CA/Browser Forum, an alliance of certificate authorities and browser makers, was already on the case. It has put in place new rules stating that certificates issued to private TLDs that match new gTLDs will be revoked 120 days after ICANN signs a contract with the new gTLD registry.
But it’s still not entirely clear whether this will sufficiently mitigate the risk. Not every CA is a member of the Forum, and some enterprises might find 120-day revocation windows challenging to work with.
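One practical consequence for the operators of these private networks: an internal-only name such as mail.corp could start resolving in the public DNS the moment a matching gTLD is delegated and a clashing name is registered. A simple way to watch for that — my sketch, not SSAC or CA/Browser Forum guidance, and intended to be run from outside the private network so queries reach the public DNS — is to periodically check whether your internal names resolve:

```python
import socket

# Names used only on the private network; replace with your own
INTERNAL_NAMES = ["mail.corp", "intranet.home", "wiki.mail"]

def resolves(name):
    """True if the name resolves with the resolver this machine is using."""
    try:
        socket.getaddrinfo(name, None)
        return True
    except socket.gaierror:
        return False

for name in INTERNAL_NAMES:
    if resolves(name):
        print(f"WARNING: {name} resolves - a clashing public delegation may exist")
```
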
Verisign recently highlighted the internal certificate problem, along with many other potential risks, in an open letter to ICANN.
But both ICANN CEO Fadi Chehade and SSAC chair Patrik Fältström have said that the potential security problems are already being addressed and are not a reason to delay new gTLDs.
The latest board resolution appears to modify that position.
The board has now asked CEO Fadi Chehade and SSAC to “consider the potential security impacts of applied-for new-gTLD strings in relation to this usage.”
The Root Server System Advisory Committee and the CA/Browser Forum will also be tapped for data.
While the study will, one assumes, not be limited to any specific applied-for gTLD strings, it’s well known that some strings are more risky than others.
The root server operators already receive vast amounts of erroneous DNS traffic looking for .home and .corp, for example. If any gTLD applications are at risk, it’s those.
There are 10 remaining applications for .home and five for .corp.