Search all new gTLD collision block lists

Kevin Murphy, October 31, 2013, Domain Services

DI PRO subscribers can now see which strings appear most often in new gTLD registries’ block-lists and search for strings — such as trademarks or premium strings — that interest them.
We’ve just launched the New gTLD Collisions Database.
Currently, it indexes all 14,493 unique strings that ICANN has told the first 13 new gTLD registries to block — due to the risk of collisions with internal networks — when they launch.
By default the strings are ranked by how many gTLDs have been told to block them.
You’ll see immediately that “www” is currently blocked in all 13 registries, suggesting that it’s likely to be blocked in the vast majority of new gTLDs.
Users can also search for a string in order to see how many, and which, new gTLDs are going to have to block it.
We’re hoping that the service will prove useful to trademark owners that want to see which “freebie” blocked strings they stand to benefit from, and in which gTLDs.
For example, we can already see that 10 meaningful strings containing “nike” are to be blocked. For “facebook”, it’s four registries. For “google”, it’s currently three strings across six gTLDs.
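For readers curious about the mechanics, the underlying idea is simple: load each registry’s block-list, map every string to the set of gTLDs told to block it, then rank by that count or filter by substring. Here’s a minimal Python sketch of that approach (the file names and exact ranking are hypothetical; DI PRO’s actual implementation isn’t published):

```python
from collections import defaultdict

# Hypothetical inputs: one plain-text file of blocked SLDs per gTLD.
BLOCK_LIST_FILES = {
    "сайт": "blocklists/xn--80aswg.txt",
    "онлайн": "blocklists/xn--80asehdb.txt",
    # ... one entry per contracted gTLD with a published list
}

def load_index(files):
    """Map each blocked string to the set of gTLDs that must block it."""
    index = defaultdict(set)
    for tld, path in files.items():
        with open(path, encoding="utf-8") as fh:
            for line in fh:
                sld = line.strip().lower()
                if sld:
                    index[sld].add(tld)
    return index

def rank_by_coverage(index, top=10):
    """Strings blocked in the most gTLDs first ("www" currently tops this)."""
    return sorted(index.items(), key=lambda kv: len(kv[1]), reverse=True)[:top]

def search(index, needle):
    """Find blocked strings containing a substring, e.g. a trademark."""
    needle = needle.lower()
    return {s: tlds for s, tlds in index.items() if needle in s}

if __name__ == "__main__":
    idx = load_index(BLOCK_LIST_FILES)
    for sld, tlds in rank_by_coverage(idx):
        print(f"{sld}: blocked in {len(tlds)} gTLDs")
    print(search(idx, "nike"))
```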
The service will also hopefully be useful to registries that want to predict which strings ICANN may tell them to block. We’re seeing a lot of gambling terms showing up in non-gambling TLDs, for example.
Here’s a screenshot of sample output for the search “cars”.
[Screenshot: DI PRO collisions database results for “cars”]
As ICANN publishes lists for more gTLDs, the database will grow and become more useful and time-saving.
Comments, suggestions and bug reports as always to kevin@domainincite.com

First collision block-lists out now. How painful will they be for new gTLDs?

Kevin Murphy, October 19, 2013, Domain Registries

ICANN has published the name collision block-lists for the first four new gTLDs, and they make for pretty interesting reading.
The four registries in question will be required to block between 104 and 680 unique second-level domains from their gTLDs if they want to use the fastest path to delegation on offer.
The four gTLDs with lists published this morning are: .сайт (Russian “.site”), .онлайн (Russian “.online”), شبكة. (Arabic “.web”) and .游戏 (Chinese “.games”).
These were the first four new gTLDs with signed Registry Agreements. ICANN seems to be following the order contracts were signed, rather than the official prioritization number.
So what’s on the lists?
Gibberish
The first thing to note is that, as expected, ICANN has helpfully removed invalid strings (such as those with underscores) and gibberish Google Chrome strings from the lists, greatly reducing their size.
The block-lists are based on Day In The Life Of The Internet data, which recorded DNS root queries for applied-for gTLDs over 48-hour periods between 2006 and 2013.
According to ICANN, “a significant proportion” of the DITL queries were for the nonsense 10-character strings that Chrome generates and sometimes accidentally sends to the public DNS.
Because these “appear to present minimal risk if filtered from the block lists”, ICANN has made an effort to automatically remove as many as possible, while acknowledging it may not have caught them all. The human eye is good at spotting meaningless strings; software is not so adept.
All four lists still contain plenty of gibberish strings, at least to this human eye, but mostly they’re not 10 characters long.
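ICANN hasn’t published its exact filtering rules, but the two filters described above, dropping labels that aren’t valid DNS hostnames and dropping Chrome-style random probes, can be approximated along these lines. This is only a sketch: the Chrome heuristic in particular is a guess, which is presumably why some gibberish survives on the real lists.

```python
import re

# A conventional hostname label: letters, digits and hyphens, 1-63 chars,
# not starting or ending with a hyphen (so anything with "_" is invalid).
VALID_LABEL = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$", re.IGNORECASE)

def looks_like_chrome_probe(label):
    """Crude guess at Chrome's random probes: a purely alphabetic,
    exactly 10-character label. Real filtering would need something
    smarter, which is why plenty of gibberish still slips through."""
    return len(label) == 10 and label.isalpha()

def filter_block_list(candidates):
    kept = []
    for label in candidates:
        if not VALID_LABEL.match(label):
            continue  # not a valid DNS hostname label, e.g. "_dmarc"
        if looks_like_chrome_probe(label):
            continue  # probably Chrome noise
        kept.append(label)
    return kept

print(filter_block_list(["www", "_dmarc", "qkzjhvtwps", "mail", "photo"]))
# expected: ['www', 'mail', 'photo']
```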
IDNs
All four lists published today are for non-Latin gTLDs, whose registries presumably expect to be populated mostly with IDN.IDN domain names.
As such, the impact of their mostly Latin block-lists may be even smaller than it first appears.
For example, if we look at the list for .сайт, which has 680 strings to block, we discover that only 80 of them are IDNs (beginning with xn--). I assume they’re all, like the gTLD, in Cyrillic script.
I haven’t decoded all of these strings from Punycode and translated them from Russian, but the fact is there are only 80 of them, which may not be unduly punitive on CORE Association’s launch plan.
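If you want to do a similar count yourself, Python’s built-in IDNA codec makes it easy to pull out and decode the xn-- entries. The list below is illustrative, not taken from the real file; the two xn-- entries are just the Punycode forms of the TLD strings “сайт” and “онлайн”.

```python
# Illustrative entries; a real block-list would be read from ICANN's file.
entries = ["www", "mail", "xn--80aswg", "xn--80asehdb", "forum"]

idn_labels = [e for e in entries if e.startswith("xn--")]
print(f"{len(idn_labels)} of {len(entries)} entries are IDNs")

for label in idn_labels:
    try:
        print(label, "->", label.encode("ascii").decode("idna"))
    except UnicodeError:
        print(label, "-> (not valid Punycode)")
```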
At the other end of the spectrum, Donuts will only have to block 13 IDN strings from its .游戏 (Chinese .games) gTLD, and the ASCII strings on its list are mostly numeric or gibberish.
There are very probably some potentially valuable generic strings on these lists, of course, which could impact the landrush purse, but it’s beyond this monoglot’s expertise to pick them out.
Trademarks
A small number of Latin-script brands appear on all four lists.
Donuts will have to block nokia.游戏, htc.游戏 and ipad.游戏 in its Chinese “.games”, for example. CORE will have to block iphone.сайт and brazzersnetwork.онлайн. DotShabaka Registry will have to block شبكة.redbull.
The impact of this on the registries could be minimal — a few fewer sunrise sales, assuming the brand owner intended to defensively register.
If the blocked brand was a potential launch partner it could be much more annoying and even a launch-delaying factor. It’s not yet clear how registries and brand owners will be able to get these names unblocked.
Bear in mind that registries are not allowed to activate these domains in any sense for any use — they must continue to return NXDOMAIN error responses as they do today.
I’m sure ipad.游戏 (“ipad.games”) could have some value to Apple — and to Donuts, in the unlikely event it managed to persuade Apple to be an anchor tenant — but it’s no longer available.
ICANN will deliver full mitigation plans for each gTLD, which may often include releasing blocked names to their ‘rightful’ owner, but that’s not expected for some months.
Generic terms
A number of generic dictionary terms are getting blocked, which may prove irksome for those registries with long lists. For example, CORE will have to block photo.сайт and forum.сайт.
So far, .онлайн has by far the longest list of ASCII generics to block — stuff like “football”, “drinks”, “poker” and “sex”. Even weirdness like “herpesdating” and “musclefood”.
As it’s an IDN, this might not be too painful, but once ICANN starts publishing lists for Latin gTLDs we might start seeing some serious impact on registries’ ability to sell and market premium domains.
Shurely shome mishtake
There are a few strings on these lists that are just weird, or are likely to prove annoying to registries.
All four of these gTLDs are going to have to block “www” at the second level, for example, which could impact their registry marketing — www.tld is regularly used by TLD registries.
It is going to be really problematic if “www” shows up on the block-lists for dot-brand registries — many applicants say “www.” is likely to be the default landing page for their dot-brand.
The only string that ICANN says it won’t put on any block-list is “nic”, which was once the standard second-level for every TLD’s registry web site but doesn’t really have mass recognition nowadays.
The block-lists also include two-letter strings, most of which correspond to ccTLDs and all of which are already banned by the base Registry Agreement for precisely that reason.
There’s no reason for these two-letter names to be on the lists, but I don’t see their presence causing any major additional heartaches for registries.
So is this good news or what?
As the four block-lists released so far are all for IDN gTLDs, and because I don’t speak Chinese, Arabic or Russian, it’s difficult to say today how painful this is going to be.
There are plenty of reasons to be worried if you’re a new gTLD applicant, certainly.
Premium names will be taken out of play.
You may lose possible anchor tenants.
Your planned registry-use domain names may be banned.
If you’re a dot-brand, you’d better start thinking of alternatives to “www.”.
But the block-lists are expected to be temporary, pending permanent mitigation, and they’re so far quite small in terms of meaningful strings, so on balance I’d say so far it’s not looking too bad.
On the other hand, nothing on the published lists jumps out at me like a massive security risk, so the whole exercise might be completely pointless and futile anyway.

It’s official: Verisign has balls of steel

Kevin Murphy, October 18, 2013, Domain Registries

Verisign has spent the last six months telling anyone who will listen that new gTLDs will kill Japanese people and cause electricity grids to fail, so you’d expect the company to be a little coy about its own activities that (applying Verisign logic) endanger life and the global economy.
But apparently not.
Verisign today decided to use the same blog it has been using to play up the risks indicated by NXDOMAIN traffic in new gTLDs to plug its own service that actively encourages people to register error-traffic domains.
The company has launched DomainScope, which combines several older “domain discovery” tools — DomainFinder, DomainScore and DomainCountdown — under one roof.
According to an unsigned corporate blog post, with my emphasis:

DomainScope enables users to discover domain name registration opportunities through learning about the recent history of a domain name, understanding a domain name’s DNS traffic patterns, and knowing which domains are available that are receiving traffic.

That’s right, Verisign is giving malicious hackers the ability, for free, to find out which .com, .net and .tv domains currently receive NXDOMAIN traffic, so that the hackers can pay Verisign to register them and cause mayhem.
I used the service today to see what mischief might be possible, and hit paydirt on my first query.
Typing in “mail” as the search query, ordering the results by “Traffic Score” — a 1 to 10 measure of how much error traffic a domain already gets — I got these results:

You’ll notice (click to enlarge if you don’t) that the third result, with a 9.9 out of 10 score, is netsoolmail.net.
That caught my attention for obvious reasons, and a little Googling seems to confirm that it’s a typo of netsolmail.net, a domain Network Solutions uses for its mail servers (or possibly a spam filter).
Network Solutions is of course a top-ten registrar with millions of mostly high-end customers.
So what?
Well, if Verisign’s arguments are to be believed, this poses a huge risk of information leakage — something that should be avoided at all costs in new gTLDs but which is apparently just fine in .com and .net.
Emails sent to netsoolmail.net fail today due to an NXDOMAIN response. But what happens when somebody registers that domain (which is likely to happen about 10 minutes after this post is published)?
Do they suddenly start receiving thousands of sensitive emails intended for NetSol’s customers?
Could NetSol’s spam filters all start to fail, causing SOMEBODY TO DIE! from a dodgy Viagra?
I don’t know. No clue. Probably not.
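What anyone can do, though, is check how the domain behaves right now. A quick dnspython query (a sketch, assuming the dnspython package is installed) shows whether the name still returns NXDOMAIN, in which case mail to it bounces, or has started resolving, in which case mail goes to whoever controls the name:

```python
import dns.resolver  # pip install dnspython

def mail_fate(domain):
    """NXDOMAIN means mail to the domain bounces today; an MX answer
    means it would be delivered to whoever controls the name."""
    try:
        answers = dns.resolver.resolve(domain, "MX")
        return f"{domain}: resolves, MX -> {[str(r.exchange) for r in answers]}"
    except dns.resolver.NXDOMAIN:
        return f"{domain}: NXDOMAIN, mail bounces"
    except dns.resolver.NoAnswer:
        return f"{domain}: exists but has no MX record"

print(mail_fate("netsoolmail.net"))
```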
But there’s a risk, right? Even if it’s a very small risk (as Verisign argues), shouldn’t ICANN be preventing Verisign from promoting these domains, maybe using some kind of massive block-list?
Data leakage is important enough to Verisign that it was the headline risk in a recent report aimed at getting new gTLDs delayed.
In an August “technical report” entitled “New gTLD Security, Stability, Resiliency Update: Exploratory Consumer Impact Analysis”, somebody from Verisign wrote (pdf):

once delegated, the registrants under new gTLDs have the ability to register specific domains for targeted collisions

This form of information leakage can violate privacy of users, provide a competitive advantage between business rivals, expose details of corporate network infrastructures, or even be used to infer details about geographical locations of network assets or users

What the report fails to mention is that registrants today have this ability, and that Verisign is actively encouraging the practice.
In Yiddish they call what Verisign has done today chutzpah.
In British English, we call it taking the piss.

Name collision block-lists to be published this week

Kevin Murphy, October 17, 2013, Domain Registries

ICANN will begin to publish the lists of domains that new gTLD registries must block at launch as early as this week, according to an updated name collisions plan released last night.
Registries that have already signed contracts with ICANN will be given their block-lists “before the end of this week”, ICANN said.
Registries that were not able to sign contracts because they’d been given an “uncalculated risk” categorization will now be invited, in priority order, to contracting.
The base Registry Agreement itself has been updated — unilaterally — to include provisions requiring registries to block second-level names deemed risky when they are delegated.
For each contracted gTLD, ICANN will provide what it’s calling a SLD Collision Occurrence Assessment, which will outline the steps registries need to take to mitigate their own collision risk.
It is also expected to contain a list of SLDs that have been seen on the Day In The Life Of The Internet data sets, collected from root server operators over 48-hour periods between 2006 and 2013.
Using previous years’ DITL data is news to me, and could potentially greatly expand the number of SLDs — already expected to be in the thousands in many cases — that registries are obliged to block.
“Most” new gTLD applicants are expected to be eligible for what ICANN calls an “alternative path to delegation”, in which the registry simply blocks the SLDs on an ICANN-provided list, gets delegated, and deals with the SLD Collision Occurrence Assessment at a later date.
Here’s how ICANN described the timetable for this:

For Registry Operators with executed registry agreements the Assessments and SLD lists will be posted to the specific TLD’s registry agreement page on the ICANN website. The first of these will be available before the end of this week.
In the coming weeks ICANN will post the alternative path eligibility assessments and SLD lists for all applied-for gTLDs.

In other words, if you haven’t already signed a contract there’s not yet a firm date on when you’ll find out how many — and which — names you’re expected to block, or even if you’re eligible for the alternative delegation path.

Some gTLD applicants welcome ICANN’s clash plan

Kevin Murphy, October 11, 2013, Domain Registries

Some new gTLD applicants, including two of the bigger portfolio applicants, have grudgingly accepted ICANN’s latest name collisions remediation plan as a generally positive development.
ICANN this week scrapped its three-tier categorization of applications, implicitly accepting that it was based on a flawed risk analysis, and instead said new gTLDs can be delegated without delay if the registries promise to block every potentially impacted second-level domain.
You may recall that yesterday dotShabaka Registry said on DI that the plan was a “dog’s breakfast” and criticized ICANN for not taking more account of applicants’ comments.
But others are more positive, if not exactly upbeat, welcoming the opportunity to avoid the six-month delays ICANN’s earlier mitigation plan would have imposed on many strings.
Uniregistry CEO Frank Schilling congratulated ICANN for reframing the debate, in light of Verisign’s ongoing campaign to persuade everyone that name collisions will be hugely risky. He told DI:

There has been a great deal of FUD surrounding name collisions from incumbent registry operators who are trying to negatively shape the utility of the new gTLDs they will be competing against.
I think it was important for ICANN to take control of the conversation in the name of common sense. These types of collisions are ultimately minor in the grand scheme and they occur each and every day in existing namespaces like .com, without the internet melting down.
I think anything that shapes conversation in a way that accelerates the process and sides with common sense is good, I have not yet thought of how this latest change can be gamed to the downside of new G’s.

Uniregistry has 51 remaining new gTLD applications, 20 of which were categorized as “uncalculated risk” and faced considerable delays under ICANN’s original plan.
Schilling’s take was not unique among the applicants we talked to, on and off the record.
Top Level Domain Holdings is involved with 77 current applications as back-end provider — and as applicant in most of them — and also faced “uncalculated” delay on many.
CEO Antony Van Couvering welcomed ICANN’s plan rather less warmly and raised questions about the future studies it plans to conduct, criticizing ICANN’s apparent lack of trust in its community:

Basically the move is positive. I characterize it as getting out of jail in exchange for some community service — definitely a trade I’ll make.
On the other hand, the decision betrays ICANN’s basic lack of confidence in its own staff and in the ICANN community. You can see this in the vagueness of the study parameters, because it’s not at all clear what the consultant will be studying or what criteria will be used to make any recommendations — or indeed if anything can be said beyond mere data collection.
But more important, they are hiring an outside consultant when the world’s experts on the subject are all here already, many willing to work for free. ICANN either doesn’t think it can trust its community and/or doesn’t know how to engage them. So they punt on the issue and hire a consultant. It’s a behavior you can see in poorly-run companies anywhere, and it’s discouraging for ICANN’s future.

Similar questions were posed and answered by ICANN’s former new gTLD program supremo Kurt Pritz, in a comment on DI last night. Pritz is now an independent consultant working with new gTLD applicants and others.
He speculated that ICANN’s main concern is not appeasing Verisign and its new allies in the Association of National Advertisers, but rather attempting to head off future governmental interference.
Apparently speaking on his own behalf, Pritz wrote:

The greatest concern is the big loss: some well-spoken individual going to the US Congress or the European Commission and saying, “those lunatics are about to delegate dangerous TLDs, there will be c-o-l-l-i-s-i-o-n-s!!!” All the self-interested parties (acting rationally self-interested) will echo that complaint.
And someone in a governmental role will listen, and the program might be at jeopardy.
So ICANN is taking away all the excuses of those claiming technical risk. By temporarily blocking ALL of the SLDs seen in the day-in-the-life data and by putting into place a process to address new SLD queries that might raise a risk of harm, ICANN is delegating TLDs that are several orders of magnitude safer on this issue than all of the hundreds of TLDs that have already been delegated.

Are you a new gTLD applicant? What do you think? Is ICANN’s plan good news for you?

dotShabaka Diary — Day 17, Collisions plan is a dog’s breakfast

Kevin Murphy, October 10, 2013, Domain Policy

The seventeenth installment of dotShabaka Registry’s journal, charting its progress towards becoming one of the first new gTLDs to go live, written by general manager Yasmin Omer.

Thursday 10 October 2013
As regular readers of this journal will know, we have been frustrated by the lack of certainty surrounding the new gTLD program.
Other industries would have picketed the regulator’s building, with suitably angry placards being waved and a catchy song. Unfortunately, in the domain name industry, angry blogs serve as a replacement for chaining ourselves to Fadi’s swivel chair.
So as a compromise, I ask readers to hum their favourite protest tune while reading our latest tale of woe.
Flippant commentary aside, the document ICANN released on name collisions yesterday (New gTLD Collision Occurrence Management) is a perfect example of what many applicants find challenging about ICANN staff’s use of the public comment process.
Despite the many detailed studies undertaken by a number of applicants and reported through the public comment process, it would appear that many of the recommendations or proposed solutions have been ignored by ICANN staff and the NGPC in favour of something that resembles a ‘dog’s breakfast’.
You’ll recall that ICANN made some suggestions to mitigate the risk of name collisions. There were three categories: High (dead men walking), Uncategorised (deer in headlights) and Low (phew).
There was going to be a study about something at sometime that would decide stuff and the aforementioned deer would roam free. There was going to be a TLD tasting period during which time registries got to play spammer to unsuspecting ISPs (I wonder if I can get a refund like domain tasters used to, if I don’t get enough traffic?).
A comment period was had and people duly commented. Neither the original suggestions nor the comments seem to have any connection with what appeared in the document we read yesterday. The actions and processes discussed in the document are completely new. Oh, and the Board approved them.
A thought for those in the industry: are we so inured to this kind of procedural disdain that one more example simply doesn’t make us angry anymore?
So what of the document? Is it good for us and the industry? Well there is no low or uncategorised risk grouping anymore. Everyone is in the same bucket of riskiness. Depending on who you are, that might be good for you.
The TLD tasting period, where a TLD was delegated and emails were sent to every poor soul who made the mistake of looking up a non-existing TLD, is gone. That is definitely good. An outreach program with network operators and ISPs seems like an eminently sensible idea. A spam campaign chasing random DNS queries seems like a mad idea.
Now to the grim news – there will be another study (isn’t there always) and another process (if it’s implementation can we just… oh never mind).
The study will tell us which strings from the DITL data set (and other unnamed sets) are risky, why, and what we should do with them. Such risk will be contextual to the TLD in question. There’s no detail on how many strings we are talking about. There are no criteria for a string’s presence in the list (number of queries, type of queries, known risks etc). That sounds like a large chunk of work, no matter how it is automated.
The process to be determined is how the strings and suggested mitigations are delivered to and managed by registries. There’s potentially a lot of future system development and labour costs on the horizon for TLD operators.
Many TLDs will not need to wait for this work to be completed before they can be delegated. However, they must accept from ICANN a list of names they can’t delegate until the process/study is complete and their personalised list of names has been produced.
Firstly ICANN has to decide if you can take this option up. How will they do that you ask? I would point you to the very clear decision tree located within the document, only it appears to have been left out. Coming soon.
Second, ICANN has to create and send you the standby extra cautious list. Now we are getting nervous. Just how many names will be on this list? Will there be any filtering or common sense applied? Is the extra cautious list subject to comment? Does it exist already?
There’s also a new process that allows someone who suffers harm from the delegation of a second level domain to have it blocked for a period of up to 2 years. When one thinks through such a process it seems most likely that this harm is only determined after the delegation, not prior. Therefore Registries may be in a position where they need to un-delegate a domain already in use by a registrant.
That could be a rude shock to some innocent registrants. The principle of doing this bothers us. The practical and legal implications of doing this bother us. And the lack of any detail around how this process will be managed most definitely bothers us.
Whenever I hear process and study I also hear delay. In fact the modus operandi of those opposing the gTLD program has not been to fight it, but to suggest one more study and another process, knowing the effect such activities will have.
So here we are, certain in our uncertainty that one day – soon or not so soon – we will be delegated.
We can’t be the only ones who have internal jokes about the randomness of ICANN policy development. They help us make light of the otherwise business-crippling proclamations we receive with no warning.
Don’t you wish, just for once, those jokes weren’t so true?

Read previous and future diary entries here.

New gTLD applicants get a way to avoid name collision delay

Kevin Murphy, October 9, 2013, Domain Tech

ICANN has given blessed relief to many new gTLD applicants by wiping potentially months off their path to delegation.
Its New gTLD Program Committee this week adopted a new “New gTLD Collision Occurrence Management Plan” which aims to tackle the problem of clashes between new gTLDs and names used on private networks.
The good news is that the previous categorization of strings according to risk, which would have delayed “uncalculated risk” gTLDs by months pending further study, has been scrapped.
The two “high risk” strings — .home and .corp — don’t catch a break, however. ICANN says it will continue to refuse to delegate them “indefinitely”.
For everyone else, ICANN said it will conduct additional studies into the risk of name collisions, above and beyond what Interisle Consulting already produced.
The study will take into account not only the frequency that new gTLDs currently generate NXDOMAIN traffic in the DNS root, but also the number of second-level domains queried, the diversity of requesting sources, and other factors.
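ICANN hasn’t said how those factors will be measured, but the per-TLD metrics it describes (query volume, distinct second-level labels, distinct query sources) are easy to picture as a single pass over DITL-style logs. A rough sketch, with invented records standing in for the real data, which is pcap-scale and held at DNS-OARC:

```python
from collections import defaultdict

# Invented, simplified records: (source IP, queried name). The real DITL
# captures are far larger and not public.
records = [
    ("192.0.2.1", "mail.home"),
    ("192.0.2.1", "wpad.home"),
    ("198.51.100.7", "mail.home"),
    ("203.0.113.9", "printer.corp"),
]

def per_tld_metrics(records):
    queries = defaultdict(int)
    slds = defaultdict(set)
    sources = defaultdict(set)
    for src, name in records:
        labels = name.rstrip(".").lower().split(".")
        if len(labels) < 2:
            continue
        tld, sld = labels[-1], labels[-2]
        queries[tld] += 1
        slds[tld].add(sld)
        sources[tld].add(src)
    return {t: (queries[t], len(slds[t]), len(sources[t])) for t in queries}

for tld, (q, s, src) in sorted(per_tld_metrics(records).items()):
    print(f".{tld}: {q} queries, {s} unique SLDs, {src} unique sources")
```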
Any new gTLD applicant that does not wish to wait for this study will be able to proceed to delegation without delay, but only if they block huge numbers of second-level domains at launch.
The registries will have to block every SLD that was queried in their gTLD according to the Day in the Life of the Internet data that Interisle used in its study.
This list will vary by TLD, but in the most severe cases is likely to extend to tens of thousands of names. In many cases, it’s likely to be a few thousand names.
Fortunately, studies conducted by the likes of Donuts and Neustar indicate that many of these SLDs — maybe even the majority — are likely to be invalid strings, such as those with an underscore or other non-DNS character, or randomly generated 10-character strings of gibberish generated by Google Chrome.
In other words, the actual number of potentially salable domains that registries will have to block may turn out to be much lower than it appears at first glance.
Each SLD will have to be blocked in such a way that it continues to return NXDOMAIN responses, as they all do today.
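In practice that should just mean the blocked names never make it into the TLD zone and can’t be registered, so the authoritative servers keep answering NXDOMAIN by default. A hypothetical registry-side check might look like the sketch below (the file name and function are invented for illustration; real registries enforce reserved names in their EPP/SRS systems):

```python
# Hypothetical sketch: collision-list names are treated as reserved.
# They are never registered and never added to the zone, so queries
# for them continue to get NXDOMAIN from the TLD's name servers.

def load_collision_list(path):
    with open(path, encoding="utf-8") as fh:
        return {line.strip().lower() for line in fh if line.strip()}

COLLISION_BLOCKED = load_collision_list("icann-collision-list.txt")

def check_availability(sld, registered):
    sld = sld.lower()
    if sld in COLLISION_BLOCKED:
        return False, "blocked pending ICANN's collision assessment"
    if sld in registered:
        return False, "already registered"
    return True, "available"

print(check_availability("www", registered=set()))
```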
Because the DITL data represented a 48-hour snapshot in May 2013, and may not include every potentially affected string, ICANN is also proposing to give organizations a way to:

report and request the blocking of a domain name (SLD) that causes demonstrably severe harm as a consequence of name collision occurrences.

The process will allow the deactivation (SLD removal from the TLD zone) of the name for a period of up to two (2) years in order to allow the affected party to effect changes to its network to eliminate the DNS request leakage that causes collisions, or mitigate the harmful impact.

One has to wonder if any trademark lawyers reading this will think: “Ooh, free defensive registration!” It will be interesting to see if any of them give it a cheeky shot.
I’ve got a feeling that most new gTLD applicants will want to take ICANN up on its offer. It’s not an ideal solution for them, but it does give them a way to get into the root relatively quickly.
There’s no telling what ICANN’s additional studies will find, but there’s a chance it could be negative for their string(s) — getting delegated at least mitigates the risk of never getting delegated.
The new ICANN proposal may in some cases interfere with their plans to market and use their TLDs, however.
Take a dot-brand such as .cisco, which the networking company has applied for. Its block list is likely to have about 100,000 strings on it, increasing the chances that useful, brandable SLDs are going to be taken out of circulation for a while.
ICANN is also proposing to conduct an awareness-raising campaign, using the media, to let network operators know about the risks that new gTLDs may present to their networks.
Depending on how effective this is, new registries may be able to forget about getting positive column inches for their launch — if a journalist is handed a negative angle for a story on a plate, they’ll take it.

Mockapetris hired as ICANN security advisor

Kevin Murphy, October 7, 2013, Domain Tech

DNS inventor Paul Mockapetris has been recruited by ICANN to act as senior security advisor to the Generic Domains Division under its president, Akram Atallah.
It’s not clear precisely what Mockapetris’ role will be, though it doesn’t appear to be a full-time position. He is still chairman and chief scientist of DNS software vendor Nominum.
ICANN recently recorded an interview with Mockapetris in which he pooh-poohed Verisign’s campaign against new gTLDs on security grounds, saying name collisions were not a new phenomenon.
It’s not the first time ICANN has hired a “name” as a security advisor.
One of the inventors of public key cryptography, Whitfield Diffie, became VP of information security under former CEO Rod Beckstrom, but quietly disappeared not too long after Fadi Chehade took over last year.

dotShabaka Diary — Day 15, Iran and Name Collisions

Kevin Murphy, October 3, 2013, Domain Registries

The fifteenth installment of dotShabaka Registry’s journal, charting its progress towards becoming one of the first new gTLDs to go live, written by general manager Yasmin Omer.

Thursday 3 October 2013
At a time when ICANN has hit the ‘pause’ button on the new gTLD program in order to assess the impact of “name collisions” on the security and stability of the DNS, we were surprised to see the ICANN Board approve the delegation of ایران., the IDN ccTLD for the Islamic Republic of Iran. While we understand the many distinctions between a ccTLD and a gTLD, the DNS does not make any such distinction.
As we’ve heard from Paul Mockapetris and John Crain recently in their interviews posted on the ICANN website, name collisions (or, more accurately, NXDOMAIN responses) are not a new phenomenon; they have been evident with the introduction of any TLD and with existing TLDs in the root. Experience has shown that steps have been taken to successfully resolve the issues. We understand that ICANN is concerned that these NXDOMAIN responses have the potential to create confusion with the introduction of new TLDs into the DNS.
As a contracted party with ICANN, شبكة. (an IDN gTLD) cannot be delegated while we await the outcomes of ICANN’s deliberations on name collisions. We have paid our $185,000 application fee, we have undertaken a very resource-intensive exercise to ensure a compliant application, we have passed Initial Evaluation, we have signed a registry agreement with ICANN, we have passed pre-delegation testing and yet we sit and wait.
Our understanding of the IDN ccTLD fast track process is that it is much less rigorous, the application fee is voluntary, there is no requirement to enter into a contract with ICANN, the TLD can develop a launch strategy that is not restricted by ICANN mandated rights protection mechanisms, and any contribution to ICANN’s budget is voluntary. But because this is a ccTLD and not a new gTLD, the Board has seen fit to approve this delegation request at this time despite the serious conversation going on in the community about name collisions.
As we said previously, the DNS does not distinguish between a ccTLD or a gTLD, or for that matter an IDN ccTLD or an IDN gTLD. We would appreciate an explanation as to why we sit and wait for delegation while the IDN ccTLD is approved.

Read previous and future diary entries here.

Name Collisions: Unanticipated Effects [Guest Post]

Kurt Pritz, September 30, 2013, Domain Policy

I attended the TLD Security Forum sponsored by Artemis in San Francisco five weeks ago. By happenstance, I became involved in a small group, formed after the meeting, that dedicated itself to replicating the Interisle study (“Name Collisions in the DNS”) and carrying on with the next step in the analysis.
The work among competitors that occurred over the next four weeks was collaborative, intensive, and competent: an excellent example of how the multi-stakeholder model can accomplish significant work and publish it to the broad internet community in an effort to resolve an issue. It brought the right people together to accomplish more, faster than any other governance model would achieve.
Their work is easily identifiable among the many comments submitted on the name collision issue. Without offering an opinion on conclusions here, I note that the competence of work shines through and should be carefully considered.
The Interisle study sounded an alarm because it reported a potentially high number of domain name “collisions” that might result from the delegation of new gTLDs. The term “collision” is somewhat of a misnomer and the key issue, I think, is the use of search-list processing by companies in configuring their networks.
The Interisle report published the volumes of NX Domain responses by TLD and described possible harms but did not link harms to specific types of queries nor delve into the data in order to draw firm conclusions or propose mitigations.
There is nothing wrong with this: the report was competently executed given the time available.
This is where several interested parties, mostly applicants, jumped in. In an impromptu meeting after the conference a half-dozen companies coordinated: the purchase of servers to analyze previously collected root-zone data (the “Day In The Life” or DITL data); acquisition of memberships in OARC, to whom the servers were donated; and the analysis of vast amounts of data.
Considerable time was spent redesigning queries in order to replicate the Interisle results from the DITL data so that the next step in the analysis would be seamless as the work transitioned from Interisle to this collaborative group.
Hypotheses were developed, queries written, data summarized and statistically tested. Every difference between the Interisle data and the newly analyzed data was discussed until the team was satisfied it would withstand public scrutiny.
The team met twice weekly in conference calls and traded numerous emails to flesh out technical details. Data scientists learned about the DNS, DNS experts learned about z-tests and the effects of non-standard distributions.
The team agreed to publish the data, which it has, so that anyone could perform analysis similar to that done by this team.
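The group’s exact test code isn’t public, but the z-tests mentioned are the standard two-proportion kind: comparing, say, the share of queries a given string attracts in two different DITL captures. A minimal sketch with made-up numbers:

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-proportion z-test: is the query rate for a string different
    between two samples? Returns (z statistic, two-sided p-value)."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    pooled = (hits_a + hits_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Made-up example: queries for one SLD in two 48-hour captures.
z, p = two_proportion_z(hits_a=1200, n_a=5_000_000, hits_b=900, n_b=4_000_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```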
For me, these technical discussions reaffirmed the effectiveness of the ICANN model. Work continues and will be discussed at the next TLD Security Conference on October 1st in Washington, DC.
This is a guest post written by Kurt Pritz, ICANN’s former chief strategy officer. He is currently an independent consultant working with new gTLD applicants and others.