ICANN approves two new TLDs, including THAT one

Kevin Murphy, January 30, 2019, Domain Registries

ICANN’s board of directors has given the nod to two more country-code TLDs.

The eight-year-old nation of South Sudan is finally getting its possibly controversial .ss, while Mauritania is getting the Arabic-script version of its name, موريتانيا. (.xn--mgbah1a3hjkrd), to complement its existing .mr ccTLD.

Both TLDs were approved at ICANN's board meeting at the weekend, after going through the usual, secretive IANA process.

The recipient of موريتانيا. is the Université de Nouakchott Al Aasriya, while .ss is going to the National Communication Authority, a governmental agency.

As previously noted, .ss has the potential to be controversial due to its Nazi associations, and the fact that Nazis are precisely the kind of people who have trouble finding TLDs that will allow them to register names.

But none of that is ICANN’s business. It simply checks to make sure the requester has the support of the local internet community and that the string is on the ISO 3166 list.

The Mauritanian IDN has already been added to the DNS root, while .ss has not.

Afilias sues India to block $12 million Neustar back-end deal

Kevin Murphy, August 27, 2018, Domain Registries

Afilias has sued the Indian government to prevent it awarding the .in ccTLD back-end registry contract to fierce rival Neustar.

The news emerged in local reports over the weekend and appears to be corroborated by published court documents.

According to Moneycontrol, the National Internet Exchange of India plans to award the technical service provider contract to Neustar, after over a decade under Afilias, but Afilias wants the deal blocked.

The contract would also include some 15 current internationalized domain name ccTLDs, with another seven on the way, in addition to .in.

That’s something Afilias reckons Neustar is not technically capable of, according to reports.

Afilias’ lawsuit reportedly alleges that Neustar “has no experience or technical capability to manage and support IDNs in Indian languages and scripts and neither does it claim to have prior experience in Indian languages”.

Neustar runs plenty of IDN TLDs for its dot-brand customers, but none of them appear to be in Indian scripts.

NIXI’s February request for proposals (pdf) contains the requirement: “Support of IDN TLDs in all twenty two scheduled Indian languages and Indian scripts”.

I suppose it’s debatable what this means. Actual, hands-on, operational experience running Indian-script TLDs at scale would be a hell of a requirement to put in an RFP, essentially locking Afilias into the contract for years to come.

Only Verisign and Public Interest Registry currently run delegated gTLDs that use officially recognized Indian scripts, according to my database. And those TLDs — such as Verisign’s .कॉम (the Devanagari .com) — are basically unused.

Neither Neustar nor Afilias has responded to DI’s requests for comment today.

.in has over 2.2 million domains under management, according to NIXI.

Neustar’s Indian subsidiary undercut its rival with a $0.70 per-domain-year offer, $0.40 cheaper than Afilias’ $1.10, according to Moneycontrol.

With .in’s 2.2 million names, that would make the deal worth north of $12 million over five years at Afilias’ pricing ($1.10 × 2.2 million × five years) and over $7.7 million at Neustar’s ($0.70 × 2.2 million × five years).

One can’t help but be reminded of the two companies’ battle over Australia’s .au, which Afilias sneaked out from under long-time incumbent Neustar late last year.

That handover, the largest in DNS history, was completed relatively smoothly a couple of months ago.

All Cyrillic .eu domains to be deleted

Eurid has announced that Cyrillic domain names in .eu will be deleted a year from now.

The registry said that it’s doing so to comply with the “no script mixing” recommendations for internationalized domain names, which are designed to limit the risk of homograph phishing attacks.

The deletions will kick in on May 31, 2019, and only apply to names that have Cyrillic before the dot and the Latin-script .eu after it.

Cyrillic names in Eurid’s Cyrillic ccTLD .ею will not be affected.
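
For the curious, the affected combination is simple enough to detect programmatically. Here’s a rough TypeScript sketch of that kind of check, relying on Unicode script property escapes; it’s purely illustrative (the example domains are made up, and this is not Eurid’s actual tooling):

```typescript
// Minimal sketch (not Eurid's tooling): flag a domain whose second-level
// label contains Cyrillic while the TLD label is Latin -- the combination
// Eurid says it will delete. Requires a runtime supporting Unicode
// property escapes (ES2018+).
function mixesCyrillicWithLatinTld(domain: string): boolean {
  const labels = domain.toLowerCase().split(".");
  if (labels.length < 2) return false;
  const tld = labels[labels.length - 1];
  const sld = labels[labels.length - 2];
  const hasCyrillicSld = /\p{Script=Cyrillic}/u.test(sld);
  const hasLatinTld = /^\p{Script=Latin}+$/u.test(tld);
  return hasCyrillicSld && hasLatinTld;
}

console.log(mixesCyrillicWithLatinTld("домейн.eu"));  // true  -> would be deleted
console.log(mixesCyrillicWithLatinTld("домейн.ею"));  // false -> unaffected
console.log(mixesCyrillicWithLatinTld("example.eu")); // false
```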

The plan has been in place since Eurid adopted the IDNA2008 standard three years ago, but evidently not all registrants have dropped their affected names yet.

Bulgaria is the only EU member state to use Cyrillic in its national language.

Emojis coming to another ccTLD

Kevin Murphy, January 24, 2018, Domain Registries

dotFM is to make emoji domain names available in the .fm ccTLD it manages.

The company said today that it’s currently taking expressions of interest in ‘premium’ emoji inventory, and that such domains will be registerable at an unspecified point in future.

It’s published a list of single-emoji domains it plans to sell.

Emoji domains “will be available based on Unicode Consortium Emoji Version 5.0 standards using single code point; and allowing a mix of letters and emoji characters under the top-level .FM, as well as the dotRadio extensions, .RADIO.fm and .RADIO.am”, dotFM said.

Very few TLDs allow emojis to be registered today.

The most prominent is .ws, the ccTLD originally assigned to Western Samoa (now simply Samoa), which is marketed as an abbreviation for “web site”.

.fm is the ccTLD for Micronesia, but dotFM markets it to radio stations.

As ccTLDs, they’re not subject to the ICANN rules that essentially ban emoji domains contractually in gTLDs.

Emojis use the same encoding as internationalized domain names, but do not feature in the IDN standards because they’re not used in real spoken languages.
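
Under the hood, an emoji label gets converted to an ASCII “xn--” form with Punycode, just like any IDN label. Here’s a quick illustrative sketch using the punycode.js npm package (my choice of tool, not anything dotFM has endorsed); notably, the library does the raw conversion without IDNA2008’s validation, which is exactly the gap that lets emoji through at registries willing to allow them:

```typescript
// Illustrative sketch: emoji labels are Punycode-encoded to an ASCII
// "xn--" form, the same mechanism IDN labels use. Uses the punycode.js
// npm package ("npm install punycode"), which does the raw conversion
// without IDNA2008 validation.
import punycode from "punycode/";

console.log(punycode.toASCII("❤.ws"));         // "xn--qei.ws"
console.log(punycode.toUnicode("xn--qei.ws")); // "❤.ws"
console.log(punycode.toASCII("mañana.com"));   // "xn--maana-pta.com"

// Beware: the emoji-plus-variation-selector form (U+2764 U+FE0F) is a
// different code point sequence and encodes to a different A-label.
```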

Emoji domains are usually considered not entirely practical due to the inconsistent ways they can be rendered by applications.

New gTLDs still a crappy choice for email — study

Kevin Murphy, September 28, 2017, Domain Tech

New gTLDs may not be the best choice of domain for a primary email address, judging by new research.

Over 20% of the most-popular web sites do not fully understand email addresses containing long TLDs, and Arabic email addresses are supported by fewer than one in 10 sites, a study by the Universal Acceptance Steering Group has found.

Twitter, IBM and the Financial Times are among those sites highlighted as having only partial support for today’s wide variety of possible email addresses.

Only 7% of the sites tested were able to support all types of email address.

The study, carried out by Donuts and ICANN staff, looked at 749 websites (in the top 1,000 or so as ranked by Alexa) that have forms for filling in email addresses.

On each site, seven different email addresses were input, to see whether the site would accept them as valid.

The emails used different combinations of ASCII and Unicode before the dot and mixes of internationalized domain name and ASCII at the second and top levels.
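
To give a flavour of what those categories mean in practice, the addresses would have looked something like the following. These are my own made-up illustrations, not the study’s actual test strings:

```typescript
// Made-up illustrations of the address categories described above,
// not the study's actual test strings.
const sampleAddresses: string[] = [
  "user@example.com",          // ASCII local part, legacy ASCII TLD
  "user@example.photography",  // ASCII local part, long new gTLD
  "user@xn--maana-pta.com",    // ASCII local part, IDN domain in A-label (xn--) form
  "user@mañana.com",           // ASCII local part, IDN domain in Unicode form
  "用户@mañana.com",            // Unicode local part and Unicode domain
  "مستخدم@مثال.مثال",          // right-to-left (Arabic) local part and domain
];
```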

These were the results, as charted in the report (available as a PDF):

[Chart: email address acceptance rates by address type]

The problem with these numbers, it seems to me, is the lack of a control. There’s no real baseline to judge the numbers against.

There’s no mention in the paper of testing addresses that use .com or decades-old ccTLDs, which would have highlighted web sites with broken scripts that reject all emails.

But if we assume, as the paper appears to, that all the tested web sites were 100% compliant for .com domains, the scores for new gTLDs are not great.

There are currently over 800 TLDs longer than four characters, but according to the UASG research 22% of web sites will not recognize them.

There are 150 IDN TLDs, but a maximum of 30% of sites will accept them in email addresses.

When it comes to right-to-left scripts, such as Arabic, the vast majority of sites are totally hopeless.

The UASG dug into the code of the tested sites when it could and found that most of them use client-side code (JavaScript processing a regular expression) to verify addresses.

A regular expression is a complex bit of code that can look something like this: /^.+@(?:[^.]+\.)+(?:[^.]{2,})$/

It’s not every coder’s cup of tea, but it can get the job done with minimal client-side resource overheads. Most coders, the UASG concludes, copy a regex they found on a forum and maybe tweak it a bit.
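
To make the failure mode concrete, here’s an illustrative TypeScript snippet contrasting two hand-rolled patterns. Neither is lifted from any of the tested sites; the first is the sort of strict, ASCII-only pattern that gets copy-pasted around, and the second simply mirrors the shape of the one quoted above:

```typescript
// Illustrative only: neither pattern comes from a tested site.
// A commonly copy-pasted "strict" pattern that assumes an ASCII local
// part and a two-to-four letter ASCII TLD:
const strict = /^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,4}$/;

// The looser shape quoted above: anything@label(.label)+ with a final
// label of at least two characters:
const loose = /^.+@(?:[^.]+\.)+[^.]{2,}$/;

const samples = [
  "user@example.com",         // legacy gTLD
  "user@example.photography", // long new gTLD
  "user@mañana.com",          // Unicode (IDN) domain
  "用户@example.com",          // Unicode local part
];

for (const addr of samples) {
  console.log(`${addr}: strict=${strict.test(addr)} loose=${loose.test(addr)}`);
}
// user@example.com:          strict=true  loose=true
// user@example.photography:  strict=false loose=true  <- long TLD rejected
// user@mañana.com:           strict=false loose=true  <- IDN domain rejected
// 用户@example.com:           strict=false loose=true  <- Unicode local part rejected
```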

This should not be shocking news to anyone. I’ve known about it since 2009 or earlier when I first started ripping code from StackOverflow.

However, the UASG seems to have been working on the assumption that more sites are using off-the-shelf software libraries, which would have allowed the problem to be fixed in a more centralized fashion.

It concludes in its paper that much greater “awareness raising” needs to happen before universal acceptance comes closer to reality.