Twitter has started recognizing new gTLDs on its web site and in TweetDeck.
As of some point in the last 48 hours, you can type something like “nic.berlin” or “fire.plumbing” in a tweet and Twitter will automatically turn it into a clickable link.
The switcheroo seems to have happened in the last two days, as this conversation may illustrate.
But there seems to be some delay — about a month, by my reckoning — in the support.
Domains such as nic.sexy, which is in a TLD delegated November 14, are clickable, but domains in more recent delegations, such as .okinawa, which hit the root on Wednesday, are not.
Going back through the DI PRO Calendar, it seems that any TLD delegated on February 5 or earlier gets clickable links on Twitter, while those delegated over the last four weeks do not.
I’m not sure why TLDs delegated in the last month are not supported, but I imagine it could be an annoyance during registries’ pre-launch marketing.
It’s difficult to overestimate how important application support is for the new gTLD program.
If new gTLDs don’t look like web addresses, there’s going to be a big barrier to adoption. A .link domain that isn’t clickable isn’t much use and nobody wants to have to copy-paste URLs.
Bringing new gTLD support to Twitter’s 232 million active users is a big step along the road to universal acceptance of all TLDs, which ICANN has identified as a problem.
Up to 9.8 million new gTLD domain names are to get a get-out-of-jail card, with the publication yesterday of ICANN’s plan to mitigate the risk of damaging name collisions.
If you’re a loyal DI reader, the details of the plan will not come as a great surprise: it was developed by JAS Global Advisors and previewed in a guest post by its CEO, Jeff Schmidt, in January.
Name collisions are scenarios where a TLD delegated by ICANN to the public DNS matches a TLD that one or more organizations already use on their internal networks.
Verisign, in what many view as protectionist propaganda, has been arguing that name collisions could cause widespread technical and economic damage and even a risk to life.
Things might stop working and secret data might leak out of corporate networks, Verisign warns.
JAS’ proposed solution, which ICANN has opened for public comment, is quite clever, I think.
Called “controlled interruption”, it will see new gTLD registries asked to wildcard the entire second level of their TLDs so that every name points to the IP address 127.0.53.53.
If there’s a name collision on example.corp, the company using that TLD on its network will notice unusual behavior and will have an opportunity to fix the problem.
Importantly, no data apart from the DNS look-up will leak out of their networks — the 127/8 IP address block is reserved by various standards for local uses only.
The registry will essentially bounce the DNS request back to the network making the request. If that behavior causes problems, the network administrator will presumably check her logs, notice the odd IP address, and Google it for further information.
Today, she’ll find a Slashdot article about the name collisions plan, which should put the admin on the road to figuring out the problem and fixing her network. In future, maybe ICANN will rank for the term.
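For the curious, here’s a minimal sketch of the kind of check an administrator could script to spot the flag address. The internal host name is hypothetical:

```python
import socket

NAME = "intranet.example.sexy"  # hypothetical internal host name
FLAG_IP = "127.0.53.53"         # the proposed controlled-interruption flag address

try:
    address = socket.gethostbyname(NAME)
except socket.gaierror:
    print(f"{NAME} does not resolve")
else:
    if address == FLAG_IP:
        print(f"{NAME} -> {address}: probable collision with a newly delegated gTLD")
    else:
        print(f"{NAME} -> {address}")
```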
Registries would be able to choose between wildcarding their whole TLD and pointing only those second-level names currently on their collisions block lists to 127.0.53.53.
In either case, the redirection would only last for the first 120 days after delegation.
That’s the same duration as the quiet period ICANN already imposes on new delegations, during which only “nic.” may resolve.
After the 120 days are up, the name collisions issue would be considered permanently closed for that TLD.
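To illustrate the two options, here’s a rough sketch of the zone entries each one amounts to, using an invented “.example” TLD and an invented block list:

```python
FLAG_IP = "127.0.53.53"
TTL = 3600

# Option 1: wildcard the entire second level of the TLD.
wildcard = f"*.example. {TTL} IN A {FLAG_IP}"

# Option 2: point only the block-listed second-level labels at the flag IP.
block_list = ["mail", "www", "intranet"]  # invented block-list labels
blocked = [f"{label}.example. {TTL} IN A {FLAG_IP}" for label in block_list]

for record in [wildcard] + blocked:
    print(record)
```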
If this goes ahead, the plan will allow registries to unblock as many as 9.8 million domain names, representing 6.8 million unique second-level labels, according to the DI PRO collisions database.
It could also put an end to the argument about whether name collisions really were a significant problem (160,000 new gTLD names are already live and we haven’t heard any reports of collisions yet).
Pointing to the fact that new TLDs, some of which showed evidence of collisions, were getting delegated rather regularly before the current new gTLD round, JAS said in its report:
We do not find that the addition of new Top Level Domains (TLDs) fundamentally or significantly increases or changes the risks associated with DNS namespace collisions. The modalities, risks, and etiologies of the inevitable DNS namespace collisions in new TLD namespaces will resemble the collisions that already occur routinely in the other parts of the DNS.
Collisions in all TLDs and at all levels within the global Internet DNS namespace have the ability to expose potentially serious security and availability problems and deserve serious attention.
JAS calls its plan “a conservative buffer between potential legacy usage of a TLD and the new usage”.
As wildcarding is currently prohibited by ICANN’s standard Registry Agreement (ironically, to prevent a repeat of Verisign’s Site Finder), an amendment is going to be needed, as the JAS plan acknowledges.
The drawback is that if an organization is relying on a colliding internal TLD, whatever systems use that TLD could break under the plan. The 127/8 redirection is a way to help administrators diagnose and fix the breakage, not necessarily a way to prevent it from happening at all.
For new gTLD registries it’s pretty good news, however. There are many thousands of potentially valuable premium names blocked under the current regime that would be made available for sale.
If you’re an applicant for .mail, however, it’s a different story. The JAS report says .mail should be reserved forever, putting it in the same category as .home and .corp:
the use of .corp and .home for internal namespaces/networks is so overwhelming that the inertia created by such a large “installed base” and prevalent use is not likely reversible. We also note that RFC 6762 suggests that .corp and .home are safe for use on internal networks.
Like .corp and .home, the TLD .mail also exhibits prevalent, widespread use at a level materially greater than all other applied-for TLDs. Our research found that .mail has been hardcoded into a number of installations, provided in a number of example configuration scripts/defaults, and has a large global “installed base” that is likely to have significant inertia comparable to .corp and .home. As such, we believe .mail’s prevalent internal use is also likely irreversible and recommend reservation similar to .corp and .home.
In other words, .mail is dead and the five remaining applicants for the string are probably going to be forced to withdraw through no fault of their own. Should these companies get a full refund from ICANN?
Anyone who thinks that having an exact-match keyword domain automatically promotes their web site to the top of search results is in for a rude awakening, according to a top guy at Bing.
In a blog post, Bing senior product manager Duane Forrester tried to debunk the “myth… that merely having a popular keyword in the domain will help that site, regardless of content, rank on the high volume keyword”.
Ranking today is a result of so many signals fed into the system the words used in a domain send less and less information into the stack as a percentage of overall decision making signals.
There are no shortcuts. Even the new generic top level domains (gTLDs) coming out near the end of February will be treated in this manner. Domain spamming isn’t new, so sites that provide value, are relevant and that people like will rank as usual. They won’t rank “just because” they have certain words in them, and thinking that keyword stuffing a domain (think: cars.cars) will give you an edge is dangerous.
Forrester’s post is not a condemnation of keyword domains, however. He does not deny that the domain is one factor Bing takes into account in rankings, albeit one of very many.
Rather, it seems he’s trying to point out that it’s possible to get decent search traffic even when your domain has nothing to do with your content (he gives satire site The Onion as an example).
His overall message is that creating good content is the way to get good SEO, something that will come as absolutely no surprise to anyone who’s been paying attention to the pronouncements of search engine companies for the last several years.
This is a guest post written by Jeff Schmidt, CEO of JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.
One of JAS’ commitments during this process was to “float” ideas and solicit feedback. This set of thoughts poses an alternative to the “trial delegation” proposals in SAC062. The idea springs from past DNS-related experiences and has an effect we have named “controlled interruption.”
Learning from the Expired Registration Recovery Policy
Many are familiar with the infamous Microsoft Hotmail domain expiration in 1999. In short, a Microsoft registration for passport.com (Microsoft’s then-unified identity service) expired Christmas Eve 1999, denying millions of users access to the Hotmail email service (and several other Microsoft services) for roughly 20 hours.
Fortunately, a well-intended technology consultant recognized the problem and renewed the registration on Microsoft’s behalf, yielding a nice “thank you” from Microsoft and Network Solutions. Had a bad actor realized the situation, the outcome could have been far different.
The Microsoft Hotmail case and others like it led to the current Expired Registration Recovery Policy.
More recently, Regions Bank made news when its domains expired, and countless others go unreported. In the case of Regions Bank, the Expired Registration Recovery Policy seemed to work exactly as intended – the interruption inspired immediate action and the problem was solved, resulting in only a bit of embarrassment.
Importantly, there was no opportunity for malicious activity.
For the most part, the Expired Registration Recovery Policy is effective at preventing unintended expirations. Why? We call it the application of “controlled interruption.”
The Expired Registration Recovery Policy calls for extensive notification before the expiration, then a period when “the existing DNS resolution path specified by the Registrant at Expiration (“RAE”) must be interrupted” – as a last-ditch effort to inspire the registrant to take action.
Nothing inspires urgent action more effectively than service interruption.
But critically, in the case of the Expired Registration Recovery Policy, the interruption is immediately corrected if the registrant takes the required action — renewing the registration.
It’s nothing more than another notification attempt – just a more aggressive round after all of the passive notifications failed. In the case of a registration in active use, the interruption will be recognized immediately, inspiring urgent action. Problem solved.
What does this have to do with collisions?
A Trial Delegation Implementing Controlled Interruption
There has been a lot of talk about various “trial delegations” as a technical mechanism to gather additional data regarding collisions and/or attempt to notify offending parties and provide self-help information. SAC062 touched on the technical models for trial delegations and the related issues.
Ideally, the approach should achieve these objectives:
- Notifies systems administrators of possible improper use of the global DNS;
- Protects these systems from malicious actors during a “cure period”;
- Doesn’t direct potentially sensitive traffic to Registries, Registrars, or other third parties;
- Inspires urgent remediation action; and
- Is easy to implement and deterministic for all parties.
Like unintended expirations, collisions are largely a notification problem. The offending system administrator must be notified and take action to preserve the security and stability of their system.
One approach to consider as an alternative trial delegation concept would be an application of controlled interruption to help solve this notification problem. The approach draws on the effectiveness of the Expired Registration Recovery Policy, with the implementation looking like a modified “Application and Service Testing and Notification (Type II)” trial delegation as proposed in SAC062.
But instead of responding with pointers to application layer listeners, the authoritative nameserver would respond with an address inside 127/8 — the range reserved for localhost. This approach could be applied to A queries directly and MX queries via an intermediary A record (the vast majority of collision behavior observed in DITL data stems from A and MX queries).
Responding with an address inside 127/8 will likely break any application depending on a NXDOMAIN or some other response, but importantly also prevents traffic from leaving the requestor’s network and blocks a malicious actor’s ability to intercede.
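To make that concrete, here’s a rough sketch of what the two query paths would return under this approach, using the dnspython library and an invented name under a new gTLD:

```python
import dns.resolver  # pip install dnspython

NAME = "something.sexy"  # invented name under a newly delegated gTLD
FLAG_IP = "127.0.53.53"

# Direct A query: the authoritative server would answer with the flag address.
for rdata in dns.resolver.resolve(NAME, "A"):
    print(f"A  {NAME} -> {rdata.address}")

# MX query: the named exchange should itself resolve, via an intermediary
# A record, to the same flag address.
for mx in dns.resolver.resolve(NAME, "MX"):
    exchange = mx.exchange.to_text()
    for rdata in dns.resolver.resolve(exchange, "A"):
        print(f"MX {NAME} -> {exchange} -> {rdata.address}")
```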
In the same way as the Expired Registration Recovery Policy calls for “the existing DNS resolution path specified by the RAE [to] be interrupted”, responding with localhost will hopefully inspire immediate action by the offending party while not exposing them to new malicious activity.
If legacy/unintended use of a DNS name is present, one could think of controlled interruption as a “buffer” prior to use by a legitimate new registrant. This is similar to the CA Revocation Period as proposed in the New gTLD Collision Occurrence Management Plan, which “buffers” the legacy use of certificates in internal namespaces from new use in the global DNS. Like the CA Revocation Period approach, a set period of controlled interruption is deterministic for all parties.
Moreover, instead of using the typical 127.0.0.1 address for localhost, we could use a “flag” IP like 127.0.53.53.
Why? While troubleshooting the problem, the administrator will likely at some point notice the strange IP address and search the Internet for assistance. Making it known that new TLDs may behave in this fashion and publicizing the “flag” IP (along with self-help materials) may help administrators isolate the problem more quickly than just using the common 127.0.0.1.
We could also suggest that systems administrators proactively search their logs for this flag IP as a possible indicator of problems.
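A minimal sketch of that kind of proactive search, assuming plain-text logs in a typical (hypothetical) location:

```python
import glob

FLAG_IP = "127.0.53.53"

# Any hit on the flag IP is a hint that something on this network is
# querying a name that now collides with a newly delegated gTLD.
for path in glob.glob("/var/log/*.log"):  # hypothetical log location
    with open(path, errors="replace") as log:
        for number, line in enumerate(log, 1):
            if FLAG_IP in line:
                print(f"{path}:{number}: {line.rstrip()}")
```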
Why the repeated 53? Staying within 127.0.0.0/16 seems prudent, to make sure the IP is treated as localhost by a wide range of systems; the repeated 53 (the DNS port number) will hopefully draw attention to the IP and provide another hint that the issue is DNS-related.
Two controlled interruption periods could even be used — one phase returning 127.0.53.53 for some period of time, and a second slightly more aggressive phase returning 127.0.0.1. Such an approach may cover more failure modes of a wide variety of requestors while still providing helpful hints for troubleshooting.
A period of controlled interruption could be implemented before individual registrations are activated, or for an entire TLD zone using a wildcard. In the case of the latter, this could occur simultaneously with the CA Revocation Period as described in the New gTLD Collision Occurrence Management Plan.
The ability to “schedule” the controlled interruption would further mitigate possible effects.
One concern in dealing with collisions is the reality that a potentially harmful collision may not be identified until months or years after a TLD goes live — when a particular second level string is registered.
A key advantage to applying controlled interruption to all second level strings in a given TLD in advance and at once via wildcard is that most failure modes will be identified during a scheduled time and before a registration takes place.
This has many positive features, including easier troubleshooting and the ability to execute a far less intrusive rollback if a problem does occur. From a practical perspective, avoiding a complex string-by-string approach is also valuable.
If there were to be a catastrophic impact, a rollback could be implemented relatively quickly, easily, and with low risk while the impacted parties worked on a long-term solution. A new registrant and associated new dependencies would likely not be adding complexity at this point.
Request for Feedback
As stated above, one of JAS’ commitments during this process was to “float” ideas and solicit feedback early in the process. Please consider these questions:
- What unintended consequences may surface if localhost IPs are served in this fashion?
- Will serving localhost IPs cause the kind of visibility required to inspire action?
- What are the pros and cons of a “TLD-at-once” wildcard approach running simultaneously with the CA Revocation Period?
- Is there a better IP (or set of IPs) to use?
- Should the controlled interruption plan described here be included as part of the mitigation plan? Why or why not?
- To what extent would this methodology effectively address the perceived problem?
- Other feedback?
We anxiously await your feedback — in comments to this blog, on the DNS-OARC Collisions list, or directly. Thank you and Happy New Year!
Microsoft seems to be ahead of its rival Google when it comes to recognizing new gTLDs in their respective search engines.
Doing an advanced search for sites within specific new gTLDs on Bing is returning results today. The same cannot be said for Google, however.
Searches limited to Uniregistry’s .sexy, for example, return results. The same type of search seems to work for .tattoo (Uniregistry) and .ruhr (Regiodot), but not for .uno or for any of Donuts’ many Latin-script gTLDs (which all currently redirect to donuts.co).
Sometimes the searches work with a leading dot in the query, sometimes they don’t.
New gTLD support appears to be a work in progress at Microsoft, in other words, but the company does seem to be further along than Google, which so far doesn’t return any results for the same queries.
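For anyone who wants to repeat the experiment, here’s a rough sketch of an automated check. The site: query syntax for TLD-restricted searches and the scraping heuristic are both assumptions, and Bing’s markup can change at any time:

```python
import re
import urllib.parse
import urllib.request

query = "site:.sexy"  # assumed syntax for a TLD-restricted Bing search
url = "https://www.bing.com/search?" + urllib.parse.urlencode({"q": query})
request = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})

with urllib.request.urlopen(request) as response:
    html = response.read().decode("utf-8", errors="replace")

# Crude scrape: collect result links whose host name ends in .sexy.
hosts = {
    urllib.parse.urlparse(link).hostname
    for link in re.findall(r'href="(https?://[^"]+)"', html)
}
matches = sorted(h for h in hosts if h and h.endswith(".sexy"))
print(f"{len(matches)} .sexy hosts found:" if matches else "no .sexy results")
for host in matches:
    print(" ", host)
```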