ICANN to flip the secret key to the internet

Kevin Murphy, July 20, 2016, Domain Tech

ICANN is about to embark on a year-long effort to warn the internet that it plans to replace the top-level cryptographic keys used in DNSSEC for the first time.
CTO David Conrad told DI today that ICANN will rotate the so-called Key Signing Key that is used as the “trust anchor” for all DNSSEC queries that happen on the internet.
Due to the complexity of the process, and the risk that something might go wrong, the move is to be announced in the coming days even though the new public key will not replace the existing one until October 2017.
The KSK is a cryptographic key pair used to sign the Zone Signing Keys that in turn sign the DNS root zone. It’s basically at the top of the DNSSEC hierarchy — all trust in DNSSEC flows from it.
It’s considered good practice in DNSSEC to rotate keys every so often, largely to reduce the window would-be attackers have to compromise them.
The Zone Signing Key used by ICANN and Verisign to sign the DNS root is rotated quarterly, and individual domain owners can rotate their own keys as and when they choose, but the same KSK has been in place since the root was first signed in 2010.
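The two key types are easy to tell apart in the root zone itself: a KSK carries the Secure Entry Point flag, giving it a flags value of 257, while a ZSK carries 256. Below is a minimal sketch, using the dnspython library (my choice, not anything ICANN prescribes), of how you could pull the root's current DNSKEY set and label each key.

# Sketch: list the root zone's DNSKEY records and label KSKs vs ZSKs.
# Assumes the dnspython package (pip install dnspython); not an official ICANN tool.
import dns.resolver
import dns.dnssec

answer = dns.resolver.resolve(".", "DNSKEY")
for key in answer:
    # A flags value of 257 means the Secure Entry Point bit is set, i.e. a KSK;
    # 256 is an ordinary Zone Signing Key.
    role = "KSK" if key.flags == 257 else "ZSK"
    print(role, "key tag", dns.dnssec.key_id(key), "algorithm", key.algorithm)

During the overlap period Conrad describes, a query like this should return two KSKs side by side: the 2010 key and its replacement.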
Conrad said that ICANN is doing the first rollover partly to ensure that the procedures it has in place for changing keys are effective and could be deployed in case of emergency.
That said, this first rotation is going to happen at a snail’s pace.
Key generation is a complex matter, requiring the physical presence of at least three of seven trusted key holders.
These seven individuals possess physical keys to bank-style strong boxes which contain secure smart cards. Three of the seven cards are needed to generate a new key.
Each of the quarterly ZSK signing ceremonies — which are recorded and broadcast live over the internet — takes about five hours.
The first step in the rollover, Conrad said, is to generate the keys at ICANN’s US east coast facility in October this year. A copy will be moved to a facility on the west coast in February.
The first time the public key will appear in DNS will be July 11, 2017, when it will appear alongside the current key.
It will finally replace the current key completely on October 11, 2017, by which time the DNS should be well aware of the new key, Conrad said.
There is some risk of things going wrong, which could affect domains that are DNSSEC-signed, which is another reason for the slowness of the rollover.
If ISPs that support DNSSEC do not start supporting the new KSK before the final switch-over, they’ll fail to correctly resolve DNSSEC-signed domains, which could lead to some sites going dark for some users.
There’s also a risk that the increased DNS packet sizes during the period when both KSKs are in use could cause queries to be dropped by firewalls, Conrad said.
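That concern is straightforward to measure: ask a root server for the root's DNSKEY set with DNSSEC records included and see how big the answer is, and whether it comes back truncated over UDP. Here's a rough sketch with dnspython; the choice of a.root-servers.net and a 4096-byte EDNS buffer are mine, for illustration only.

# Sketch: measure the size of a DNSSEC-enabled DNSKEY response from a root server.
# Assumes dnspython; a.root-servers.net (198.41.0.4) and the 4096-byte EDNS
# buffer are illustrative choices, not recommendations from ICANN or Verisign.
import dns.message
import dns.query
import dns.flags

query = dns.message.make_query(".", "DNSKEY", want_dnssec=True, use_edns=0, payload=4096)
response = dns.query.udp(query, "198.41.0.4", timeout=5)

print("response size:", len(response.to_wire()), "bytes")
print("truncated over UDP:", bool(response.flags & dns.flags.TC))

A noticeably larger answer during the months when both KSKs are published is exactly what some older firewalls and middleboxes may mishandle.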
“Folks who have things configured the right way won’t actually need to do anything but because DNSSEC is relatively new and this software hasn’t really been tested, we need to get the word out to everyone that this change is going to be occurring,” said Conrad.
ICANN will conduct outreach over the coming 15 months via the media, social media and technology conferences, he said.
It is estimated that about 20% of the internet’s DNS resolvers support DNSSEC, but most of those belong to just two companies — Google and Comcast — he said.
The number of signed domains is tiny as a percentage of the 326 million domains in existence today, but still amounts to millions of names.

Verisign says new gTLDs put millions at risk

Kevin Murphy, May 26, 2016, Domain Tech

Verisign has revived its old name collisions security scare story, publishing this week a weighty research paper claiming millions are at risk of man-in-the-middle attacks.
It’s actually a study into how a well-known type of attack, first documented in the 1990s, might become easier due to the expansion of the DNS at the top level.
According to the paper, there might be as many as 238,000 instances per day of query traffic intended for private networks leaking to the public DNS, where attackers could potentially exploit it to do all manner of genuinely nasty things.
But Verisign has seen no evidence of the vulnerability being used by bad guys yet and it might not be as scary as it first appears.
You can read the paper here (pdf), but I’ll attempt to summarize.
The problem concerns a virtually ubiquitous protocol called WPAD, for Web Proxy Auto-Discovery.
It’s used mostly by Windows clients to automatically download a web proxy configuration file that tells their browser how to connect to the web.
Organizations host these files on their local networks. The WPAD protocol tries to find the file using DHCP first, but fails over to DNS.
So, your browser might look for a wpad.dat file on wpad.example.com, depending on what domain your computer belongs to, using DNS.
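In slightly more concrete terms, the client walks down its own domain name, label by label, looking for a wpad host at each level. The sketch below is my own simplification of that "devolution" behaviour, not code from any particular WPAD implementation, and the example domain is made up.

# Sketch: the WPAD hostnames a client might try, in order, for a machine whose
# configured domain is "pc.finance.example.global". A simplified illustration
# of WPAD's DNS devolution, not any vendor's actual implementation.
def wpad_candidates(local_domain):
    labels = local_domain.split(".")[1:]  # drop the host's own label
    candidates = []
    while len(labels) >= 2:  # stop before querying the bare TLD
        candidates.append("wpad." + ".".join(labels))
        labels = labels[1:]
    return candidates

print(wpad_candidates("pc.finance.example.global"))
# ['wpad.finance.example.global', 'wpad.example.global']

Inside the office those names resolve on the local network; from a coffee shop they fall through to the public DNS, which is exactly the leakage Verisign measured.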
The vulnerability arises because companies often use previously undelegated TLDs — such as .prod or .global — on their internal networks. Their PCs could belong to domains ending in .corp, for example, even though .corp isn’t a real TLD in the DNS root.
When these devices are roaming outside their local network, they will still attempt to use the DNS to find their WPAD file. And if the TLD their company uses internally has actually been delegated by ICANN, their WPAD requests “leak” to the registry or to whoever registers a matching domain.
A malicious attacker could register a domain name in a TLD that matches the domain the target company uses internally, allowing him to intercept and respond to the WPAD request, setting himself up as the roaming laptop’s web proxy.
That would basically allow the attacker to do pretty much whatever he wanted to the victim’s browsing experience.
Verisign says it saw 20 million WPAD leaks hit its two root servers every single day when it collected its data, and estimates that 6.6 million users are affected.
The paper says that of the 738 new gTLDs it looked at, 65.7% of them saw some degree of WPAD query leakage.
The ones with the most leaks, in order, were .global, .ads, .group, .network, .dev, .office, .prod, .hsbc, .win, .world, .one, .sap and .site.
It’s potentially quite scary, but there are some mitigating factors.
First, the problem is not limited to new gTLDs.
Yesterday I talked to Matt Larson, ICANN’s new vice president of research (who held the same post at Verisign until a few years ago).
He said ICANN has seen the same problem with .int, which was delegated in 1988. ICANN runs one of .int’s authoritative name servers.
“We did a really quick look at 24 hours of traffic and saw a million and a half queries for domain names of the form wpad.something.int, and that’s just one name server out of several in a 24-hour period,” he said.
“This is not a new problem, and it’s not a problem that’s specific to new gTLDs,” he said.
According to Verisign’s paper, only 2.3% of the WPAD query leaks hitting its root servers were related to new gTLDs. That’s about 238,000 queries every day.
With such a small percentage, you might wonder why new gTLDs are being highlighted as a problem.
I think it’s because organizations typically won’t own the new gTLD domain name that matches their internal domain, something that would eliminate the risk of an attacker exploiting a leak.
Verisign’s report also has limited visibility into the actual degree of risk organizations are experiencing today.
Its research methodology by necessity was limited to observing leaked WPAD queries hitting its two root servers before the new gTLDs in question were delegated.
The company only collected relevant NXDOMAIN traffic to its two root servers — DNS queries that get answers are typically resolved closer to the user in the DNS hierarchy — so it has no visibility into whether the same level of leakage happens post-delegation.
Well aware of the name collisions problem, largely due to Verisign’s 11th-hour epiphany on the subject, ICANN forces all new gTLD registries to wildcard their zones for 90 days after they go live.
All collision names are pointed to 127.0.53.53, a reserved IP address picked in order to catch the attention of network administrators (DNS uses TCP/IP port 53).
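That address makes collisions fairly easy to spot from the affected side: if a supposedly internal name starts resolving to 127.0.53.53, the TLD it sits under has gone live in the root. A quick sketch of such a check with dnspython follows; the hostname is invented for illustration.

# Sketch: warn if an internal name now resolves to ICANN's controlled
# interruption address, meaning its TLD has been delegated publicly.
# "intranet.example.prod" is a made-up internal name; uses dnspython.
import dns.resolver

CONTROLLED_INTERRUPTION = "127.0.53.53"

def collides(hostname):
    try:
        answer = dns.resolver.resolve(hostname, "A")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return False
    return any(rr.address == CONTROLLED_INTERRUPTION for rr in answer)

if collides("intranet.example.prod"):
    print("Name collision: this TLD has been delegated in the public DNS")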
Potentially, at-risk organizations could have fixed their collision problems shortly after the colliding gTLD was delegated, reducing the global impact of the vulnerability.
There’s no good data showing how many networks were reconfigured due to name collisions in the new gTLD program, but there is some anecdotal evidence of admins telling Google to go fuck itself when .prod got delegated.
A December 2015 report from JAS Advisors, which came up with the 127.0.53.53 idea, said the effects of name collisions have been rather limited.
ICANN’s Larson echoed the advice put out by security watchdog US-CERT this week, which among other things urges admins to use proper domain names that they actually control on their internal networks.

It’s official: new gTLDs didn’t kill anyone

Kevin Murphy, December 2, 2015, Domain Tech

The introduction of new gTLDs posed no risk to human life.
That’s the conclusion of JAS Advisors, the consulting company that has been working with ICANN on the issue of DNS name collisions.
In its final report, “Mitigating the Risk of DNS Namespace Collisions”, published last night, JAS described the response to the “controlled interruption” mechanism it designed as “annoyed but understanding and generally positive”.
New text added since the July first draft says: “ICANN has received fewer than 30 reports of disruptive collisions since the first delegation in October of 2013. None of these reports have reached the threshold of presenting a danger to human life.”
That’s a reference to Verisign’s June 2013 claim that name collisions could disrupt “life-supporting” systems such as those used by emergency response services.
Name collisions, you will recall, are scenarios in which a newly delegated TLD matches a string that is already widely used on internal networks.
Such scenarios could lead, and in some cases have led, to problems such as system failures and DNS queries leaking onto the internet.
The applied-for gTLDs .corp and .home have been effectively banned, due to the vast numbers of organizations already using them.
All other gTLDs were obliged, following JAS recommendations, to redirect all non-existent domains to 127.0.53.53, an IP address chosen to put network administrators in mind of port 53, which is used by the DNS protocol.
As we reported a little over a year ago, many administrators responded swearily to some of the first collisions.
JAS says in its final report:

Over the past year, JAS has monitored technical support/discussion fora in search of posts related to controlled interruption and DNS namespace collisions. As expected, controlled interruption caused some instances of limited operational issues as collision circumstances were encountered with new gTLD delegations. While some system administrators expressed frustration at the difficulties, overall it appears that controlled interruption in many cases is having the hoped-for outcome. Additionally, in private communication with a number of firms impacted by controlled interruption, JAS would characterize the overall response as “annoyed but understanding and generally positive” – some even expressed appreciation as issues unknown to them were brought to their attention.

There are a number of other substantial additions to the report, largely focusing on types of use cases JAS believes are responsible for most name collision traffic.
Oftentimes, as with the random 10-character domains Google’s Chrome browser uses for configuration purposes, the collision has no ill effect. In other cases, local system administrators were forced to reconfigure their software to avoid the collision.
The report also reveals that the domain name corp.com, which is owned by long-time ICANN volunteer Mikey O’Connor, receives a “staggering” 30 DNS queries every second.
That works out to almost a billion (946,728,000) queries per year, coming when a misconfigured system or inexperienced user attempts to visit a .corp domain name.

Verisign offers free public DNS

Kevin Murphy, September 30, 2015, Domain Tech

Verisign has launched a free recursive DNS service aimed at the consumer market.
Public DNS, as the service is called, is being positioned as a way to avoid having your browsing history collated and sold for marketing purposes by your ISP.
There’s no charge, and the company is promising not to sell your data. It also does not plan to monetize NXDOMAIN traffic.
So what’s in it for Verisign? According to a FAQ:

One of Verisign’s core operating principles is to be a good steward of the Internet. Providing the Verisign Public DNS service supports the overall ecosystem of DNS and solidifies end-user trust in the critical navigation that they have come to depend upon for their everyday interactions.

Verisign also offers paid-for recursive DNS services to enterprises, so there may be an up-sell opportunity here.
The market for free public DNS currently has big players including Cisco’s OpenDNS and Google.
If you want to use the Verisign service, the IP addresses to switch to are 64.6.64.6 and 64.6.65.6.
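Switching over permanently means pointing your operating system's or router's DNS settings at those two addresses, but you can try the service without touching any settings by sending a query straight to it. A small dnspython sketch, using example.com as a throwaway test name:

# Sketch: query Verisign Public DNS (64.6.64.6 / 64.6.65.6) directly, without
# changing system settings. Uses dnspython; example.com is just a test name.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["64.6.64.6", "64.6.65.6"]

for rr in resolver.resolve("example.com", "A"):
    print(rr.address)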

.sexy may be blocked in Iran

Kevin Murphy, September 16, 2015, Domain Tech

Some networks in Iran appear to be systematically blocking Uniregistry’s .sexy gTLD.
That’s one of the conclusions of a slightly odd experiment commissioned by ICANN.
The newly published An Analysis of New gTLD Universal Acceptance was conducted by APNIC Labs. The idea was to figure out whether there are any issues with new gTLDs on the internet’s DNS infrastructure.
It concluded that there is not — new gTLDs work just fine on the internet’s plumbing.
However, the survey — which comprised over 100 million DNS resolution attempts — showed “One country, Iran, shows some evidence of a piecemeal block of Web names within the .sexy gTLD.”
The sample size for Iranian attempts to access .sexy was just 30 attempts. In most cases, users were able to resolve the names with DNS, but HTTP responses appeared to be blocked.
The survey did not test .porn or .adult names, but it might be safe to assume similar behavior in those gTLDs.
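The pattern APNIC describes, names that resolve in the DNS but never load over HTTP, is simple enough to probe for. Below is a rough sketch of that two-step check using dnspython and the requests library; the .sexy name in it is a placeholder, not one drawn from the APNIC data.

# Sketch: distinguish DNS-level blocking from HTTP-level blocking for a name.
# Uses dnspython and requests; "example.sexy" is a placeholder domain.
import dns.resolver
import requests

name = "example.sexy"

try:
    addresses = [rr.address for rr in dns.resolver.resolve(name, "A")]
    print("DNS OK:", addresses)
except Exception as exc:
    print("DNS failed:", exc)
    addresses = []

if addresses:
    try:
        response = requests.get("http://" + name, timeout=10)
        print("HTTP OK:", response.status_code)
    except requests.RequestException as exc:
        # Resolution worked but the web fetch did not: the pattern APNIC saw in Iran.
        print("HTTP blocked or unreachable:", exc)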
APNIC also concluded that Israel’s .il ccTLD, included in the report as a known example of TLD blocking at the national level, is indeed blocked in Iran and Syria.
The study also found that there may be issues with Adobe’s Flash software, when used in Internet Explorer, when it comes to resolving internationalized domain names.
That conclusion seems to have been reached largely because the test’s methodology saw a Flash advertisement discreetly fetching URLs in the background of web pages using Google Ads.
When the experimenters used HTML 5 to run their scripts instead, there was no problem resolving the names.
The study did not look at some of the perhaps more pressing UA issues, such as the ability for registrants and others to use new gTLD domain names in web applications.

Blue Coat explains .zip screw-up

Kevin Murphy, September 4, 2015, Domain Tech

Security vendor Blue Coat apparently doesn’t check whether domains are actually domains before it advises customers to block them.
The company yesterday published a blog post that sought to explain why it denounced Google’s unlaunched .zip gTLD as “100% shady” even though the only .zip domain in existence leads to google.com.
Unrepentant, Blue Coat continued to insist that businesses should consider blocking .zip domains, while acknowledging there aren’t any.
It said that its censorware treats anything entered into a browser’s address bar as a URL, so it has been treating file names that end in .zip — the common format for compressed archive files — as if they are .zip domain names. The blog states:

when one of those URLs shows up out on the public Internet, as a real Web request, we in turn treat it as a URL. Funny-looking URLs that don’t resolve tend to get treated as Suspicious — after all, we don’t see any counter-balancing legitimate traffic there.
Further, if a legal domain name gets enough shady-looking traffic — with no counter-evidence of legitimate Web traffic — it’s possible for one of our AI systems to conclude that the behavior isn’t changing, and that it deserves a Suspicious rating in the database. So it gets one.

In other words, Blue Coat has been categorizing Zip file names that somehow find their way into a browser address bar as .zip domain names.
That may sound like a software bug that Blue Coat needs to fix, but it’s still telling people to block Google’s gTLD anyway, writing:

In conclusion, none of the .zip “domains” we see in our traffic logs are requests to registered sites. Nevertheless, we recommend that people block these requests, until valid .zip domains start showing up.

That’s a slight change of position from its original “Businesses should consider blocking traffic that leads to the riskiest TLDs”, but it still strikes me as irresponsible.
The company has still not disclosed the real numbers behind any of the percentages in its report, so we still have no idea whether it was fair to label, for example, Famous Four’s .review as “100% shady”.
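For what it's worth, a more careful filter would at least check whether a ".zip" string it sees is a registered, resolvable name before deciding how to score it. The sketch below is a simplified illustration of that idea, not Blue Coat's actual logic.

# Sketch: decide whether a ".zip" string seen in traffic is a live domain
# name or just a file name. Simplified illustration, not Blue Coat's system.
import dns.resolver

def looks_like_live_zip_domain(candidate):
    host = candidate.split("/")[0]          # strip any path component
    if not host.endswith(".zip"):
        return False
    try:
        dns.resolver.resolve(host, "A")     # does it actually resolve?
        return True
    except Exception:
        return False

print(looks_like_live_zip_domain("backup-2015-09.zip"))   # False: just a file name
print(looks_like_live_zip_domain("nic.zip"))              # True only if registered and resolving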

Concern over mystery TMCH outage

Kevin Murphy, May 20, 2015, Domain Tech

The Trademark Clearinghouse is investigating the causes and impact of an outage that is believed to have hit its primary database for 10 hours last Friday.
Some in the intellectual property community are concerned that the downtime may have allowed people to register domain names without receiving Trademark Claims notices.
The downtime was confirmed as unscheduled by the TMCH on a mailing list, but requests for more information sent its way today were deflected to ICANN.
An ICANN spokesperson said the outage is being analyzed, a process expected to take a couple of days.
The problem affected the IBM-administered Trademark Database, which registrars query to determine whether they need to serve up a Claims notice when a customer tries to register a domain that matches a trademark.
I gather that registries are supposed to reject registration attempts if they cannot get a definitive answer from the TMDB, but some are concerned that that may not have been the case during the downtime.
Over 145,000 Claims notices have been sent to trademark owners since the TMCH came online over a year ago.
(UPDATE: This story was edited May 21 to clarify that it is the TMCH conducting the investigation, rather than ICANN.)

Site names and shames shoddy TLD support

Kevin Murphy, April 20, 2015, Domain Tech

A self-professed geek from Australia is running a campaign to raise awareness of new gTLDs by naming and shaming big companies that don’t provide comprehensive TLD support on their web sites.
SupportTheNew.domains, run by university coder Stuart Ryan, has been around since last June and currently indexes support problems at dozens of web sites.
The likes of Facebook, Amazon, Adobe and Apple are among those whose sites are said to offer incomplete support for new gTLDs.
It’s the first attempt I’m aware of to list “universal acceptance” failures in any kind of structured way.
Ryan says on the site that he set up the campaign after running into problems signing up for services using his new .email email address.
The site relies on submissions from users and seems to be updated whenever named companies respond to support tickets.
Universal acceptance is a hot topic in the new gTLD space, with ICANN recently creating a steering group to promote blanket TLD support across the internet.
Often, sites rely on outdated lists of TLDs, or on regular expressions that assume TLDs are limited to three characters, when they attempt to validate domains in email addresses or URLs.
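The classic offender looks something like the first pattern below, which caps the TLD at three letters and so rejects perfectly valid addresses such as Ryan's .email one. Both patterns are illustrations of the general problem, not code lifted from any named company.

# Sketch: a typical too-strict email check versus a more permissive one.
# Neither pattern is from any particular company's code; both are illustrations.
import re

OLD_STYLE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,3}$")   # assumes TLDs are 2-3 chars
PERMISSIVE = re.compile(r"^[^@\s]+@[^@\s]+\.[A-Za-z]{2,}$")   # any TLD length

address = "stuart@example.email"   # a .email address like the one Ryan uses

print(bool(OLD_STYLE.match(address)))    # False: ".email" is rejected
print(bool(PERMISSIVE.match(address)))   # True

The safer options are to stop second-guessing TLD length altogether or to validate against the live list of delegated TLDs.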

Google eliminating domains from search results

Kevin Murphy, April 17, 2015, Domain Tech

Google has made another move to make domain names less relevant to internet users.
The company will no longer display URLs in search results pages for any web site that adopts a certain technical standard.
Instead, the name of the web site will be given. So instead of a DI post showing up with “domainincite.com” in results, it would be “Domain Incite”.
Google explained the change in a blog post incorrectly titled “Better presentation of URLs in search results”.
Webmasters wishing to present a company name or brand instead of a domain name need to publish metadata on their home pages. It’s just a few lines of code.
Google will decide whether to make the change based on whether the name meets these criteria:

Be reasonbly [sic] similar to your domain name
Be a natural name used to refer to the site, such as “Google,” rather than “Google, Inc.”
Be unique to your site—not used by some other site
Not be a misleading description of your site

Code samples and the rules are published here.
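As best I can tell from Google's documentation, the few lines of code in question are schema.org WebSite structured data on the site's home page. The snippet below is my own minimal reconstruction of that sort of tag, generated from Python and using DI's own name and URL as placeholders; Google's published samples remain the authoritative reference.

# Sketch: the kind of schema.org WebSite markup Google asks site owners to
# publish on their home page. Placeholder values; see Google's own docs for
# the authoritative sample.
import json

markup = {
    "@context": "http://schema.org",
    "@type": "WebSite",
    "name": "Domain Incite",        # the name to show in place of the URL
    "alternateName": "DI",          # optional shorter alternative
    "url": "http://domainincite.com/",
}

print('<script type="application/ld+json">')
print(json.dumps(markup, indent=2))
print("</script>")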
It strikes me that Google, by demanding naming uniqueness, is opening itself up for a world of hurt.
Could there be a landrush among non-unique brands? How will disputes be handled?
Right now the change has been made only to mobile search results and only in the US, but Google hinted that it could roll out elsewhere too.

More security issues prang ICANN site

Kevin Murphy, March 3, 2015, Domain Tech

ICANN has revealed details of a security problem on its web site that could have allowed new gTLD registries to view data belonging to their competitors.
The bug affected its Global Domains Division customer relationship management portal, which registries use to communicate with ICANN on issues related to delegation and launch.
ICANN took the GDD portal down for three days, from when the bug was reported on February 27 until last night, while it closed the hole.
The vulnerability would have enabled authenticated users to see information from other users’ accounts.
ICANN tells me the issue was caused because it had misconfigured some third-party software — I’m guessing the Salesforce.com platform upon which GDD runs.
A spokesperson said that the bug was reported by a user.
No third parties would have been able to exploit it, but ICANN has been coy about whether it believes any registries used the bug to access their competitors’ accounts.
ICANN has ‘fessed up to about half a dozen crippling security problems in its systems since the launch of the new gTLD program.
Just in the last year, several systems have seen downtime due to vulnerabilities or attacks.
A similar kind of privilege escalation bug took down the Centralized Zone Data Service last April.
The RADAR service for registrars was offline for two weeks after being hacked last May.
A phishing attack against ICANN staff in December enabled hackers to view information not normally available to the public.