Verisign confirms .gov downtime, blames algorithm

Kevin Murphy, August 15, 2013, Domain Tech

Verisign this morning confirmed yesterday’s reports that the .gov top-level domain went down for some internet users due to a DNSSEC problem, which it said was related to an algorithm change.

In a posting to various mailing lists, Verisign principal engineer Duane Wessels said:

On the morning of August 14, a relatively small number of networks may have experienced an operational disruption related to the signing of the .gov zone. In preparation for a previously announced algorithm rollover, a software defect resulted in publishing the .gov zone signed only with DNSSEC algorithm 8 keys rather than with both algorithm 7 and 8. As a result .gov name resolution may have failed for validating recursive name servers. Upon discovery of the issue, Verisign took prompt action to restore the valid zone.

Verisign plans to proceed with the previously announced .gov algorithm rollover at the end of the month with the zone being signed with both algorithms for a period of approximately 10 days.

This clarifies that the problem was slightly different to what had been assumed yesterday.

It was related to a change of the cryptographic algorithm used to create .gov’s DNSSEC keys, a relatively rare event, rather than a scheduled key rollover, which is a much more frequent occurrence.
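
For the curious, here’s a minimal sketch of how you could list the algorithms present in a zone’s DNSKEY RRset yourself. It uses the third-party dnspython library and a public resolver, both my own assumptions rather than anything Verisign publishes; during the glitch, a validating resolver would have found only algorithm 8 keys for .gov where it expected both 7 and 8.

```python
# Minimal sketch (not Verisign's tooling): list the DNSSEC algorithms
# advertised in a zone's DNSKEY RRset, using the dnspython library.
import dns.dnssec
import dns.message
import dns.query
import dns.rdatatype

RESOLVER = "8.8.8.8"  # assumption: any resolver that passes DNSSEC records through

def dnskey_algorithms(zone):
    """Return the set of algorithm numbers found in the zone's DNSKEY RRset."""
    query = dns.message.make_query(zone, dns.rdatatype.DNSKEY, want_dnssec=True)
    response = dns.query.tcp(query, RESOLVER, timeout=5)  # TCP avoids UDP truncation
    algorithms = set()
    for rrset in response.answer:
        if rrset.rdtype == dns.rdatatype.DNSKEY:
            for rdata in rrset:
                algorithms.add(rdata.algorithm)
    return algorithms

if __name__ == "__main__":
    for alg in sorted(dnskey_algorithms("gov.")):
        # Algorithm 7 is RSASHA1-NSEC3-SHA1, algorithm 8 is RSASHA256.
        print(alg, dns.dnssec.algorithm_to_text(alg))
```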

The problem would only have made .gov domains (and consequently the web sites, email and other services that depend on them) inaccessible to users of networks where DNSSEC validation is strictly enforced, which is still a relatively small population.

The US ISP with the strongest support for DNSSEC is Comcast. Since turning on its validators it has reported dozens of instances of DNSSEC failing — mostly in second-level .gov domains, where DNSSEC is mandated by US policy.

On two other occasions Comcast has blogged about the whole .gov TLD failing DNSSEC validation due to problems keeping keys up to date.

The general problem is widespread enough, and the impact severe enough, that Comcast has had to create an entirely new technology to prevent borked key rollovers making web sites go dark for its customers.

Called Negative Trust Anchors, it’s basically a Band-Aid that allows the ISP to deliberately ignore DNSSEC on a given domain while it waits for that domain’s owner to sort out its key problem.
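
Comcast’s implementation lives inside its resolvers and isn’t something I can reproduce here, but the idea is simple enough to sketch. The snippet below is purely conceptual, with a hypothetical domain and expiry rather than anything from Comcast’s deployment: the operator lists zones whose DNSSEC is known to be broken, and the resolver treats answers under those zones as unsigned rather than bogus until the entry expires.

```python
# Conceptual sketch of a Negative Trust Anchor check inside a validating
# resolver. Not Comcast's code; the domain and expiry below are hypothetical.
import time

# Zones whose DNSSEC is known-broken, added manually by the resolver operator,
# mapped to the time at which the exception should lapse.
negative_trust_anchors = {
    "example.gov.": time.time() + 86400,  # ignore DNSSEC for this zone for a day
}

def validation_required(qname):
    """Return False if the name falls under an unexpired Negative Trust Anchor."""
    name = qname if qname.endswith(".") else qname + "."
    labels = name.split(".")
    # Walk up the tree: mail.example.gov. should match an anchor for example.gov.
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        expiry = negative_trust_anchors.get(suffix)
        if expiry and expiry > time.time():
            return False  # treat answers in this zone as insecure, not bogus
    return True

print(validation_required("mail.example.gov"))  # False while the anchor is active
print(validation_required("example.com"))       # True: validate normally
```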

The technology was created following the widely reported nasa.gov outage last year.

It’s really little wonder that so few organizations are interested in deploying DNSSEC today.

Yesterday’s .gov problem may have been minor, lasting only an hour or two, but had the affected TLD been .com, and had DNSSEC deployment been more widespread, everyone on the planet would have noticed.

Under ICANN contract, DNSSEC is mandatory for new gTLDs at the top level, but not the second level.

Reports: .gov fails due to DNSSEC error

Kevin Murphy, August 14, 2013, Domain Tech

The .gov top-level domain suffered a DNSSEC problem today and was unavailable to some internet users, according to reports.

According to mailing lists and the SANS Internet Storm Center, it appeared that .gov rolled one of its DNSSEC keys without telling the root zone about the update.

This meant that anyone whose DNS servers do strict DNSSEC validation — a relatively small number of networks — would have been unable to access .gov web sites, email and other resources.
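
The broken link in a case like this is the chain between the DS records the root publishes for .gov and the DNSKEY records .gov itself serves. Here’s a rough sketch of that consistency check using the third-party dnspython library and a public resolver (both my assumptions; this is not how Verisign or SANS diagnosed the incident):

```python
# Rough sketch: check that at least one DS record published by the parent
# (the root, for .gov) corresponds to a DNSKEY actually served by the child zone.
import dns.dnssec
import dns.message
import dns.name
import dns.query
import dns.rdatatype

RESOLVER = "8.8.8.8"  # assumption: any resolver that returns DNSSEC record types

def fetch_rrset(zone, rdtype):
    query = dns.message.make_query(zone, rdtype, want_dnssec=True)
    response = dns.query.tcp(query, RESOLVER, timeout=5)
    for rrset in response.answer:
        if rrset.rdtype == rdtype:
            return rrset
    return None

def chain_is_consistent(zone="gov."):
    name = dns.name.from_text(zone)
    parent_ds = fetch_rrset(zone, dns.rdatatype.DS)       # published by the root
    child_keys = fetch_rrset(zone, dns.rdatatype.DNSKEY)  # published by the zone
    if parent_ds is None or child_keys is None:
        return False
    # Compute DS digests for every key in the zone and compare with the parent's.
    computed = [dns.dnssec.make_ds(name, key, digest)
                for key in child_keys
                for digest in ("SHA1", "SHA256")]
    return any(ds in computed for ds in parent_ds)

if __name__ == "__main__":
    print("DS/DNSKEY chain looks consistent:", chain_is_consistent("gov."))
```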

As a matter of policy, all second-level .gov domains have to be DNSSEC-signed.

The problem was corrected quite quickly — looks like within an hour or two — but as SANS noted, caching issues may prolong the impact.

Both .gov and the root zone are managed by Verisign, which isn’t on the best of terms with the US government at the moment.

Dotless domains “dangerous”, security study says

Kevin Murphy, August 6, 2013, Domain Tech

An independent security study has given ICANN a couple dozen very good reasons to continue to outlaw “dotless” domain names, but stopped short of recommending an outright ban.

The study, conducted by boutique security outfit Carve Systems and published by ICANN this morning, confirms that dotless domains (which are just what they sound like: a bare TLD label with no second-level domain and no dot) are potentially “dangerous”.

If ICANN were to allow dotless domains, internet users could unwittingly send private data across the public internet when they believe it is staying on their local network, Carve found.

That’s basically the same “internal name collision” problem outlined in a separate paper, also published today, by Interisle Consulting (more on that later).
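
To make the failure mode concrete, here’s a simplified sketch of the name-qualification behaviour at issue. Real stub resolvers differ in the details, and the search suffixes below are hypothetical examples, not taken from the Carve report:

```python
# Simplified sketch of why dotless TLDs collide with local name qualification.
# Real stub resolvers vary; the search suffixes here are hypothetical examples.
import socket

SEARCH_SUFFIXES = ["corp.example.com", "example.com"]  # a typical internal search list

def candidates(name):
    """The lookups a stub resolver might try, in order, for a single-label name."""
    if "." in name:
        return [name]
    # Single-label names are tried with each search suffix first, then as-is.
    return [f"{name}.{suffix}" for suffix in SEARCH_SUFFIXES] + [name]

def resolve_first(name):
    for candidate in candidates(name):
        try:
            return candidate, socket.gethostbyname(candidate)
        except socket.gaierror:
            continue
    return None, None

# Today the bare label "mail" only resolves if an internal suffix matches.
# If "mail" became a live dotless TLD, the final as-is lookup would start
# succeeding on the public internet, and traffic intended for an internal
# mail server could quietly leave the network.
print(resolve_first("mail"))
```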

But dotless domains would also open up networks to serious vulnerabilities such as cookie leakage and cross-site scripting attacks, according to the report.

“A bug in a dotless website could be used to target any website a user frequents,” it says.

Internet Explorer, one of the many applications tested by Carve, automatically assumes dotless domains are local network resources and gives them a higher degree of trust, it says.

Such domains also pose risks to users of standard local networking software and residential internet routers, the study found. It’s not just Windows boxes either — MacOS and Unix could also be affected.

These are just a few of the 25 distinct security risks Carve identified, 10 of which are considered serious.

ICANN has a default prohibition on dotless gTLDs in the new gTLD Applicant Guidebook, but it’s allowed would-be registries to specially request the ability to go dotless via Extended Evaluation and the Registry Services Evaluation Process (with no guarantee of success, of course).

So far, Google is the only high-profile new gTLD applicant to say it wants a dotless domain. It wants to turn .search into such a service and expects to make a request for it via RSEP.

Other portfolio applicants, such as Donuts and Uniregistry, have also said they’re in favor of dotless gTLDs.

Given the breadth of the potential problems identified by Carve, you might expect a recommendation that dotless domains should be banned outright. But that didn’t happen.

Instead, the company has recommended that only certain strings likely to have a huge impact on many internet users — such as “mail” and “local” — be permanently prohibited as dotless TLDs.

It also recommends lots of ways ICANN could allow dotless domains and mitigate the risk. For example, it suggests massive educational outreach to hardware and software vendors and to end users.

Establish guidelines for software and hardware manufacturers to follow when selecting default dotless names for use on private networks. These organizations should use names from a restricted set of dotless domain names that will never be allowed on the public Internet.

Given that most people have never heard of ICANN, that internet standards generally take a long time to gain adoption, and allowing for regular hardware upgrade cycles, I can’t see ICANN pulling off such a feat for at least five to 10 years.

I can’t see ICANN approving any dotless domains any time soon, but it does appear to have wiggle-room in future. ICANN said:

The ICANN Board New gTLD Program Committee (NGPC) will consider dotless domain names and an appropriate risk mitigation approach at its upcoming meeting in August.

NTIA alarmed as Verisign hints that it will not delegate new gTLDs

Kevin Murphy, August 5, 2013, Domain Tech

Verisign has escalated its war against competition by telling its government masters that it is not ready to add new gTLDs to the DNS root, raising eyebrows at NTIA.

The company told the US National Telecommunications and Information Administration in late May that the lack of uniform monitoring across the 13 root servers means that starting to delegate new gTLDs now would put internet security and stability at risk.

In response, the NTIA told Verisign that its recent position on DNS security is “troubling”. It demanded confirmation that Verisign is not planning to block new gTLDs from being delegated.

The letters (pdf and pdf) were published by ICANN over the weekend, over two months after the first was sent.

Verisign senior VP Pat Kane wrote in the May letter:

we strongly believe certain issues have not been addressed and must be addressed before any root zone managers, including Verisign, are ready to implement the new gTLD Program.

We want to be clearly on record as reporting out this critical information to NTIA unequivocally as we believe a complete assessment of the critical issues remain unaddressed which left unremediated could jeopardize the security and stability of the DNS.

we strongly recommend that the previous advice related to this topic be implemented and the capability for root server system monitoring, instrumentation, and management capabilities be developed and operationalized prior to beginning delegations.

Kane’s concerns were first outlined by Verisign in its March 2013 open letter to ICANN, which also expressed serious worries about issues such as internal name collisions.

Verisign is so far the only root server operator to publicly express concerns about the lack of coordinated monitoring, and many people believe that the company is simply trying desperately to delay competition for its $800 million .com business for as long as possible.

These people note that in early November 2012, Verisign signed a joint letter with ICANN and NTIA that said:

the Root Zone Partners are able to process at least 100 new TLDs per week and will commit the necessary resources to meet all root zone management volume increases associated with the new gTLD program

That letter was signed before NTIA stripped Verisign of its right to increase .com prices every year, depriving it of tens or hundreds of millions of dollars of additional revenue.

Some say that Verisign is raising spurious security concerns now purely because it’s worried about its bottom line.

NTIA is beginning to sound like one of these critics. In its response to the May 30 letter, sent by NTIA and published by ICANN on Saturday, deputy associate administrator Vernita Harris wrote:

NTIA and VeriSign have historically had a strong working relationship, but inconsistencies in VeriSign’s position in recent months are troubling… NTIA fully expects VeriSign to process change requests when it receives an authorization to delegate a new gTLD. So that there will be no doubt on this point, please provide me a written confirmation no later than August 16, 2013 that VeriSign will process change requests for the new gTLD program when authorized to delegate a new gTLD.

Harris said that a system is already in place that would allow the emergency rollback of the root zone, basically ‘un-delegating’ any gTLD that proves to cause a security or stability problem.

This would be “sufficient for the delegation of new gTLDs”, she wrote.

Could Verisign block new gTLDs?

It’s worth a reminder at this point that ICANN’s power over the DNS root is something of a facade.

Verisign, as operator of the master A root server, holds the technical keys to the kingdom. Under its NTIA contract, it only processes changes to the root — such as adding a TLD — when NTIA tells it to.

NTIA in practice merely passes on the recommendations of IANA, the department within ICANN that has the power to ask for changes to the root zone, also under contract with NTIA.

In theory, either Verisign or NTIA could refuse to delegate new gTLDs — recall that when .xxx was heading to the root, the European Union asked NTIA to delay the delegation.

In practice, it seems unlikely that either party would stand in the way of new gTLDs at the root, but the Verisign rhetoric in recent months suggests that it is in no mood to play nicely.

For Verisign to refuse to delegate gTLDs out of commercial self-interest would be seen as irresponsible, however, and would likely put its role as custodian of the root at risk.

That said, if Verisign turns out to be the lone voice of sanity when it comes to DNS security, it is ICANN and NTIA that will ultimately look like they’re the irresponsible parties.

What’s next?

Verisign now has until August 16 to confirm that it will not make trouble. I expect it to do so under protest.

According to the NTIA, ICANN’s Root Server System Advisory Committee (RSSAC) is currently working on two documents — RSSAC001 and RSSAC002 — that will outline “the parameters of the basis of an early warning system” addressing Verisign’s concerns about root server management.

These documents are likely to be published within weeks, according to the NTIA letter.

Meanwhile, we’re also waiting for the publication of Interisle Consulting’s independent report into the internal name collision issue, which is expected to recommend that gTLDs such as .corp and .home are put on hold. I’m expecting this to be published any day now.

Two banks go down after forgetting to renew domains

Kevin Murphy, July 31, 2013, Domain Tech

Two UK banks suffered downtime over the weekend after apparently failing to renew their domain name registrations.

Clydesdale Bank and Yorkshire Bank, which offer online banking services at cbonline.co.uk and ybonline.co.uk respectively, both blamed a “systems update” for the downtime.

But some customers reported seeing a registrar’s renewal page when they attempted to access the sites, and others are reportedly still seeing difficulties consistent with DNS propagation delays.

Both domain names have expiry dates of July 26, according to Whois records.
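
This is the sort of thing that is trivial to monitor. As a minimal sketch, assuming the standard whois command-line client is installed (output formats vary by registry, so a production check would want something more robust than string matching):

```python
# Minimal sketch: pull the Whois record for each domain and surface any line
# that mentions an expiry date. Assumes the standard "whois" CLI is installed.
import subprocess

def expiry_lines(domain):
    output = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
    return [line.strip() for line in output.splitlines() if "expir" in line.lower()]

for domain in ("cbonline.co.uk", "ybonline.co.uk"):
    print(domain, expiry_lines(domain))
```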

Thankfully, the banks, both of which are owned by National Australia Bank, managed to retain control of their domains. If they’d fallen into third-party hands, things could have been a lot worse.

Combined, the banks have revenue of a couple of billion pounds.