Verisign confirms .gov downtime, blames algorithm

Kevin Murphy, August 15, 2013, Domain Tech

Verisign this morning confirmed yesterday’s reports that the .gov top-level domain went down for some internet users due to a DNSSEC problem, which it said was related to an algorithm change.

In a posting to various mailing lists, Verisign principal engineer Duane Wessels said:

On the morning of August 14, a relatively small number of networks may have experienced an operational disruption related to the signing of the .gov zone. In preparation for a previously announced algorithm rollover, a software defect resulted in publishing the .gov zone signed only with DNSSEC algorithm 8 keys rather than with both algorithm 7 and 8. As a result .gov name resolution may have failed for validating recursive name servers. Upon discovery of the issue, Verisign took prompt action to restore the valid zone.

Verisign plans to proceed with the previously announced .gov algorithm rollover at the end of the month with the zone being signed with both algorithms for a period of approximately 10 days.

This clarifies that the problem was slightly different to what had been assumed yesterday.

It was related to a change of the cryptographic algorithm used to create .gov’s DNSSEC keys, a relatively rare event, rather than a scheduled key rollover, which is a rather more frequent occurrence.
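For the curious, the difference is visible in the zone’s DNSKEY records, each of which carries an algorithm number. A quick look using Python’s dnspython library (my choice of tool here, not anything Verisign uses) shows which algorithms currently sign a zone; during the planned dual-signing window, .gov should list keys for both algorithm 7 (RSASHA1-NSEC3-SHA1) and algorithm 8 (RSA/SHA-256):

```python
import dns.resolver

# List .gov's DNSKEY records and their algorithm numbers. During the
# dual-signing window both 7 and 8 should appear; during Wednesday's
# incident only algorithm 8 keys were published.
for rdata in dns.resolver.resolve("gov.", "DNSKEY"):
    print(f"flags={rdata.flags} algorithm={int(rdata.algorithm)}")
```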

The problem would only have made .gov domains (and consequently web sites, email, etc) inaccessible for users of networks where DNSSEC validation is strictly enforced, still a fairly small set of networks.

The US ISP with the strongest support for DNSSEC is Comcast. Since turning on its validators it has reported dozens of instances of DNSSEC failing — mostly in second-level .gov domains, where DNSSEC is mandated by US policy.

On two other occasions Comcast has blogged about the whole .gov TLD failing DNSSEC validation due to problems keeping keys up to date.

The general problem is widespread enough, and the impact severe enough, that Comcast has had to create an entirely new technology to prevent borked key rollovers from making web sites go dark for its customers.

Called Negative Trust Anchors, it’s basically a Band-Aid that allows the ISP to deliberately ignore DNSSEC on a given domain while it waits for that domain’s owner to sort out its key problem.
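Comcast’s implementation lives inside its validating resolvers, but the behaviour is easy to sketch: for any name at or below a negative trust anchor, validation is simply switched off until the anchor is removed. Here is a toy illustration in Python using dnspython; the domain and resolver address are placeholders, and a real NTA sits inside the validator itself rather than in client code like this:

```python
import dns.flags
import dns.message
import dns.name
import dns.query

# Hypothetical negative trust anchors: validation is suspended for
# anything at or below these names until their operators fix their keys.
NTAS = [dns.name.from_text("example.gov.")]

def lookup(qname: str, resolver_ip: str = "192.0.2.53"):
    name = dns.name.from_text(qname)
    query = dns.message.make_query(name, "A", want_dnssec=True)
    if any(name.is_subdomain(nta) for nta in NTAS):
        # Setting CD ("checking disabled") asks the upstream validator
        # to return the answer even if its signatures don't check out,
        # which is roughly the effect an NTA has inside the resolver.
        query.flags |= dns.flags.CD
    return dns.query.udp(query, resolver_ip, timeout=5)
```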

The technology was created following the widely reported nasa.gov outage last year.

It’s really little wonder that so few organizations are interested in deploying DNSSEC today.

Yesterday’s .gov problem may have been minor, lasting only an hour or two, but had the affected TLD been .com, and had DNSSEC deployment been more widespread, everyone on the planet would have noticed.

Under ICANN contract, DNSSEC is mandatory for new gTLDs at the top level, but not the second level.

Reports: .gov fails due to DNSSEC error

Kevin Murphy, August 14, 2013, Domain Tech

The .gov top-level domain suffered a DNSSEC problem today and was unavailable to some internet users, according to reports.

According to mailing lists and the SANS Internet Storm Center, it appeared that .gov rolled one of its DNSSEC keys without telling the root zone about the update.

This meant that anyone whose DNS servers do strict DNSSEC validation — a relatively small number of networks — would have been unable to access .gov web sites, email and other resources.
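If the reports were right, the symptom would have been visible as a mismatch between the DS record the root publishes for .gov and digests computed from the DNSKEYs the zone was actually serving. A rough check with Python’s dnspython (again my tooling choice; it compares SHA-256 digests only):

```python
import dns.dnssec
import dns.name
import dns.resolver

name = dns.name.from_text("gov.")

# The DS records for .gov live in the parent zone, the root.
ds_in_root = set(dns.resolver.resolve(name, "DS"))

# Compute a SHA-256 DS digest for each key-signing key (SEP bit set)
# that the zone itself serves; at least one should match the root.
computed = {dns.dnssec.make_ds(name, key, "SHA256")
            for key in dns.resolver.resolve(name, "DNSKEY")
            if key.flags & 0x1}
if not ds_in_root & computed:
    print("Mismatch: the root does not vouch for any served KSK")
```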

As a matter of policy, all second-level .gov domains have to be DNSSEC-signed.

The problem was corrected quite quickly, seemingly within an hour or two, but as SANS noted, caching issues may prolong the impact.

Both .gov and the root zone are managed by Verisign, which isn’t on the best of terms with the US government at the moment.

Google starts supporting DNSSEC

Kevin Murphy, March 21, 2013, Domain Tech

Google has started fully supporting DNSSEC, the domain name security standard, on its Public DNS service.

According to a blog post from the company, while the free-to-use DNS resolution service has always passed on DNSSEC requests, now its resolvers will also validate DNSSEC signatures.

What does this mean?

Well, users of Public DNS will get protected from DNS cache poisoning attacks, but only for the small number of domains (such as domainincite.com) that are DNSSEC-signed.

It also means that if a company borks its DNSSEC implementation or key rollover, it’s likely to cause problems for Public DNS users. Comcast, an even earlier adopter, sees such problems pretty regularly.
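The difference is visible to clients in the AD (“authenticated data”) flag, which a validating resolver sets on answers it has verified. A quick probe with Python’s dnspython, using domainincite.com since it is signed:

```python
import dns.flags
import dns.message
import dns.query

# Query Google Public DNS with the DO (DNSSEC OK) bit set and check
# whether the answer comes back with the AD flag, i.e. validated.
query = dns.message.make_query("domainincite.com", "A", want_dnssec=True)
response = dns.query.udp(query, "8.8.8.8", timeout=5)
print("validated" if response.flags & dns.flags.AD else "not validated")
```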

But the big-picture story is that a whole bunch of new validating resolvers have been added to the internet, providing a boost to DNSSEC’s protracted global roll-out.

Google said:

Currently Google Public DNS is serving more than 130 billion DNS queries on average (peaking at 150 billion) from more than 70 million unique IP addresses each day. However, only 7% of queries from the client side are DNSSEC-enabled (about 3% requesting validation and 4% requesting DNSSEC data but no validation) and about 1% of DNS responses from the name server side are signed. Overall, DNSSEC is still at an early stage and we hope that our support will help expedite its deployment.

One has to wonder whether Google’s participation in the ICANN new gTLD program — with its mandatory DNSSEC at the registry level — encouraged the company to adopt the technology.

How NCC plans to revolutionize domain name security with .secure gTLD

The proposed .secure generic top-level domain is now officially contested, after NCC Group, best known in the domain industry for its data escrow services, announced a bid.

Newly formed NCC subsidiary Artemis Internet Inc, based in San Francisco, is the official applicant.

According to Artemis chief technology officer Alex Stamos, who co-founded security testing firm iSEC Partners and sold it to NCC for $22.8 million two years ago, this is a hard security play.

The .secure gTLD would be all about enforcing strict security policies on registrants, he said.

“Right now there are a lot of interesting security technologies out there, but they’re generally not very effective because they’re optional,” he said.

As well as premium pricing and a manual registrant verification process expected to take about two weeks – complete with mailing address confirmation and two-factor authentication tokens – Artemis plans to force registrants to adhere to certain baseline security policies.

For example, all .secure web sites would have to be served entirely over HTTPS, Stamos said. The only permissible use of a standard port 80 URL would be to redirect to the encrypted site.

The same would go for mail servers – they’d all have to use TLS to encrypt email as standard.

“When you go to bank.secure you’ll know that the software and servers at the other end are going to make the most secure decisions possible,” Stamos said.

Artemis would scan its registrants’ sites for compliance with these baseline rules, looking out for things such as botched SSL implementations.
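Artemis has not said how its scanner would work, but the port 80 rule at least is mechanically checkable. Here is a sketch in Python of that single test; the function name is made up for illustration:

```python
import http.client

def port80_redirects_to_https(host: str) -> bool:
    """Return True if the site's only answer on port 80 is a
    redirect to its HTTPS equivalent, per the rule described above.
    Illustrative only; Artemis's actual checks are not public."""
    conn = http.client.HTTPConnection(host, 80, timeout=10)
    try:
        conn.request("GET", "/")
        resp = conn.getresponse()
        location = resp.getheader("Location") or ""
        return (resp.status in (301, 302, 307, 308)
                and location.startswith("https://"))
    finally:
        conn.close()
```

A full scanner would obviously also have to cover the SSL and mail server rules, but the principle is the same.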

But Artemis wants to take it a step further. It is also proposing a new protocol, Domain Policy Framework, which would let registrants publish their security policies in the DNS.

Stamos said the company has set up a Domain Policy Working Group to develop the spec, which it plans to submit to the IETF for standardization before the end of the year.

The other members of the working group, promised to include some “influential” names in financial services, software and social media, will be announced in July.

DPF would work alongside the existing DNSSEC and DANE protocols to enable registrants to specify, for example, which Certificate Authorities browsers should trust when accessing their .secure domain, preventing certain types of attacks, Stamos said.
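The DPF spec itself is not yet public, but the DANE half of that equation already exists: TLSA records, published in signed DNS, pin the certificate or CA a client should accept for a given service. A lookup sketch with Python’s dnspython, using Stamos’s hypothetical bank.secure (which, of course, does not resolve today):

```python
import dns.resolver

# DANE TLSA records live at _port._protocol.<domain> and constrain
# which certificate (or CA) a client should accept for that service.
# "bank.secure" is the hypothetical name from the article.
for rdata in dns.resolver.resolve("_443._tcp.bank.secure.", "TLSA"):
    print(rdata.usage, rdata.selector, rdata.mtype, rdata.cert.hex())
```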

Obviously, this system is not going to work without support from browser software, but Stamos said he’s hopeful that the big vendors will embrace the DPF spec.

“The most innovative and forward-leaning browsers will support it first,” he said.

Domains in .secure would still be accessible by non-compliant browsers, he said.

ARI Registry Services has been hired to manage the back-end registry, but Artemis is also building a secondary registry system for storing the DPF records, which it plans to offer to other TLD registries.

NCC plans to invest up to £6 million ($9.7 million) in Artemis over the next 15 months, according to a press release.

Another firm, Domain Security Company, also plans to apply for .secure.

Experts say piracy law will break the internet

Kevin Murphy, May 26, 2011, Domain Tech

Five of the world’s leading DNS experts have come together to draft a report slamming America’s proposed PROTECT IP Act, comparing it to the Great Firewall of China.

In a technical analysis of the bill’s provisions, the authors conclude that it threatens to weaken the security and stability of the internet, putting it at risk of fragmentation.

The bill, proposed by Senator Leahy, would force DNS server operators, such as ISPs, to intercept and redirect traffic destined for domains identified as hosting pirated content.

The new paper says this behavior is easily circumvented, is incompatible with DNS security, and would cause more problems than it solves.

The paper was written by Steve Crocker, Shinkuro; David Dagon, Georgia Tech; Dan Kaminsky, DKH; Danny McPherson, Verisign; and Paul Vixie, Internet Systems Consortium.

These are some of the brightest guys in the DNS business. Three sit on ICANN’s Security and Stability Advisory Committee and Crocker is vice-chairman of ICANN’s board of directors.

One of their major concerns is that PROTECT IP’s filtering would be “fundamentally incompatible” with DNSSEC, the new security protocol that has been strongly embraced by the US government.

The authors note that any attempts to redirect domains at the DNS level would be interpreted as precisely the kind of man-in-the-middle attack that DNSSEC was designed to prevent.
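It is easy to demonstrate why. In the sketch below (Python, dnspython 2.3 or later plus the cryptography package; all names and addresses are documentation placeholders), we sign an answer for a made-up zone and then tamper with it the way a DNS-level redirect would; validation rejects the tampered copy:

```python
import dns.dnssec
import dns.name
import dns.rrset
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key and DNSKEY for a made-up zone, then sign an answer.
zone = dns.name.from_text("example.test.")
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
dnskey = dns.dnssec.make_dnskey(key.public_key(),
                                dns.dnssec.Algorithm.RSASHA256)
keys = {zone: dns.rrset.from_rdata(zone, 300, dnskey)}

genuine = dns.rrset.from_text("www.example.test.", 300, "IN", "A",
                              "192.0.2.1")
rrsig = dns.rrset.from_rdata(
    genuine.name, 300,
    dns.dnssec.sign(genuine, key, zone, dnskey=dnskey, lifetime=3600))
dns.dnssec.validate(genuine, rrsig, keys)  # passes: answer matches sig

# A filter that swaps in a different address cannot produce a valid
# signature, so validation fails, exactly like a man-in-the-middle.
redirect = dns.rrset.from_text("www.example.test.", 300, "IN", "A",
                               "203.0.113.9")
try:
    dns.dnssec.validate(redirect, rrsig, keys)
except dns.dnssec.ValidationFailure:
    print("redirected answer rejected by validation")
```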

They also point out that working around these filters would be easy – changing user DNS server settings to an overseas provider would be a trivial matter.

PROTECT IP’s DNS filtering will be evaded through trivial and often automated changes through easily accessible and installed software plugins. Given this strong potential for evasion, the long-term benefits of using mandated DNS filtering to combat infringement seem modest at best.
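How trivial? Pointing a stub resolver somewhere else is a one-liner. In Python with dnspython (the address below stands in for any non-filtering resolver, wherever it happens to be hosted):

```python
import dns.resolver

# Bypass a filtering ISP resolver simply by asking a different one.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.0.2.53"]  # placeholder open resolver
print(resolver.resolve("example.com", "A")[0])
```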

If bootleggers start using dodgy DNS servers in order to find file-sharing sites, they put themselves at risk of other types of criminal activity, the paper warns.

If piracy sites start running their own DNS boxes and end users start subscribing to them, what’s to stop those operators pharming users by capturing their bank or PayPal traffic, for example?

The paper also expresses concern that a US move to legitimize filtering could cause other nations to follow suit, fragmenting the mostly universal internet.

If the Internet moves towards a world in which every country is picking and choosing which domains to resolve and which to filter, the ability of American technology innovators to offer products and services around the world will decrease.

This, incidentally, is pretty much the same argument used to push for the rejection of the .xxx top-level domain (which Crocker voted for).