Google domain hijacked in Kenya

Kevin Murphy, April 16, 2013, Domain Tech

Google’s Kenyan web site was reportedly inaccessible yesterday due to a hijacking of the company’s local domain name.
Google.co.ke briefly redirected users to a site bearing the slogan “hacked” on a black background, according to the Daily Nation. A change of DNS was blamed.
Google Kenya reportedly said:

Google services in Kenya were not hacked. For a short period, some users visiting www.google.co.ke and a few other website were re-directed to a different website. We are in contact with the organisation responsible for managing domain names in Kenya.

Google is of course a high-profile target; hackers often exploit weaknesses at third-party providers such as domain name registries in order to take down its satellite sites.
Its Irish site was taken down in October last year, after attackers broke in through a vulnerability in IEDR’s Joomla content management system.
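For the curious: registry-level hijacks like this usually show up as a change in a domain’s delegation. Here’s a minimal sketch, using Python’s dnspython library, of how one might monitor for that. The “expected” nameserver set is invented for illustration, not Google’s real configuration.

    # Minimal sketch: detect a delegation change by comparing a domain's
    # live NS records against a known-good set. Requires dnspython
    # (pip install dnspython). The expected set below is hypothetical.
    import dns.resolver

    DOMAIN = "google.co.ke"
    EXPECTED_NS = {"ns1.google.com.", "ns2.google.com."}  # illustrative only

    def delegation_changed(domain, expected):
        live = {rr.target.to_text().lower()
                for rr in dns.resolver.resolve(domain, "NS")}
        return live != expected, live

    changed, live = delegation_changed(DOMAIN, EXPECTED_NS)
    if changed:
        print("Delegation changed, possible hijack:", sorted(live))
    else:
        print("Delegation looks normal")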

Verisign’s security angst no reason to delay new gTLDs, says expert

Kevin Murphy, April 7, 2013, Domain Tech

Potential security vulnerabilities recently disclosed by Verisign and PayPal are well in hand and not a reason to delay the launch of new gTLDs, according to the chair of ICANN’s security committee.
Patrik Fältström, chair of the Security and Stability Advisory Committee, said today that the risk of disastrous clashes between new gTLDs and corporate security certificates has been taken care of.
Talking to the GNSO Council at the ICANN public meeting in Beijing, he gave a definitive “no” when asked directly if the SSAC would advise ICANN to delay the delegation of new gTLDs for security reasons.
Fältström had given a presentation on “internal name certificates”, one of the security risks raised by Verisign in a paper last week.
These are the same kinds of digital certificates given out by Certificate Authorities for use in SSL transactions on the web, but to companies for their own internal network use instead.
The SSAC, judging by Fältström’s presentation, had a bit of an ‘oh-shit’ moment late last year when a member raised the possibility of new gTLDs clashing with the domain names on these certificates.
Consider the scenario:
A company has a private namespace on its LAN called .corp, for example, where it stores all of its sensitive corporate data. It uses a digital certificate, issued by a reputable CA, to encrypt this data in transit.
But today we have more than a few applicants for .corp that would use it as a gTLD accessible to the whole internet.
Should .corp get delegated by ICANN — which of course is by no means assured — then there could be the risk of CAs issuing certificates for public domains that clash with private domains.
That might enable, for example, a hacker on a Starbucks wifi network to present his evil laptop as a secured, green-padlocked, corporate server to an unlucky road warrior sitting in the same cafe.
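The clash is at least easy to screen for on the certificate side. Here’s a minimal sketch, assuming the third-party cryptography package, that checks a certificate’s subject alternative names against a sample of applied-for strings; the file name and gTLD list are illustrative.

    # Minimal sketch: flag SAN entries in a certificate whose top-level
    # label matches an applied-for gTLD string. Requires the
    # "cryptography" package; APPLIED_FOR is a tiny illustrative sample.
    from cryptography import x509

    APPLIED_FOR = {"corp", "home", "mail"}

    def risky_names(pem_bytes):
        cert = x509.load_pem_x509_certificate(pem_bytes)
        # Raises ExtensionNotFound if the cert carries no SAN extension.
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        for name in san.value.get_values_for_type(x509.DNSName):
            tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
            if tld in APPLIED_FOR:
                yield name

    with open("internal-cert.pem", "rb") as f:  # hypothetical file
        for name in risky_names(f.read()):
            print("clashes with an applied-for gTLD:", name)
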
According to Fältström, at least 157 CAs have issued certificates that clash with applied-for new gTLDs. The actual number is probably much higher.
This risk was outlined two weeks ago in Verisign’s controversial security report to ICANN, which recommended delaying the new gTLD program until such security problems were resolved.
But Fältström told the GNSO Council today that recent secretive work by the SSAC, along with ICANN security staff and the CA/Browser Forum, a certificate industry authority, has mitigated this risk to the point that delay is not needed.
Fältström said that after the SSAC realized there was a potential vulnerability, it got in touch with the CA/Browser Forum to share its concerns. But as it turned out, the Forum was already on the case.
The Forum decided in February, a couple of weeks after an SSAC briefing, that member CAs should stop issuing internal name certificates that clash with new gTLDs within 30 days of ICANN signing a registry contract for that gTLD.
It has also decided to revoke any already-issued internal domain certificate that clashes with a new gTLD within 120 days of contract signing.
This means the window of exposure will be much shorter, should the vulnerability start getting exploited in the wild.
But only if all CAs conform to the CA/Browser Forum’s guidelines.
Much of this is detailed in a report issued by the SSAC last month (pdf). The CA/Browser Forum’s guidance is here (pdf). Fältström’s PowerPoint is available here (pdf).

Six big reasons we won’t see any new gTLD launches until Q3

Kevin Murphy, April 5, 2013, Domain Policy

ICANN’s announcement of a big media bash in New York on April 23, to announce the launch of new gTLDs, has gotten many people thinking the first launches are imminent.
Wrong.
We’re not going to see any new gTLD domains on sale until the third quarter at the earliest, in my view, and here are a few good reasons why.
April 23 is just a PR thing
ICANN has said that April 23 is primarily about awareness-raising.
Not only does it hope to garner plenty of column inches talking about new gTLDs — helping the marketing efforts of their registries — it also hopes to ceremonially sign the first Registry Agreements.
I think CEO Fadi Chehade’s push to make the industry look more respectable will also play a part, with the promotion of the Registrant Rights and Responsibilities document.
But there’s never been any suggestion that any strings will be delegated at that time, much less go live.
The contracts are still hugely controversial
If ICANN wants to sign a Registry Agreement on April 23, it’s going to need a Registry Agreement to sign.
Right now, applicants are up in arms about ICANN’s demand for greater powers to amend the contract in future.
While ICANN has toned down its proposals, they may still be unacceptable to many registries and gTLD applicants.
Applicants have some impetus to reach agreement quickly — because they want to launch and start making money as soon as possible.
But ICANN wants the same powers added to the 2013 Registrar Accreditation Agreement, and registrars are generally less worried about the speedy approval of new gTLDs.
ICANN has tied the approval of the RA and the RAA together — only registrars on the new RAA will be able to sell domains in new gTLDs.
Chehade has also made it clear that agreement on the new RAA is a gating issue for new gTLD launches.
If registries, registrars and ICANN can’t settle these issues in Beijing, it’s hard to see how any contracts could be signed April 23. The first launch would be delayed accordingly.
GAC Advice might not be what we’re expecting
GAC Advice on New gTLDs is, in my view, the biggest gating issue applicants are facing right now.
GAC Advice is an integral part of the approval process outlined in the Applicant Guidebook and ICANN has said many times that it cannot and will not sign any contracts until the GAC has spoken.
But what does that mean from a process and timing point of view?
According to the Applicant Guidebook, if an application receives GAC Advice, it gets shunted from the main evaluation track to the ICANN board of directors for consideration.
It’s the only time the ICANN board has to get directly involved with the approval process, according to the Guidebook’s rather complex flow-charts.
GAC Advice is not an automatic death sentence, but any application the GAC is unanimously opposed to stands a very slim chance of getting approved by the board.
Given that ICANN has said it will not sign contracts until it has received GAC Advice, and given that it has said it wants to sign the first contract April 23, it’s clearly expecting to know which applications are problematic and which are not during the next three weeks.
But I don’t think that’s necessarily going to happen. The GAC moves slowly and it has a track record of missing ICANN-imposed deadlines, which it often seems to regard as irksome.
Neither ICANN nor the GAC has ever said GAC Advice on New gTLDs will be issued during next week’s public meeting in Beijing. If a time is given, it’s usually “after” or “following” Beijing.
And I don’t think the GAC, which decided against holding an inter-sessional meeting between Toronto and Beijing, is remotely close to providing a full list of specific applications of concern.
I do think a small number of slam-dunk bad applications – such as DotConnectAfrica’s .africa bid – will get Advised against during or after the Beijing meeting.
But I also think the GAC is likely to issue Advice that is much broader, and which may not provide the detail ICANN needs to carry the process forward for many applicants.
The GAC, in its most recent (delayed) update, is still talking about “categories” of concern – such as “consumer protection” and “geographical names” – some of which are very broad indeed.
Given the limited amount of time available to it in Beijing, I think it’s quite likely that the GAC is going to produce advice about categories as well as about individual applications.
And, crucially, I don’t think it’s necessarily going to give ICANN a comprehensive list of which specific applications fall into which categories.
If the GAC decides to issue Advice under the banner of “consumer protection”, for example, somebody is going to have to decide which applications are captured by that advice.
Is that just strings that relate to regulated industries such as pharmaceuticals or banking? Or is it any string that relates to selling stuff? What about .shop and .car? Shops and cars are “regulated” by consumer protection and safety laws in most countries.
Deciding which Advice covered which applications would not be an easy task, nor would it be a quick one. I don’t think the GAC has done this work yet, nor do I think it will in Beijing.
For the GAC to reach consensus advice against specific applications will in some cases require GAC representatives to return to their capitals for guidance, which would add delay.
There is, in my view, a very real possibility of more discussions being needed following Beijing, just in order to make sense of what the GAC comes up with.
The new gTLD approval process needs the GAC to provide a list of specific applications or strings with which it has concerns, and we may not see that before April 23.
ICANN may get a short list of applications that definitely do have Advice by then, but it won’t necessarily know which applications do not, which may complicate the contract-signing process.
The Trademark Clearinghouse still needs testing
The Trademark Clearinghouse is already, in one sense, open for business. Trademark owners have been able to submit their marks for validation for a couple of weeks now.
But the hard integration work has not been done yet, because the technical specifications the registries and registrars need to interface with IBM’s TMCH database have not all been finalized.
When the specs are done (it seems likely this will happen in the next few weeks), registries and registrars will need to finish writing their software and start production testing.
ICANN’s working timetable has the TMCH going live July 1, but companies that know much more than me about the technical issues at play here say it’s unlikely that they’ll be ready to go live with Sunrise and Trademark Claims services before August.
It’s in everyone’s interests to get all the bugs ironed out before launch.
For new gTLD registries, a failure of the centralized TMCH database could mean embarrassing bugs and downtime during their critical launch periods.
Trademark owners and domain registrants may also be concerned about the potential for loopholes.
For example, it’s still not clear to some how Trademark Claims – which notifies registrants when there’s a clash between a trademark and a domain they want – will interact with landrush periods.
Does the registrant only get a warning when they apply for the domain, which could be some weeks before a landrush auction? If so, what happens if a mark is submitted to the TMCH between the application and the auction and ultimate registration?
Is that a loophole to bypass Trademark Claims? Could a registrant get hit by a Claim after they’ve just spent thousands to register a domain?
These are the kinds of things that will need to be ironed out before the TMCH goes fully live.
There’s a sunrise notice period
The sunrise period is the first stage of launch in which customers get to register domain names.
Lest we forget, ICANN recently decided to implement a mandatory 30-day notice period for every new gTLD sunrise period. This adds a month to every registry’s go-live runway.
Because gTLD sunrise periods from now on all have to use the TMCH, registries may have to wait until the Clearinghouse is operational before announcing their sunrise dates.
If the TMCH goes live in July, this would push the first launch dates out until August.
Super-eager registries may of course announce their sunrise period as soon as they are able, and then delay it as necessary to accommodate the TMCH, but this might carry public relations risks.
Verisign’s security scare
It’s still not clear how Verisign’s warning about the security risks of launching new gTLDs on the current timetable will be received in Beijing.
If the GAC reckons Verisign’s “concerns” are valid, particularly on the issue of root zone stability, ICANN will have to do a lot of reassuring to avoid being advised to delay its schedule.
Could ICANN offer to finish off its work of root zone automation, for example, before delegating new gTLDs? To do so would add months to the roll-out timetable.

ANA calls for new gTLDs delay, again

Kevin Murphy, April 3, 2013, Domain Policy

The Association of National Advertisers has seized upon Verisign’s recent report into the security risks of ICANN’s new gTLD timetable to call for delays to the program.
In a blog post yesterday, ANA vice president Dan Jaffe said ICANN’s dismissal of the surprising Verisign letter is “like the Captain of the Titanic before the crash saying that the dangers of icebergs had been discussed for years.”
The post highlights the lack of finalized Trademark Clearinghouse specs as “one of the greatest concerns”, saying “millions of customers are the ones who will face harm”.
That’s not strictly true, of course. New gTLD registries are contractually unable to launch until the TMCH is ready, so the risk of registrants being harmed by the lack of specs today is a non-starter.
The ANA also points to ongoing concerns about proposed TLDs such as .corp and .home, which run the risk of clashing with existing private TLDs used on internal corporate and ISP networks.
It’s on much firmer ground here. If a user tries to access a LAN resource on a .corp domain while roaming, what’s to stop them sending sensitive data to a third-party web site instead?
I’ve yet to see a compelling reason why this is not a problem, but it’s not yet known whether the many applications for .corp, .home and similar strings have passed their ICANN technical evaluations.
The ICANN application form asked applicants to disclose potential operational problems such as these, but some applicants that were very familiar with the problem decided not to do so.
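To make the failure mode concrete, here’s a minimal sketch, using dnspython, of what happens when an internal .corp name escapes to the public DNS. Today the query dies with NXDOMAIN at the root; if .corp were delegated, it could instead resolve to somebody else’s server. The hostname is made up.

    # Minimal sketch: what the public DNS says about a private-TLD name.
    # Requires dnspython; "fileserver.corp" is a hypothetical internal name.
    import dns.resolver

    resolver = dns.resolver.Resolver()
    resolver.nameservers = ["8.8.8.8"]  # force a public resolver

    try:
        answer = resolver.resolve("fileserver.corp", "A")
        print("Resolves publicly to:", [rr.address for rr in answer])
    except dns.resolver.NXDOMAIN:
        print("NXDOMAIN: .corp is not live in the public DNS (for now)")
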
But the ANA’s main concern, of course, is its belief that new gTLDs will increase cybersquatting and drive up the cost of defensive registrations.
“Adequate steps have not been taken to protect Internet users, and we are headed toward uncharted waters with major danger to consumers, brandholders, and the Internet itself,” Jaffe wrote.
“The only prudent action for ICANN now is to delay this arbitrary domain name roll-out until it has fixed these very serious problems.”

Chehade says “no delay” as Verisign drops a security bomb on ICANN

Kevin Murphy, March 29, 2013, Domain Policy

Verisign today said that the new gTLD program presents risks to the security of the internet, but ICANN CEO Fadi Chehade told DI that he’s not expecting any new delays.
The .com behemoth tonight delivered a scathing review of the security and stability risks of launching new gTLDs on ICANN’s current timetable.
The new Verisign report catalogs the myriad ways in which ICANN is not ready to start approving new gTLDs, and the various security problems they could cause if launched without due care.
It strongly suggests that ICANN should delay the program until its concerns are addressed.
But Chehade, in an exclusive interview with DI tonight, rebutted the already-emerging conspiracy theories and said: “There’s nothing new here that would cause me to predict a new delay.”
What does the Verisign report say?
It’s a 21-page document, and it covers a lot of ground.
The gist of it is that ICANN is rushing to launch new gTLDs without paying enough attention to the potential security and stability risks that a vast influx of new gTLDs could cause.
It covers about a dozen main points, but here are the highlights:

  • Certificate authorities and browser makers are not ready. CAs have long issued certificates for use on organizations’ internal networks. In many cases, these certs will use TLDs that only exist on that internal network. A company might have a private .mail TLD, for example, and use certs to secure those domains for its users. The CA/Browser Forum, which coordinates CAs and browser makers, has decided (pdf) to deprecate these certs, but not until October 2016. This, Verisign says, creates a “vulnerability window” of three years during which attackers could exploit clashes between certs on internal TLDs and new gTLDs.
  • Root server operators are not ready. The organizations that run the 13 DNS root servers do not currently coordinate their performance metrics, Verisign said. This makes it difficult to see what impact new gTLDs will have on root server stability. “The current inability to view the root server system’s performance as a whole presents a risk when combined with the impending delegation of the multitude of new gTLDs,” Verisign said.
  • Root zone automation isn’t done yet. ICANN, Verisign and the US Department of Commerce are responsible for adding new gTLDs to the root zone, and work on automating the “TLD add” process is not yet complete. Verisign reckons this could cause “data integrity” problems at the root.
  • The Trademark Clearinghouse is not ready. Delays in finalizing the TMCH technical specs mean registries haven’t had sufficient time to build their interfaces and test them, and the TMCH itself is a potential single point of failure with an unknown attack profile.
  • Universal acceptance of new TLDs. Verisign points out that new gTLDs won’t be immediately available to users when they go live due to lack of software support. It points specifically to the ill-maintained Public Suffix List, used by browsers to set cookie boundaries, as a potential risk factor. (There’s a sketch of a PSL check after this list.)
  • A bunch of other stuff. The report highlights issues such as zone file access, data escrow, Whois and pre-delegation testing where Verisign reckons ICANN has not given registries enough time to prepare.
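The Public Suffix List point is easy to check for yourself. Here’s a minimal sketch that pulls the published list and tests whether a given string appears in it; the TLDs tested are just examples.

    # Minimal sketch: is a given TLD present in the Public Suffix List?
    # The URL is the canonical published copy of the list.
    import urllib.request

    PSL_URL = "https://publicsuffix.org/list/public_suffix_list.dat"

    with urllib.request.urlopen(PSL_URL) as f:
        lines = f.read().decode("utf-8").splitlines()
    suffixes = {ln.strip() for ln in lines
                if ln.strip() and not ln.startswith("//")}

    for tld in ("com", "home", "example"):  # illustrative strings
        status = "listed" if tld in suffixes else "NOT listed"
        print(".%s is %s in the Public Suffix List" % (tld, status))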

Basically, Verisign has thrown pretty much every risk factor it can think of into the document.
Some of the issues of concern have been well-discussed in the ICANN community at large, others not so much.
Yeah, yeah, but what did Fadi say?
Chehade told DI this evening that he was surprised by the report. He said he’s been briefed on its contents today and that there’s “nothing new” in it. The program is “on track”, he said.
“What is most surprising here is that there is nothing new,” he said. “I’m trying to get my finger on what is new here and I can’t find it.”
“It was very surprising to see this cornucopia of things put together,” he said. “I’m struggling to see how the Trademark Clearinghouse has a security impact, for example.”
He added that some of Verisign’s other concerns, such as the fact that the Emergency Back-End Registry Operator is not yet up and running, are confusing given that existing TLDs don’t have EBEROs.
The report could be divided into two buckets, he said: those things related to ICANN’s operational readiness and those things related to the DNS root.
“Are these operational issues really security and stability risks, and given that we can only launch TLDs when these things are done… what’s the issue there?” he said.
On the DNS root issues, he pointed to a November 2012 report, signed by Verisign, that said the root is ready to take 1,000 new gTLDs a year or 100 a week.
So the Conspiracy Theory is wrong?
When ICANN held a webinar for new gTLD applicants earlier this week, Chehade spent an inordinate amount of time banging home the point that security and stability concerns underpin every stage of the new gTLD program’s timetable.
As a slide from his presentation illustrated, security, stability and resiliency or “SSR” is the foundation of every timing assumption.
He said during the webinar:

Nothing will trump the gTLD process, nothing, but the SSR layer. The SSR layer is paramount. It is our number one responsibility to the internet community. Nothing will be done that jeopardizes the security and stability of the internet, period.
At any time if we as a community do not believe that all relevant security and stability matters have been addressed, if we do not believe that’s the case, the program freezes, period.
There is too much riding on the DNS. Hundreds of billions of dollars of commerce. Some may say livelihoods. We will not jeopardize it, not on my watch, not during my administration.

During the webinar, I was lurking on an unofficial chat room of registries, registrars and others, where the mood at that point could be encapsulated by: “Shit, what does Chehade know that he’s not telling us?”
Most people listening to the webinar were immediately suspicious that Chehade was expecting to receive some last-minute security and stability advice and that he was preparing the ground for delay.
The Verisign report was immediately taken as confirmation that their suspicions were correct.
It seemed quite likely that ICANN knew in advance that the report was coming down the pike and was not-so-subtly readying applicants for a serious SSR discussion in Beijing a little over a week from now.
When I asked Chehade a few times whether he knew the Verisign report was coming in advance, he declined to give a straight answer.
My feeling is he probably did, though he may not have known precisely what it was going to say. The question is perhaps less relevant given what he said about its contents.
But what Chehade thinks right now is probably not the biggest concern for new gTLD applicants.
The GAC’s reaction is now critical
The Verisign document could be seen as pure GAC fodder. How the Governmental Advisory Committee reacts to the report, which was CC’d to the US Department of Commerce, is now key.
The GAC has been banging on about root system stability for years and will, in my view, lap up anything that seems to prove that it was right all along.
The GAC will raise the Verisign report with ICANN in Beijing and, if it doesn’t like what it hears, it might advise delay. GAC advice is a lot harder for ICANN’s board to ignore than a self-serving Verisign report.
What’s Verisign playing at?
So why did Verisign issue the report now? I’ve been unable to get the company on the phone at this late hour, but I’ve asked some other industry folk for their responses.
Verisign’s super-lucrative .com contract is the obvious place to start theorizing.
Even though the company has over 200 new gTLD back-end contracts — largely with dot-brand applicants — .com is its cash cow and new gTLDs are a potential threat to that business.
The company has sounded a little more aggressive — talking about enforcing its patents and refusing to comply with ICANN’s audits — since the US Department of Commerce ordered a six-year .com price freeze last November.
But Chehade would not speculate too much about Verisign’s motives.
“I can’t read why this report and why now,” Chehade said. “Especially when there’s nothing new in it. That’s not for me to figure out. It’s for me to look at this report with a critical eye and understand if there’s something we’re not addressing. If there is, and we find it, we’ll address it.”
He pointed to the flurry of phone calls and emails that hit his desk after the Initial Evaluation results started getting published last week as a possible reason for the report’s timing.
“I think the real change that’s happened in the last few months is that the new gTLD program is now on track and for the first time people are seeing it coming,” he said.
Competitors were more blunt.
“It’s a bloody long report,” said ARI Registry Services CEO Adrian Kinderis. “Had they put the same amount of effort into working with ICANN, we’d be a lot better off on the particular issues.”

Hackers may have stolen .edu domain passwords

Kevin Murphy, February 19, 2013, Domain Registries

Owners of .edu domain names have been told to change their passwords after hackers compromised a server belonging to registry manager Educause.
The registry said today that it has deactivated the admin passwords for all .edu domains after discovering a “security breach” that gave attackers access to hashed passwords for .edu registrants.
The attack also compromised passwords for users of the Educause web site, the organization said.
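For background on why stolen hashes still force a mass reset: hashing only slows an offline attacker down, and weak passwords fall to brute-force guessing regardless. Below is a minimal sketch of salted, iterated hashing (PBKDF2 here), the sort of scheme that makes stolen hashes harder to crack; Educause has not said which scheme it used.

    # Minimal sketch of salted, iterated password hashing with PBKDF2.
    # Illustrative only; it is not known what scheme Educause used.
    import hashlib, os

    password = b"correct horse battery staple"
    salt = os.urandom(16)  # unique per user, stored alongside the hash

    # 200,000 iterations make each offline guess far costlier than raw SHA-256.
    digest = hashlib.pbkdf2_hmac("sha256", password, salt, 200_000)
    print(salt.hex(), digest.hex())
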
The .edu domain is of course reserved for US-based educational institutions, and is considered one of the most secure and prestigious TLDs available.
Educause said it took “immediate steps to contain this breach and is working with Federal law enforcement, investigators, and security experts to make sure this incident is properly addressed.”
The registry did not say whether the attack is related to the attack against the Massachusetts Institute of Technology last month, which reportedly was enabled via an Educause hack.

Is .home back on ICANN’s new gTLD risk list?

Kevin Murphy, January 13, 2013, Domain Tech

While most new gTLD applicants were focused on delays to the program revealed during last Friday’s ICANN webinar, another bit of news may also be a cause for concern for .home applicants.
As Rubens Kuhl of Nic.br spotted, ICANN revealed that 11 applications have not yet passed their DNS Stability check.
That’s a reversal from November, when ICANN said that all new gTLD applications had passed the stability review.
As I noted at the time, that was good news for .home, which some say may cause security problems if it is delegated.
As Kuhl observed, there are exactly 11 applications for .home, the same as the number of applications that no longer appear to have passed the DNS Stability check.
So is ICANN taking a closer look at .home, or is it just a numerical coincidence?
The string is considered risky by many because .home already receives a substantial amount of DNS traffic at the root servers, which will be inherited by whichever company wins the contention set.
It’s on a list of frequently requested invalid TLDs produced by ICANN’s Security and Stability Advisory Committee which was incorporated by reference in the new gTLD Applicant Guidebook.
Some major ISPs, notably BT in the UK, use .home as a pseudo-TLD in their residential routers.
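You can see the leak for yourself by asking a root server directly, which is exactly what a misconfigured router or roaming laptop ends up doing. A minimal sketch using dnspython (198.41.0.4 is a.root-servers.net):

    # Minimal sketch: query a root server for "home." directly, the same
    # kind of junk query the root sees from networks using .home
    # internally. Requires dnspython; 198.41.0.4 is a.root-servers.net.
    import dns.message, dns.query, dns.rcode

    query = dns.message.make_query("home.", "A")
    response = dns.query.udp(query, "198.41.0.4", timeout=5)

    # NXDOMAIN for as long as .home remains undelegated.
    print(dns.rcode.to_text(response.rcode()))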

IEDR admits blame for hack that brought down Google and Yahoo

Kevin Murphy, November 9, 2012, Domain Registries

IEDR, the Irish ccTLD registry, has admitted that an attack on its own web servers was responsible for google.ie and yahoo.ie being hijacked last month.
In a detailed statement, the registry said that hackers spent 25 days probing for weaknesses in its systems, before eventually breaking in through a vulnerability in the Joomla content management software.
This enabled the attackers to upload malicious PHP scripts and access the back-end database, according to the statement. They then redirected yahoo.ie and google.ie to an Indonesian web site.
It’s a reversal of position for IEDR, which had appeared to blame one of its registrars (believed to be MarkMonitor) for the lapse in security when the hack was discovered last month.
IEDR told ZDNet October 11: “an unauthorised change was made to two .ie domains on an independent registrar’s account which resulted in a change of DNS nameservers”.
But today it said instead: “The IEDR investigation also confirmed that neither the Registrar of the affected domains nor its systems had any responsibility for this incident.”
The registry has filed a complaint with the Irish police over the incident, and apologized to its customers for the disruption.
It also said it plans to roll out a Domain Lock service to help prevent hijacking in future, though I doubt such a service would have prevented this specific incident.

Only 2% of phishing attacks use cybersquatted domain names

Kevin Murphy, October 25, 2012, Domain Registries

The number of cybersquatted domain names being used for phishing is falling sharply and currently stands at just 2% of attacks, according to the Anti-Phishing Working Group.
The APWG’s first-half 2012 report (pdf) identified 64,204 phishing domains in total.
Of those, the group believes that only 7,712 (12%) were actually registered by the phishers themselves. The rest belonged to innocent third parties and had been compromised.
That’s a steep drop from 12,895 domains in the second half of 2011 and 14,650 in the first half of 2011.
Of the 7,712 phisher-owned domains, about 66% were being used to phish Chinese targets, according to the APWG.
The group’s research found only 1,350 that contained a brand name or a misspelling of a brand name.
That’s down from 2,232 domains in the second half of 2011, representing just 2% of all phishing domains and 17% of phisher-owned domains.
The report states:

Most maliciously registered domain strings offered nothing to confuse a potential victim. Placing brand names or variations thereof in the domain name itself is not a favored tactic, since brand owners are proactively scanning Internet zone files for such names.
As we have observed in the past, the domain name itself usually does not matter to phishers, and a domain name of any meaning, or no meaning at all, in any TLD, will usually do.
Instead, phishers almost always place brand names in subdomains or subdirectories. This puts the misleading string somewhere in the URL, where potential victims may see it and be fooled. Internet users are rarely knowledgeable enough to be able to pick out the “base” or true domain name being used in a URL.

Taken as a percentage of attacks, brand-jacking is clearly a pretty low-occurrence offence, according to the APWG’s numbers.
In absolute numbers, it works out to about 7.5 domain names per day that are being used to phish and contain a variation of the targeted brand name.
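The back-of-the-envelope math, from the report’s own figures:

    # Checking the APWG's arithmetic for H1 2012 (182 days).
    total_phishing = 64_204   # all phishing domains
    phisher_owned = 7_712     # registered by the phishers themselves
    branded = 1_350           # containing a brand name or misspelling

    print("%.1f%% of all phishing domains" % (100.0 * branded / total_phishing))  # ~2.1%
    print("%.1f%% of phisher-owned domains" % (100.0 * branded / phisher_owned))  # ~17.5%
    print("%.1f branded domains per day" % (branded / 182.0))                     # ~7.4
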
Unsurprisingly, the APWG found that Freedom Registry’s .tk — which offers free registration — is the TLD being abused most often to register domains for phishing attacks.
More than half of the phisher-owned domains were in .tk, according to the report.

What the hell happened to Go Daddy last night?

Kevin Murphy, September 11, 2012, Domain Registrars

Thousands — possibly millions — of Go Daddy customers suffered a four-hour outage last night, during a suspected distributed denial of service attack.
The company has not yet revealed the cause of the downtime, which started at 1725 UTC last night, but it bears many of the signs of a DDoS attack against the company’s DNS servers.
During the incident, godaddy.com was inaccessible. DI hosts with Go Daddy; domainincite.com and secureserver.net, the domain Go Daddy uses to provide its email services, were both down.
The company issued the following statement:

At 10:25 am PT, GoDaddy.com and associated customer services experienced intermittent outages. Services began to be restored for the bulk of affected customers at 2:43 pm PT. At no time was any sensitive customer information, such as credit card data, passwords or names and addresses, compromised. We will provide an additional update within the next 24 hours. We want to thank our customers for their patience and support.

Several Go Daddy sites I checked remained accessible from some parts of the world initially, only to disappear later.
Others reported that they were able to load their Go Daddy webmail, but that no new emails were getting through.
This all points to a problem with Go Daddy’s DNS, rather than with its hosting infrastructure. People able to view affected sites were likely using cached copies of DNS records.
Close to 34 million domains use domaincontrol.com, Go Daddy’s primary name server, for their DNS. The company says it has over 10 million customers.
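A record’s TTL tells resolvers how long they may answer from cache, which explains the stragglers who could still reach affected sites. A quick sketch with dnspython:

    # Minimal sketch: check how long resolvers may cache a record.
    # Requires dnspython.
    import dns.resolver

    answer = dns.resolver.resolve("domainincite.com", "A")
    print("addresses:", [rr.address for rr in answer])
    print("cacheable for", answer.rrset.ttl, "seconds")
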
Reportedly, Go Daddy started using Verisign’s DNS for its home page during the event, which would also point to a DNS-based attack.
The outage was so widespread that the words “GoDaddy” and “DNS” quickly became trending topics on Twitter.
The web site downforeveryoneorjustme.com, which does not use Go Daddy, also went down as thousands of people rushed to check whether their web sites were affected.
Some outlets reported that Anonymous, the hacker group, had claimed credit for the attack via an anonymous (small a) Twitter account.
Companies the size of Go Daddy experience DDoS attacks on a daily basis, and they build their infrastructure with sufficient safeguards and redundancies to handle the extra traffic.
This leads me to believe that either yesterday’s attack was especially enormous, or somebody screwed up.
The fact that the company has not yet confirmed that external malicious forces were at work is worrying.
Either way it’s embarrassing for Go Daddy, which is applying for three new gTLDs that it plans to self-host.
Several reports have already speculated that the attack could be revenge for one or more of Go Daddy’s recent PR screw-ups.
The company has promised an update later today.