
Russian registry hit with second breach notice after downtime

ICANN has issued another breach notice against the registry for .gdn, which seems to be suffering technical problems and isn’t up-to-date on its bills.

Navigation-Information Systems seems to have experienced about 36 hours of Whois/RDDS downtime starting from April 22, and is past due with its quarterly ICANN fees, according to the notice.

Contractually, if ICANN's probes detect more than 24 hours of Whois downtime in a single week, that's enough to trigger emergency measures, allowing ICANN to migrate the TLD to an Emergency Back-End Registry Operator (EBERO).
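The arithmetic behind that trigger is simple to model. The sketch below is an illustration only (the probe interval, data, and function names are invented; ICANN's actual SLA monitoring system is more involved): it sums downtime over a week of probe results and flags the 24-hour emergency threshold, using a week with a continuous 36-hour outage like the one described in the notice.

```python
# Toy model of the weekly RDDS-downtime check behind the EBERO trigger.
# Assumes probes at a fixed interval; a failed probe counts its whole
# interval as downtime. The real ICANN monitoring is more nuanced.

PROBE_INTERVAL_MINUTES = 5
EMERGENCY_THRESHOLD_HOURS = 24  # Whois/RDDS downtime per week that triggers emergency measures

def weekly_downtime_hours(probe_results: list[bool]) -> float:
    """probe_results: one boolean per probe over a week (True = RDDS responded)."""
    failed = sum(1 for ok in probe_results if not ok)
    return failed * PROBE_INTERVAL_MINUTES / 60

def ebero_triggered(probe_results: list[bool]) -> bool:
    return weekly_downtime_hours(probe_results) > EMERGENCY_THRESHOLD_HOURS

# One week at 5-minute intervals is 2016 probes; 432 consecutive failures = 36 hours down.
week = [True] * 1000 + [False] * 432 + [True] * 584
print(weekly_downtime_hours(week))  # 36.0
print(ebero_triggered(week))        # True
```

A 36-hour outage, as reported here, clears the 24-hour bar with room to spare, which is why the breach notice carries the threat of emergency transition.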

Today, the registry's web site has not been resolving for me for several hours, instead timing out, which suggests serious technical problems. Other, non-registry .gdn web sites seem to work just fine.

NIS seems to be a Russian company — although most ICANN records give addresses in Dubai and Toronto — so it might be tempting to speculate that its troubles are the result of some kind of cyber-warfare related to the Ukraine invasion.

But it’s not the first time this has happened by a long shot.

The company experienced pretty much identical problems twice a year earlier, and similar outages appear to have occurred in 2018 and 2019 as well.

NIS just can’t seem to keep its Whois up.

According to the breach notice, whenever ICANN Compliance manages to reach the registry's 24/7 emergency contact, it is told the contact is unable to help.

ICANN has given the registry until May 29 to fix its systems and pay up, or risk termination.

.gdn was originally applied for as something related to satellites, but it launched as an open generic that attracted over 300,000 registrations, mostly via disgraced registrar AlpNames, earning it a leading position in spam blocklists. Today, it has around 11,000 names under management, mostly via a Dubai registrar that seems to deal purely in .gdn names.

Thousands of domains hit by downtime after DNSSEC error

Kevin Murphy, February 7, 2022, Domain Tech

Sweden saw thousands of domains go down for hours on Friday, after DNSSEC errors were introduced to the .se zone file.

Local ccTLD registry IIS said in a statement that around 8,000 domains had a “technical difficulty” that started around 1530 local time and lasted around seven hours:

On the afternoon of 4/2, a problem was discovered that concerned approximately 8,000 .se domains. The problem meant that services, such as email and web, that are linked to the affected domains in some cases could not be used or reached. In total, there are approximately 1.49 million .se domains, of which approximately 8,000 were affected.

During the afternoon and evening, thorough troubleshooting work was carried out, and the error was fixed for the affected .se domains at approximately 2225.

The problem is believed to have been caused by incorrect DNSSEC signatures being published in the .se zone file. Any machine using a DNSSEC-validating resolver would have seen the errors and flat-out refused to resolve the domain.

This is arguably the key drawback of DNSSEC: validating resolvers typically treat badly signed domains as if they do not exist, returning SERVFAIL to the client, rather than failing over to an unsigned, but resolving, response.
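That hard-fail behaviour can be sketched in miniature. The toy model below is purely illustrative (the names, addresses, and the boolean standing in for full RRSIG/DNSKEY chain validation are all invented): it shows why a validating resolver serves nothing at all for a badly signed zone, while a non-validating resolver happily returns the answer.

```python
# Toy model of DNSSEC hard-fail behaviour. Real validation walks an
# RRSIG/DNSKEY/DS chain of trust; a boolean stands in for that here.

from dataclasses import dataclass

@dataclass
class ZoneRecord:
    name: str
    address: str
    signature_valid: bool  # stand-in for full signature validation

def resolve(record: ZoneRecord, validating: bool) -> str:
    """Return the answer a resolver would give for this record."""
    if validating and not record.signature_valid:
        # A validating resolver treats a bogus signature as a hard failure:
        # the client sees SERVFAIL, not the (possibly correct) address.
        return "SERVFAIL"
    return record.address

good = ZoneRecord("example.se", "192.0.2.10", signature_valid=True)
bad = ZoneRecord("broken.se", "192.0.2.20", signature_valid=False)

print(resolve(bad, validating=True))    # SERVFAIL: the domain effectively vanishes
print(resolve(bad, validating=False))   # 192.0.2.20: non-validating resolvers still answer
print(resolve(good, validating=True))   # 192.0.2.10
```

This is why the .se incident hit users unevenly: whether an affected domain "went down" depended on whether that user's resolver validated DNSSEC.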

Sweden is not a DNSSEC newbie — .se was the first TLD to deploy the technology, all the way back in 2005, with services for domain holders coming a couple of years later.

“We fell short” — Tucows says sorry for Enom downtime

Kevin Murphy, January 19, 2022, Domain Registrars

Tucows has apologized to thousands of Enom customers who suffered days of downtime after a planned data center migration went badly wrong.

Showing true Canadian humility, the registrar posted the following statement this evening:

Beginning Saturday, January 15, 2022, Enom experienced a series of complications with a planned data center migration that caused significant disruptions for a subset of our customers.

We sincerely apologize to all of those impacted. We pride ourselves on being a reliable domain registration platform, and this weekend we fell short. We are committed to regaining your trust and to serving you better.

A full internal audit is underway and an incident report is forthcoming. This will include a summary of events and scope, learnings, and policy and process changes to mitigate future issues.

We reported on the downtime on Monday, as some customers were entering their third day of non-resolving DNS, which led to broken web sites and email.

At the time, Enom was saying it was tracking a “few hundred” affected domains. As customers suspected, that turned out to be a huge underestimate. The true number was closer to 350,000 domains, Tucows is now saying.

The company had been warning its customers about the planned maintenance for weeks, but it did not anticipate a "bug in the new DNS provisioning system" that stopped customers' domains resolving.

The migration started Saturday January 15 at 1400 UTC and was expected to last 12 hours. In the end, the DNS issue was not fully fixed until Monday January 17 at about 1845 UTC.
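The timestamps in the article make the overrun easy to quantify with a quick subtraction (a sanity check only, using the start and fix times as reported):

```python
# Elapsed time from migration start to the DNS fix, per the reported timestamps.
from datetime import datetime, timezone

start = datetime(2022, 1, 15, 14, 0, tzinfo=timezone.utc)   # migration began
fixed = datetime(2022, 1, 17, 18, 45, tzinfo=timezone.utc)  # DNS issue fully fixed

elapsed = fixed - start
print(elapsed)  # 2 days, 4:45:00
print(elapsed.total_seconds() / 3600)  # 52.75 hours, against a planned 12
```

So from the start of the window to the final fix was roughly 52.75 hours, more than four times the planned 12; the earlier "42 hours" figure in the follow-up coverage measured to the point eNom declared the crisis over, at 0800 UTC Monday.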

Nightmare downtime weekend for some eNom and Google customers

Kevin Murphy, January 17, 2022, Domain Registrars

Some eNom customers have experienced almost two days of downtime after a planned data center migration went titsup, leading to DNS failures hitting what users suspect must have been thousands of domains.

Social media has been filled with posts from customers complaining that their DNS was offline, meaning their web sites and email have been down. Some have complained of losing money to the downtime.

Affected domains include some registered directly with eNom, as well as some registered via resellers including Google Workspace.

The issue appears to have been caused by a scheduled data center migration, which was due to begin 1400 UTC on Saturday and last for 12 hours.

The Tucows-owned registrar said that during that time both reseller hub enom.com and retail site enomcentral.com would be unavailable. While this meant users would be unable to manage their domains, DNS was expected to resolve normally.

But before long, customers started reporting resolution problems, leading eNom to post:

We are receiving some reports of domains using our nameservers which are failing to resolve. Owing to the migration we are unable to research and fully address the issue until the migration is complete. This is not an expected outcome from the migration, and we are working to address it as a priority.

The maintenance window was then extended several times, by three to six hours each time, as eNom engineers struggled to fix problems caused by the migration. eNom posted several times on its status page:

The unexpected extension to the maintenance window was due to data migration delays. We also discovered resolution problems that impact a few hundred domains

eNom continued to post updates until it finally declared the crisis over at 0800 UTC this morning, meaning the total period of downtime was closer to 42 hours than the originally planned 12.

A great many posts on social media expressed frustration and anger with the outage, with some saying they were losing money and reputation and others promising to take their business elsewhere.

Some said that they continued to experience problems after eNom had declared the maintenance over.

eNom primarily sells through its large reseller channel, so some customers were left having to explain the downtime in turn to their own clients. Google Workspace is one such reseller that acknowledged the problems on its Twitter feed.

Some customers questioned whether the problem really was limited to just a few hundred domains, and eNom seemed to acknowledge that the actual number may have been higher.

I’m in contact with Tucows, eNom’s owner, and will provide an update when any additional information becomes available.