Latest news of the domain name industry

Recent Posts

Concern as ICANN shuts down “independent” security review

Kevin Murphy, October 31, 2017, Domain Policy

Just a year after gaining its independence from the US government, ICANN has come under scrutiny over concerns that its board of directors may have overstepped its powers.
The board has come in for criticism from almost everyone expressing an opinion at the ICANN 60 meeting in Abu Dhabi this week, after it temporarily suspended a supposedly independent security review.
The Security, Stability and Resiliency of the DNS Review, known as SSR-2, is one of the mandatory reviews that got transferred into ICANN’s bylaws after the Affirmation of Commitments with the US wound up last year.
The review is supposed to look at ICANN’s “execution of its commitment to enhance the operational stability, reliability, resiliency, security, and global interoperability of the systems and processes, both internal and external, that directly affect and/or are affected by the Internet’s system of unique identifiers that ICANN coordinates”.
The 14 to 16 volunteer members have been working for about eight months, but at the weekend the ICANN board pulled the plug, saying in a letter to the review team that it had decided “to suspend the review team’s work” and that the work “should be paused”.
Chair Steve Crocker clarified in sessions over the weekend and yesterday that it was a direction, not a request, but that the pause was merely “a moment to take stock and then get started again”.
Incoming chair Cherine Chalaby said in various sessions today and yesterday that the community — which I take to mean the leaders of the various interest groups — is now tasked with un-pausing the work.
Incoming vice-chair Chris Disspain told community leaders in an email (pdf) yesterday:

The Board has not usurped the community’s authority with respect to this review. Rather, we are asking the SOs and ACs to consider the concerns we have heard and determine whether or not adjustments are needed. We believe that a temporary pause in the SSR2 work while this consideration is under way is a sensible approach designed to ensure stakeholders can reach a common understanding on the appropriate scope and work plan

Confusion has nevertheless arisen among community members, and some serious concerns and criticisms have been raised by commercial and non-commercial interests — including governments — over the last few days in Abu Dhabi.
But the board’s concerns with the work of SSR-2 seem to date back a few months, to the Johannesburg meeting in June, at which Crocker said “dangerous signals” were observed.
It’s not clear what he was referring to there, but the first serious push-back by ICANN came earlier this month, when board liaison Kaveh Ranjbar, apparently only appointed to that role in June, emailed the group to say it was over-stepping its mandate.
Basically, the SSR-2 group’s plan to carry out a detailed audit of ICANN’s internal security profile seems to have put the willies up the ICANN organization and board.
Ranjbar wrote:

The areas the Board is concerned with are areas that indeed raise important organizational information security and organizational oversight questions. However, these are also areas that are not segregated for community review, and are the responsibility of the ICANN Organization (through the CEO) to perform under the oversight of the ICANN Board.

While we support the community in receiving information necessary to perform a full and meaningful review over ICANN’s SSR commitments, there are portions of the more detailed “audit” plan that do not seem appropriate for in-depth investigation by the subgroup. Maintaining a plan to proceed with detailed assessments of these areas is likely to result in recommendations that are not tethered to the scope of the SSR review, and as such, may not be appropriate for Board acceptance when recommendations are issued. This also can expand the time and resources needed to perform this part of the review.

This does not seem hugely unreasonable to me. This kind of audit could be expensive, time-consuming and — knowing ICANN’s history of “glitches” — could have easily exposed all kinds of embarrassing vulnerabilities to the public domain.
Ranjbar’s letter was followed up a day later with a missive (pdf) from the chair of ICANN’s Security and Stability Advisory Committee, which said the SSR-2’s work was doomed to fail.
Patrick Falstrom recommended a temporary halt to the group’s work. He wrote:

One basic problem with the SSR2 work is that the review team seems neither to have sufficient external instruction about what to study nor to have been able to formulate a clear direction for itself. Whatever the case, the Review Team has spent hundreds of hours engaged in procedural matters and almost no progress has been made on substantive matters, which in turn has damaged the goodwill and forbearance of its members, some of whom are SSAC members. We are concerned that, left to its own devices, SSR2 is on a path to almost certain failure bringing a consequential loss of credibility in the accountability processes of ICANN and its community.

Now that ICANN has actually acted upon that recommendation, there’s concern that it sets a disturbing precedent for the board taking “unilateral” action to scupper supposedly independent accountability mechanisms.
The US government itself expressed concern, during a session between the board and the Governmental Advisory Committee in Abu Dhabi today.
“This is unprecedented,” US GAC rep Ashley Heineman said. “I just don’t believe it was ever an expectation that the ICANN board would unilaterally make a decision to pause or suspend this action. And that is a matter of concern for us.”
“It would be one thing if it was the community that specifically asked for a pause or if it was a review team that says ‘Hey, we’re having issues, we need a pause.’ What’s of concern here is that ICANN asked for this pause,” she said.
UK GACer Mark Carvell added that governments have been “receiving expressions of grave concern” about the move and urged “maximum transparency” as the SSR-2 gets back on track.
Jonathan Zuck of the Innovators Network Foundation, one of the volunteers who worked on ICANN’s transition from US government oversight, also expressed concern during the public forum session yesterday.
“I think having a fundamental accountability mechanism unilaterally put on hold is something that we should be concerned about in terms of process,” he said. “I’m not convinced that it was the only way to proceed and that from a precedential standpoint it’s not the best way to proceed.”
Similar concerns were voiced by many other parts of the community as they met with the ICANN board throughout today and yesterday.
The problem now is that the bylaws do not account for a board-mandated “pause” in a review team’s work, so there’s no process to “unpause” it.
ICANN seems to have got itself tangled up in a procedural quagmire — again — but sessions later in the week have been scheduled in order for the community to begin to untangle the situation.
I doubt we’ll see a resolution this week. This is likely to run for a while.

Over 750 domains hijacked in attack on Gandi

Gandi saw 751 domains belonging to its customers hijacked and redirected to malware delivery sites, the French registrar reported earlier this month.
The attack saw the perpetrators obtain Gandi’s password for a gateway provider, which it did not name, that acts as an intermediary to 34 ccTLD registries including .ch, .se and .es.
The registrar suspects that the password was obtained by the attacker exploiting the fact that the gateway provider does not enforce HTTPS on its login pages.
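Gandi’s theory amounts to credential sniffing: any login page served over plain HTTP lets a network eavesdropper capture the password in transit. A trivial audit is to flag portal URLs that are not HTTPS — a minimal sketch, with made-up URLs standing in for the real portals:

```python
# Flag login URLs that are not served over HTTPS. Any credentials
# submitted to an http:// page travel the network in cleartext.
# The URLs below are hypothetical examples, not Gandi's actual portals.
from urllib.parse import urlparse

def insecure_login_urls(urls):
    """Return the subset of URLs whose scheme is not https."""
    return [u for u in urls if urlparse(u).scheme != "https"]

portals = [
    "https://admin.registrar.example/login",
    "http://gateway.partner.example/portal",  # plain HTTP: credentials exposed
]
print(insecure_login_urls(portals))
```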
During the incident, the name servers for up to 751 domains were altered such that they directed visitors to sites designed to compromise unpatched computers.
The redirects started at 0804 UTC July 7, and while Gandi’s geeks had reversed the changes by 1615 it was several more hours before the changes propagated throughout the DNS for all affected domains.
About the theft of its password, Gandi wrote:

These credentials were likewise not obtained by a breach of our systems and we strongly suspect they were obtained from an insecure connection to our technical partner’s web portal (the web platform in question allows access via http).

It’s not clear why a phishing attack, which would seem the more obvious way to obtain a password, was ruled out.
Gandi posted a detailed timeline here, while Swiss registry Switch also posted an incident report from its perspective here. An affected customer, who just happened to be a security researcher, posted his account here.
Gandi says it manages over 2.1 million domains across 730 TLDs.

GoDaddy launches security service after Sucuri acquisition

GoDaddy has revealed the first fruits of its March acquisition of web security service provider Sucuri.
It’s called GoDaddy Website Security, and it appears to be a budget version of the services Sucuri already offers on a standalone basis.
For $6.99 per month ($83.88/year), the service monitors your web site for malware and removes it upon request. It also keeps tabs on major blacklists to make sure you’re not being blocked by Google, Norton or McAfee.
This low-end offering gets you a 12-hour response time for the cleanup component. You can up that to 30 minutes by taking out the $299.99 per year plan.
The more expensive plan also includes DDoS protection, a malware firewall and integration with a content delivery network for performance.
There’s also an intermediate, $19.99-per-month ($239.88/year) plan that includes the extra features but keeps the response time at 12 hours.
An SSL certificate is included in the two more-expensive packages.
The pricing and feature set look to compare reasonably well with Sucuri’s standalone products, which start at $16.66 a month and offer response times as fast as four hours.
As somebody who has suffered from three major security problems on GoDaddy over the last decade or so, and found GoDaddy’s response abysmal on all three occasions (despite my generally positive views of its customer service), the new service is a somewhat tempting proposition.

CIRA and Nominum offering DNS firewall

Canadian ccTLD registry CIRA has started offering DNS-based security services to Canadian companies.
The company has partnered with DNS security services provider Nominum to develop D-Zone DNS Firewall, which it said lets customers “block access to malicious content before it can reach their network”.
It’s basically a recursive DNS service with a layer of filterware that blocks access to lists of domains, such as those used by command and control servers, known to be connected to malware and phishing.
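The filtering layer can be sketched very simply: before answering a query, the resolver checks the requested name — and each of its parent domains — against a blocklist, and refuses to resolve on a match. A minimal sketch of that lookup logic, with made-up blocklist entries:

```python
# Minimal sketch of DNS-level blocklist filtering, the core idea behind
# a "DNS firewall". The blocklist entries are hypothetical examples.
BLOCKLIST = {"evil-c2.example", "phish.example"}

def is_blocked(qname: str) -> bool:
    """Return True if qname or any parent domain is on the blocklist."""
    labels = qname.rstrip(".").lower().split(".")
    # Check "a.b.evil-c2.example", then "b.evil-c2.example", and so on,
    # so that subdomains of a blocked domain are also blocked.
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return True
    return False

print(is_blocked("update.evil-c2.example"))  # True
print(is_blocked("example.com"))             # False
```

A real service would return NXDOMAIN or a walled-garden address for blocked names rather than simply dropping the query.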
It’s a timely offering, given the high-profile WannaCry ransomware which infected hundreds of thousands of unpatched Windows boxes worldwide last month (though I’m not sure this kind of service would have actually prevented its spread).
The CIRA service uses Nominum’s technology but operates at Canadian internet exchange points and appears to be marketed at Canadian customers.
It’s the latest effort by CIRA to expand outside of its core .ca registry business. Earlier this year, it became ICANN’s newest approved gTLD back-end provider after a deal with .kiwi.
Many ccTLD registries are looking outside of their traditional businesses as the increasingly cluttered TLD market puts a squeeze on registration growth.

Massive ransomware attack hits 150 countries, brought down by a domain reg

Kevin Murphy, May 15, 2017, Domain Tech

A massive outbreak of malware on Friday hit thousands of organizations in an estimated 150 countries and had a big impact on the UK National Health Service before being temporarily thwarted by a single domain name registration.
WannaCry, as the malware has been called, targets Windows boxes that have not installed a March security patch. It encrypts files on the hosts it infects and demands money for the decryption key.
The attack is Big News for several reasons.
First, it spread ransomware over the network using a remotely exploitable vulnerability that required no user error or social engineering to install itself.
Second, it hit an estimated quarter-million machines, including thousands at big organizations such as Telefonica, the NHS, Deutsche Bahn and FedEx.
Third, it posed a real risk to human life. A reported 70,000 NHS machines, including medical devices, were said to be infected. Reportedly, some non-critical patients had to be turned away from UK hospitals and operations were cancelled due to the inability of doctors to access medical records.
Fourth, WannaCry appears to have been based on code developed by the US National Security Agency and leaked last month.
All in all, it was an attack the scale of which we have not seen for many years.
But it seems to have been “accidentally” prevented from propagating further on Friday, at least temporarily, with the simple act of registering a domain name.
A young British security researcher who goes by the online handle MalwareTech said he was poring over the WannaCry code on Friday afternoon when he came across an unregistered domain name.
On the assumption that the malware author perhaps planned to use the domain as a command and control center, MalwareTech spent the ten bucks to register it.
MalwareTech discovered that after the domain was registered, the malware stopped encrypting the hard drives it infected.
He first thought it was a fail-safe or kill-switch, but he later came to the conclusion that the author had included the domain lookup as a way to thwart security researchers such as himself, who run malware code in protected sandbox environments.
MalwareTech wrote:

In certain sandbox environments traffic is intercepted by replying to all URL lookups with an IP address belonging to the sandbox rather than the real IP address the URL points to, a side effect of this is if an unregistered domain is queried it will respond as [if] it were registered

Once the domain was registered, WannaCry iterations on newly infected machines assumed they were running in sandboxes and turned themselves off before causing additional damage.
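The logic MalwareTech describes can be sketched in a few lines. To be clear, this is a reconstruction of the behavior as reported, not WannaCry’s actual code, and the domain below is a placeholder rather than the real kill-switch domain:

```python
# Sketch of the kill-switch behavior described above: query a domain
# and exit quietly if the lookup succeeds. In a sandbox that answers
# every DNS query, the lookup always "succeeds", so the malware bails
# out. Once the real domain was registered, it succeeded everywhere.
import socket
import sys

KILL_SWITCH_DOMAIN = "unregistered-killswitch.example"  # placeholder

def domain_resolves(name: str) -> bool:
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

if domain_resolves(KILL_SWITCH_DOMAIN):
    sys.exit(0)  # looks registered (or sandboxed): stop before doing damage
```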
MalwareTech was naturally enough proclaimed the hero of the day by many news outlets, but it appears that versions of the malware without the DNS query kill-switch had already started circulating over the weekend.
Many are warning that the start of the work week today may see a new rash of infections.
The researcher’s account of the incident can be read in full here.

Hacker hostage crisis at ICANN secret key ceremony! (on TV)

Kevin Murphy, March 24, 2017, Gossip

One of ICANN’s Seven Secret Key-Holders To The Internet got taken out as part of an elaborate heist or something on American TV this week.
In tense scenes, a couple of secret agents or something with guns were forced to break into one of ICANN’s quarterly root zone key signing ceremonies to prevent a hacker or terrorist or something from something something, something something.
The stand-off came after the secret agents or whatever discovered that a hacker called Mayhew had poisoned a guy named Adler, causing a heart attack, in order to secure his position as a replacement ICANN key-holder and hijack the ceremony.
This all happened on a TV show called Blacklist: Redemption that aired in the US March 16.
I’d be lying if I said I fully understood what was supposed to be going on in the episode, not being a regular viewer of the series, but here’s the exposition from the beginning of the second act.

Botox Boss Lady: Seven keys control the internet? That can’t be possible.
Neck Beard Exposition Guy: They don’t control what’s on it, just how to secure it. All domain names have an assigned number. But who assigns the numbers?
Soap Opera Secret Agent: Key holders?
Neck Beard Exposition Guy: Seven security experts randomly selected by ICANN, the Internet Corporation for Assigned Names and Numbers.
Bored Secret Agent: Max Adler’s wife mentioned a key ceremony.
Neck Beard Exposition Guy: Yeah, four times a year the key holders meet to generate a master key and to assign new numbers, to make life difficult for hackers who want to direct folks to malicious sites or steal their credit card information.
Botox Boss Lady: But by being at the ceremony, Mayhew gets around those precautions?
Neck Beard Exposition Guy: Oh, he does more than that. He can route any domain name to him.

That’s the genuine dialogue. ICANN, jarringly, isn’t fictionalized in the way one might usually expect from US TV drama.
The scene carries on to explain the elaborate security precautions ICANN has put in place around its key-signing ceremonies, including biometrics, smart cards and the like.
The fast-moving show then cuts to the aforementioned heist situation, in which our villain of the week takes an ICANN staffer hostage before using the root’s DNSSEC keys to somehow compromise a government data drop and download a McGuffin.
Earlier this week I begged Matt Larson, ICANN’s VP of research and a regular participant in the ceremonies (which are real) to watch the show and explain to me what bits reflect reality and what was plainly bogus.
“There are some points about it that are quite close to how the root KSK administration works,” he said, describing the depiction as “kind of surreal”.
“But then they take it not one but two steps further. The way the ceremony happens is not accurate, the consequences of what happens at the ceremony are not accurate,” he added.
“They talk about how at the ceremony we generate a key, well that’s not true. It’s used for signing a new key. And then they talk about how as a result of the ceremony anyone can intercept any domain name anywhere and of course that’s not true.”
The ceremonies are used to sign the keys that make end-to-end DNSSEC possible. By signing the root, DNSSEC resolvers have a “chain of trust” that goes all the way to the top of the DNS hierarchy.
The root keys just secure the link between the root and the TLDs. Compromising them would not enable a hacker to immediately start downloading data from the site of his choosing, as depicted in the show. He’d then have to go on to compromise the rest of the chain.
“You’d have to create an entire path of spoofed zones to who you wanted to impersonate,” Larson said. “Your fake root zone would have to delegate to a fake TLD zone to a fake SLD zone and so on so you could finally convince someone they were going to the address that you wanted.”
“If you could somehow compromise the processes at the root, that alone doesn’t give you anything,” he said.
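Larson’s point can be illustrated with a toy model of the chain of trust: a validator only trusts a zone if every delegation from the trust anchor down checks out, so forging the root key alone breaks validation at the first link the attacker cannot also re-sign. This is a conceptual sketch, not real DNSSEC record processing:

```python
# Toy model of the DNSSEC chain of trust. Each zone has a key, and a
# record of which key its parent used to sign the delegation. This is
# a simplification: real DNSSEC uses DS/DNSKEY/RRSIG records.
CHAIN = {
    ".":            ("root-key", None),          # trust anchor
    "com.":         ("com-key", "root-key"),     # delegation signed by root
    "example.com.": ("example-key", "com-key"),  # delegation signed by .com
}

TRUST_ANCHOR = "root-key"

def validate(zone: str) -> bool:
    """Walk up to the root; every delegation must match the parent's key."""
    key, signer = CHAIN[zone]
    if signer is None:
        return key == TRUST_ANCHOR
    parent = zone.split(".", 1)[1] or "."
    parent_key, _ = CHAIN[parent]
    return signer == parent_key and validate(parent)

print(validate("example.com."))  # True: the chain is intact

# An attacker who substitutes a fake root key breaks the chain at the
# first delegation he cannot also re-sign:
CHAIN["."] = ("fake-root-key", None)
print(validate("example.com."))  # False: .com's delegation no longer verifies
```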
But the show did present a somewhat realistic description of how the ceremony rooms (located in Virginia and California, not Manhattan as seen on TV) are secured.
Among other precautions, the facilities are secured with smart cards and PINs, retina scans for ICANN staff, and have reinforced walls to prevent somebody coming in with a sledgehammer, Larson said.
Blacklist: Redemption airs on Thursday nights on NBC in the US, but I wouldn’t bother if I were you.

Phishing in new gTLDs up 1,000% but .com still the worst

Kevin Murphy, February 20, 2017, Domain Registries

The .com domain is still the runaway leader TLD for phishing, with new gTLDs still being used for a tiny minority of attacks, according to new research.
.com domains accounted for 51% of all phishing in 2016, despite only having 48% of the domains in the “general population”, according to the 2017 Phishing Trends & Intelligence Report from security outfit PhishLabs.
But new gTLDs accounted for just 2% of attacks, despite separate research showing they have about 8% of the market.
New gTLDs saw a 1,000% increase in attacks compared to 2015, the report states.
The statistics are based on PhishLabs’ analysis of nearly one million phishing sites discovered over the course of the year and include domains that have been compromised, rather than registered, by attackers.
The company said:

Although the .COM top-level domain (TLD) was associated with more than half of all phishing sites in 2016, new generic TLDs are becoming a more popular option for phishing because they are low cost and can be used to create convincing phishing domains.

There are a few reasons new gTLDs are gaining traction in the phishing ecosystem. For one, some new gTLDs are incredibly cheap to register and may be an inexpensive option for phishers who want to have more control over their infrastructure than they would with a compromised website. Secondly, phishers can use some of the newly developed gTLDs to create websites that appear to be more legitimate to potential victims.

Indeed, the cheapest new gTLDs are among the worst for phishing — .top, .xyz, .online, .club, .website, .link, .space, .site, .win and .support — according to the report.
But the numbers show that new gTLDs are significantly under-represented in phishing attacks.
According to separate research from CENTR, there were 309.4 million domains in existence at the end of 2016, of which about 25 million (8%) were new gTLDs.
Yet PhishLabs reports that new gTLD domains were used for only about 2% of attacks.
CENTR statistics have .com with a 40% share of the global domain market, with PhishLabs saying that .com is used in 51% of attacks.
The difference in the market share statistics between the two sets of research is likely due to the fact that CENTR excludes .tk from its numbers.
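The over- and under-representation claims are simple arithmetic on the quoted shares — attack share divided by share of the general domain population. A quick sketch using the figures above (the 48% .com figure from the PhishLabs report and CENTR’s 8% new-gTLD estimate):

```python
# Representation ratio: share of phishing attacks divided by share of
# all registered domains. Above 1 means over-represented in phishing.
shares = {
    # TLD group: (share of phishing attacks, share of domain population)
    ".com":      (0.51, 0.48),  # PhishLabs figures quoted above
    "new gTLDs": (0.02, 0.08),  # PhishLabs attacks vs. CENTR market share
}

for tld, (attacks, domains) in shares.items():
    ratio = attacks / domains
    label = "over-represented" if ratio > 1 else "under-represented"
    print(f"{tld}: ratio {ratio:.2f} -> {label}")
```

By this measure .com is only slightly over-represented (about 1.06), while new gTLDs come in around 0.25 — a quarter of their "fair share" of attacks.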
Again, because PhishLabs counts hacked sites — in fact it says the “vast majority” were hacked — we should probably exercise caution before attributing blame to registries.
But PhishLabs said in its report:

When we see a TLD that is over-represented among phishing sites compared to the general population, it may be an indication that it is more apt to being used by phishers to maliciously register domains for the purposes of hosting phishing content. Some TLDs that met these criteria in 2016 included .COM, .BR, .CL, .TK, .CF, .ML, and .VE.

By far the worst ccTLD for phishing was Brazil’s .br, with 6% of the total, according to the report.
Also notable were .uk, .ru, .au, .pl, and .in, each with about 2% of the total, PhishLabs said.

ICANN joins anti-phishing group board

Kevin Murphy, January 27, 2017, Domain Policy

ICANN’s VP of security has joined the board of directors of the Anti-Phishing Working Group.
Dave Piscitello is one of three new APWG board members, arriving as the group expands its board from two people to five.
APWG said the expansion “is recognition of the growing complexity and scale of Internet crime today and the challenges in responding to this global threat.”
In a press release, it noted that targeted phishing attacks are said to be the root cause of the data thefts that may or may not have influenced the US presidential election last year.
The other two new directors are Brad Wardman of PayPal and Pat Cain of The Cooper Cain Group, a security consulting firm (a different bloke to the similarly named Pat Kane of Verisign).
APWG is an independent, public-private coalition that collects and publishes data about phishing attack trends and advice for how to defend against them.
Part of this work entails tracking how many domain names are involved in phishing, and in which TLDs.
The APWG board also includes chair David Jevans of Proofpoint and secretary-general Peter Cassidy.

Security experts say ICANN should address collisions before approving more new TLDs

Kevin Murphy, January 2, 2017, Domain Tech

ICANN’s Security and Stability Advisory Committee has told ICANN it needs to do more to address the problem of name collisions before it approves any more new gTLDs.
In its latest advisory (pdf), published just before Christmas, SSAC says ICANN is not doing enough to coordinate with other technical bodies that are asserting authority over “special use” TLDs.
The SAC090 paper appears to be an attempt to get ICANN to further formalize its relationship with the Internet Engineering Task Force as it pertains to reserved TLDs:

The SSAC recommends that the ICANN Board of Directors take appropriate steps to establish definitive and unambiguous criteria for determining whether or not a syntactically valid domain name label could be a top-level domain name in the global DNS.

Pursuant to its finding that lack of adequate coordination among the activities of different groups contributes to domain namespace instability, the SSAC recommends that the ICANN Board of Directors establish effective means of collaboration on these issues with relevant groups outside of ICANN, including the IETF.

The paper speaks to at least two ongoing debates.
First, should ICANN approve .home and .corp?
These two would-be gTLDs were applied for by multiple parties in 2012 but have been on hold since August 2013 following an independent report into name collisions.
Name collisions are generally cases in which ICANN delegates a TLD to the public DNS that is already broadly used on private networks. This clash can result in the leakage of private data.
.home and .corp are by a considerable margin the two strings most likely to be affected by this problem, with .mail also seeing substantial volume.
But in recent months .home and .corp applicants have started to put pressure on ICANN to resolve the issue and release their applications from limbo.
The second incident the SSAC paper speaks to is the reservation in 2015 of .onion.
If you’re using a browser on the privacy-enhancing Tor network, .onion domains appear to you to work exactly the same as domains in any other gTLDs, but under the hood they don’t use the public ICANN-overseen DNS.
The IETF gave .onion status as a “Special Use Domain”, in order to prevent future collisions, which caused ICANN to give it the same restricted status as .example, .localhost and .test.
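In practice, special-use names are meant to be short-circuited before they ever reach the public DNS — Tor-aware software handles .onion itself, and resolvers are expected not to forward such queries upward. A minimal sketch of that check, covering only the names mentioned above rather than the IETF’s full registry:

```python
# Sketch: short-circuit queries for special-use TLDs instead of sending
# them to the public DNS. This list covers only the names mentioned in
# the text; the authoritative list is the IETF's special-use registry.
SPECIAL_USE_TLDS = {"onion", "example", "localhost", "test"}

def is_special_use(name: str) -> bool:
    """True if the name's TLD is reserved for special use."""
    tld = name.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld in SPECIAL_USE_TLDS

print(is_special_use("somehiddenservice.onion"))  # True: never query public DNS
print(is_special_use("example.com"))              # False: resolve normally
```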
But there was quite a lot of hand-wringing within the IETF before this status was granted, with some worrying that the organization was stepping on ICANN’s authority.
The SSAC paper appears to be designed at least partially to encourage ICANN to figure out how much it should take its lead from the IETF in this respect. It asks:

The IETF is an example of a group outside of ICANN that maintains a list of “special use” names. What should ICANN’s response be to groups outside of ICANN that assert standing for their list of special names?

For members of the new gTLD industry, the SSAC paper may be of particular importance because it raises the possibility of delays to subsequent rounds of the program if ICANN does not spell out more formally how it handles special use TLDs.
“The SSAC recommends that ICANN complete this work before making any decision to add new TLD names to the global DNS,” it says.

Amazon backtracks after pricing free Alexa list at over $900,000

Kevin Murphy, November 23, 2016, Domain Services

Amazon has reversed, at least temporarily, its decision to yank its free list of the world’s most popular domains, after an outcry from researchers.
The daily Alexa list, which contains the company’s estimate of the world’s top 1 million domains by traffic, suddenly disappeared late last week.
The list was popular with researchers in fields such as internet security. Because it was free, it was widely used.
DI PRO uses the list every day to estimate the relative popularity of top-level domains.
After deleting the list, Amazon directed users to its Amazon Web Services portal, which had started offering the same data priced at $0.0025 per URL.
That’s not cheap. The cost of obtaining the same data suddenly leaped from nothing to $912,500 per year, or $2,500 per day.
That’s beyond the wallets, I suspect, of almost every Alexa user, especially the many domain name tools providers (including yours truly) that relied on the data to estimate domain popularity.
Even scaling back usage to the top 100,000 URLs would be prohibitively expensive for most researchers.
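Those figures follow directly from the per-URL price:

```python
# The arithmetic behind the figures above: $0.0025 per URL for the
# top 1 million list, pulled once a day for a year.
price_per_url = 0.0025
urls_per_day = 1_000_000

daily_cost = price_per_url * urls_per_day
annual_cost = daily_cost * 365

print(f"daily: ${daily_cost:,.0f}")    # $2,500
print(f"annual: ${annual_cost:,.0f}")  # $912,500

# Even scaled back to the top 100,000 URLs, it's still steep:
print(f"top-100k annual: ${price_per_url * 100_000 * 365:,.0f}")  # $91,250
```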
While Amazon is of course free to price its data at whatever it thinks it is worth, no notice was given that the file was to be deleted, scuppering without warning goodness knows how many ongoing projects.
Some users spoke out on Twitter.


I spent most of yesterday figuring out how to quickly rejigger DI PRO to cope with the new regime, but it seems I may have been wasting my time.
After an outcry from fellow researchers, Amazon has restored the free list. It said on Twitter:


It seems clear that the key word here is “temporarily”, and that the restoration of the file may primarily be designed to give researchers more time to seek alternatives or wrap up their research.