A new way to game the new gTLD program

Kevin Murphy, May 13, 2024, Uncategorized

It may not help you win a gTLD, but a new method for screwing over your enemies in ICANN’s new gTLD program has emerged.

As I reported earlier today, it seems quite likely that ICANN is going to add a new step in the new gTLD evaluation process for the next round — testing each applied-for string in the live DNS to see if it causes significant name collision problems, breaking commonly deployed software or leading to data leaks.

The proposed new Technical Review Team would make this assessment based in part on how much query traffic non-existent TLDs receive at various places in the DNS, including the ICANN-managed root. A string with millions of daily queries would be flagged for further review and potentially banned.
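
To make that concrete, a volume screen of this kind can be as simple as counting queries per would-be TLD and flagging anything over a cut-off. Here's a minimal Python sketch of the idea, assuming a simplified log of bare query names; the log format, the 10-million cut-off and the flagging logic are my own illustrative assumptions, not anything ICANN or NCAP has specified.

```python
# Hypothetical sketch only: count queries per would-be TLD from a simplified
# daily query log and flag applied-for strings over a threshold. The log format
# and threshold are illustrative assumptions, not the TRT's actual tooling.
from collections import Counter

DAILY_QUERY_THRESHOLD = 10_000_000  # illustrative cut-off only

def flag_high_traffic_strings(log_lines, applied_for_strings):
    """Return applied-for strings whose daily query counts exceed the threshold.

    Each log line is assumed to be a bare query name, e.g. "host.example.corp".
    """
    counts = Counter()
    for line in log_lines:
        qname = line.strip().rstrip(".").lower()
        if not qname:
            continue
        tld = qname.rsplit(".", 1)[-1]  # rightmost label is the would-be TLD
        if tld in applied_for_strings:
            counts[tld] += 1
    return {tld: n for tld, n in counts.items() if n > DAILY_QUERY_THRESHOLD}

# Tiny made-up example:
sample_log = ["mail.intranet.corp.", "router.home", "www.example.com"]
print(flag_high_traffic_strings(sample_log, {"corp", "home", "mail"}))  # {} -- nowhere near the threshold
```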

The Name Collision Analysis Project Discussion Group, which came up with the new name collision recommendations, reckons this volume-based screening could be used against new gTLD applicants as a form of sabotage, because it might be quite difficult for ICANN to figure out whether the traffic is organic or simulated.

The group wrote in its final report (pdf):

In the 2012 round, the issue of name collisions included an assumption that the existence of any name collision was accidental (e.g., individuals and organizations that made a mistake in configuration). In future rounds, there is a concern on the part of the NCAP DG that name collisions will become purposeful (e.g., individuals and organizations will simulate traffic with an intention to confuse or disrupt the delegation process)…

Determining whether a name collision is accidental or purposeful will be a best-effort determination given the limits of current technologies.

We’re basically talking about a form of denial of service attack, where the DNS is flooded with bogus traffic with the intention of breaking not a server or a router but a new gTLD application filed by a company you don’t like.

It probably wouldn’t even be that difficult or expensive to carry out. A string needs fewer than 10 million queries a day to make it into the top 25 most-queried non-existent TLDs.

It would make no sense if the attacker was also applying for the same gTLD — because it’s the string, not the applicant, that gets banned — but if you’re Pepsi and you want to scupper Coca-Cola’s chances of getting .coke, there’s arguably a rationale to launch such an attack.

The NCAP DG noted that such actions “may also impact the timing and quantity of legal objections issued against proposed allocations, how the coordination of the next gTLD round is designed, and contention sets and auctions.”

“Name collisions are now a well-defined and known area of concern for TLD applicants when compared to the 2012 round, which suggests that individuals and organizations looking to ‘game’ the system are potentially more prepared to do so,” the report states.

I’d argue that the potential downside of carrying out such an attack, and getting found out, would be huge. Even if it turns out not to be a criminal act, you’d probably find yourself in court, with all the associated financial and brand damage that would cause, regardless.

.home, .mail and .corp could get unbanned

Kevin Murphy, May 13, 2024, Domain Tech

The would-be new gTLDs .home, .mail and .corp — which were some of the most hotly contested strings in the 2012 application round before ICANN banned them — could get a new lease of life if ICANN adopts the recommendations of a panel of security experts.

More than 20 applications for the three strings were first put on hold, and then rejected outright in 2018, due to the risk of name collisions — where a TLD in the public DNS clashes with a domain used extensively on private networks.

The three non-existent TLDs receive more than 100 million queries per day at the DNS root due to queries leaking out from private networks, creating the risk of stuff breaking or sensitive data being stolen if they were to ever be delegated.

But now ICANN has been told that it “should not reject a TLD solely based on the volume of name collisions” and that it should submit .home, .mail and .corp to a new, more nuanced “Name Collision Risk Assessment Process”.

The recommendations come in a newly published and rather extensive final report (pdf) from the Name Collision Analysis Project Discussion Group, which has been looking into the name collisions problem for the last four years.

While NCAP says ICANN should create a Collision String List of high-risk strings that new gTLD applicants could consult, it stopped short of recommending that the Org preemptively ban strings outright with a “do not apply” list, writing:

Regarding .CORP, .HOME, and .MAIL, high query volume is not a sufficient indicator of high-risk impact. The complexity and diversity of query sources further complicate the assessment of risk and impact. It is impractical to create a pre-emptive “do-not-apply” list for gTLD strings due to the dynamic nature of the DNS and the need for real-time, comprehensive analysis.

.corp might have an easier time getting unblocked. NCAP figured out that most queries for that TLD are due to one “globally dominant software package” made by Microsoft that uses .corp as a default setting. That problem would be easier to fix than .home’s, which sees bogus traffic from a huge range of sources.

.mail also might be safe to delegate. NCAP noted that at least six gTLDs with more pre-delegation query traffic — .network, .ads, .prod, .dev, .office and .site — were subsequently delegated and received very low numbers of collision reports from live deployment.

Instead of banning any string outright, NCAP proposes a new Name Collision Risk Assessment Framework.

Under the framework, a new Technical Review Team would be in charge of testing every applied-for gTLD not already considered high-risk for collision problems, and placing any that turn out to be high-risk on a Collision String List of essentially banned strings.

To do so, the applied-for gTLD string would have to be actually delegated to the live DNS root zone, under the control of the TRT rather than a registry or applicant, while data is gathered using four different methods of responding to query traffic not unlike the “controlled interruption” method currently in use.

This would be a huge break from the current system, under which gTLDs only get delegated after ICANN has contracted with a registry operator, but it would mean that IANA would be able to quickly yank a gTLD from the DNS, if it started causing serious problems, without stepping on anyone’s commercial interests or inviting legal action.

There’s little doubt that the proposed framework would add friction to the new gTLD evaluation process in the next round, but the fact that NCAP has delivered its recommendations ahead of its original schedule is good news for those hoping for no more delays to the next round actually launching.

The NCAP study was considered to be on the critical path to the next round. It’s already been approved by the Security and Stability Advisory Committee and is expected to be considered by ICANN’s board of directors at an upcoming meeting. Implementing the recommendations would obviously take some time, but I doubt that would delay the expected Q2 2026 opening of the next application window.

The new recommendations on .corp, .home and .mail mean those gTLDs could well come back into play in the next round, which will come as cold comfort to the applicants who had their $185,000 application fees tied up for years before ICANN finally decided to ban them in 2018, offering a full refund.

There were seven applicants for .mail, six for .corp, and a whopping 11 for .home. Applicants included GoDaddy, Google, Amazon, and Identity Digital.

According to ICANN’s web site, Google never actually withdrew its applications for .home, .corp and .mail, and Amazon never withdrew its application for .mail. If that’s accurate, it could lead to some interesting disputes ahead of the 2026 application round.

Amazon and Google among .internal TLD ban backers

Kevin Murphy, March 20, 2024, Domain Tech

Google and Amazon have publicly backed ICANN’s plan to reserve the top-level domain .internal for private behind-the-firewall uses.

ICANN picked the string “internal” as the one it will promise to never delegate to the DNS root, allowing network administrators and software developers to use it confidently, without the risk of data leaking to a third party should the TLD ever come under a registry’s control.

The public comment period over its choice is coming to a close tomorrow, with a generally supportive vibe coming from the 30-odd comments submitted so far.

Notably, tech giants Amazon and Google have both filed comments backing .internal, with both companies saying that they already use the TLD extensively for internal purposes (Google in its Cloud services) and that to allow it to be delegated in future would cause big problems.

Some commenters niggled that .internal is too long, and that something like .local or .lan, both already reserved, might be better. Others wondered why strings such as .corp or .home, which are already effectively banned due to the high risk of name collisions, were not chosen instead.

ICANN picks the domain it will never, ever release

Kevin Murphy, January 24, 2024, Domain Policy

ICANN has picked the TLD string that it will recommend for safe use behind corporate firewalls on the basis that it will never, ever be delegated.

The string is .internal, and the choice is now open for public comment.

It’s being called a “private use” TLD. Organizations would be able to use it behind their firewalls safe in the knowledge that it will never appear in the public DNS, mitigating the risk of public/private name collisions and data leakage.
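
To illustrate what “private use” means in practice, here's a minimal sketch, assuming the dnspython library, of how an admin might check that a name under .internal is answered by the in-house resolver and never by the public DNS. The hostname and resolver addresses are made-up placeholders, not anything from ICANN's proposal.

```python
# Hedged sketch: verify that a private-use name under .internal is answered by
# the local resolver and never resolves via a public resolver. Requires
# dnspython; the hostname and resolver addresses are made-up placeholders.
import dns.resolver

INTERNAL_NAME = "intranet.internal."   # hypothetical private name
CORPORATE_RESOLVER = "10.0.0.53"       # hypothetical internal DNS server
PUBLIC_RESOLVER = "8.8.8.8"            # any public resolver will do

def try_resolve(name, nameserver):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        answer = resolver.resolve(name, "A")
        return [rr.address for rr in answer]
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer, dns.resolver.NoNameservers):
        return None

print("internal view:", try_resolve(INTERNAL_NAME, CORPORATE_RESOLVER))
# Against the public DNS this should always fail, because ICANN has promised
# .internal will never be delegated to the root.
print("public view:", try_resolve(INTERNAL_NAME, PUBLIC_RESOLVER))
```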

.internal beat fellow short-lister .private to ICANN’s selection because it was felt that .private might lure people into a false sense of security.

While it’s unlikely that anyone was planning to apply for .internal as a commercial or brand gTLD in future, it’s important to note that when it makes it to the ICANN reserved list all confusingly similar strings will also be banned, under the current draft of the Applicant Guidebook.

So reserving .internal also potentially bans .internat, which Google tells me is the French word for a boarding school, or .internai, which is a possible brand for an AI for interns (yes, I’m grasping here, but you get my point).

The public comment period is open now and ends March 21.

.mail, .home, .corp hopefuls could get exit plan in January

Kevin Murphy, December 27, 2017, Domain Registries

The twenty remaining applicants for the gTLDs .corp, .home and .mail could get the option to bow out with a full refund as early as January.
The ICANN board of directors earlier this month discussed several options for how to treat the in-limbo applications, one of which was a refund.
According to minutes of its December 13 meeting:

Staff outlined some potential options for the Board to consider, which ranged from providing a full refund of the New gTLD Program application fee to the remaining .CORP, .HOME, and .MAIL applicants, to providing priority in subsequent rounds of the New gTLD Program if the applicants were to reapply for the same strings.

Applicants for these strings that already withdrew their applications for a partial refund were also discussed.
The three would-be gTLDs have been frozen for years, after a study showed that they already receive vast amounts of error traffic on a daily basis.
This means there would likely be a large number of name collisions with zones on private networks, should these strings be delegated to the authoritative root.
The ICANN board instructed the staff to draft some resolutions to be voted on at “a subsequent meeting”, suggesting directors are close to reaching a decision.
It seems possible a vote could even happen at a January meeting, given that the board typically meets up almost every month.

Refund “options” for in-limbo gTLD applicants?

Kevin Murphy, November 6, 2017, Domain Policy

ICANN may just be a matter of weeks away from giving applicants for the .mail, .corp and .home gTLDs an exit strategy from their four years in limbo.
Its board of directors on Thursday passed a resolution calling for staff to “provide options for the Board to consider to address the New gTLD Program applications for .CORP, .HOME, and .MAIL by the first available meeting of the Board following the ICANN60 meeting in Abu Dhabi”.
It’s possible this means the board could consider the matter before the end of the year.
Twenty remaining applications for the three strings have been on hold since they were identified as particularly risky in August 2013.
A study showed that all three — .home and .corp in particular — already experience vast amounts of erroneous DNS traffic on a daily basis.
This is due to so-called “name collisions”, which come about when a newly delegated TLD is already in use on corporate or other private networks.
Many companies use .corp and .mail already behind their firewalls, a practice sometimes historically encouraged by commercial technical documentation, and .home is known to be used by some ISPs in residential and business routers.
These scenarios, and others like them, can lead to DNS queries spilling out onto the public internet, which could cause breakage or data leakage.
The solution for all new gTLDs delegated to date has been to wildcard the entire zone with the message “Your DNS needs immediate attention” for a period before registrations are accepted.
This has led to some new gTLDs with far less collision traffic seeing small but notable pockets of outrage when delegated — as happened with Google’s .prod (used by some as an internal shorthand for “production”) in 2014.
Studies to date have concentrated on the volume of error traffic to applied-for gTLDs, but last Thursday the ICANN board kicked off a study that will look at what the real-world impact of name collisions in .mail, .corp and .home could be.
It’s tasked the Security and Stability Advisory Committee with carrying out the study in conjunction with related groups such as the IETF.
But this is likely to take quite a long time, so the board also resolved to think up “options” for the 20 affected applications.
Could the applicants be offered a full refund, as opposed to the partial one they currently qualify for? Could there be some kind of deferment option, such as that offered to unsuccessful 2000-round applicants? Either seems possible.

Security experts say ICANN should address collisions before approving more new TLDs

Kevin Murphy, January 2, 2017, Domain Tech

ICANN’s Security and Stability Advisory Committee has told ICANN it needs to do more to address the problem of name collisions before it approves any more new gTLDs.
In its latest advisory (pdf), published just before Christmas, the SSAC says ICANN is not doing enough to coordinate with other technical bodies that are asserting authority over “special use” TLDs.
The SAC090 paper appears to be an attempt to get ICANN to further formalize its relationship with the Internet Engineering Task Force as it pertains to reserved TLDs:

The SSAC recommends that the ICANN Board of Directors take appropriate steps to establish definitive and unambiguous criteria for determining whether or not a syntactically valid domain name label could be a top-level domain name in the global DNS.

Pursuant to its finding that lack of adequate coordination among the activities of different groups contributes to domain namespace instability, the SSAC recommends that the ICANN Board of Directors establish effective means of collaboration on these issues with relevant groups outside of ICANN, including the IETF.

The paper speaks to at least two ongoing debates.
First, should ICANN approve .home and .corp?
These two would-be gTLDs were applied for by multiple parties in 2012 but have been on hold since August 2013 following an independent report into name collisions.
Name collisions are generally cases in which ICANN delegates a TLD to the public DNS that is already broadly used on private networks. This clash can result in the leakage of private data.
.home and .corp are by a considerable margin the two strings most likely to be affected by this problem, with .mail also seeing substantial volume.
But in recent months .home and .corp applicants have started to put pressure on ICANN to resolve the issue and release their applications from limbo.
The second debate the SSAC paper speaks to is the reservation in 2015 of .onion.
If you’re using a browser on the privacy-enhancing Tor network, .onion domains appear to you to work exactly the same as domains in any other gTLD, but under the hood they don’t use the public ICANN-overseen DNS.
The IETF gave .onion status as a “Special Use Domain”, in order to prevent future collisions, which caused ICANN to give it the same restricted status as .example, .localhost and .test.
But there was quite a lot of hand-wringing within the IETF before this status was granted, with some worrying that the organization was stepping on ICANN’s authority.
The SSAC paper appears to be designed at least partially to encourage ICANN to figure out how much it should take its lead from the IETF in this respect. It asks:

The IETF is an example of a group outside of ICANN that maintains a list of “special use” names. What should ICANN’s response be to groups outside of ICANN that assert standing for their list of special names?

For members of the new gTLD industry, the SSAC paper may be of particular importance because it raises the possibility of delays to subsequent rounds of the program if ICANN does not spell out more formally how it handles special use TLDs.
“The SSAC recommends that ICANN complete this work before making any decision to add new TLD names to the global DNS,” it says.

Verisign says new gTLDs put millions at risk

Kevin Murphy, May 26, 2016, Domain Tech

Verisign has revived its old name collisions security scare story, publishing this week a weighty research paper claiming millions are at risk of man-in-the-middle attacks.
It’s actually a study into how a well-known type of attack, first documented in the 1990s, might become easier due to the expansion of the DNS at the top level.
According to the paper there might be as many as 238,000 instances per day of query traffic intended for private networks leaking to the public DNS, where attackers could potentially exploit it to do all manner of genuinely nasty things.
But Verisign has seen no evidence of the vulnerability being used by bad guys yet and it might not be as scary as it first appears.
You can read the paper here (pdf), but I’ll attempt to summarize.
The problem concerns a virtually ubiquitous protocol called WPAD, for Web Proxy Auto-Discovery.
It’s used mostly by Windows clients to automatically download a web proxy configuration file that tells their browser how to connect to the web.
Organizations host these files on their local networks. The WPAD protocol tries to find the file using DHCP first, but fails over to DNS.
So, depending on what domain your computer belongs to, your browser might use DNS to look for a wpad.dat file at wpad.example.com.
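To show how those lookups can climb all the way up to a bare TLD, here's a simplified Python sketch of WPAD's DNS fallback. The domain is hypothetical, and real Windows name devolution has more rules than this walk-up, but the shape of the problem is the same.

```python
# Rough sketch of WPAD's DNS fallback, showing how a query can climb all the
# way up to the bare TLD. The domain below is hypothetical, and real Windows
# name devolution has more rules than this simplified walk-up.
def wpad_candidates(machine_domain):
    """Return the wpad hostnames a client might try, most specific first."""
    labels = machine_domain.strip(".").split(".")
    candidates = []
    # Drop the leftmost label each time: sales.example.global -> example.global -> global
    while labels:
        candidates.append("wpad." + ".".join(labels))
        labels = labels[1:]
    return candidates

for name in wpad_candidates("sales.example.global"):
    print(name)
# wpad.sales.example.global
# wpad.example.global
# wpad.global   <- since .global is a delegated gTLD, this query leaks into the public DNS
```
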
The vulnerability arises because companies often use previously undelegated TLDs — such as .prod or .global — on their internal networks. Their PCs could belong to domains ending in .corp, even though .corp isn’t a real TLD in the DNS root.
When these devices are roaming outside of their local network, they will still attempt to use the DNS to find their WPAD file. And if the TLD their company uses internally has actually been delegated by ICANN, their WPAD requests “leak” to the registry or a registrant.
A malicious attacker could register a domain name in a TLD that matches the domain the target company uses internally, allowing him to intercept and respond to the WPAD request and set himself up as the roaming laptop’s web proxy.
That would basically allow the attacker to do pretty much whatever he wanted to the victim’s browsing experience.
Verisign says it saw 20 million WPAD leaks hit its two root servers every single day when it collected its data, and estimates that 6.6 million users are affected.
The paper says that of the 738 new gTLDs it looked at, 65.7% saw some degree of WPAD query leakage.
The ones with the most leaks, in order, were .global, .ads, .group, .network, .dev, .office, .prod, .hsbc, .win, .world, .one, .sap and .site.
It’s potentially quite scary, but there are some mitigating factors.
First, the problem is not limited to new gTLDs.
Yesterday I talked to Matt Larson, ICANN’s new vice president of research (who held the same post at Verisign until a few years ago).
He said ICANN has seen the same problem with .int, which was delegated in 1988. ICANN runs one of .int’s authoritative name servers.
“We did a really quick look at 24 hours of traffic and saw a million and a half queries for domain names of the form wpad.something.int, and that’s just one name server out of several in a 24-hour period,” he said.
“This is not a new problem, and it’s not a problem that’s specific to new gTLDs,” he said.
According to Verisign’s paper, only 2.3% of the WPAD query leaks hitting its root servers were related to new gTLDs. That’s about 238,000 queries every day.
With such a small percentage, you might wonder why new gTLDs are being highlighted as a problem.
I think it’s because organizations typically won’t own the new gTLD domain name that matches their internal domain, something that would eliminate the risk of an attacker exploiting a leak.
Verisign’s report also has limited visibility into the actual degree of risk organizations are experiencing today.
Its research methodology by necessity was limited to observing leaked WPAD queries hitting its two root servers before the new gTLDs in question were delegated.
The company only collected relevant NXDOMAIN traffic to its two root servers — DNS queries with answers typically get resolved closer to the user in the DNS hierarchy — so it has no visibility into whether the same level of leaks happens post-delegation.
Well aware of the name collisions problem, largely due to Verisign’s 11th-hour epiphany on the subject, ICANN forces all new gTLD registries to wildcard their zones for 90 days after they go live.
All collision names are pointed to 127.0.53.53, a reserved IP address picked in order to catch the attention of network administrators (DNS uses TCP/IP port 53).
Potentially, at-risk organizations could have fixed their collision problems shortly after the colliding gTLD was delegated, reducing the global impact of the vulnerability.
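For what it's worth, the check an affected admin might run is trivial: resolve the internal hostnames you care about and see whether any now answer with 127.0.53.53. A minimal Python sketch, with a made-up hostname list:

```python
# Minimal sketch of an admin-side check: resolve internal hostnames and alert
# if any come back as 127.0.53.53, the controlled-interruption address. The
# hostname list is a made-up example.
import socket

COLLISION_SENTINEL = "127.0.53.53"
internal_hosts = ["mail.corp", "fileserver.network", "wiki.office"]  # hypothetical

for host in internal_hosts:
    try:
        _, _, addresses = socket.gethostbyname_ex(host)
    except socket.gaierror:
        continue  # name didn't resolve at all; no collision signal here
    if COLLISION_SENTINEL in addresses:
        print(f"{host}: name collision detected, this suffix just went live as a gTLD")
```
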
There’s no good data showing how many networks were reconfigured due to name collisions in the new gTLD program, but there is some anecdotal evidence of admins telling Google to go fuck itself when .prod got delegated.
A December 2015 report from JAS Advisors, which came up with the 127.0.53.53 idea, said the effects of name collisions have been rather limited.
ICANN’s Larson echoed the advice put out by security watchdog US-CERT this week, which among other things urges admins to use proper domain names that they actually control on their internal networks.

It’s official: new gTLDs didn’t kill anyone

Kevin Murphy, December 2, 2015, Domain Tech

The introduction of new gTLDs posed no risk to human life.
That’s the conclusion of JAS Advisors, the consulting company that has been working with ICANN on the issue of DNS name collisions.
In its final report, “Mitigating the Risk of DNS Namespace Collisions”, published last night, JAS described the response to the “controlled interruption” mechanism it designed as “annoyed but understanding and generally positive”.
New text added since the July first draft says: “ICANN has received fewer than 30 reports of disruptive collisions since the first delegation in October of 2013. None of these reports have reached the threshold of presenting a danger to human life.”
That’s a reference to Verisign’s June 2013 claim that name collisions could disrupt “life-supporting” systems such as those used by emergency response services.
Name collisions, you will recall, are scenarios in which a newly delegated TLD matches a string that is already used widely on internal networks.
Such scenarios can lead (and have led) to problems such as system failures and DNS queries leaking onto the internet.
The applied-for gTLDs .corp and .home have been effectively banned, due to the vast numbers of organizations already using them.
All other gTLDs were obliged, following JAS recommendations, to redirect all non-existent domains to 127.0.53.53, an IP address chosen to put network administrators in mind of port 53, which is used by the DNS protocol.
As we reported a little over a year ago, many administrators responded swearily to some of the first collisions.
JAS says in its final report:

Over the past year, JAS has monitored technical support/discussion fora in search of posts related to controlled interruption and DNS namespace collisions. As expected, controlled interruption caused some instances of limited operational issues as collision circumstances were encountered with new gTLD delegations. While some system administrators expressed frustration at the difficulties, overall it appears that controlled interruption in many cases is having the hoped-for outcome. Additionally, in private communication with a number of firms impacted by controlled interruption, JAS would characterize the overall response as “annoyed but understanding and generally positive” – some even expressed appreciation as issues unknown to them were brought to their attention.

There are a number of other substantial additions to the report, largely focusing on types of use cases JAS believes are responsible for most name collision traffic.
Oftentimes, as with the random 10-character domains Google’s Chrome browser uses for configuration purposes, the collision has no ill effect. In other cases, local system administrators were forced to fix their software to avoid the collision.
The report also reveals that the domain name corp.com, which is owned by long-time ICANN volunteer Mikey O’Connor, receives a “staggering” 30 DNS queries every second.
That works out to almost a billion queries per year (30 × 86,400 seconds × 365.25 days = 946,728,000), generated whenever a misconfigured system or an inexperienced user attempts to visit a .corp domain name.

ICANN Compliance probing Hunger Games domain

ICANN’s Compliance department is looking into whether Donuts broke the rules by activating a domain name for the forthcoming The Hunger Games movie.
Following up from the story we posted earlier today, ICANN sent DI the following statement:

We are well aware of this issue and are addressing it through our normal compliance resolution process. We attempt to resolve compliance matters through a collaborative informal resolution process, and we do not comment on what happens during the informal resolution phase.

At issue is whether Donuts allowed the movie’s marketers to launch thehungergames.movie before the new gTLD’s mandatory 90-day “controlled interruption” phase was over.
Under a strict reading of the CI rules, there’s something like 10 to 12 days left before Donuts is supposed to be allowed to activate any .movie domain except nic.movie.
Donuts provided the following statement:

This is a significant step forward in the mainstream usage of new domains. One of the core values of the new gTLD program is the promotion of consumer choice and competition, and Donuts welcomes this contribution to the program’s success, and to the promotion of the film. We don’t publicly discuss specific matters related to ICANN compliance.

I imagine what happened here is that Donuts got an opportunity to score an anchor tenant with huge visibility and decided to grasp it with both hands, even though distributor Lion’s Gate Entertainment’s (likely immovable) launch campaign schedule did not exactly chime with its own.
It may be a technical breach of the ICANN rules on name collisions — which many regard as over-cautious and largely unnecessary — but it’s not a security or stability risk.
Of course, some would say it also sets a precedent for other registries to bend the rules if they score big-brand backing in future.