More security issues prang ICANN site
ICANN has revealed details of a security problem on its web site that could have allowed new gTLD registries to view data belonging to their competitors.
The bug affected its Global Domains Division customer relationship management portal, which registries use to communicate with ICANN on issues related to delegation and launch.
ICANN took the GDD portal down for three days, from when the bug was reported on February 27 until last night, while it closed the hole.
The vulnerability would have enabled authenticated users to see information from other users’ accounts.
ICANN tells me the issue was caused by a misconfiguration of some third-party software — I’m guessing the Salesforce.com platform upon which GDD runs.
A spokesperson said that the bug was reported by a user.
No third parties would have been able to exploit it, but ICANN has been coy about whether it believes any registries used the bug to access their competitors’ accounts.
ICANN has ‘fessed up to about half a dozen crippling security problems in its systems since the launch of the new gTLD program.
Just in the last year, several systems have seen downtime due to vulnerabilities or attacks.
A similar kind of privilege escalation bug took down the Centralized Zone Data Service last April.
The RADAR service for registrars was offline for two weeks after being hacked last May.
A phishing attack against ICANN staff in December enabled hackers to view information not normally available to the public.
Domain hijacking bug found in Go Daddy
Go Daddy has rushed out a fix to a security bug in its web site that could have allowed attackers to steal valuable domain names.
Security engineer Dylan Saccomanni found several “cross site request forgery” holes January 17, which he said could be used to “edit nameservers, change auto-renew settings and edit the zone file entirely”.
He reported it to Go Daddy (evidently with some difficulty) and blogged it up, with attack code samples, January 18. Go Daddy reportedly patched its site the following day.
A CSRF vulnerability is one where a web site fails to verify that a state-changing request (typically an HTTP POST) actually originated from its own pages, rather than from a page controlled by an attacker riding on the victim’s logged-in session. Basically, in this case Go Daddy apparently wasn’t checking whether commands to edit name servers, for example, were being submitted via the correct web site.
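By way of illustration, the standard defense is a per-session anti-CSRF token that an attacker’s page cannot read. Here’s a minimal Python sketch of that kind of check; the function and session names are hypothetical and this is not Go Daddy’s actual code, just the sort of validation that was apparently missing.

```python
import hmac
import secrets

def issue_csrf_token(session: dict) -> str:
    """Generate a secret token, store it in the user's session, and embed it
    in every form the site serves (e.g. as a hidden input field)."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token

def is_post_legitimate(session: dict, submitted_token: str) -> bool:
    """Reject any state-changing POST whose token doesn't match the session's.

    A form auto-submitted from an attacker's page carries the victim's
    cookies, but the attacker cannot read or guess this token, so the
    comparison fails and the request is dropped."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted_token)
```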
Mitigating the risk substantially, however, attackers would have had to trick the would-be victim domain owner into submitting a web form on a different site while simultaneously logged in to their Go Daddy account.
In my experience, Go Daddy times out logged-in sessions after a period, reducing the potential attack window.
Being phishing-aware would also reduce your chance of being a victim.
I’m not aware of any reports of domains being lost to this attack.
Human glitch lets hackers into ICANN
It’s 2014. Does anyone in the domain name business still fall for phishing attacks?
Apparently, yes, ICANN staff do.
ICANN has revealed that “several” staff members fell prey to a spear-phishing attack last month, resulting in the theft of potentially hundreds of user credentials and unauthorized access to at least one Governmental Advisory Committee web page.
According to ICANN, the phishers were able to gather the email passwords of staff members, then used them to access the Centralized Zone Data Service.
CZDS is the clearinghouse for all zone files belonging to new gTLD registries. The data it stores isn’t especially sensitive — the files are archives, not live, functional copies — and the barrier to signing up for access legitimately is pretty low.
But CZDS users’ contact information and login credentials — including, as a matter of disclosure, mine — were also accessed.
While the stolen passwords were stored as salted hashes rather than plaintext, ICANN is still forcing all CZDS users to reset their passwords as a precaution. The organization said in a statement:
The attacker obtained administrative access to all files in the CZDS. This included copies of the zone files in the system, as well as information entered by users such as name, postal address, email address, fax and telephone numbers, username, and password. Although the passwords were stored as salted cryptographic hashes, we have deactivated all CZDS passwords as a precaution. Users may request a new password at czds.icann.org. We suggest that CZDS users take appropriate steps to protect any other online accounts for which they might have used the same username and/or password. ICANN is providing notices to the CZDS users whose personal information may have been compromised.
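For readers wondering what “salted cryptographic hashes” actually buys you, the general technique looks something like the minimal Python sketch below, using only the standard library. It is illustrative only; the iteration count and function names are assumptions, not ICANN’s implementation. The per-user salt means an attacker has to crack each password individually rather than against precomputed tables, which is why a forced reset is a precaution rather than a panic.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor, not ICANN's setting

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Return (salt, digest); only these are stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored_digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```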
As a victim, this doesn’t worry me a lot. My contact details are all in the public Whois and published on this very web site, but I can imagine other victims might not want their home address, phone number and the like in the hands of ne’er-do-wells.
It’s the second time CZDS has been compromised this year. Back in April, a coding error led to a privilege escalation vulnerability that was exploited to view requests by users to new gTLD registries.
Also accessed by the phishers this time around were several pages on the GAC wiki, which is about as interesting as it sounds (ie, not very). ICANN said the only non-public information that was viewed was a “members-only index page”.
User accounts on the ICANN blog and its Whois information portal were also accessed, but apparently no damage was caused.
In summary, the hackers seem to have stolen quite a lot of information they could have easily obtained legitimately, along with some hashed passwords that may allow them to cause further mischief if they can be cracked.
It’s embarrassing for ICANN, of course, especially for the staff members gullible enough to fall for the attack.
While the phishers made their emails appear to come from ICANN’s own domain, presumably their victims would have had to click through to a web page with a non-ICANN domain in the address bar in order to hand over their passwords.
That’s not the kind of practice you’d expect from the people tasked with running the domain name industry.
As .trust opens for sunrise, Artemis dumps .secure bid
Amazon is now the proud owner of the .secure new gTLD, after much smaller competing applicant Artemis Internet withdrew its bid.
Coincidentally, the settlement of the contention set came just yesterday, the day before Artemis took its .trust — which I’ve described as a “backup plan” — to sunrise.
I assume .secure was settled with a private deal. I’ve long suspected Artemis — affiliated with data escrow provider NCC Group — had its work cut out to win an auction against Amazon.
It’s a shame, in a way. Artemis was one of the few new gTLD applicants that had actually sketched out plans for something quite technologically innovative.
Artemis’ .secure was to be a “trust mark” for a high-priced managed security service. It wasn’t really about selling domain names in volume at all.
The company had done a fair bit of outreach work, too. As long ago as July 2013, around 30 companies had expressed their interest in signing up as anchor tenants.
But, after ICANN gave Amazon a get-out-of-jail-free card by allowing it to amend its “closed generic” gTLD applications, it looked increasingly unlikely Artemis would wind up owning the gTLD it was essentially already pre-selling.
In February this year, it emerged that the company had acquired the rights to .trust from Deutsche Post, which had applied for the gTLD unopposed.
This Plan B was realized today when .trust began its contractually mandated sunrise period.
Don’t expect many brands to apply for their names during sunrise, however — .trust’s standard registration policies are going to make cybersquatting non-existent.
Not only will .trust registrants have their identities manually vetted, but there’s also a hefty set of security standards — 123 pages (pdf) of them at the current count — that registrants will have to abide by on an ongoing basis in order to keep their names.
As for Amazon, its .secure application, as amended, is just as vague as all of its other former bids for closed, single-registrant generic strings (to the point where I often wonder if they’re basically still just closed generics).
It’s planning to deploy a small number of names to start with, managed by its own intellectual property department. After that, its application all gets a bit hand-wavey.
Russian hackers breaking in to NameCheap accounts
If you have an account at NameCheap, now might be a good time to think about changing your password.
According to the registrar, hackers based in Russia are using a haul of a reported 4.5 billion username/password combinations to attempt to break into its customers’ accounts.
Some attempts have been successful, NameCheap warned.
The attackers are using credentials stolen from third-party sources in a large-scale, automated attempt to log in to user accounts, disguised as regular users, the company said in a blog post.
NameCheap said:
The vast majority of these login attempts have been unsuccessful as the data is incorrect or old and passwords have been changed. As a precaution, we are aggressively blocking the IP addresses that appear to be logging in with the stolen password data. We are also logging these IP addresses and will be exporting blocking rules across our network to completely eliminate access to any Namecheap system or service, as well as making this data available to law enforcement.
While the vast majority of these logins are unsuccessful, some have been successful. To combat this, we’ve temporarily secured the Namecheap accounts that have been affected and are currently contacting customers involved requesting they improve the security for these accounts.
Affected users have been emailed, the company said.
NameCheap suspects the attack is linked to a reported cache of 1.2 billion unique username/password combinations amassed by a hacker group from databases vulnerable to SQL injection.
The registrar pointed out that its own systems haven’t been hacked. Customers should only be vulnerable if they use the same username and password at NameCheap as they use on other sites.
RADAR to be down at least two weeks after hack
ICANN expects its RADAR registrar database to be offline for “at least two weeks” following the discovery of a security vulnerability that exposed users’ login names and encrypted passwords.
ICANN seems to have been quick to act and to disclose the hack.
The attack happened last weekend and ICANN was informed about it by an “internet user” on Tuesday May 27, according to an ICANN spokesperson. RADAR was taken offline and the problem disclosed late May 28.
The spokesperson added that “we do not believe the user is affiliated with a current or previously accredited registrar.”
ICANN isn’t disclosing the nature of the vulnerability, but said RADAR will be offline for some time for a security audit. The spokesperson told DI in an email:
It will be at least two weeks. It is more important to complete a thorough security assessment of the site than to rush this process. First of all, we’re keeping the system offline until we complete a thorough audit of the system. We are also currently engaged in a security review of all systems and procedures at ICANN to assess and implement ongoing improvements as appropriate.
RADAR is a database used by registrars to coordinate stuff like emergency contacts and IP address whitelisting for bulk Whois access.
The downtime is not expected to impact registrants, according to ICANN. The spokesperson said: “Nothing that occurred has raised any concerns that registrants could or would be adversely affected.”
ICANN registrar database hacked
ICANN’s database of registrar contact information has been hacked and user data has been stolen.
The organization announced this morning that the database, known as RADAR, has been taken offline while ICANN conducts a “thorough review” of its security.
ICANN said:
This action was taken as a precautionary measure after it was learned that an unauthorized party viewed data in the system. ICANN has found no evidence of any unauthorized changes to the data in the system. Although the vulnerability has been corrected, RADAR will remain offline until a thorough review of the system is completed.
Users of the system — all registrars — have had their usernames, email addresses and encrypted passwords compromised, ICANN added.
ICANN noted that it’s possible to recover a plaintext password from its hash by brute force, so it’s enforcing a password reset on all users, although it has no evidence of any user accounts being accessed.
RADAR users may want to think about whether they have the same username/password combinations at other sites.
RADAR is a database used by registrars in critical functions such as domain name transfers.
Registrars can use it, for example, to white-list the IP addresses of rival registrars, enabling them to execute large amounts of Whois queries that would usually be throttled.
The news follows hot on the heels of a screwup in the Centralized Zone Data Service, which enabled any new gTLD registry to view data belonging to rival registries and other CZDS users.
Controlled interruption as a means to prevent name collisions [Guest Post]
This is a guest post written by Jeff Schmidt, CEO of JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.
One of JAS’ commitments during this process was to “float” ideas and solicit feedback. This set of thoughts poses an alternative to the “trial delegation” proposals in SAC062. The idea springs from past DNS-related experiences and has an effect we have named “controlled interruption.”
Learning from the Expired Registration Recovery Policy
Many are familiar with the infamous Microsoft Hotmail domain expiration in 1999. In short, a Microsoft registration for passport.com (Microsoft’s then-unified identity service) expired Christmas Eve 1999, denying millions of users access to the Hotmail email service (and several other Microsoft services) for roughly 20 hours.
Fortunately, a well-intended technology consultant recognized the problem and renewed the registration on Microsoft’s behalf, yielding a nice “thank you” from Microsoft and Network Solutions. Had a bad actor realized the situation, the outcome could have been far different.
The Microsoft Hotmail case and others like it led to the current Expired Registration Recovery Policy.
More recently, Regions Bank made news when its domains expired, and countless others go unreported. In the case of Regions Bank, the Expired Registration Recovery Policy seemed to work exactly as intended – the interruption inspired immediate action and the problem was solved, resulting in only a bit of embarrassment.
Importantly, there was no opportunity for malicious activity.
For the most part, the Expired Registration Recovery Policy is effective at preventing unintended expirations. Why? We call it the application of “controlled interruption.”
The Expired Registration Recovery Policy calls for extensive notification before the expiration, then a period when “the existing DNS resolution path specified by the Registrant at Expiration (“RAE”) must be interrupted” – as a last-ditch effort to inspire the registrant to take action.
Nothing inspires urgent action more effectively than service interruption.
But critically, in the case of the Expired Registration Recovery Policy, the interruption is immediately corrected if the registrant takes the required action — renewing the registration.
It’s nothing more than another notification attempt – just a more aggressive round after all of the passive notifications failed. In the case of a registration in active use, the interruption will be recognized immediately, inspiring urgent action. Problem solved.
What does this have to do with collisions?
A Trial Delegation Implementing Controlled Interruption
There has been a lot of talk about various “trial delegations” as a technical mechanism to gather additional data regarding collisions and/or attempt to notify offending parties and provide self-help information. SAC062 touched on the technical models for trial delegations and the related issues.
Ideally, the approach should achieve these objectives:
- Notifies systems administrators of possible improper use of the global DNS;
- Protects these systems from malicious actors during a “cure period”;
- Doesn’t direct potentially sensitive traffic to Registries, Registrars, or other third parties;
- Inspires urgent remediation action; and
- Is easy to implement and deterministic for all parties.
Like unintended expirations, collisions are largely a notification problem. The offending system administrator must be notified and take action to preserve the security and stability of their system.
One approach to consider as an alternative trial delegation concept would be an application of controlled interruption to help solve this notification problem. The approach draws on the effectiveness of the Expired Registration Recovery Policy, with the implementation looking like a modified “Application and Service Testing and Notification (Type II)” trial delegation as proposed in SAC062.
But instead of responding with pointers to application layer listeners, the authoritative nameserver would respond with an address inside 127/8 — the range reserved for localhost. This approach could be applied to A queries directly and MX queries via an intermediary A record (the vast majority of collision behavior observed in DITL data stems from A and MX queries).
Responding with an address inside 127/8 will likely break any application depending on an NXDOMAIN or some other response, but importantly it also prevents traffic from leaving the requestor’s network and blocks a malicious actor’s ability to intercede.
In the same way as the Expired Registration Recovery Policy calls for “the existing DNS resolution path specified by the RAE [to] be interrupted”, responding with localhost will hopefully inspire immediate action by the offending party while not exposing them to new malicious activity.
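To make the mechanism concrete, here is a rough Python sketch of what an application on an affected network would experience once a colliding name starts answering with a loopback address. The host name is a made-up placeholder, not an example from the framework.

```python
import socket

# Hypothetical internal short name that now collides with a delegated gTLD.
COLLIDING_NAME = "intranet.corp"

try:
    # Under controlled interruption the authoritative server answers with a
    # loopback address instead of the NXDOMAIN the requestor used to get.
    addr = socket.gethostbyname(COLLIDING_NAME)
    print(f"{COLLIDING_NAME} resolved to {addr}")

    # Any follow-on connection is made to the requestor's own machine, so it
    # typically fails fast and no packets ever leave the local network.
    socket.create_connection((addr, 443), timeout=2).close()
except socket.gaierror:
    print("Name did not resolve at all (NXDOMAIN)")
except OSError as exc:
    print(f"Connection attempt stayed on localhost and failed: {exc}")
```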
If legacy/unintended use of a DNS name is present, one could think of controlled interruption as a “buffer” prior to use by a legitimate new registrant. This is similar to the CA Revocation Period as proposed in the New gTLD Collision Occurrence Management Plan which “buffers” the legacy use of certificates in internal namespaces from new use in the global DNS. Like the CA Revocation Period approach, a set period of controlled interruption is deterministic for all parties.
Moreover, instead of using the typical 127.0.0.1 address for localhost, we could use a “flag” IP like 127.0.53.53.
Why? While troubleshooting the problem, the administrator will likely at some point notice the strange IP address and search the Internet for assistance. Making it known that new TLDs may behave in this fashion and publicizing the “flag” IP (along with self-help materials) may help administrators isolate the problem more quickly than just using the common 127.0.0.1.
We could also suggest that systems administrators proactively search their logs for this flag IP as a possible indicator of problems.
Why the repeated 53? Preserving the 127.0/16 seems prudent to make sure the IP is treated as localhost by a wide range of systems; the repeated 53 will hopefully draw attention to the IP and provide another hint that the issue is DNS related.
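The log search suggested above could be as simple as the following sketch (Python, with a hypothetical log path; grep would obviously do the same job).

```python
import re
from pathlib import Path

FLAG_IP = "127.0.53.53"  # the proposed controlled-interruption "flag" address
LOG_FILE = Path("/var/log/app/connections.log")  # hypothetical log location

pattern = re.compile(re.escape(FLAG_IP))

for number, line in enumerate(LOG_FILE.read_text(errors="replace").splitlines(), 1):
    if pattern.search(line):
        # Any hit suggests a name this system relies on now collides with a
        # newly delegated gTLD; check DNS search lists and internal name usage.
        print(f"line {number}: {line.rstrip()}")
```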
Two controlled interruption periods could even be used — one phase returning 127.0.53.53 for some period of time, and a second slightly more aggressive phase returning 127.0.0.1. Such an approach may cover more failure modes of a wide variety of requestors while still providing helpful hints for troubleshooting.
A period of controlled interruption could be implemented before individual registrations are activated, or for an entire TLD zone using a wildcard. In the case of the latter, this could occur simultaneously with the CA Revocation Period as described in the New gTLD Collision Occurrence Management Plan.
The ability to “schedule” the controlled interruption would further mitigate possible effects.
One concern in dealing with collisions is the reality that a potentially harmful collision may not be identified until months or years after a TLD goes live — when a particular second level string is registered.
A key advantage to applying controlled interruption to all second level strings in a given TLD in advance and at once via wildcard is that most failure modes will be identified during a scheduled time and before a registration takes place.
This has many positive features, including easier troubleshooting and the ability to execute a far less intrusive rollback if a problem does occur. From a practical perspective, avoiding a complex string-by-string approach is also valuable.
If there were to be a catastrophic impact, a rollback could be implemented relatively quickly, easily, and with low risk while the impacted parties worked on a long-term solution. A new registrant and associated new dependencies would likely not be adding complexity at this point.
Request for Feedback
As stated above, one of JAS’ commitments during this process was to “float” ideas and solicit feedback early in the process. Please consider these questions:
- What unintended consequences may surface if localhost IPs are served in this fashion?
- Will serving localhost IPs cause the kind of visibility required to inspire action?
- What are the pros and cons of a “TLD-at-once” wildcard approach running simultaneously with the CA Revocation Period?
- Is there a better IP (or set of IPs) to use?
- Should the controlled interruption plan described here be included as part of the mitigation plan? Why or why not?
- To what extent would this methodology effectively address the perceived problem?
- Other feedback?
We anxiously await your feedback — in comments to this blog, on the DNS-OARC Collisions list, or directly. Thank you and Happy New Year!
DNS Namespace Collisions: Detection and Response [Guest Post]
Those tracking the namespace collision issue in Buenos Aires heard a lot regarding the potential response scenarios and capabilities. Because this is an important, deep, and potentially controversial topic, we wanted to get some ideas out early on potential solutions to start the conversation.
Since risk can almost never be driven to zero, a comprehensive approach to risk management contains some level of a priori risk mitigation combined with investment in detection and response capabilities.
In my city of Chicago, we tend to be particularly sensitive about fires. In Chicago, like in most cities, we have a priori protection in the form of building codes, detection in the form of smoke/fire alarms, and response in the form of 9-1-1, sprinklers, and the very capable Chicago Fire Department.
Let’s think a little about what the detection and response capabilities might look like for DNS namespace collisions.
Detection: How do we know there is a problem?
Rapid detection and diagnosis of problems helps to both reduce damage and reduce the time to recovery. Physical security practitioners invest considerably in detection, typically in the form of guards and sensors.
Most meteorological events are detected (with some advance warning) through the use of radars and predictive modeling. Information security practitioners are notoriously light with respect to systematic detection, but we’re getting better!
If there are problematic DNS namespace collisions, the initial symptoms will almost certainly appear through various IT support mechanisms, namely corporate IT departments and the support channels offered by hardware/software/service vendors and Internet Service Providers.
When presented with a new and non-obvious problem, professional and non-professional IT practitioners alike will turn to Internet search engines for answers. This suggests that a good detection/response investment would be to “seed” support vendors/fora with information/documentation about this issue in advance and in a way that will surface when IT folks begin troubleshooting.
We collectively refer to such documentation as “self-help” information. ICANN has already begun developing documentation designed to assist IT support professionals with namespace-related issues.
In the same way that radar gives us some idea where a meteorological storm might hit, we can make reasonable predictions about where issues related to DNS namespace collisions are most likely to first appear.
While problems could appear anywhere, we believe it is most likely that scenarios involving remote (“road warrior”) use cases, branch offices/locations, and Virtual Private Networks are the best places to focus advance preparation.
This educated guess is based on the observation that DNS configurations in these use cases are often brittle due to complexities associated with dynamic and/or location-dependent parameters. Issues may also appear in Small and Medium-sized Enterprises (SMEs) with limited IT sophistication.
This suggests that proactively reaching out to vendors and support mechanisms with a footprint in those areas would also be a wise investment.
Response: Options, Roles, and Responsibilities
In the vast majority of expected cases, the IT professional “detectors” will also be the “responders” and the issue will be resolved without involving other parties. However, let’s consider the situations where other parties may be expected to have a role in response.
For the sake of this discussion, let’s assume that an Internet user is experiencing a problem related to a DNS namespace collision. I use the term “Internet user” broadly as any “consumer” of the global Internet DNS.
At this point in the thought experiment, let’s disregard the severity of the problem. The affected party (or parties) will likely exercise the full range of typical IT support options available to them – vendors, professional support, IT savvy friends and family, and Internet search.
If any of these support vectors are aware of ICANN, they may choose to contact ICANN at any point. Let’s further assume the affected party is unable and/or unwilling to correct the technical problem themselves and ICANN is contacted – directly or indirectly.
There is a critical fork in the road here: Is the expectation that ICANN provide technical “self-help” information or that ICANN will go further and “do something” to technically remedy the issue for the user? The scope of both paths needs substantial consideration.
For the rest of this blog, I want to focus on the various “do something” options. I see a few options; they aren’t mutually exclusive (one could imagine an escalation through these and potentially other options). The options are enumerated for discussion only and order is not meaningful.
- Option 1: ICANN provides technical support above and beyond “self-help” information to the impacted parties directly, including the provision of services/experts. Stated differently, ICANN becomes an extension of the impacted party’s IT support structure and provides customized/specific troubleshooting and assistance.
- Option 2: The Registry provides technical support above and beyond “self-help” information to the impacted parties directly, including the provision of services/experts. Stated differently, the Registry becomes an extension of the impacted party’s IT support structure and provides customized/specific troubleshooting and assistance.
- Option 3: ICANN forwards the issue to the Registry with a specific request to remedy. In this option, assuming all attempts to provide “self-help” are not successful, ICANN would request that the Registry make changes to their zone to technically remedy the issue. This could include temporary or permanent removal of second level names and/or other technical measures that constitute a “registry-level rollback” to a “last known good” configuration.
- Option 4: ICANN initiates a “root-level rollback” procedure to revert the state of the root zone to a “last known good” configuration, thus (presumably) de-delegating the impacted TLD. In this case, ICANN would attempt – on an emergency basis – to revert the root zone to a state that is not causing harm to the impacted party/parties. Root-level rollback is an impactful and potentially controversial topic and will be the subject of a follow-up blog.
One could imagine all sorts of variations on these options, but I think these are the basic high-level degrees of freedom. We note that ICANN’s New gTLD Collision Occurrence Management Plan and SAC062 contemplate some of these options in a broad sense.
Some key considerations:
- In the broader sense, what are the appropriate roles and responsibilities for all parties?
- What are the likely sources to receive complaints when a collision has a deleterious effect?
- What might the Service Level Agreements look like in the above options? How are they monitored and enforced?
- How do we avoid the “cure is worse than the disease” problem – limiting the harm without increasing risk of creating new harms and unintended consequences?
- How do we craft the triggering criteria for each of the above options?
- How are the “last known good” configurations determined quickly, deterministically, and with low risk?
- Do we give equal consideration to actors that are following the technical standards vs. those depending on technical happenstance for proper functionality?
- Are there other options we’re missing?
On Severity of the Harm
Obviously, the severity of the harm can’t be ignored. Short of situations where there is a clear and present danger of bodily harm, severity will almost certainly be measured economically and from multiple points of view. Any party expected to “do something” will be forced to choose between two or more economically motivated actors: users, Registrants, Registrars, and/or Registries experiencing harm.
We must also consider that just as there may be users negatively impacted by new DNS behavior, there may also be users that are depending on the new DNS behavior. A fair and deterministic way to factor severity into the response equation is needed, and the mechanism must be compatible with emergency invocation and the need for rapid action.
Request for Feedback
There is a lot here, which is why we’ve published this early in the process. We eagerly await your ideas, feedback, pushback, corrections, and augmentations.
This is a guest post written by Jeff Schmidt, CEO of JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.
These are the top 50 name collisions
Having spent the last 36 hours crunching ICANN’s lists of almost 10 million new gTLD name collisions, we’ve put the DI PRO collisions database back online, and we can start reporting some interesting facts.
First, while we reported yesterday that 1,318 new gTLD applicants will be asked to block a total of 9.8 million unique domain names, the number of distinct second-level strings involved is somewhat smaller.
It’s 6,806,050, according to our calculations, still a bewilderingly high number.
The most commonly blocked string, as expected, is “www”. It’s on the block-lists for 1,195 gTLDs, over 90% of the total.
Second is “2010”. I currently have no explanation for this, but I’m wondering if it’s an artifact of the years of Day In The Life data upon which ICANN based its lists.
Protocol-related strings such as “wpad” and “isatap” also rank highly, as do strings matching popular TLDs such as “com”, “org”, “uk” and “de”. Single-character strings are also very popular.
The brand with the most blocks (free trademark protection?) is unsurprisingly Google.
The string “google” appears as an exact match on 930 gTLDs’ lists. It appears as a substring of 1,235 additional blocked strings, such as “google-toolbar” and “googlemaps”.
Facebook, Yahoo, Gmail, YouTube and Hotmail also feature in the top 100 blocked brands.
DI PRO subscribers can search for strings that interest them, discovering how many and which gTLDs they’re blocked in, using the database.
Here’s a table of the top 50 blocked strings.
[Table: top 50 blocked strings]