The .com domain is still the runaway leader among TLDs for phishing, with new gTLDs used in only a tiny minority of attacks, according to new research.
.com domains accounted for 51% of all phishing in 2016, despite only having 48% of the domains in the “general population”, according to the 2017 Phishing Trends & Intelligence Report from security outfit PhishLabs.
But new gTLDs accounted for just 2% of attacks, despite separate research showing they have about 8% of the market.
New gTLDs saw a 1,000% increase in attacks compared to 2015, the report states.
The statistics are based on PhishLabs’ analysis of nearly one million phishing sites discovered over the course of the year and include domains that have been compromised, rather than registered, by attackers.
The company said:
Although the .COM top-level domain (TLD) was associated with more than half of all phishing sites in 2016, new generic TLDs are becoming a more popular option for phishing because they are low cost and can be used to create convincing phishing domains.
There are a few reasons new gTLDs are gaining traction in the phishing ecosystem. For one, some new gTLDs are incredibly cheap to register and may be an inexpensive option for phishers who want to have more control over their infrastructure than they would with a compromised website. Secondly, phishers can use some of the newly developed gTLDs to create websites that appear to be more legitimate to potential victims.
Indeed, the cheapest new gTLDs are among the worst for phishing — .top, .xyz, .online, .club, .website, .link, .space, .site, .win and .support — according to the report.
But the numbers show that new gTLDs are significantly under-represented in phishing attacks.
According to separate research from CENTR, there were 309.4 million domains in existence at the end of 2016, of which about 25 million (8%) were new gTLDs.
Yet PhishLabs reports that new gTLD domains were used for only about 2% of attacks.
CENTR statistics have .com with a 40% share of the global domain market, with PhishLabs saying that .com is used in 51% of attacks.
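The over- and under-representation the two reports describe can be sanity-checked with a quick calculation. This sketch uses PhishLabs’ 48% “general population” figure for .com (CENTR’s 40% differs because, as noted below, it excludes .tk):

```python
# Share of phishing attacks vs. share of all registered domains,
# using the PhishLabs and CENTR figures quoted above.
shares = {
    ".com":      {"phishing": 0.51, "market": 0.48},
    "new gTLDs": {"phishing": 0.02, "market": 0.08},
}

for tld, s in shares.items():
    ratio = s["phishing"] / s["market"]
    label = "over" if ratio > 1 else "under"
    print(f"{tld}: {ratio:.2f}x share of attacks vs domains ({label}-represented)")
```

By this measure .com is only marginally over-represented (about 1.06x), while new gTLDs come in at roughly a quarter of their expected share.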
The difference in the market share statistics between the two sets of research is likely due to the fact that CENTR excludes .tk from its numbers.
Again, because PhishLabs counts hacked sites — in fact it says the “vast majority” were hacked — we should probably exercise caution before attributing blame to registries.
But PhishLabs said in its report:
When we see a TLD that is over-represented among phishing sites compared to the general population, it may be an indication that it is more apt to being used by phishers to maliciously register domains for the purposes of hosting phishing content. Some TLDs that met these criteria in 2016 included .COM, .BR, .CL, .TK, .CF, .ML, and .VE.
By far the worst ccTLD for phishing was Brazil’s .br, with 6% of the total, according to the report.
Also notable were .uk, .ru, .au, .pl, and .in, each with about 2% of the total, PhishLabs said.
ICANN’s VP of security has joined the board of directors of the Anti-Phishing Working Group.
Dave Piscitello is one of three new APWG board members, arriving as the group expands its board from two people to five.
APWG said the expansion “is recognition of the growing complexity and scale of Internet crime today and the challenges in responding to this global threat.”
In a press release, it noted that targeted phishing attacks are said to be the root cause of the data thefts that may or may not have influenced the US presidential election last year.
The other two new directors are Brad Wardman of PayPal and Pat Cain of The Cooper Cain Group, a security consulting firm (a different bloke to the similarly named Pat Kane of Verisign).
APWG is an independent, public-private coalition that collects and publishes data about phishing attack trends and advice for how to defend against them.
Part of this work entails tracking how many domain names are involved in phishing, and in which TLDs.
The APWG board also includes chair David Jevans of Proofpoint and secretary-general Peter Cassidy.
ICANN’s Security and Stability Advisory Committee has told ICANN it needs to do more to address the problem of name collisions before it approves any more new gTLDs.
In its latest advisory (pdf), published just before Christmas, SSAC says ICANN is not doing enough to coordinate with other technical bodies that are asserting authority over “special use” TLDs.
The SAC090 paper appears to be an attempt to get ICANN to further formalize its relationship with the Internet Engineering Task Force as it pertains to reserved TLDs:
The SSAC recommends that the ICANN Board of Directors take appropriate steps to establish definitive and unambiguous criteria for determining whether or not a syntactically valid domain name label could be a top-level domain name in the global DNS.
Pursuant to its finding that lack of adequate coordination among the activities of different groups contributes to domain namespace instability, the SSAC recommends that the ICANN Board of Directors establish effective means of collaboration on these issues with relevant groups outside of ICANN, including the IETF.
The paper speaks to at least two ongoing debates.
First, should ICANN approve .home and .corp?
These two would-be gTLDs were applied for by multiple parties in 2012 but have been on hold since August 2013 following an independent report into name collisions.
Name collisions are generally cases in which ICANN delegates a TLD to the public DNS that is already broadly used on private networks. This clash can result in the leakage of private data.
.home and .corp are by a considerable margin the two strings most likely to be affected by this problem, with .mail also seeing substantial volume.
But in recent months .home and .corp applicants have started to put pressure on ICANN to resolve the issue and release their applications from limbo.
The second debate the SSAC paper speaks to is the reservation in 2015 of .onion.
If you’re using a browser on the privacy-enhancing Tor network, .onion domains appear to you to work exactly the same as domains in any other gTLD, but under the hood they don’t use the public ICANN-overseen DNS.
The IETF gave .onion status as a “Special Use Domain”, in order to prevent future collisions, which caused ICANN to give it the same restricted status as .example, .localhost and .test.
But there was quite a lot of hand-wringing within the IETF before this status was granted, with some worrying that the organization was stepping on ICANN’s authority.
The SSAC paper appears to be designed at least partially to encourage ICANN to figure out how much it should take its lead from the IETF in this respect. It asks:
The IETF is an example of a group outside of ICANN that maintains a list of “special use” names. What should ICANN’s response be to groups outside of ICANN that assert standing for their list of special names?
For members of the new gTLD industry, the SSAC paper may be of particular importance because it raises the possibility of delays to subsequent rounds of the program if ICANN does not spell out more formally how it handles special use TLDs.
“The SSAC recommends that ICANN complete this work before making any decision to add new TLD names to the global DNS,” it says.
Amazon has reversed, at least temporarily, its decision to yank its free list of the world’s most popular domains, after an outcry from researchers.
The daily Alexa list, which contains the company’s estimate of the world’s top 1 million domains by traffic, suddenly disappeared late last week.
The list was popular with researchers in fields such as internet security. Because it was free, it was widely used.
DI PRO uses the list every day to estimate the relative popularity of top-level domains.
After deleting the list, Amazon directed users to its Amazon Web Services portal, which had started offering the same data priced at $0.0025 per URL.
That’s not cheap. The cost of obtaining the same data suddenly leaped from nothing to $912,500 per year, or $2,500 per day.
That’s beyond the wallets, I suspect, of almost every Alexa user, especially the many domain name tools providers (including yours truly) that relied on the data to estimate domain popularity.
Even scaling back usage to the top 100,000 URLs would be prohibitively expensive for most researchers.
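The quoted figures follow directly from the per-URL rate, as this quick back-of-the-envelope check shows:

```python
PRICE_PER_URL = 0.0025  # Amazon's quoted rate, in dollars

top_1m_daily = 1_000_000 * PRICE_PER_URL   # one full pull of the top-1M list
top_1m_yearly = top_1m_daily * 365
top_100k_daily = 100_000 * PRICE_PER_URL   # the scaled-back option

print(top_1m_daily, top_1m_yearly, top_100k_daily)
# 2500.0 per day, 912500.0 per year, or 250.0 per day for the top 100k
```

Even the top-100k option works out to over $91,000 a year for a daily pull.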
While Amazon is of course free to price its data at whatever it thinks it is worth, no notice was given that the file was to be deleted, scuppering without warning goodness knows how many ongoing projects.
Some users spoke out on Twitter.
The quiet death of the @Alexa_Support top million sites is a grievous blow to internet researchers everywhere. $2500 per pull now.
— April King (@aprilmpls) November 21, 2016
Removing the top 1M list is a HUGE mistake. It was extremely useful to assess the impact of new security vulnerabilities. 🙁 @Alexa_Support
— Benjamin Beurdouche (@beurdouche) November 22, 2016
@Alexa_Support I'm disappointed, but I hope you reconsider. The Top 1M list is a standard reference in research. It's simply irreplaceable.
— Santiago Zanella (@xEFFFFFFF) November 22, 2016
I spent most of yesterday figuring out how to quickly rejigger DI PRO to cope with the new regime, but it seems I may have been wasting my time.
After an outcry from fellow researchers, Amazon has restored the free list. It said on Twitter:
Thanks to customer feedback, the top 1M sites is temporarily available again. We’ll provide notice before updating the file in the future
— Alexa Support (@Alexa_Support) November 22, 2016
It seems clear that the key word here is “temporarily”, and that the restoration of the file may primarily be designed to give researchers more time to seek alternatives or wrap up their research.
Verisign has revived its old name collisions security scare story, publishing this week a weighty research paper claiming millions are at risk of man-in-the-middle attacks.
It’s actually a study into how a well-known type of attack, first documented in the 1990s, might become easier due to the expansion of the DNS at the top level.
According to the paper there might be as many as 238,000 instances per day of query traffic intended for private networks leaking to the public DNS, where attackers could potentially exploit it to do all manner of genuinely nasty things.
But Verisign has seen no evidence of the vulnerability being used by bad guys yet and it might not be as scary as it first appears.
You can read the paper here (pdf), but I’ll attempt to summarize.
The problem concerns a virtually ubiquitous protocol called WPAD, for Web Proxy Auto-Discovery.
It’s used mostly by Windows clients to automatically download a web proxy configuration file that tells their browser how to connect to the web.
Organizations host these files on their local networks. The WPAD protocol tries to find the file using DHCP first, but fails over to DNS.
So, your browser might look for a wpad.dat file on wpad.example.com, depending on what domain your computer belongs to, using DNS.
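As a rough sketch of that DNS fallback (the exact candidate list varies by implementation, and the function name here is mine), the client walks up its own domain suffix prepending “wpad.” at each level:

```python
def wpad_candidates(fqdn: str) -> list[str]:
    """DNS names a naive WPAD client might try, most specific first.

    Sketch only: real implementations differ in where they stop walking
    up the suffix, which is exactly what determines how far a query leaks.
    """
    labels = fqdn.split(".")
    # drop the hostname, then prepend "wpad." at each remaining level
    return ["wpad." + ".".join(labels[i:]) for i in range(1, len(labels))]

print(wpad_candidates("laptop.finance.corp"))
# ['wpad.finance.corp', 'wpad.corp'] -- if .corp were ever delegated,
# these queries would hit the public DNS instead of an internal resolver
```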
The vulnerability arises because companies often use previously undelegated TLDs — such as .prod or .global — on their internal networks. Their PCs could belong to domains ending in .corp, even though .corp isn’t a real TLD in the DNS root.
When these devices are roaming outside of their local network, they will still attempt to use the DNS to find their WPAD file. And if the TLD their company uses internally has actually been delegated by ICANN, their WPAD requests “leak” to the registry or registrant.
A malicious attacker could register a domain name in a TLD that matches the domain the target company uses internally, allowing him to intercept and respond to the WPAD request and setting himself up as the roaming laptop’s web proxy.
That would basically allow the attacker to do pretty much whatever he wanted to the victim’s browsing experience.
Verisign says it saw 20 million WPAD leaks hit its two root servers every single day when it collected its data, and estimates that 6.6 million users are affected.
The paper says that of the 738 new gTLDs it looked at, 65.7% saw some degree of WPAD query leakage.
The ones with the most leaks, in order, were .global, .ads, .group, .network, .dev, .office, .prod, .hsbc, .win, .world, .one, .sap and .site.
It’s potentially quite scary, but there are some mitigating factors.
First, the problem is not limited to new gTLDs.
Yesterday I talked to Matt Larson, ICANN’s new vice president of research (who held the same post at Verisign until a few years ago).
He said ICANN has seen the same problem with .int, which was delegated in 1988. ICANN runs one of .int’s authoritative name servers.
“We did a really quick look at 24 hours of traffic and saw a million and a half queries for domain names of the form wpad.something.int, and that’s just one name server out of several in a 24-hour period,” he said.
“This is not a new problem, and it’s not a problem that’s specific to new gTLDs,” he said.
According to Verisign’s paper, only 2.3% of the WPAD query leaks hitting its root servers were related to new gTLDs. That’s about 238,000 queries every day.
With such a small percentage, you might wonder why new gTLDs are being highlighted as a problem.
I think it’s because organizations typically won’t own the new gTLD domain name that matches their internal domain, something that would eliminate the risk of an attacker exploiting a leak.
Verisign’s report also has limited visibility into the actual degree of risk organizations are experiencing today.
Its research methodology by necessity was limited to observing leaked WPAD queries hitting its two root servers before the new gTLDs in question were delegated.
The company only collected relevant NXDOMAIN traffic to its two root servers — DNS queries with answers typically get resolved closer to the user in the DNS hierarchy — so it has no visibility to whether the same level of leaks happen post-delegation.
Well aware of the name collisions problem, largely due to Verisign’s 11th-hour epiphany on the subject, ICANN forces all new gTLD registries to wildcard their zones for 90 days after they go live.
All collision names are pointed to 127.0.53.53, a reserved loopback IP address picked in order to catch the attention of network administrators (DNS uses port 53).
Potentially, at-risk organizations could have fixed their collision problems shortly after the colliding gTLD was delegated, reducing the global impact of the vulnerability.
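An admin worried about this could watch for that sentinel address with something like the following sketch (the function names are mine, not from any of the reports; point it at names your network actually uses internally):

```python
import socket

COLLISION_IP = "127.0.53.53"  # ICANN's "controlled interruption" sentinel

def is_collision_signal(ip: str) -> bool:
    """True if a resolved address is the name-collision sentinel."""
    return ip == COLLISION_IP

def check_internal_name(name: str) -> bool:
    """Resolve a name publicly and report whether it now collides.

    Illustrative only, e.g. check_internal_name("intranet.corp") would
    return True during a hypothetical .corp 90-day wildcard period.
    """
    try:
        return is_collision_signal(socket.gethostbyname(name))
    except socket.gaierror:
        return False  # name doesn't resolve publicly; no collision signal
```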
There’s no good data showing how many networks were reconfigured due to name collisions in the new gTLD program, but there is some anecdotal evidence of admins telling Google to go fuck itself when .prod got delegated.
A December 2015 report from JAS Advisors, which came up with the 127.0.53.53 idea, said the effects of name collisions have been rather limited.
ICANN’s Larson echoed the advice put out by security watchdog US-CERT this week, which among other things urges admins to use proper domain names that they actually control on their internal networks.