
Verisign says people might die if new gTLDs are delegated

Kevin Murphy, June 2, 2013, 00:58:41 (UTC), Domain Policy

If there was any doubt in your mind that Verisign is trying to delay the launch of new gTLDs, its latest letter to ICANN about the Governmental Advisory Committee's advice should settle it.
The company has ramped up its anti-expansion rhetoric, calling on the GAC to support its view that launching new gTLDs now will put the security and stability of the internet at risk.
People might die if some strings are delegated, Verisign says.
Among other things, Verisign is now asking for:

  • Each new gTLD to be individually vetted for its possible security impact, with particular reference to TLDs that clash with widely-used internal network domains (eg, .corp).
  • A procedure to be put in place to throttle the addition of new gTLDs, should a security problem arise.
  • A trial period for each string ICANN adds to the root, so that new gTLDs can be tested for security impact before launching properly.
  • A new process for removing delegated gTLDs from the root if they cause problems.

In short, the company is asking for much more than it has to date — and much more that is likely to send its rivals into a frenzy — in its ongoing security-based campaign against new gTLDs.
The demands came in Verisign’s response to the GAC’s Beijing communique, which detailed government concerns about hundreds of applied-for gTLDs and provided frustratingly vague remediation advice.
Verisign has provided one of the most detailed responses to the GAC advice that ICANN has received to date, discussing how each item could be resolved and/or clarified.
In general, it seems to support the view that the advice should be implemented, but that work is needed to figure out the details.
In many cases, it’s proposing ICANN community working groups. In others, it says each affected registry should negotiate individual contract terms with ICANN.
But much of the 12-page letter talks about the security problems that Verisign suddenly found itself massively concerned about in March, a week after ICANN started publishing Initial Evaluation results.
The letter reiterates the potential problem that when a gTLD is delegated that is already widely used on internal networks, security problems such as spoofing could arise.
Verisign says there needs to be an “in-depth study” at the DNS root to figure out which strings are risky, even if the volume of traffic they receive today is quite low.
It also says each string should be phased in with an “ephemeral root delegation” — basically a test-bed period for each new gTLD — and that already-delegated strings should be removed if they cause problems:

A policy framework is needed in order to codify a method for braking or throttling new delegations (if and when these issues occur) either in the DNS or in dependent systems that provides some considerations as to when removing an impacting string from the root will occur.

While it’s well-known that strings such as .home and .corp may cause issues due to internal name clashes and their already high volume of root traffic, Verisign seems to want every string to be treated with the same degree of caution.
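To make the clash concrete: networks that use an undelegated suffix internally are silently relying on the public root answering NXDOMAIN for that string. Below is a minimal sketch of that check in Python, assuming the dnspython library; the candidate strings are illustrative, not drawn from Verisign's letter.

    # Check whether candidate internal-network suffixes still return
    # NXDOMAIN from the public root -- the answer LANs quietly depend on.
    # Requires: pip install dnspython
    import dns.resolver

    CANDIDATES = ["corp", "home", "mail", "lan"]  # illustrative strings

    for tld in CANDIDATES:
        try:
            dns.resolver.resolve(tld + ".", "NS")
            print(tld, "is delegated -- internal-only use is now risky")
        except dns.resolver.NXDOMAIN:
            print(tld, "returns NXDOMAIN -- internal-only use still works")
        except (dns.resolver.NoAnswer, dns.resolver.NoNameservers):
            print(tld, "gave an ambiguous answer; inspect it by hand")

Once a string flips from the NXDOMAIN branch to the delegated one, every resolver that used to fail over to the local network gets a public answer instead — which is the failure mode Verisign is describing.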
Lives may be on the line, Verisign said:

The problem is not just with obvious strings like .corp, but strings that have even small query volumes at the root may be problematic, such as those discussed in SAC045. These “outlier” strings with very low query rates may actually pose the most risks because they could support critical devices including emergency communication systems or other such life-supporting networked devices.

We believe the GAC, and its member governments, would undoubtedly share our fundamental concern.

The impact of pretty much every recommendation made in the letter would be to delay or prevent the delegation of new gTLDs.
A not unreasonable interpretation of this is that Verisign is merely trying to protect its $800 million .com business by keeping competitors out of the market for as long as possible.
Remember, Verisign adds roughly 2.5 million new .com domains every month, at $7.85 a pop.
New gTLDs may well put a big dent in that growth, and Verisign doesn’t have anything to replace it yet. It can’t raise prices any more, and the patent licensing program it has discussed has yet to bear fruit.
But because the company also operates two of the internet's root servers and maintains the root zone, it has a plausible smokescreen for shutting down competition under the guise of security and stability.
If that is what is happening, one could easily make the argument that it is abusing its position.
If, on the other hand, Verisign’s concerns are legitimate, ICANN would be foolhardy to ignore its advice.
ICANN CEO Fadi Chehade has made it clear publicly, several times, that new gTLDs will not be delegated if there’s a good reason to believe they will destabilize the internet.
The chair of the SSAC has stated that the internal name problem is largely dealt with, at least as far as SSL certificates go.
The question now for ICANN — the organization and the community — is whether Verisign is talking nonsense or not.


Comments (29)

  1. ChuckWagen says:

    This is certainly no laughing matter.
    If .lol is issued, par exemple, easily many people could die from hysterics.

  2. Jean Guillon says:

    Why would someone want to delay? Famous Four Media recently launched its 60 Governance Councils, the world is safe.
    🙂
    An example here (http://www.governancecouncils.com/news):
    “The .News Governance Council exists to provide a voice to the Internet community interested in the .News generic top level domain (gTLD). When ICANN launched its new gTLD initiative years ago, corporations, governments, and other industry stakeholders expressed a desire to see mechanisms in place to ensure that key stakeholders had a voice in marshalling new gTLDs when launched. The .News Governance Council has been created to provide this voice for the .News TLD. It does not exist to dictate which domains may exist on a TLD, nor to censor content. Rather, the Governance Council and its elected Board will monitor, advise, and recommend best practices, including but not limited to the Abuse Prevention and Mitigation (APM) Seal reporting system, intellectual property rights protection, TLD rules, reserved second-level domain names, certification or authentication programs, and ensuring compliance with ICANN rules. Its Board will be comprised of 5-11 individuals selected by the industry and supported by an independent management company tasked with aiding in the Governance Council’s self-governance. The result will be that the .News community—and interested governments around the world—can be sure that the .News TLD is appropriately supported by the Internet community. Interested parties are encouraged to apply”.

  3. Interested in tlds says:

    Verisign has customers for its back-end services, like Microsoft and Astrium, and they must be fuming at this scaremongering.
    Funny how a supplier is working directly against the interest of its customers!

  4. Just a personal note — as you can see, long ago I registered corp.com for typical goofy 3am type reasons. It provides a good example of what Verisign is talking about.
    Right now there are a lot of networks that have been configured with blah.corp.com names and are counting on 3rd-level corp.com names NOT being delegated in the “real” DNS. MS servers look to the real DNS *first* and then to the MS “domain” second. As long as there isn’t a name delegated in DNS, all is good.
    But just like delegating .corp, if I were to delegate all 3rd-level corp.com names with a wildcard DNS entry, several very bad things would happen:
    – a massive amount of traffic would “escape” from private Microsoft networks that are configured with blah.corp.com MS domains.
    – it would break those private networks in all kinds of weird ways — unexpectedly.
    – all that traffic would come to me, or a place that I specify — where I could do all kinds of nefarious things.
    I haven’t looked recently, but people have shown me MS documentation that *recommends* using .corp as the extension for MS network domain names. I’m here to tell you that the number of newbie network admins who’ve mistakenly added “.com” to that name is huge. So I can only imagine that when .corp delegates, the problem I’m describing will be orders of magnitude worse.
    Rather than constantly finding “delay gTLDs” conspiracy theories, can’t you entertain the possibility that Verisign is concerned that ICANN is transferring substantial risk to people who are completely unaware that it is coming (network admins and their users)?
    Don’t you think it would be a good idea to “pre-delegate” those names for a short interval to see what kind of breakage it will cause *before* we throw all those end-users down a rathole?
    I agree with Verisign on this one. Transferring massive known risks to global users of the Internet is a really bad idea.
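    A minimal sketch of the escape described above, assuming the dnspython library: a wildcard record makes every label under a zone resolve, so lookups that used to fail publicly (and therefore stayed local) would suddenly get answers. The probe label here is invented on the spot.

        # Probe a random third-level corp.com label. With no wildcard the
        # query returns NXDOMAIN and internal Microsoft "domains" keep
        # working; with a wildcard, it resolves and internal traffic escapes.
        import uuid
        import dns.resolver

        probe = uuid.uuid4().hex[:12] + ".corp.com"  # a name nobody registered
        try:
            answer = dns.resolver.resolve(probe, "A")
            print("wildcard present:", probe, "->", answer[0].address)
        except dns.resolver.NXDOMAIN:
            print("no wildcard:", probe, "fails publicly; LAN lookups still win")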

    • JS says:

      Thanks for the insights Mike. Great comment

    • Rubens Kuhl says:

      It would be a good idea, and we are entertaining a similar approach in a soon-to-be-released communication, but we don’t need to be naive about VRSN’s reasons for insisting on it. Saying people might die is the same kind of overreach we saw in their first letter and in other communications to other applicants (can’t comment on those for now). This is not one person’s opinion; this is a whole company trying to turn what may be a genuine belief of one of its officers into a market entry barrier. That translates to a sense of entitlement that is common with monopolies, and it’s up to us to fight it.

    • Kevin Murphy says:

      It’s hardly a conspiracy theory, Mikey.
      We’re talking about the same company that launched Site Finder, remember? It wanted to wildcard the whole of .com, which isn’t much different to the problem we’re talking about here.
      The article notes more than once that the problems of clashes with internal names are real and well known and that ICANN would be foolhardy to ignore a substantial risk.
      The question isn’t whether there’s *a* risk, it’s how big the risk is, how horrible the consequences of that risk becoming a reality, and what else, if anything, needs to be done about it.

      • um…
        “people may die” isn’t what Verisign said. Their words are “life support” — a bit less loaded. But let’s follow that thread. What if a medical outfit is using a .corp extension for its internal network (pretty likely, given the predominance of MS products in that industry)? What happens if all their internal traffic breaks on “delegation day”? Running through a few possibilities — nurses can’t retrieve patient records. Surgeons can’t retrieve x-rays. MRI machines fall over. I think that would make people cranky.
        Doesn’t it make sense to find out whether that’s going to happen? That’s all they’re saying.
        By the way, the reason they’re specifically pounding on .corp in their brief may be the comments that others (myself included) have made in a number of security and policy meetings, not just a Verisign officer on the warpath. Just sayin’
        Kevin, I agree with your points except for the ad hominem argument about wild-carding. I strongly agree with the last sentence in your reply just above — I’ll paraphrase it to “let’s conduct a risk assessment” — and I think Verisign is making that very point in the first of their 3 numbered bullets on Page 3 of their brief.
        One way to view this is that Verisign is being a piggo and trying to exploit their monopoly. Another, and the view that I take, is that they are *really* worried about this stuff. As am I, both individually and representing ISPs who are going to take the trouble calls when all these internal networks break in weird ways.
        To which I’ll add the observation that a lot of the “usual suspects” in the DNS security realm who would normally be chiming in with Verisign are conflicted right now because they work for applicants who are pushing hard for “no more delays.” I think this extends to ICANN — an organization that’s betting its budget on a brisk new-gTLD rollout schedule.
        So I think your article is fine Kevin, but I’m still pushing back on the conspiracy theory. 🙂

        • Kevin Murphy says:

          Because it matters to me, I’ll push back-back.
          I’ll first refer you to my March article on this subject, in which I actively tried to head off a related conspiracy theory.
          http://domainincite.com/12465-chehade-says-no-delay-as-verisign-drops-a-security-bomb-on-icann
          It may be ad hominem, but I don’t think it’s unreasonable to raise the point that Verisign was quite happy to put half of the internet’s private data at risk when it was in its corporate interests to do so, and that the situation now is merely reversed.
          As for the rest, I wonder if we’re on the same page.
          You’re talking primarily or exclusively, it seems, about .corp. I happen to agree with you. It should have been on the reserved list from the start.
          It’s also a completely pointless TLD that will be largely filled with defensive registrations. There are plenty of reasons it should never be delegated.
          And as one of millions of BT broadband subscribers obliged to use “.home” on their residential networks today, I have similar feelings about that TLD.
          What’s news is that Verisign now wants *every* proposed gTLD subjected to the same scrutiny.
          To the best of my knowledge and recollection, this is something it has *never* asked for before, despite the problem being understood for many years.
          I think it’s very reasonable to wonder whether this is necessary, given the possible motivations of the people asking for it. You may notice that I don’t come down on either side of that question, I merely pose it.

        • In the interests of widening the text back out, I’m replying “up a level” — apologies in advance if that hoses up the thread.
          Kevin, I’m using corp.com as an example because I have been coping with the error traffic to that domain since the early 90’s. But I’m really pushing the SAC45 report, which is broader than that.
          I checked out the “conspiracy theory” piece. Pretty entertaining. I’ll stick with my point — Verisign may be doing all that evil stuff, but I find it an unlikely strategy to be pursued by a publicly-traded company of that size.
          I didn’t read the details really carefully, but it looks to me like the instrumentation that they’re proposing is pretty light and could be done really quickly for all those edge cases. Like, in days, once it’s built.
          Your “this hasn’t been asked for before” point walks us into a different swamp — policy vs implementation. I’ll hazard a bet there are things in existing policy that would support their proposal (and other existing policy that wouldn’t).
          I think Fadi made a key statement in the Beijing public session, partly in response to my question about this. He said something along the lines of “we’ll never knowingly transfer risks to Internet users.” That’s not a bad credo to guide our actions.

  5. Paul Stahura says:

    Mikey,
    This is an issue that is being very overplayed. It’s not surprising, given Verisign’s interests, that they’re so obviously trying to slow down new TLDs by scaring the world. And with due respect for your hard and detailed work, I think you’re conflicted here too. If I owned corp.com, I’d rather not have .corp be delegated either. Though as the current registrant of corp.com you could contribute some useful data.
    This whole issue reminds me of Y2K. Remember how electrical grids would go offline, banks would close, hospital equipment would stop functioning (there’s a recycled argument for you) and general pandemonium? In reality, IT departments just fixed their code, everything worked, and we had a nice New Year’s Day.
    You suggest “pre-delegating” the names to see what may break. That seems like a good idea. Let’s deal with facts that help us stay on target instead of the ridiculous sky-is-falling falsehoods and the laughable babies-will-die imagery some are putting forth.
    Network admins may in some cases be naive in adding .com to the end of their corporate network names. ICANN isn’t here, as many often say, to guard against every possible scenario and, in fact, regardless of which TLD we’re talking about, network traffic will “escape” the private network from time to time for a variety of reasons.
    It can happen right now when a private network is using a publicly-unregistered domain name that then gets registered in one of the “public” TLDs.
    We do not prevent those names from being registered, do we? Just as we do not revoke the registration of corp.com as its continued registration *MAY*, if you did a long list of nefarious things, interfere with someone’s intended use of “example.corp” (for a private network) on a public network.
    Getting from the situation as it probably exists to outright thievery and DNS anarchy would take a long series of actions intending harm, and dependencies lining up in precise order. Harm will most definitely not come from simply allowing names to be registered in a new .corp TLD – this other series of things (which is hard to do) would have to happen by those intending harm with a single specific .corp sub-domain. I agree with Kevin when he says it’s very reasonable to wonder if slamming on the brakes is necessary given the motivations of the people asking for it. He doesn’t come down on either side of that question, but I do.
    As the registrant for corp.com, surely you have data on what subdomains are getting what kind of traffic. For all we know, all that traffic is going to one, or very few, corp.com subdomains.
    Would you share more information to make the discussion, which is misdirected to begin with, better informed on real data?
    In that spirit, would you provide a list of the top 100 corp.com subdomains (by traffic) with the amount of traffic you see for each (and break out the traffic by record type)? Incidentally, the fact that you are (currently) not answering the queries causes many requestors to ask again, increasing your “error” traffic by probably an order of magnitude.
    ICANN has had its fill of alarmist predictions, which seem to arrive the moment we get close to an entrenched interest being negatively impacted.
    Strange how a very few voices get more shrill the closer we get to introducing quality TLDs like .corp that will likely prove very competitive with .com.
    Lastly, I don’t think it was Fadi who said he would never knowingly transfer risk to the end user.
    And anyway the so-called “risk” to “transfer” is so small as to be nearly non-existent.
    By the way, I’m curious as to your opinion on 1) which party has that so-called “risk” now, 2) a reliable definition of what that “risk” is, 3) a reliable estimate of the magnitude of the “risk”, 4) your guess as to the benefits to the public brought about by taking that “risk”.
    Can you help with these questions?

  6. LOL. Love the innuendo. Actually, I probably make more money if .corp delegates. Either by selling matching 3rd-level names to 2nd-level registrants in .corp (a business model that every exact-match .com registrant ought to be thinking about) or by selling the name to the registry so *they* can do that. I’m especially pleased if .corp delegates and all these problems have been proven to be phantoms. Otherwise, I’ll have a hell of an ethical issue to puzzle through.
    And actually, it’s Verisign that suggests the “pre-delegating” idea, not me. I just think it’s a good one. So do you. So let’s do that. Let’s get some data. I’m the first guy who’d love to be proven wrong. Let’s get us some facts and let the chips fall where they may.
    Here’s the deal with corp.com. I don’t route any 3rd level names except http://www.corp.com because doing that would have the same impact that routing a 2nd-level name will have in .corp — traffic suddenly, unexpectedly escapes from networks that currently (yes, naively) count on that traffic staying local. Why did I decide to do that? Because one time I “pre-delegated” all those names with a wild-card, buried myself under all that traffic, and realized that I’d just broken stuff for a lot of people. I have snotty words on the web site admonishing system-administrators not to do that, but in reality they’re stuck. Here’s why they’re stuck, and why your “let them fix their code” approach isn’t quite that simple.
    This is the link to a knowledge base article on renaming an MS Active Directory forest:
    http://technet.microsoft.com/en-us/library/cc738208(v=ws.10).aspx
    And here’s a payday quote from that article (I dare you to read the whole thing):
    “Determining Domain Rename Completion
    “In a large forest, some number of domain controllers might prove impossible to contact during the domain rename process. These domain controllers never reach the final state of Done. You must decide how long to continue to try to reach domain controllers that are unreachable and how long to retry failed update attempts on domain controllers that have reached the Error state. When further progress in the forest is not possible, declare the domain rename process to be complete. All domain controllers that did not reach the final Done state, because they were either unreachable or finished in the Error state, must be demoted or removed from service, if demotion is not possible. For a forest to function without problems after a domain rename, only domain controllers that have reached the Done state can exist in the forest.”
    To get to that paragraph, you’ll need to read through some pretty dense, complex instructions. What that paragraph above is saying is that even after/if you do all the right stuff, even after you’ve taken your servers all down and brought them back up a few times (can you imagine the impact of that in the 24×7 six-sigma-uptime world we live in?), you STILL may have servers that will be unreachable or have to be removed from service. THAT’s why people still run AD forests with .corp and corp.com names, even when they know they shouldn’t — it’s HARD (bordering on impossible in some cases) to change them.
    And THAT’s why: a) I don’t mess with 3rd level names under corp.com, b) I would only do “pre-delegation” type tests on corp.com under really carefully instrumented and supervised circumstances, c) I think it’s really important that we do those tests on both corp.com (as a pilot) and .corp BEFORE it’s delegated into the root, and d) we seriously consider putting some strings like .corp on a reserved list if the results turn out as badly as some of us think. I’m happy to share the traffic from corp.com (for very short time intervals) for those tests, but I’m completely unqualified to design or conduct them. I’d love to tell you the top-10 subdomains that hit corp.com, but flicking on the wildcard saturates my 10mbit internet connection in about 10 minutes as the delegation propagates. So in addition to being incompetent to conduct that analysis, I don’t have the infrastructure or bandwidth to collect the data. If DNS-OARC wanted to take a crack at this, I’d die and go to heaven.
    You wonder how easy it would be for nefarious folks to do bad stuff? Here’s an example from the very early days of corp.com, back in around 1995. I didn’t know any better, so I wildcarded the MX record for email. Within a few days, I received email mis-addressed to controller@fortune50company.corp.com (should have been going to controller@corp.fortune50company.com). Attached to the email were the pre-filing drafts of SEC forms scheduled to go out in about 2 weeks. Once I realized what had happened I: notified the company and the SEC, erased the message, signed a nondisclosure agreement, and freekin’ dropped that MX record like a hot potato.
    Sure, it’s a stupid error, their mail was going to the wrong domain name fer cryin’ out loud. But delegating .corp is likely to have that effect. Email that today is CORRECTLY addressed to the internal address controller@acme.corp will tomorrow escape and head off to whoever registers acme.corp in the public DNS — which very likely could be a different Acme Corporation.
    By the way, I’ll continue to lobby for people to get off the Verisign hobby horse. This “alarmist prediction” is all developed in SAC45 — THREE years ago. It’s just that people haven’t done anything about it. That’s not Verisign’s fault — they’re just the bearer of bad tidings. Let’s get a move on, get some data, figure out how bad the problem is and figure out what to do about it.
    Finally, here’s a quote from Fadi at the Beijing Public Forum (page 135 of 141): “I want to finish with three key points. The first is, I want to remind everyone, we will not jeopardize the security and stability of the DNS for any reason. This is — this is rule number one.”

  7. Jordyn A. Buchanan says:

    I’ve been thinking about this a bit today, and don’t understand the need to delegate the domains in order to do the necessary analysis. Queries to the root for names within non-existent TLDs still carry the full FQDN in the request packet, and so while analysis so far has been limited to TLDs, it would be straightforward to look at the invalid queries hitting the root and not only understand which specific TLDs are potential problems, but also whether the invalid queries are clustered around certain SLDs as well.
    The corollary to this, of course, is that if there’s no traffic for a particular string hitting the roots now, there’s no need to do a pre-delegation test; we already know that the TLD’s nameservers won’t receive traffic through the simple act of delegation.
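    A back-of-the-envelope version of the analysis Jordyn describes, in Python; the input file name and format (one query name per line) are hypothetical stand-ins for a real root-server capture.

        # Cluster invalid root queries by TLD and by (SLD, TLD) pair --
        # the full QNAME is already present in each query, per Jordyn's point.
        from collections import Counter

        EXISTING_TLDS = {"com", "net", "org", "info", "biz", "uk"}  # abbreviated set

        tld_counts, sld_counts = Counter(), Counter()
        with open("root-queries.log") as log:        # hypothetical capture file
            for line in log:
                labels = line.strip().rstrip(".").lower().split(".")
                tld = labels[-1]
                if not tld or tld in EXISTING_TLDS:
                    continue                         # keep only invalid TLDs
                tld_counts[tld] += 1
                if len(labels) >= 2:
                    sld_counts[(labels[-2], tld)] += 1

        print(tld_counts.most_common(10))  # which strings get traffic at all
        print(sld_counts.most_common(10))  # is that traffic clustered by SLD?

    And, per the corollary, a string that never shows up in the counts would need no pre-delegation test at all.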

  8. Paul Stahura says:

    Mikey,
    1) If you are thinking of selling sub-domains to make money, you’d be better off without the .corp TLD as competition, not with it.
    You are comparing selling corp.com subdomains (with .corp in the root) to doing nothing with corp.com. In that case, I agree with you, the former is obviously better.
    2) if “pre-delegating” means getting it in the root, then I’m all for it. If it means taking it out of the root if there is any bad-stuff going on, that sounds good too, but we should apply that same rule to all TLDs, including the legacy ones (after all, there’s plenty of evidence that .com is full of trademark infringement and malware). Always interesting when the incumbents want to put more restrictions and delays on those wishing to enter the market, but not on themselves.
    3) Traffic stats on corp.com should be obtainable without entering a wildcard record or any subdomains under corp.com. What kind of NxD traffic are you getting? As I said before, for all we know all this traffic is going to one domain (which the .corp registry could easily block from registration). Problem solved – or a more accurate phrase would be “problem non-existent”. Or maybe you’ll see traffic going to fortune500trademark.corp.com, which only that TM holder would be able to register in .corp. Again, problem non-existent.
    4) Your example of bad stuff is ancient history. Browsers have changed much since then — for one, new browsers no longer insert “.com” on the end of the domain. For another, the wildcard is not allowed in .corp. The owner of donuts.com has a wildcard MX record, so I’m sure they are seeing mis-addressed donuts.co emails. Does that mean we should take the entire .co TLD out of the root? No. Same with donuts.info and others. Again no – those TLDs should be allowed to exist in the root, and people who send email should strive to type in the correct email address when sending mail.
    5) What Fadi said and what you said he said are two different things. I agree with what he actually said.
    Jordyn,
    Makes total sense to me. I agree with you.

    • Rubens Kuhl says:

      Although I strongly disagree with Verisign, and that will become clear when the content we are working on gets published, we need to accept there is a difference between the originally RFC-specified TLDs (com, net, org, gov, ccTLDs) and what came next. The pre-ICANN TLDs predated most of the software we use today, including most DNS resolver libraries of operating systems, so the only ones we could compare to are .biz, .info, .cat, .post, .aero, .travel. The fact that those were introduced without security issues can be seen both as evidence that new gTLDs don’t break things and as evidence that careful analysis can be useful in avoiding such issues.

  9. This text also references an earlier article that simplifies a bit the view I have as chair of SSAC.
    The certificate issue is not at all resolved.
    What has happened is that the CA/B Forum has agreed on a policy that, if implemented as planned, would make the issue not nearly as “scary” as it is now. And, given how certificates and SSL trust are calculated today, there is not much more that can be done without completely changing the technology we use.
    That change is being worked on by the Internet Engineering Task Force and others. It is called DANE, and it implies that, with the help of DNSSEC, key material stored in DNS (signed, of course) can be used to establish trust for the communication, instead of the current way of calculating trust, which is by passing around lists of CAs.
    But to call it “dealt with” is a bit too strong… SSAC is done, the CA/B Forum has made a policy change, now the proposed changes are to be implemented, and we need to speed up deployment of DNSSEC, DANE and other things.
    Evolution of the Internet never stops.

    • Rubens Kuhl says:

      It would be helpful to have the aggregate CA/B Forum numbers on internal certificates issued… they mentioned they were working on it when answering a question at the Beijing SSAC presentation — any news on that?

  10. Sorry about the delayed response to Paul’s post. That Internet thing broke for a few days here at the farm (www.aprairiehaven.com) and life has been focused on tractors and electrical wiring.
    Thanks for the numbers — makes it easy to organize a reply.
    1) I should clarify — I’m an accidental participant in the hurly burly of the domain-name marketplace. When I said “I probably make more money…” I meant that in a hypothetical case, not that I’ve actually got plans to do it. I registered those names back in the days before the web was a big deal and I did it because I thought they were cool (the way radio callsigns are cool — on 10-20 meters I’m KZ0C, and I touched WORT, KUSP, KDNA, WEFT and WAIF in one way or another — my favorite community TV callsign is KBDI, you have to say it out loud to “get” it).
    I’ve slept through a lot of ways to make money on domain names (drop-catching, domain-tasting, parking, aftermarket) mostly because I’m dumb, but also because most of those things don’t strike me as “cool!” so I wasn’t interested. I tried my hand at sub-domains under corp.com in the middle of last decade ’cause I thought that was cool. Exactly zero people agreed with me enough to buy one though, so I walked away from my massive $0.00 investment in that project. I think offering people sub-domains *only* to registrants of matching top-level strings so they can control their .com error traffic might be cool, but the Foo (www.bar.com) thinks I’m nuts. I’m willing to bet a couple more years of renewal fees ($20) on that however. We’ll see.
    I have to admit, I’m a little startled that when I talk to applicants and ask them “so, what’s going on at the .com domain that matches your string?” and they almost always go “gee, I hadn’t thought about that.” Interesting, given the O.co “leakage” problem where some gigantic percentage of the traffic went to O.com instead.
    2) I think what Verisign is proposing is something a bit short of “putting it in the root and then taking it out if bad things happen.” I think they want to “instrument” the root so’s to be able to analyze the traffic *before* the delegation is made, see if bad things are likely to happen, and provide some facts for a decision about what to do about these rascals before they go into the root. I see a narrowing of the disagreement here — you’re agreeing to a lot of what they’re saying, just not the timing and approach. This strikes me as a Good Thing.
    3) I think this idea of looking at NxD traffic is cool! I had a Doh!-head-slap moment when I read that. The Foo and I are going to instrument his domain (www.bar.com) that way and see what we can learn. One thing that is especially cool is that if I understand it correctly, this could be “passive” instrumentation that wouldn’t change the pattern of the traffic — so I wouldn’t have to kill the patient in order to understand what is causing the symptoms. I have to admit, the Foo and I have never run our own DNS before (when we were in business as an ISP, that function always went to smarter people than me), so this also goes on the “learn something new, as a hobby” pile. If we can get it going on bar.com, we’ll swing over to corp.com and see what’s cookin’. We might be able to generate some really useful data that way. Thanks.
    4) Yeah yeah, my email example WAS ancient history (I’m ancient, what can I say?). I wasn’t thinking of the “browser adds .com” breakage however (although I note that my phone has a “.com” button in its browser). I was talking more about people who run into the domain-name equivalent of the 800 number problem.
    What’s dat “800 number” problem? It’s the idea that as more and more toll-free extensions get added, it gets harder and harder to remember which one is the one you’ve been motivated to tell people about through some kind of marketing campaign. Was I supposed to rave about 888-FLOWERS or 877-FLOWERS or 866-FLOWERS or 855-FLOWERS?? Sheesh, I can’t remember. I’ll try 800-FLOWERS first and see where I get. So I’m talking about what people type in (and I think lots of people will add .com just out of habit), not what their browser puts in there for them. I think my example of controller@acme.corp email suddenly “escaping” into the wild is current, and still valid.
    5) Yeah, I screwed up that thing about Fadi. However, my sense is that he was responding to a series of questions that I had raised earlier in that session (Page 55 of the transcript). Here’s what I said “[what] about SAC 45 which was a report that was written in 2010 that talks about error strings at the root. We have a number of applications for TLDs that match those. I’ll stick with one example which is dot corp, and the question I have for you is, what’s the process by which ICANN decided to transfer that kind of risk to end users? Another question would be, don’t you think there’s some ethical considerations to that? There are some responsibilities to the community that we ought to address?”
    So you’re right — Fadi didn’t say that directly — my bad. But I think he was responding to my point (plus Bill Smith’s on P.73 and Amy Mushahwar’s on P.59) when he said that. And I’m glad you agree — another point of convergence in our points of view.
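    The “passive” instrumentation described in point 3 can be as simple as tallying query names on the authoritative server without ever changing an answer. A rough sketch with scapy, run on the box that serves the zone (the 60-second sample window is arbitrary):

        # Passively count DNS query names seen on UDP port 53 -- nothing is
        # answered or altered, so the traffic pattern stays undisturbed.
        from collections import Counter
        from scapy.all import sniff, DNSQR

        qnames = Counter()

        def record(pkt):
            if pkt.haslayer(DNSQR):
                qnames[pkt[DNSQR].qname.decode(errors="replace")] += 1

        sniff(filter="udp port 53", prn=record, timeout=60)  # 60-second sample
        print(qnames.most_common(10))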

  11. Paul Stahura says:

    This is a good exchange Mikey. Point by point:
    1) First, that leakage was not that big, and second, comparing .co and .com “leakage” is not the same as comparing .com and .info “leakage”, or .com and .co.uk, or .shoe and .loan, and on and on. Com is more like co than shoe is like loan. That is a huge difference. So using that (co/com) to imply all new TLDs will have “leakage” is incorrect. Also, your logic says .corp would get traffic because .corp names are now used on certificates and in private networks, and therefore traffic intended for those names will wind up on SLDs in the new .corp TLD. Traffic intended for old .corp private network names is good and new .corp TLD names are bad, you say. Now you say traffic intended for .co names “leaks” to the .com names. So by your logic wouldn’t .co names (the intended recipient of the traffic) be good and .com names (the unintended recipient of the “leaked” traffic) be bad?
    2) It’s obvious where Verisign’s interests lie – they didn’t get the price increase and now want to delay the introduction of TLDs highly competitive with .com, such as .corp, for as long as possible. You want .xyzzy in the root? No problem!
    Yes, one can tell which unregistered .corp SLD names get traffic by instrumenting the root, but unregistered .corp domains getting traffic does not mean bad stuff will happen if .corp is in the root. .Corp can reply with NxD for those SLDs just as well as the root can. Plus unregistered .com names get tons of traffic all day long now. But that does not mean we should not allow those unregistered .com names to be registered.
    Plus, one can set up a name server today and pound the root with .corp queries. The instrumentation will not be able to tell whether or not those queries came from good guys, real bad guys or someone trying to mess with the data collection instrumentation.
    3) Good – I’m looking forward to seeing your results. When will you have them? I’d like to know because the absence of any results should not hold up the introduction of competition any longer. The constant call for more studies, instrumentation and the like has been a great delay tactic used for many years.
    4) I get your point but 800 numbers are numbers. They have no meaning. Names are different. They have meaning and are memorable by humans. It’s not a strong comparison. Also see #1 above
    5) Maintaining stability and security – we agree. But you forget that the third leg of ICANN’s mission is the introduction of competition. I think we disagree on the magnitude of the impact of the introduction of .corp for bad stuff to occur. It is very minor, if it even exists at all. I am for no longer delaying the introduction of real competition to .com, which will bring with it real public benefits (lower prices and innovation). We’ve studied all these issues more than once over the last 15 years. Let me say that again. 15 years. Where was this problem when they lit up .co? .me? .xxx? Funny how it’s made into such a supposed big deal now, and by whom, isn’t it? Incumbents throughout history have always thrown up roadblocks to competitors due to their self-interest. AM radio tried to stop FM, the gas companies tried to stop the introduction of electricity (they said gas lighting was safer and bad stuff would happen with electricity!). It goes on and on. The potential (not even actual) very narrow harm that the introduction of .corp MAY produce is comparable to the harm caused by the introduction of .xyzzy or the registration of unregistered .com names that today get traffic – it’s so minor it’s very difficult to even measure. Yet the benefits from competition are 1 million times greater than the harm. Talk about a cost-benefit ratio in favor of .corp introduction. Root insertion for .corp should be sped up, for godsake. Let’s quit the delay for the benefit of Verisign shareholders; they’ve had a pretty good 15-year run.

  12. Juan says:

    Given .CO is being thrown into the conversation, I’d like to pitch in with my two cents:
    The stats on traffic leakage from an established TLD into a new TLD are laughable – simple scare tactics used by the brand protection bunch and those who have something to lose from the nTLD process and the growing credibility of alternatives like .CO and .ME.
    Test it, here’s how:
    Register a bunch of top Alexa .COM sites on .CO (warning: you may get busted for cybersquatting). Put a blank landing page on them, track uniques, and see what happens. You’ll be yawning soon.
    That said, there’s no doubt that if a large consumer brand tries to rebrand onto a new URL from one day to the next, it will fail miserably – regardless of the TLD chosen.

  13. Sheesh. This is starting to rival some of the threads on working-group lists. Ah, the good old days of Vertical Integration. 🙂
    1) This one’s about leakage — and I think we need to split this into 1a and 1b.
    1a) Leakage from currently-running internal networks to corp.com (and the upcoming SAC45 problem of unexpected breakage of internal networks when strings used for internal networks are delegated).
    I think the big deal on this kind of leakage is what the authors of SAC45 are concerned about — unexpected results, possibility for breakage, possibility of harm. If it’s nonexistent, all is fine. If it’s from a small number of places, that’s fine too. In the case of corp.com, I can let them know what’s going on. In the case of some other new-gTLD string, the registry (or the 2nd-level registrant) can too. The big problem comes if this is a really widespread problem across lots of organizations. What if the registry finds that they’re getting pounded with ibm.corp traffic? What will they do? I think we agree that it would be good to have more data on this — corp.com can serve as a kind of mini test-case for the .corp (and other error-string) stuff that’s coming. To summarize my view on 1a — unexpected breakage to stable internal name-spaces is bad. I think this is also where Verisign is coming from.
    1b) Leakage from people mistakenly adding .com to new-gTLD addresses when the new-gTLD delegates a 2nd level name. This is pure speculation on my part. I agree, there may be no parallel to the .co/.com example I gave. I’m less concerned about this 1b use-case from a security/stability standpoint because that’s not an unexpected change to a configuration that’s been working for a long time (like 1a is). Yes, it’s error traffic. Yes, there’s opportunity for bad things happening. But it’s not breaking an internal-network configuration that worked the day before the 2nd-level domain got delegated (like 1a is). My view on 1b — this is less bad, more of a business risk for new-gTLD applicants and their registrant customers — and the .com recipient of that error traffic will have the same ethical issues to deal with that I did back in 1995.
    2a) Verisign’s motives. I’m going to continue to push back on attributing bad motives to Verisign. This was the original reason I commented on Kevin’s piece (way up at the top of this long thread). Somebody recently threw out the fact that Verisign is the back-end operator for a couple hundred new gTLDs. I’m not tracking that closely enough to know, but if that 200 number is close to accurate Verisign is hurting themselves along with everybody else if they’re just trying to delay new gTLDs because they didn’t get their price increase. I don’t know, it just doesn’t add up for me.
    2b) I think your point about the traffic to unregistered SLDs in .com (today) and .SAC45-error-strings (tomorrow) is really pivotal. I think there’s a big difference between traffic to unregistered SLDs in an existing gTLD and that traffic to a NEW gTLD that is an exact match for a name that’s been traditionally used for internal name-spaces (and was identified in SAC45 as such a string). Again, to push back on the “Verisign is evil” meme, I think they are coming from the same place I am. It is a bad idea to delegate those kinds of strings without some good analysis to see how bad that risk is.
    2c) Pounding the target strings with bogus queries. Egad. What a diabolical idea. That’s why you don’t want me figuring out the instrumentation approach — I never would have anticipated something like that. Better minds than mine will have to figure out how to mitigate that one.
    3a) My NxD traffic results. Um… The Foo and I have been tinkering with lighting up DNS on one of my little servers. We’re not real good at it, so we’re slow. Happy to share results once we’ve got the plumbing working. We’re getting there, but TTL has not been our friend.
    3b) Studies as delaying tactics. I think the root server operators could probably do those studies fairly quickly — but only once they have permission. One of the questions I asked in Beijing was kind of getting at this — how does something like SAC45 move from just being advice from the SSAC to being actionable? If the GNSO takes it up, we know what the process looks like. But SAC45 falls in a grey area — not really GNSO-policy stuff. I think SAC45 just fell between the cracks. I agree, we lost about 3 years on that one. But I don’t think it was Verisign’s fault that we lost 3 years, and I don’t think they should have to bear the brunt of criticism that should really be directed elsewhere for that delay.
    4) Weakness of my 800-number argument. Yeah. Well. I agree. I’m mostly trying to get at the notion that “old habits die hard” and that it may be harder for people to remember which TLD to go to when there are a lot more options. I’ll get into this again in 5) below, but it’s a lot easier for people to remember to buy “tomato sauce” than it is to remember to buy “extra tangy not too hot chunky Italian style tomato sauce.” That’s why market-segmentation marketing is so tricky. Sometimes you win big, like when yogurt got segmented. But sometimes people just get confused. Time will tell. Granted, the argument was a little flimsy.
    5a) Security and stability — we agree. I’ll drink to that. 🙂
    5b) Introduction of competition. I have to admit I’m not sure that dramatic segmentation of the domain name market (see 4 above) is the same as introducing more competition. If what we’re about is market segmentation but across a really concentrated group of back-end providers, I’m not sure how much we’ve moved the needle on competition. Time will tell on that one — but I have to admit I’ve never quite followed the “increased competition” logic. And I’m *really* curious to see what pricing looks like. If prices drop a lot (anybody coming at us with $2/year domains?), I admit that my eyebrows will rise.
    5c) Incumbent (Verisign) throwing up roadblocks. Back to the nub of it. I think Verisign has a very complicated row to hoe here. I bet they wish somebody else had written the stuff that they did. But nobody did, and so they had a hard choice to make. On the one hand, they run the risk of being tarred with that “roadblocks” brush — on the other hand they, like all of us, are really committed to the security and stability of the Internet. They decided to proceed with a document which highlights a pretty interesting set of security and stability issues. I’m glad they did, and I hope we in the community can figure out ways that those issues can be understood and mitigated…. quickly. 🙂

  14. Paul Stahura says:

    Mikey,
    Yea, definitely like those Vertical Integration days… That was yet another mountain-out-of-a-molehill debate that delayed the program.
    Regarding your numbered items:
    1) “leakage”
    Juan, I agree with you on this one. Mikey and Verisign are saying new TLDs like .corp (which you applied for too) should not be entered in the root because some smidgen of traffic intended for private networks that use “.corp” *may* leak into new second-level names registered in the .corp TLD, AND those SLDs may get a certificate and spoof the website of the private network. The incidence of this behavior will be so low (and IS so low in the current .com environment) that their argument is just an attempt to either delay competition or outright ban it. There is a series of stuff that would have to happen in order for this CA issue to cause any problem at all, including the bad guy having to get on the local network (not easy, as that is the whole point of a local network – to keep outsiders out), take the contents of the website to be spoofed (again not easy on the local website), register the company’s name in the TLD (not easy, as it’s likely a trademark), and many more things, which ALL would have to happen. And ALL of which can happen today with unregistered .com names the same as unregistered .corp names (if .corp were ever entered into the root).
    Btw, as I pointed out, this can happen today with donuts.co and donuts.com. Traffic intended for donuts.co can be intercepted by the donuts.com owner. But does this mean we ban certain .com names from being registered? No, we don’t, because the incidence of bad stuff happening by allowing .com names to be registered is so low, and the relative benefit of allowing SLD .com names to be registered is high by comparison.
    2) “instrumentation”
    Mikey, interesting you’ve never thought of the pounding idea – but it’s why instrumentation won’t work in an environment where there is such huge incentive (see below for the magnitude of what is at stake) for those skilled in the DNS to game the instruments to their advantage.
    Bottom line, it’s not even about the amount of “error” traffic unregistered “.corp” names get – if you look at the amount of “error” traffic unregistered .com names get, I would bet it’s *much* higher for .com names (which should also be measured with the instrumentation, if the root operators even go down that path, so we can compare). It’s about the propensity for harm compared to the benefits.
    Compare the .corp cost/benefit ratio to the .com ratio.
    The .com ratio is smaller (more harm, due to these names getting more error traffic, and less benefit, as allowing these SLD .com names to be registered provides no competition to Verisign), yet we allow those unregistered .com names to be registered. The same thing (allow them to be registered) should happen for .corp – a TLD where the cost/benefit ratio is big – to do otherwise makes me think (and I bet I’m not the only one) there is some competitive shenanigans going on (not letting .corp SLDs be registered, but letting other SLDs be registered where the cost/benefit is worse). ICANN getting an anti-competitive lawsuit is more of a threat than it getting sued for allowing some .com (or .corp) name to be registered…. and THAT is the REAL threat to the stability of the internet.
    5) “competition”
    Mikey, I think we agree competition is the crux of the matter. It’s what this whole thing is about. .corp, .inc, .web, .home, and the others bring competition to .com, which is why Verisign does not want to see them entered into the root. It’s as simple as that. I also agree that, in your words, you “have not followed” the “increased competition” logic.
    First, do not look solely at the price of SLD registrations, as you are doing. Yes, those prices will decrease – they already have – see .PK for one example. But more importantly you need to look at the price of .com names in the secondary market. For that, you do not need to look further than your own corp.com. It will have less value in the aftermarket once .corp, .inc, .llc are in the root. Why? Because the supply is bigger (do not be fooled by the “infinite” supply in .com argument – that’s akin to saying there is a vast supply of water when really 1% of the earth’s water is potable – I’m talking about the supply of good names). homedesigns.com will have less value in the aftermarket when you are able to buy home.design or design.home for $50. And I purposely chose $50 in this example, because $50 is more than the price-regulated $8 Verisign is allowed to charge, just to illustrate that it’s not solely about *registry* prices: $50 is 1,000 *times* less than the aftermarket price for homedesigns.com. Talk about $50 being a *reduction*. The introduction of .home and .design and .corp and the hundreds of other new TLDs will lower prices for .com names to consumers (especially in the aftermarket), and these new TLDs will have prices lower than .com in the primary and secondary markets – no doubt about that.
    Second, back to the “rub of it” as you put it, and that is Verisign throwing up roadblocks. I agree that is the rub.
    I’ve heard the same argument you made above proffered by them (and others). I’ll repeat it here. There are two parts to it:
    Part 1 to their argument, which is the one you articulated, is:
    “Why would we (verisign) put up roadblocks to new TLDs, when we’ve applied for a number of TLDs ourselves and are doing the back-end for many others? This demonstrates we want new TLDs”
    Part 2, which is the other one I’ve heard from them, is:
    “We (Verisign) want competition because if we can show competition, we can get that price increase for .com names that we want. We love competition so we can raise prices, another demonstration that we want new TLDs”
    re Part 1:
    a) It’s a matter of comparing what they win and what they lose. If you added up the benefit Verisign receives by getting all their applied-for names in the root (crappy IDNs), plus the benefit they get by doing the back-end for the others (mostly brands, so no volume), it does not come close to the benefit they get from .com. These TLDs do not cannibalize .com, so Verisign is not opposed to them. Plus Verisign gets the side benefit of the illusion of competition. They are trying to protect their over $1,000,000,000 (one *billion* dollar) business. The back-ends are immaterial, except for the fact that they can hold them up as a smokescreen for their true intentions. Verisign could not care less about .xyzzy – it will produce no volume. Is it a coincidence that IDNs and non-contended names come out first? And that the names that will provide the most competition to .com, such as .web, .corp, .home, .inc etc, come out later (or not at all)? I think not.
    What Verisign fears most is the volume of names going to these new high-volume TLDs and not to .com. Forget about their so-called “transliteration” of .com apps (notice no one else applied for them? Why not? Because they will not cannibalize .com registrations) and their brand back-ends (which also will not cannibalize .com). That is small potatoes.
    To give you an idea of the magnitude of the numbers we are talking about – 1) just allowing the .com price to increase by 10% gives them more benefit than if they had applied for AND won ALL the new generic TLDs and 2) if these “head” new TLDs (.web and the like), together, take 10M new registrations from .com, you could see, what, a billion dollar impact on market cap? No wonder they come out so hard at the last minute with a FUD campaign and try to bring in the rest of the community.
    b) Yes, they applied for a few of what they describe as “transliterations” of .com. (Which, by the way, are not transliterations of .com. For example, the Chinese-character one, when transliterated, has the same sound as “dot dot com”. Not “com” or “dot com” but “dot dot com”. Why did they pick this “transliteration” (not really)? Because it means “click here now” in Chinese.) Having those IDN names in the root will not provide major competition to .com. Notice no one else applied for them? Why not? The great thing for Verisign is that they *seem* like they’d be competition (to, for example, policy makers), but they really won’t be. Verisign applied because they win either way: either 1) in the unlikely case they somehow turn into competition for .com, the Verisign application for them was an insurance policy — they can get them and put them on a shelf — or 2) they can point at them and say “see ICANN, we want new TLDs!”, and also point them out to the public markets and say “see investors, we are in the new TLD game”.
    re Part 2:
    Names are not the same. Each is different. Verisign wants ICANN to believe that xyzzy3.com is the same as designs.home, and that google.com (a registered name) is the same as google.corp (an unregistered name). They are not the same. Verisign wants the ICANN community/DoC to believe that they are the same because then they can say “well, if .corp gets no price control, why can’t we get no price control for .com?” and “.VIP gets to sell names for $50 each, why can’t .com be allowed to sell at that price?”. The bottom line is that Google, for example, is locked in to their .com name. Verisign could increase the price to a million dollars per year for that one SLD and Google, and at least a thousand other .com registrants, would pay it (I picked a small number like a thousand because it only takes 1,000 .com registrants paying this price to exceed Verisign’s current revenue for .com). One word: RENEWALS. That is why Verisign should not be able to raise prices. They conflate the two (new registrations and renewals) so that a “price increase” means a price increase on renewals too. Even new gTLDs are price-regulated on renewals. I’d like to see Verisign be allowed to raise prices on new registrations. I doubt market pressures at that time would allow them to charge more than $8 for the unregistered crappyzyzzy3.com, even if they were allowed. But that is not what they really want. They use this “pro new TLDs” argument a) in an attempt to be able to raise prices on renewals – which they will never be able to get anytime soon no matter how many new TLDs are in the root (because of that lock-in) and b) as a smokescreen for the real reason behind their “documenting” these “pretty interesting” stability and security issues.
    Therefore, I firmly believe that their “documenting” these “pretty interesting set of stability and security issues”, despite the bogus pro-new-TLD arguments they espouse, is really nothing but desperation on their part in trying to stop or delay competition, and that they intimately understand the loss in market cap that will result from high-volume competitive TLDs like .corp, .web, .home and the rest if/when they go into the root. It is not because they are altruistically concerned about the stability and security of the internet (maybe they are, as we all are, but that is not the reason).
    The real reason is protection of their business.

    • Kevin Murphy says:

      I’m loving this comment thread 🙂
      But I disagree with your description of the “leakage” problem for a few reasons, Paul.
      a) I don’t think there’s any particular reason the bad guy needs prior access to a LAN web site to cause trouble. Plenty of other applications use domains aside from web browsers. Bad guys could steal data in transit from those applications.
      b) The bad guy would not necessarily have to register the company’s name as a .corp domain, either.
      Acme Inc may well use acme.corp on its LAN, but it just as easily could use mail.corp or data.corp or foo.corp or mississippi.corp. Some second-level strings are probably more commonly used on LANs than others, but the number of strings is essentially limitless. I don’t think the TMCH is going to protect most of them.
      c) I don’t think the bad guy needs an SSL cert to cause trouble. Stolen plaintext traffic could be just as valuable as encrypted traffic. If the application assumes a LAN, it may not even bother with encryption.
      d) I don’t think .com and .co are particularly useful examples. If one of these TLDs didn’t already exist and had been applied for in the current round, there’s a significant chance it would not be approved, due to the high risk of string confusion. So it’s not correct to say that we don’t prevent certain domains from being registered on those grounds.
      The degree of risk in .corp will depend to a large extent on what the registration policy for .corp is. If you need to own a company of the same name then I assume it will drastically reduce the potential for wrongdoing and make it much easier to track down the wrongdoer.
      The critical point seems to be that Microsoft recommended using .corp on LANs for many many years, which has probably increased the target pool substantially.

  15. Adekunle Hassan says:

    This is quite a comment thread. Mikey and Paul have dominated the comments here, but one should note that the ALAC was pretty vocal in its recent letter to ICANN. It takes ICANN to task for ignoring the SSAC in deference to “vested interests,” for risking security and stability, and for abusing the public comment process, and it suggests that ICANN needs to slow down and address the security and stability issues raised. I wonder if someone from the ALAC leadership could weigh in on this as well.
    See the ALAC piece here. http://atlarge-lists.icann.org/pipermail/alac/attachments/20130611/2e639e17/CoverLetter-AL-ALAC-CO-0613-01-00-EN-0001.pdf

  16. Paul Stahura says:

    2nd try, as my formatting didn’t work out last time…
    Kevin
    I’m loving this comment thread, too 🙂
    But I disagree with your description of the “leakage” problem for a few reasons. For better readability, I’ve put your comments in line below, surrounded by “+”
    +
    a) I don’t think there’s any particular reason the bad guy needs prior access to a LAN web site to cause trouble. Plenty of other applications use domains aside from web browsers. Bad guys could steal data in transit from those applications.
    +
    Bad guys need access to the private network to get the contents of the website in order to spoof it – one of the bigger dangers talked about in all this. As to your example: they would still need to “spoof” the server side of those apps, and for that, while they don’t need the web content, they do need other stuff, such as the software for the server side of the app – so it’s still not easy.
    And anyway, this can happen right now with unregistered .com domains – maybe one that was previously registered, or whose SLD is registered in another TLD. Bad guys could register them, spoof the website (easier, as they don’t need to break into an updated private network), or put up an email server, etc. Same exact thing. And yet we are not talking about prohibiting *those* .com names from being registered. How often are these NxD-with-traffic .com names registered by someone who then does bad things? I would guess the same prevalence would apply in any other TLD that has unregistered subdomains with comparable traffic levels, be it .corp, or .info, or .co, or .ninja. I think it’s very, very rare in .com, which has tons of NxD traffic, let alone in any other TLD with far less NxD traffic.
    +
    b) The bad guy would not necessarily have to register the company’s name as a .corp domain, either.
    +
    They would if the company was using its company name as its private-network .corp name.
    If the company was not using its company name (say Acme Inc was using “widget.corp”, not “acme.corp”) for a server on its private network, then it’s true (as I said) that the bad guy could more easily register the non-trademarked name in .corp. But even that is not that easy, as Acme, in this example, could register “widget.corp” before the bad guy does, or the registry may have removed “widget.corp” from registration – in both cases, even though Acme did not use its own company name on its private network, the bad guy has been 100% thwarted from doing bad stuff.
    BTW, for users currently on certain ISPs (Verizon comes to mind), those ISPs will direct the traffic for “acme.corp” *today* right to their own servers. Are they “bad guys”? No, because they are not doing the spoofing, getting a cert for those names, etc. that they’d have to do to be bad guys. Just because some bad stuff *may happen* with this type of “error” traffic in no way means it *will happen*.
    +
    Acme Inc may well use acme.corp on its LAN, but it just as easily could use mail.corp or data.corp or foo.corp or mississippi.corp.
    +
    or acme.ninja (a TLD that will exist) or foo.info (a TLD that does exist) or acme.local (a TLD that will never exist)
    +
    Some second-level strings are probably more commonly used on LANs than others, but the number of strings is essentially limitless. I don’t think the TMCH is going to protect most of them.
    +
    I’m saying it’s probably more likely that if someone is using a private name on their network, they are using their company name, which is protectable by the TMCH. And even if it’s not in the TMCH, and they are really worried, they can still be the first to register it in the public-root DNS, before any bad guy does.
    And if they are using a non-TM name, then there are other, more classical ways to protect it; and if that’s not enough, then, just like on the current internet, if someone does bad stuff with “error” traffic that somehow goes to them, you use the law to go after them.
    +
    c) I don’t think the bad guy needs an SSL cert to cause trouble. Stolen plaintext traffic could be just as valuable as encrypted traffic. If the application assumes a LAN, it may not even bother with encryption.
    +
    True, you don’t need a cert as a bad guy, but it helps greatly – otherwise you cannot convincingly spoof the “real” place the traffic was intended for and elicit more of it. Just because SLD.corp gets error traffic does not mean harm has been caused, any more than if SLD.com got error traffic.
    +
    d) I don’t think .com and .co are particularly useful examples. If one of these TLDs didn’t already exist and had been applied for in the current round, there’s a significant chance that it would not be approved due to the high risk of string confusion. It’s not correct to say we don’t prevent certain domains from being registered in that regard.
    +
    I think they are very useful examples, because they illustrate that this issue is happening right now in .com. .com is the recipient of a *huge* amount of error traffic to unregistered names, and it’s not because .co exists and is (or isn’t) similar to .com.
    +
    The degree of risk in .corp will depend to a large extent on what the registration policy for .corp is. If you need to own a company of the same name then I assume it will drastically reduce the potential for wrongdoing and make it much easier to track down the wrongdoer.
    +
    Not if you stupidly used “widget.corp” – Widget Co can get the name you used on your private network.
    If .corp had an open registration policy, all you’d need to do is see whether the name you are using on your private network exactly matches a name that has been registered in .corp, but not registered by yourself.
    If it has, you can go to the public-network website and see if it looks like your private one. If it does, you know bad stuff is happening. (A minimal sketch of such a check follows.)
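    For illustration only, here is what that check could look like: a minimal sketch (not anything Verisign or ICANN has specified), assuming the dnspython library, querying a public resolver (8.8.8.8) so the lookup bypasses your own internal name server, and using the hypothetical private name “widget.corp”.

        import dns.exception
        import dns.resolver

        PUBLIC_RESOLVER = "8.8.8.8"    # public resolver, so the internal name server can't answer
        PRIVATE_NAME = "widget.corp"   # hypothetical name used on the private network

        def registered_publicly(name: str) -> bool:
            """Return True if the public DNS now answers for this name."""
            resolver = dns.resolver.Resolver(configure=False)
            resolver.nameservers = [PUBLIC_RESOLVER]
            try:
                resolver.resolve(name, "A")
                return True                  # someone is answering publicly; check who
            except dns.resolver.NXDOMAIN:
                return False                 # not registered in the public DNS (yet)
            except dns.exception.DNSException:
                return False                 # timeout or other failure; inconclusive

        if registered_publicly(PRIVATE_NAME):
            print(PRIVATE_NAME, "now resolves publicly; compare it against your private site")

    The vantage point matters: if the query went through the corporate resolver instead, it would always see the internal records and the check would tell you nothing.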
    +
    The critical point seems to be that Microsoft recommended using .corp on LANs for many many years, which has probably increased the target pool substantially.
    +
    I don’t believe Microsoft recommends using .corp any longer in its current documentation. I did a quick look at its Active Directory docs, and they do not recommend .corp or use it as an example. But if they still do somewhere, they need to change this dumb recommendation.
    Microsoft’s old browser appended “.com” to essentially unregistered names, which still has a negative security effect on the internet (that old browser is probably the source of Mikey’s corp.com traffic). By the way, there is a fix people can apply today for those using names – any name, not just .corp – on a private network: upgrade your employees’ browsers. That appending behavior turned out to be a bad thing, too, which Microsoft subsequently changed.
    If people want to have a local name, they really need to use “.local”. That is what it is for. ICANN has reserved that name (it’s banned by the AGB) for just this purpose – it will never be a TLD in the public root. This “.local” name, and its purpose, has been “out there” for many years – Microsoft (and all of us) should recommend the use of that TLD for private network names.
    And also, consider organizations using the “widget.corp” name on their private network who, for whatever reason, can’t change that name to “widget.corp.local”, “widgetcorp.local”, “widget.local” or “acme.local”; who can’t take the slim risk that bad stuff will happen if “widget.corp” somehow becomes registered to a bad guy; and who, for whatever reason, don’t register “widget.corp” for themselves. For them, Microsoft (and the rest of us) should recommend setting up the client computers of the employees or other users of that local name – the people likely to have traffic go somewhere they did not intend if they inadvertently use the name on the public network after it’s registered – so that when those computers are on the public internet, they use/check the private name server for “widget.corp” first. That way, no matter what the public DNS says for “widget.corp”, those users will ALWAYS have “widget.corp” go to their local server – the place they intended – even when they are on the public network. (A sketch of this idea follows.)
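    To make that last recommendation concrete, here is a minimal sketch of the idea in code. The internal server address (10.0.0.53) and the “widget.corp” suffix are hypothetical, and a real deployment would do this in the operating system or corporate resolver configuration rather than in application code; the sketch again assumes dnspython.

        import dns.exception
        import dns.resolver

        INTERNAL_NS = "10.0.0.53"       # hypothetical private name server
        PRIVATE_SUFFIX = "widget.corp"  # name used on the private network

        def resolve(name: str) -> list[str]:
            """Resolve a name, always preferring the internal server for the private suffix."""
            if name == PRIVATE_SUFFIX or name.endswith("." + PRIVATE_SUFFIX):
                internal = dns.resolver.Resolver(configure=False)
                internal.nameservers = [INTERNAL_NS]
                # For the private name, never fall back to the public DNS.
                # The whole point is that the public answer must not win.
                return [r.to_text() for r in internal.resolve(name, "A")]
            # Every other name resolves normally via the system resolver.
            return [r.to_text() for r in dns.resolver.resolve(name, "A")]

        print(resolve("widget.corp"))   # always answered by the internal server

    This way the roaming laptop gets the intended private answer for “widget.corp” no matter what the public root says, while every other name resolves as usual.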

    • Rubens Kuhl says:

      From RFC 6762: “Using ‘.local’ as a private top-level domain conflicts with Multicast DNS and may cause problems for users.” And the names the RFC quotes as used to overcome such possible issues include .home and .corp. Although RFC 6762 has a publication date of February 2013, the I-D it was based on started mentioning .corp and .home in 2003, so Microsoft could reference that I-D, now an RFC, as the reason its documentation mentioned .corp.

  17. Mikey O'Connor says:

    I think we’ve just about reached the point where we can agree to disagree. This thread is starting to escalate through the stages of argument that go like this:
    – If at first you disagree with me, I figure you’re just ill-informed, and I lay a bunch of facts on you to bring you around.
    – If you still disagree, even after all those facts have been laid on you, I figure you’re stupid, and I repeat my argument really slowly and really loudly to try to get it through.
    – If you still disagree, I figure you’re evil and try to discredit you.
    I think we’re pretty much done with stage 1 and heading into stage 2, so I’ll leave off.
    I would like to stand up for the work of the VI working group. 10-15 people worked really hard on a really difficult problem, did a great job under adverse circumstances and a really aggressive timeline, and would probably be just as annoyed as I am to have that work dismissed as merely part of the Grand Conspiracy To Delay New gTLDs.
    My other concluding point is that Danny McPherson (Verisign’s VP of Security) has started doing a series of CircleID posts about this stuff that y’all should know about. Here’s the link to his thoughts on this topic.
    http://www.circleid.com/posts/20130628_name_collisions_why_every_enterprise_should_care_part_3_of_5/
