Controlled interruption as a means to prevent name collisions [Guest Post]

Jeff Schmidt, January 8, 2014, 12:23:45 (UTC), Domain Tech

This is a guest post written by Jeff Schmidt, CEO of JAS Global Advisors LLC. JAS is currently authoring a “Name Collision Occurrence Management Framework” for the new gTLD program under contract with ICANN.
One of JAS’ commitments during this process was to “float” ideas and solicit feedback. This set of thoughts poses an alternative to the “trial delegation” proposals in SAC062. The idea springs from past DNS-related experiences and has an effect we have named “controlled interruption.”
Learning from the Expired Registration Recovery Policy
Many are familiar with the infamous Microsoft Hotmail domain expiration in 1999. In short, a Microsoft registration for passport.com (Microsoft’s then-unified identity service) expired Christmas Eve 1999, denying millions of users access to the Hotmail email service (and several other Microsoft services) for roughly 20 hours.
Fortunately, a well-intended technology consultant recognized the problem and renewed the registration on Microsoft’s behalf, yielding a nice “thank you” from Microsoft and Network Solutions. Had a bad actor realized the situation, the outcome could have been far different.
The Microsoft Hotmail case and others like it led to the current Expired Registration Recovery Policy.
More recently, Regions Bank made news when its domains expired, and countless others go unreported. In the case of Regions Bank, the Expired Registration Recovery Policy seemed to work exactly as intended – the interruption inspired immediate action and the problem was solved, resulting in only a bit of embarrassment.
Importantly, there was no opportunity for malicious activity.
For the most part, the Expired Registration Recovery Policy is effective at preventing the unintended loss of registrations. Why? We call it the application of “controlled interruption.”
The Expired Registration Recovery Policy calls for extensive notification before the expiration, then a period when “the existing DNS resolution path specified by the Registrant at Expiration (“RAE”) must be interrupted” – as a last-ditch effort to inspire the registrant to take action.
Nothing inspires urgent action more effectively than service interruption.
But critically, in the case of the Expired Registration Recovery Policy, the interruption is immediately corrected if the registrant takes the required action — renewing the registration.
It’s nothing more than another notification attempt – just a more aggressive round after all of the passive notifications failed. In the case of a registration in active use, the interruption will be recognized immediately, inspiring urgent action. Problem solved.
What does this have to do with collisions?
A Trial Delegation Implementing Controlled Interruption
There has been a lot of talk about various “trial delegations” as a technical mechanism to gather additional data regarding collisions and/or attempt to notify offending parties and provide self-help information. SAC062 touched on the technical models for trial delegations and the related issues.
Ideally, the approach should achieve these objectives:

  • Notifies systems administrators of possible improper use of the global DNS;
  • Protects these systems from malicious actors during a “cure period”;
  • Doesn’t direct potentially sensitive traffic to Registries, Registrars, or other third parties;
  • Inspires urgent remediation action; and
  • Is easy to implement and deterministic for all parties.

Like unintended expirations, collisions are largely a notification problem. The offending system administrator must be notified and take action to preserve the security and stability of their system.
One approach to consider as an alternative trial delegation concept would be an application of controlled interruption to help solve this notification problem. The approach draws on the effectiveness of the Expired Registration Recovery Policy, with an implementation resembling a modified “Application and Service Testing and Notification (Type II)” trial delegation as proposed in SAC062.
But instead of responding with pointers to application layer listeners, the authoritative nameserver would respond with an address inside 127/8 — the range reserved for localhost. This approach could be applied to A queries directly and MX queries via an intermediary A record (the vast majority of collision behavior observed in DITL data stems from A and MX queries).
Responding with an address inside 127/8 will likely break any application depending on an NXDOMAIN or some other response, but importantly it also prevents traffic from leaving the requestor’s network and blocks a malicious actor’s ability to intervene.
In the same way as the Expired Registration Recovery Policy calls for “the existing DNS resolution path specified by the RAE [to] be interrupted”, responding with localhost will hopefully inspire immediate action by the offending party while not exposing them to new malicious activity.
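To make the mechanics concrete, here is a minimal sketch (ours, purely for illustration; the hostname is hypothetical) of how a client could check whether a name is being answered from inside 127/8:

```python
import ipaddress
import socket

LOOPBACK = ipaddress.ip_network("127.0.0.0/8")

def check_name(hostname):
    """Resolve a name and report whether the answer falls inside 127/8."""
    try:
        addr = ipaddress.ip_address(socket.gethostbyname(hostname))
    except socket.gaierror:
        print(f"{hostname}: NXDOMAIN or other resolution failure")
        return
    if addr in LOOPBACK:
        print(f"{hostname}: loopback answer {addr}; traffic never leaves this host")
    else:
        print(f"{hostname}: resolves normally to {addr}")

# A hypothetical internal name that may collide with a new gTLD:
check_name("intranet.corp")
```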
If legacy/unintended use of a DNS name is present, one could think of controlled interruption as a “buffer” prior to use by a legitimate new registrant. This is similar to the CA Revocation Period as proposed in the New gTLD Collision Occurrence Management Plan which “buffers” the legacy use of certificates in internal namespaces from new use in the global DNS. Like the CA Revocation Period approach, a set period of controlled interruption is deterministic for all parties.
Moreover, instead of using the typical 127.0.0.1 address for localhost, we could use a “flag” IP like 127.0.53.53.
Why? While troubleshooting the problem, the administrator will likely at some point notice the strange IP address and search the Internet for assistance. Making it known that new TLDs may behave in this fashion and publicizing the “flag” IP (along with self-help materials) may help administrators isolate the problem more quickly than just using the common 127.0.0.1.
We could also suggest that systems administrators proactively search their logs for this flag IP as a possible indicator of problems.
Why the repeated 53? Staying within 127.0/16 seems prudent to ensure the IP is treated as localhost by a wide range of systems; the repeated 53 will hopefully draw attention to the IP and provide another hint that the issue is DNS-related.
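Such a proactive search could be as simple as the following sketch (log locations and formats vary widely; the usage shown is only an example):

```python
import sys

FLAG_IP = "127.0.53.53"

def scan_log(path):
    """Print any log lines mentioning the controlled-interruption flag IP."""
    with open(path, errors="replace") as log:
        for lineno, line in enumerate(log, 1):
            if FLAG_IP in line:
                print(f"{path}:{lineno}: {line.rstrip()}")

# Usage: python scan_flag_ip.py /var/log/syslog /var/log/mail.log
for logfile in sys.argv[1:]:
    scan_log(logfile)
```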
Two controlled interruption periods could even be used — one phase returning 127.0.53.53 for some period of time, and a second slightly more aggressive phase returning 127.0.0.1. Such an approach may cover more failure modes of a wide variety of requestors while still providing helpful hints for troubleshooting.
A period of controlled interruption could be implemented before individual registrations are activated, or for an entire TLD zone using a wildcard. In the case of the latter, this could occur simultaneously with the CA Revocation Period as described in the New gTLD Collision Occurrence Management Plan.
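Concretely, a TLD-wide wildcard would amount to only a few records. Here is a minimal sketch of generating them (the TLD, TTL, and intermediary MX hostname are all hypothetical; MX queries are answered via an intermediary A record, as noted above):

```python
FLAG_IP = "127.0.53.53"

def controlled_interruption_records(tld, ttl=3600):
    """Emit wildcard A and MX records covering every second-level name."""
    mx_host = f"collision-notice.{tld}."  # hypothetical intermediary name
    return [
        f"*.{tld}. {ttl} IN A {FLAG_IP}",
        f"*.{tld}. {ttl} IN MX 10 {mx_host}",
        f"{mx_host} {ttl} IN A {FLAG_IP}",  # the intermediary A record
    ]

# A short TTL would keep any rollback quick; "example" is a stand-in TLD.
print("\n".join(controlled_interruption_records("example")))
```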
The ability to “schedule” the controlled interruption would further mitigate possible effects.
One concern in dealing with collisions is the reality that a potentially harmful collision may not be identified until months or years after a TLD goes live — when a particular second level string is registered.
A key advantage to applying controlled interruption to all second level strings in a given TLD in advance and at once via wildcard is that most failure modes will be identified during a scheduled time and before a registration takes place.
This has many positive features, including easier troubleshooting and the ability to execute a far less intrusive rollback if a problem does occur. From a practical perspective, avoiding a complex string-by-string approach is also valuable.
If there were to be a catastrophic impact, a rollback could be implemented relatively quickly, easily, and with low risk while the impacted parties worked on a long-term solution. Because no new registrant or associated dependencies would yet exist, a rollback at this stage adds little complexity.
Request for Feedback
As stated above, one of JAS’ commitments during this process was to “float” ideas and solicit feedback early in the process. Please consider these questions:

  • What unintended consequences may surface if localhost IPs are served in this fashion?
  • Will serving localhost IPs cause the kind of visibility required to inspire action?
  • What are the pros and cons of a “TLD-at-once” wildcard approach running simultaneously with the CA Revocation Period?
  • Is there a better IP (or set of IPs) to use?
  • Should the controlled interruption plan described here be included as part of the mitigation plan? Why or why not?
  • To what extent would this methodology effectively address the perceived problem?
  • Other feedback?

We anxiously await your feedback — in comments to this blog, on the DNS-OARC Collisions list, or directly. Thank you and Happy New Year!


If you find this post or this blog useful or interesting, please support Domain Incite, the independent source of news, analysis and opinion for the domain name industry and ICANN community.


Comments (38)

  1. M says:

What is the expected decision-making time on this name collision issue?
    The reason there are “collisions” for strings like .机构 or .คอม (.org in Chinese and .com in Thai) is that those strings are already in the root for 14 years with the ASCII TLDs (e.g. 例子.org and ตัวอย่าง.com).

    • David Conrad says:

      Sorry, not sure I understand: what do you mean “.机构 or .คอม […] are already in the root for 14 years”?

      • Kevin Murphy says:

        I think he’s referring to the fact that Verisign is going to grandfather old IDN.com registrants into its IDN transliterations of .com, etc.

  2. Drewbert says:

I say it yet again: you’re still assuming that all the “error” NXDOMAIN traffic appearing at the root servers represents collisions between public and private networks.
    This is patently false. Blocking non-collision traffic via blocklists or this “controlled interruption” idea is seriously damaging new gTLD launches, especially the .com/net/org IDN transliterations.

  3. Rob says:

    It is a very interesting idea and should be effective.
I would think you would want to add one more element to it, which is a variable of how often the specific strings are looked up. That variable could be tied to the length of the 2 phases you mention (.53 and .1).
    While it is great to put a given domain in the DNS and point it at 127/8, if that domain is never looked up in a given period, then there is no need to proceed to the next phase. On the other hand, if a given domain is instantly and constantly pounded, perhaps that period should be extended.
    You could start with fixed length periods and adjust based on actual activity rather than speculated activity.
    This would give the Registry incentive to actually monitor the activity and proactively reach out to sys admins causing massive traffic to their DNS.
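    A sketch of how such an adjustment might look (thresholds and periods invented purely for illustration):

    ```python
    BASE_DAYS = 30               # fixed-length starting period (illustrative)
    QUIET_THRESHOLD = 10         # so few lookups that no next phase is needed
    POUNDING_THRESHOLD = 10_000  # constant pounding: extend the phase

    def next_phase_days(query_count):
        """Tie the length of the next interruption phase to observed lookups."""
        if query_count < QUIET_THRESHOLD:
            return 0              # never looked up: skip the next phase
        if query_count > POUNDING_THRESHOLD:
            return BASE_DAYS * 2  # heavily queried: extend the period
        return BASE_DAYS          # otherwise keep the fixed length

    print(next_phase_days(25_000))  # prints 60
    ```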

    • Drewbert says:

      Sorry, Rob, but you’re making the assumption that all traffic is error traffic, and that something that is causing a pounding is inherently bad.
There are plenty of circumstances where a constant “pounding” is actually internet users expecting a hostname to resolve. Take яндекс.онлайн (xn--d1acpjx3f.xn--80asehdb), the transliteration of yandex.online: Yandex is the largest search engine in Russia and it is NATURAL for people to try to reach it in the Russian .онлайн gTLD. Your suggestion would have Yandex’s traffic permanently censored just because they’re popular.
      That’s just NUTS!

      • Bob Harold says:

If I type that URL in my browser, I get “Server not found”. If it resolves to 127.0.53.53, I would get “Unable to connect”. That might confuse some people, but in either case I expect that they will search and eventually find the correct URL. I don’t think that is “permanently censor”ing them. Is there a case where it would cause problems for users typing a *correct* URL? I am not sure what combination of URL, DNS search list, and ndots would cause a problem. An example would certainly help.

  4. Regarding the Hotmail story, I want to quickly note that you seem to indicate that at that time there wasn’t anything like a “redemption period” yet and Microsoft’s domain name was already up for grabs. That’s not really the case (at least not to my recollection). The domain name had expired, but with Network Solutions it was possible for anybody to pay for somebody else’s domain name (without actually gaining control over it).

  5. Kevin Murphy says:

    I don’t know why 53.53 was selected, and IP addressing is not my specialty, but I wonder if a number that looks a bit more nefarious or naughty — or at least more deliberate — might not inspire a bit more googling by sysadmins.
    I’m thinking something like 6.6.6 or 69.69.

    • Jeff Schmidt says:

      DNS uses TCP/UDP port 53; we were hoping to inspire some connection to DNS with that number. 🙂

      • Rubens Kuhl says:

We could use 2 random numbers and let people draw connections from those numbers all the way to Illuminati theories… it would have the same property of having a signature, with the added fun of all the theories that would try to explain something with no meaning at all.

      • Kevin Murphy says:

        Ah. Yes, that’s much better.

  6. Mark Andrews says:

I doubt this will help much. It will catch the use of partially qualified names and some additional leakage.
    It won’t help with sites that properly contain their local name queries. Those sites will get a rude awakening one day when they try to communicate with someone using the same TLD as they are.
    NXDOMAIN doesn’t move you onto the next server. It only moves you to the next search element.

    • Jeff Schmidt says:

> It won’t help with sites that properly contain
      > their local name queries.
      Of course. But the risk isn’t there. The risk is the folks that don’t contain their local queries, yes?
      > Those sites will get a rude awakening one day
      > when they try to communicate with someone
      > using the same TLD as they are.
      This class of problem seems much lower risk. For example, two networks using colliding RFC1918 IP space that suddenly are connected with a VPN have the same problem. We see that all the time.
      The leakage to the public Internet DNS is where the risk is and I think this approach helps there, agree?

  7. Rubens Kuhl says:

While I think about the risk management side of the proposal, one thing that comes to mind is that wildcarding and DNSSEC together can require unusual DNS architectures. So wildcarding could come with a waiver of the agreement’s DNSSEC requirement while it’s in place, and it would only be possible to apply before any other delegations are made.

  8. Now I know, my name is Mikey. And Mikey hates everything…
    https://www.youtube.com/watch?v=vYEXzx-TINc
    But… I’m thinking this holds promise. Here are some wild and crazy ideas to add to the pile.
    – what if we did this for ALL the new gTLDs at the same time? Why? Because that way we could alert the worldwide network operator community to take part in this test at 0000 UTC on . I think there are advantages to that over sprinkling these failures out over the random days that registries enter the root across the next few years.
    – what if we did it in steadily-increasing time-increments. Rob’s right — we’d have to pay attention to TTL to make sure that we actually triggered an event. 10 minutes is too short if the going-in TTL is 60 minutes, for example. Maybe we could structure the tests to fire off once a week, each time getting a little longer? With clear procedures for network-operators to shout “help! block this second-level name, I’m dying out here.”
- I like the 53.53 string — that port-number thing jumped right out at me. A quick search shows a delightfully small number of hits. I was sort of hoping to see this article come up, but nothing yet. Another approach would be to do the Spaceballs 127.123.456.789 combination. Or something equally eye-popping. This would be all about getting sys-administrators’ attention. At first blush, 53.53 seems really good.
    – what if we left the 53.53 wildcard in place after the TLD is in the root? One thing that appeals to me is a wicked-quick emergency mitigation for sys-admins that experience terrible trouble — set the IP address of an offending resource to that same 53.53 IP address. I’m especially thinking about the poor Windows Active Directory user who faces a huge challenge renaming their AD forest. Maybe the quick and dirty fix is for them to renumber their primary AD server to 127.0.53.53 while they go through the pain and agony of reNAMING their AD environment? Sure, maybe some of them never actually rename, but maybe that doesn’t matter if the 53.53 wildcard stays in place forever? I’m crying out to Rubens here — I don’t know enough about DNSSEC to know whether this would work…
    I like the concept of “notification” that’s built into this idea. Notification is a big problem for ISPs who sit between the registry and the network operator. This proposal nicely sidesteps that, especially if it’s coupled with an effective “heads up” campaign beforehand. Especially if that campaign has good SEO so that when people type that 53.53 address into search they get dropped off at a place that gives them all kinds of explanations and mitigation options.
    Sure, Mikey hates everything. But maybe not this time…

    • Rubens Kuhl says:

Mike, you can use http://0skar.cz/dns/en/ in a sample of networks to see whether wildcarding with DNSSEC would work or not. The resolver at the company I work for, a well-known TLD registry, fails all but #5… you can imagine what happens out there in the wild. It could be an issue with the test, but it exists because of the initial lack of wildcard support in DNSSEC implementations, followed by implementations lacking interoperability, something that was only recently corrected, leaving a good number of old validators out there.

  9. Ooops. Memo to self, don’t post comments late at night.
    Belay that last idea… Leaving the wildcard up all the time would, um, not be such a great idea. Glad I got here before somebody flamed me. 🙂

  10. The engineers among us will know what I mean when I say we thrive on fixing broken things. We love a great technical puzzle. Those of us in the trenches know that no matter how well engineered, our systems break on a regular basis, and each time is a new surprise, a new challenge that makes us love our jobs. Most of the time when something breaks, we take it in stride, perform the analysis and ideally fix the issue before it’s even noticed, or at least minimally noticed.
    Plan as we might, we are a reactive bunch. We take great steps to apply our vast experience of previously learned problems to our beneficiaries, but recipes are added to our cookbooks regularly. This is exactly why I like this plan. Those of us that actually FIX problems need to know that there IS a problem.
    The beauty of the idea has already been talked about in the comments above, but I can’t help but reiterate and reinforce my support. The selected IP address is brilliant. It’s clever because IT professionals will identify this IP as an unusual one, just as I did. They will immediately be alerted to something anomalous and will search the Internet for information regarding that IP, and with this solution it’s 100% certain that the traffic will not leave the local network.
    We recommend NOT having a two-phase approach. Don’t change the IP at some point. In my opinion, if anything will break, it’s going to break with the proposed IP address. Using a single IP address focuses attention on a single search query for more information, most probably landing professionals on an ICANN-hosted information page. One IP means that, when professionals are diagnosing the problem, their search will yield much more concentrated and relevant information (i.e., from ICANN, registry, JAS, serverfault.com sites, etc.). Although I understand the thought and idea behind two IPs, in practical terms I think any benefit from a two-step method is negligible versus the benefits of staying with a single IP. Currently, I’m seeing only 95 hits for a Google search of 127.0.53.53. This means that the density of help will be incredibly high and any diagnostic messaging will be above the fold.
It crossed my mind that there may be some minor benefit to including a TXT record containing a string indicating what’s going on (a rough sketch follows at the end of this comment). It may provide a time-saving hint to engineers.
    What is particularly advantageous about this plan is that the new gTLD program and ICANN are providing a tremendous benefit to the Internet community by helping engineers identify and fix misconfigurations which may have been the source of many strange symptoms that, until now, were not easy to track down. The beneficial stability and security implications are widespread.
    Using this solution also helps contribute to a mitigation plan for TLDs that have chosen the alternate path to delegation. It seems simple enough that, rather than a wildcard for those TLDs, registries can simply add appropriate resource records for the labels in the ICANN DITL NXD list to the TLD’s zone with the 127.0.53.53 address.
One of the next logical steps is to identify a rollout schedule. As a datapoint, the Expired Registration Recovery Policy intentionally “breaks” the registrant’s website to get the attention of the registrant, and it mandates a disruption that can last as little as eight days. Just pointing out that it doesn’t take much time to get the attention necessary when a system needs to be fixed.
    We would recommend a staggered rollout rather than an all-TLDs-at-once approach. This could be accomplished during the period between the normal IANA delegation process and GA for those TLDs not already in GA; for those already in GA, it could run for some reasonable finite period of time, such as 30 days. IANA could ensure that the wildcards are properly in place during their “tech-check” of the registry DNS servers.
    Running before GA also nicely coincides with the CA revocation period. This will allow news to spread rapidly, articles to be written, and potentially some engineers to fix any issues, if they exist, before delegation happens in the first place. In any case, there must be a reasonable and finite time period that allows sufficient time for intermittent systems to encounter the IP, and after that the TLD can operate normally.
    Briefly, as a last point, I believe ICANN should waive the DNSSEC and wildcard restriction during this period. I can’t see a problem with this, as the purpose of both of those is to protect the Internet community and by waiving these for the controlled interruption period, ICANN is continuing to do so (because, since it’s pre-GA, no registrations have been made in the vast majority of new TLDs).
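    A sketch of what that TXT hint might look like (record wording, hostname, and TTL are invented purely for illustration):

    ```python
    TLD = "example"  # hypothetical new gTLD
    HINT = ("Name collision controlled interruption: this name now exists "
            "in the global DNS; search for 127.0.53.53 for guidance")

    # A wildcard TXT record could sit alongside the wildcard A record:
    print(f'*.{TLD}. 3600 IN TXT "{HINT}"')
    ```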

  11. Hi Jeff..
If we are going to accept the premise that it is OK to return something other than NXDOMAIN as part of the process of mitigating name collisions, then why don’t we make the response useful?
    Both your proposal to use 127.0.x.x and others that use RFC1918 addresses make the assumption that somebody will figure out why stuff broke. Take the example of http://intranet.bank (which is probably just http://intranet to the user) suddenly breaking. Some poor receptionist is going to call their IT department, which will confirm it’s broken and likely have to call some network engineer to figure out why their DNS servers were doing external lookups before internal ones. Wouldn’t it be better to get a clear “your DNS appears to be broken, here’s why, forward this to your IT team” message?
    If an automated or unattended system is going to break, it’s likely going to do so regardless of which unexpected IP it gets.
    Using a valid IP has many benefits. Port 80 requests end up on informational “this is why you are here” websites with explanations of the hows and the whys, contacts, etc. SMTP servers could be configured to return 5xx errors with decent messaging. Services that aren’t configured in this fashion will fail, regardless of it being an internal, external or localhost IP.
    My suggestion is that ICANN provide the IPs and host the informational infrastructure for this. That should address most of the issues with leakage and concerns about abuse. And it creates one central point for all the info.
I’m very much against wildcarding. While it makes the setup trivially easy, it introduces a bunch of extra risks by returning non-NXDOMAIN responses for queries to domains that simply would never exist, regardless of delegation (SRV “_” names, etc.). We could seed the DNS with the lists provided by ICANN’s “alternate path to delegation” and possibly update periodically from NXDOMAIN responses.
    If this approach is adopted, it should be implemented at delegation and run for at least 60 days or until the end of Sunrise. Up until that point, there are no SLDs (other than nic.) in the zone, and if something is seriously broken, it’s less difficult to address than having to yank newly active domains.

    • Rubens Kuhl says:

      While I’m not a fan of wildcarding either, it could be possible to do wildcards of only A, AAAA and MX and not SRV or other RR types.

    • Jeff Schmidt says:

      Hi, Wayne:
      Good points. One of the things we’ve done as a part of our work is consult with attorneys (grumble, grumble). They have expressed a concern about “soliciting” traffic (by returning an A record to an external IP) not knowing what you’ll get in return. Security researchers have long shown that HTTP query strings often contain surprising amounts of sensitive information, including usernames and passwords. Problems with other protocols like SMTP are similar. By sending a public/routable IP, we’ve now “caused” this traffic to exit the host/LAN/enterprise. Multiple jurisdictions make this even more complex. One of the nice features of this idea is that no traffic (and thus potentially sensitive information) should ever leave the host – certainly not the network. It’s a more conservative approach we believe. (I am not an attorney, JAS is not a law firm, this is not legal advice, etc)

  12. Drewbert says:

    Jeff,
    What are your plans for the traffic that isn’t traffic leaking from private networks? You have been silent on that.

    • Rubens Kuhl says:

Do you mean the so-called alternate roots?

      • Avtal says:

        Let me give this a try (restating Drewbert’s points, but in a different order).
        Here is a story.
        One day years ago, a fat-fingered user typed “video online” in his browser bar. The browser helpfully replaced the space by a dot, prepended “http://”, and sent out a query for the URL http://video.online. The DNS system returned “no such domain”, but recorded the attempt.
        As a result of this fat-fingered user, the string “video.online” made it into the “day in the life of the internet” database, and then made its way onto ICANN’s block list.
        Because it is on the block list, no one is allowed to register “video.online” in the .online gTLD.
        But there is no reason to block the registration of “video.online”; this wasn’t a leak from a private network. This string showed up on the block list not because of a misconfigured private network, but because it’s a string that users like to type in the browser bar.
What I (and I assume Drewbert and M) would like to see is a way of removing frequently-typed strings from the block list, reserving it for bona fide leaks from private networks.
        One way to do this might be to run the current block list through Google (replacing the dots with spaces) and see which items on the block list are in fact frequently searched terms.

        • Rubens Kuhl says:

Although the final framework is yet to be published, Jeff went on record during ICANN Buenos Aires saying that he thought that in the end no blocking would be part of the framework. So there might be a focus here on something that is already not there… if any kind of blocking reappears in his blog posts or the final framework, then we can question why.
          Popular terms are actually not collisions, but the lack of something the user wished for, like .rio… that’s the point you and others are trying to make, but it is probably already factored into the framework.

      • Drewbert says:

        Alternate roots? Certainly not! I think I stick to ICP3 more than ICANN does!

    • Jeff Schmidt says:

      Hi, Drewbert: I’m still not sure I completely understand your points/questions. Apologies. The “block lists” are driven entirely by the queries sent to the root asking for information about a non-existent TLD. These queries at the time, correctly, returned NXDOMAIN because the TLD didn’t exist. There is no linguistic, cultural, or other “context” to these queries/responses (and thus the block lists). They are what they are. As you know the root isn’t aware of IDNs directly and the linguistic implications of the strings; all a-labels are the same in the eyes of DNS. Correctly.
      But let’s not get sidetracked on the block lists; other than the reality that our plans must recognize that some (but not all) registries may be operating with block lists deployed, this is about the “big picture” not block lists. In fact, we must specify a way to get away from the block lists.
      Does that help?

      • Drewbert says:

        So where do you believe all this NXDOMAIN traffic is coming from, Jeff?

      • Drewbert says:

        >There is no linguistic, cultural, or other “context” to these
        >queries/responses (and thus the block lists).
        Ah, but there is.
        I’ll give you an example…
        If I ring you on the phone and verbally ask you to visit the website “championat.com”, you will naturally go to your browser and type championat.com into it, and the DNS will duly deliver the correct information for you to get to that website ( a russian language ice hockey site).
        If a Russian Internet user rings his friend up as ands him to visit the website, he will say championat.com in Russian and his friend, not knowing a word of English, will type “чемпионат.ком” into his browser in Cyrillic. Since browsers these days can handle Unicode thanks to IDNA2008, the browser will translate that to “XN–80AJJPGFF2A1B.xn--j1aef” and will head off to the DNS. Since Verisign is yet to sign a contract with ICANN for .xn--j1aef it is not yet delegated, and will reply NXDOMAIN.*
        And that NXDOMAIN will then be added to the statistics and a bunch of anglocentric engineers will mistake it for a “collision”, rather than a positive indication that the non-latin world is sick and tired of waiting around for .com/.net/.org transliterated GTLD’s and want to see them delegated so they can play catch-up with those citizens of the world lucky enough to use the same character set as the people who first invented DNS.
        If you don’t believe me, and want more proof of this, please take a look at the blocklist for xn--j1aef. There are 6416 xn-- strings listed, and if you convert them to Unicode, they read like the top 6416 words/brands in the Russian dictionary. If you have someone who speaks Russian, I can send that list to them and they can confirm this with you, if you don’t believe me.
        The NXDOMAIN stats contain much more than simple “network collision” errors, and any engineering solution must take this into account. Treating the entire NXDOMAIN data set as errors and sending them to special IP numbers destroys the usefulness of many newGTLD’s (not just IDN ones) and will affect the viability of the TLD owner as many top domains will be blocked by your suggested system (because they are already receiving natural traffic even though they are not yet delegated). Your suggested system BREAKS THE INTERNET.
        * The owners of championat.com have registered чемпионат.com (XN–80AJJPGFF2A1B.com) and are waiting for Verisign and ICANN to finally delegate .xn--j1aef so they can use their full Cyrillic domain. Of course, if Verisign make the mistake of going along with the ICANN blocklist proposed, the championat.com owners will be out of luck, because чемпионат XN–80AJJPGFF2A1B is listed in the blocklist because the people making up the blocklists mistake natural traffic for private network leakage.
        How do you propose to differentiate between these different causes of NXDOMAIN? Is source IP number logged?
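        (The conversion above is easy to reproduce with Python’s built-in IDNA codec. A sketch follows; note the codec implements IDNA2003 rather than the IDNA2008 mentioned above, but for these all-lowercase Cyrillic labels the result is the same.)

        ```python
        # Reproduce the Cyrillic-to-ASCII (punycode) step described above.
        name = "чемпионат.ком"
        ace = name.encode("idna").decode("ascii")
        print(ace)  # xn--80ajjpgff2a1b.xn--j1aef, per the example above
        ```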

        • V. B. Ivanov says:

As a native Russian speaker, I state that Drewbert is 100% correct. These so-called “block lists” are nothing but ignorance on the part of English speakers who have no idea about linguistic issues.
          The fact is, this is a band-aid tool to prevent the use of our language, to the benefit of some big-pocketed “private internal network” operators who spend a lot of money on incompetent IT staff that can’t work DNS correctly.
          Do it right in your local networks and leave other-language domains out of it.

          • Rubens Kuhl says:

As an applicant and back-end for non-English ASCII TLDs, I saw the same thing. But this also applies to English words… all of the good domains are in the APD reports’ blocking lists. This bad idea comes from unwillingness to set an acceptable risk metric, not from language bias. Actually, it’s a language bias to think that this is a language bias… it’s indeed stupid, but of a global kind.
            Blocking lists were actually a way to hide a delay in the program, by moving forward with contract signing, delegation and sunrise in parallel for TLDs willing to risk knowing only after the framework is published what they can actually do with the TLD. But this is opt-in; ICANN is not running the 9-month clock to sign agreements yet, so everyone that is unhappy with APD can wait until the framework is developed, publicly commented on and implemented.

  13. Drewbert says:

>As an applicant and back-end for non-English ASCII
    >TLDs, I saw the same thing. But this also applies to
    >English words… all of the good domains are in the APD
    >reports’ blocking lists. This bad idea comes from
    >unwillingness to set an acceptable risk metric, not
    >from language bias. Actually, it’s a language bias to think
    >that this is a language bias… it’s indeed stupid, but of
    >a global kind.
    Agreed. It’s harder to argue the IDN case though, as the anglo-centric engineers just see a raft of “xn--…” entries in the NXDOMAIN stats and do not take the time to translate them.
Does Jeff’s proposal include an “acceptable risk metric” to allow these popular browser requests to be delegated, or will they be mistaken for private network leakage, just as they are in the current system, to the detriment of the gTLD owners and users?

    • Rubens Kuhl says:

NTAG has advocated for IDNs to be exempt from this since the issue first appeared, but unfortunately nobody wanted to draw a line in the sand for it.

  14. Drewbert says:

Well, Jeff seems to be conspicuous by his absence in this ongoing conversation, but maybe someone else following this blog post would care to find a friend with a BlackBerry, fire up the Opera web browser on it, type “dns guru” (without quotes) into the URL (or “address”) field, and tell us what happens, please?
