Latest news of the domain name industry

Recent Posts

Neustar leading the new gTLD back-end scores so far

Kevin Murphy, March 25, 2013, 17:45:15 (UTC), Domain Registries

New gTLD applications backed by registry service provider Neustar scored the highest results in the first batch of Initial Evaluation results.
All 27 of the applications that have had their IE results revealed by ICANN so far have comfortably cleared the 22-out-of-30-point threshold required to pass the technical evaluation.
In most cases, each application had its technical questions answered by the applicant’s chosen back-end provider.
Eight different back-ends are involved in the first 27 bids, some with more applications than others.
Here’s the average score out of 30 for each company.
[table id=12 /]
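For readers curious how the table's numbers are derived, the calculation is just a group-by-and-average over per-application technical scores, with the 22/30 pass threshold applied per application. A minimal sketch, using hypothetical scores (the real per-application figures are in the table above and on DI PRO):

```python
# Sketch of the per-back-end averaging behind the table above.
# Scores here are illustrative placeholders, not the actual IE results.
from collections import defaultdict

PASS_THRESHOLD = 22  # points out of 30 needed to pass the technical evaluation

# (back-end provider, technical score) — hypothetical values only
results = [
    ("Neustar", 30),
    ("Neustar", 28),
    ("Verisign", 30),
    ("Verisign", 26),
    ("ARI Registry Services", 27),
]

# Group each application's score under its back-end provider
by_provider = defaultdict(list)
for provider, score in results:
    by_provider[provider].append(score)

# Average per provider, and check that every application passed
for provider, scores in sorted(by_provider.items()):
    avg = sum(scores) / len(scores)
    all_passed = all(s >= PASS_THRESHOLD for s in scores)
    print(f"{provider}: avg {avg:.1f}/30, all passed: {all_passed}")
```

As the comments below point out, an average computed this way can be dragged down by deliberate strategic choices, so it measures score, not capability.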
Only Neustar and Verisign scored the full 30 points in an application with their name on it, but their averages were reduced by applications in which they fared less well.
It’s very early days, of course, with the full set of IE results not due to be completely published until August.
We’ll be tracking these scores as more results are released on DI PRO.


Comments (6)

  1. Jean Guillon says:

    Neustar seems to be good!

  2. Please remember that a higher score does not necessarily signify a better application. In fact, a customer or provider may have opted to achieve only one point in a two-point question for various reasons. Therefore, any passing result may actually be a 100% achievement of the goals set by the applicant in the application, as opposed to a “bad” result.
    That said, we are naturally very happy with our result, having gotten each point that we set out to get.

  3. Chris Wright says:

    A comparison of evaluation scores between applications is not an accurate measure of the technical capability of each registry provider.
    It’s important to note that some applicants strategically chose to opt for a lower mark on certain parts of the application in line with their business and commercial plans.
    For instance, in the WHOIS section, we intentionally chose not to offer WHOIS Search, recognising that we would lose points as a result, because we believe offering WHOIS Search can be a burden on the success of a TLD. Registrants wanting to register under TLDs with WHOIS Search may be discouraged knowing that their personal contact details will be open to being queried by any user of the service. We believe WHOIS Search has not been properly thought out and presents a serious privacy concern for registrants. As such, we were happy to forgo these extra points in order to address these concerns.
    Furthermore, another section we intentionally received lower marks for was on Rights Protection Mechanisms (RPMs). For this question, ICANN outlined mandatory requirements to receive a score of one, or you could go above and beyond the mandatory requirements to receive a score of two. We chose to only offer the mandatory RPM requirements because we felt anything beyond this would be a significant and unnecessary burden on our clients. We didn’t force extra RPMs on our clients so that we could help to reduce the costs on them and ultimately the costs to registrants. This is a strategic decision to increase the success of the TLD.
    Ultimately, we were so confident in the accuracy of our responses that we did not need to add an extra layer of burden on our clients to ensure they passed the evaluation. The whole notion of comparing evaluation scores is flawed because the application contained optional components that applicants did not have to pursue.
    ARI Registry Services scored exactly what we were predicting and targeting, so we were 100% on target and 100% successful as far as we were concerned. We will also be watching those registries that have said they are doing extra RPMs and WHOIS Search, to ensure that they do indeed offer those services… or indeed whether they just ‘made them up’.
    It’s also worth noting that we have observed differences between application scores that are likely a reflection of inconsistencies between evaluators. We have had identical answers receive different scores, which shouldn’t be possible.

    • Kevin Murphy says:

      Rest assured that for any serious, detailed analysis of scores I’ll be going a lot deeper than in this post.

      • Zack says:

        Scores don’t mean anything. All it means is that they were able to answer the questions better, I suppose. It is not necessarily indicative of who has a better back-end. As far as ICANN is concerned, a pass is a pass. I am not sure what value this story is trying to deliver.

  4. Rubens Kuhl says:

    It’s also worth noting that the IDN question was for an extra point, the only one where applicants can receive a grade of zero. Although we would support IDNs even with no extra point, we noticed that this could be worth doing just to get the extra point, provided IDN support was limited to a restricted charset of one language of one script. Even US-targeted applications could, with very little work, have added the Spanish “ñ” as a possible character, since Spanish speakers are a growing demographic in that country, and earned an extra point in return.
    We preferred not to implement WHOIS Search, but noticed that our usual RPM handling (ccTLD-style) was very near what Q28 and Q29 asked for. We described what we would do anyway; if we get the 2 points, fine. If we don’t, that’s fine too. We wouldn’t increase recurring work just to get shiny grades.

Add Your Comment