Another possible explanation has been put forward for ICANN’s suspension of digital archery, this time by one of the third-party digital archery service providers.
The ambitiously named Digital Archery Experts says it alerted ICANN to a technical problem a week ago.
Chief technology officer Dirk Bhagat described it thus:
Instead of generating the timestamp immediately, we believe the TAS timestamp generation process may be delayed by increases in system load…
Since most applicants are aiming for the 000 millisecond variance at the minute mark, this can introduce varying timestamps since applicants are shooting for the exact same second on the minute. We have also noted that our results were a lot more consistent when attempts were made to hit the target at various offsets after the minute mark, for example, aiming for 15:32:07 instead of 15:32:00.
It’s not exactly rocket science. In short, he’s saying that the TAS can’t handle too many applicants logging in and shooting at the same time; more load equals poorer performance.
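Bhagat's offset advice can be sketched in a few lines. This is a hypothetical illustration of the aiming logic only, not any provider's actual tooling; the function name and parameters are my own.

```python
from datetime import datetime, timedelta

def next_target(now: datetime, offset_seconds: int) -> datetime:
    """Return the next minute boundary plus an offset -- e.g. an offset
    of 7 aims at hh:mm:07 rather than the congested hh:mm:00 mark."""
    minute_mark = now.replace(second=0, microsecond=0) + timedelta(minutes=1)
    target = minute_mark + timedelta(seconds=offset_seconds)
    if target <= now:
        # Already past this minute's offset; aim for the next one.
        target += timedelta(minutes=1)
    return target

# Aiming for 15:32:07 instead of 15:32:00, as in Bhagat's example:
print(next_target(datetime(2012, 6, 22, 15, 31, 40), 7))  # -> 2012-06-22 15:32:07
```

The idea is simply to avoid the load spike that everyone else creates by shooting at the exact minute mark.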
This won’t be news to many applicants, some of whom saw downtime last week that seemed to be caused by a meltdown of the sluggish Citrix virtual machine software.
It also seems to be consistent with the hypothesis that the massive amount of calibration going on — much of it by digital archery service providers themselves — has caused more load than TAS can handle.
With just 20% of applications currently assigned a timestamp and a week left on the clock, the situation can only have been exacerbated by lots of last-minute arrows being fired.
While digital archery may be conceptually similar to grabbing a dropping domain or hitting a landrush, it seems pretty clear that TAS is not as redundantly provisioned as the typical registry SRS.
Bhagat said that ICANN could mitigate the impact of the problem by separating timestamp generation as much as possible from the parts of the infrastructure impacted most by system load.
This might all be academic, however.
Digital archery and batching are high on the agenda here at ICANN 44 in Prague, and many attendees hope that the controversial system may be gone for good before the week is out.
That includes some members of the Governmental Advisory Committee, which in an open meeting yesterday seemed to be coming to the conclusion that it would advise ICANN to ditch digital archery.
The GAC and the ICANN board’s new gTLD program committee are having their first public facetime this afternoon at 1630 local time, at which a better sense of how both plan to proceed might emerge.
ICANN has turned off its unpopular “digital archery” system after new gTLD applicants and independent testing reported “unexpected results”.
As delegates continue to hit the tarmac here in Prague for ICANN 44, at which batching may well be the hottest topic in town, digital archery is now surely doomed.
ICANN said in a statement this morning:
The primary reason is that applicants have reported that the timestamp system returns unexpected results depending on circumstances. Independent analysis also confirmed the variances, some as a result of network latency, others as a result of how the timestamp system responds under differing circumstances.
While that’s pretty vague, it could partly refer to the kind of geographic randomness reported by ARI Registry Services, following testing, earlier this week.
It could also refer to the kind of erratic results reported by Top Level Domain Holdings two weeks ago, which were initially dismissed as a minor display-layer error.
TLDH has also claimed that the number of opportunistic third-party digital archery services calibrating their systems against the live site had caused latency spikes.
Several applicants also said earlier this week that the TLD Application System had been inaccessible for long periods, apparently due to a Citrix overloading problem.
Only 20% of applications had so far registered their archery timestamp, according to ICANN, despite the fact that the system was due to close down on June 28.
Make no mistake, this is another technical humiliation for ICANN, one which casts the resignation of new gTLD program director Michael Salazar on Thursday in a new light.
For applicants, ICANN said evaluations were still proceeding according to plan, but that the batching problem is now open for face-to-face community discussion:
The evaluation process will continue to be executed as designed. Independent firms are already performing test evaluations to promote consistent application of evaluation criteria. The time it takes to delegate TLDs will depend on the number and timing of batches.
The information gathered from community input to date and here in Prague will be weighed by the New gTLD Committee of the Board. The Committee will work to ensure that community sentiment is fully understood and to avoid disruption to the evaluation schedule.
Expect ICANN staff to take a community beating over these latest developments as ICANN 44 kicks off here in Prague.
There’s light support for batching, and even less for digital archery. It’s looking increasingly likely that neither will survive the meeting.
ICANN has named Fadi Chehade as its new CEO.
Lebanon-born Chehade is a California-based software industry executive currently CEO of Vocado, a maker of educational software.
“I’m here because I owe the internet everything I’ve achieved to date,” he said at a press conference (ongoing).
He’s not due to take over until October 1. Until then, COO Akram Atallah will hold the reins, ICANN confirmed.
Chehade has known Atallah since they were kids — they used to be in the same boy scout troop, he said — and they worked together at Core Objects, where Chehade was CEO.
ICANN chairman Steve Crocker pointed to Chehade’s role as founder of RosettaNet, a supply chain software standards consortium, as evidence of his experience of consensus-building work.
Michael Salazar, director of ICANN’s new gTLD program, has quit.
He’ll be replaced on an interim basis by Kurt Pritz, senior director of stakeholder relations, according to a statement from ICANN this evening.
No reason for his resignation, which comes shortly after the Big Reveal and on the eve of ICANN’s public meeting in Prague, was given.
Salazar, a KPMG alum, joined ICANN in July 2009. Unlike Pritz, he’s not been a particularly public face of the program.
It’s not entirely unusual for people to leave companies after hitting project milestones, but the timing in this case, given ICANN’s ongoing public perception problem, is unfortunate.
The organization is due to reveal its new CEO in about 12 hours’ time, and from what I gather the new appointee isn’t expected to take on the role for a couple of months.
Having another senior staffer with responsibility over the new gTLD program quit at the same time will look bad.
ICANN’s new generic top-level domain program could create almost 900 closed, single-user namespaces, according to DI PRO’s preliminary analysis.
Surveying all 1,930 new gTLD applications, we’ve found that 912 – about 47% – can be classified as “single registrant” bids, in which the registry would tightly control the second level.
Single-registrant gTLDs are exempt from the Registry Code of Conduct, which obliges registries to offer their strings equally to the full ICANN-accredited registrar channel.
The applications include those for dot-brand strings that match famous trademarks, as well as attempts by applicants such as Amazon and Google to secure generic terms for their own use.
Our definition of “single registrant” includes cases where the applicant has indicated a willingness to lightly share second-level domains with its close affiliates and partners.
It also includes applications such as those for .gov-style zones in non-US jurisdictions, where domains would be available to multiple agencies under the same government umbrella.
But it does not include gTLD applications that would merely require registrants to provide credentials, be a member, or agree to certain restrictions in order to register a domain.
Since there’s been a lot of discussion this last week about whether the single-registrant model adds value to the internet, I thought I’d try to measure the likely scale of the “problem” when it comes to eventual delegation into the DNS root zone.
How many closed registries could we see?
According to the DI PRO database, of the 912 single-registrant applications, 132 are in contention sets. There are 101 contention sets with at least one such applicant.
Some are up against regular multiple-registrant applications (both open and restricted gTLDs), whilst others are only fighting it out with other single-registrant applicants.
Let’s look at a couple of hypothetical scenarios.
Scenario One – Single-Registrant Applicants Win Everything
First, let’s assume that each and every applicant passes their evaluations, does not drop out, and there are no successful objections.
Then let’s imagine that every contention set containing at least one single-registrant bidder is won by one of those single-registrant bidders.
According to my calculations, that would eliminate 31 single-registrant applications and 226 multiple-registrant applications from the pool.
Another 264 multiple-registrant gTLD applications would be eliminated in normal contention.
That would leave us with 881 single-registrant gTLDs and 528 regular gTLDs in the root.
Scenario Two – Single-Registrant Applicants Lose Everything
Again, let’s assume that everybody passes their evaluations and there are no objections or withdrawals.
But this time let’s imagine that every single-registrant applicant in a contention set with at least one multiple-registrant bidder loses. This is the opposite of our first scenario.
According to my calculations, that would eliminate 117 single-registrant applications and 140 multiple-registrant applications.
Again, normal contention would take care of another 264 multiple-registrant applications.
That would leave us with 795 single-registrant gTLDs in the root and 614 others.
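The arithmetic for both scenarios can be re-run from the figures quoted above. The helper function and variable names here are mine, not part of the DI PRO database; the elimination counts are the ones stated in the post.

```python
# Figures from the post: 1,930 total applications, 912 classed as
# "single registrant", leaving 1,018 multiple-registrant bids.
TOTAL_APPS = 1930
SINGLE = 912
MULTI = TOTAL_APPS - SINGLE  # 1018

# 264 multiple-registrant applications fall in "normal" contention
# sets with no single-registrant bidder, eliminated in both scenarios.
MULTI_ELIMINATED_OTHER = 264

def remaining(single_eliminated, multi_eliminated_mixed):
    """Applications left in the root, given the post's elimination counts."""
    return (SINGLE - single_eliminated,
            MULTI - multi_eliminated_mixed - MULTI_ELIMINATED_OTHER)

# Scenario One: single-registrant bidders win every mixed contention set.
print(remaining(31, 226))   # -> (881, 528)

# Scenario Two: single-registrant bidders lose every mixed contention set.
print(remaining(117, 140))  # -> (795, 614)
```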
In both of these scenarios, at either extreme of the possible contention outcomes, single-registrant gTLDs make up a comfortable majority of delegated gTLDs.
Of course, there’s no telling how many applications of all types will choose to withdraw, fail their evaluations, or be objected out of the game, so the numbers could change considerably.
As another disclaimer: this is all based on our preliminary analysis of the applications, subject to a margin of error and possible changes in future as we refine our categorization algorithms.