ICANN stuck between Ukraine and Russia in time zone debate
As the world waits nervously to see whether Russia’s weeks-long troop build-up on the Ukrainian border will result in an invasion, ICANN is embroiled in an infinitely more trivial conflict between the two nations.
As well as overseeing domain names, IP addresses and protocol numbers, a decade ago ICANN took over the time zone database that most of the world’s computers rely on to figure out what the correct time is or was.
The Time Zone Database or tzdb has been hosted by ICANN’s IANA unit since 2011, when it stepped in to relieve the previous host, which was being badgered in court by astrologers.
It’s managed and regularly updated — such as when a country changes its time zone or modifies its daylight saving practices — by Paul Eggert of the University of California.
While the database is apolitical and governed by IETF best practice, it occasionally finds itself in the firing line of political controversies.
In recent years, a recurrent controversy — which has raised its head again this month in light of the current border crisis — has been the spelling of the Ukrainian capital city.
It has long been rendered in English as “Kiev”, but that’s the Latin-script transliteration of the Russian-Cyrillic spelling Киев, rather than the Ukrainian-Cyrillic spelling, Київ, which is transliterated as “Kyiv”.
With tensions between Russia and Ukraine intensifying since the 2014 annexation of Crimea, Ukraine has for years appealed to the international community to change its “painful” spelling practices.
The Ukrainian Ministry of Foreign Affairs in 2019, as part of its #CorrectUA and #KyivNotKiev campaigns, described the situation like this:
Under the Russian empire and later the Union of Soviet Socialist Republics (USSR), Russification was actively used as a tool to extinguish each constituent country’s national identity, culture and language. In light of Russia’s war of aggression against Ukraine, including its illegal occupation of Crimea, we are once again experiencing Russification as a tactic that attempts to destabilize and delegitimize our country. You will appreciate, we hope, how the use of Soviet-era placenames – rooted in the Russian language – is especially painful and unacceptable to the people of Ukraine.
Many English-language news outlets — including the Associated Press and Guardian style guides, the BBC, New York Times and Wall Street Journal — have since switched to the “Kyiv” spelling, though many others still use “Kiev”.
The US and UK governments both currently use “Kyiv”. Wikipedia switched to “Kyiv” in 2020. ICANN’s own new gTLD program rules, which draw on international standards, would treat both “Kiev” and “Kyiv” as protected geographic names.
My Windows computer uses “Kyiv”, but the clock on my Android phone uses “Kiev”.
The tzdb currently lists Kyiv’s time zone as “Europe/Kiev”, because it follows the English-language consensus, with the comments providing this October 2018 explanation from Eggert:
As is usual in tzdb, Ukrainian zones use the most common English spellings. For example, tzdb uses Europe/Kiev, as “Kiev” is the most common spelling in English for Ukraine’s capital, even though it is certainly wrong as a transliteration of the Ukrainian “Київ”. This is similar to tzdb’s use of Europe/Prague, which is certainly wrong as a transliteration of the Czech “Praha”. (“Kiev” came from old Slavic via Russian to English, and “Prague” came from old Slavic via French to English, so the two cases have something in common.) Admittedly English-language spelling of Ukrainian names is controversial, and some day “Kyiv” may become substantially more popular in English; in the meantime, stick with the traditional English “Kiev” as that means less disruption for our users.
Because the tzdb is incorporated into billions of installations of operating systems, programming frameworks and applications worldwide, its maintainers take a conservative approach to changes for compatibility reasons.
In addition, the spelling in the database is not supposed to be exposed to end users. Developers may use tzdb in their code, but they’re encouraged to draw on resources such as the Unicode Common Locale Data Repository to localize their user interfaces.
As Eggert put it on the tzdb mailing list recently: “the choice of spelling should not be important to end users, as the tzdb spelling is not intended to be visible to them”.
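To make that split concrete, here’s a minimal sketch in Python (3.9+ standard library): the tzdb identifier is only an internal key used for the time calculations, while the string shown to the user comes from a CLDR-style localization layer. The DISPLAY_NAMES table below is a hypothetical stand-in for data a real application would pull from the Unicode CLDR (via the babel package, for example) rather than hard-code.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib bindings to the IANA tzdb, Python 3.9+

TZDB_ID = "Europe/Kiev"  # the identifier as it appears in tzdb at time of writing

# Hypothetical CLDR-style display names; a real app would source these from the
# Unicode Common Locale Data Repository instead of hard-coding them.
DISPLAY_NAMES = {
    ("Europe/Kiev", "en"): "Kyiv",
    ("Europe/Kiev", "uk"): "Київ",
}

now = datetime.now(ZoneInfo(TZDB_ID))                # tzdb ID used only for the time math
label = DISPLAY_NAMES.get((TZDB_ID, "en"), TZDB_ID)  # UI string comes from the locale layer
print(f"{label}: {now:%Y-%m-%d %H:%M %Z}")
```

An application built this way gets its Kyiv-versus-Kiev decision from localization data, not from the tzdb key baked into its code and databases, which is why a renamed zone identifier is treated as a compatibility problem rather than a user-facing one.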
Based on past changes, it seems that “Kyiv” could before too long supplant “Kiev” in the tzdb, if the current political status quo remains and English-speaking nations increasingly support Ukraine’s sovereignty.
But if Russia should invade and occupy, who knows how the language will change?
This article has been part of an irregular series entitled “Murphy Feels Guilty About Covering Incredibly Serious Current Events With A Trivial Domain Angle, But He Writes A Domain Blog So Cut Him Some Slack FFS”.
UDRP cases soar at WIPO in 2021
The World Intellectual Property Organization has released statistics for cybersquatting cases in 2021, showing one of the biggest growth spurts in UDRP’s 22-year history.
Trademark owners filed 5,128 UDRP complaints last year, WIPO said, a 22% increase on 2020.
There have been almost 56,000 cases since 1999, covering over 100,000 domain names, it said.
The number of annual cases has been growing every year since 2013, its numbers show.
WIPO took a punt that the increase last year might be related to the ongoing coronavirus pandemic, but didn’t really attempt to back up that claim, saying in a release:
The accelerating growth in cybersquatting cases filed with the WIPO Center can be largely attributed to trademark owners reinforcing their online presence to offer authentic content and trusted sales outlets, with a greater number of people spending more time online, especially during the COVID-19 pandemic.
The number of domains hit by UDRP that include strings such as “covid”, “corona” or “vaccine” is pretty small, amounting to just a few dozen domains across all providers, searches show.
The growth does not necessarily mean the total number of UDRP cases has increased by a commensurate amount — some of it might be accounted for by WIPO winning market share from the five other ICANN-approved UDRP providers.
It also does not indicate an increase in cybersquatting. WIPO did not release stats on the number of cases that resulted in a domain name being transferred to the complaining trademark owner.
“It’s not our fault!” — ICANN blames community for widespread delays
ICANN may be years behind schedule when it comes to getting things done on multiple fronts, but it’s the community’s fault for making up rubbish policies, bickering endlessly, and attempting to hack the policy-making process.
That’s me paraphrasing a letter sent last week by chair Maarten Botterman to the Registries Stakeholder Group, in which he complained about the community providing “ambiguous, incomplete, or unclear policy recommendations”.
RySG chair Samantha Demetriou had written to Botterman (pdf) in December to lament the Org and board’s lack of timely progress on many initiatives, some of which have been in limbo for many years.
Policies and projects related to Whois, new gTLDs and the Independent Review Process have been held up for a long time, in the latter case since 2013, she wrote, leading to community volunteers feeling “disempowered or discouraged”.
As I recently reported, ICANN has not implemented a GNSO policy since 2016.
The lack of board action on community work also risks ICANN’s legitimacy and credibility, Demetriou wrote.
But Botterman’s response (pdf), sent Thursday, deflects blame back at the community, denying that the delays are “simply because of failure at the level of the organization and Board.”
He wrote:
we need to continue to find our way forward together to address the challenges that affect the efficiency of our current decision-making processes, including, for example, ambiguous, incomplete, or unclear policy recommendations, the relitigation of policy issues during implementation, and the use of the review process to create recommendations that should properly be addressed by policy development
In other words, the community is providing badly thought-out policy recommendations, continuing to argue about policy after the implementation stage is underway, and using community reviews, rather than the Policy Development Process, to create policy.
The RySG, along with their registrar counterparts, put their concerns to the board at ICANN 72 in October, warning of “volunteer burnout” and a “chilling effect” on community morale due to board and Org inaction.
At that meeting, director Avri Doria presented staff-compiled stats showing that across five recent bylaws-mandated community reviews (not PDPs), the board had received 241 recommendations.
She said that 69% had been approved, 7% had been rejected, 18% were placed in a pending status, and 6% were “still being worked on”.
CEO Göran Marby provided a laundry list of excuses for the delays, including: reconciling differing community viewpoints, the large number of recommendations being considered, the potential for some recommendations to break ICANN bylaws, sensitivity to the bottom-up nature of the multi-stakeholder process, lack of staff, and the extra time it takes to be transparent about decision-making.
Just this week, ICANN has posted eight job listings, mostly in policy support.
In his letter last week, Botterman pointed to a “Prioritization Framework”, which is currently being piloted, along with further community conversations at ICANN 73 next month and a “thought paper” on “evolving consensus policies”.
Because why fix something when you can instead create another layer of bureaucracy and indulge in more navel-gazing?
Verisign and PIR join new DNS abuse group
The domain name industry has just got its fourth (by my count) DNS abuse initiative, this one with plans to work on “trusted notifier” programs and with Public Interest Registry and Verisign among its members.
topDNS, which announced itself this week, is a project out of eco, the German internet industry association. It said its goals are:
the exchange of best practices, the standardisation of abuse reports, the development of a trusted notifier framework, and awareness campaigns towards policy makers, decision-makers and expert groups
eco’s Thomas Rickert told DI that members inside and outside the industry had asked for such an initiative to combat “the narrative that industry is not doing enough against an ever-increasing problem”.
He said there’s a “worrying trend” of the domain industry being increasingly seen as an easy bottleneck to get unwelcome content taken down, rather than going after the content or hosting provider.
“There is not an agreed-upon definition of what constitutes DNS abuse,” he said.
“There are groups interested in defining DNS abuse very broadly, because it’s more convenient for them I guess to go to a registrar or registry and ask for a domain takedown rather than trying to get content taken down with a hosting company,” he said.
topDNS has no plans to change the definition of “DNS abuse” that has already been broadly agreed upon by the legit end of the industry.
The DNS Abuse Framework, signed by 11 major registries and registrars in 2019 (it’s now up to 48 companies), defines it as “malware, botnets, phishing, pharming, and spam (when it serves as a delivery mechanism for the other forms of DNS Abuse)”.
This is pretty much in line with their ICANN contractual obligations; ICANN itself shies away from being seen as a content regulator.
The big asterisk next to “spam” perhaps delineates “domains” from “content”, but the Framework also recommends that registries and registrars should act against content when it comprises child sexual abuse material, illegal opioid sales, human trafficking, and “specific and credible” incitements to violence.
Rickert said the plan with topDNS is to help “operationalize” these definitions, providing the domain industry with things like best practice documents.
Of particular interest, and perhaps a point of friction with other parties in the ecosystem in future, is the plan to work on “the development of a trusted notifier framework”.
Trusted notifier systems are in place at a handful of gTLD and ccTLD registries already. They allow organizations — typically law enforcement or Big Content — a streamlined, structured path to get domains taken down when the content they lead to appears to be illegal.
The notifiers get a more reliable outcome, while the registries get some assurances that the notifiers won’t take the piss with overly broad or spammy takedown requests.
topDNS will work on templates for such arrangements, not on the arrangements themselves, Rickert said. Don’t expect the project to start endorsing certain notifiers.
Critics such as the Electronic Frontier Foundation regard such programs as bordering on censorship and therefore dangerous to free speech.
While the topDNS initiative only has six named members right now, it does have Verisign (.com and .net) and PIR (.org), which together look after about half of all extant domains across all TLDs. It also has CentralNic, a major registrar group and provider of back-end services for some of the largest new gTLDs.
“Verisign is pleased to support the new topDNS initiative, which will help bring together stakeholders with an interest in combating and mitigating DNS security threats,” a company spokesperson said.
Unlike CentralNic and PIR, Verisign is not currently one of the 48 signatories of the DNS Abuse Framework, but the spokesperson said topDNS is “largely consistent” with that effort.
Verisign has also expressed support for early-stage trusted notifier framework discussions being undertaken by ICANN’s registry and registrar stakeholder groups.
PIR also has its own separate project, the DNS Abuse Institute, which is working on similar stuff, along with some tools to support the paperwork.
DNSAI director Graeme Bunton said: “I see these efforts as complementary, not competing, and we are happy to support and participate in each of them.” He’s going to be on topDNS’s inaugural Advisory Council, he and Rickert said.
Rickert and Bunton both pointed out that topDNS is not going to be limited to DNS abuse issues alone — that’s simply the most pressing current matter.
Rickert said issues such as DNS over HTTPS and blockchain naming systems could be of future interest.
ICANN hasn’t implemented a policy since 2016
It’s been over five years since ICANN last implemented a policy, and many of its ongoing projects are in limbo.
Beggars belief, doesn’t it?
The ongoing delays to new gTLD program policy and the push-back from ICANN on Whois policy recently got me thinking: when was the last time ICANN actually did anything in the policy arena apart from contemplate its own navel?
The Org’s raison d’être, or at least one of them, is to help the internet community build consensus policies about domain names and then implement them, but it turns out the last time it actually did that was in December 2016.
And the implementation projects that have come about since then are almost all frozen in states of uncertainty.
ICANN policies covering gTLD domains are usually initiated by the Generic Names Supporting Organization. Sometimes, the ICANN board of directors asks the GNSO Council for a policy, but generally it’s a bottom-up, grass-roots process.
The GNSO Council kicks it off by starting a Policy Development Process, managed by a working group stocked with volunteers from different and often divergent special interest groups.
After a few years of meetings and mailing list conversations, the working group produces a Final Report, which is submitted to the Council, and then the ICANN board, for approval. There may be one or more public comment periods along the way.
After the board gives the nod, the work is handed over to an Implementation Review Team, made up of ICANN staff and working group volunteers, which converts the policy into implementation, such as enforceable contract language.
The last time an IRT actually led to a GNSO policy coming into force was December 1, 2016. Two GNSO consensus policies became active that day, their IRTs having concluded earlier that year.
One was the Thick WHOIS Transition Policy, which was to force the .com, .net and .jobs registries to transition to a “thick” Whois model by February 2019.
This policy was never actually enforced, and may never be. The General Data Protection Regulation emerged, raising complex privacy questions, and the transition to thick Whois never happened. Verisign requested and obtained multiple deferrals, and the board formally put the policy on hold in November 2019.
The other IRT to conclude that day was the Inter-Registrar Transfer Policy Part D, which tweaked the longstanding Transfer Dispute Resolution Policy and IRTP to streamline domain transfers.
That was the last time ICANN actually did anything in terms of enforceable, community-driven gTLD policy.
You may be thinking “So what? If the domain industry is ticking over nicely, who cares whether ICANN is making new policies or not?”, which would be a fair point.
But the ICANN community hasn’t stopped trying to make policy; its work just never seems to make the transition from recommendation to reality.
According to reports compiled by ICANN staff, there are 12 currently active PDP projects. Three are in the working group stage, five are awaiting board attention, one has just this month been approved by the board, and three are in the IRT phase.
Of the five PDPs awaiting board action, the average time these projects have been underway, counted since the start of the GNSO working group, is over 1,640 days (median: 2,191 days). That’s about four and a half years.
Counting since final policy approval by the GNSO Council, these five projects have been waiting an average of 825 days (median: 494 days) for final board action.
Of the five, two are considered “on hold”, meaning no board action is in sight. Two others are on a “revised schedule”. The one project considered “on schedule” was submitted to the board barely a month ago.
The three active projects that have made it past the board, as far as the IRT phase, have been there for an average of 1,770 days (median: 2,001 days), or almost five years, counted from the date of ICANN board approval.
So why the delays?
Five of the nine GNSO-completed PDPs, including all three at the IRT stage, relate to Whois policy, which was thrown into confusion by the European Union’s introduction of the GDPR legislation in May 2018.
Two of them pre-date the introduction of GDPR in May 2018, and have been frozen by ICANN staff as a result of it, while three others came out of the Whois EPDP that was specifically designed to bring ICANN policy into line with GDPR.
All five appear to be intertwined and dependent on the outcome of the ICANN board’s consideration of the EPDP recommendations and the subsequent Operational Design Assessment.
As we’ve been reporting, these recommendations could take until 2028 to implement, by which time they’ll likely be obsolete, if indeed they get approved at all.
Unrelated to Whois, two PDPs relate to the protection of the names and acronyms of international governmental and non-governmental organizations (IGOs/INGOs).
Despite being almost 10 years old, these projects are on hold because they ran into resistance from the Governmental Advisory Committee and ICANN board. A separate PDP has been created to try to untangle the problem; it hopes to provide its final report to the board in June.
Finally, there’s the New gTLD Subsequent Procedures PDP, which is in its Operational Design Phase and is expected to come before the board early next year, some 2,500 days (almost seven years) after the PDP was initiated.
I’m not sure what conclusions to draw from all this, other than that ICANN has turned into a convoluted mess of bureaucracy and I thoroughly understand why some community volunteers believe their patience is being tested.
“GDPR is not my fault!” — ICANN fears reputational damage from Whois reform
Damned if we do, damned if we don’t.
That seems to be an uncomfortable message emerging from ICANN’s ongoing discussions about SSAD, the proposed System for Standardized Access and Disclosure, which promises to bring some costly and potentially useless reform to the global Whois system.
ICANN’s board of directors and the GNSO Council met via Zoom last night to share their initial reactions to the ICANN staff’s SSAD Operational Design Assessment, which had been published just 48 hours earlier.
I think it’s fair to say that while there’s still some community enthusiasm for getting SSAD done in one form or another, there’s much more skepticism, accompanied by a fear that the whole sorry mess is going to make ICANN and its vaunted multistakeholder model look bad/worse.
Some say that implementing SSAD, which could take six more years and cost tens of millions of dollars, would harm ICANN’s reputation if, as seems quite possible, hardly anyone ends up using it. Others say the risk comes from pissing away years of building community consensus on a set of policy recommendations that ultimately don’t get implemented.
GNSO councillor Thomas Rickert said during yesterday’s conference call:
One risk at this stage that I think we need to discuss is the risk to the credibility of the functionality of the multi-stakeholder model. Because if we give up on the SSAD too soon, if we don’t come up with a way forward on how to operationalize it, then we will be seen as an organization that takes a few years to come up with policy recommendations that never get operationalized and that will certainly play into the hands of those who applaud the European Commission for coming up with ideas in NIS2, because obviously they see that the legislative process at the European and then at the national state level is still faster than ICANN coming up with policies.
NIS2 is a forthcoming EU directive that is likely to shake up the privacy-related legal landscape yet again, almost certainly before ICANN’s contractors even type the first line of SSAD code.
While agreeing with Rickert’s concerns, director Becky Burr put forward the opposing view:
The flip side of that is that we build it, we don’t have the volume to support it at a reasonable cost basis and it does not change the outcome of a request for access to the Whois data… We build it, with all its complexity and glory, no one uses it, no one’s happy with it and that puts pressure on the multi-stakeholder model. I’m not saying where I come out on this, but I feel very torn about both of those problems.
The ODA estimates the cost of building SSAD at up to $27 million, with the system not going live until 2027 or 2028. Ongoing annual operating costs, funded by fees collected from the people requesting private Whois data, could range from $14 million to $107 million, depending on how many people use it and how frequently.
These calculations are based on an estimated user base of between 25,000 and three million, with annual queries of between 100,000 and 12 million. The less use the system gets, the higher the per-query cost.
But some think the low end of these assumptions may still be too high, and that ultimately usage would be low enough to make the query fees so high that users will abandon the system.
Councillor Kurt Pritz said:
I think there’s a material risk that the costs are going to be substantially greater than what’s forecast and the payback and uptake is going to be substantially lower… I think there’s reputational risk to ICANN. We could build this very expensive tool and have little or no uptake, or we could build a tool that becomes obsolete before it becomes operational.
The low-end estimates of 25,000 users asking for 100,000 records may be “overly optimistic”, Pritz said, given that only 1,500 people are currently asking registrars for unredacted Whois records. Similarly, there are only 25,000 requests per year right now, some way off the 100,000 low-end ODA assumption, he said.
If SSAD doesn’t even hit its low-end usage targets, the fee for a single Whois query could be even larger than the $40 high-end ODA prediction, creating a vicious cycle in which usage drops further, further increasing fees.
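As a back-of-the-envelope illustration of that cycle, here’s a crude sketch; this is not the ODA’s actual fee model, it simply divides the ODA’s published cost and volume figures to show the direction of the effect.

```python
# Crude cost-recovery arithmetic: if requestor fees must cover annual operating
# costs, the per-query fee rises as query volume falls. Cost and volume figures
# are the ODA's published ranges; the fees are illustrative derived values only.
def per_query_fee(annual_cost: float, annual_queries: int) -> float:
    """Fee each request would need to carry for fees to cover running costs."""
    return annual_cost / annual_queries

ODA_COSTS = {"low": 14_000_000, "high": 107_000_000}  # USD per year
ODA_QUERIES = {"low": 100_000, "high": 12_000_000}    # requests per year

# High adoption spreads the fixed costs thinly; low adoption concentrates them.
print(per_query_fee(ODA_COSTS["high"], ODA_QUERIES["high"]))  # roughly $9 per query
print(per_query_fee(ODA_COSTS["low"], ODA_QUERIES["low"]))    # $140 per query
print(per_query_fee(ODA_COSTS["low"], 25_000))                # $560 at today's ~25,000 requests a year
```

Each user who walks away makes the system more expensive for everyone who stays, which is the dynamic Pritz and others were warning about.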
SSAD doesn’t guarantee people requesting Whois data actually get it, and bypassing SSAD entirely and requesting private data directly from a registrar would still be an option.
There seems to be a consensus now that GDPR always requires registries and registrars to ultimately make the decision as to whether to release private data, and there’s nothing ICANN can do about it, whether with SSAD or anything else.
CEO Göran Marby jokingly said he’s thinking about getting a T-shirt printed that says “GDPR was not my fault”.
“The consequences of GDPR on the whole system is not something that ICANN can fix, that’s something for the legislative, European Commission and other ones to fix,” he said. “We can’t fix the law.”
One idea to rescue SSAD, which has been floated before and was raised again last night, is to cut away the accreditation component of the system, which Marby reckons accounts for about two thirds of the costs, and basically turn SSAD into a simplified, centralized “ticketing system” (ironically, that’s the term already used derisively to describe it) for handling data requests.
But the opposing view — that the accreditation component is actually the most important part of bringing Whois into GDPR compliance — was also put forward.
Last night’s Zoom call barely moved the conversation forward, perhaps not surprisingly given the limited amount of time both sides had to digest the ODA, but it seems there may be future conversations along the same lines.
ICANN’s board, which was in “listening mode” and therefore pretty quiet last night, is due to consider the SSAD recommendations, in light of the ODA, at some point in February.
I would be absolutely flabbergasted if they were approved in full. I think it’s far more likely that the policy will be thrown back to the GNSO for additional work to make it more palatable.
No SSAD before 2028? ICANN publishes its brutal review of Whois policy
Emergency measures introduced by ICANN to reform Whois in light of new privacy laws could wind up taking a full decade, or even longer, to bear dead-on-the-vine fruit.
That’s arguably the humiliating key takeaway from ICANN’s review of community-created policy recommendations to create a System for Standardized Access and Disclosure (SSAD), published this evening.
The Org has released its Operational Design Assessment (pdf) of SSAD, the first-ever ODA, almost nine months after the Operational Design Phase was launched last April.
It’s a 122-page document, about half of which is appendices, that goes into some detail about how SSAD and its myriad components would be built and by whom, how long it would take and how much it would cost.
It’s going to take a while for the community (and me) to digest, and while it generally veers away from editorializing it does gift opponents of SSAD (which may include ICANN itself) with plenty of ammunition, in the form of enumerated risk factors and generally impenetrable descriptions of complex systems, to strangle the project in the crib.
Today I’m just going to look at the timing.
Regular DI readers will find little to surprise them among the headline cost and timeline predictions — they’ve been heavily teased by ICANN in webinars for over a month — but the ODA goes into a much more detailed breakdown.
SSAD could cost as much as $27 million to build and over $100 million a year to operate, depending on adoption, the ODA says. We knew this already.
But the ODA contains a more detailed breakdown of the timeline to launch, and it reveals that SSAD, at the most-optimistic projections, would be unlikely to see the light of day until 2028.
That’s a decade after the European Union introduced the GDPR privacy law in May 2018.
Simply stated, the GDPR told registries and registrars that the days of unfettered access to Whois records were over — the records contain personal information that should be treated with respect. Abusers could be fined big.
ICANN had been taken off-guard by the law. GDPR wasn’t really designed for Whois and ICANN had not been consulted during its drafting. The Org started to plan for its impact on Whois barely a year before it became effective.
It used the unprecedented top-down emergency measure of the Temporary Specification to force contracted parties to start to redact Whois data, and the GNSO Council approved an equally unprecedented Expedited Policy Development Process, so the community could create some bottom-up policy.
The EPDP was essentially tasked with creating a way for the people whose jobs the old Whois made easier, such as intellectual property lawyers and the police, to request access to the now-private personal data.
It came up with SSAD, which would be a system where approved, accredited users could funnel their data requests through a centralized gateway and have some measure of assurance that they would at least be looked at in a standardized way.
But, considering the fact that they would not be guaranteed to have their requests approved, the system would be wildly complex, potentially very expensive, and easily circumvented, the ODP found.
It’s so complex that ICANN reckons it will take between 31.5 and 42 months for an outsourced vendor to build, and that’s after the Org has spent two years on its Implementation Review Team activities.
That’s up to almost six years from the moment ICANN’s board of directors approves the GNSO’s SSAD recommendations. That could come as early as next month (but as I reported earlier today, that seems increasingly unlikely).
The ODA points out that this timetable could be extended due to factors such as new legislation being introduced around the world that would affect the underlying privacy assumptions with which SSAD was conceived.
And this is an “expedited” process, remember?
Ten years ago, under different management and a different set of bylaws, ICANN published some research into the average duration of a Policy Development Process.
The average PDP took 620 days back then, from the GNSO Council kicking off the process to the ICANN board voting to approve or reject the policy. I compared it to an elephant pregnancy, the longest gestation period of all the mammals, to emphasize how slow ICANN had become.
Slow-forward to today, when the “expedited” PDP leading to SSAD has so far lasted 1,059 days, if we’re counting from when Phase 2 began in March 2019. It’s taken 1,287 days if we’re being less generous and counting from the original EPDP kicking off.
Nelly could have squeezed out two ankle-nibblers in that time. Two little elephants, one of which would most assuredly be white.
ICANN board not happy with $100 million Whois reform proposals
ICANN’s board of directors has given its clearest indication yet that it’s likely to shoot down community proposals for a new system for handling requests for private Whois data.
Referring to the proposed System for Standardized Access and Disclosure, ICANN chair Maarten Botterman said “the Board has indicated it may not be able to support the SSAD recommendations as a whole”.
In a letter (pdf) to the GNSO Council last night, Botterman wrote:
the complexity and resources required to implement all or some of the recommendations may outweigh the benefits of an SSAD, and thus may not be in the best interests of ICANN nor the ICANN community.
The SSAD would be a centralized way for accredited users such as trademark lawyers, security researchers and law enforcement officers to request access to Whois data that is currently redacted due to privacy laws such as GDPR.
The system was the key recommendation of a GNSO Expedited Policy Development Process working group, but an ICANN staff analysis last year, the Operational Design Phase, concluded that it could be incredibly expensive to build and operate while not providing the functionality the trademark lawyers et al require of it.
ICANN was unable to predict with any accuracy how many people would likely use SSAD. It will this week present its final ODP findings, estimating running costs of between $14 million and $107 million per year and a user base of 25,000 to three million.
At the same time, ICANN has pointed out that its own policies cannot overrule GDPR. Registries and registrars still would bear the legal responsibility to decide whether to supply private data to requestors, and requestors could go to them directly to bypass the cost of SSAD altogether. Botterman wrote:
This significant investment in time and resources would not fundamentally change what many in the community see as the underlying problem with the current process for requesting non-public gTLD registration data: There is no guarantee that SSAD users would receive the registration data they request via this system.
ICANN management and board seem to be teasing the GNSO towards revising and scaling back its recommendations to make SSAD simpler and less costly, perhaps by eliminating some of its more expensive elements.
This moves ICANN into the perennially tricky territory of opening itself up to allegations of top-down policy-making.
Botterman wrote:
Previously, the Board highlighted its perspective on the importance of a single, unified model to ensure a common framework for requesting non-public gTLD registration data. However, in light of what we’ve learned to date from the ODP, the Board has indicated it may not be able to support the SSAD recommendations as a whole as envisioned by the EPDP. The Board is eager to discuss next steps with the Council, as well as possible alternatives to design a system that meets the benefits envisioned by the EPDP
The board wants to know whether the GNSO Council shares its concerns. The two parties will meet via teleconference on Thursday to discuss the matter. The ODP’s final report may be published before then.
ICANN splits $9 million new gTLD ODP into nine tracks
ICANN has added a little more detail to its plans for the Operational Design Phase for the next round of the new gTLD program.
VP and ODP manager Karen Lentz last night blogged that the project is being split into nine work tracks, each addressing a different aspect of the work.
She also clarified that the ODP officially kicked off January 3, meaning the deadline for completion, barring unforeseen issues, is November 3. The specific dates hadn’t been clear in previous communications.
The nine work tracks are “Project Governance”, “Policy Development and Implementation Materials”, “Operational Readiness”, “Systems and Tools”, “Vendors”, “Communications and Outreach”, “Resources, Staffing, and Logistics”, “Finance”, and “Overarching”.
Thankfully, ICANN has not created nine new acronyms to keep track of. Yet.
Pro-new-gTLD community members observing how ICANN’s first ODP, which addressed Whois reform, seemed to result in ICANN attempting to kill off community recommendations may be worried by how Lentz described the new ODP:
The purpose of this ODP, which began on 3 January, is to inform the ICANN Board’s determination on whether the recommendations are in the best interests of ICANN and the community.
I’d be hesitant to read too much into this, but it’s one of the clearest public indications yet that subsequent application rounds are not necessarily a fait accompli — the ICANN board could still decide to force the community to go back to the drawing board if it decides the current recommendations are harmful or too expensive.
I don’t think that’s a likely outcome, but the thought that it was a possibility hadn’t seriously crossed my mind until quite recently.
Lentz also refers to “the work required to prepare for the next round and subsequent rounds”, which implies ICANN is still working on the assumption that the new gTLD program will go ahead.
The ICANN board has given the Org 10 months and a $9 million budget, paid out of 2012-round application fee leftovers, to complete the ODP. The output will be an Operational Design Assessment, likely to be an enormous document, that the board will consider, probably in the first half of next year, before implementation begins.
Crain named ICANN CTO
ICANN veteran John Crain has been named the Org’s new chief technology officer.
He’s replacing David Conrad, who he’s been subbing in for since Conrad left at the end of September.
Crain has been with ICANN for 20 years and was most recently chief security, stability, and resiliency officer.