Explanation on the DID Methods in the registries' document #83
Let me try to be more specific.
I hope this helps. As an aside, I wonder whether the registration process for DID methods should not be more demanding. I just glanced into some descriptions and, I must admit, I simply do not see what makes them interesting, useful, why they are there. In some cases the only information I really get is "it is a DID implementation on the XYZ blockchain". This is not very helpful. I believe we should require a 1-2 paragraph description for each of the methods that would describe why that DID method is interesting, unique in some way, etc.
@iherman I have tried to address some of your concerns here. https://github.com/w3c/did-spec-registries/pull/115/files
Ack.
The PR that addressed this issue was closed. Still need a PR for this. @peacekeeper will take a look.
This was not resolved; the PR noted above was never merged. Some text might have made it into DID Core to address the issue.
The issue was discussed in a meeting on 2021-08-03
5.1. Explanation on the DID Methods in the registries' document (issue did-spec-registries#83)
See github issue did-spec-registries#83.
Brent Zundel: explanation on did methods in registries document, raised by ivan
Ivan Herman: more than a year ago
Manu Sporny: the PR, there was a massive permathread in it and it got closed, never went in
Ivan Herman: vaguely remember when I raised it, registration of terms and methods, and they looked different from one another, but don't know what happened since then
Brent Zundel: The PR that tried to address the issue was closed rather than merged
Manu Sporny: this was orie and markus going back and forth over normative language in did core...
Markus Sabadello: I can't check right now but will look later
I think some of this has been resolved (e.g. the CBOR column has been removed, and there is also some language now on how DID methods will get accepted into the table). But some other issues here are probably still open, e.g. about the structure and contents of the table. A few weeks ago there was an idea that the "Status" field in the table could contain the value "tested" or "implemented", if an implementation of the DID method was submitted to the test suite. Probably need to discuss this topic again on a WG call with @iherman who raised this issue, to see how much of it still needs to be addressed.
Related issue is #174, which also discusses tracking contact information for DID methods and other additions to the registry.
See also the suggestion I just made in #265.
The issue was discussed in a meeting on 2021-09-14
7. Explanation on the DID Methods in the registries' document (issue did-spec-registries#83)
See github issue did-spec-registries#83.
Brent Zundel: issue has been around a while. DID method section has a status column. 99% says provisional for status.
Drummond Reed: Added a reference to where I had put another comment. Original suggestion was to create a new table that lists methods whose authors have upgraded their methods to match the Recommendation.
Joe Andrieu: worried about how you proposed it. Worried about new name squatting.
Ivan Herman: since we want this to become a formal registry, the doc itself has a registration process and that process says nothing about registering a new method. There is just a table, but no registration process. We need a clear policy for how things get into the table.
Brent Zundel: we should use provisional (written before there was a spec), v1.0 compliant (submitted after the Recommendation), and deactivated (for no longer in use).
Drummond Reed: if we keep the current table, prefer what justin typed above
Brent Zundel: next step should be a pull request proposing new language
Joe Andrieu: whoever the owner is of an entry, they should be able to self-assert which version of spec they claim to be compliant with
Brent Zundel: any volunteers to write a PR?
Brent Zundel: reminder for Imp. Guide PR review and request for other PRs; we will keep you informed of the progress of the spec
My read of the end of the conversation was that there was general approval to add a (blank) column to the table of DID Methods which would link to their updated-for-1.0 DID Method specs; that the (generally) "Status: PROVISIONAL" column should be removed; that old links should be labeled as pre-1.0 versions; and that, since DID Method authors should self-certify, the registry should not attempt to declare their status. I will submit a pull request with these changes and briefly describe the change above the table.
As part of this work, I've reviewed all the existing DID Method Specifications and noticed that several do not resolve to existing web pages. I believe that #83, as it's currently scoped, does not cover editorial judgement on changing the status of these DID Methods, but I will point out that we need a process that enforces certain minimum standards.
The following changes are for [issue 83](/w3c/issues/83#issuecomment-924061109):
- add a (blank) column to the table of DID Methods which would link to their updated-for-1.0 DID Method spec
- remove the "Status: PROVISIONAL" column
- label old links as pre-1.0 versions
- add a notes column for author-submitted status changes
- rename the WITHDRAWN status to DEPRECATED, per spec
See pull request #341.
Per a request from @OR13 in PR #341, and in light of the feedback received in the formal objections to the DID 1.0 spec, for the third time I will put forth the proposal that we split the DID method registry table into two tables:
Proposed rules for these two tables
Rationale
Besides giving greater visibility to v1.0-compliant DID method specifications, the two-table approach would enable us to put an explanatory paragraph before each table that should reduce confusion, not increase it. The para before the first table can explain that these are DID method specifications submitted AFTER the DID 1.0 spec reached PR and that meet all the requirements of a compliant DID method. The para before the second table can explain that these were all DID method specifications submitted prior to completion of the DID 1.0 spec, and thus are all provisional until they submit a v1.0-compliant DID method specification. This way it becomes much easier for implementers to "separate the wheat from the chaff".
I'd be happy to implement this in the existing pull request. Any objections?
@rxgrant Not from me! I suggest we see if there are any objections or modifications on tomorrow's DID WG call. Then let's go for it.
The issue was discussed in a meeting on 2021-10-19
4.1. change registry columns per issue #83 (pr did-spec-registries#341)
See github pull request did-spec-registries#341. See github issue did-spec-registries#83.
Daniel Burnett: framing questions: what's necessary to continue the work? can everything else work on github as issues?
Orie Steele: i want to thank ryan for an issue-first, PR-second approach that resolves many registry problems
Drummond Reed: i think this PR is urgent vis-a-vis the formal objections!
Manu Sporny: since we're on that issue (pr 341), my only suggestion is to replace "non-compliant" with "provisional"
Manu Sporny: replace "non-compliant" with "provisional" in the PR, i mean
Ryan Grant: I was trolling, it's true, or put a little fire under them. I would support drummond's solution and I think it addresses manu's objection
Manu Sporny: maybe we are not thinking enough about ungenerous readings-- we don't want people marked as "noncompliant" for having been compliant and having passed a test suite before breaking changes
Manu Sporny: and we also don't want to hand a "gotcha" opportunity to those who will comb through our github looking for evidence that we aren't running a proper WG here
Drummond Reed: I put a link to a sidestepping solution-- a 1.0 compliant table distinct from the existing table that includes all the provisionals as-is
Drummond Reed: I will work with Ryan on doing this in PRs
Ryan Grant: First of all, Manu, thanks for correcting the record on the amount of interop that these specs have already achieved
@rxgrant We didn't get any objections in the DID WG meeting today, but we didn't get any strong reactions in general. So here's my proposal: if you're willing to update your PR, let me know if you want me to draft text for the intro paragraphs for each of the two tables. Or alternately just go ahead and update your PR and I can comment on that. Whichever you prefer.
I agree in principle with Drummond's proposal and I think it gets us mostly there. Some further refinements I'd suggest:
So at a high level I'm a major +1 to this proposal (and have been for a while now - thanks for re-proposing it for the 3rd time @talltree) and think that, with a few more specifics in a follow-up PR to flesh out the details of points 2a through 2c of this registry, we can make this work. Would others here prefer I open a separate issue to discuss the requirements, or do we want to consider that here if people agree this is necessary?
Here are the methods that IMHO don't have a reasonable spec at a reasonable URL that, at minimum, addresses how to read a DID Document from the VDR:
- some variant of a 404
- didn't bother posting a DID Method spec that describes how to read a DID Document from the VDR
- posted a DID Method specification that takes a form too confusing for the author of this comment to figure out how to retrieve the DID Document
Earlier I made a comment about a DID Method with a very short name. But they're building stuff, so the comment wasn't appropriate.
I'm open on this - possibly one PR to get the JSON format established, and then a second that updates the respec from that as part of the build process on commit. @OR13 any thoughts?
If we're going the JSON file route, please don't dump everything into a single JSON file (we repeatedly have merge conflicts or have to teach people how to rebase when we do that). Rather, each DID Method gets its own JSON file; put 'em all in a subdirectory, please.
@mprorock yep, I would build a directory of json files, and a dynamic index built from parsing it.
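For concreteness, a per-method registration file in that subdirectory might look something like the sketch below. The file name, field names, and values are purely illustrative; no schema had been agreed at this point in the thread, and the "links to things" fields anticipate suggestions made later in the discussion.

```json
{
  "name": "example",
  "status": "provisional",
  "specification": "https://example.com/did-method-example/",
  "contactName": "Example Org",
  "contactUrl": "https://github.com/example-org",
  "implementation": "https://github.com/example-org/did-example",
  "testSuiteReport": null,
  "resolver": null
}
```

Because each method would live in its own file (e.g. a hypothetical didMethods/example.json), a new registration becomes a single-file pull request that cannot collide with anyone else's, which addresses the merge-conflict concern above.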
I can implement either the div elements or a list of (over one hundred) subdirectories. However, I am worried about obfuscating the build process and thus requiring that people learn ReSpec build intricacies in order to keep this running. I think we'd need excellent documentation on either process. Who's willing to write that up? Which one is simpler, yet will still result in non-conflicting merges?
I don't know what this means or how to write my code in order to pass this test.
I meant 112 JSON documents in a single subdirectory labeled "didMethods" or something like that. :) There is no build process w/ ReSpec, but someone will have to extend ReSpec to pull all 112 files in at page load time and translate that to HTML (which is what ReSpec does in realtime). Exceedingly bad examples on how to do that here: https://github.com/w3c-ccg/vc-api/blob/main/common.js#L404-L422 and invoked here: https://github.com/w3c-ccg/vc-api/blob/main/index.html#L70 with target markup here: https://github.com/w3c-ccg/vc-api/blob/main/index.html#L343-L344 That is almost certainly a hacky way to do it, but ya gotta start somewhere, right?! :) I agree that we shouldn't need an external build process to do this (or we've failed).
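A loader in the spirit of the vc-api example above (but not copied from it) might look roughly like the following sketch. The index file, table id, and entry fields are all hypothetical, and the actual hook into ReSpec's processing would still need to be worked out.

```ts
// Hypothetical sketch: fetch every per-method JSON file named in a
// (hypothetical) didMethods/index.json and append one table row per method
// to a placeholder table at page load time.
async function buildDidMethodTable(): Promise<void> {
  const files: string[] = await (await fetch("didMethods/index.json")).json();
  const tbody = document.querySelector("#did-method-table tbody");
  if (!tbody) return;

  const entries = await Promise.all(
    files.map(async (file) => (await fetch(`didMethods/${file}`)).json())
  );
  for (const entry of entries) {
    const row = document.createElement("tr");
    const cell = document.createElement("td");
    const link = document.createElement("a");
    link.href = entry.specification; // link straight to the method spec
    link.textContent = `did:${entry.name}`;
    cell.appendChild(link);
    row.appendChild(cell);
    tbody.appendChild(row);
  }
}
```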
I can't help with the coding process, but I'm assuming that if we go this way (which again I favor), we will still need to publish in the DID Methods section of the document a description of the registration process and the requirements that have to be met, yes? If so, I'm willing to help work on that. But it sounds like we need a reset on what the registration properties are and what is required for each property.
I hate to be the voice of dissent on details that affect the job of did method reviewers, especially when there's shared enthusiasm for a data-driven approach. Bear with me, because I'm airing some controversial opinions here, but I think they need to be said.

Right now we've got a lot of dog**** methods that are accepted here because there's little measure of quality being set. My hope in setting some ground rules is to thread a fine line between the IANA processes I've encountered, which feel like a wizard's ritual that only the blessed can perform, and the open-floodgates approach that we have today. The fact of the matter is that expert review takes time and includes implicit bias, but what we have today and what's being proposed with an automated approach isn't working either, because we're left with a lot of low-quality, half-baked stuff that assumes tons of tribal knowledge about the inner workings of each method in order to implement.

So, while I'm absolutely empathetic to the reality that any form of expert review flies against the ethos of decentralization and much of what this work stands for, and requires a lot of human effort, I view this as a necessary tradeoff to create a valuable ecosystem built on DIDs. In fact, I see it as an opportunity for us to raise the bar on what quality means for people authoring DID Methods. Can we please consider the long-term viability here by being transparent about what we think good did methods look like, and place at least some bar of quality on what's necessary to register a did method? After all, a did doesn't suddenly become non-compliant just because it's not blessed by the registry. It's just a did that no one knows how to interact with, which is effectively the same as a did method that's published but that nobody understands how to implement interoperably.
Continuous integration and test suites could prevent the politics while retaining the quality. I know how to do that for implementation libraries, but not for the specifications themselves.
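As a sketch of what such a check could look like for the registry itself (the file layout and the list of required fields are assumptions, not agreed policy), a CI script might validate every registration file and fail the build on missing fields or dead specification links:

```ts
// Hypothetical CI gate over the didMethods/ layout sketched above.
// Assumes Node 18+ (built-in fetch) running as an ES module.
import { readdirSync, readFileSync } from "node:fs";
import { join } from "node:path";

const REQUIRED_FIELDS = ["name", "specification", "contactUrl"];
let failures = 0;

for (const file of readdirSync("didMethods").filter((f) => f.endsWith(".json"))) {
  const entry = JSON.parse(readFileSync(join("didMethods", file), "utf8"));

  for (const field of REQUIRED_FIELDS) {
    if (!entry[field]) {
      console.error(`${file}: missing required field "${field}"`);
      failures++;
    }
  }

  // A specification link that 404s is exactly the problem reported earlier
  // in this thread.
  const res = await fetch(entry.specification).catch(() => null);
  if (res === null || !res.ok) {
    console.error(`${file}: specification URL does not resolve`);
    failures++;
  }
}

process.exit(failures > 0 ? 1 : 0);
```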
@kdenhartog wrote:
😆 ... 🤔.oO(Rename the registry to "Dog**** DID Method Registry"?) I sympathize with your viewpoint @kdenhartog, and I think much of what you wrote is valid. I also agree with @rxgrant -- the more we can automate, the better off we'll be. I have ideas on how we could do that, but it's all work that people have to do (write DID Method spec parsers that check for DID Core Method requirements -- that's a 2-4 month project in and of itself). All that said, the issues remain:
There is an analogy here that I think might help us, and that is the "5-star Linked Data" approach. In essence, it suggested a 5-star deployment scheme for Linked Data. The 5 Star Linked Data system is cumulative: each additional star presumes the data meets the criteria of the previous step(s). Before it, people had heated debates about what is and isn't Linked Data, and those debates often excluded new communities. So, instead of drawing a line in the sand, what was proposed was a gradual entry into the ecosystem.

I think we have the same sort of thing here. For example... first you publish a provisional spec, then you implement it, then you demonstrate that your implementation's output is conformant to DID Core v1.0, then you stand up a test net, then you provide a resolver for others to use, then you go into "production", then you provide multiple implementations and perhaps fully define your specification, and then you have a test suite demonstrating multiple implementations interop'ing, and then you take it through a global standardization process with consensus and expert review.

We want people registering at the provisional spec phase... and what comes next might not happen in the order I mentioned above... but, IMHO, we do want to expose that in DID Spec Registries and perhaps use it as sorting/bucketing criteria. When you're trying to build an open and inclusive community, it helps to have a gradual onboarding process that's inclusive, instead of setting up fences to keep people out. Food for thought...
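Purely to make the idea concrete (the stage names and their ordering below are illustrative, and the progression is admittedly not strictly linear, as noted further down), the gradual onboarding could be encoded as data rather than as a gate:

```ts
// Illustrative only: one possible encoding of the progression sketched above.
// A method's "level" is simply how far its submitted evidence links reach;
// nothing is rejected for being at level 0.
const STAGES = [
  "provisionalSpec",
  "implementation",
  "conformantOutput",
  "testNet",
  "publicResolver",
  "production",
  "multiImplInterop",
  "standardized",
] as const;

type Stage = (typeof STAGES)[number];

// Evidence maps each reached stage to a supporting link.
function level(evidence: Partial<Record<Stage, string>>): number {
  let reached = 0;
  for (const stage of STAGES) {
    if (!evidence[stage]) break; // stop at the first missing link
    reached++;
  }
  return reached;
}

// e.g. level({ provisionalSpec: "https://...", implementation: "https://..." })
// returns 2, which could drive sorting/bucketing in the registry table.
```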
@msporny I find your "5-star Linked Data" approach to be very compelling for all the reasons you mentioned. I do believe it can address @kdenhartog's concerns about the quality of the entries by making it relatively objective how each additional star is achieved. (If someone is truly trying to game the system, that should be pretty easy for the editors to detect.) Can you say a little more about how you'd recommend structuring the five stars? And what specifically we'd need to do to put that approach into place for the registry?
I have no firm ideas there other than "people seem to go through a basic progression to get to 'five stars'"? Maybe... I don't know if they do... the list I provided above kinda falls apart toward the end wrt. linear progression. So we might skip the stars thing? Don't know, haven't thought about it enough yet.
I think the JSON files per DID method, with some variation of the contents mprorock and I suggested above, give us that general structure.
I think in general where you're coming from is a safe bet for the maintainers of this registry over time, and I get where you're coming from in not wanting to turn this into an overtly political process that raises more headaches than it's worth. Additionally, I'm fully supportive of the idea of making this as automated as possible with very strict and transparent rules. There's a balance here that needs to be considered, and at the very least getting the automated infrastructure in place is a good first step.

I'm hesitant to say that a big-tent approach like what's done for the MIME types registry is going to end up being what we need here, when the bare minimum for interoperability of DIDs and DID Documents is far more involved. This is where I think having the standard developed through a standards org is going to be an important factor, because that's the step where rigor can be applied without placing the burden on the editors here.

So what if we stick with a machine-readable approach for the initial phases, which allows for early registration and a good open-tent approach, but also allow ourselves to lean on standards bodies with good processes in place to define what an "approved open standard" means? For example, we can say that in order for a standard to be considered approved, it needs to be approved by a predetermined list of SDOs which we believe have the practices in place to evaluate the method, in order to elevate those methods that do achieve that higher bar with the "approved open standard" status.
Based on the uncertainty regarding which conflicting TAG/EWP items excuse formal objections in this standards org, I am certain that no standards org requirement for any star/badge/level is appropriate when dealing with decentralized protocols that disrupt traditional institutions. (Proof-of-work has become a powerful shibboleth.) The value-stack merge-conflict implications of DID Methods are too great for Internet engineers to wield their votes objectively.
No. For the reasons given above. I further believe that if you did force this requirement, it would move the fight to creating standards organizations that do whatever it takes to get approved by any criteria listed here, but either disallow any criteria in their voting that could be a shibboleth, or carefully prevent infiltration by individuals who respond in an oppositional way to the shibboleth. All you would cause is delay and cost in hacking the process to obsolete the political aspects. It would be better to let marketplace fit sort the technologies.
This seems like a bit of a strong allergic reaction to the current problems we're facing. While this may be true in an SDO like W3C, I can't say that we'd encounter the same issue in IETF, or if we wanted to consider something like DIF an SDO (I don't believe this opinion is shared by all within the community), which is far more friendly to the work being done by us in this space. The point being: as long as we're transparent about the SDOs we believe are acceptable, to prevent rug-pulling on a controversial did method, I think we can circumvent the concerns you raise while still maintaining the high level of rigor that's expected from a well-baked standard.
I'm a bit less concerned about this. While I expect some political maneuvering to occur, I don't think it will be long-standing, and I generally believe that the issues raised during these conversations should be considered legitimate and useful to the development of the technology. If this did become a legitimate concern that hurts the legitimacy of any particular did method, I think it would then be worth evaluating the effects of the process we've set and considering modifying it to mitigate these concerns.

The issue I take with the "let the marketplace decide" philosophy is that, for the most part, it hasn't been effective over the years that the marketplace has been working with DIDs. Instead, what I've more commonly seen is that the did methods that get chosen are not chosen based on their technical merits but rather on their marketing, and the gaps get filled via tribal knowledge.

Take for example did:sov, a method that has been around for a long time. It's been very successful in garnering adoption by way of promoting a particular implementation (indy-sdk), which gets reused by the majority of implementations that are producing, consuming, or resolving DID Documents from an Indy ledger. There's been legitimate and useful effort to build libraries which help to circumvent this as well as other concerns, but for the most part, if you want to use did:sov you're left with a few libraries to achieve this, since a fair amount of tribal knowledge is necessary in order to implement the method. That community has made great strides to place a greater emphasis on a standard rather than a particular implementation by starting work on did:indy, which goes leaps and bounds beyond where things were a few years ago. That's been useful for the legitimacy of the method, and that usefulness shouldn't be understated, but I don't believe the marketplace needed to select that method, since there was a good-enough implementation available to make it work.

So why is this a concern? The reason I'm raising it is that building on one or a few existing implementations will get methods over the adoption barrier, but I don't believe the end state of what makes a good method should be adoption alone. I believe that in order to build a robust method, a well-documented specification is necessary so that new implementers can also work with the method.

In a slightly more dystopian what-if scenario, I could see the day where a wildly successful method, deployed by a large corporation overnight in order to achieve that success, could be abused to lock in licensing fees for did resolvers, for example. To play this scenario out a bit: did:example is deployed to a billion users overnight (the users aren't even aware they're using DIDs), and this method now becomes the most used method. Then, since it is built on a single implementation and deployed by a single corporation, every implementer in the ecosystem realizes that in order to resolve the did document, they are expected to use a library authored by the corporation, which has patented the method, requires any developer who wishes to use the library to agree to their license, and collects royalty fees for doing so.
Now I'd hope that many people would push back and choose not to support that method, but inevitably some will, and this whole concern could have been avoided by us choosing to say that good methods require a standard, not just adoption. Scenarios like that are the reason why I'm advocating a standards-based approach to this problem rather than a market-based approach. I think with a market-based approach we're likely to end up making much of the work here irrelevant, even though it's well-designed, robust technology, because the market sided with the method that was well marketed, not the one that solved the legitimate concerns of users.
@kdenhartog Isn't a "standards-based approach" a subset of a "market-based approach"? In other words, nowadays most standards only happen if there's enough market demand to see them all the way through the process. From a practical standpoint, don't we have to treat a market-based approach as the baseline? With DID 1.0 as an open standard, there's nothing we can do to prevent it. So IMHO the only goal of the DID method registry is to surface as much helpful information as we can about DID methods that choose to be registered and which meet our baseline registration criteria.
I believe we should treat comment threads the same way we handle email threads at W3C in this respect. The overall policy for those is to be extremely reluctant to remove anything from the archives (barring very exceptional cases); the same should be true here imho.
I have done something similar in the EPUB testing repository: https://github.com/w3c/epub-tests/. The EPUB tests, as well as the implementation reports, are submitted in JSON. I have created a TypeScript process that gathers all the information and generates a bunch of HTML tables which are then imported by a respec skeleton. I then defined a github action to run that script whenever there is a change. It is doable. (B.t.w., I actually run respec from the action script, too, because the processing of respec, involving lots of large tables, may be a bit slow when done at run-time. But that is a detail.)
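A workflow along the lines Ivan describes might look roughly like this; the paths, script names, and commit step are assumptions patterned on the epub-tests setup rather than taken from it:

```yaml
# Hypothetical sketch: regenerate the registry tables whenever a per-method
# JSON file changes. All paths and script names are illustrative.
name: Rebuild DID method tables
on:
  push:
    branches: [main]
    paths: ["didMethods/*.json"]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: "18"
      - run: npm ci
      # Gather the JSON registrations and regenerate the HTML tables that the
      # respec skeleton includes.
      - run: npx ts-node scripts/generate-tables.ts
      - name: Commit the regenerated tables
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add -A
          git commit -m "Regenerate DID method tables" || echo "nothing to commit"
          git push
```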
Note that the comment referred to by @kdenhartog has in fact been deleted (@kdenhartog was not referring to @rxgrant's #83 (comment)). I think it likely this was done by GitHub admins, as they have tools for reporting such content (look under the three dots at upper-right of any comment) and/or users (look to the bottom of the left-hand column of any GitHub user profile page), which I had used to report that comment before @kdenhartog added his. In general, I concur with @iherman that deletion should be extremely rare, but that can only be achieved if repo admins or the like can easily hide and unhide such apparently-noise content to minimize its distraction effects (hiding should still give any reader the option to reveal the content for themselves at any time) ... and if the GitHub tools can be disabled, such that reports like mine don't lead to deletion of content the repo admin just wants to hide.
@kdenhartog @iherman @TallTed Today, I no longer see the comment at all and am not sure why that is. Deleting a comment should create an event in the timeline saying that the comment was deleted and by whom.
@brentzundel -- Might be worth some followup with the GitHub powers-that-be? I'm betting it's their tooling and/or intervention that deleted it. Question is whether that should leave no trace, as now, or should leave similar evidence as would be there if one of us GitHub users (at whatever place in the repo's privilege hierarchy) deleted it. My understanding is that GitHub itself is Git-based, so it should be just another commit in the stack, so should be displayable....
As mentioned in today's WG call, I see a registry column that could announce any standardization process underway as unobjectionable, so long as an answer is not required for a DID method to be listed. I agree with @OR13's point that requesting the data is an excellent way for Rubric evaluation authors, and end users, to get more informed about the DID Method.
The more I think about this, the more opposed to embedding value judgments in the did spec registries I am... including "v1 conformance" ... since we can't really confirm this, it seems dangerous to say anything about a registered method other than linking to its spec, and possibly the rubric entry for it.... I think we should keep all forms of evaluation (including recommended status or conformance status) to the did rubric.... and keep the did spec registries a pretty boring list of URLs.
The automated check I had in mind had to do with whether or not there was an entry for the DID Method in the DID Test Suite report. The individual would submit a link as a part of the registration process... so perhaps a better term is "implemented" or "test report exists" or something more objective. I'd like us to not get too wrapped up in what we call the objective measure just yet (as we can always change that), and rather, focus on what the objective measures are (which in my mind, are "links to things"). For example: link to the specification, link to a code repository that implements the method, link to the DID Method results in the DID Test Suite, link to live resolver data (to demonstrate that a live network exists for the DID Method) ... and so on.
I am getting more comfortable with @msporny's suggestion that a DID method registration consist entirely of a filled-out JSON template of "links to things" with two caveats:
I believe this is how we keep a baseline of quality in the DID method registry (albeit a pretty low baseline).
Position change from me incoming: I've been watching some of the discussions in the did-wg-charter on what a "quality" did method is and the effects of picking and choosing winners via the standardization process. It's become clear to me that while standardization can be a clear way to identify quality methods, it should not be the only one, because it's an inherently biased process. It's also likely that standardization will be used to promote or tarnish the brand of a particular method for the majority of people who want to rely on dids but not join us in the mud to debate and critically assess. Instead, I suspect many people who don't want to deeply evaluate the merits of many did methods will defer to the authority of people they deem experts, and that effectively means looking at the registry to decide which method should be chosen.

I consider the tradeoffs here likely to be more harmful in the long term than the short-term problems I'm faced with when trying to evaluate whether a did method is something I should advocate implementation support for. Given the way I'm watching this play out, I'm changing my position and consider it acceptable to go ahead with the limited number of status categories that can be automated for now, until we can find suitable ways to objectively indicate the quality of a method without intentionally promoting or tarnishing a method's brand.
The issue was discussed in a meeting on 2021-10-28
4. DID Method Registration.
See github issue did-spec-registries#83.
Brent Zundel: What specifically do we have to do to make the registry process as straightforward and clear as possible, both for those who register, and for those who look at it?
Manu Sporny: This concrete proposal could address a number of challenges we have had with DID method registration.
Manu Sporny: What are the "good" ones that have way more implementation experience than e.g. someone's weekend project..
Manu Sporny: If we do that, we can annotate the DID method registry in an objective way.
Kyle Den Hartog: +1 to manu, that's a really good starting point.
Kyle Den Hartog: My frustration is that it doesn't get us the full way there to decide what's a "quality" DID method..
Drummond Reed: Encourage people to contribute to the Github issue..
Drummond Reed: We wanted to be inclusive in the beginning. I've been an advocate of keeping the current table, but start a new table that has a baseline bar. You must revise your specification for all DID Core 1.0 requirements, and you can't handwave at Security+Privacy Considerations..
Drummond Reed: I don't think it's going to be a large burden, but you should only go into the new table if you are 1.0 compliant..
Orie Steele: I agree with some of what drummond said. Other things make me nervous. In Privacy+Security Considerations, there is sometimes only one sentence. Sometimes that's okay, and sometimes it is not..
Manu Sporny: I wanted to respond to Kyle. I'm nodding in agreement with a lot. The original proposal is something we can execute on today..
Manu Sporny: With that proposal we will end up with either the same document or a better one that has labels e.g..
Manu Sporny: We have a concrete proposal in front of us that can give us immediate improvements that we can continue to iterate on.
Ryan Grant: Requiring validation from a standards organization is a difficult bar for some decentralized protocols..
Ryan Grant: Some decentralized protocols are based on VDRs that disrupt traditional institutions..
Eric Siow: This is a question that hopefully can educate me. Is this issue related to one of the objections (diverging instead of converging)?
Eric Siow: If that's the issue, then if the group can define a way to come up with objective methods, that might be helpful..
Kyle Den Hartog: Responding to manu, I wholeheartedly agree that editors should be able to handle this in a programmatic way. Managing this is a tragedy-of-the-commons problem. Leaning on a programmatic approach is better..
Drummond Reed: Wanted to respond to Eric_Siow's really good question. It's easy to look at a registry with ≈114 registered methods and see divergence. I want to make it clear that comparing DID methods to URIs/URNs makes sense in some respects (URI schemes, URN namespaces, DID methods), but they are also different..
Drummond Reed: This design was intentional. Every DID method is an attempt to provide a verifiable identifier using any combination of cryptography and VDR. There are many ways of doing that. We wanted to accelerate and standardize the competition. We built an abstraction layer on top of all of them, that's the primary reason of the specification..
Drummond Reed: We want the market to choose and let the best DID methods rise to the top. This is different from encouraging divergence..
Eric Siow: Can you standardize the ones that have some objective measure (e.g. widely implemented and used), while those that are not widely used could be standardized later?
Drummond Reed: I wanted to talk about standardization. The existence of a standard (effort) associated with a DID method is another one of those objective criteria. I want to see W3C standardize more DID methods, but some DID methods are also going to happen elsewhere..
Drummond Reed: The marketplace can develop DID methods anywhere they want, but we want an objective process for adding them to the registry. If there is a standard, then we will have a way to point to it..
Drummond Reed: Once we improve the quality of the registry, that will help the market make its decisions..
Drummond Reed: There are also many URI schemes..
Manu Sporny: We optimized the registry to learn early about DID methods that are being created. We wanted to know about DID methods that are being created..
Manu Sporny: I want to push back hard against making it harder for people to register DID methods. It should be easy to sort by criteria that matter to people..
Orie Steele: We can't sort on criteria, unless we require people to provide them, which will make it harder for people to register..
Orie Steele: The challenge I see is that the registry is attempting to do more than just being a registry. See JOSE/COSE, which is simple. If we add criteria, it will not just be about adding a link to a spec, it will also be about additional tasks for the editors..
Orie Steele: To some degree, the Rubric has begun to capture some of the things we were also envisioning for the registry..
Orie Steele: It might be better to keep it a very boring registry, and refer to the Rubric for a better way to add comparison, sorting, etc..
Brent Zundel: I think we got some good data points. We seem to have agreement around a desire for registration to remain simple, to benefit those who are making those registrations happen (the editors).
Brent Zundel: Thanks to scribes, thanks to all, see you next week..
At present, §6 in the document is clearly different from the others. The process described in §3 is, I presume, not directly relevant for the methods; the table contains a column ("Status") whose meaning is not clear; and there is no further explanation. It is good to have this registry here, and I know it has a different origin than the other sections, but I believe it needs some urgent editorial care...