Make compensation requests programmatically parsable #32
Comments
I'd like to work on this! [First edit was an exploration into a yaml formatted compensation request. Unfortunately yaml is extremely sensitive to whitespace indenting and therefore considered troublesome for use in user-generated compensation requests.] After discussing with @m52go here is a second attempt at understanding how a parsable compensation request might look, using markdown tables. Additional context could be included in the compensation report for human consumption, outside of the table. The bot would sum up the amounts by team, convert to BSQ at the cycle's rate and report issuance by team as well as a total issuance (the user would claim the total in the Bisq DAO).
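To make the idea concrete, here is a minimal sketch of how a bot might parse a markdown-table compensation request, sum amounts by team, and convert to BSQ at the cycle's rate. The three-column layout (`Title | Team | Amount`), the field names, and the function names are all hypothetical, since the actual template is not shown in this thread.

```python
def parse_compensation_table(body):
    """Extract line-item rows from the first markdown table in a CR body.

    Assumes a hypothetical three-column layout: Title | Team | Amount (USD).
    Header and |---| separator rows are skipped because their amount cell
    is not numeric.
    """
    rows = []
    for line in body.splitlines():
        line = line.strip()
        if not (line.startswith("|") and line.endswith("|")):
            continue
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) != 3:
            continue
        try:
            amount = float(cells[2])
        except ValueError:
            continue  # header or separator row
        rows.append({"title": cells[0], "team": cells[1], "usd": amount})
    return rows

def totals_by_team(rows):
    """Sum the USD amounts per team."""
    totals = {}
    for r in rows:
        totals[r["team"]] = totals.get(r["team"], 0.0) + r["usd"]
    return totals

def usd_to_bsq(usd, bsq_rate):
    """Convert a USD amount to BSQ at the cycle's rate (USD per BSQ)."""
    return round(usd / bsq_rate, 2)
```

The per-team totals could then be reported in a comment, with the grand total being what the contributor claims in the Bisq DAO.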
Glad to hear you're interested, @jmacxx. What you've posted above is similar to what I had in mind. A couple of notes:
Regarding the bot commenting on CRs with issuance numbers broken down by team, I did experiment with that in https://github.com/jmacxx/compensation-bot/issues/2, where you can see the bot making issuance comments. @m52go reminded me that such issuance comments are only to be made after the cycle is complete. Regarding alerting users of mistakes: am I correct in understanding that the only way to do this is to have the bot comment on the CR whenever it is updated (if mistakes are detected)? I have put some work into the linter/parser and come up with an HTML/JavaScript test tool that enables compensation requests to be checked. It comes with 4 sample CRs and an edit box where you can try it out with your own CR text.
Updated the linting tool to the latest 5-column format received from @cbeams & @m52go.
@cbeams we might have something worth testing. @jmacxx has created bots that evaluate compensation requests that follow this template. Please check to see if it looks acceptable to you. It should largely conform to the ideas you and I discussed earlier, with one small difference:
For what it's worth, I also drafted bullet points for documentation that would need to be added according to the proposed template and bots:
If this format is OK, I think it would make sense to proceed with testing; perhaps provision an API key to let @jmacxx test the bots with real requests. Maybe team leads could make their requests using the new format in this cycle.
I've only just taken a brief look at the template. Looks pretty good; I'm afraid it might end up being confusing for folks, but having real-world examples out there should go a long way to making it clear. I just submitted my compensation request, and unfortunately didn't get it together in this format. While I'm away for the next couple weeks, I'd say just keep going. Sorry I didn't have more time to check this out in depth!
With the new compensation request issue template merged, all contributors should now use the new format that came about as a result of this project. Here I'll run through each of the objectives completed so far.
The new format, in a nutshell, simplifies all line-items for contributions and roles into a single table.
The updated issue template should include all details a contributor would need to compose a valid compensation request.
Currently the bot will post an issuance breakdown by team after DAO voting when the request is marked as accepted or rejected by the compensation maintainer. This will help achieve the near-term goal of determining total issuance by team. For now, results will be posted in a spreadsheet that will be made public. The next step is to add context to the raw totals by aggregating line-item titles to give more context to team issuance numbers.
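The "add context" step described above could be sketched as grouping line-item titles under each team's total. The row structure (dicts with `team`, `title`, and `usd` keys) and the function name are assumptions for illustration, not the bot's actual data model.

```python
def issuance_report(rows):
    """Aggregate parsed line items into a per-team report.

    Each team maps to its USD total plus the line-item titles that
    produced it, so raw issuance numbers come with context.
    `rows` is assumed to be a list of dicts with 'team', 'title',
    and 'usd' keys, as a table parser might produce.
    """
    report = {}
    for r in rows:
        entry = report.setdefault(r["team"], {"usd": 0.0, "items": []})
        entry["usd"] += r["usd"]
        entry["items"].append(r["title"])
    return report
```

A report like this could be written to the public spreadsheet so team leads see not just totals but what each total is made of.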
@jmacxx took care of all of this. Thank you! It's been a pleasure working with you.
See tentative documentation in the wiki. That article will be reworked so that "Making a compensation request" is a standalone article, but the linked section covers the basics. Aside from the new format (detailed in the issue template) and the linting (errors are posted in the issue), there isn't anything additional users need to know or do. I'll make announcements on Keybase and in https://github.com/orgs/bisq-network/teams/dao to draw attention to this new format and encourage contributors to test it in this cycle.
Next step is to follow usage and performance for the remainder of this cycle (Cycle 14) to ensure the bot performs as expected. Assuming it goes well, it probably makes sense to discuss devops / ownership of the bot. Then we'll require the new format in Cycle 15 so that all requests can be parsed and the results are actually useful (only a portion of Cycle 14 requests will be in the new format, so those results won't be useful). Those numbers will go in a public spreadsheet to help team leads with budgeting and reporting. At that point, this project will be completed as currently defined, and further efforts (to add context to numbers, etc.) can be added either to this project or to a new one.
It's becoming clear that reporting will be more of a project of its own, so I will make a new issue detailing reporting goals after discussing specifics with @jmacxx and @wiz. For this particular project, the new compensation request format, linting, and parsing implemented by @jmacxx seem to work well! Reporting issuance as GitHub comments didn't work as well as initially expected, so we need to determine a better way to get those numbers to the budgeting spreadsheet. Also needed: uploading bot code to GitHub (somewhere) and figuring out hosting, ownership, etc. I'll leave this issue open until we figure out those loose ends, but this project's successor will be a new soon-to-be-created project focused on reporting.
This project is complete with the delivery of the compensation and reporting bots. The rest of the reporting initiative will be carried on here:
I'd also like to work on this. I'm thinking that instead of making a linter we could make a form. I'm not familiar enough with Bisq internals to know if a linter is really needed here, and a form is easier to make and process on the back end. Anyhow, I'll be taking a look at this issue in the coming days and will report back with results.
Join Matrix to coordinate efforts; the compensation bot stopped, but there is a contributor who said he would be working on that.
See https://matrix.to/#/!TwAZqiZiZbDvHoaPco:matrix.org/$3hVhylNeO_0HL4OWS4vMwGyKsocCp-zGmykWNTKfQ54
Description
Currently, in order to determine how funds were allocated in a particular cycle, compensation requests must be manually analyzed and aggregated. This is time-consuming and error-prone.
This mini-project seeks to establish and implement a new structure for compensation requests that can be parsed by a script.
Rationale
The project reorganization implemented in Cycle 10 established a budgeting structure with team leads. This has helped the project look forward and plan how resources should be allocated. But planning is useless if one cannot look backward and evaluate results.
Criteria for delivery
This project should result in:
The existing compensation request template and wiki documentation will need to be updated to reflect the new requirements, along with announcements in all major Keybase channels to ensure contributors are aware (#compensation, #dev, #chinese, #transifex, etc).
The project will be complete when the items above are complete: linter, parser, and related communications.
Measures of success
Contributors must make correctly formed compensation requests on their own (this will demonstrate awareness of the initiative). The linter must alert compensation request authors to mistakes. The parser must comment on approved compensation request issues with issuance numbers broken down by team.
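The linter's job of alerting authors to mistakes might look something like the sketch below. The specific checks (presence of a table, a `Team` column, consistent cell counts) are hypothetical examples; the real linter's rules are not specified in this issue.

```python
def lint_compensation_request(body):
    """Return a list of human-readable mistakes found in a CR body.

    Checks are illustrative: a markdown table must exist, its header
    must include a 'Team' column, and every data row must have the
    same number of cells as the header.
    """
    errors = []
    table_lines = [l.strip() for l in body.splitlines() if l.strip().startswith("|")]
    if len(table_lines) < 3:
        errors.append("No compensation table found (need header, separator, and at least one row).")
        return errors
    header = [c.strip().lower() for c in table_lines[0].strip("|").split("|")]
    if "team" not in header:
        errors.append("Table header is missing a 'Team' column.")
    # Data rows start after the header and |---| separator.
    for i, line in enumerate(table_lines[2:], start=3):
        cells = [c.strip() for c in line.strip("|").split("|")]
        if len(cells) != len(header):
            errors.append(f"Row {i} has {len(cells)} cells, expected {len(header)}.")
    return errors
```

A bot could post this error list as a comment whenever the CR is opened or edited, which matches the "alert on update" approach discussed earlier in the thread.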
The project can be considered a success if team leads actually use the issuance numbers provided by the parser bot for budgeting and tracking issuance over time.
Risks
Not applicable as no Bisq code is touched.
The most significant risk is probably a bot that reports incorrect numbers for some reason, but such a mistake should be discovered quickly and only impacts reporting (not issuance or software or anything else).
Tasks
I don't think this project is complex enough to warrant a whole GitHub board, so here's a checklist.
Estimates
Since this is largely a reporting initiative, it probably makes most sense to come out of the growth budget. Ongoing server costs should come from ops.
Maybe 1500 USD is sufficient for the whole project, as described above (initial implementation and documentation)? This is based on it taking a day to create the bots. Ongoing costs for the bots should be very low/negligible. Open to feedback if any of this is off.
Notes
Tracking issuance in a more automated way is the first step of a bigger drive to report issuance, burn, and trading volume better.
Compensation request details are an important first step to enabling other reporting, so a new project can be created to pursue further reporting once this project has been successfully completed.