Rewarding translators #2907
Replies: 4 comments 9 replies
-
Aside from being able to reward/punish translators, I think that reworking the post-completion survey slightly could have other nice benefits too. Consider the mockup example below, which essentially ranks the points. *(mockup image not shown)* Another option might be to keep the current "Are you satisfied" prompt but add an additional (optional) "Give detailed feedback" drop-down menu, which could ask for ratings on every specific part (description, concept, task, code setup, sample tests, full tests), as well as perhaps a spot for written feedback that would be visible to the responsible author/translator.
-
I generally like the idea. One potential problem I see is that the rules of "being a translator" are not transparent, not always intuitive, and probably not the most fortunate.
-
It's more and less fair at the same time; that's actually the usual problem.
The opposite strategy also has its problem, because currently a "not satisfied" vote actually makes the author lose honor. So what about menders who fix tiny issues, but not the whole thing? They shouldn't lose points for this. So I guess your suggestion is "safer", rather than "more fair".
-
Despite using the rejection of the parent fork as a criterion, it still brings issues, especially when menders make tiny fixes to kata. I like the core idea, but it is not so feasible from the current system's POV. Even if all translators or language "forkers" are given their own panel to vote on satisfaction, it can lead to a huge fuss, with a lot of beginners potentially trolling the votes, specifically those who do not understand how the test framework and assertions are set up. Or another occasion: "oh, the translator does not include specific fixed tests like empty strings or invalid data types, DOWNVOTE option 2!!! But I like the kata's idea and workings, so UPVOTE option 1." Meanwhile the description mentions that all inputs will be valid, and it is the voter's fault for not reading the description thoroughly. There will also be lots of cases of targeted downvotes with unreasonable justifications, specifically against those who translated a lot and who authored a lot as well (specifically G964)... Then it'll become an endless rabbit hole to maintain, IMO!
-
On Codewars, the author of a kata can be rewarded by users for their experience with the kata, through upvotes or downvotes.
Translators are currently not rewarded. I request the introduction of a similar feature to let users reward translators too.
Translating a kata requires quite some effort: first writing a solution in the language of choice, but also writing all the test code.
Typically, the quality of translations is not consistent between the languages of the same kata, which shows there are "good" and "bad" translations. The user's experience of solving a kata is tightly coupled to the (translated) language they solve it in. By rewarding (or punishing) only the author, the quality of the translation gets fully attributed to the kata author, although the translator plays a significant role too.
Introducing such a feature requires some careful thoughts and a matching design. I will elaborate.
Note that it might be helpful to think of author and translator not as two different persons, but to discern between two concepts:
1. the idea and design of the kata;
2. the implementation of the kata in a specific language.
In this way, the author of the kata is responsible both for 1 and for 2 in the language of the first implementation. They might also translate to more languages.
Incentives for both "authors" (1) and "translators" (2) act as a stimulant to give good katas to users. Having a fair distribution of incentives between authors and translators will stimulate more and better translations of katas, which will improve the overall Codewars experience. However, how do we determine which incentive scheme results in such a better experience, while avoiding negative effects that might deter translators from producing more and better translations?
Careful consideration should be given to the impact of different incentive schemes. Also, how are users going to show their appreciation, and how can they tell how much they like the kata idea ("author"/1) versus how much they like the kata implementation ("translator"/2)? Should those be two concepts they can vote on, or a single combined one?
If a single combined vote, a more sophisticated mechanism could be used to "calculate" which part of it should be attributed to the "author" and which part to the "translator", e.g. by comparing votes across different languages of the same kata and seeing whether significant differences reveal differences in translation quality.
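To make the attribution idea concrete, here is a minimal sketch of one way it could work. Everything here is an assumption for illustration: the function names (`satisfaction_rate`, `translation_quality_signal`), the vote data, and the deviation-from-mean heuristic are all invented, not part of Codewars:

```python
# Hypothetical sketch: split a combined satisfaction vote between "author"
# and "translator" by comparing per-language vote rates for one kata.
# All names and data below are invented for illustration.

def satisfaction_rate(votes):
    """Fraction of 'satisfied' votes; None if the language has no votes."""
    total = votes["satisfied"] + votes["not_satisfied"]
    return votes["satisfied"] / total if total else None

def translation_quality_signal(per_language_votes):
    """For each language, return how far its satisfaction rate deviates
    from the kata-wide mean. Positive suggests a better-than-average
    translation, negative a worse one; the shared mean would reflect
    the kata idea itself (the "author" part)."""
    rates = {lang: satisfaction_rate(v) for lang, v in per_language_votes.items()}
    rates = {lang: r for lang, r in rates.items() if r is not None}
    mean = sum(rates.values()) / len(rates)
    return {lang: round(r - mean, 3) for lang, r in rates.items()}

# Invented example data for a single kata:
votes = {
    "python":  {"satisfied": 90, "not_satisfied": 10},  # rate 0.9
    "haskell": {"satisfied": 50, "not_satisfied": 50},  # rate 0.5
    "c":       {"satisfied": 70, "not_satisfied": 30},  # rate 0.7
}
print(translation_quality_signal(votes))
# → {'python': 0.2, 'haskell': -0.2, 'c': 0.0}
```

In this sketch, the Haskell translation would be flagged as weaker than the kata's average, which could feed into the translator's reward rather than the author's. A real scheme would of course need to handle small sample sizes and languages with very different solver populations.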
Also, next to the distribution of rewards: how would a "punishment" impact the overall amount and quality of translations?
Lastly, I would like to mention the role of translation approvals and incentives to encourage them (e.g. #1518), which is somewhat related to the grander scheme of how incentives can help improve the overall Codewars experience.