When verifying a commit, we have a list of reviews that all have:
- A trust level associated with the key of the signer
- A `context-understanding`, a `diff-understanding`, and a `thoroughness` level
- A `result` (that can be taken from `result-otherwise`, see the RFC)
(I don't think `priority` should be taken into account here… actually, see #8)
How do we compute, from this list of (trust in reviewer, work done by reviewer, result) triplets, the binary result “do I trust this commit”?
Currently my thinking would be:
For each review:
- Take the minimum of `context-understanding`, `diff-understanding`, and `thoroughness`, and consider it the level of work the reviewer did for their review. Convert it to an integer between 1 and 3 (higher is more work), hereafter named `work`
- Take the trust in the reviewer, convert it to an integer between 1 and 3 (higher is more trust), named `trust`
- Multiply `work` by `trust` (todo: just thinking of that randomly, there are most likely much better options). This gives a value between 1 and 9 (higher is better) that represents the confidence the user has in this reviewer's review, hereafter named `confidence`
- Take the `result`:
  - If it is `!`, then the end result is `!`
  - If it is `-`, then subtract `confidence` from the confidence accumulator
  - If it is `+`, then add `confidence` to the confidence accumulator
  - If it is `0`, then ignore the review

Then, if the confidence accumulator exceeds the configured required confidence level, accept the commit.
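
For concreteness, here is a minimal sketch of that aggregation in Rust, assuming the three-valued levels have already been parsed into integers in 1..=3. The names (`Review`, `ReviewResult`, `Verdict`, `aggregate`) are illustrative only, not part of any existing codebase:

```rust
/// Outcome of a single review (hypothetical names, per the RFC's `!`/`-`/`+`/`0`).
#[derive(Clone, Copy)]
enum ReviewResult {
    Reject,  // `!`: hard veto on the commit
    Minus,   // `-`: subtracts confidence
    Plus,    // `+`: adds confidence
    Neutral, // `0`: review is ignored
}

/// One review of the commit, with all levels already converted to 1..=3.
struct Review {
    trust: u32,
    context_understanding: u32,
    diff_understanding: u32,
    thoroughness: u32,
    result: ReviewResult,
}

/// Binary answer to “do I trust this commit?”.
#[derive(Debug, PartialEq)]
enum Verdict {
    Trusted,
    NotTrusted,
}

fn aggregate(reviews: &[Review], required_confidence: i64) -> Verdict {
    let mut accumulator: i64 = 0;
    for review in reviews {
        // `work` is the minimum of the three per-review levels (1..=3).
        let work = review
            .context_understanding
            .min(review.diff_understanding)
            .min(review.thoroughness);
        // Confidence in this particular review: work * trust, so 1..=9.
        let confidence = i64::from(work * review.trust);
        match review.result {
            // A single `!` makes the end result `!`: the commit is not trusted.
            ReviewResult::Reject => return Verdict::NotTrusted,
            ReviewResult::Minus => accumulator -= confidence,
            ReviewResult::Plus => accumulator += confidence,
            ReviewResult::Neutral => {}
        }
    }
    // The text above says “exceeds”; whether the comparison is strict is a detail.
    if accumulator >= required_confidence {
        Verdict::Trusted
    } else {
        Verdict::NotTrusted
    }
}
```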
Drawbacks:
- It's likely hard to understand. There is no easy translation of “require 2 reviews from this contributor set”, because each review has its own self-trust level. This can be mitigated by having a “simple” tool just default to setting all review parameters to the maximum, setting all trust to 1, and requiring a confidence level of 4 (worked out in the example after this list)… but it can be problematic if not everyone uses this tool, because some commits may then actually require more than 2 reviews (because a reviewer would not use maximum parameters for their review)
- It doesn't allow setting a different required confidence level for different files (e.g. for nixpkgs, requiring changes to the Linux kernel to have more confidence than a change to random-package-used-by-1-person)
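
To work out the “simple tool” defaults from the first drawback: with all review parameters at the maximum and trust at 1, each positive review contributes `work × trust = 3 × 1 = 3` confidence, so a required level of 4 rejects a single review (3 < 4) but accepts two (3 + 3 = 6 ≥ 4). That is what makes a threshold of 4 behave like “require 2 reviews”, but only while everyone maxes out their parameters. Reusing the hypothetical types from the sketch above:

```rust
fn main() {
    // A maxed-out review from a trust-1 reviewer: work = 3, trust = 1,
    // so each positive review contributes 3 confidence.
    let max_review = |result| Review {
        trust: 1,
        context_understanding: 3,
        diff_understanding: 3,
        thoroughness: 3,
        result,
    };

    // One review: 3 < 4, so the commit is not trusted yet.
    assert_eq!(
        aggregate(&[max_review(ReviewResult::Plus)], 4),
        Verdict::NotTrusted
    );

    // Two reviews: 3 + 3 = 6 >= 4, so the commit is trusted.
    assert_eq!(
        aggregate(
            &[max_review(ReviewResult::Plus), max_review(ReviewResult::Plus)],
            4
        ),
        Verdict::Trusted
    );
}
```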
If you have any idea of how to handle these drawbacks… I'd be glad to hear them :)