With the recent integration of QuantifiedCode, PRs have started being labeled with unsuccessful checks. QC tests the diff between develop and the branch being merged and reports 'bad code' in those parts only. Whenever a code issue is found, the PR is labeled with an unsuccessful check.
The problem(s):
Most of the checks boil down to guidelines we don't want to follow. Having a PR look 'unsuccessful' because of that is, well, ugly (#563, #580).
In addition, @tylerjereddy opened a PR (#580) where QC complained about code issues in parts of the code that hadn't been touched. I'm assuming this is a QC bug and will bring it up with them.
What can be done?
The QC tests are fairly customizable: we can disable as many as we want, individually. I'd suggest that every time we bump into a test we don't agree with, we disable it. We'll end up with a selected set of tests that are important to us, and an 'unsuccessful check' will then be much more meaningful. (Admins of the QC MDAnalysis project should be able to disable tests directly from the issue list.)
Still, the above solution won't be perfect. After all, we're talking about coding guidelines, not strictly required syntax. There will be times when we want to break the rules. In such cases we'll always have that annoying unsuccessful check (and possibly a discussion in the PR about why the guideline wasn't followed).
In other cases a check may cover too much, catching both patterns we want to avoid and patterns we want to keep. We might then consider disabling that test and rolling our own similar check adapted to our style (see the sketch below).
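To make that last point concrete, such a home-grown check could just be a small standalone script run over the files touched by a PR. The snippet below is only a rough sketch of the idea (it has nothing to do with QC's actual check format; the `# allow-mutable-default` marker and the rule itself are made up for illustration): it flags mutable default arguments, but lets us exempt specific definitions where we've decided the pattern is fine.

```python
# Hypothetical custom style check: flag mutable default arguments,
# but skip definitions explicitly whitelisted with a trailing
# "# allow-mutable-default" comment on the def line.
import ast
import sys

MARKER = "# allow-mutable-default"


def find_issues(source):
    """Return (lineno, message) tuples for mutable default arguments."""
    lines = source.splitlines()
    issues = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            # Skip definitions the project has deliberately exempted.
            if MARKER in lines[node.lineno - 1]:
                continue
            for default in node.args.defaults:
                if isinstance(default, (ast.List, ast.Dict, ast.Set)):
                    issues.append((node.lineno,
                                   "mutable default argument in '%s'" % node.name))
    return issues


if __name__ == "__main__":
    # Usage: python check_defaults.py file1.py file2.py ...
    for path in sys.argv[1:]:
        with open(path) as handle:
            for lineno, message in find_issues(handle.read()):
                print("%s:%d: %s" % (path, lineno, message))
```

The point is only that a narrow, project-specific rule plus an explicit opt-out marker gives us the control a too-broad upstream check doesn't.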
I can also look into having QC report something more neutral than 'unsuccessful', or see whether the success status can depend on the severity of the issues found (there are four levels, from 'Recommendation' to 'Critical').
Finally, if you think this isn't worth the hassle, we can stop QC from giving feedback on PRs. Reviewers could then go to quantifiedcode.com to check the issues that were found (an analysis is triggered on every commit).
I vote that for the upcoming PRs we take the time to prune the tests we don't want, knowing that we'll have plenty of unsuccessfully checked PRs (a discussion about some of the checks might be useful; maybe we can have it in this issue's conversation?). If we see it's leading nowhere, we can disable PR feedback. In the meantime I can try to silence the unsuccessful labels per the ideas above.
What say you?