Conversation
Great idea with a template. In fact, I think we should have three: new feature (AKA a whole new test suite), new test (AKA documenting an edge case), and new exception (AKA a platform limitation), because the instructions would differ slightly — e.g. we should ask people adding exceptions to document them like https://github.com/duckduckgo/privacy-reference-tests/tree/main/https-upgrades#platform-exceptions.
I don't agree that it should be the responsibility of the person adding a test to ensure it passes on all platforms — that's too steep an ask and will kill contributions. Let's discuss at the O-E meeting today.
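For the "new exception" case, the documentation asked of contributors could be sketched roughly like this (the section heading, column names, and example row are illustrative assumptions modeled loosely on the linked https-upgrades page, not a finalized format):

```markdown
## Platform exceptions

<!-- Illustrative sketch only — columns and wording are not a finalized format. -->

| Platform  | Test case           | Reason for exception                                  |
|-----------|---------------------|-------------------------------------------------------|
| macOS app | upgrade-on-redirect | WebKit resolves the redirect before the client sees it |
```

A table like this keeps exceptions discoverable next to the tests themselves, so a failing platform can be distinguished from a documented limitation.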
pull_request_template.md
Outdated
@@ -0,0 +1,29 @@
<!-- Please add the WIP label if the PR isn't complete. -->
People can now create draft PRs, so we should probably go with that rather than a WIP label (drafts have UI that prevents merging).
Product PRs to test against:
-

**Note: Upon merging this PR, you are responsible for updating the affected products to the latest version of this repo.**
I'm afraid this will discourage people from contributing edge cases etc. IMO all products should automatically pull and run changes, and if a test is correct (and that's up to the person creating the PR and the feature DRI and/or us), then a failing test on some platform is like a bug report for them to align.
## Steps to test this PR:
Not sure that applies here? The only step is "run the tests and see if they pass". The bigger concern is whether the test correctly describes the edge case we are trying to capture.
I guess that really goes to the role of the author: is it just to write the test and sort of mic-drop, or to ensure the test is valid on the major products (and note platform exceptions where it's not)? Sometimes there are subtle implementation differences that aren't clear before you try them (i.e. on one platform you might get an "ignore", on another it might be null).
My concern is to avoid a scenario where the tests are frequently broken on some products, to the degree that it becomes a disincentive to keep updating them. If we roll out automation, maybe that becomes less of a concern? Not sure TBH.
Perhaps at least ensure they are passing on the extension?
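The automation idea above could look something like a scheduled GitHub Actions workflow in each product repo that pulls the latest reference tests and runs them (the workflow name, test-runner command, and checkout path below are assumptions for illustration, not an existing setup):

```yaml
# Illustrative sketch only — runner command and paths are assumptions.
name: reference-tests
on:
  schedule:
    - cron: "0 6 * * *"   # daily; a failing run acts as the "bug report" mentioned above
  workflow_dispatch:

jobs:
  run-reference-tests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/checkout@v4
        with:
          repository: duckduckgo/privacy-reference-tests
          path: reference-tests
      - name: Run reference tests
        run: ./scripts/run-reference-tests.sh reference-tests   # hypothetical runner script
```

With something like this in place, a newly merged test that fails on one platform surfaces automatically, rather than relying on the test author to verify every product by hand.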
PR template