We should have tests for the tools. But there are some hang-ups around getting a centralized, tool-agnostic test set.
Each tool is going to have its own output (and what data that output reports).
Spectral, for example, reports the rule name and description along with a "pointer" in array form, and the way I've been writing the rules, they tend to (but don't always) point to the problem keyword.
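For reference, Spectral's JSON output is an array of results along these lines (the rule name and message below are invented for illustration; the `path` array is the pointer):

```json
[
  {
    "code": "if-without-then-or-else",
    "message": "`if` has no effect without `then` or `else`.",
    "path": ["properties", "foo", "if"],
    "severity": 1,
    "range": {
      "start": { "line": 12, "character": 8 },
      "end": { "line": 12, "character": 12 }
    }
  }
]
```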
A tool may report the problem at the keyword or at the containing schema, and tests might need to account for both. E.g. "`if` requires `then` or `else`" could be reported by pointing to the `if` or by pointing to the subschema that contains `if`. A (bad?) tool might also just say what the problem is without indicating where it is.
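To make the two locations concrete: given the (made-up) schema below, a keyword-level report would use the pointer `/properties/foo/if`, while a schema-level report would use `/properties/foo`, and a shared test would need to accept either.

```json
{
  "properties": {
    "foo": {
      "if": { "type": "string" }
    }
  }
}
```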
I think the other option is to have a test suite for each tool, but that doesn't really let you compare tools. We'd want to keep the suites in sync to do that, and that's a manual job.
On the "test suite for each tool" frontier, we could have a master "test case" set that just defines an identifier and description of what the test is supposed to check, maybe even a schema and a pointer to the error. Then each tool's suite would be responsible for implementing each test case in a way that's customized for that tool.
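A minimal sketch of what one such test case could look like, assuming a JSON format (the field names here are made up):

```json
{
  "id": "if-without-then-or-else",
  "description": "An `if` with no sibling `then` or `else` has no effect and should be flagged.",
  "schema": {
    "if": { "type": "string" }
  },
  "errorPointers": ["/if", ""]
}
```

Listing both `/if` and the containing schema (`""`, the root here) as acceptable pointers would let each tool's suite match whichever location its tool actually reports.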
We could then have a CI check to verify that every test case is either implemented or ignored in each suite.
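As a rough sketch of that check, assuming master cases live in `cases/*.json` (named by id) and each tool's suite keeps a `manifest.json` listing implemented and ignored case ids (all paths and formats here are illustrative):

```python
#!/usr/bin/env python3
"""Fail CI if any master test case is neither implemented nor ignored
in some tool's suite. Paths and manifest format are illustrative."""
import json
import sys
from pathlib import Path

# Master test cases: one JSON file per case under cases/, named by id.
case_ids = {p.stem for p in Path("cases").glob("*.json")}

exit_code = 0
for manifest_path in Path("tools").glob("*/manifest.json"):
    manifest = json.loads(manifest_path.read_text())
    covered = set(manifest.get("implemented", [])) | set(manifest.get("ignored", []))
    missing = sorted(case_ids - covered)
    if missing:
        print(f"{manifest_path.parent.name}: unhandled test cases: {', '.join(missing)}")
        exit_code = 1

sys.exit(exit_code)
```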