I've come up with a couple of questions that could be helpful for someone like me who is getting started with cutEr. Some of them may be naive, even deliberately so. Maybe you can transfer this to the Wiki and start some sort of FAQ?
What solver should one currently use? Should one use `--z3py` or not?
What coverage should one aim for? Is 100% coverage the goal?
This is bad, right?

```
Coverage Metrics ...
  - 1 of 341 clauses (0.29 %).
  - 1 of 255 clauses [w/o compiler generated clauses] (0.39 %).
  - 0 of 312 conditions (0.00 %).
  - 0 of 292 conditions [w/o compiler generated clauses] (0.00 %).
  - 1 of 624 condition outcomes (0.16 %).
  - 1 of 584 condition outcomes [w/o compiler generated clauses] (0.17 %).
```
If coverage is 100% for a given depth, have I proven that there is no input able to crash my MFA and "anything downstream"?
What depth (`-d`) should one use normally? What does depth actually stand for (the number of branches)?
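For context, this is roughly how I am invoking it at the moment — `mymod:myfun/1` is a placeholder MFA and the depth value is a guess on my part, not a recommendation:

```
# Explore mymod:myfun(42) up to depth 25, using the Z3Py solver backend
cuter mymod myfun '[42]' -d 25 --z3py
```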
Is there a general recommended approach when one tries to test a big codebase with cutEr?
If testing a big codebase, is there a recommended order in testing all the exported functions?
How could I make testing a big codebase more automatic?
What can cutEr not do currently when testing a big codebase? (I'm thinking of concurrency problems, starting multiple OTP applications, etc.) Is there something I might want to know before digging into cutEr testing for a big codebase?
Are SOLVER ERRORS critical or not?
From your experience, will complex tests lead to a high time cost? I.e., am I doing something wrong if a test runs for 12 hours?
Why is there a different number of models for different MFAs? Is there a minimum and a maximum number of models?
What is the difference between a solved and an unsolved model?
What is a simple way/metaphor/explanation for what a model is?
MFAs should always be provided with specs, I assume. Specs act as a kind of generator for the tested input.
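To illustrate what I mean, here is a minimal sketch with a made-up MFA, `mymod:safe_div/2` — the `-spec` constrains the inputs that get generated, instead of allowing arbitrary Erlang terms:

```erlang
-module(mymod).
-export([safe_div/2]).

%% The spec narrows the symbolic input space: the second argument
%% is declared pos_integer(), so division by zero is ruled out by type.
-spec safe_div(integer(), pos_integer()) -> integer().
safe_div(X, Y) ->
    X div Y.
```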
In what cases should one use more elaborate tests wrapped in pre-conditions and post-conditions?
What are the consequences of using `--disable-pmatch`? (I know I can use it when an MFA uses maps, but apart from that.)
`--no-type-normalization`: "disable the normalization of specifications & types". What does that mean? What are the consequences of doing that?
Is there a list of recommended papers for getting into concolic testing with cutEr? (I've found the original paper, "Concolic Testing for Functional Languages", and the 'Examensarbete' by Manuel Cherep, which are both great.)