
A FAQ list? #117

Open
ioolkos opened this issue Mar 10, 2017 · 0 comments
ioolkos commented Mar 10, 2017

I've come up with a couple of questions that could be helpful for someone like me who is getting started with cutEr. They might be dumb, even deliberately so. Maybe you can transfer this to the Wiki and start some sort of FAQ?

  • What solver should one currently use? Should one use --z3py or not?

  • What coverage should one aim for? Is 100% coverage the goal?

This is bad, right?:

Coverage Metrics ...
  - 1 of 341 clauses (0.29 %).
  - 1 of 255 clauses [w/o compiler generated clauses] (0.39 %).
  - 0 of 312 conditions (0.00 %).
  - 0 of 292 conditions [w/o compiler generated clauses] (0.00 %).
  - 1 of 624 condition outcomes (0.16 %).
  - 1 of 584 condition outcomes [w/o compiler generated clauses] (0.17 %).

  • If coverage is 100% for a given depth, have I proven that there is no input able to crash my MFA and "anything downstream"?

  • What depth (-d) should one use normally? What does depth actually stand for? (The number of branches?)

  • Is there a general recommended approach when one tries to test a big codebase with cutEr?

  • If testing a big codebase, is there a recommended order in testing all the exported functions?

  • How could I make testing a big codebase more automatic?

  • What can cutEr not do currently, when testing a big codebase? (I'm thinking of concurrency problems, starting multiple OTP apps, etc.) Is there something I might want to know before digging into cutEr testing for a big codebase?

  • Are SOLVER ERRORS critical or not?

  • From your experience, will complex tests lead to a high time cost? I.e., am I doing something wrong if
    a test runs for 12 hours?

  • Why is there a different number of models for different MFAs? Is there a minimum and a maximum
    number of models?

  • What is the difference between a solved and an unsolved model?

Solver Statistics ...
  - Solved models   : 69
  - Unsolved models : 1291

  • What is a simple way/metaphor/explanation for what a model is?

  • MFAs should always be provided with specs, I guess. Specs are a kind of generator for the tested input.
    In what cases should one use more elaborate tests wrapped in pre-conditions and post-conditions?
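To illustrate that point with a sketch (a hypothetical module, not taken from cutEr's docs): the standard Erlang -spec on an exported function constrains the input domain a concolic tool has to explore, so a tighter spec rules out nonsensical generated inputs up front.

```erlang
-module(demo).
-export([classify/1]).

%% The spec restricts the inputs a concolic tool may generate for
%% classify/1 to non-negative integers; without it, any Erlang term
%% would be a candidate input.
-spec classify(non_neg_integer()) -> zero | small | large.
classify(0) -> zero;
classify(N) when N < 100 -> small;
classify(_) -> large.
```

Pre-/post-conditions would presumably only be needed for properties the type language cannot express, e.g. a relation between two arguments.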

  • What are the consequences of using --disable-pmatch? (I know I can use it when an MFA uses maps, but apart from that.)

  • --no-type-normalization: "disable the normalization of specifications & types". What does that mean? What are the consequences of doing that?

  • Is there a list of recommended papers to get into concolic testing with cutEr?
    (I've found the original paper, "Concolic Testing for Functional Languages", and the 'Examensarbete' by Manuel Cherep, which are both great.)

@aggelgian aggelgian self-assigned this Mar 12, 2017