Hi, Z3 is commonly used for formal verification, where you generally want to be really sure you haven't made mistakes and everything should fail safe. So I find it really weird that by default Z3 ignores errors and will produce `sat` or `unsat` even if there are obvious errors in the model.
This was noted here, and I ran into it here, but it has not been changed.
I find this quite surprising - you really don't want your formal verification tool to say something is proven because it parsed the Z3 output incorrectly (perhaps because it made the reasonable assumption that errors wouldn't be ignored).
Wouldn't the safer behaviour be to stop on errors? If there are any situations where you actually do want to continue (@Alasdair suggested competitions), then you could opt in with a `--dangerous-ignore-errors` flag or similar.
It isn't about competitions.
It is about what should be expected of a tool in these cases.
Each command can fail. It is up to users of the command-line tool to validate the results of commands and make decisions.
Tools can opt to fix their input and continue. Recall that the command-line tool also has an interactive mode.
Compare this with a Python interpreter or a LISP or ML read-eval-print loop: they don't terminate the shell because some command failed.
If you are using the command shell in batch mode, you have some options:
- Set `smtlib2_compliant=true`. Z3 then prints "success" or "failure" after each command; you can trust the verdict if all return values are "success" for your declarations and assertions.
- You can grep the output for "error".
- You can submit a pull request adding the option you suggest.
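The "grep for error" approach can be sketched as a small wrapper around Z3's batch output. This is a hypothetical helper (the name `trusted_verdict` and the exact diagnostic format it scans for are assumptions, not part of Z3's API): it refuses to accept a `sat`/`unsat` verdict if any `(error ...)` diagnostic appears anywhere in the output.

```python
from typing import Optional

def trusted_verdict(z3_output: str) -> Optional[str]:
    """Return 'sat', 'unsat', or 'unknown' only when Z3 reported no error.

    Hypothetical sketch of the 'grep for error' suggestion: if any line
    of the batch output is an (error ...) diagnostic, the verdict on the
    remaining lines cannot be trusted and None is returned instead.
    """
    lines = [line.strip() for line in z3_output.splitlines()]
    if any(line.startswith("(error") for line in lines):
        return None  # some command failed; reject the verdict
    for line in lines:
        if line in ("sat", "unsat", "unknown"):
            return line
    return None  # no verdict found at all

# Output containing an error diagnostic: the verdict is rejected.
bad = '(error "line 3: unknown constant y")\nsat\n'
print(trusted_verdict(bad))        # None

# Clean output: the verdict is accepted.
print(trusted_verdict("unsat\n"))  # unsat
```

A driver script would pipe `z3` output through such a check before acting on the result, which is exactly the validation responsibility the reply places on users of the command-line tool.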