-
Hello, all. Finally I've managed to type all expressions and operators in my specs, and replaced all recursive operators with FoldSet (FoldSeq). The static analyzer produces no errors. TLC works fine on my model. Please help. Thanks in advance.
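For illustration, a minimal sketch of the kind of rewrite described above (not taken from the specs in this thread; FoldSet is assumed to come from the community modules' FiniteSetsExt — adjust the EXTENDS to wherever your setup defines it):
---- MODULE FoldRewriteSketch ----
\* Illustrative sketch only, not from the specs under discussion.
EXTENDS Integers, FiniteSetsExt

\* Recursive form: fine for TLC, but rejected by Apalache.
RECURSIVE SumRec(_)
SumRec(S) == IF S = {} THEN 0
             ELSE LET x == CHOOSE y \in S : TRUE
                  IN x + SumRec(S \ {x})

\* Fold-based form: the kind of replacement described above.
SumFold(S) == LET Add(acc, x) == acc + x
              IN FoldSet(Add, 0, S)
====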
-
TLC only cares about assignments as it explores a branch of execution, whereas Apalache requires that all possible syntactic branches have guaranteed assignments (even if they're unreachable), which is why there's often a discrepancy. In your particular instance, it's the use of … As for a solution, you should be able to replace …
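To make the distinction concrete, a hypothetical sketch (variable and action names are illustrative, not from the spec under discussion):
---- MODULE AssignmentSketch ----
\* Hypothetical illustration of the TLC/Apalache discrepancy described above.
EXTENDS Integers
VARIABLE x

\* TLC evaluates disjuncts only as it actually explores them, so a branch
\* that never assigns x' is harmless as long as its guard keeps it unreachable.
\* Apalache's static assignment analysis instead demands an assignment to x'
\* in every syntactic branch, and reports an error here.
ProblematicNext ==
    \/ x' = x + 1
    \/ /\ x < 0      \* suppose this guard is never true
       /\ TRUE       \* no assignment to x' in this branch

\* A common fix: give every branch an explicit assignment.
FixedNext ==
    \/ x' = x + 1
    \/ /\ x < 0
       /\ x' = x
====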
-
Hello. Happy New Year!
I cannot understand why it is so. It looks pretty much the same as a dozen other disjuncts...
This time it gives an error at stage #12: I believe it happens because of operators of this kind: …
Thanks in advance. ConferenceProcFS_final.tla is the main spec.
-
Short of a miracle. This time I built it from source, version 0.19.1. All specs are from my last archive, spec02.zip. Please, any guess? (((
-
Hi. If you have time to help, I describe the progress I've made in the remainder of this post. First of all, I've finally succeeded in making a spec that passes all preliminary checks and makes it to stage #13. As a result I have the spec simplified to a small part of the desired transitions, which TLC plays successfully (a few seconds) but which leads to resource exhaustion (if the "killed" message means that) when launched with Apalache. In the archive I put ready-to-run specs (for both TLC and Apalache).

To understand better, let me briefly describe the scenario. We are developing an information flow control (IFC) mechanism for PL/SQL blocks (DB stored procedures). The idea is simple: values from high-sensitivity sources must not flow to low-sensitivity sinks. Something like taint tracking, but much stronger. Information flow control is actively being embedded into programming platforms, and the task described is usually solved using SAST or DAST; this approach, though, implies code markup. So we are trying to shift the IFC problem into the area of formal verification. DB blocks seem to be an appropriate platform: they are relatively small, they don't include redundant algorithms, and we can leverage the dependency control mechanism (standard in DBs) to extract the restricted set of procedures that have something to do with sensitive data processing. So we have developed a plsql2tla utility. Our specs implement information-flow abstract semantics in a multi-session DB environment. The simplified specs (ConferenceProcFS_final is the main one) work as follows.

The first transition leads us through … and sequentially forwards the system to new states via … RuntimeFS implements the rules of the abstract semantics, such as insert (in our case). The rules update the security labels of environment elements (vars, columns, etc.) and fill the New2Old structure, which is used by the security invariant. ParametersFS holds the security labels of formal arguments and other parameters.

The fact is that TLC replays the model (invariant disabled) with three parallel sessions and all procedures (7 PL/SQL blocks) in 6 seconds on my laptop (> 100,000 states), but Apalache dies with just one session and one procedure (3 states). Unfortunately, TLC performance falls dramatically with the number of sessions, i.e. increasing the session count to 5 leads to a 20-minute run (with the same TLC settings). I read that Apalache is not yet well optimized for sequences, which are heavily used in our specs... Any help would be appreciated. Thanks.
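For readers unfamiliar with this setup, a minimal sketch of what such an abstract-semantics rule might look like (entirely hypothetical names and label model, not taken from ConferenceProcFS_final.tla or RuntimeFS.tla):
---- MODULE InsertRuleSketch ----
\* Hypothetical sketch only: one rule for INSERT that propagates security
\* labels from source elements to a target column and records the flow in
\* New2Old, which a security invariant can then inspect.
VARIABLES labels,   \* function from environment elements to "High" / "Low"
          New2Old   \* set of <<target, source>> flow pairs

InsertRule(target, sources) ==
    /\ labels' = [labels EXCEPT ![target] =
                    IF \E s \in sources : labels[s] = "High"
                    THEN "High"
                    ELSE labels[target]]
    /\ New2Old' = New2Old \union { <<target, s>> : s \in sources }

\* The invariant then forbids flows from High sources into Low sinks.
NoBadFlow ==
    \A p \in New2Old : labels[p[2]] = "High" => labels[p[1]] = "High"
====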
-
Hi @timimin, I've started looking into your specs, and I'm in the middle of compiling a list of suggestions/strategies (e.g. "replace pattern X with Y", and the like), which should help performance. I should hopefully have a rewritten version of … If you're interested, we could also set up a user interview over Zoom; it's very helpful for us to know who our users are, what their backgrounds are, and what problem domains they operate in, and we could answer additional questions you might have directly.
-
Hello. We appreciate your participation. Certainly I tried to increase the maximum Java heap size; it had no effect: the java.lang.OutOfMemoryError fatal error changed to … I can imagine how hard it might be to optimize someone else's code. It would be great if you manage to show me the principle in the manner you described; I can then certainly extrapolate it to improve the rest of my spec. As for the user interview... it seems interesting. But everything is quite simple: we are a small research group at the Russian Technological University. Specification tuning with respect to performance issues has special value for our project. The approach I described will only work in practice if the model checking process takes acceptable time for real-size procedures. Deduction is quite tedious and not applicable for successive small projects. And certainly, when we make our project public (though it may remain in the academic field for quite a while), we will be happy to see it in the list of your examples. Thank you again.
-
I've attached a reworked version of Paralocks that contains some comments within. In short, the primary reason why Apalache is slow on your examples is that you're frequently using what effectively amounts to an anti-pattern in Apalache: iterative object mutation. This is not uncommon for people who come to TLA+ with previous experience in programming languages.
In other words, when defining an expression Y from X, instead of saying "Y is an expression with properties a, b, c, ...", you tend to define all the intermediate steps of transforming X into Y: "X is slightly changed into X1 (e.g. by adding one element to a set, or via EXCEPT), which is changed into X2, etc., until Xn = Y". TLC, like …
Concretely, a common anti-pattern I've found is that, when trying to define e.g. a function, … Another is the frequent use of …
TLDR - Replace iteration (…) with closed forms where possible.
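A minimal sketch of the contrast being described (hypothetical operator names; FoldSet assumed to come from the community modules' FiniteSetsExt, as in the examples later in this thread):
---- MODULE MutationSketch ----
\* Illustrative only: the "iterative mutation" anti-pattern versus a
\* closed-form definition of the same function.
EXTENDS Integers, FiniteSetsExt

\* Anti-pattern: build the function by mutating an intermediate value
\* step by step via EXCEPT, one fold iteration per element.
SquaresIterative(N) ==
    LET Step(f, i) == [f EXCEPT ![i] = i * i]
    IN FoldSet(Step, [i \in 1..N |-> 0], 1..N)

\* Closed form: say what the function is, not how to build it.
SquaresClosed(N) == [i \in 1..N |-> i * i]
====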
-
Thank you for such a detailed explanation. I think I got the idea. It will take some time to reconstruct the specs. Later I'll share the results.
--
Sent from Mail.ru for Android. On Thursday, 13 January 2022, 13:02 +03:00, Kukovec ***@***.*** wrote:
…That is because the issue at hand is not that folds are being used, but how they are being used (I suppose the TLDR is a bit reductive in this aspect). Consider the following examples:
\* (1)
\* Assume S \subseteq Nat and S /= {}
MaxOr0(S) ==
LET Max2(a,b) == IF b < a THEN a ELSE b
IN FoldSet(Max2, 0, S)
\* (2)
\* assume N > 0
FirstNSquares(N) ==
LET Add(S, i) == S \union {i * i}
IN FoldSet(Add, {}, 1..N)
In (1), the following is true:
a) Fold computes an integer at every step (a less complex expression)
b) MaxOr0 doesn't have a closed-form solution
c) If we expressed MaxOr0(S) as CHOOSE x \in S: P, then P is a quantified FOL formula: \A y \in S: x >= y
Conversely, in (2):
a) Fold computes a set at every step, monotonically increasing in cardinality
b) FirstNSquares does have a closed-form solution: {i * i: i \in 1..N}
c) There is no convenient CHOOSE-based characterization
(a) is important, because a k-step fold in (1) generates O(k) constraints - the evaluation of the ITE predicate and the chosen value at each step. In (2), however, the value at step i is a set with O(i) elements, so to describe just the elements of each intermediate set we need \sum_{i=1}^{k} O(i) constraints, which totals O(k^2).
This is not particularly noticeable in the above toy example, but in your large spec with sets of sets, and values more complex than i * i, this adds up.
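For concreteness (our own arithmetic, not part of the original message): with k = 100 elements, pattern (1) produces on the order of 100 constraints, while pattern (2) produces roughly \sum_{i=1}^{100} i = 5050 constraints just to describe the intermediate sets, and that gap keeps widening quadratically as k grows.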
(b) is important, because it tells us whether folding is redundant. If a closed-form solution exists, it's the better choice 99.99% of the time.
(c) is important, because it is often the case that FoldSet(Op, v, S) = CHOOSE x \in S: P
for some Op, v, P. To answer the question of which form is more efficient, a good rule of thumb is to look at how heavily quantified P is. Each quantifier is roughly equivalent to one nested for-loop over S, in terms of complexity, and the call to CHOOSE x \in S: P itself is roughly equivalent to an additional nested loop over S.
In the LUB case, a LUB for a set S \subseteq T is a value w with the property:
/\ IsUB(w, S)
/\ \A y \in T: IsUB(y, S) => y >= w
where IsUB(r, Q) == \A x \in Q: r >= x. This property is doubly universally quantified, so computing LUB with CHOOSE is roughly as complex as a triply-nested for-loop over S, which is slow.
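As an added illustration of the fold-based alternative for this case (hypothetical: a totally ordered domain with join = max, so each fold step produces a single value, as in example (1)):
\* Hypothetical sketch only: a fold-based LUB over S \subseteq Nat with
\* join = max. Each step computes one integer, so this stays in the cheap
\* case (1) above, unlike the doubly-quantified CHOOSE formulation.
LubFold(S) ==
    LET Join(a, b) == IF a >= b THEN a ELSE b
    IN FoldSet(Join, 0, S)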
For all of the above reasons, fold is efficient in (1) (and the LUB case), and redundant in (2).
So it's not fold that is the source of slowdowns in your spec, it's the sets/functions that are being spuriously constructed using fold.
-
Hi, I'm currently upgrading my specs according to your recommendations. To see a real effect I think we must revise not only paralocks.tla but all the other specifications (the current version also diverges). Comparing closed forms against CHOOSE (fold)... One argument for using folds was my concern about the sizes of the underlying sets. I thought it was better to build the necessary structures in nested loops with a bounded number of iterations than …
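A brief sketch of the trade-off being raised here (hypothetical operator names): both forms below are closed forms, but they quantify over sets of very different sizes.
---- MODULE DomainSizeSketch ----
\* Hypothetical illustration: a closed form over a small index set versus a
\* closed form that filters a potentially large domain.
EXTENDS Integers

SmallIndexForm(N) == { i * i : i \in 1..N }

LargeDomainForm(N, Domain) == { x \in Domain : \E i \in 1..N : x = i * i }
====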
-
Hi, all. I would like to express my gratitude for all your efforts and useful recommendations concerning writing Apalache-friendly specifications. For the last few days I have been trying to revise my project. I found an incremental anti-pattern in Runtime.tla. Unfortunately, all improvements had only partial effect. The simplified version of the model (with just one session and one procedure) still diverges. After several minutes of work it produces … I believe the problem is in the number and types of the state variables. As I said, our transition system is based on the abstract semantics of information flows. So we model some typical memory structures (the stack, the program counter, the global variables section, etc.), and the state is represented by a number of compound elements such as tuples and sequences of relatively large size. If we play the model for some small number of parallel sessions, TLC works fine. Maybe that happens because it doesn't store entire states but instead keeps track of their signatures.
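A sketch of the general shape of state being described (entirely hypothetical names, not the actual variables of the spec):
---- MODULE StateShapeSketch ----
\* Hypothetical sketch only: per-session call stacks and program counters
\* plus a shared global-variables section, modeled with sequences and
\* records. Sequences like these are the structures the thread mentions
\* Apalache is not yet well optimized for, whereas TLC, which keeps
\* fingerprints of visited states, copes with them more easily.
EXTENDS Naturals, Sequences

CONSTANTS Sessions, GlobalVars, Labels

VARIABLES stack,    \* [Sessions -> Seq of call-frame records]
          pc,       \* [Sessions -> Nat]
          globals   \* [GlobalVars -> Labels]

TypeOK ==
    /\ pc \in [Sessions -> Nat]
    /\ globals \in [GlobalVars -> Labels]
    /\ \A s \in Sessions : stack[s] \in Seq([proc : STRING, depth : Nat])
====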
-
Certainly. Here they are. What now remains:
the invariant which is always true (RuntimeFS.tla)
cli: …
-
Hi. Anything on my issue? Are there any chances of success? )
Beta Was this translation helpful? Give feedback.