TSS Control

Control-based Verification

There are two essential reasoning principles needed in this system:

  1. Reasoning about Temporal Facts: Requirements and guarantees for signals rely on shared knowledge about the current time. This shared knowledge needs to be generated either by observing signals (latency-insensitive) or by counting clock cycles (latency-sensitive).
  2. Linearity of Invocations: Invocations derived from an instance must be temporally linear, i.e., only one of them can be active at any given time. Pipelines complicate this because multiple invocations can be active at the same time, with some ready signal deciding when invocations may legally overlap. (A sketch of both checks follows this list.)
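
As a minimal sketch of both checks, consider modeling temporal facts as half-open integer cycle intervals. The Python below is illustrative only: covers, linear, and the concrete intervals are assumptions, and it models the strict case of linearity with no ready-based overlapping:

from itertools import combinations

def covers(guarantee, requirement):
    """Temporal-fact check: a signal guaranteed live on `guarantee`
    satisfies a use that requires it on `requirement`."""
    return guarantee[0] <= requirement[0] and requirement[1] <= guarantee[1]

def linear(invocations):
    """Strict linearity check: no two invocations derived from the
    same instance may overlap in time."""
    return all(a[1] <= b[0] or b[1] <= a[0]
               for a, b in combinations(invocations, 2))

# A signal guaranteed on [0, 5) satisfies a use that needs [2, 3):
assert covers((0, 5), (2, 3))
# Back-to-back invocations are linear; overlapping ones are not:
assert linear([(0, 3), (3, 6)])
assert not linear([(0, 3), (2, 5)])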

Given these constraints, let's build a system where control flow aids the generation and use of temporal facts.

Key Idea: Each basic block (or group) is parameterized with a start time and input & output variables, along with time-based guarantees and requirements. Each invocation requires a timestamp (either concrete or expressed in terms of other timestamps). The timing of each use of a signal can then be derived either from the guarantees on the inputs or from the invocations.
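
To make the parameterization concrete, here is a hedged sketch (in Python, not any proposed syntax) of a block signature whose port intervals are offsets from the start-time parameter; instantiating the block at a concrete time shifts every interval. The Signature class, the block b, and the latency L = 2 are all assumptions for illustration:

from dataclasses import dataclass

@dataclass
class Signature:
    # port -> (lo, hi) interval offsets, relative to the block's start time
    inputs: dict
    outputs: dict

    def at(self, start):
        """Instantiate the signature at a concrete start time."""
        shift = lambda iv: (start + iv[0], start + iv[1])
        return ({p: shift(iv) for p, iv in self.inputs.items()},
                {p: shift(iv) for p, iv in self.outputs.items()})

# A block b<T>(x: [T, T + 1]) producing out: [T + L, T + L + 1], with L = 2:
b = Signature(inputs={"x": (0, 1)}, outputs={"out": (2, 3)})
print(b.at(10))  # x required on [10, 11], out guaranteed on [12, 13]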

The writeup uses yet another syntax with SSA-style basic blocks. However, I think it's straightforward to turn it into groups (famous last words).

Pipeline example. The following example reasons about the pipelined execution of a multiplier-accumulator. The mult.ready signal allows new invocations to start, while mult.valid denotes that the output has been computed.

// XXX: Not clear how `start` is compiled since
// its "zero"-time (only used to instantiate things).
// Groups, in comparison, need to take at least one cycle.
start<S>():
  m0 = mult[G = S]  // Starts at time S
  init = r.out;     // init is a wire, r is a register
  jmp pipe<S>(m0, init);

pipe<P>(ml: [P, P + mult.L], left: [P, P + 1]):
  ml.left = left; ml.right = ...;
  // Invocation annotated with its start time; the `@` also binds T = P + mult.L
  mp = mult[G = ml.done] @ {T = P + mult.L}
  br cond,
    // Tie output of previous invocation to next invocation.
    pipe<T>(mp, ml.out),
    finish<T>(ml.out)

// Another funky "zero"-time block
finish<F>(out: [F, F + 1]):
  // ret is special syntax to set `this.valid` to high.
  ret out
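
To sanity-check the timing of this example, here is a small sketch (an illustration under stated assumptions, not part of the writeup) that computes where each pipe iteration starts. It assumes mult has a fixed latency mult.L (hypothetically 3 cycles here) and that chaining via G = ml.done means the next iteration starts exactly when the current multiply finishes, i.e., T = P + mult.L:

MULT_L = 3  # hypothetical latency of `mult`, in cycles

def pipe_start(s, i):
    """Start time of the i-th `pipe` iteration when `start` runs at time s."""
    return s + i * MULT_L

for i in range(3):
    p = pipe_start(0, i)
    print(f"iteration {i}: P = {p}, ml live on [{p}, {p + MULT_L}]")
    # iteration 0: P = 0, ml live on [0, 3]
    # iteration 1: P = 3, ml live on [3, 6], and so on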

There are two observations about this representation:

  1. For each invocation, there is a precise start time: either derived from G or written down using the @ syntax. One of the two syntaxes might be redundant.
  2. For each "jump" to a new basic block, the requirements can be checked completely locally (see the sketch below).

Together, this means that every port has a precise timing guarantee, because each one is derived either from an annotated argument or from a local invocation with a precise start time.
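
For instance, the jump pipe<T>(mp, ml.out) can be checked in isolation. The sketch below assumes ml.out is guaranteed live for one cycle once the multiply finishes, i.e., on [P + mult.L, P + mult.L + 1] (an assumption about mult's interface), and uses a hypothetical mult.L = 3:

MULT_L = 3        # hypothetical latency of `mult`
P = 0             # start time of the current `pipe` iteration
T = P + MULT_L    # start time bound by the `@` annotation

# Assumed guarantee: ml.out is live for one cycle after the multiply finishes.
ml_out_guarantee = (P + MULT_L, P + MULT_L + 1)
# Requirement from pipe's signature: left must be live on [T, T + 1].
left_requirement = (T, T + 1)

def satisfied(guarantee, requirement):
    return guarantee[0] <= requirement[0] and requirement[1] <= guarantee[1]

# The check needs only the caller's local facts, no global schedule.
assert satisfied(ml_out_guarantee, left_requirement)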

Oddities

  • The start basic block is purely combinational: it performs no computation; it just defines variables and passes control to pipe. How are these zero-time groups scheduled? How do they relate to groups in Calyx? Do we need special groups to do this kind of setup?
  • br cond, tru, fal is probably not the right abstraction for conditional flow, since it requires both branches to execute at the same start time T.
  • In general, it seems that statements in a "basic block" can observe only a subset of "events" and "fork" their computations based on those. This is not totally surprising, because we're working with dataflow graphs that might only impose a partial order on the statements, but it does make the reasoning convoluted.

RQ: How do you separate the "controller" (the loop that feeds data into a pipeline) from the computational definition of the pipeline?