Reconsider simulator interface #228
The reason for this is to ensure determinism when there are multiple processes on the same clock communicating through signals. The result is independent of the order in which those processes are run. |
Was this changed already? This prints

```python
from nmigen import *
from nmigen.back.pysim import *


class Dummy(Elaboratable):
    def __init__(self):
        self.x = Signal()
        self.y = Signal()

    def elaborate(self, platform):
        m = Module()
        m.d.sync += self.y.eq(self.x)
        return m


if __name__ == "__main__":
    dummy = Dummy()
    with Simulator(dummy) as sim:
        def process():
            yield dummy.x.eq(1)
            print((yield dummy.x))
            yield dummy.x.eq(0)
            print((yield dummy.x))
        sim.add_clock(1e-6)
        sim.add_sync_process(process)
        sim.run()
```
You aren't using |
Yes. It is a dummy signal to work around #262. |
I didn't intend to change this. I think the reason you see the behavior you do is that I did not understand the design of the oMigen simulator, so I didn't implement it entirely correctly.
OK, so this works like VHDL. That's good. Unfortunately, the problem I outlined in this issue stands: not being able to write a single-cycle

I've been thinking about this problem and came to the conclusion that there should probably be three kinds of simulator processes:
|
What about an option to run synchronous processes at twice the clock rate? For example, a synchronous process could yield a full cycle (default), or a high half-cycle, or a low half-cycle. At the beginning of a cycle, it could yield either a full cycle (default) or a high half-cycle. At the middle of a cycle it could yield only a low half-cycle. Doing otherwise raises an error, which ensures that processes are always "in phase" with the clock. This is backward compatible, can be deterministic using the VHDL-style behavior, and supports an arbitrary number of processes. |
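The phase rule proposed above can be sketched as a toy scheduler in plain Python. All names here (`FullCycle`, `HighHalf`, `LowHalf`, `run`) are hypothetical, not nMigen API: yielding a full cycle or a high half-cycle is legal at the start of a cycle, only a low half-cycle is legal mid-cycle, and anything else raises.

```python
# Toy scheduler for the half-cycle proposal. FullCycle/HighHalf/LowHalf and
# run() are hypothetical names, not nMigen API.
class FullCycle: pass   # yield a whole clock cycle (cycle start -> cycle start)
class HighHalf:  pass   # yield the high half-cycle (cycle start -> middle)
class LowHalf:   pass   # yield the low half-cycle (middle -> next cycle start)

def run(proc):
    """Drive one generator process, erroring on any yield that would take
    it out of phase with the clock."""
    phase = 0               # 0 = start of a cycle, 1 = middle of a cycle
    trace = [phase]
    try:
        cmd = next(proc)    # run to the first yield
        while True:
            if phase == 0 and isinstance(cmd, (FullCycle, HighHalf)):
                phase = 1 if isinstance(cmd, HighHalf) else 0
            elif phase == 1 and isinstance(cmd, LowHalf):
                phase = 0
            else:
                raise RuntimeError("process yielded out of phase")
            trace.append(phase)
            cmd = proc.send(None)
    except StopIteration:
        return trace

def tb():
    yield FullCycle()   # allowed at cycle start
    yield HighHalf()    # allowed at cycle start, ends mid-cycle
    yield LowHalf()     # only thing allowed mid-cycle

assert run(tb()) == [0, 0, 1, 0]
```

A process that yields, say, two `HighHalf` commands in a row would raise, which is the "always in phase" guarantee the comment describes.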
The basic idea is sound, thanks. But, what happens if someone uses a negedge clock domain (#185) in the design? There are many valid reasons to do so, and even some FPGA primitives (Xilinx UltraRAM) trigger on both clock edges. So I think we should change it a bit: rather than use a half cycle, run all "sync" processes at the posedge (or the negedge) as it currently happens, and then run all "testbench" (or whatever they are called) processes simultaneously, a very small amount of time later (however much it takes for combinatorial logic to propagate, and for "comb" processes to run as well). |
I would want to differentiate that What you actually want instead is to be able to differentiate between yielding to update the sync signals and yielding to settle the comb signals. |
In terms of the UI, currently This solves an important issue where something like |
And this extends to multiple clock domains accordingly (sync cycles executed in the pattern defined by their timing beat and comb/delta cycles in between). A process should be able to specify what kind of cycle it wants to yield for: not just a bare |
I don't think |
Can you provide some examples of those reasons? |
Otherwise |
E.g. if you don't have any synchronous clock domain.

```python
yield stb.eq(1)
yield  # settle comb
assert (yield busy)
yield stb.eq(0)
yield  # settle comb
assert not (yield busy)
```
Comb processes are something I wanted to introduce explicitly (that's the second oldest nMigen issue, #11). In my mind, they would require specifying the exact set of signals they will wait on during creation, and then always wait on exactly those signals; that way, they would have no choice but to be well behaved. However, that would not work for the snippet you show. And I think you are making a very good point with it. I'm inclined to agree that a separate command for comb settling is desirable. |
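One way to picture such well-behaved comb processes is a toy scheduler (the names here are hypothetical, not the proposed nMigen API) where each process registers its exact sensitivity set up front and is woken only by those signals:

```python
# Toy sketch (hypothetical API, not nMigen): comb processes declare the exact
# set of signals they wait on at creation, and are woken only by those.
class CombScheduler:
    def __init__(self):
        self.sensitive = {}                  # signal name -> processes

    def add_comb_process(self, proc, signals):
        next(proc)                           # run to the first yield
        for name in signals:
            self.sensitive.setdefault(name, []).append(proc)

    def signal_changed(self, name):
        # wake every process whose declared sensitivity set contains `name`
        for proc in self.sensitive.get(name, []):
            try:
                proc.send(name)
            except StopIteration:
                pass

values = {"a": 0, "b": 0}
log = []

def follower():
    while True:
        changed = yield                      # woken only on "a" or "b"
        log.append((changed, values["a"] ^ values["b"]))

sched = CombScheduler()
sched.add_comb_process(follower(), ["a", "b"])
values["a"] = 1; sched.signal_changed("a")
values["b"] = 1; sched.signal_changed("b")
assert log == [("a", 1), ("b", 0)]
```

Because the sensitivity set is fixed at creation, such a process cannot block on an arbitrary condition mid-run, which is what makes it "well behaved" but also what fails for the `stb`/`busy` snippet above.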
Perhaps a trio of |
Yes. |
The |
Ah. But |
Do you even need to differentiate |
No, but for that matter,

One option would be to get rid of |
I'm confused by this paragraph. The current behavior is something like "settle, stop just before updating sync, run all processes", not "settle, update, settle". Or I misunderstand you. |
Yes. |
The current behavior of |
And "stop just before updating sync" is in fact a NOP because after settling comb nothing would happen until the next sync update. |
You're right. We want the same thing but I was not describing it precisely. Thank you for this discussion. I now have a very clear picture of what the next iteration of the simulator should look like, and I am happy with every aspect of it. |
A negedge clock domain is just like another clock domain but with the clock shifted 180 degrees. If simulation supports multiple clock domains (it should) then there is nothing really special about it, I think? |
There is if there are two clock domains defined to be clocked by different edges of the same signal. Then you have the same problem in a single process that interacts with both. |
Not if such processes are not allowed (and I think they should not be allowed: a synchronous process must be associated with a single clock signal and a single sensitivity edge). |
Among other things this restriction would make it impossible to simulate ResetSynchronizer, which is something that nMigen does today. So it would be a step back. |
@jordens Bikeshed: what should be the names for the new commands? I propose:
|
As an additional nicety, I propose an ability to attach generator functions to fragments, so that any given fragment could be implemented either using nMigen code or Python code completely transparently to its users. |
You need to define whether the

One thing that I'm annoyed by: In many processes, I'd like to return the data that I have gathered during the run. But with the current setup, I have to pass in a handle or use a global and attach all the data to that. With the Python async/await structure it seems to me that one could solve this in a better way. Make the simulator behave like the asyncio event loop, and have:

```python
async def process(dut):
    await Settle()
    await dut.x.eq(1)
    log = []
    for i in range(100):
        await WaitSync("sys")
        log.append(await dut.y)
    return log
```

Maybe there are other niceties that would come with this for free (e.g. hooking up vpi or yosys or other external components). |
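As a minimal illustration of this idea (the simulator acting as the event loop, and the process handing back its gathered data), here is a self-contained toy. `Settle` and `WaitSync` follow the names used in the comment above; the loop itself is an assumption for illustration and models no real signals or time:

```python
# Minimal toy of "simulator as event loop": async processes await command
# objects, and the process's return value comes back to the caller.
class Settle:
    def __await__(self):
        yield self

class WaitSync:
    def __init__(self, domain):
        self.domain = domain
    def __await__(self):
        yield self

def run(coro):
    """Drive one coroutine; count WaitSync commands as clock edges and
    return (process return value, edges elapsed)."""
    edges = 0
    try:
        cmd = coro.send(None)
        while True:
            if isinstance(cmd, WaitSync):
                edges += 1       # one clock edge in the named domain
            # a real loop would resolve comb logic on Settle here
            cmd = coro.send(None)
    except StopIteration as e:
        return e.value, edges

async def process():
    await Settle()
    log = []
    for i in range(3):
        await WaitSync("sys")
        log.append(i)
    return log                   # data gathered during the run

result, edges = run(process())
assert result == [0, 1, 2] and edges == 3
```

The key mechanism is that `StopIteration.value` carries the coroutine's return value back to whoever ran it, which is exactly the "return the gathered data" ergonomics being asked for.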
And maybe we want to get rid of |
I think they should not. For sync logic: without the final

For comb logic: it doesn't make sense to have

Assuming that having full feature parity between simulator tasks implemented in HDL and ones implemented in Python is important (and I think it is quite important, as it allows you to gradually migrate code from Python to HDL, or replace slow numeric HDL with fast numpy-based code), we should have a non-settling version of these commands. We could also have a settling version of
Integration with async/await is on my radar, mostly because it makes it possible to write integration tests where host Python code talks to device nMigen code through some PC interface (USB, Ethernet, ...) by cutting out the interface logic and directly driving simulated FIFOs. Glasgow applets are tested that way, for example.

However, I'm not yet convinced this is a good approach in general, because I don't know how to integrate nicely with the asyncio event loop. Simulator tasks that call async functions are easy (the simulator only needs to block if it receives one, or have an async run method, or both), but the other way around seems very gnarly.

For your use case, we could return a handle from
Not as far as I know. I have a Yosys-based C++ simulation generator planned as well, but, like VPI, it would export a C API. If you wanted to use it with asyncio you would have to wrap it first, which is actually more work.
It is, if you are writing a task using
You can use that command right now (in the old simulator). In the new simulator, which can support

```python
while True:
    yield WaitComb(dut.y)
    yield dut.x.eq(dut.y)
```

does the exact same thing as:

```python
m = Module()
m.d.comb += dut.x.eq(dut.y)
```
Nevermind the previous comment. I think that's an unrelated bug. |
Let me elaborate on the problem I have with

I think the reason might lie in the fact that commands of different types are passed through the same generator interface and that the |
With
```python
from nmigen import *
from nmigen.back.pysim import *


class Dummy(Elaboratable):
    def __init__(self):
        self.x = Signal()
        self.y = Signal()

    def elaborate(self, platform):
        m = Module()
        z = Signal()
        with m.If(~z):
            m.d.sync += self.y.eq(1)
        m.d.sync += z.eq(1)
        return m


dummy = Dummy()
with Simulator(dummy) as sim:
    def process():
        yield
        # a)
        yield dummy.y.eq(dummy.x)
        yield
        # b)
        i = yield dummy.x
        yield dummy.y.eq(i)
    sim.add_clock(1e-6)
    sim.add_sync_process(process)
    sim.run()
```
Do you mean it's not clear in general, or not clear in comparison to b)? I'm not sure what the case b) is demonstrating. It is exactly equivalent to a) in your code.
This isn't possible to implement because simulation does not (and should not) mutate the design itself, just like synthesis doesn't, and similarly, there shouldn't be a 1:1 correspondence between designs and simulator instances. Some alternatives are
I don't think
I think your view of the simulator is unnecessarily limiting. I see nMigen statements as describing behavior. The statement

I think the ability to use nMigen values in commands, as opposed to just signals, is valuable. It allows performing computations like nMigen does, which can otherwise be a major hassle. Bit-slicing, wrapping arithmetic, signed arithmetic, match with wildcards, ... all would otherwise require the user to reimplement tricky and already well-tested parts of nMigen on their own, introducing subtle bugs.

I see absolutely no reason to prohibit adding (or removing) logic during simulation. There is no technical reason to do so and in fact it would be more work to prevent that. Conversely, it can be useful when simulating a design that uses partial reconfiguration, or perhaps a simulation of an entire platform with peripherals that are hot-swapped.

To summarize, I now see nMigen as a pair of languages that are duals and appear on even terms: a synchronous hardware [description] language and a synchronous [reactive] software language. I am certain that the ergonomics of this arrangement can be improved, but I do not understand why you want to strip it down to something much less elegant and useful. |
Pretty sure it's not. They are executed in different clock cycles.

The next question is: assuming that elaboration defines whether a signal is a register or a combinatorial value, can simulation override that behavior? You say both

and

In my understanding of the terms this is strictly contradictory.

AFAICT currently altering the logic in a design is actually prevented by the fact that you generally don't have access to the elaborated

I don't see the problem with |
Sorry, I was unclear. I mean that this code:

```python
def process():
    yield
    # a)
    yield dummy.y.eq(dummy.x)
    yield
    # b)
    i = yield dummy.x
    yield dummy.y.eq(i)
```

is exactly equivalent to this code:

```python
def process():
    yield
    # b)
    i = yield dummy.x
    yield dummy.y.eq(i)
    yield
    # a)
    yield dummy.y.eq(dummy.x)
```

in terms of simulation behavior. (Of course the variable
I would say no: if you did that, then the simulation would end up in an inconsistent state, and there's no case where this would be useful or deterministic. So if you add a combinatorial function to the simulation, either with
That assert would always fail without
Let me state this more clearly. "The design" is a collection of user objects that inherit from Elaboratable. After they are created, these user objects (normally) contain only nMigen Signals; they do not immediately create any Modules or Statements. So in typical user code that looks something like:
the

Moreover, the following code is not only allowed but desirable:
because it allows e.g. parallelizing tests. (One can also imagine copying a simulator so as not to perform elaboration twice.)
Intentional. Migen finalization had a severe issue where (a) finalization order is not controllable or reliable and (b) the correctness of your design sometimes depends on it. E.g. m-labs/migen#158 / m-labs/microscope#1. The reason this issue happens is this code:

Consider what happens if you want to add a microscope probe that samples the state of some FSM. FSMs use finalization, and the state signal does not exist until that happens, so you have to add another finalization method to insert the probe. But, microscope also uses finalization to insert probe logic. There is an implicit dependency from the microscope core module to every microscope probe, even those which may not have been created yet.

I think the only way to solve this problem while keeping finalization itself would be to iterate the design to fixpoint (i.e. until finalization does not create any further logic). In fact, Migen does one iteration of that (collecting the submodules that finalization added), but stops afterwards. However, it would be difficult to write code for e.g. microscope that can robustly handle this; you might find that you need to resize a FIFO for example, which isn't really possible right now.

nMigen avoids the problem entirely by replacing finalization (which repeatedly mutates design logic in the sequence chosen by Migen, and can be performed at most once) with elaboration (which creates design logic from scratch in the sequence chosen by the user, and can be performed however many times). In other words, the

Is it a panacea? No. There is nothing that inherently stops you from changing the design in the

(The other reason I chose it is because Migen had the convention where
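The finalization-vs-elaboration contrast can be shown with a toy in plain Python (no Migen/nMigen involved, all classes here are illustrative): in-place mutation forces at-most-once semantics, while rebuilding from scratch is naturally repeatable.

```python
# Toy contrast (plain Python, not Migen/nMigen): finalization mutates the
# design in place and so can run at most once; elaboration rebuilds logic
# from scratch and can run any number of times.
class Finalized:
    def __init__(self):
        self.logic = []
        self.finalized = False

    def finalize(self):
        assert not self.finalized, "finalization already happened"
        self.logic.append("fsm state register")   # mutates the design
        self.finalized = True

class Elaborated:
    def elaborate(self):
        return ["fsm state register"]             # fresh logic every call

f = Finalized()
f.finalize()          # fine once; a second call would trip the assert

e = Elaborated()
assert e.elaborate() == e.elaborate()        # same content...
assert e.elaborate() is not e.elaborate()    # ...but built from scratch
```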
Note that the above does not in any way impact adding or removing logic during simulation. Consider the following code, where

```python
cd_sync = ClockDomain()

m = Module()
m.clock_domains += cd_sync
m.submodules += sub1, sub2

sim = Simulator(m)
```

With a 2-line change to the new simulator to add a public

```python
cd_sync = ClockDomain()

m1 = Module()
m1.clock_domains += cd_sync
m1.submodules += sub1

m2 = Module()
m2.clock_domains += cd_sync
m2.submodules += sub2

sim = Simulator()
sim.add_hdl_process(m1)
sim.add_hdl_process(m2)
```

The behavior would be exactly equivalent. In fact, this is almost identical to what

Why is it this easy? Well, because the new simulator is designed to make it possible to use Python code as a drop-in replacement for e.g. the HDL in
I think there is no conceptual problem with your proposal. It is an implementation issue.

```python
async def foo():
    await simulator.interval(1.)

asyncio.run(foo())
```

I don't know a way to make the above work nicely without adding a massive amount of overhead by creating lots of futures internally in the simulator. I know how to do it in a way where the simulator becomes the event loop (although that still has considerable overhead beyond what we can do with generators), but then you couldn't use asyncio futures together with the simulator.

Given that a lot of people complain that the old nMigen simulator is very slow, and that I put in a lot of effort making the new nMigen simulator faster (it is currently 4× faster than the old nMigen simulator, and should be 2× faster than the Migen simulator while being significantly more flexible), I have a lot of reservations about adding overhead there. |
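To make the overhead concern concrete, here is a toy micro-benchmark (not the nMigen simulator; both variants just count steps) comparing a process resumed by plain generator `next()`/`send()` with one driven through per-step asyncio futures; on CPython the future-based loop is typically several times slower:

```python
import asyncio
import timeit

N = 5_000  # steps per run; purely illustrative workload

# Generator-driven process: the "simulator" resumes it with plain next()/send().
def gen_proc():
    count = 0
    for _ in range(N):
        yield            # one simulation step
        count += 1
    return count

def run_gen():
    proc = gen_proc()
    try:
        while True:
            next(proc)
    except StopIteration as e:
        return e.value

# asyncio-driven process: every step allocates and awaits a Future.
async def aio_proc(loop):
    count = 0
    for _ in range(N):
        fut = loop.create_future()
        loop.call_soon(fut.set_result, None)   # step "completes" immediately
        await fut
        count += 1
    return count

def run_aio():
    loop = asyncio.new_event_loop()
    try:
        return loop.run_until_complete(aio_proc(loop))
    finally:
        loop.close()

assert run_gen() == run_aio() == N
print("generators:", timeit.timeit(run_gen, number=5))
print("futures:   ", timeit.timeit(run_aio, number=5))
```

Absolute numbers are machine-dependent; the point is only the per-step cost of future allocation and event-loop scheduling versus a bare generator resume.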
I think I figured out semantics that would result in a robust simulator. I'm going to write a detailed description in a separate document. |
The oMigen simulator had a strange interface where user processes would run after a clock edge, but before synchronous signals assume their new value. Not only is it very hard to use and bug-prone (my simulation testing workflow, for example, involves adding `yield` until tests pass...), but it also makes certain patterns impossible!

For example, let's consider the `FIFOInterface.write()` simulation function: it has to take two clock cycles. This is because, after the first `yield`, it is not yet possible to read the new value of `self.w_rdy`; running `(yield self.w_rdy)` would return the value from the previous cycle. To make sure it's updated, it is necessary to deassert `self.w_en`, and waste another cycle doing nothing. Because of this, I've removed these methods from nMigen FIFOInterface in a57b76f.
I think we should look at prior art for cosimulation (cocotb?) and adopt a usage pattern that is easier to use. This is complicated by the fact that it is desirable to run oMigen testbenches unmodified.