Hi Steven, Jonathan-- thanks for the detailed explanations/discussions. I think I'm getting a clearer idea of what's going on.

> Gord is suggesting that it be defined based on the process waking up, independently of where that happens and what the process
> executed before it suspended itself. That definition is more general, and fits with how Verilog is actually defined to work.
> However, if an always block contains multiple event controls and/or delay controls, that could mean there are multiple
> "executions" between the top and the bottom of the always block.

My main problem with that definition is that, in my mind, one essential requirement for our glitch-free asserts is this:

- If a glitch-free immediate assertion is executed (or bypassed due to conditionals) only once in procedural code, and it fails, that failure should be reported.

The process-waking-up model breaks that requirement in examples like the #0 case we discussed (the first sketch at the end of this message shows what I mean). I think this could be very dangerous in its potential to deceive RTL designers. For that reason, I think we should try to define this in terms of the process-entry model if it's at all viable.

> The correct model of an always block is that "always" is equivalent to writing "initial forever". It has a single process
> that is created before simulation starts, and starts executing at the beginning of the statement inside it. If it reaches the
> end of that statement, it loops back to the beginning of the statement and executes it again, forever. The event control at
> the top does not launch a new process; it stops the existing process until the event control occurs.

...

> If you try to base it on execution reaching the bottom or top of the block, this doesn't work. It might work for always_comb.
> But for an always block with an explicit event control at the top, it reaches the bottom and then the top just before it
> stops, not just after it starts. So the violations would always get discarded in this case with this definition.

I'm not sure I see how this causes a major problem with the "flush deferred assertions at the top of the block" model we were discussing before. Why can't we define the flushing to occur after the event control at the top of each procedural block? This way the flushing, like the rest of the code, restarts each time the event is triggered and execution of the block begins (the second sketch below marks the spot I mean). I think as long as our description is precise enough (hoping you guys will help me with that), this should be well-defined.
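First sketch: the #0 case. This is roughly the example I have in mind. The "assert #0" spelling for the glitch-free form is just a strawman (we haven't settled on syntax), and the module/signal names are made up; the comments mark where the two models diverge:

  module flush_demo;
    bit clk, err;

    always @(posedge clk) begin
      // A glitch-free (deferred) immediate assertion; it executes
      // exactly once per activation of this block.
      a_chk : assert #0 (!err) else $error("err was high at posedge clk");

      #0; // the process suspends and resumes within the same time step

      // Under the process-waking-up model, resuming from the #0 above
      // counts as a fresh "execution", so a pending failure of a_chk
      // is flushed here and never reported, even though the assertion
      // was executed only once.  Under the process-entry model, the
      // failure would still mature and be reported.
    end

    initial begin
      err = 1;
      #1 clk = 1; // a_chk fails; the #0 then discards the pending report
      #2 $finish;
    end
  endmodule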
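Second sketch: the flush point I'm proposing, on the same always-block shape Steven described. Same strawman caveats as above; the comment marks where I think the flushing should be defined to happen:

  module flush_at_top;
    bit clk, err;

    always @(posedge clk) begin
      // Proposed: the process's pending glitch-free assertion reports
      // are flushed HERE, immediately after the event control at the
      // top fires and before the first statement of the body runs.
      // Merely reaching the bottom of the block and looping back to
      // the "always" would not flush anything; only the event control
      // firing would.  So the flush restarts with each triggering of
      // the block, like the rest of the code.
      a_chk : assert #0 (!err) else $error("err was high");
    end
  endmodule

Note that with this definition the single failure in the first sketch is still reported: the only flush happens after the @(posedge clk) fires, before a_chk executes, so the #0 does not discard the pending report.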