My biggest concerns with checkers are their interactions with procedural code. These concerns do not apply just to checkers, but to concurrent assertions in procedural code in general (and, extending this further, to proposals such as Mantis 1995). I have heard a number of members of this and other SV committees express surprise when they heard that this was allowed, and at the intended semantics. I was surprised and concerned when I heard about it. I would like to describe some of my concerns here.

The starting point of this was IEEE Std 1800-2005 subclause 17.13.5, "Embedding concurrent assertions in procedural code". It may look rather innocuous on first reading, but there are significant issues with it. From the start it assumes that all procedural code is written in such a way that cycle semantics can be inferred from it, which is an invalid assumption. It talks about "the clock from the event control of an always block." Note that it simply assumes there is exactly one event control in an always block. There appears to be a further assumption that this event control must appear at the start of the always block, though this is less clear.

I understand that this is being clarified for the next revision of the standard. Rules are being created for when a clock should be inferred and what to do if it cannot be (presumably an error if there is a concurrent assertion in the always block). From the viewpoint of the language in general, such rules are arbitrary, and they are a kludge. Even from a synthesis view, these rules are unlikely to match those of any specific synthesis tool. Such rules would be more justifiable if they started by being restricted to an always_ff construct, and the LRM defined restrictions for always_ff that ensured a clock could always be inferred.

A more serious concern is that although the concurrent assertion is embedded in the procedural code, it does not execute when the flow of execution reaches it.
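To make that contrast concrete, here is a sketch (the signal names are mine, not from the LRM): an immediate assertion executes inline with the procedural flow, while an embedded concurrent assertion does not:

    always @(posedge clk) begin
      a = b + c;
      // Immediate assertion: checked right here, in execution order,
      // using the value of 'a' just computed by the assignment above.
      assert (a == b + c);
      // Embedded concurrent assertion: despite its position in the code,
      // it is evaluated later, in the Observed region, using values of
      // a, b, and c sampled in the Preponed region -- before the
      // assignment above ever ran on this clock edge.
      ap: assert property (a == b + c);
    end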
It can refer to the same variables as the procedural code around it, but it does not reference the same values for those variables as the procedural code. Instead, it executes in the Observed region and references values sampled in the Preponed region. This is inconsistent with everything else in procedural code, and seems likely to lead to much confusion and many mistakes.

As an example, the procedural code may assign a value to a variable, and then have a concurrent assertion that appears to use that value. In fact it will use a sampled value of that variable. If the variable's value changes again before it is next sampled, the assertion will never see that value. Even if it doesn't, the assertion is using a value that runs one clock behind the rest of the procedural code it is in.

Assertion writers may take care to avoid referring to such intermediate variables explicitly in assertions. But what about inferred enabling conditions? One of the advantages of putting these assertions inside procedural code is supposed to be that the enabling conditions are inferred automatically from the procedural if and case conditions. Just write normal procedural code, and the inference process will take care of the enabling conditions. But what if those enabling conditions turn out to involve intermediate variables? If you aren't explicitly specifying the conditions, but are trusting the implicit rules, you may not notice when this happens.

Take the case-statement example at the end of the subclause. Suppose it were written as:

    always @(posedge mclk) begin
      a = {sel1, sel0};
      case (a)
        1: begin
          q <= d1;
          r4_p: assert property (r4);
        end
        default: q1 <= d1;
      endcase
      a = something else;
      case (a)
        ...
    end

If I understand things correctly, this doesn't work as you would expect. It works differently than if you had written "case ({sel1, sel0})".
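If I have the semantics right, the embedded r4_p above behaves roughly like this standalone assertion (my paraphrase of the inferred clock and enabling condition, not LRM text):

    r4_p: assert property (@(posedge mclk) (a == 1) |-> r4);

where 'a' is the value sampled in the Preponed region, i.e. whatever 'a' held before the blocking assignment a = {sel1, sel0} executed on this clock edge. Had the case expression been {sel1, sel0} directly, the inferred condition would instead be built from sel1 and sel0 themselves, with no intermediate variable to go stale.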
The assertion will sample the value of 'a' in the Preponed region, not at the point where execution reaches the concurrent assertion and its value has actually been computed. So the assertion will have a stale value, unless of course the value gets changed later in the procedural code, in which case the assertion doesn't even get a stale value. This looks like a serious problem to me.

You may be able to spot it fairly easily in a simple case like this one. But the advantages of embedding assertions are supposed to be closeness to the related procedural code and not having to specify the clock and enables explicitly. For those advantages to be significant, the procedural code must be large and the enables complex. So this problem will not be easy to spot. Even if you are careful when you insert the assertion, somebody may come along later and modify the procedural code to introduce an intermediate variable.

This all gets worse if Mantis 1995 is included. My example of using an intermediate variable and then re-using it for another value later may have seemed unusual. But if you are writing code inside a loop, it is normal to have an intermediate variable that gets re-used each time around the loop.

Another, more minor, issue involves the use of tasks and functions. Normally you can take procedural code and modularize it, moving pieces into tasks and functions. But if the procedural code contains concurrent assertions, it cannot be moved into a subroutine: there is no way to infer clocks and enables for a concurrent assertion in a subroutine. So the use of concurrent assertions in procedural code will prevent modularizing it. To me, the fact that procedural code can normally be moved into tasks, but concurrent assertions cannot, is another indication that concurrent assertions do not belong in procedural code.
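A sketch of that modularization point (hypothetical names of my own): the procedural statements move into a task cleanly, but the embedded assertion cannot follow them, because inside the task there is no event control from which to infer a clock, and no surrounding if/case from which to infer an enable:

    // Original inline code with an embedded concurrent assertion:
    always @(posedge clk) begin
      if (en) begin
        q <= d;
        a1: assert property (p);  // clock and enable inferred from context
      end
    end

    // Attempted refactoring -- the procedural code moves, but the
    // assertion has nothing to infer a clock or enable from in here:
    task update_q;
      if (en) begin
        q <= d;
        // a1: assert property (p);  // no clock to infer -- cannot move
      end
    endtask

    always @(posedge clk) update_q;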
Steven Sharp
sharp@cadence.com

Received on Fri, 7 Mar 2008 21:40:38 -0500 (EST)