There has been considerable discussion regarding how to handle unique/priority "assertion" failures in the context of glitches. Someone here came up with a different way of thinking about the problem that I think clarifies and simplifies things while producing behavior that addresses user issues.

The basic idea is that as a *process* executes, any unique/priority failure is determined but NOT reported. The deferred report is associated with the *process* and is emitted in the Observed region. If the process re-evaluates prior to the Observed region, all pending unique/priority failures are discarded.

This cleanly handles all scenarios, including function and task enables, internal delays, etc. It is easy to reason about and completely covers the truly problematic cases that have been raised. There are a few edge cases involving re-triggering with internal process side effects where one could argue that reports might get lost, but those cases would almost always be subject to simulator-dependent scheduling and other effects that would likely lead to ill-behaved designs.

Given the trade-offs here, we believe that the suggested conceptual approach is a simple and effective solution to the problems that have been raised. If there is general consensus on this in BC, we can move forward with writing up a proposal.

I've also raised this conceptual approach with AC during the recent face-to-face, as a variation of immediate assertions for cases where glitch stability is an issue. I haven't been tracking their decisions since I've been a tad busy lately, but they are aware of this as a conceptual approach.

Gord.
--
--------------------------------------------------------------------
Gordon Vreugdenhil                                503-685-0808
Model Technology (Mentor Graphics)                gordonv@model.com

Received on Thu Oct 11 09:44:53 2007
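If it helps to see the mechanics, here is a minimal Python sketch of the deferral idea above. The names (`Process`, `Scheduler`, `observed_region`, etc.) are my own illustrative inventions for this email, not LRM terminology or proposal text; a real simulator would of course fold this into its event scheduler.

```python
class Process:
    """Models one always-block-like process with deferred violation reports."""
    def __init__(self, name):
        self.name = name
        self.pending = []  # unique/priority failures determined but NOT yet reported

    def record_violation(self, msg):
        # As the process executes, a failure is determined but not reported.
        self.pending.append(msg)

    def retrigger(self):
        # Re-evaluation prior to the Observed region discards all pending
        # failures -- the earlier evaluation was a glitch.
        self.pending.clear()

class Scheduler:
    def __init__(self):
        self.processes = []

    def observed_region(self):
        # Surviving deferred reports are emitted in the Observed region.
        reports = []
        for p in self.processes:
            reports.extend(f"{p.name}: {m}" for m in p.pending)
            p.pending.clear()
        return reports

# Glitch scenario: a violation seen on a first, glitchy evaluation vanishes
# once the process re-evaluates with settled inputs before the Observed region.
sched = Scheduler()
p = Process("always_comb_1")
sched.processes.append(p)

p.record_violation("unique case: no match")   # glitchy first evaluation
p.retrigger()                                 # inputs settle, process re-runs
print(sched.observed_region())                # prints [] : glitch report discarded

p.record_violation("unique case: no match")   # violation persists this time
print(sched.observed_region())                # prints the one surviving report
```

The point of the sketch is the asymmetry: a report only survives to the Observed region if no re-evaluation of its process intervenes, which is exactly what makes glitch reports disappear without any per-statement filtering.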
This archive was generated by hypermail 2.1.8 : Thu Oct 11 2007 - 09:45:02 PDT