Thanks for the reply, Gord. I understand that this kind of thing gives
designers and vendors the heebie-jeebies, and not without good reason -
nobody wants things that "worked fine" before to start changing when
there's a schedule to meet, or to make it any harder than it already is
for vendors to get a stable software release out the door (I've done
tools, I sympathize...). My position is really that of a verification
person - not just a testbench author but, more importantly, a guarantor
of RTL correctness. My intent was to demonstrate that there are also
risks in leaving things as they are (even for designs that appear to
work), and that those risks are greater in the long run.

--Mike

Gordon Vreugdenhil wrote:
> Mike, thanks for the comments here. Since you got to
> pontificate, I thought that I'd return the favor... :-)
>
>
> The concerns that I touched on in the meeting (and which led to
> the wider concerns) were pragmatic in nature -- I was concerned
> that customers would in fact have divergent expectations in
> terms of short-circuiting semantics in various scenarios and
> expect the tool to reflect each of those sets of expectations.
>
> After further discussion here, we believe that, at least for
> us, the risk to customers from changing our tool set is fairly
> low and unlikely to run into serious customer opposition. As a
> result, I would now be prepared to vote in favor of the proposal.
>
> There is a fine line for vendors here -- having a "permissible"
> optimization become a "required" semantic behavior is not
> in fact a no-risk change (even if the vendor normally does
> perform the permissible optimization), particularly when
> considered across a broad range of tools and users. From an
> end-user perspective, this often shows up as new switches being
> added to tools in order to control "expectation compatibility"
> with various users, and actual tool compatibility with various
> tools and various versions from various vendors. In the long
> run, large cross-products of behaviors lead to less stability,
> as vendors' ability to test combinations of behaviors is
> compromised.
>
> All of this leads me, as a developer, to be very conservative
> about *mandating* such a change. Although I agree that
> "backwards compatibility" is not really the issue (in the
> sense that you were talking about), vendor concerns must
> be very broad and deal with "expectation compatibility" as
> much as with real "LRM compatibility". That is often a much
> more delicate balance, and it can come down to pragmatic
> concerns and the need to work through the likely end-user
> impact across a wide variety of customers, some of whom are
> not nearly as tolerant of any change in behavior, regardless
> of whether that change is LRM-compliant or not.
>
> Gord.
>
>
> Michael Burns wrote:
>>
>> Hi folks,
>>
>> There was a request at the last meeting for some example code
>> demonstrating the short-circuiting operator issue. Will Adams and I
>> created the following example. Recent versions of three major
>> simulation tools all short-circuit the logical AND and conditional
>> operators, but an older version (2005) of at least one of these
>> tools does not. Below is the example and two output files - one for
>> short-circuit and one not.
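>>
>> The essence of the example is code along these lines (a minimal
>> sketch only, not our actual test - the names and values here are
>> made up):
>>
>>   module short_circuit_demo;
>>     int n_calls = 0;
>>
>>     // Side-effecting function: evaluating it is observable.
>>     function automatic bit bump();
>>       n_calls++;
>>       return 1'b1;
>>     endfunction
>>
>>     initial begin
>>       bit a = 1'b0;
>>       int x;
>>       // Logical AND: a short-circuiting tool never calls bump(),
>>       // since the left operand is already false.
>>       if (a && bump())
>>         $display("unreachable");
>>       // Conditional operator: a short-circuiting tool evaluates
>>       // only the selected operand (here, the 0).
>>       x = a ? bump() : 0;
>>       // Short-circuiting tools print n_calls = 0; tools that
>>       // evaluate both operands print a nonzero count.
>>       $display("n_calls = %0d", n_calls);
>>     end
>>   endmodule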
>>
>> As an added bonus, I will indulge in some pontificating. My stance
>> on the issue is:
>>
>> 1. Freescale wants the standard to define a portable language -
>> that's the most important reason we make standards. Leaving a
>> common, useful usage of common operators undefined/ambiguous is
>> clearly not a portable definition.
>>
>> 2. This change might require some vendors to change
>> implementations, and possibly some users to change code. There is
>> disagreement over whether this constitutes "breaking backwards
>> compatibility" or not. I strongly believe the term is misused in
>> this case. When implementations of useful features diverge due to
>> lack of standardization (which is the case here - the behavior of
>> these operators is not unambiguously defined in the standard), the
>> right thing to do is to standardize, even if it means someone would
>> have to change. If there are practical obstacles to standardizing,
>> let's get them on the table and try to do the best we can; but I
>> think it is wrong to avoid standardizing on principle because it
>> would require someone to change.
>>
>> 3. I fear that if synthesis is allowed to omit short-circuiting
>> logic in the interests of optimization, there will be pressure on
>> formal vendors to follow that interpretation, leaving us (someday)
>> with a design that simulates properly and passes formal
>> verification, but turns out to have bugs after we've spent a bundle
>> on masks and consumed months of schedule. Ouch!
>>
>> 4. If people don't like this proposal, I can live with that, as
>> long as we choose something that removes the ambiguity. Even if
>> it's something crazy (e.g., only short-circuit in programs, or only
>> short-circuit for certain kinds of operands), that's better than
>> what we have now. However, it is critical that no allowance be made
>> for other applications (synthesis, formal, coverage, etc.) to
>> choose a different semantic from simulation; otherwise we leave the
>> door open for bugs.
>>
>> 5. Note that I am not arguing for removing all ambiguity across the
>> board. Just this week I argued in favor of leaving the evaluation
>> order of unique case item expressions undefined in some cases (a
>> sketch below shows why that ordering is even observable). There is
>> a cost-benefit analysis that must be done for each of these
>> decisions. For the case item expressions, my judgment is that it is
>> both rare and bad style to rely on this ordering, and I would not
>> want to sacrifice performance for it. If it were possible to
>> disallow it, I would. With the logical operators, on the other
>> hand, we can argue about style, but it is certainly not rare for
>> users to rely on short-circuiting, particularly in testbenches. The
>> only benefit I see in allowing the current state of affairs to
>> continue is that people with fragile RTL models will not have to
>> risk their synthesis results changing ("fragile" in the sense that
>> they rely on the ambiguity going one way, and produce bad results
>> if it goes the other). Of course, such designs are also precisely
>> the ones at risk of suffering from item 3 above. The cost of
>> leaving things as they are is portability and verifiability.
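>>
>> To be concrete about why that ordering is observable at all (again
>> a made-up sketch, not code from any real design): only item
>> expressions with side effects can detect it, and that is exactly
>> the style I'd call bad:
>>
>>   module unique_case_order;
>>     int last_probe = -1;
>>     int sel = 1;
>>     logic y, a = 1'b0, b = 1'b1;
>>
>>     // Side effect records which item expression ran last.
>>     function automatic bit hit(input int which);
>>       last_probe = which;
>>       return (sel == which);
>>     endfunction
>>
>>     initial begin
>>       unique case (1'b1)
>>         hit(0): y = a;
>>         hit(1): y = b;
>>       endcase
>>       // Whether last_probe ends up 0 or 1 depends on the order
>>       // in which the tool evaluates the case item expressions.
>>       $display("last_probe = %0d", last_probe);
>>     end
>>   endmodule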
>>
>> --Mike
>>
>