RE: [sv-ec] Concurrency semantics in SystemVerilog

From: Rich, Dave <Dave_Rich@mentor.com>
Date: Tue Feb 22 2011 - 16:19:06 PST

As an example, take continuous assignments.

Continuous assignments are independent processes. One implementation
made an optimization where it would in-line the continuous assignment
process into an always block process. This effectively interrupts the
current always process to execute the continuous assignment process.
What it's really doing is intertwining the two processes into one. The
LRM wording allows simulation vendors to do this by not requiring atomic
behavior.
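
A minimal sketch of the kind of interleaving in question (the module and
signal names here are purely illustrative):

    module sketch(input logic clk);
      logic a = 1'b0;
      logic x;
      wire  y;

      assign y = a;            // continuous assignment: conceptually its own process

      always @(posedge clk) begin
        a = ~a;                // blocking write; the continuous assignment is now
                               // due to re-evaluate
        x = y;                 // whether y already reflects the new 'a' here is not
                               // defined: executing the assign "in-line" at this point
                               // is one legal schedule, deferring it is another
      end
    endmodule

Both schedules are conforming precisely because the LRM does not require
the always block to execute atomically with respect to the assign.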

There are always people out there trying to rely on one specific
simulator's behavior, and some big customers have forced us to match
other simulators' behaviors, regardless of the LRM. In honoring these
requests, we end up saying that in some cases an implementation
guarantees atomic behavior, and in other cases it guarantees not to be
atomic. The only difference between the two cases is some heuristic
involving the user's intent. This would be nearly impossible to
standardize.

-----Original Message-----
From: owner-sv-ec@eda.org [mailto:owner-sv-ec@eda.org] On Behalf Of
Vreugdenhil, Gordon
Sent: Sunday, February 20, 2011 3:32 PM
To: Bresticker, Shalom
Cc: David Jones; sv-ec@eda.org
Subject: Re: [sv-ec] Concurrency semantics in SystemVerilog

On 2/20/2011 1:54 AM, Bresticker, Shalom wrote:
> Yes, but there are certainly some operations that have to be atomic or
> close to it in order to work.
>
> An example would be parallel execution of a non-reentrant function.
>
> In order to avoid overwriting internal variables, it is necessary to
> avoid interleaving of the two function calls.
>
> Even something as simple as
>
> function f(i);
>   f = i;
> endfunction
>
> initial a = f(0);
> initial b = f(1);
>
> a must be assigned 0, and b must be assigned 1.
> Yet unrestricted interleaving would allow f(0) to start execution,
> suspend, start f(1), overwrite i, end f(1), and then resume f(0) and
> give the wrong result.

Shalom, I am not arguing that yielding other answers (such as both
being 0 or both being 1) would be "a good thing", I'm just saying that
there is no LRM basis for claiming that such a result would be wrong.
Clearly there are LRM-feasible schedules that yield such results.
Just as clearly, no vendor would go out of their way to yield such
results.
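
To spell out one such LRM-feasible schedule for the static f above (the
trace, and the automatic variant after it, are mine and purely
illustrative):

    // 'i' and the return variable of the static f are shared by every call.
    // One feasible interleaving:
    //   process A enters f(0): i = 0; A is suspended before "f = i;"
    //   process B enters f(1): i = 1; f = i; B returns 1; b = 1
    //   process A resumes:     f = i; // i is now 1, so A returns 1; a = 1
    // yielding a == 1 and b == 1.  Declaring the function automatic gives
    // each call its own copy of i and of the return variable and removes
    // that particular hazard:

    function automatic int f(int i);
      return i;
    endfunction

But that is a property of the declaration, not a scheduling guarantee
from the LRM.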

However, the question that started this is what *guarantees* someone
has in the face of (possible) truly parallel execution of Verilog. I
stand by my answer -- you have almost no guarantees at all. Talk
to the vendor about what they guarantee.

Once again back to reality -- I doubt that any vendor would field
something that was too egregiously unprotected. There would be
no real market for such a product. But for someone who wants to
write vendor independent code in a safe manner, I don't believe
there are any guarantees right now about what assumptions and/or
stylistic requirements vendors might require. Particularly given
new TB constructs, many of the issues in TB parallel behavior are
directly related to open issues in writing high-performance parallel
C++ code.
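
As a sketch of what "safe" means in practice today -- assuming only that
the built-in semaphore operations are atomic, which is itself a question
for the vendor -- a shared counter would be serialized explicitly rather
than relying on the atomicity of the increment (the class and names here
are illustrative):

    class shared_counter;
      int       count;
      semaphore lock = new(1);   // assume get/put are atomic

      task incr();
        lock.get(1);             // serialize the read-modify-write explicitly;
        count++;                 // count++ by itself cannot be assumed atomic
        lock.put(1);
      endtask
    endclass

Whether even that holds up under truly parallel execution is exactly the
sort of thing I'd want a vendor to state explicitly.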

Gord

> See also Mantis 1290.
>
> Regards,
> Shalom
>
>> -----Original Message-----
>> From: owner-sv-ec@eda.org [mailto:owner-sv-ec@eda.org] On Behalf Of
>> Gordon Vreugdenhil
>> Sent: Sunday, February 20, 2011 5:48 AM
>> To: David Jones
>> Cc: sv-ec@eda.org
>> Subject: Re: [sv-ec] Concurrency semantics in SystemVerilog
>>
>> David,
>>
>> There are very few guarantees formally in the LRM. The LRM
>> does make guarantees about relative statement order and
>> NBA (same process) update order and effect, but that is
>> about all.
>>
>> 1800-2009 Clause 4.7 has a stronger statement than the
>> one you quote:
>> At any time while evaluating a procedural statement, the
>> simulator may suspend execution and place the partially
>> completed event as a pending event in the event region.
>>
>> Additionally, the LRM goes out of its way to be clear that
>> implementations have substantial leeway in various places;
>> you mentioned "++" and that is a good example. 11.4.2 states:
>> The ordering of assignment operations relative to any other
>> operation within an expression is undefined. An implementation
>> can warn whenever a variable is both written and read-or-written
>> within an integral expression or in other contexts where an
>> implementation cannot guarantee order of evaluation.
>> and gives an example with implementation defined results
>> (even in sequential simulation). The intent is clearly to allow
>> decoupling of assignment and expression parts to allow for
>> common compiler optimizations. Making any assumption
>> about the atomicity of such operations would not be valid.
>>
>> There are many subtle assumptions about intent throughout
>> the LRM in these areas. I doubt that anyone would be happy
>> with an implementation that didn't have atomic operations
>> for mailbox operations and semaphores since those form
>> the basis of modern TB constructs. But I'd certainly talk very
>> carefully with any vendor doing very low level parallelism
>> about what they will or won't guarantee. There are clearly
>> more "coarse" granularities of parallelism that wouldn't
>> suffer from some of the interactions that you mention, but
>> from an LRM perspective there are very few guarantees.
>>
>> One would need to be very, very careful in trying to establish
>> stronger models. There are many (long) established
>> optimization techniques that modern simulators use and
>> which rely on fairly aggressive application of the flexible
>> scheduling rules; one could easily cause substantial
>> performance impact in trying to provide strong guarantees.
>>
>> Gord.
>>
>>
>> On 2/18/2011 4:27 PM, David Jones wrote:
>>> What assumptions can I make if I assume that a SystemVerilog simulator
>>> may execute in a truly concurrent manner on a multiprocessor computer
>>> system?
>>>
>>> Section 4.6a) of the LRM states:
>>>
>>> Execution of statements in a particular begin-end block can be
>>> suspended in favor of other processes in the model;
>>>
>>> But there's a lot that one can read into that sentence. On the one
>>> hand, it is clear that if a sequence of statements in a begin-end
>>> block contains a blocking statement (e.g. #delay or a call to a task
>>> that consumes time) then the execution will be suspended.
>>>
>>> However, can execution be suspended (even on a uniprocessor) if a
>>> sequence contains no blocking statement? The LRM doesn't say that it
>>> won't. And on a multiprocessor, two different begin-end blocks can
>>> execute in a truly concurrent manner. Nothing gets "suspended" in
>>> this case.
>>>
>>> If truly concurrent execution is possible, then what SystemVerilog
>>> constructs, if any, can I assume will be thread-safe?
>>>
>>> - mailboxes? (likely yes)
>>> - x++ (I'd be surprised if this were concurrent-safe)
>>> - queue operations such as q.push_back()
>>> - other?
>>>
>>> This isn't hypothetical either. Every SystemVerilog implementation
>>> that I have access to is running on multiprocessor hardware, and I am
>>> assuming that all EDA vendors are working on MP-aware implementations
>>> of SV. (I am aware of such an effort from one of the "big 3".) As an
>>> author of SystemVerilog IP I want to ensure that my code will work
>>> properly on such systems.

-- 
--------------------------------------------------------------------
Gordon Vreugdenhil                                503-685-0808
Model Technology (Mentor Graphics)                gordonv@model.com