Steven Sharp wrote:

>>From: Gordon Vreugdenhil <gordonv@model.com>
>
>>I think that the statement "functions are not supposed
>>to interact with the scheduler" was intended to mean
>>"functions are not supposed to be able to direct the
>>scheduler to suspend the current thread".  The latter
>>is certainly true.
>
> Not quite, at least according to the LRM.  If it were true,
> then we could clearly rule out the legality of fork...join
> and fork...join_any inside functions based on it.  Both of
> them direct the scheduler to suspend the current thread to
> wait for all or some of the subprocesses to complete.

The rules for these forms are yet to be defined.  I don't have any
problem with thread *dispatch* within a function; I do have a
problem with having functions suspend the thread that enabled the
function.

The implication is that I would be in favor of any construct inside
a fork..join_none, and would also accept fork..join blocks which
respect the function restrictions (meaning they would be equivalent
to a series of sequential blocks and don't really need to be
described as suspending).

I would have problems with a fork..join_any, since either the
blocks respect the function restrictions (in which case the
construct is essentially useless, since a simulator could just pick
one of the "threads" arbitrarily and never actually suspend) or
they don't respect the restrictions, in which case you could have
"fork #10; join_any;" and you might as well throw away any
restriction on function behavior.

My working rule here is that if the function must "wait" for a
thread, then that thread must be expressible as a valid sequential
block within the function.  This seems to me to be a reasonable
compromise: it allows functions such as "new" to create threads
while not opening the door to real thread suspensions within
functions.

> If you believe that it *should* be true, then it is a reason
> why these fork statements should not be allowed in functions.

Or, as I outlined above, the restrictions on the fork..join
constructs do not require suspension in the context of functions.

>>Functions are not guaranteed to be "atomic" in the sense of
>>requiring a scheduler to guarantee that no intervening action
>>occurs.  Given:
>>   int x,y,z;
>>   function void f;
>>      y = x;
>>      z = x;
>>   endfunction
>>   initial f();
>>   initial begin x = 1; x = 2; end
>>It would be valid for an implementation to have y and z end up
>>with different values.  Most users would likely be surprised by
>>this and would immediately send in support mail saying "your
>>simulator is broken".
>
> And I would agree with them, regardless of what the LRM says.

So would I, in the sense of being able to *sell* such a simulator.
But as you agreed in your earlier post, such an interpretation is
LRM compliant.  It is perhaps not "intent compliant", but that is a
different question (see below).

> The LRM goes overboard in allowing arbitrary interleaving of
> processes.  If we really accepted those sections of the LRM
> as the concurrency model of Verilog, then there would be very
> little valid Verilog code out there.  With no operations
> that can be assumed to be atomic, it would be very difficult to
> write anything that was guaranteed to work.

But it is getting easier to deal with this now.  For example, if
you either capture "x" in a declaration inside an unnamed block, or
pass "x" in as an argument to an automatic function, then there is
no possibility of the unintentional interleaving my example
discussed.
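Concretely, both variants look something like the following sketch
(the module name and the "xv"/"x_snap" names are mine, purely for
illustration):

   module snapshot_demo;
      int x, y1, z1, y2, z2;

      // Variant 1: pass "x" by value to an automatic function;
      // the body then works only from the sampled argument.
      function automatic void f(int xv);
         y1 = xv;
         z1 = xv;   // y1 and z1 cannot end up different
      endfunction

      initial f(x);   // "x" is read exactly once, at the call site

      // Variant 2: capture "x" in a declaration inside an
      // unnamed block; again "x" is read exactly once.
      initial begin
         automatic int x_snap = x;
         y2 = x_snap;
         z2 = x_snap;
      end

      initial begin x = 1; x = 2; end   // the interleaving writer
   endmodule

However the writer interleaves with the readers, each reader works
from a single snapshot of "x", so the surprising y != z outcome is
off the table.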
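And going back to the fork..join_none / "new" point above: the
shape of thing I want to keep legal is roughly this (the "monitor"
and "watch" names are mine, again just a sketch):

   class monitor;
      // A constructor that dispatches a background thread but
      // never suspends the thread that called new().
      function new();
         fork
            watch();   // thread is dispatched here...
         join_none     // ...but new() returns immediately
      endfunction

      task watch();
         repeat (3) #10 $display("monitor tick at %0t", $time);
      endtask
   endclass

   module t;
      monitor m;
      initial m = new();   // spawns the watcher as a side effect
   endmodule

The constructor dispatches the watcher, but the enabling thread is
never suspended, which is exactly the dispatch-versus-suspension
distinction I am arguing for.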
Yes, relying on such capturing means that legacy code has issues.
I would contend that we are better off having *mechanism* that
allows the user to obtain stronger guarantees, and *policy* that
does not *require* those stronger guarantees in all scenarios.

I suspect that it would be difficult to disallow the above
situation while still allowing other scenarios (such as the inlined
continuous assignment you mentioned in your previous post).  Of
course, if you would like to propose a specific set of rules, that
would make it easier to discuss the potential impact.

Simulation implementations are always free to provide stronger
guarantees (which could be a selling point if you wish), but
over-constraining the simulation semantics into a much more
sequential form is likely not in the best long-term interests of
the community.  Fundamentally, I don't think that the LRM
definition is really all that "wrong".  In this area, I would be
much more concerned about over-constraining the semantics than
about under-constraining them.

> I agree with you that users often rely on particular behavior
> from designs with race conditions, and that they should not.
> Scheduling order is not guaranteed, and should not be.
>
> But if users have to write code that assumes process execution
> could be interleaved in a completely arbitrary way, that is too
> much of a burden.
>
> Steven Sharp
> sharp@cadence.com

I would not be opposed to trying to find a decent mid-point, but I
don't think it will be at all easy, particularly if you don't want
to disallow other situations which (semantically) can only be
described as a process suspension.

Gord.
--
--------------------------------------------------------------------
Gordon Vreugdenhil                                503-685-0808
Model Technology (Mentor Graphics)                gordonv@model.com