>From: "Bresticker, Shalom" <shalom.bresticker@intel.com>
>
>Another and logical alternative would have been to calculate the
>delay on each bit of the vector separately, as in a parallel
>connection module path in a specify block.

This would certainly be the most realistic alternative. The difference in rise/fall delay being modeled depends on the high/low drive characteristics of the driver on that particular bit. Selecting the delay based on the value of any other bit is unrealistic, whether it is based on the LSB or on the reduction-OR of all the bits. (While there may be some effects of one driver on another, such as extra delays for simultaneous switching, those are secondary effects, more complex than what these delays are trying to model.)

The problem is that per-bit delays would effectively require the vector net to be expanded, or "scalared", so that the individual bits could be scheduled independently at different times. That would have a significant cost in simulation performance. I presume that is why it was not done that way.

Steven Sharp
sharp@cadence.com

Received on Thu Oct 15 15:25:52 2009
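[Editor's note: a minimal sketch of the per-bit alternative discussed above. The module name, port widths, and delay values are illustrative; the point is that a parallel module path (`=>`) in a specify block pairs each source bit with the corresponding destination bit, so each bit's rise/fall delay is selected from that bit's own transition.]

```verilog
// Hypothetical 8-bit buffer. The parallel path (in => out) maps
// in[i] to out[i] bit-for-bit, so a rising edge on in[3] uses the
// rise delay for out[3] regardless of what the other bits are doing.
module buf8 (output [7:0] out, input [7:0] in);
  assign out = in;
  specify
    // (rise delay, fall delay), evaluated independently per bit
    (in => out) = (2, 3);
  endspecify
endmodule
```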
This archive was generated by hypermail 2.1.8 : Thu Oct 15 2009 - 15:26:45 PDT