Subject: SystemVerilog: always_comb and functions
From: Clifford E. Cummings (cliffc@sunburst-design.com)
Date: Thu Mar 28 2002 - 11:10:14 PST
Hi, All -
A discussion came up on our conference call last Monday concerning whether
variables read inside a function, but not declared in the function header,
should be part of the always_comb sensitivity list. In Verilog-2001, we
explicitly said that the always @* combinational always block did not have
to descend into tasks and functions to find additional variables to add to
the sensitivity list. We did this to spare implementations from having to
descend through multiple levels of tasks calling tasks that might
recursively call tasks, etc.
Based on our discussion, I contacted a group studying Superlog at Intel to
ask for their input. They gave what I consider to be good reasons to
require always_comb to find all internal function variables to add to the
sensitivity list. Their feedback follows.
It should be noted that we are fortunate to have a group doing large
designs that sees more opportunities for abstraction. As a SystemVerilog
Standards Group, we have a tendency to focus on small examples that do not
really show how we can improve the Verilog language. The group from Intel
has clearly looked at a large design and the potential benefit that can
result from small changes to the Verilog syntax.
From the SystemVerilog 3.0/draft 4 - section 10.3, page 40, 2nd bulleted
paragraph:
The SystemVerilog always_comb procedure differs from the Verilog-2001
always @* in the following ways:
- <text>
- always_comb is sensitive to changes within the contents of a
function, whereas always @* is only sensitive to changes to the arguments
of a function.
- <text>
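To make the quoted difference concrete, here is a minimal sketch (all
signal and module names invented) of a function that reads a signal not
passed as an argument:

```systemverilog
module sens_demo (input logic a, b, sel, output logic y);

  // 'sel' is read inside the function body but is not an argument.
  function logic pick (input logic x0, x1);
    return sel ? x1 : x0;
  endfunction

  // always_comb is sensitive to a, b, AND sel, because it looks
  // inside pick(); a change on sel alone updates y.
  always_comb y = pick(a, b);

  // An equivalent always @* block would be sensitive only to the
  // arguments a and b, so a change on sel alone would not update y.
endmodule
```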
PROPOSAL: do NOT change the second bullet. Users have found intelligent
ways to take advantage of this functionality to add both abstraction and
readability to their code.
This does, of course, open another avenue for language abuse (function1
calls function2, which calls function3, which calls function1 again, etc.),
but it will be up to us trainers to explain the features and benefits while
still warning against the potential for unintended abuse.
After seeing what the Intel group wants to do, I support the idea of
always_comb descending into functions (or perhaps tasks) to find signals
required for the sensitivity list.
Regards - Cliff
==== Notes from a group studying Superlog at Intel ====
We are eager to explain the motivation behind this feature. We have been
pursuing Superlog / SystemVerilog in order to provide SIGNIFICANT
improvements in code abstraction, size, and readability over previous
Verilog language generations. At the top of the "significant improvement"
list are:
1) The ability to use coding constructs which provide automatic
sensitivity, to remove the obfuscating, space-consuming, tedious, and
error-prone process of maintaining explicit sensitivity lists.
<Cliff-note: I think they have an opinion about sensitivity
lists! ;-) ;-) >
2) The ability to organize your code in a structured manner, which allows
the designer to view functionality at the most abstract level first and
then descend into increasing detail.
The "always_comb" block provides IDEAL sensitivity for large blocks of
combinational logic. But if you are forced to fill the block with line
after line of low-level combinational logic detail, then you lose the
structural view or "sense" of what your design is trying to do.
Here is a random, made-up example (don't punish me for syntax or redundancy
violations here :-) of several stages of always_comb block logic with a
fair amount of detail in it. It might take several pages of code to
describe, and the meaning is lost.
always_comb begin
  a = b + c[21:2];
  for (int i = 0; i < NUM; i++) begin
    d[i].v = a[i];
  end
  .
  <endless line after line of detail>
  .
  f[1].h[21].v = d[1].v;
end

always_comb begin
  aa = ab + c[21:2];
  for (int i = 0; i < NUM; i++) begin
    ad[i].v = a[i];
  end
  .
  <endless line after line of detail>
  .
  af[1].h[21].v = d[1].v;
end

always_comb begin
  ba = b + c[21:2];
  for (int i = 0; i < NUM; i++) begin
    bd[i].v = a[i];
  end
  .
  <endless line after line of detail>
  .
  bf[1].h[21].v = d[1].v;
end
Instead, we would like to organize the CODE PLACEMENT of the logic inside
these blocks into functional blocks so that the resulting code would look
something like this (note I've invented my own non-overloaded keyword
"tasunction" :-):
always_comb begin
  data_address_swizzle();
  buffer_merge();
end

always_comb begin
  address_adjust();
  predecode();
end

always_comb begin
  decode();
  readmerge();
  bypass();
end

.
. <sometime later in the code>
.

tasunction data_address_swizzle () begin
  detail...
end

tasunction buffer_merge () begin
  detail...
end
All we are really looking for is a CLEAN way of locating code in organized
blocks and having it "drop in" to the always_comb block via some direct
code replacement mechanism. Multi-line DEFINE's were considered BRIEFLY and
rejected because of the required continuation characters on every line; we
need free-form Superlog code allowed in those blocks.
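For illustration, here is a sketch of what the rejected multi-line DEFINE
approach would have looked like (macro name invented); the continuation
character required at the end of every line is what made it unattractive:

```systemverilog
// Hypothetical multi-line macro version of one "tasunction":
// every line of the body must end with a trailing backslash.
`define DATA_ADDRESS_SWIZZLE          \
  a = b + c[21:2];                    \
  for (int i = 0; i < NUM; i++)       \
    d[i].v = a[i];

always_comb begin
  `DATA_ADDRESS_SWIZZLE
end
```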
When we first dug into Superlog, we tried to accomplish this by using
Superlog "task" calls, but we eventually discovered the issue of missing
sensitivity to signal usage that is buried inside the tasks. Because we are
only coding combinational logic in the always_comb block and time was not
needed, Co-Design proposed the move to void "function" calls, provided that
we could get automatic sensitivity of signals used inside the functions.
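A minimal sketch of the void-function style described above (all names
invented); under the proposed rule, the always_comb block would become
sensitive to the signals read inside decode():

```systemverilog
module decode_stage (input  logic [31:0] instr,
                     output logic [4:0]  rd,
                     output logic [31:0] imm);

  // Reads module-level 'instr' directly; no arguments, no return value.
  function void decode ();
    rd  = instr[11:7];
    imm = {{20{instr[31]}}, instr[31:20]};
  endfunction

  // With always_comb descending into the function, this block
  // re-evaluates whenever 'instr' changes.
  always_comb decode();
endmodule
```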
In our opinion the "tasunctions" are not large enough to merit being
modules, nor can we spend time creating and maintaining the pins of these
potential modules. Our "cleanup" solution might be perceived as "lazy"
procedural programming. We're using global variables and incomplete
task/function arguments. On the other hand, the "laziness" provides
something that every large or complex design in the future can benefit
from: structural abstraction, the ability to read the code and get an idea
of what's happening without being immediately overwhelmed with details.
Now, when someone visits the code they can view things at progressive
levels of detail. They can figure out that the module that does job X
divides the job into the following Y sub-jobs. That's the motivation.
1. Manual sensitivity list maintenance is intractable.
2. Manual sensitivity list maintenance obfuscates the code, adds bugs,
reduces readability.
3. Structural organization of code greatly enhances readability.
That's how we arrived at our current preference: functions whose RHS
signals are automatically added to the sensitivity list of the always_comb
procedural block from which they are called.
We're not sure we completely fathom or sympathize with the post-synthesis
argument. One always runs the risk of creating spurious latches during
synthesis. It is the responsibility of the engineer running synthesis to
interact with the synthesis tool (and perhaps a lint tool) to get accurate
latch and flop identification.
<Cliff-note-begin>
The problem is not the creation of latches but the non-creation of latches
when latch-like code is placed into a function. Synthesis tool vendors
assume that functions cannot hold state, so all functions are converted to
combinational logic, even though they still simulate like a latch. There is
no warning. If the pre-synthesis simulation relies on latching behavior
that the post-synthesis design lacks, and the gate-level simulations miss
the discrepancy, the taped-out design will be wrong and the mistake will be
costly.
Examples from a paper that Don Mills and I did three years ago:
module code3a (o, a, nrst, en);
  output o;
  input  a, nrst, en;
  reg    o;

  always @(a or nrst or en)
    if      (!nrst) o = 1'b0;
    else if (en)    o = a;
endmodule
// Infers a latch with asynchronous low-true
// nrst and transparent high latch enable "en"

module code3b (o, a, nrst, en);
  output o;
  input  a, nrst, en;
  reg    o;

  always @(a or nrst or en)
    o = latch(a, nrst, en);

  function latch;
    input a, nrst, en;
    if      (!nrst) latch = 1'b0;
    else if (en)    latch = a;
  endfunction
endmodule
// Infers a 3-input AND gate
This is why I generally discourage putting functionality into functions;
however, I will have to reconsider my recommendation if the always_comb
capability is added to Verilog. I like the abstraction that I see. I guess
my only question is whether this should apply to tasks too. Synthesis tools
would have found the latch inside tasks.
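For comparison, here is a sketch of the task-based variant alluded to above
(module name "code3c" invented): the same incomplete assignment, but inside
a task body, which synthesis tools do examine for latch inference:

```systemverilog
module code3c (o, a, nrst, en);
  output o;
  input  a, nrst, en;
  reg    o;

  // Same latch-like code, but in a task; synthesis tools descend
  // into task bodies, so the intended latch would be inferred.
  task latch_t;
    input a, nrst, en;
    if      (!nrst) o = 1'b0;
    else if (en)    o = a;
  endtask

  always @(a or nrst or en)
    latch_t(a, nrst, en);
endmodule
```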
<Cliff-note-end>
We're open to other ideas. We had originally conceived of something like a
task/function that functioned as a macro. Each task/function call would be
replaced by its contents, in-place during parse or elaboration time so the
automagic sensitivity could be accurately calculated. We still like this
idea very much.
<Cliff-note: in-place replacement would make line-reporting in debuggers
somewhat tricky. Tasks were intended to be abstractions for blocks of code.
We use read and write tasks all the time in Verilog Verification>
//*****************************************************************//
// Cliff Cummings Phone: 503-641-8446 //
// Sunburst Design, Inc. FAX: 503-641-8486 //
// 14314 SW Allen Blvd. E-mail: cliffc@sunburst-design.com //
// PMB 501 Web: www.sunburst-design.com //
// Beaverton, OR 97005 //
// //
// Expert Verilog, Synthesis and Verification Training //
//*****************************************************************//
This archive was generated by hypermail 2b28 : Thu Mar 28 2002 - 11:11:35 PST