2.3.2.  Equivalence Verification

During the development of a design, several models at different levels of abstraction are created; the lower the level of abstraction, the more refined the model. Each model has to describe the same functionality and preserve the original design intent, so the models have to be equivalent. The group of methods for proving this is called equivalence verification. The techniques in use are largely similar to those for intent verification.

2.3.2.1 Physical Verification

Again, the most direct approach concerns the hardware. Here, three distinct checks are used to verify that the physical implementation is a correct realization of the original logic design:

Electrical Rules Checks (ERC).
The center of interest in this check is electrical correctness, i.e., that no electrical design rules are violated. Violations include open and short circuits as well as unused outputs, floating inputs, and loading violations.
Design Rules Checks (DRC).
This concerns the fabrication process, checking layer-to-layer spacing, overlaps of layers, and their widths. The rules are specified in a DRC rule file.
Layout Versus Schematic Checks (LVS).
This requires a “golden netlist” to be present. From the physical layout, a netlist is extracted by taking the polygons and building devices from them; the extracted netlist is then compared to the “golden netlist”. Typically, LVS is performed as the last step before mask generation and fabrication.
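To illustrate the flavor of such checks, the following sketch implements a toy DRC minimum-spacing rule on axis-aligned rectangles. The rule value, layer data, and function names are hypothetical; real DRC engines operate on full mask layouts driven by a rule file, not on hand-written geometry.

```python
def spacing(r1, r2):
    """Edge-to-edge distance between two axis-aligned rectangles
    (x1, y1, x2, y2); 0 if they touch or overlap."""
    dx = max(r1[0] - r2[2], r2[0] - r1[2], 0)
    dy = max(r1[1] - r2[3], r2[1] - r1[3], 0)
    return (dx * dx + dy * dy) ** 0.5

def drc_spacing_check(shapes, min_spacing):
    """Report every pair of shapes on one layer closer than min_spacing."""
    violations = []
    for i in range(len(shapes)):
        for j in range(i + 1, len(shapes)):
            d = spacing(shapes[i], shapes[j])
            if 0 < d < min_spacing:
                violations.append((i, j, d))
    return violations

# Two metal rectangles 0.1 um apart under a (made-up) 0.14 um rule:
# the check reports one spacing violation.
metal1 = [(0.0, 0.0, 1.0, 0.2), (0.0, 0.3, 1.0, 0.5)]
print(drc_spacing_check(metal1, min_spacing=0.14))
```

A real rule deck contains many such rules per layer (width, spacing, enclosure, density), all evaluated in batch over the whole layout.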

2.3.2.2 Formal Equivalence Checking

Functional equivalence with respect to the I/O boundaries and cycle accuracy can be verified using formal equivalence checking tools, which usually operate on RTL or gate-level netlists.

The benefit of the formal approach is that complete equivalence can be ensured, as opposed to simulation, which depends on the completeness of the test bench and takes more time to execute. Two different approaches to formal equivalence checking are known:

Boolean Equivalence Checking.
This is the technique used in most cases; it checks combinational logic. Automatic name mappings are made between the memory elements of the two designs. The tool then compares the combinational logic at the input of each pair of memory elements. In this way it can be verified that, for all possible combinations of inputs, the outputs are the same for both systems.
Sequential Equivalence Checking.
If two designs fulfill the same specification but differ in the arrangement or number of their components, they are called sequentially equivalent. An example could be two FSMs encoding 8 states, one using 3 latches and the other one 8. They produce the same outputs although they act in different ways. There is only little tool support for sequential equivalence checking, because only small finite state machines can be explored. For large designs, sequential equivalence checking is theoretically possible, but an unsolved implementation problem remains.
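Both approaches can be sketched on small examples. The Boolean case below compares two implementations of the same combinational function over all input assignments (an exhaustive stand-in for the BDD- or SAT-based comparison real tools perform); the sequential case simulates a 3-bit binary counter and an 8-state one-hot counter in lock step, mirroring the 3-versus-8-latch FSM example above. All function names and encodings are illustrative assumptions.

```python
from itertools import product

# Boolean equivalence: same function, two gate-level realizations.
def xor3_a(a, b, c):          # reference implementation
    return a ^ b ^ c

def xor3_b(a, b, c):          # rewritten with AND/OR/NOT only
    def xor(x, y):
        return int((x and not y) or (not x and y))
    return xor(xor(a, b), c)

assert all(xor3_a(a, b, c) == xor3_b(a, b, c)
           for a, b, c in product((0, 1), repeat=3))

# Sequential equivalence: different state encodings, same outputs.
def step_binary(state):       # state: integer 0..7 (3 state bits)
    return (state + 1) % 8, state

def step_onehot(state):       # state: 8-bit one-hot vector
    idx = state.index(1)
    nxt = [0] * 8
    nxt[(idx + 1) % 8] = 1
    return nxt, idx

s_bin, s_hot = 0, [1, 0, 0, 0, 0, 0, 0, 0]
for _ in range(16):           # two full cycles through all 8 states
    s_bin, out_bin = step_binary(s_bin)
    s_hot, out_hot = step_onehot(s_hot)
    assert out_bin == out_hot
print("outputs agree on all explored states")
```

The lock-step loop is in effect a reachability exploration of the product machine; its cost grows with the state space, which is exactly why sequential equivalence checking does not scale to large designs.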

2.3.2.3 Dynamic Verification

There are two techniques within dynamic verification that are the same as for intent verification. Deterministic simulation is explained in section 2.3.1.4 on page 121: the same test bench with well-defined stimuli can be applied to different models to check whether the results are the same. Expected Result Checkers, introduced in section 2.3.1.4 on page 122, are used as well. There are three techniques not introduced before:

Golden Model Checkers.
A “golden” or trusted model is a model that is assumed to be correct. The golden model checker monitors the responses of both the golden model and the model under test. By comparing their responses to the same input stimuli, it can be determined whether the model being verified is correct. No formal techniques are used; the behavior of the two models is simply compared.
Regression Testing.
Regression testing is a technique that can be applied to all simulation techniques, given two prerequisites: first, all electronic design automation (EDA) tools, verification test benches, and result analyzers must be able to run in batch mode without user interaction; second, the decision between success and failure of a test must also be possible in batch mode, usually by comparing responses against golden results. These requirements allow a setup in which many tests run automatically. Regression tests check whether design changes cause any existing verification tests to fail. Since design changes often introduce new features as well, the test suites tend to grow.
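The batch-mode setup described above can be sketched as a minimal regression harness: each test applies its stimuli to the design model and compares the response against stored golden results, with pass/fail decided without user interaction. The stand-in design (a parity generator), the test names, and the golden data are all hypothetical placeholders.

```python
def dut(stimulus):
    """Stand-in for the simulated design: a word-parity generator."""
    return [sum(word) % 2 for word in stimulus]

# Each test: (input stimuli, golden results recorded from a trusted run).
test_suite = {
    "smoke": ([[0, 0], [1, 0]],         [0, 1]),
    "full":  ([[1, 1], [1, 0], [0, 1]], [0, 1, 1]),
}

def run_regression(suite):
    """Run every test in batch mode; return the list of failures."""
    failures = []
    for name, (stimuli, golden) in suite.items():
        response = dut(stimuli)
        if response != golden:    # pass/fail decided automatically
            failures.append((name, golden, response))
    return failures

fails = run_regression(test_suite)
print("PASS" if not fails else f"FAIL: {fails}")
```

After a design change, rerunning the whole suite immediately reveals whether any previously passing test now fails; new tests for new features are simply added to the suite dictionary, which is how such suites grow.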