2.3.1.  Intent Verification

There are several different techniques leveraged for intent verification, supported by a variety of tools that I will not discuss here.

2.3.1.1 Physical Prototyping

The most direct approach to verifying a VC is to cast it into silicon. Although still a model, such a device runs close to target-platform performance owing to its hardware nature. Several tasks can be accomplished with a physical prototype. The application and system software can be developed and debugged before the real SoC device is available; since other techniques explained later make this possible as well, this alone does not justify the immense effort in time and cost. More interesting are tasks such as system-level performance testing, as well as enabling exhaustive test cycles by serving as a high-performance simulation platform for the target design. A physical prototype is also needed for hardware and software co-verification and for providing a logic analyzer interface. From my present point of view, however, the only important task listed in [VSI01a] is the marketing demonstration of the target device; and with the further evolution of virtual prototypes this task will dissolve, too.

Implemented as real hardware, a physical prototype operates within the same order of magnitude of performance as the target system and is thus much faster than any software simulator. Different sub-methods are used for physical prototyping:

Emulation Systems This technique adopts an emulation system, discussed in section 2.3.1.2 on page 119.

Reconfigurable Prototyping System The different VC building blocks are implemented separately and assembled on an integration board. They are realized either as bonded-out silicon, as non-volatile FPGAs, or via in-circuit emulator (ICE) systems.

Application-Specific Prototype For physical prototyping with an ASP, a complete design must be developed using commercially available components, and it will have limited expansion capability. As the package is very specific, tests based upon this technique are very costly.

2.3.1.2 Emulation

One small step away from pure hardware testing is emulation. An emulator usually consists of an FPGA and its connections to other parts of the system. The execution speed is very high in comparison to all the software solutions explained later. Thanks to this high speed, even large and computation-intensive verification tasks can be performed, such as running an operating system, which often comprises millions of lines of code, on a chip. An emulator can be implemented in different ways; mostly there are one or more interconnected FPGAs, possibly supplemented by custom processors, along with interfaces for controlling the software and the hardware as well as for debugging the system.

2.3.1.3 Formal Verification

Far away from real hardware, formal verification utilizes mathematical techniques to check the functional aspects of a design. A verification test suite is not required, since the design is analyzed purely mathematically. The purposes of formal verification include equivalence checking, explained in section 3.3.2 below. For intent verification the following methodologies are applicable:

Property and Model Checking This technique probes the entire state space of a design. All possible input conditions are explored, finding bugs that probably cannot be found with simulation. The model properties are verified using queries in a specification language. If an error occurs, the complete track from an initial state to the failing state can be traced. In most designs the entire state space does not have to be evaluated, because there is a set of input constraints; compliance with these constraints has to be ensured by the model checker, and for these bounds there is a separate constraint language. Of course, if the data path of an IC is very long, formal verification can be ineffective due to the wide and deep state space a long path implies. Control-intensive designs with their usually narrow state spaces are better suited for this kind of checking. A model checker can also be useful within simulation, verifying assertions for certain conditions.
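
The exploration described above can be sketched in a few lines of software. The following toy explicit-state model checker (the function names and the mod-8 counter example are invented for illustration, not taken from any tool) explores all reachable states breadth-first and, on failure, reconstructs the complete track from an initial state to the failing state:

```python
from collections import deque

def model_check(initial, next_states, is_error):
    """Explore the reachable state space with BFS; return a counterexample
    trace from an initial state to the first error state found, or None."""
    parent = {s: None for s in initial}
    queue = deque(initial)
    while queue:
        state = queue.popleft()
        if is_error(state):
            # Reconstruct the complete track back to an initial state.
            trace = []
            while state is not None:
                trace.append(state)
                state = parent[state]
            return list(reversed(trace))
        for succ in next_states(state):
            if succ not in parent:
                parent[succ] = state
                queue.append(succ)
    return None  # the property holds in every reachable state

# Toy property: a mod-8 counter must never reach the value 6.
trace = model_check(
    initial=[0],
    next_states=lambda s: [(s + 1) % 8],
    is_error=lambda s: s == 6,
)
print(trace)  # [0, 1, 2, 3, 4, 5, 6]
```

Real model checkers use symbolic representations to cope with huge state spaces, but the principle of exhaustive exploration with counterexample traces is the same.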

Theorem Proving A specification language based on a formal logic and a set of strategies are the requirements for theorem-proving verification systems. The strategies are typically present in the form of commands that allow constructing a proof of an assertion in the logic. The kind of formal logic as well as the level of automation varies widely among theorem-proving systems. This sort of verification is typically done by first describing the design model and the property under test in the specification language; then a proof of the correctness criterion can be constructed. The size of the state space is not a problem here, which also allows verification of data-path-oriented designs and high-level applications. This technique is used in equivalence checking, too, by forming an appropriate assertion. The biggest disadvantage of theorem proving is that it does not offer the same opportunities for automation as model checking does: the proof has to be constructed interactively, and in case of a failure in proof construction there is no automatic trace; the cause of the failure has to be found manually by analyzing the failed state.
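
To give a flavour of the interactive style, here is a minimal proof in a modern theorem prover (Lean 4 syntax; the lemma is a standard arithmetic fact chosen for illustration, not a hardware property). The user states the assertion and supplies the proof strategy as tactic commands:

```lean
-- Interactive theorem proving, sketched in Lean 4.
-- The assertion and the tactic script are written by hand.
theorem zero_add_eq (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl
  | succ k ih => rw [Nat.add_succ, ih]
```

Each tactic transforms the current proof goal; if a tactic fails, the prover reports the open goal, but finding out *why* the proof attempt failed is left to the user.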

2.3.1.4 Dynamic Verification

This is a very practical verification technique, using a set of stimuli to exercise one or more models of a design or a hardware implementation of the design. There are several sub-categories within dynamic verification:

Deterministic Simulation In deterministic simulation a well-formed stimulus is fed into the model. The response of the model can be predicted, so the expected and the actual response are compared for verification.
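
A deterministic test bench can be sketched as follows (the 8-bit adder, the golden reference model and the stimulus vectors are all invented placeholders):

```python
def adder_model(a, b):
    # Hypothetical 8-bit adder model under test.
    return (a + b) & 0xFF

def reference_model(a, b):
    # Golden model predicting the expected response.
    return (a + b) % 256

# Well-formed, predetermined stimulus vectors.
stimuli = [(0, 0), (1, 2), (255, 1), (128, 128)]
for a, b in stimuli:
    expected = reference_model(a, b)
    actual = adder_model(a, b)
    # Expected-result checking: any discrepancy is flagged as an error.
    assert actual == expected, f"mismatch for {(a, b)}: {actual} != {expected}"
print("all deterministic vectors passed")
```

The comparison loop at the end is exactly the expected result checker discussed further below; every discrepancy is flagged as an error.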

Random Pattern Simulation (Directed and Non-Directed) By applying random address, data and control values as stimulus to a model, the robustness of a design is tested: do false inputs lead to undefined states? Often one or more bus protocol checkers are employed to verify that no bus protocol violations occur. In contrast to such non-directed random patterns, it is also possible to use directed random patterns as stimuli: the patterns are still random, but within certain boundaries. The operations are then not purely random but generate actions that stress the design in specific ways, e.g. reading, writing or calculations. Algorithmic errors can be found very quickly using random pattern simulation, as the boundary conditions are reached and very different operations can take place in an order that was never thought of.
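
Directed (constrained) random stimulus generation can be sketched as follows; the transaction format, the memory model and the constraints are invented for illustration:

```python
import random

random.seed(42)  # reproducible random patterns

def directed_random_transactions(n, max_addr=0xFF):
    """Generate random but constrained (directed) transactions:
    operation, address and data are random, yet confined to legal values."""
    for _ in range(n):
        yield {
            "op": random.choice(["read", "write"]),   # legal operations only
            "addr": random.randrange(max_addr + 1),   # legal address range
            "data": random.getrandbits(8),            # 8-bit data values
        }

memory_model = {}
for txn in directed_random_transactions(1000):
    if txn["op"] == "write":
        memory_model[txn["addr"]] = txn["data"]
    else:
        # Reads of never-written locations must return a defined default,
        # i.e. the model must not enter an undefined state.
        value = memory_model.get(txn["addr"], 0)
        assert 0 <= value <= 0xFF, "model reached an undefined state"
print("1000 directed random transactions passed")
```

The constraints (legal opcodes, address range, data width) are what turns non-directed randomness into directed randomness.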

Hardware Acceleration One or more components of a design are mapped into hardware for performance reasons. While the verification test bench itself remains purely software based, it profits from the speed-up of certain operations. The degree of hardware acceleration varies; even entire simulations can run in hardware.

Hardware Modeling Hardware modeling means that components that are not available, or not sufficiently accurate as software models, are run in a hardware modeler interacting with the software simulator. This technique can therefore be used in combination with any of the others.

Protocol Checkers A protocol checker encodes the set of valid operations that can be monitored as transactions on an interface. Through this monitoring it can detect invalid operations and flag them as errors. A protocol checker is typically embedded in a verification test bench for simulation purposes; embedding it in the design is possible, too, in order to check for violations during normal operation of a device.
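
A protocol checker can be sketched as a passive monitor. The following toy checker watches a hypothetical request/acknowledge handshake (the protocol rule, every request must be acknowledged before the next one, is an invented example):

```python
class HandshakeChecker:
    """Monitors a hypothetical req/ack handshake interface: every request
    must be acknowledged before a new request is issued (assumed rule)."""

    def __init__(self):
        self.outstanding = False
        self.errors = []

    def observe(self, cycle, req, ack):
        # Flag invalid transactions instead of driving any signals.
        if req and self.outstanding:
            self.errors.append(f"cycle {cycle}: new request before ack")
        if ack and not self.outstanding and not req:
            self.errors.append(f"cycle {cycle}: ack without request")
        if req:
            self.outstanding = True
        if ack:
            self.outstanding = False

checker = HandshakeChecker()
trace = [(0, 1, 0), (1, 0, 1), (2, 1, 0), (3, 1, 0)]  # (cycle, req, ack)
for cycle, req, ack in trace:
    checker.observe(cycle, req, ack)
print(checker.errors)  # flags the back-to-back request in cycle 3
```

Because the checker only observes signals and never drives them, the same class could monitor a simulation test bench or, synthesized appropriately, the operating device.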

Expected Result Checkers Every system verification test bench needs to compare the obtained responses to the expected results. Any discrepancies are flagged as errors.

2.3.1.5 Virtual Prototyping

Several of the techniques introduced in the previous sections could use virtual prototypes as simulators. Other modeling terms differentiate models based on their characteristics, whereas the term virtual prototype has nothing to do with any particular model characteristics; it rather relates to the role of the model within a design process.

As already discussed in section 2.1 on page 17, a virtual prototype is not tied to a special level of abstraction but can be constructed at any level. Typically a virtual prototype defines the interfaces to the rest of the design and can thus be integrated into it without disclosing its internal structure, which can be designed in a wide variety of ways. A virtual prototype can provide a detailed view of how a component will work without the need to build complex hardware just to evaluate alternatives, and so is more cost effective. Since the model exists as pure software, a very detailed view into the states of the component is possible, depending on how the model is implemented.

The biggest disadvantage of virtual prototypes is that their speed is of simulation order rather than hardware order, so fewer verification tests can be run in the same time than on real hardware.

2.3.1.6 Verification Metrics

Verification itself has to be assessed to gauge whether it is complete. Different aspects can be measured for this purpose.

Hardware Code Coverage The first group is the hardware code coverage metrics. These can be determined automatically, producing as a result a percentage coverage value for each property assessed and a list of those areas left unexercised by the test. Coverage metrics can be applied to the following points:

Statement coverage counts how often each statement was executed.
Toggle coverage regards the toggled bits of the signals.
FSM arc coverage counts the transitions of the finite state machine.
Visited state coverage shows how many states of an FSM were passed.
Triggering coverage checks whether each process has been started individually by each of its signals.
Branch coverage shows which branch was selected in case of a decision.
Expression coverage explores how thoroughly a Boolean expression in a conditional statement has been exercised.
Path coverage traces the routes the execution has taken through the branches.
Signal coverage shows how well addresses or state signals have been exercised.

Functional Coverage The second group of metrics is functional coverage. This cannot be derived automatically, as it measures the number of features that have been verified. Generally, functional coverage can be described as a cross-combination between temporal behavior and data. This metric is very important, as it ensures that all features given in the specification are verified, helping to ensure that all bugs can be found. Typically, functional coverage is analyzed on the RTL view of the design.
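
The cross-combination idea can be sketched with coverage bins. In the following toy example (the features, operation types and data ranges are invented), each observed transaction samples one bin of the cross, and unhit bins expose feature combinations that were never verified:

```python
from itertools import product

# Hypothetical functional cross coverage: operation type x data range.
ops = ["read", "write"]
ranges = ["low", "high"]
bins = {cross: 0 for cross in product(ops, ranges)}

def sample(op, data):
    # Map the observed data value to its range bin and hit the cross bin.
    bins[(op, "low" if data < 128 else "high")] += 1

# Transactions observed during simulation.
for op, data in [("read", 5), ("write", 200), ("read", 250)]:
    sample(op, data)

uncovered = [b for b, hits in bins.items() if hits == 0]
print(uncovered)  # [('write', 'low')] -- this feature cross was never verified
```

Defining the bins requires knowledge of the specification, which is why this metric, unlike code coverage, cannot be derived automatically.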