Yash Phogat, Arm Inc.; Patrick Hamilton, Arm Inc.
Verifying a complex hardware design is a compute-intensive task. Design verification commonly uses a constrained-random approach, simulating tests driven by random stimuli. Due to the random nature of this approach, however, efficiency diminishes over time: as the hardware design becomes stable, large amounts of compute are wasted running tests that always pass. For a given regression on a stable design, the passing cycles may be useful for coverage, but they do not contribute to finding new bugs. Using Machine Learning (ML), we model the test history to build classification models that filter out tests with a high likelihood of passing. To do this, the machine learns to identify (model) the parts of the test input stimulus that correlate strongly with the pass/fail label. Applying the learned model to new test input stimuli yields the subset of tests that are worth simulating. Running this smaller set of tests on simulators cuts down the regression size, and the saved compute can either be repurposed or translated into savings in verification cost.
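The filtering idea above can be sketched in a few lines. The following is a minimal, hypothetical illustration (not the authors' actual implementation): a tiny logistic-regression classifier, trained from scratch on a synthetic test history, assigns each candidate test a predicted fail probability, and only tests above a threshold are kept for simulation. The two-element feature vectors, the synthetic pass/fail rule, and the `threshold` value are all assumptions made for the example.

```python
import math
import random

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logreg(X, y, lr=0.5, epochs=50):
    """Tiny logistic regression trained with stochastic gradient descent.
    X: list of feature vectors extracted from test stimuli.
    y: 1 = test failed, 0 = test passed (from regression history)."""
    n = len(X[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            g = p - yi  # gradient of the log-loss w.r.t. the logit
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def prob_fail(w, b, x):
    """Predicted probability that a test with stimulus features x fails."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

def filter_tests(w, b, candidates, threshold=0.2):
    """Keep only tests whose predicted fail probability clears the threshold;
    the rest are deemed likely-passing and skipped to save compute."""
    return [x for x in candidates if prob_fail(w, b, x) >= threshold]

# Synthetic history: tests with a large first feature tend to fail.
random.seed(0)
history_X = [[random.random(), random.random()] for _ in range(300)]
history_y = [1 if x[0] > 0.6 else 0 for x in history_X]  # 1 = fail
w, b = train_logreg(history_X, history_y)

# New regression: keep only the tests the model considers worth running.
candidates = [[random.random(), random.random()] for _ in range(100)]
to_run = filter_tests(w, b, candidates)
```

In practice the feature vectors would encode the constrained-random stimulus knobs of each test, and the threshold trades simulation savings against the risk of skipping a failing test.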