The Meeting of the SoC Verification Hidden Dragons
A gap has formed in semiconductor verification between block-level functional verification and full-SoC validation. Requirements for large-block, sub-system, and early SoC verification have expanded to include device integrity (cache coherency, security, etc.) together with multi-block and software functionality. These requirements collide with two limitations: high-performance random tests cannot be run efficiently on an emulator, and real-world workloads provide only limited coverage. Multiple methods, some in conflict with each other, are being explored to close this gap, including formal methods, test synthesis with Portable Stimulus, improved hardware execution platforms, and others. This panel will compare and contrast these methods from different verification viewpoints, hashing out the pros and cons while taking input from the virtual audience.
Most current verification methodologies hinge on two key process elements: the functional verification of design blocks using regression simulation with SystemVerilog/UVM test content, and SoC validation using real-world workloads and operating-system boots on an emulator or prototyping platform.
Next-generation devices need more care. Engineering teams are finding that bugs escape from an improperly verified SoC no matter how thoroughly the individual blocks have been verified. Functionality spread across multiple blocks and firmware is hard to verify at the block level. Real-world workloads may require weeks of expensive emulation time before they trip over obscure operational corner cases. SoC bugs can manifest in operational infrastructure as cache-coherency problems, fabric performance gaps, or security issues. The specter of a bug escape lurks in these SoC details.
So why is this step function from block functional verification to SoC validation not more gradual, with functional testing prevalent at the SoC, or at least the sub-system, level? Challenges include executing randomized functional test content on an emulator without an integrated, performance-degrading simulator running the randomized testbench; composing high-coverage test content that targets the complexity of an obscure SoC corner case; and executing firmware early enough to exercise blocks in a complete functional test. The middle ground between block simulation and SoC emulation is a veritable verification hornet's nest.
Engineering teams are now being forced to attack these issues head on, and different specializations and applications across the industry are adopting alternative solutions. This in turn creates headaches for tool and methodology providers, who are unable to offer a single, scalable solution that meets these varying needs. Debates are forming around candidate methods, which include Portable Stimulus and content synthesis, formal methods, virtual platforms, improved emulation methodologies, and others.
This panel of experts will delve into this quandary, leveraging their own experience in a lively debate on the merits and pitfalls of the different methods. Questions will include:
Is this need real, and in which designs?
Can these issues be solved using simulation or is emulation now a must?
How can testbenches drive emulation and what special characteristics are needed in this test content?
Will new languages such as Portable Stimulus, C++ and Python help resolve these requirements?
How do emulation and prototyping systems need to adapt and what is the role of virtual platforms?
Is system-level VIP a critical need for this level of testing, and if so, which VIP?
Breker Verification Systems
Moderator: Brian Bailey
Panelists: Mike Chin, Intel
Adnan Hamid, Breker Verification Systems
Balachandran Rajendran, Dell EMC
Ty Garibay, Mythic AI