Oral/Lecture Sessions-Day 1:
Technical Session 1 [Functional Coverage Closure]
Tuesday, March 5, 2024
-
Seokho Lee, FuriosaAI; Youngsik Kim, FuriosaAI; Suhyung Kim, FuriosaAI; Jeong Ki Lee, FuriosaAI; Wooyoung Choe, FuriosaAI; Minho Kim, FuriosaAI
Over the past few years, there has been a growing trend of using Python for design verification instead of traditional hardware verification languages such as SystemVerilog. However, existing Python verification frameworks focus on driving and monitoring signals or on solving random constraints; they lack coverage features, which makes it hard to achieve functional coverage closure. This paper proposes a Python environment for enabling functional coverage closure. This environment fully utilizes the rich features of SystemVerilog functional coverage and leverages existing tools for easier coverage analysis.
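The kind of coverage model the abstract refers to can be pictured with a small sketch. This is a hypothetical Python analogue of a SystemVerilog covergroup (class and bin names are illustrative, not from the paper): bins are predicates, sampling hits them, and unhit bins are coverage holes.

```python
# Hypothetical sketch of a coverpoint with named bins, mirroring
# SystemVerilog covergroup semantics in plain Python.
class Coverpoint:
    def __init__(self, name, bins):
        # bins: mapping of bin name -> predicate over sampled values
        self.name = name
        self.bins = bins
        self.hits = {b: 0 for b in bins}

    def sample(self, value):
        # increment every bin whose predicate matches the sampled value
        for b, pred in self.bins.items():
            if pred(value):
                self.hits[b] += 1

    def coverage(self):
        # percentage of bins hit at least once
        covered = sum(1 for b in self.bins if self.hits[b] > 0)
        return 100.0 * covered / len(self.bins)

cp = Coverpoint("burst_len", {
    "single": lambda v: v == 1,
    "short":  lambda v: 2 <= v <= 4,
    "long":   lambda v: v > 4,
})
for v in (1, 3, 3):
    cp.sample(v)
# "long" is still a hole: 2 of 3 bins are covered
```

A real environment like the one described would additionally export these results in a form existing coverage-analysis tools understand.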
-
Robert Kunzelmann, Infineon Technologies AG, Technical University of Munich; Aishwarya Sridhar, Infineon Technologies AG; Daniel Gerl, Infineon Technologies AG, Technical University of Munich; Lakshmi Vidhath Boga, Infineon Technologies AG; Wolfgang Ecker, Infineon Technologies AG, Technical University of Munich
Function set modeling is a specification methodology striving for the unified description of general hardware systems. Based on the Instruction Set Architecture (ISA) of processors, function set modeling specifies systems exclusively by their executable functions and the relevant system state. While this methodology has been used in formal verification and behavioral modeling, the abstraction gap between system-level function models and design implementation limits its significance. We use traces, time-annotated representations of functions, to additionally model architecture parameters. Hierarchical State Machines (HSMs) are leveraged as a notation to capture the refined and conditional execution of the underlying function set. Moreover, we show the transformation of traces into formal assertion properties spanning over fixed-length intervals. The extracted interval set jointly verifies the complete behavior captured by the traces, thereby checking the functional and temporal correctness of the Design Under Verification (DUV).
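The fixed-length interval properties mentioned above can be illustrated with a toy trace checker (names and trace format are hypothetical, not the paper's notation): every request event in a time-annotated trace must be answered within a bounded window. End-of-trace obligations are ignored in this minimal sketch.

```python
# Minimal sketch of checking a fixed-length interval property
# against a time-annotated trace of (cycle, event) records.
def check_interval(trace, window=3):
    # trace: cycle-ordered list of (cycle, event) tuples
    pending = []  # cycles of requests still awaiting an ack
    for cycle, event in trace:
        # a request older than the window at this point is a violation
        if pending and cycle - pending[0] > window:
            return False
        if event == "req":
            pending.append(cycle)
        elif event == "ack" and pending:
            pending.pop(0)
    return True

ok = check_interval([(0, "req"), (2, "ack")])    # ack inside the window
bad = check_interval([(0, "req"), (5, "ack")])   # ack arrives too late
```

In the paper's flow, a set of such interval properties extracted from traces would be emitted as formal assertions rather than run as a script.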
-
Jikjoo Lee, Samsung Electronics, Memory Division South Korea; Kihyun Park, Samsung Electronics, Memory Division South Korea; Tony Gladvin George, Samsung Electronics, Memory Division South Korea; Dongkun An, Samsung Electronics, Memory Division South Korea; Wooseong Cheong, Samsung Electronics, Memory Division South Korea; Byungchul Yoo, Samsung Electronics, Memory Division South Korea
In the context of hardware design verification, defining functional coverage accurately in SystemVerilog remains a challenge, largely due to human errors leading to "missing coverbins". This paper introduces a methodology aimed at enhancing functional coverage by identifying these overlooked bins. By treating coverbins as a SystemVerilog queue and employing a "waive" function, this approach provides verification engineers with a mechanism to efficiently determine whether a sampled cover point is already accounted for in the coverage. Experimental validation, involving the CacheManager in an SSD Controller, underscored the method's efficacy, with results revealing a significant 16.4% improvement in functional coverage. Thus, the proposed method not only rectifies human-induced inaccuracies but also improves the overall robustness of hardware verification.
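The core idea can be sketched in Python (the paper's implementation is in SystemVerilog; names here are illustrative): keep the coverbins in a queue-like list, waive known don't-care values, and flag any sampled value that no bin accounts for as a candidate missing coverbin.

```python
# Python analogue of auditing coverbins for "missing coverbin" detection.
class CoverbinAudit:
    def __init__(self, bins, waived=()):
        self.bins = list(bins)      # (name, predicate) pairs, queue-like
        self.waived = set(waived)   # values intentionally left uncovered
        self.missing = set()        # sampled values no bin matched

    def sample(self, value):
        if value in self.waived:
            return                  # the "waive function" case
        if not any(pred(value) for _, pred in self.bins):
            self.missing.add(value) # candidate missing coverbin

audit = CoverbinAudit(
    bins=[("low", lambda v: v < 8), ("high", lambda v: v >= 16)],
    waived={15},                    # known don't-care value
)
for v in (3, 15, 12, 20):
    audit.sample(v)
# 12 falls in neither bin and is not waived: a missing coverbin
```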
-
Yash Phogat, Arm Inc.; Patrick Hamilton, Arm Inc.
Verifying a complex hardware design is a compute-intensive task. Design verification often uses a constrained random approach to simulate tests using random stimuli. However, due to the random nature of this approach, efficiency diminishes over time, i.e., as the hardware design becomes stable, we find large amounts of wasted compute cycles in running tests that always pass. For a given regression on a stable design, while the passing cycles may be useful for coverage, they certainly do not contribute to finding newer bugs. Using Machine Learning (ML), we model the test history to build classification models that filter out the tests with a high likelihood of passing. To do this, the machine learns to identify (model) the parts of the test input stimulus which show high correlation to the fail/pass label. By applying the learnt model to a new test input stimulus, we obtain a subset of tests that are useful to simulate. Running this smaller set of tests on simulators will cut down the test regression size. The saved compute can either be repurposed or translated to savings in verification costs.
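A deliberately toy sketch of the filtering idea (this is not the authors' model; feature names and the threshold are invented): learn a per-feature-value fail rate from test history, score new stimuli by their most fail-correlated feature, and keep only tests likely to fail.

```python
# Toy test-filtering sketch: historical fail rates per stimulus feature
# value stand in for a trained classification model.
from collections import defaultdict

def train(history):
    # history: list of (features_dict, failed_bool)
    counts = defaultdict(lambda: [0, 0])   # feature item -> [fails, total]
    for feats, failed in history:
        for item in feats.items():
            counts[item][0] += failed
            counts[item][1] += 1
    return {k: f / t for k, (f, t) in counts.items()}

def fail_score(model, feats):
    # score a new stimulus by its most fail-correlated feature
    scores = [model.get(item, 0.0) for item in feats.items()]
    return max(scores) if scores else 0.0

history = [
    ({"mode": "burst", "len": 4}, True),
    ({"mode": "burst", "len": 1}, True),
    ({"mode": "single", "len": 1}, False),
    ({"mode": "single", "len": 4}, False),
]
model = train(history)
new_tests = [{"mode": "burst", "len": 2}, {"mode": "single", "len": 1}]
selected = [t for t in new_tests if fail_score(model, t) >= 0.6]
# only the "burst" test survives: it is the one worth simulating
```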
Technical Session 2 [AI & ML in Verification]
Tuesday, March 5, 2024
-
Siarhei Zalivaka, Solidigm
Verification becomes ever more crucial in the design of complex systems on a chip. To make this process more sustainable, verification IPs (VIPs) have to be utilized. However, the development of VIPs relies on a deep understanding of the target IP specification. The internals of a VIP are usually designed based on requirements which cannot be easily extracted from the specification. Thus, Large Language Models (LLMs) can be used to identify the requirements. The experimental study shows that the GPT-2 model provides good performance (Precision = 0.618, Recall = 0.874), improving the requirements-extraction process by at least 20% compared to the manual process. This approach can also be used for automation (or semi-automation) of other routines required in the VIP development process.
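For readers less familiar with the precision/recall figures quoted above, they follow from a standard confusion-matrix computation, sketched here with illustrative counts (not the paper's data):

```python
# Precision: of the requirements the model extracted, how many are real.
# Recall: of the real requirements, how many the model extracted.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall

# e.g. 90 requirements correctly extracted, 40 false hits, 13 missed
p, r = precision_recall(tp=90, fp=40, fn=13)
```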
-
Aman Kumar, Infineon Technologies; Deepak Narayan Gadde, Infineon Technologies; Thomas Nalapat, Infineon Technologies; Evgenii Rezunov, Infineon Technologies; Fabio Cappellini, Infineon Technologies
Modern hardware designs have grown increasingly efficient and complex. However, they are often susceptible to Common Weakness Enumerations (CWEs). This paper focuses on the formal verification of CWEs in a dataset of hardware designs written in SystemVerilog and produced by generative Artificial Intelligence (AI) powered by Large Language Models (LLMs). We applied formal verification to categorize each hardware design as vulnerable or CWE-free. This dataset was generated by 4 different LLMs and features a unique set of designs for each of the 10 CWEs we target in our paper. We have associated the identified vulnerabilities with CWE numbers for a dataset of 60,000 generated SystemVerilog Register Transfer Level (RTL) code samples. It was also found that most LLMs are not aware of any hardware CWEs; hence, these are usually not considered when generating hardware code. Our study reveals that approximately 60% of the hardware designs generated by LLMs are prone to CWEs, posing potential safety and security risks. The dataset could be ideal for training LLMs and ML algorithms to abstain from generating CWE-prone hardware designs.
-
Olivera Stojanovic, Vtool; Nemanja Mitrovic, Vtool; Anna Revitzki, Vtool
The growing complexity of SoCs is challenging the efficiency of present-day verification approaches, making verification processes increasingly involved. As waveform databases expand in size and simulation run times grow longer, logs and code execution traces become harder to read and understand. This requires greater resourcefulness from verification teams, alongside more powerful verification tools. By addressing these complex SoC verification bottlenecks, this paper proposes a viable, highly effective verification solution that uses Big Data techniques to analyze standard verification outputs. The new approach presented here transforms traditional debugging by applying proven AI techniques and algorithms to unified Big Data datasets derived from multiple sources. To highlight the applicability and potential of AI for complex SoC verification, the new approach is tested, demonstrated, and ultimately proven successful on several NoC verification examples. It provides nearest-endpoint address algorithms that prove far more effective at matching source and destination endpoints in failing transfers. The approach also suggests potential resolutions for common failures by finding the common data values in failed transfers, and its time-scale analysis extracts deviations in transfer execution time to flag potential throughput issues, transfer timeouts, and more. Although standard debugging methodologies can resolve such common NoC verification issues, the new AI-driven approach described in this paper significantly accelerates SoC verification, transforming the entire process into a highly efficient one.
-
Dan Yu, Siemens EDA; Harry Foster, Siemens EDA; Eman El Mandouh, Siemens EDA; Waseem Raslan, Siemens EDA; Tom Fitzpatrick, Siemens EDA
This paper presents a comprehensive literature review on how Large Language Models (LLMs) can be applied in multiple aspects of verification, including requirement engineering, coverage closure, formal verification, debugging, functional safety, code generation and completion, and data augmentation, among others. To demonstrate the capability, experiments are carried out to automatically generate variants of existing designs and their verification code using prompts. Significant productivity and quality improvements are recorded compared to traditional manual data preparation. Despite the promising advancements offered by this new technology, we must be aware of the intrinsic limitations of LLMs, which can make incorrect predictions manifested as hallucinations. The paper cautions that raw outputs of LLMs should not be used directly in verification. In conclusion, three safeguarding mechanisms are recommended to ensure the quality of LLM outputs. Finally, the paper summarizes the observed trend of LLM development and expresses optimism about their broader prospective applications in verification.
Technical Session 3 [Formal Verification Use Cases]
Tuesday, March 5, 2024
-
Bryan Olmos, Infineon Technologies AG; Sanjana Sainath, Infineon Technologies AG; Wolfgang Kunz, Rheinland-Pfälzische Technische Universität Kaiserslautern-Landau; Djones Lettnin, Infineon Technologies AG
Electronic safety systems need to guarantee a certain level of dependability. Therefore, their development must consider not only robust hardware but also trustworthy and reliable embedded software such as firmware, drivers, and bare-metal software. In the case of hardware, the use of formal verification and its automation are two crucial aspects to prevent the presence of bugs during the development process. However, formal verification of firmware, and especially its automation, lacks a methodology that can be applied in an industrial environment. In this paper, the verification of safety properties of firmware is automated to cover software criteria defined in ISO 26262-6, such as requirements-based tests, code coverage, and weakness detection. Additionally, it introduces the main guidelines to enable this automation in an early phase of development. The proposed methodology is based on the Model Driven Architecture and the generation of C files, usually called contracts. These files contain the functions under verification together with the preconditions and postconditions to be verified. This work considers an end-to-end register verification, which means that the design is verified with respect to the Hardware Abstraction Layer (HAL).
Furthermore, this work generates the scripts to run the open-source tool Bounded Model Checker for ANSI C (CBMC). Initially, the methodology was applied to verify the safety properties of industrial designs of 16 and 32 bits; after running some pilot projects, it was extended to automate the detection of weaknesses and code coverage. The results show that the generator can set up the verification environment in a few seconds and that, as with hardware, the verification runtime depends on the complexity of the design. The methodology can be easily adopted in industrial or academic environments because it is based on Python scripts and open-source tools.
-
Shuhang Zhang, Infineon Technologies AG; Bryan Olmos, Infineon Technologies AG; Basavaraj Naik, Infineon Technologies AG
Registers in the IP blocks of an SoC perform a variety of functions, most of which are essential to SoC operation. The complexity of register implementation is relatively low compared with other design blocks. However, the extensive number of registers, combined with the various potential functions they can perform, necessitates considerable implementation effort, especially with a manual approach. Therefore, an in-house register generator was proposed by the design team to reduce the manual effort in register implementation. This generator supports not only the generation of register blocks but also bus-related blocks. To support various requirements, the generator offers 41 switches, making it highly configurable. From the verification perspective, it is infeasible to achieve complete verification results with a manual approach for all switch combinations. Beyond the complexity caused by configurability, register verification is time-consuming due to two widely recognized issues: the unreliability of specifications and the complexity arising from diverse access policies. To deal with the highly configurable feature and both register verification issues, we propose an automated register verification framework using formal methods following the Model Driven Architecture (MDA). Based on our results, the human effort in register verification can be reduced significantly, from 20 man-days (MD) to 3 MD for each configuration, and 100% code coverage can be achieved. During project execution, eleven new design bugs were found with the proposed verification framework.
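The "diverse access policies" issue can be made concrete with a condensed sketch (policies and register names are illustrative, not from the paper): model each register's policy and check that reads and writes obey it.

```python
# Toy register model with a few common access policies:
# RW (read/write), RO (read-only), W1C (write-1-to-clear).
class Register:
    def __init__(self, policy, reset=0):
        self.policy = policy
        self.value = reset

    def write(self, data):
        if self.policy == "RW":
            self.value = data
        elif self.policy == "W1C":
            self.value &= ~data   # writing a 1 clears that bit
        # "RO": writes are silently ignored

    def read(self):
        return self.value

status = Register("W1C", reset=0b1011)
status.write(0b0010)              # clears bit 1 only
ctrl = Register("RO", reset=0x5A)
ctrl.write(0x00)                  # ignored: register keeps its reset value
```

A formal flow like the one described would generate such policy checks as properties for every register and every generator configuration instead of hand-coding them.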
-
Abhishek Asi, Intel; Anshul Jain, Intel; Virginia Bao, Intel
This paper investigates a novel formal verification approach for validating the functionality of stream decoders, essential components in modern SoCs responsible for interpreting encoded data. Stream decoders are crucial for tasks such as compression, decompression, error-correction, and secure transmission. Traditional methodologies, including simulation and testing, struggle to address the complexities of evolving formats and standards. Formal verification, a systematic method, offers mathematical proof of design correctness, ensuring reliable operation in all situations. The paper explores the methodology and presents findings that emphasize the effectiveness of formal verification, promoting it as a rigorous tool for ensuring the accuracy and reliability of stream decoders in various digital systems.
-
Mike Walsh, Siemens EDA; Jin Hou, Siemens EDA
Integration of multiple ICs in a single package is critical for high-performance computing. Due to the huge number of connections after packaging the ICs, it is hard to verify the correctness of the connections. The traditional way to verify the connections requires a lot of manpower and time and is either not exhaustive or too late in the process. This paper introduces a new way to verify packaging connectivity using formal verification that can exhaustively verify all interconnections between the IC blocks. The flow is automatic for all steps, from creating the connectivity spec to verifying the packaged output connectivity. The automatic parallel algorithms on a compute grid can verify huge numbers of connections in minutes, even seconds. The script for the flow is simple and only takes a few minutes to set up. Once the script is ready, it can be reused for different packaging projects.
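At its core, exhaustive connectivity checking compares a spec against an extracted netlist, pin by pin. A simplified sketch (pin names and data shapes are hypothetical; the paper's flow does this formally, not as a script):

```python
# Check every package-level connection in the spec against the netlist.
def check_connectivity(spec, netlist):
    # spec, netlist: dicts of source pin -> destination pin
    mismatches = []
    for src, expected in spec.items():
        actual = netlist.get(src)
        if actual != expected:
            mismatches.append((src, expected, actual))
    return mismatches

spec    = {"die0.tx0": "die1.rx0", "die0.tx1": "die1.rx1"}
netlist = {"die0.tx0": "die1.rx0", "die0.tx1": "die1.rx3"}  # miswired pin
errors = check_connectivity(spec, netlist)
# exactly one miswired connection is reported
```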
Technical Session 4 [UVM Testbenches]
Tuesday, March 5, 2024
-
Aswini Kumar Tata, ALLEGRO MICROSYSTEMS LLC; Bhanu Singh, MathWorks; Sanjay Chatterjee, ALLEGRO MICROSYSTEMS LLC; Eric Cigan, MathWorks; Surekha Kollepara, ALLEGRO MICROSYSTEMS LLC; Kamel Belhous, Allegro Microsystems
In this paper, we describe how to build model-based testbenches and conduct early testing when design behavior is available as models developed in MATLAB, Simulink, or C/C++ source code. Our approach highlights reuse of the Simulink test environment in UVM bench development and extension of the generated UVM bench by adding more complex constrained randomizations, assertion checkers, and coverage groups. Collecting model-based coverage, dead-logic detection, and rollover and saturation checks helped us find corner-case bugs in the design. In this way, digital verification engineers can benefit from the exhaustiveness that UVM generally provides while shifting the verification effort earlier through the use of model-based benches. Moreover, merging coverage, i.e., code and functional coverage, from the UVM test runs between algorithm blocks and non-algorithm blocks in RTL proves much simpler to deliver during coverage sign-off. We also share our experience of building a stimulus model that can generate multiple test scenarios, and how to replicate stimulus between model-based test runs and RTL test runs.
-
Prathik R, Samsung Semiconductor India Research, Bengaluru; Ramesh Madatha, Samsung Semiconductor India Research, Bengaluru; Girish Kumar Gupta, Samsung Semiconductor India Research, Bengaluru; Tony Gladvin George, Samsung Electronics, Korea
As high-speed computation and data transfers rise, layered protocol subsystems become increasingly prominent. Verification IP (VIP) plays a crucial role in verifying these protocols, with the complexity of the test environment growing in proportion to the complexity of the design under verification. Traditional VIP development, centered around controllability and observability, might fall short with next-generation, layered, and multiplexed protocols. This motivates an approach that scales the VIP horizontally, along with reusable standards, to minimize the development cycle and enhance the reusability, controllability, and observability of a VIP. In this paper, we propose a solution for developing a reusable VIP that scales for multi-layer, multiplexed protocols with enhanced controllability and observability.
-
Neha Goyal, Nvidia Corp.; Justin Refice, Nvidia Corp.
The current Transaction Level Modeling (TLM) specification in the Universal Verification Methodology (UVM) has many shortcomings. It lacks compile-time checks for port and interface compatibility and missing implementations. Additionally, it leaks APIs between different interfaces, allowing nonsensical and illegal method calls that are only detectable at run time. With the introduction of Interface classes in SystemVerilog 2012, we can rethink UVM TLM such that illegal and nonsensical behavior can be detected at compile-time, reducing the latency for the user to address these errors.
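The benefit of interface classes can be illustrated outside SystemVerilog. A Python analogue of the idea (class names are illustrative): with abstract interface classes, a missing implementation is rejected when the object is constructed, rather than surfacing as a run-time failure deep into a simulation, much as SystemVerilog 2012 interface classes move such checks to compile time.

```python
# Abstract interface: subclasses must implement write().
from abc import ABC, abstractmethod

class AnalysisIf(ABC):
    @abstractmethod
    def write(self, txn): ...

class GoodSub(AnalysisIf):
    def write(self, txn):
        return f"got {txn}"

class BadSub(AnalysisIf):
    pass                          # forgot to implement write()

caught = False
GoodSub().write("txn0")           # fine
try:
    BadSub()                      # rejected at construction, not at first call
except TypeError:
    caught = True
```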
-
Rich Edelman, Siemens EDA
The SystemVerilog UVM implements a class named uvm_objection. An objection is used to guard code that "isn't done yet". For example, an objection can prevent a process from finishing until some other process agrees. uvm_objections are sometimes overused and are always misunderstood. This paper will explain their implementation and uses, and provide some alternative solutions that are easier to understand, simpler to use, and work transparently.
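The essence of an objection is just a guarded counter. A hypothetical reduction in Python (this is not the paper's proposed alternative, only an illustration of the underlying mechanism): the test may end only when every raised objection has been dropped.

```python
# Minimal objection-style guard: end-of-test is blocked while count > 0.
class Objection:
    def __init__(self):
        self.count = 0

    def raise_objection(self):
        self.count += 1

    def drop_objection(self):
        if self.count == 0:
            raise RuntimeError("drop without matching raise")
        self.count -= 1

    def all_dropped(self):
        return self.count == 0

obj = Objection()
obj.raise_objection()            # driver still has work
obj.raise_objection()            # monitor still has work
obj.drop_objection()
done_early = obj.all_dropped()   # False: one objection still outstanding
obj.drop_objection()             # now the test may end
```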
Technical Session 5 [Functional Safety and Verification]
Tuesday, March 5, 2024
-
Sougata Bhattacharjee, Samsung Semiconductor India Research (SSIR); Gulshan Kumar Sharma, Samsung Semiconductor India Research (SSIR); Wonil Cho, Samsung Electronics; Akshaya Kumar Jain, Samsung Semiconductor India Research (SSIR); James Kim, Siemens; Andrey Likhopoy, Samsung Electronics; Sangkyu Park, Samsung Electronics; Hyeonuk Noh, Samsung Electronics; Ann Keffer, Siemens; Arun Gogineni, Siemens
Functional verification efforts are concentrated on making sure that the design meets the expectations of the specification and that all functionality has been verified. They do not, however, examine the design's capability to detect or correct random hardware failures. The ability to recover from hazardous and random failures is very important for functional safety. The motivation for this paper is to introduce functional safety-related flows and observe their effect on design correctness. We also present several comparisons derived from the results of using different optimization techniques while performing fault simulation, either with full fault-list generation or with SRF. The paper also points out different techniques for achieving maximum Diagnostic Coverage (DC) in minimum time.
-
Andrey Likhopoy, Samsung Electronics; Sangkyu Park, Samsung Electronics; Hyeonuk Noh, Samsung Electronics; Wonil Cho, Samsung Electronics; Inhwan Kim, Siemens EDA; Robert Serphillips, Siemens EDA; Chanjin Kim, Siemens EDA; Justin Lee, Siemens EDA; James Kim, Siemens EDA; Sougata Bhattacharjee, Samsung Semiconductor India Research (SSIR); Gulshan Kumar Sharma, Samsung Semiconductor India Research (SSIR); Akshaya Kumar Jain, Samsung Semiconductor India Research (SSIR)
Achieving the desired safety level is burdened by the runtime of fault injection and by data management over the duration of the fault campaign. In this paper, we describe the process of data management and fault-injection runtime optimizations that reduce the time to achieve the desired safety level and product certification. The paper gives a case study of the fault campaign flow for an internal IP (NPU) in a large automotive IC, using emulation for S/W-based safety mechanisms.
-
Suresh Vasu, Intel Corporation; Palanivel Guruvareddiar, Intel Corporation; Pooja Sundar, Intel Corporation
Thanks to advancements in processor compute, connectivity, and memory technology, the demand for video-processing silicon has been on the rise over the past few years. In addition to traditional video entertainment applications, the explosive growth of AI technology has enabled several machine-vision applications as well. Today's Systems on Chips (SoCs) catering to these various multimedia applications are a complex chain of hardware with significant memory requirements. Functional safety and reliability verification is critical: if not verified properly, the design can cause catastrophic issues for end users. The impact of soft errors in silicon, especially during memory transactions, on the end-user experience and on the inference accuracy of machine-vision algorithms is largely an unexplored area. Our paper tries to bridge this gap by outlining a novel methodology and a unique verification framework that provides the ability to inject soft errors and study their impact on both video/image quality and inference accuracy. Using this verification framework, we studied the impact of soft errors in memory for JPEG encoder hardware and expanded the study to include video-encode hardware, where the encoded video streams are used both for human consumption and for machine vision. For human consumption, the framework computes objective video quality using standard metrics as well as subjective quality metrics using VMAF (Video Multi-Method Assessment Fusion). For machine vision, the framework executes AI workloads, including object detection, tracking, and classification, and computes various metrics such as mean average precision, multi-object tracker accuracy, and classification accuracy. The proposed verification methodology uses the power of SystemC and OpenVINO, which provides a novelty in functional safety and reliability verification.
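The soft-error injection at the heart of such a framework reduces to flipping a chosen bit in a memory buffer and measuring the downstream effect. A minimal illustration (the buffer and the byte-difference metric are stand-ins; the paper uses real image data and metrics such as VMAF):

```python
# Flip one bit in a memory buffer to emulate a soft error.
def inject_bit_flip(buf, byte_index, bit):
    corrupted = bytearray(buf)
    corrupted[byte_index] ^= 1 << bit
    return bytes(corrupted)

frame = bytes(range(16))                 # stand-in for an image buffer
hit = inject_bit_flip(frame, byte_index=5, bit=7)
changed = sum(a != b for a, b in zip(frame, hit))
# exactly one byte differs; quality metrics then quantify the impact
```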
-
James(JAE YUN) Kim, Siemens EDA; Arun Gogineni, Siemens EDA; Ann Keffer, Siemens EDA; Ahn Hyunsun, Samsung LSI
Running an efficient fault campaign with accurate results can be a difficult task, especially on large, complex designs, but there are optimizations and techniques to address this. A common difficulty when running fault campaigns is providing stimulus behavior such that a campaign run on RTL matches the results of the final gate-level netlist fault campaign needed for certification. In the paper 'Complex Safety Mechanisms Require Interoperability and Automation for Validation and Metric Closure' presented at DVCon US 2023, we covered the fundamental methodology to overcome typical fault campaign challenges for SoCs with complex safety mechanisms. The previous paper described a general methodology and flow for fault campaigns, but it did not go into the details of how to plan and verify fault campaign activity.
Technical Session 6 [RISCV Design & Verification]
Tuesday, March 5, 2024
-
Puneet Goel, Incore Semiconductors; Ritu Goel, Coverify Systems Technology; Jyoti Dahiya, Coverify Systems Technology
High-end RISC cores encompass intricate processor architectures that comprise complex instruction pipelines and convoluted maneuvers like instruction re-ordering. Functional verification of such cores requires a significant effort involving on the order of $10^{15}$ randomized instructions\cite{ARM-verif}. The RISCV-DV project, coded in SystemVerilog, generates about 10,000 instructions per second. At this rate, it may take many machine-years to generate the required randomized sequences of RISC-V instructions.
-
Jon Taylor, Imperas Software
As the number of companies developing RISC-V CPUs grows, it is useful to look at the ecosystem around verifying RISC-V cores. Historically, companies have had to develop all the tools and techniques for CPU design verification (DV) themselves; verification methodology is usually seen as part of their secret sauce. With many RISC-V CPU companies appearing, developing rigorous verification methodology will be critical to the wide success of the architecture. However, it will be impractical for each company to develop its own private verification methodology and tooling.
RISC-V verification tooling efforts today are fragmented across multiple projects and a mix of commercial and open-source tools, so it is hard to understand at a glance what can be leveraged from the ecosystem and what might have to be built. While there is value in having a well-verified product, in another sense it is a hygiene factor: all successful CPU companies will need high-quality verification to succeed.
This paper will discuss the current state of the RISC-V testing ecosystem before identifying gaps and opportunities. The focus is on simulation based testing. While formal tools are also often used by CPU verification engineers, there are fewer options available and less RISC-V specific innovation.
As the RISC-V ecosystem matures, we can expect to see more complete solutions pulling together all the component elements needed for CPU DV.
-
Chenhui Huang, Tenstorrent Inc.; Yu Sun, Tenstorrent Inc.; Joe Rahmeh, Tenstorrent Inc.
This paper describes a framework that uses Whisper (an open-source RISC-V simulator), RISC-V assembly, and a SystemVerilog/UVM framework to verify the memory subsystem of an out-of-order CPU. The load/store cache and MMU are arguably the most complex microarchitectural blocks in any high-performance CPU. Our methodology enables RTL designers to describe a scenario which is then translated into block-level stimulus/drivers with the help of an architectural simulator. This not only reduces the effort DV engineers spend crafting stimulus but also provides the more granular control that comes with a block-level testbench. Using this technique, we will demonstrate how we enabled a design from scratch and how we were able to take high-level language workloads (e.g., operating systems, benchmarks) and seamlessly run them at the block level. This flow also provided a staged enablement of RISC-V architectural extensions in our design. We are in the process of creating an interface that allows stimulus generation via ChatGPT. It will effectively allow complex test-plan scenarios to be described in plain text and translated into stimulus that can run on a block-level testbench!
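The checking side of such a co-simulation flow can be pictured schematically (trace format and field names are hypothetical, not the paper's): compare the DUT's retire trace against the reference simulator's and flag the first divergence.

```python
# Lockstep trace compare: each trace is a list of (pc, rd, rd_value)
# retire records; return the index of the first mismatch, else None.
def compare_traces(dut_trace, ref_trace):
    for i, (dut, ref) in enumerate(zip(dut_trace, ref_trace)):
        if dut != ref:
            return i          # first mismatching retire record
    return None               # traces agree

ref = [(0x80000000, 1, 5), (0x80000004, 2, 7)]
dut = [(0x80000000, 1, 5), (0x80000004, 2, 9)]  # wrong writeback value
mismatch = compare_traces(dut, ref)
```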
-
Aimee Sutton, Imperas Software Ltd.
As RISC-V has shifted responsibility for processor verification to a wider community of developers, the tools, techniques, and best practices surrounding it are constantly evolving. One of these best practices is the RISC-V Verification Interface [1], or RVVI. An open standard available under Apache license on GitHub, RVVI enables reuse and efficiency in processor RTL verification by formalizing the interfaces that all RISC-V CPU testbenches should contain.
RVVI has developed and matured over time through use in a number of projects, including commercial and open-source RISC-V cores. Until recently the focus of RVVI was the verification of a single RISC-V core. However, there are several scenarios where it makes sense to extend the processor testbench to include other components. One of these scenarios is the verification of a Debug Module. Rather than create a behavioural model of the processor in order to verify the Debug Module, it makes sense to use the processor RTL to respond to Debug Mode events, so long as there is a reference model ensuring that the processor’s response is correct.
This paper will present an overview of the RVVI and explain how it facilitates processor verification. The challenges of Debug Mode co-simulation will be explained in detail, followed by an explanation of the proposed changes to RVVI. Examples based on the customer’s use case will be presented along with preliminary results from simulations run using the proposed RVVI changes. Lastly, areas for future development will be identified.
Note: the customer mentioned in this abstract will obtain approval to make their name public and participate as a co-author if the abstract is accepted.
Oral/Lecture Sessions-Day 2:
Technical Session 7 [Requirements Definition and Traceability]
Wednesday, March 6, 2024
-
So Wonyeong, Samsung Electronics; Yang Yonghyun, Samsung Electronics; Roe Sun-il, Samsung Electronics; Jang Moonki, Samsung Electronics; Kim Youngsik, Samsung Electronics; Choi Seonil, Samsung Electronics
The Bus Trace System (BTS) is a novel debugging methodology for System-on-Chip (SoC) integration verification based on IP-XACT. BTS aims to automatically identify and debug errors such as bus hangs and error responses using only simulation logs, without the need for waveform information. BTS works by utilizing SoC bus navigation information generated by an in-house bus-connection search algorithm, which searches the IP-XACT description to find the full bus connection for each instance. Moreover, by analyzing the interface logs connected to the Design Under Test (DUT), BTS can precisely determine the scope that needs to be debugged. By using BTS, debug time and verification Turn-Around-Time (TAT) can be reduced, as it automatically finds the debug scope related to bus hangs and error responses in SoC verification. BTS is a powerful debugging methodology that can effectively reduce debugging time as SoC architectures grow increasingly complex.
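The bus-connection search can be pictured as a graph walk over IP-XACT connectivity. A toy sketch (topology and interface names are invented): find the path from an initiator to a target, which bounds the set of interface logs to inspect on a hang.

```python
# Breadth-first search over a bus-interface connectivity graph.
from collections import deque

def find_bus_path(connections, start, goal):
    # connections: dict of interface -> list of connected interfaces
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in connections.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None               # no connection found

topo = {
    "cpu.m": ["noc.s0"],
    "noc.s0": ["noc.m1"],
    "noc.m1": ["dram.s"],
}
path = find_bus_path(topo, "cpu.m", "dram.s")
# the path pinpoints which interface logs to inspect on a bus hang
```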
-
Kirolos Mikhael, Siemens EDA; Abelouhab Ayari, Siemens EDA
The increasing complexity of modern System-on-Chips (SoCs) has led to the emergence of numerous subsystems, resulting in the need for broader teams to collaborate on a single chip. Consequently, it has become more challenging to integrate and perform functional verification at both the subsystem and chip level. In the software industry, Continuous Integration (CI) has gained trust as an automated testing and integration solution that reduces time and effort. Can CI, then, be leveraged by hardware professionals to achieve a traceable, seamless, and quicker integration solution? This article explores the efficacy of using a unified framework comprising Questa Formal, Design Solutions, and Simulation to execute functional verification and simulation with CI. As system complexity grows and tool performance approaches its limits, attention is turning to how productivity can be increased using the same tools. In this paper, we show how to utilize a CI flow in hardware to increase productivity by making the flow more configurable and seamless. Wilson Research reports that 24% of verification time is spent on creating tests and running simulations, and 41% on debugging. Using CI in functional verification ensures automation and tight integration between the various functional verification tools, resulting in reduced time and effort. This paper covers the process of building such an integration and how to use this flow in hardware verification to achieve higher productivity and quality.
-
Prashantkumar Ravindra, Analog Devices; Barry Briscoe, Analog Devices; Miguel Castillo, Analog Devices; Nimay Shah, Analog Devices
Verification IPs are the building blocks of UVM testbenches needed for Metric-Driven Verification of complex designs. UVM testbench generators supply the necessary infrastructure to instantly create a basic UVM testbench template from scratch. However, the VIP integration into the testbench is done manually, making testbench development much more difficult and time-consuming. Automating VIP integration is the solution, but this is not straightforward due to the lack of an industry-wide standard for exchanging VIP metadata. In this paper, the authors present a non-proprietary VIP metadata template that can enable this automation via a testbench generator. This paper will further highlight how, without restricting the creativity of VIP developers, multiple vendor VIP titles have been successfully integrated into ADI's UVM Testbench generator with the help of this metadata. This enabled DV engineers to instantly create a ready-to-simulate, sophisticated UVM TB from scratch, reducing the effort from weeks to minutes.
Technical Session 8 [Mixed Signal with UVM Verification]
Tuesday, March 5, 2024
-
Simul Barua, Ulkasemi Inc.; FNU Farshad, Ulkasemi Inc.; Henry Chang, Designer’s Guide Consulting, Inc.
Due to the rise of AI, the Internet of Things (IoT), and cloud computing, demand for high-performance computing System-on-Chips (SoCs) is higher than ever. To ensure the correct functionality of these complex SoCs, there is a plethora of digital verification solutions offered by various vendors and in-house verification teams. But in order to ensure first functional silicon, we need to perform full-chip verification with both the analog and digital subsystems. Currently, there is no standard methodology for performing full-chip verification, and existing functional verification solutions focus heavily on UVM-driven digital verification testbenches. Due to the high simulation run time of SPICE models and analog circuits, it is not feasible to run analog circuits in schematics along with UVM testbenches. So, it is desirable to model the functional behavior of analog circuits using a higher-level language such as SystemVerilog or Verilog-AMS, such that the models provide full functional coverage and analog assertions to verify connectivity, along with reasonable accuracy and faster simulation run times. In this paper, although much is in the literature on the trade-offs of AMS vs. DMS modeling, we will discuss digital-driven chip-level verification and the trade-offs when using both approaches, along with examples to illustrate our approach.
Wednesday, March 6, 2024
-
Jonathan David, Innophase, Inc.; Henry Chang, Designer's Guide Consulting, Inc.
In the world of design verification for Analog and Mixed Signal (AMS) SoCs there are many problems, some of which are now relatively solved. For smaller digital blocks where C or TLM models exist as the reference specification, this specification will drive the development of a UVM bench for the block-level design. However, for the equivalent level in mixed-signal design, comprising several circuits working together (an RF synthesizer, a radio receiver or transmitter chain, or even an entire radio transceiver with many digital controls but little digital content), no simple and standard way of quickly creating a verification bench exists. This is where a great deal of activity converges -- the work of the system designer, the work of the analog lead, the creation of the top-level schematic, and the work of the designers. All of the people involved need to start working together, and it is exactly for this reason that this is often a place of difficulty and a source of errors. It is also where circuit simulation is needed and becomes slow and often infeasible. This is the area we will address. First, we address the handling of analog ports in a UVM environment. Next, we address the handling of register-based controls. Thirdly, we demonstrate a way to use Python and Jinja templates to construct DUT-specific testbenches from the DUT port list. Finally, we show how to use UVM to manage the testing with sequences, including sequences that depend on feedback from the design. To conclude, we will present the cost (development time) and benefit (TB build time difference) of adopting this methodology.
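The template-driven testbench construction step can be sketched as follows. For a self-contained illustration this uses only the standard library's string.Template rather than Jinja, and the port-list fields and net-type mapping are hypothetical, not the paper's actual flow:

```python
from string import Template

# Hypothetical DUT port list, as might be parsed from a top-level netlist.
ports = [
    {"name": "vin", "dir": "input", "kind": "analog"},
    {"name": "enable", "dir": "input", "kind": "digital"},
    {"name": "dout", "dir": "output", "kind": "digital"},
]

port_line = Template("    ${dir} ${net} ${name};")

def emit_tb_ports(ports):
    """Render one testbench port declaration per DUT port.

    Analog ports become real-valued nets; digital ports stay logic."""
    out = []
    for p in ports:
        net = "real" if p["kind"] == "analog" else "logic"
        out.append(port_line.substitute(dir=p["dir"], net=net, name=p["name"]))
    return "\n".join(out)

print(emit_tb_ports(ports))
```

A real Jinja-based generator works the same way, except the template lives in its own file and can loop over the port list itself, so one template serves every DUT.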
-
Sandeep Sharma, Meta Platforms Inc.
The paper proceeds to offer an architectural overview of the new MSV verification methodology, supporting Digital Mixed Signal (DMS) and AMS Co-simulation (Co-Sim), alongside guiding principles and implementation details. It explains the approach for dynamically switching SV real number models (RNMs) and SPICE netlists in regression runs, outlines the randomized nature of SV stimulus, and explores the Analog response. To facilitate the reuse of legacy testbenches, details of DMS checkers and assertions instantiated in Co-Sim are provided. In conclusion, the paper provides a forward-looking roadmap for the new approach. This includes the integration of the AMS sub-system's testbench into the SoC environment and the seamless incorporation of advanced features at the SoC level. Additionally, it emphasizes the adaptability of this environment, enabling multiple projects to harness the same MSV flow, thereby potentially facilitating the widespread adoption of a unified Mixed-Signal methodology.
Technical Session 9 [SystemVerilog Verification Techniques]
Wednesday, March 6, 2024
-
Doug Smith, Doulos
Nearly all digital designs have asynchronous behaviors. For example, designs often have asynchronous resets or asynchronous inputs like interrupts or ready signals. Some RTL designs are inherently asynchronous in nature, as in the case of power management modules receiving off-chip boot-up signals. Asynchronous behaviors also appear in the form of asynchronous handshaking protocols for peripheral devices or in the case of synchronizers between clock domain crossings. SystemVerilog Assertions (SVA) provide a great way of testing and describing design behaviors. However, using SVA to capture asynchronous behavior is not always straightforward due to the scheduling semantics of SystemVerilog. While triggering on an asynchronous event is easy enough, the sampling of the assertion inputs is either dependent on its context, as in the case of immediate assertions, or synchronous by nature, as in the case of concurrent assertions. Often, asynchronous events occur before the design has updated its state, requiring the checking of the RTL to be delayed. Further, the timing of asynchronous events may be hard to predict, making it harder to describe using an assertion. In this paper, eight common asynchronous scenarios are presented along with SVA solutions for checking them. In addition, an alternative approach using a global fast clock is presented as both a portable simulation solution and one that works for both formal verification and emulation. Lastly, incorporating functional coverage into the asynchronous checking is also discussed.
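The global-fast-clock idea can be sketched outside SVA as a trace check: sample all signals on a fast clock and verify that the design state updates within a bounded number of samples after the asynchronous event. This is a simplified Python model of the checking intent, not the paper's SVA code; the signal names and delay bound are illustrative:

```python
def check_async_reset(trace, reset_name="arst_n", state_name="q", max_delay=2):
    """Check that after an asynchronous (active-low) reset assertion the
    state clears within `max_delay` samples of a fast global clock.

    `trace` is a list of {signal: value} samples taken on the fast clock,
    mimicking sampling on a global clock much faster than the async event."""
    for i, sample in enumerate(trace):
        if sample[reset_name] == 0:  # reset asserted at this sample
            window = trace[i : i + max_delay + 1]
            if not any(s[state_name] == 0 for s in window):
                return False  # state never cleared within the window
    return True

trace = [
    {"arst_n": 1, "q": 1},
    {"arst_n": 0, "q": 1},  # reset asserts; q still holds its old value
    {"arst_n": 0, "q": 0},  # q clears one fast-clock sample later
    {"arst_n": 1, "q": 0},
]
print(check_async_reset(trace))
```

The bounded window mirrors why checking often must be delayed: the asynchronous event is visible before the design has updated its state, so the check tolerates a small, explicit latency instead of sampling at the event itself.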
-
Salman Tanvir, Infineon Technologies; David Crutchfield, Infineon Technologies; Markus Brosch, Infineon Technologies
Modern SoC designs are composed of a large number of blocks, which are typically a mix of internal and third party vendor provided IP. In order to manage the complexity, not only is the reuse of existing IP essential, but modern platform-based design goes one step further in churning out multiple product derivatives by reusing a single configurable base design. Parameterization is at the heart of this reuse methodology. Parameters are well supported in the most widely used Hardware Description Languages (HDL), namely VHDL, Verilog and SystemVerilog. However, on the HVL side in SystemVerilog, the user must contend with various challenges when working with parameterized interfaces, classes and coverage. This work compares various known solutions to the parameterized interface problem. As these solutions do not support parameterized interfaces in an emulation context, a new emulation compatible solution is proposed. The next problem area highlighted is the UVM factory. Whilst working with non-parameterized classes, the UVM factory abstraction (base class & macros) is easy to use without understanding the core implementation. However, to effectively use the factory design pattern with parameterized classes, we need to dig deeper. The final challenge described is related to parametric coverage. Parameterized classes make parametric coverage handling very easy, but as we will see, there are drawbacks to this under specific scenarios. Various solutions to this problem are presented.
-
Dillan Mills, Synopsys, Inc.; Chip Haldane, The Chip Abides, LLC
This paper presents solutions to problems encountered in the implementation of policy classes for SystemVerilog constraint layering. Policy classes provide portable and reusable constraints that can be mixed and matched into the object being randomized. There have been many papers and presentations on policy classes since the original presentation by John Dickol at DVCon 2015. The paper addresses three problems shared by all public policy class implementations and presents a solution to a fourth problem. The proposed solutions introduce policy class inheritance, tightly pair policy definitions with the class they constrain, reduce the expense of defining common policies using macros, and demonstrate how to treat policies as disposable and lightweight objects. The paper concludes that the proposed solution improves the usability and efficiency of policy classes for SystemVerilog constraint layering.
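The flavor of constraint layering with policy objects can be sketched in Python, with rejection sampling standing in for the SystemVerilog constraint solver; the policy names and item structure are hypothetical, not the paper's implementation:

```python
import random

class Policy:
    """A disposable, lightweight constraint policy (a rough analogy to
    the SystemVerilog policy classes discussed in the paper)."""
    def ok(self, item):
        raise NotImplementedError

class AddrAligned(Policy):
    def __init__(self, align):
        self.align = align
    def ok(self, item):
        return item["addr"] % self.align == 0

class AddrInRange(Policy):
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def ok(self, item):
        return self.lo <= item["addr"] <= self.hi

def randomize(policies, tries=10000, seed=0):
    """Rejection sampling stands in for the constraint solver: draw
    candidates until every layered policy accepts the item."""
    rng = random.Random(seed)
    for _ in range(tries):
        item = {"addr": rng.randrange(0, 1 << 16)}
        if all(p.ok(item) for p in policies):
            return item
    raise RuntimeError("constraints unsatisfiable within budget")

item = randomize([AddrAligned(4), AddrInRange(0x1000, 0x2000)])
print(hex(item["addr"]))
```

The point of the pattern is visible even in the sketch: each policy is tiny, self-describing, and mixed in at randomization time, so constraints can be layered per test without subclassing the item being randomized.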
Technical Session 10 [Requirements Definition and Traceability]
Wednesday, March 6, 2024
-
Fernando Orge, Allegro MicroSystems
This paper introduces the PyRDV tool, which helps IC (Integrated Circuit) developers find potential coverage holes or unimplemented requirements without running simulations. PyRDV is not just a Python-based software tool but a complete solution consisting of (1) A theoretical framework to prove design completeness, (2) A detailed workflow for IC developers based on GitLab issues, (3) A Python package to collect all the necessary information from the GitLab issues and (4) A CI/CD (Continuous Integration / Continuous Deployment) service to periodically check for sign-off metrics. Thus, this work presents an alternative solution to the requirements traceability problem and contributes to RDV (Requirement-driven Verification) methodologies, proposing detailed criteria to guarantee the implementation of all the requirements and the verification of all the specifications. The theoretical framework defines a common language for all the IC developers, preventing ambiguities in the information flow. The GitLab platform is the unique source of truth among all developers since it gathers requirements, specifications and verification plans. The GitLab CI/CD service also helps publish all the sign-off reports and metrics on the GitLab platform. As a result, developers will benefit from this prompt information because it will help them to quickly adapt to requirement changes and optimize design and verification resources.
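A minimal sketch of the kind of cross-check such a tool performs over GitLab issues might look like this; the issue fields and types below are assumptions for illustration, not PyRDV's actual schema:

```python
def find_holes(issues):
    """Cross-check requirement issues against passing test issues and
    return the requirements that no passing test claims to verify."""
    requirements = {i["id"] for i in issues if i["type"] == "requirement"}
    covered = set()
    for i in issues:
        if i["type"] == "test" and i.get("result") == "pass":
            covered.update(i.get("verifies", []))
    return sorted(requirements - covered)  # unverified requirements

# Toy issue set, as it might be collected from the GitLab API.
issues = [
    {"id": "REQ-1", "type": "requirement"},
    {"id": "REQ-2", "type": "requirement"},
    {"id": "TST-9", "type": "test", "result": "pass", "verifies": ["REQ-1"]},
]
print(find_holes(issues))
```

Because the check is pure set arithmetic over issue metadata, it needs no simulations and can run on every CI/CD pipeline to refresh the sign-off metrics.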
-
Jan Kreisinger, Allegro MicroSystems, Inc; Sanjay Chatterjee, Allegro MicroSystems, Inc
A well-defined design requirements tracking workflow throughout the verification process considerably impacts the completeness of the verification effort and the on-time execution of the project. Sometimes verification plan development and documentation are considered an unnecessary overhead to the “actual” verification effort; however, it must be understood that incomplete or inconsistent mapping of design requirements onto the verification plan can lead to project schedule slips, indemonstrable verification results, and missed bugs. This paper presents a design requirements tracking workflow based on Jama Connect, a requirements management tool from Jama, and vManager, a regression management tool from Cadence. Manual work to create the interface between these tools is minimized by two Python scripts based on the Jama REST API.
-
William Moore, Paradigm Works
When silicon project documentation is incorrect or misunderstood, costly hardware bugs can escape into production. In the software realm, Agile methodologies such as Behavior-Driven Development (BDD) and the Gherkin language emerged to ensure code behavior matches the documented intent. Bathtub is a new library written completely in SystemVerilog which enables silicon design and verification teams to realize the benefits of BDD and Gherkin to facilitate collaboration and generate true living documentation—executable specifications that are always accurate, accessible, and up-to-date. This paper describes BDD, Gherkin, and Bathtub, and shares findings from their use in a silicon project.
Technical Session 11 [Formal: Liveness and Use Cases]
Wednesday, March 6, 2024
-
Ankit Garg, Nvidia
This paper focuses on the different techniques and methodologies employed to analyze and prove forward progress and its significance in the context of formal verification. We will discuss various approaches for ensuring forward progress in the design, using both Liveness and Safety assertions, and later examine how and where each assertion type should be used, with some case studies that demonstrate the application of forward progress checks and the types of critical issues they have found in designs.
-
Nitish Sharma, Qualcomm Inc; Venkata Nishanth Narisetty, Qualcomm Inc
Formal Verification (FV) has established itself as a critical element of high-quality functional verification, playing an indispensable role in the successful tape-out of modern hardware designs. While safety properties have received significant attention in recent EDA tool advancements, liveness properties remain relatively overlooked, even among formal experts. In this work, we introduce a robust mathematical formulation aimed at exploring deep states in the design and uncovering liveness issues such as starvation, livelocks, and deadlocks. Furthermore, we extend this methodology to break down the FV complexity related to liveness, enabling us to achieve exhaustive proofs for liveness properties. We also present the bugs found by this approach in multiple DV-signed-off designs.
-
Erik Seligman, Cadence; Karthik Baddam, Qualcomm
Have you ever worked on a large formal property verification (FPV) problem, and had to decompose the problem into smaller pieces to make it possible? This is a natural technique used during modern design verification, and a necessity due to the inherent complexity of the “model checking” problem, the main problem addressed by current commercial FPV tools. Usually such decomposition is handled by writing ad hoc user-level scripts. These scripts define the various pieces of the problem, and assemble the results together in the end to ensure the design is fully validated. However, this reliance on user-level scripts creates a key avenue for potential errors or escapes; indeed, we have seen numerous real-life cases where the subdivided problem was run perfectly on the tools, but an error was made during assembly or in porting specifications from a previous project. In this paper we present a new FPV tool concept, “Proof Structure”, which augments standard model checking to empower the user to carry out this decomposition in a rigorous, well-defined way—with the tool given enough information to prove that the overall results are combined with correct reasoning, or identify any holes in the process. We describe the main operations supported by our current implementation of Proof Structure, and show how it has helped engineers at HPE to achieve convergence on a difficult FPV problem, using multi-step decomposition and gaining high confidence in the overall soundness of their result.
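The soundness bookkeeping such a tool automates can be sketched as two checks: every assumption used in one piece must be proven as an assertion in some piece, and the resulting assume-guarantee dependencies must not be circular. This is an illustrative Python model of the reasoning, not the Proof Structure implementation or its input format:

```python
def check_decomposition(pieces):
    """Sanity-check an assume-guarantee decomposition.

    `pieces` maps piece name -> {"proves": [...], "assumes": [...]}.
    Returns "sound" or a description of the hole found."""
    proven_in = {}
    for name, piece in pieces.items():
        for prop in piece["proves"]:
            proven_in[prop] = name
    # Build piece-level dependency edges from each piece's assumptions.
    deps = {name: set() for name in pieces}
    for name, piece in pieces.items():
        for prop in piece["assumes"]:
            if prop not in proven_in:
                return f"hole: '{prop}' assumed but never proven"
            if proven_in[prop] != name:
                deps[name].add(proven_in[prop])
    # Detect circular reasoning with a depth-first search.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in pieces}
    def dfs(n):
        color[n] = GRAY
        for m in deps[n]:
            if color[m] == GRAY or (color[m] == WHITE and dfs(m)):
                return True
        color[n] = BLACK
        return False
    for n in pieces:
        if color[n] == WHITE and dfs(n):
            return "hole: circular assume-guarantee reasoning"
    return "sound"

pieces = {
    "fifo": {"proves": ["no_overflow"], "assumes": ["valid_push"]},
    "ctrl": {"proves": ["valid_push"], "assumes": []},
}
print(check_decomposition(pieces))
```

Exactly these two failure modes (a property assumed but never discharged, and two pieces assuming each other's guarantees) are what ad hoc assembly scripts tend to miss.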
Technical Session 12 [Analog Modeling in SystemVerilog]
Wednesday, March 6, 2024
-
Mariam Maurice, Siemens EDA
As waiting for completion of the analog transistor-level design could extend time to market, and digital verification engineers must ensure that both the analog and digital systems will function properly when connected, functional verification of analog devices modeled using Real Number Modeling (RNM) has become a crucial step in mixed-signal SoC validation. There are numerous effective, adaptable, and trustworthy functional verification approaches, including Constrained Random Verification (CRV), Functional Coverage, Assertions/Checkers, and the Universal Verification Methodology (UVM). The paper explains how these approaches can be used to test an analog-modeled Device Under Test (DUT) and guarantee its functional accuracy.
-
SangGi Do, Samsung Electronics; Seongeun Shin, Samsung Electronics; JungKyu Jang, Samsung Electronics; Dohui Kim, Samsung Electronics
IP verification is an essential process when designing solid IPs. In addition, the importance of IP verification is being highlighted due to the increased demand for automotive silicon. Verifying the logical operations always requires a behavioral model written in Verilog or VHDL. From small gates to large memories, behavioral models are essential for logic simulation. For small circuits such as gates and standard cells, it is easy to implement all operations. However, large IPs such as memories usually reflect only key operations, because it is impossible to implement all the detailed operations while also accounting for physical characteristics. Furthermore, some specialized devices, such as the MTJ device for MRAM, have unique probabilistic characteristics that cannot be supported using conventional modeling methods. In this paper, we introduce our memory bit-cell modeling method, which enables various test operations for MRAM. Our proposed modeling methods are implemented for solid IP validation, and they support not only MRAM but also various other types of memories.
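The probabilistic bit-cell behavior can be sketched as follows; the switching probability, interface, and error-rate experiment are illustrative, not actual MTJ device parameters or the authors' Verilog model:

```python
import random

class MtjBitCell:
    """Toy behavioral model of an MTJ bit-cell: a write flips the stored
    state only with a given switching probability, so writes can fail."""

    def __init__(self, p_switch, rng):
        self.state = 0
        self.p_switch = p_switch
        self.rng = rng

    def write(self, value):
        # Stochastic switching: the cell may silently retain its old state.
        if value != self.state and self.rng.random() < self.p_switch:
            self.state = value

    def read(self):
        return self.state

# Estimate the single-shot write error rate over many fresh cells.
rng = random.Random(1)
trials = 10000
fails = 0
for _ in range(trials):
    cell = MtjBitCell(p_switch=0.9, rng=rng)
    cell.write(1)
    if cell.read() != 1:
        fails += 1
print(f"observed write error rate: {fails / trials:.3f}")
```

A conventional deterministic memory model cannot express this: a test that writes then reads back must tolerate (and measure) a nonzero failure rate, which is exactly the kind of test operation the modeling method needs to enable.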
-
FNU Farshad, Ulkasemi Inc; Shafaitul Islam Surush, Ulkasemi Inc.; Simul Barua, Ulkasemi Inc
Modern mixed-signal System-on-Chips (SoCs) require tight integration between the analog and digital domains. Thorough pre-silicon verification of analog and digital sub-systems at block and SoC level is a must for ensuring first working silicon. Unlike digital circuits, analog circuits do not have a standard verification methodology yet [1-3]. Usually, in the verification environment the SPICE-based analog circuits are replaced with faster event-driven behavioral models developed using Verilog-AMS or SystemVerilog. In recent years, SystemVerilog-based Real Number Modeling (SV-RNM) and SystemVerilog User-Defined Nettypes (SV-UDN) have been gaining popularity due to easier integration with Universal Verification Methodology (UVM) testbenches [3]. Pre-computed Look-Up Tables (LUTs) are widely used to model the behavior of complex analog circuits. These LUTs are usually populated using data from external sources like MATLAB. Unlike Verilog-AMS, which has a built-in function named $table_model() [4] to support LUTs by importing data from a .tbl file, SystemVerilog has no built-in support for such functionality. This paper demonstrates a mechanism, based on a SystemVerilog package named sv_lut_pkg, for LUT creation, population, and value fetching using parameterized SystemVerilog macros. The capabilities of the sv_lut_pkg package are exhibited by developing a LUT-based PTAT core model using the SV-RNM flow.
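The behavior such a package reproduces in SystemVerilog, namely loading a table and interpolating between its entries in the style of $table_model(), can be sketched in Python; the PTAT-style table values below are illustrative, not measured data:

```python
from bisect import bisect_left

def lut_lookup(xs, ys, x):
    """1-D look-up with linear interpolation between table entries.

    Values outside the table range clamp to the end points."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    i = bisect_left(xs, x)  # first index with xs[i] >= x
    x0, x1, y0, y1 = xs[i - 1], xs[i], ys[i - 1], ys[i]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)

# Illustrative PTAT-style table: temperature (degC) -> output voltage (V).
temps = [-40.0, 0.0, 25.0, 85.0, 125.0]
volts = [0.42, 0.50, 0.55, 0.67, 0.75]
print(lut_lookup(temps, volts, 55.0))
```

In the real-number model, the same lookup runs as event-driven real arithmetic, which is what makes LUT-based models orders of magnitude faster than SPICE while tracking the circuit's characterized curve.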
Posters
Tuesday, March 5, 2024
-
Ashfaq Khan, Intel Corporation; Shuhui Lin, Intel Corporation; Daniel Standring, Intel Corporation; Adam Campos, Intel Corporation; Satish Venkatesan, Intel Corporation; Soowan Suh, Intel Corporation
A typical VLSI implementation starts from a logical or architectural view of the design in RTL, where multiple building blocks are connected without considering physical constraints. This view then goes through a series of modifications required to meet structural design (SD) and manufacturability needs. While some of these modifications enjoy automation through industry-standard EDA tools (e.g., DFT insertion), the majority of them are currently implemented manually by designers and are considered regular design activity. To make matters worse, some of these steps often need to be repeated based on feedback from the downstream consumers of the design. In this paper we demonstrate that, with the recent advancements of the EDA industry, many of these activities can now be automated in the form of RTL transformation. Such transformations include, but are not limited to, physical hierarchy restructuring, feedthrough/tie-off/repeater insertion, clock/reset redistribution, and fuse isolation. Our methods combine user specs and home-grown code with industry-standard EDA tools to achieve RTL transformations on the fly as part of the RTL generation flow. This is a paradigm shift in RTL development, resulting in an order-of-magnitude improvement in design turn-around time (TAT). We describe several applications/transformations from a production use case where multiple weeks of design work have been either fully eliminated or reduced to a fraction of the original TAT, depending on the type of transformation. We also present various best-known methods (BKMs) and gotchas that need to be considered while performing RTL transformation to achieve the best results, given the current state of the EDA tools.
-
Kaiwen Chin, Renesas; Esra Sahin, Renesas; Kranthi Pamarthi, Renesas
LINT tools have been supporting digital designers in structurally detecting design issues early in the design cycle. One of the key problems they seek to address is “arithmetic overflow” detection. Due to inevitable false negatives within a pure structural LINTing flow, designers struggle to isolate the real issues. Manual assessment of certain types of violations can be tedious, error-prone, and very time-consuming. LINT combined with formal methods of analysis offers exciting new opportunities. Formal LINT tools provide efficient formal-enhanced solutions for many LINTing applications to ensure good design quality. One of the areas of concern is arithmetic overflow verification. Formal-aware LINTing helps designers identify real design issues much more efficiently. However, efficient formal-aware verification requires RTL designers to adopt a certain coding style for signed arithmetic operations. We have been partnering with EDA vendors to improve the reliability and robustness of arithmetic overflow verification in Formal LINT tools, aiming to achieve higher productivity and accuracy. While the tools keep improving in their coverage, this paper recommends the most promising RTL coding style if one wants to use Formal LINT tools to detect arithmetic overflow. The paper illustrates the strength of Formal LINT technology over structural LINTing, summarizes best practices and pitfalls in signed arithmetic RTL implementation, articulates the approach taken in tool evaluations, and briefly presents the tool evaluation results.
-
Kyoungmin Park, Samsung Electronics; Brad Budlong, Cadence Design Systems; Hyundon Kim, Samsung Electronics; Danny Yi, Cadence Design Systems; Gibs Lee, Cadence Design Systems; Jaehunn Lee, Samsung Electronics; Chulmin Kim, Samsung Electronics; Jaemin Choi, Samsung Electronics; Kijung Yoo, Samsung Electronics; Youngsik Kim, Samsung Electronics; Yogesh Goel, Cadence Design Systems; Seonil Brian Choi, Samsung Electronics
This paper introduces an innovative 4-State logic ('0', '1', 'X', 'Z') emulation technology and describes how we applied it to a real mobile-AP SoC for power-aware verification, for the first time in emulation-based verification, including the requirements for successful adoption. We present the results of the experiment compared with traditional simulation-based verification, and we also share experimental data on how many additional resources are needed for 4-State logic emulation compared with 2-State logic emulation.
-
Endri Kaja, Infineon Technologies AG; Nicolas Gerlin, Infineon Technologies AG; Ungsang Yun, Infineon Technologies AG; Jad Al Halabi, Infineon Technologies AG; Sebastian Prebeck, Infineon Technologies AG; Dominik Stoffel, Rheinland-Pfälzische Technische Universität; Wolfgang Kunz, Rheinland-Pfälzische Technische Universität; Wolfgang Ecker, Infineon Technologies AG
In this paper, we introduce an automated and versatile framework designed to generate diverse SFI campaigns while at the same time closing the gap between the specifications and the fault injection process with minimal effort. The framework provides a vendor-independent solution, so that any Verilog/SystemVerilog-based simulator or emulator can be utilized. Initially, an RTL generation flow is utilized to generate the designs at a mixed RTL/gate-level granularity and at the same time to equip them with fault injection capabilities. In this flow, only the selected parts of the design that are subjected to fault injection are kept at gate-level granularity, while the remaining parts of the design are represented at RTL granularity. This approach enables fast RTL fault simulation while maintaining the accuracy of gate-level fault simulation, providing automatic SFI on various RISC-V variants.
-
Suhas D S, Intel; Ponsankar Arumugam, Intel; Deepmala Sachan, Intel; Ritesh Jain, Intel
Graphics SoCs are high-performance, complex designs with multiple clocks and resets interacting between multiple modules, giving rise to CDC/RDC scenarios. A typical graphics SoC contains hundreds of IPs and multiple subsystems with hundreds of clocks; abstracts of these are used at the SoC level for hierarchical analysis, which in turn results in thousands of architectural assumptions. A single wrong assumption can mask an actual violation and lead to metastability. Reviewing a large number of constraints manually is always error-prone. In this paper we address such metastability issues by validating these constraints against the actual design intent during simulation, to ensure no misses leading to potential silicon escapes.
-
Vineeth B, Intel Technology India Pvt Ltd; Deepmala Sachan, Intel Technology India Pvt Ltd; Ritesh Jain, Intel Technology India Pvt Ltd
Timing constraint verification plays a crucial role in the development of GFX SoCs, as it ensures that the timing constraints used for synthesis and timing closure are proper and accurate so that the design meets the desired performance requirements. The conventional method used to verify timing constraints is gate-level simulation (GLS). Such simulations require long run times, offer less coverage, and occur too late in the design development cycle. A powerful alternative to GLS is formal verification of the timing constraints, which is faster and more efficient. The major drawback of this approach is that it may not be possible to formally verify all the timing exceptions in the design, and in such scenarios all the formal failures must be verified using SV assertions in functional simulation. However, the sheer number of assertions generated corresponding to the formal failures can make it difficult to verify them completely in simulation. This paper presents a methodology to improve the formal verification and consequently reduce the assertions generated, so as to converge on real timing issues as fast as possible.
-
Farhad Ahmed, Siemens EDA; Manish Bhati, Siemens EDA; Lyle Benson, Siemens EDA
Reset tree checks should be reviewed thoroughly before reset domain crossing (RDC) analysis. Static verification tools have many checks for reset tree analysis. The paper discusses the usage of non-resettable registers (NRRs) in reset paths. NRRs can introduce metastability into reset paths, and hence thorough verification is a must. The paper also discusses noise reduction strategies in RDC analysis. Stable paths and functional false paths are the focus of the noise reduction discussion, and we cover various scenarios and how a static verification tool should report these paths.
-
Nicholas Nuti, Intel Corporation; Srinivasan Jambulingam, Intel Corporation
Contemporary methods of validating SoC IP interoperability tend to be arduous and time-consuming because they lack a standardized functional validation methodology. An SoC is made of many interconnected IPs that can be developed and verified in isolation. A lack of initial collaborative effort between IP teams can increase validation complexity, because IPs have varying simulation requirements and preexisting work must be modified to directly integrate other IPs. Direct IP integration in interoperability validation requires extended support from other teams during setup because simulations must be merged. Merging IP simulations can be difficult because different projects may impose their own requisites. Further, the lack of a standardized and seamless integration method increases project duration and can cause validation teams to find bugs late in the overall development process due to delays. This paper discusses an unobtrusive and accessible method of multi-IP simulation with the aim of optimizing and simplifying the process of interoperability validation.
-
Sharada Vajja, Google LLC; Raghu Alamuri, Google LLC; Saksham Mehra, Google LLC
Mobile System-on-Chip (SoC) devices have entered an era of unprecedented complexity, driven by the exponential growth in hardware capabilities within smartphones and other portable devices. The increased demand to support complex, high-performance features in current smartphones has prompted the integration of compute engines such as GPUs, multicore CPUs, and other machine-learning processing units into mobile chips. This has substantially increased the performance demands placed on mobile devices, making performance verification an increasingly critical endeavor in addition to functional verification.
In this paper, we present a systematic approach to efficiently capture the intricacies of the performance verification flow, offering insights into strategies, methodologies, stimulus generation, analysis techniques, and results. More importantly, this approach is designed to be scalable across various testbenches and environments, including SoC modeling, IP- and SoC-level RTL simulations, emulation, and post-silicon validation.
-
Emiliano Morini, Intel; Bill Zorn, Intel; Disha Puri, Intel; Madhurima Eranki, Intel; Shravya Jampana, Intel
Dot Product Accumulate Systolic (DPAS) units are a primary component of ML accelerator architectures. They compute floating-point matrix multiply-add operations, which consist of several dot products. Given that exhaustive simulation is impossible in real use cases, formal verification is the only viable option for verifying these components. C2RTL Formal Equivalence Checking has been shown to be a very powerful methodology, and to deploy it successfully an independent, trusted C/C++ reference model is required. In this paper, we present a comprehensive End-to-End (E2E) Formal Verification signoff approach for DPAS units. The proposed flow begins with a new formal-friendly C++ model, created using an internal iFP library, developed starting from the FPCore functional programming language and validated against third-party trusted libraries. The flow then continues with C2RTL equivalence checking to establish RTL correctness, where the assumptions used are encoded in the RTL itself as SVA properties and tested using simulation and formal property verification. Several hard corner-case bugs have been identified in intermediate versions of the RTL with this methodology. Innovative techniques have been developed to achieve full convergence, in particular for large double-precision units, proving that the final RTL has no bugs against the reference model.
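At its core, the reference computation is a dot-product accumulate. A deliberately simplified sketch is shown below, ignoring the unit's internal precision, rounding, and systolic structure, which are exactly the details a real reference model must pin down:

```python
def dpas_dot(a, b, acc=0.0):
    """Reference dot-product accumulate: acc + sum(a[i] * b[i]).

    A matrix multiply-add decomposes into many such calls, one per
    output element."""
    for x, y in zip(a, b):
        acc += x * y
    return acc

print(dpas_dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0], acc=0.5))  # 32.5
```

The equivalence-checking problem is hard precisely because the RTL does not evaluate this formula naively: intermediate widths, rounding points, and accumulation order all differ, and the proof must show the results still agree.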
-
Deepak Narayan Gadde, Infineon Technologies; Suruchi Kumari, Infineon Technologies; Aman Kumar, Infineon Technologies
Python, a multi-paradigm language known for its ease of integration with other languages, has recently gained significant attention among verification engineers. A Python-based verification environment capitalizes on open-source frameworks such as PyUVM (a Python-based UVM 1.2 implementation) and PyVSC (which facilitates constrained randomization and functional coverage). These libraries play a pivotal role in expediting test development and hold promise for reducing setup costs. The goal of this paper is to evaluate the effectiveness of PyUVM verification testbenches across various design IPs, aiming at a comprehensive comparison of their features and performance metrics with the established SystemVerilog-UVM methodology.
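The functional-coverage idea these frameworks bring to Python can be illustrated in plain code. This is a deliberately simplified sketch, not the PyVSC API: it mimics a SystemVerilog covergroup by sampling values into named bins and reporting the fraction of bins hit.

```python
# Illustrative covergroup-like class; bin names and ranges are invented
# for the example, not taken from any paper or library.
class Covergroup:
    def __init__(self, bins):
        # bins: bin name -> predicate over the sampled value
        self.bins = bins
        self.hits = {name: 0 for name in bins}

    def sample(self, value):
        for name, pred in self.bins.items():
            if pred(value):
                self.hits[name] += 1

    def coverage(self):
        """Percentage of bins hit at least once."""
        hit = sum(1 for name in self.hits if self.hits[name] > 0)
        return 100.0 * hit / len(self.bins)

cg = Covergroup({
    "low":  lambda v: 0 <= v < 16,
    "mid":  lambda v: 16 <= v < 240,
    "high": lambda v: 240 <= v <= 255,
})
for v in (3, 100, 200):
    cg.sample(v)
# The "high" bin was never hit, so 2 of 3 bins are covered.
```

Coverage closure then amounts to generating stimulus until every bin reports at least one hit, which is where constrained randomization comes in.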
-
Nimay Shah, Analog Devices Inc.; Pranav Dhayagude, Analog Devices Inc.; Paul Wright, Analog Devices Inc.; Raj Mitra, Cadence Design Systems Inc.
Emulation is ubiquitous for verifying and validating complex silicon systems of today's age, comprising a full software stack driving highly intricate hardware. However, as some of these silicon systems move towards the edge, the underlying hardware becomes increasingly mixed-signal with the integration of sensors, real-world interfaces, high-speed data converters, buck/boost regulators, etc. Traditional emulation techniques support only synthesizable digital logic. As a result, the scope of what can be verified or validated, and to what extent, is somewhat limited. This means that software-driven chip configuration reaching all the way down to primitive hardware elements, or complex calibration loops and low-power techniques involving the full software stack, cannot be fully verified prior to tapeout. Such verification is an absolute necessity in today's complex systems at cutting-edge manufacturing technologies, owing to the cost of unplanned tape-outs and the pressure of delivering first-pass sampleable silicon to customers. The novel techniques presented in this paper focus on removing this limitation and enabling analog/mixed-signal behavioral modeling methods, thereby enabling "true" system-level, mixed-signal emulation.
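To make the behavioral-modeling idea concrete: before analog content can run on an emulator, its continuous behavior must be recast as a discrete-time update that digital hardware can evaluate each clock. The snippet below is an illustrative first-order low-pass response only, an assumption for this example and not a model from the paper.

```python
# Illustrative only: a first-order low-pass step response expressed as a
# fixed-timestep difference equation, the discrete form an emulator-friendly
# behavioral model would take. `alpha` stands in for dt/(RC + dt).
def lowpass_step(state, vin, alpha=0.1):
    """One timestep of y += alpha * (vin - y)."""
    return state + alpha * (vin - state)

v = 0.0
for _ in range(100):
    v = lowpass_step(v, 1.0)
# After many steps, v settles toward the input value of 1.0.
```

In an emulation flow, such an update rule would be written in synthesizable form (fixed-point, one evaluation per clock) so the analog behavior co-simulates with the digital logic and software stack.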
-
Charles Dancak, Betasoft Inc.
A standard UVM testbench is developed to apply benchtop-style directed tests to an on-chip low-dropout CMOS voltage regulator. Eight directed tests verify the regulator's DC and AC response to line and load fluctuations, its programmed operating modes, etc. To enhance the effectiveness of the low-level directed tests in reaching unexpected corner cases, we employ high-level UVM randomization techniques to generate a randsequence of transient tests. The sub-cycle timing aspects of stimulus and response for transient testing are handled using the global uvm_event_pool. Our goal is to present the UVM testbench mechanisms and coding techniques that proved most effective for an area of analog/mixed-signal testing which lies outside the usual scope of chip-level UVM testbench development.
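The core idea behind a randsequence of directed tests is simple weighted-random ordering, which can be sketched outside SystemVerilog as follows. The test names and weights here are illustrative placeholders, not the paper's actual test list.

```python
import random

# Hedged sketch of SystemVerilog's randsequence idea: draw a random,
# weighted sequence of directed-test names. A seed makes the run
# reproducible, as a constrained-random regression would require.
def random_test_sequence(weighted_tests, length, seed=None):
    """Return a weighted-random ordering of directed-test names."""
    rng = random.Random(seed)
    names = list(weighted_tests)
    weights = [weighted_tests[name] for name in names]
    return [rng.choices(names, weights=weights)[0] for _ in range(length)]

seq = random_test_sequence(
    {"line_step": 3, "load_step": 3, "mode_switch": 1}, length=8, seed=42)
```

Interleaving directed tests in randomized order is what exposes corner cases (e.g. a mode switch during a load transient) that the tests never reach when run in isolation.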
-
Santosh Kumar, Qualcomm Technologies Inc; Yogish Raja, Qualcomm Technologies Inc; Geetika Agrawal, Qualcomm Technologies Inc; Karthikeyan Sugumaran, Qualcomm Technologies Inc; Arjun Vazhayil, Qualcomm Technologies Inc; Tommy Brunansky, Qualcomm Technologies Inc.
Ever-increasing design complexity across different market segments (automotive, mobile, servers, compute, etc.) and different architecture types (single die vs. chiplets) has put the verification effort and strategies used across IP, subsystem, and SoC levels under the spotlight. With challenging time-to-market (TTM) pressures on products, it is imperative to have a scalable verification approach that allows a single constrained-random stimulus specification to be reused across different verification environments and strategies.
In this paper, we present our experience of using the Portable Test and Stimulus Standard (PSS) language to enable seamless reusability of constrained-random scenarios across platforms, design integration levels, and verification environments.