• Ravindra Aneja, Synopsys Inc.  
    • Xiaolin Chen, Synopsys Inc.  
    • Aimee Sutton, Synopsys Inc.  
    • Nilabja Chattopadhyay, Amazon.com LLC
    • Jevin Saju John, Synopsys Inc.  
    • Bjoern Hartmann, Synopsys Inc.  

    (Will include presentations by Synopsys customers)  

    Today’s complex processor-based systems enable technological advances in many market segments, such as AI, high-performance computing, and automotive. However, verification of these systems introduces new challenges, spanning from architectural verification of a custom RISC-V processor to memory coherency in a system containing thousands of Arm or RISC-V cores. As the complexity of the design increases, so does the need for new tools and methods beyond simulation and UVM testbenches.  

    In this tutorial, we will focus on RISC-V processors and present next-generation verification techniques that span the verification journey from a single RISC-V processor to complex systems with many RISC-V cores.  

    To accommodate the flexible and evolving nature of the RISC-V ISA, as well as privilege mode features, out-of-order pipelines, interrupts and debug mode, RISC-V processor verification requires innovation in stimulus generation, comparison, and checking. We will cover dynamic and formal approaches to verifying RISC-V cores, with topics including, but not limited to: ISA compliance verification and functional coverage, data path validation, functional verification of critical blocks, and security verification.  
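
    To make the comparison and checking problem concrete, below is a minimal sketch of the generic step-and-compare idea used in processor verification: retire instructions on the design under test and on a reference model in lockstep and flag the first divergence. The RetiredInstr fields and the dut_step/ref_step callables are hypothetical stand-ins for an RTL retirement monitor and an instruction-set simulator; this is a conceptual sketch, not any particular tool’s API.

    ```python
    # Conceptual "step and compare" checking loop for a RISC-V core: retire one
    # instruction on the DUT and on a reference model, then compare the results.
    # dut_step() and ref_step() are hypothetical stand-ins for an RTL simulation
    # retirement monitor and an instruction-set simulator.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RetiredInstr:
        pc: int        # program counter of the retired instruction
        insn: int      # raw instruction encoding
        rd: int        # destination register index (0 if none)
        rd_value: int  # value written to rd
        trap: bool     # whether this step took a trap/exception

    def compare_retirement(dut_step, ref_step, max_instructions=100_000):
        """Step DUT and reference model in lockstep; stop at the first mismatch."""
        for n in range(max_instructions):
            dut = dut_step()   # RetiredInstr observed on the DUT retirement interface
            ref = ref_step()   # RetiredInstr predicted by the reference model
            if dut != ref:
                raise AssertionError(f"Mismatch at instruction {n}: DUT={dut} REF={ref}")
        print(f"{max_instructions} instructions retired with no mismatch")
    ```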

    Multi-core designs introduce a new set of challenges, such as ensuring fair access to shared resources and cache and memory coherence. This tutorial will present solutions designed to address these issues and prevent costly bug escapes to silicon.

    The size of multi-core designs and multi-processor SoCs means that a simulation-only verification strategy is impractical. Hardware-assisted verification becomes essential to ensuring correct operation in the multi-core designs of today and the future. This tutorial will demonstrate how Synopsys’ next-generation processor verification tools and techniques combine with HAV platforms to create a powerful and effective solution.  

    Whether you are a designer or a verification engineer of these complex processor-based systems, you will walk away with new ideas for improving your verification flow by embracing these next-generation solutions.
     


     

  • Russell Klein, HLS Program Director, Siemens EDA

    Many of the algorithms implemented in hardware have their foundations in mathematics and often have reference implementations in software programming languages. Mathematics generally uses real numbers (and sometimes imaginary numbers). Software and general-purpose computers typically use 32- or 64-bit integers and IEEE floating-point representations. But for purpose-built hardware, supporting the full range and precision of these formats is not just unnecessary, it is wasteful in terms of area, power, and performance.

    Examples of these types of algorithms can be found in image processing, audio processing, data communications including 5G and 6G, encryption, machine learning, data compression, and much more.

    Algorithms are implemented in hardware, as opposed to simply being implemented in software and run on a general-purpose processor, specifically to improve their performance and power consumption. As algorithms are moved into hardware, it is important to find an appropriate representation and to understand the impact of that representation on the accuracy and precision of the algorithm, as well as its effect on the power, performance, and area (PPA) of the hardware implementation.

    This workshop will cover a variety of numeric representations, including fixed-point numbers, alternative floating-point formats like Google’s “brain float,” and exponential representations like “posits.” It will examine rounding vs. truncation, overflow/underflow, and saturating math operations and their effects on calculations. We will look at how to model algorithms using these alternate formats. We will also cover how to validate and verify both algorithmic implementations and Verilog or VHDL RTL, and how this fits into the overall verification process.
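
    As a flavor of the quantization trade-offs mentioned above, here is a minimal Python sketch of signed fixed-point quantization with selectable rounding vs. truncation and saturating vs. wrapping overflow. It is purely illustrative; real modeling flows typically use bit-accurate datatype libraries rather than this toy function.

    ```python
    # Minimal sketch of fixed-point quantization effects: rounding vs. truncation
    # and saturating vs. wrapping overflow for a signed Qint_bits.frac_bits format.

    def to_fixed(x, int_bits=4, frac_bits=4, rounding=True, saturate=True):
        """Quantize a real value to a signed fixed-point number and return it as a float."""
        scale = 1 << frac_bits
        raw = round(x * scale) if rounding else int(x * scale)  # truncation drops toward zero
        lo = -(1 << (int_bits + frac_bits - 1))
        hi = (1 << (int_bits + frac_bits - 1)) - 1
        if saturate:
            raw = max(lo, min(hi, raw))       # clamp to the representable range
        else:
            span = hi - lo + 1
            raw = (raw - lo) % span + lo      # wrap around (two's-complement overflow)
        return raw / scale

    # Examples in Q4.4 (8 bits total)
    print(to_fixed(3.3))                      # 3.3125 (rounded)
    print(to_fixed(3.3, rounding=False))      # 3.25   (truncated)
    print(to_fixed(12.0))                     # 7.9375 (saturated at the maximum)
    print(to_fixed(12.0, saturate=False))     # -4.0   (wraps around)
    ```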
     

  • The advent of RISC-V has presented verification teams with many new verification challenges. Complex interactions at the system level that must be considered when developing a RISC-V core include scenarios that are uncommon for block-level verification teams. As we move towards more system-level verification in general, these types of scenarios will become commonplace. As such, RISC-V verification provides, among many other things, an interesting learning vehicle for the general verification challenges to come. 

    This workshop will discuss a specific, complex, yet commonplace verification challenge for any team working on a complex RISC-V core. We will consider the verification of a Memory Management Unit (MMU) that includes virtualization and hypervisor operation. These scenarios need to consider both single- and multi-core devices, along with an Input/Output Memory Management Unit (IOMMU) and uncore IP interaction. 

    The scenarios considered will include a broad range of Page Table Entry (PTE) setup cases and page fault cases, with differentiated behavior at different privilege levels. Many use cases for the MMU will be included, from various data read and write operations and code fetches, up to issues such as self-modifying code. Complex interactions will also be covered, including a range of cache coherency scenarios, RISC-V Weak Memory Ordering (RVWMO) cases that make use of the fence instructions, Translation Lookaside Buffer (TLB) invalidation and sfence.vma cases, and many other situations. 
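
    To make the page-fault cases more concrete, the sketch below shows the kind of leaf-PTE permission check a reference model might apply when predicting whether an access succeeds or faults at a given privilege level. It is a simplified illustration only: the full table walk, two-stage (hypervisor) translation, and TLB behavior discussed above are omitted, and faulting on unset A/D bits is just one of the allowed implementation choices.

    ```python
    # Simplified leaf-PTE permission check in the style of a testbench reference
    # model predicting RISC-V page faults. Bit positions follow the privileged
    # spec (V, R, W, X, U, A, D); the page-table walk, two-stage translation,
    # and TLB effects are deliberately omitted.

    V, R, W, X, U, A, D = (1 << b for b in (0, 1, 2, 3, 4, 6, 7))

    def pte_allows(pte, access, priv, sum_bit=False, mxr=False):
        """Return True if the leaf PTE permits the access; False means a page fault.

        access is 'read', 'write' or 'fetch'; priv is 'U' or 'S'.
        """
        if not (pte & V) or ((pte & W) and not (pte & R)):
            return False                 # invalid PTE or reserved W-without-R encoding
        if priv == 'U' and not (pte & U):
            return False                 # user access to a supervisor page
        if priv == 'S' and (pte & U) and (access == 'fetch' or not sum_bit):
            return False                 # S-mode touching a user page (SUM not set)
        if access == 'read' and not ((pte & R) or (mxr and (pte & X))):
            return False
        if access == 'write' and not (pte & W):
            return False
        if access == 'fetch' and not (pte & X):
            return False
        if not (pte & A) or (access == 'write' and not (pte & D)):
            return False                 # assume the core faults instead of setting A/D
        return True

    print(pte_allows(V | R | W | U | A | D, 'write', 'U'))  # True: permitted store
    print(pte_allows(V | R | U | A, 'write', 'U'))          # False: store page fault
    ```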

    As part of the workshop, a test plan with a full range of scenarios will be discussed, and efficient tests will be shown that provide high coverage, even at the more complex system level. This workshop will be useful for anyone working on RISC-V cores or processor verification, but is also applicable for any verification engineer considering the development of more complex testbenches that extend into system interaction.
     

  • Vikas Sachdeva, Director of Product Strategy and Business Development at Real Intent

    Introduction 
    Glitches are a common phenomenon in chip design, often deemed inconsequential due to their occurrence on synchronous paths, where Static Timing Analysis (STA) effectively mitigates them. However, specific scenarios within the chip design flow remain critically vulnerable to glitches, potentially causing catastrophic failures at the silicon level. These critical scenarios include clock domain crossing paths, interfaces between analog and digital domains, reset and clock paths, Design for Testability (DFT) paths, and paths spanning across power domains. Traditional methodologies struggle to detect and address glitches across such diverse scenarios comprehensively. This workshop introduces a holistic static methodology to achieve a thorough glitch signoff.
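
    As a toy illustration of the phenomenon itself (not of any signoff tool), the Python sketch below shows how two reconvergent paths with unequal delays turn a logically constant-0 function into a brief pulse when the input switches, which is exactly the kind of hazard that matters on the critical paths enumerated above.

    ```python
    # Toy discrete-time illustration of a glitch from reconvergent paths with
    # unequal delays: y = a AND (NOT a) is logically constant 0, but it pulses
    # high when 'a' rises because the inverting path is slower.

    def simulate(a_waveform, inverter_delay=2):
        """Return the waveform of y = a & not(a delayed by inverter_delay)."""
        y = []
        for t, a in enumerate(a_waveform):
            a_late = a_waveform[t - inverter_delay] if t >= inverter_delay else a_waveform[0]
            y.append(a & (1 - a_late))   # the AND gate sees the delayed, inverted input
        return y

    a = [0, 0, 0, 1, 1, 1, 1, 1]         # 'a' rises at t=3
    print(simulate(a))                   # [0, 0, 0, 1, 1, 0, 0, 0] -> glitch at t=3..4
    ```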

    Summary of the content of the workshop 
    The session begins by delineating the glitch phenomenon, followed by a discussion on the existing chip design flow's limitations, which may overlook glitch issues, leading to silicon failures. The workshop will showcase actual glitch-induced design failures encountered by Real Intent's team, including:

    • Glitches in asynchronous clock domain paths
    • Glitches in reset paths
    • Glitches in synchronous multi-cycle paths
    • Glitches affecting DFT paths
    • Glitches in power domain crossing paths and isolation signals
    • Glitches at the digital-to-analog interfaces

    We will provide a comprehensive analysis of these glitch types through detailed block diagrams, elucidating the identification, verification, and resolution processes for each glitch type. Drawing from these examples, we will develop a methodological framework that ensures a thorough glitch signoff. 

    Furthermore, the session will detail a signoff workflow that integrates static and formal analysis techniques to achieve a robust glitch pathway signoff. 

    Concluding with a compilation of Static Signoff best practices derived from industry insights and experiences, this workshop aims to empower participants to elevate verification practices within their organizations. 

    Intended Audience 
    This workshop is tailored for RTL Designers, Verification Engineers, SoC Designers, Chip Architects, and professionals involved in Clocks and Resets design and architecture, seeking to enhance their verification methodologies against glitches. 
    By attending, participants will gain invaluable insights into glitch analysis and signoff strategies, enriching the verification standards and practices in their respective fields. 
     

  • Arti Dwivedi, Group Director, Product Engineering, Cadence Design Systems 
    Yash Bhagwat, Senior Emulation Verification Engineer at NVIDIA
    Michael Young, Senior Product Management Group Director, Cadence Design Systems 

    Performance per watt is a key success criterion for complex billion-gate SoCs. Optimizing for performance per watt requires power estimation for real application workloads early in the design flow.  

    Palladium Dynamic Power Analysis (DPA) offers novel technologies to estimate the power of real-world scenarios spanning billions of cycles for billion-gate designs in hours. Palladium’s fast dynamic power analysis enables identification of power hotspots in the design and provides insights into the power efficiency of software-hardware interaction. 

    This workshop will present power estimation methodologies for different types of emulation workloads to drive design power efficiency with the fastest turnaround for long vectors. The presentation will include case studies on how Palladium users have drastically improved the turnaround of power estimation for long emulation vectors compared to traditional power methodologies. The presentation will also share how Palladium users have improved methodologies for IR drop analysis using DPA, enabling power integrity sign-off for real worst-case power scenarios.
     

  • Tim Schneider – Sr. Manager of Field Application Engineering 

    Advanced semiconductor designs have many components, including multi-core architectures, programmable peripherals, and purpose-built accelerators. These design elements require a pathway for embedded system software to communicate with them. This is the hardware/software interface (HSI), and it forms the foundation for the entire design project. Many activities need information about the HSI, including device driver and firmware development, hardware design and verification, technical documentation, system diagnostics, and application software. All of them need accurate, up-to-date HSI information in many different and specialized formats. A lack of unified, up-to-date information results in poor collaboration and an increased opportunity for design errors. This can lead to costly last-minute fixes or even design re-spins, impacting team productivity and compromising the end quality of the SoC. 

    Arteris addresses these challenges with Magillem Registers, a tool that provides a better HSI solution with a scalable infrastructure, promoting a rapid, highly iterative design environment to specify, document, implement, and verify address maps for complex SoCs and FPGAs.  

    During this tutorial, we will explain how Magillem Registers has the features and flexibility to speed development of the largest and most complex designs.
     

  • As the semiconductor industry experiences an explosion in design size and complexity, it is accompanied by a need to deliver software readiness by the time silicon is back in the lab. One of the key targets for software readiness is adherence to a power budget with real software applications stressing the hardware design. There are two key elements to validating power budgets pre-silicon: the performance of the model and the ability to execute a full software stack. Combining a fast virtual prototype of the CPU sub-system with the RTL of the remaining SoC running on an emulator typically produces a 10x speed-up over fully-RTL emulation setups. Recent advances in both virtual prototyping and emulation now yield another leap in hybrid performance, which enables pre-silicon execution of the entire software stack. Similarly, the power analysis engine needs to become efficient enough to handle these large workloads and to address requirements for peak power, average power, and leakage power. In this workshop, we will first review the latest state of the art in hybrid emulation technologies and use cases. We will then illustrate the application of hybrid emulation for pre-silicon power optimization. 
     

  • From battery-operated handhelds to datacenter servers, electronic devices are power consumers. Depending on the application area, total power consumption varies based on the semiconductor (SoC) content used and the intended purpose of the device. One truth applies to every SoC: power analysis requires a holistic methodology, from architectural exploration to tape-out, to accurately address power concerns. 

    One of the key factors in calculating power consumption is dynamic power, which has become increasingly important in technologies such as FinFET. Here the gate area has increased; hence, the load capacitance is larger and the impact of switching power is more pronounced.
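
    For reference, the dynamic (switching) power term follows the familiar relation P_dyn = alpha * C_load * Vdd^2 * f. The snippet below is only a back-of-the-envelope illustration with made-up numbers, not a methodology from the workshop.

    ```python
    # Back-of-the-envelope sketch of dynamic switching power:
    # P_dyn = alpha * C_load * Vdd^2 * f. The numbers below are illustrative only.

    def dynamic_power(alpha, c_load_farads, vdd_volts, freq_hz):
        """Average switching power of a node toggling with activity factor alpha."""
        return alpha * c_load_farads * vdd_volts ** 2 * freq_hz

    # e.g. 20% activity, 1 fF effective load, 0.75 V supply, 2 GHz clock
    p = dynamic_power(0.2, 1e-15, 0.75, 2e9)
    print(f"{p * 1e6:.3f} uW per node")   # ~0.225 uW; multiply by the toggling node count
    ```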

    In this workshop, join us in exploring the power dynamics shaping the future of the data centric era. We’ll look at new methodologies for power profiling and analysis and their positive impact on meeting power requirements.
     

  • PSS (Portable Test and Stimulus Standard) was first released in 2018 to enable a single representation of stimulus and test scenarios that can be re-used across multiple verification and test engines and also provide the means for intelligent constrained random testing at the SoC level.

    Six years later, PSS has been adopted by many chip design companies across multiple use cases, verticals, and platforms!

    In this workshop, we will review practical applications and real-life use cases of PSS adoption, focusing on the challenges that PSS was used to address and the way they were tackled with Perspec™ System Verifier.

    Workshop Highlights
    During this workshop, you will have the opportunity to:

    • Explore real-world applications: Discover how PSS is being used in real-world projects to improve productivity, enhance SoC-level coverage, and reduce time-to-market.
    • Gain industry insights: Learn from industry leaders and pioneers who have successfully implemented Portable Stimulus in their projects, and gain valuable insights into best practices and emerging trends.
    • Gain in-depth knowledge: Our expert speakers will delve into the latest enhancements of the PSS standard and Cadence’s System Verification libraries.

    Who Should Attend?
    This workshop is tailored for professionals in hardware and software design, verification, and testing. Whether you're a design engineer, verification engineer, or project manager overseeing system-on-chip development, this workshop is designed to enhance your understanding of Portable Stimulus and provide you with the knowledge and skills to apply it effectively in your projects.

    Speakers:

    • Moshik Rubin (Cadence)
    • Sergey Khaikin (Cadence)
    • Santosh Kumar (Qualcomm)
    • Klaus Hilliges (Advantest)
       


     

  • In the rapidly evolving landscape of semiconductor design, the complexity and scale of digital circuits continue to grow exponentially. Traditional methods of Register-Transfer Level (RTL) functional verification are increasingly challenged by these advancements, necessitating innovative approaches to ensure robust and efficient verification processes. This technical workshop aims to explore the integration of Artificial Intelligence (AI) and Machine Learning (ML) engines into RTL functional verification workflows, showcasing new products and methodologies that leverage these cutting-edge technologies.

    The workshop will feature a comprehensive overview of the current state of RTL functional verification, highlighting the limitations and bottlenecks faced by verification engineers. We will introduce a suite of new AI/ML-powered tools designed to enhance verification efficiency, accuracy, and coverage. These tools employ advanced algorithms to automate pattern recognition, anomaly detection, and predictive analysis, significantly reducing the time and effort required for verification tasks.

    Via the following presentations on “smart” automation for design and testbench creation, debug and regression analysis, engine optimizations, and multi-platform coverage merging and analysis, participants will be able to map their needs to these new AI/ML-accelerated flows:

    1. Introduction to AI/ML in RTL Verification: Understanding the basics of AI/ML and their applicability to RTL verification.
    2. New AI/ML-Driven Verification Tools: Demonstrations of the latest products incorporating AI/ML engines, including their features, benefits, and use cases.
    3. Case Studies and Real-World Applications: Insights from industry leaders on successful implementations of AI/ML in RTL verification, showcasing tangible improvements in verification outcomes.
    4. Future Trends and Challenges: Exploring the future potential of AI/ML in verification and addressing the challenges associated with their adoption.

    By the end of the workshop, attendees will have a deeper understanding of how AI/ML can transform their overall approach to RTL functional verification, driving innovation and efficiency in their verification processes.

    Join us to stay ahead of the curve and harness the power of AI/ML to tackle the complexities of modern RTL D&V!
     

  • In this 90-minute session, participants will explore how AI-driven agents can revolutionize today’s SoC design and verification workflows. We will discuss the latest breakthroughs in agentic AI algorithms for hardware modeling, constraint-solving, and automated test generation—showing how these AI agents not only expedite verification cycles but also enhance overall design quality. Using real-world use cases and practical demonstrations, attendees will gain actionable insights into integrating AI solutions for faster time to market, improved reliability, and lower development costs.

    Introduction to ChipAgents and William Wang

    ChipAgents is a pioneering semiconductor software company at the forefront of AI-driven automation in chip design and verification. By combining advanced AI algorithms with deep hardware expertise, ChipAgents delivers scalable solutions that help design teams catch bugs faster, optimize performance, and reduce project risk. The workshop organizer, William Wang, CEO and Founder of Alpha Design AI, is an industry thought leader recognized with the prestigious IEEE Laplace Award for his significant contributions in Artificial Intelligence. William has also been honored with “AI’s 10 to Watch,” the DARPA Young Faculty Award, the NSF CAREER Award, and the Karen Sparck Jones Award by the British Computer Society, underscoring his impact in applying machine intelligence to accelerate innovations.

  • This workshop presents a comprehensive exploration of machine learning (ML) techniques applied to functional verification, addressing the pressing need to automate and accelerate key stages of chip design verification. As verification consumes approximately 55% of ASIC/IC project costs and is a major bottleneck in chip design schedules, ML offers promising solutions to enhance productivity and reduce time-to-market. Our workshop aims to bridge the gap between cutting-edge research and practical industry applications, providing attendees with actionable insights and strategies to implement ML in their verification processes. 

    The workshop begins with a state-of-the-art survey of ML applications in verification, including recent advancements in using large language models (LLMs) for test bench generation and assertion insertion. We will discuss current limitations and challenges in applying ML to complex, large-scale designs, setting the stage for our exploration of novel solutions.
     

  • SoC development is a flow involving several steps, and there are often multiple iterations to meet design objectives and performance targets. The flow typically involves the following:

    • Design intent definition by architects 
    • Handling CSRs (Control and Status Registers), memory maps, register banks, etc. Users usually define their register specifications in SystemRDL (an Accellera standard), spreadsheets, JSON, CSV, etc. Then they use EDA tools to convert these specifications into DV collateral, firmware headers, documentation, etc., for consumption by the different teams in the SoC development flow (a minimal generation sketch follows this list).
    • Design teams manually create RTL for the desired functionality from a feature specification. They also work on power specifications, clock synchronization, constraint analysis, and timing closure. The same specification is used by other teams to create the verification testbench to verify the design, headers for firmware development, documentation for validation, etc. Any change in the specification during the process leads to iterations and sometimes rework. 
    • Reusing legacy IPs developed internally, mostly RTL, but also at other abstraction levels (TLM)
    • Using third party IPs (RTL and/or IP-XACT) from different vendors.
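
    As a minimal illustration of the “single specification, many collaterals” idea in the second bullet above, the sketch below turns a tiny register description (a plain Python dict standing in for SystemRDL/CSV input) into a C firmware header. The block, register, and field names are invented for the example; production flows generate UVM RAL models, documentation, and more from the same source.

    ```python
    # Minimal sketch of register-collateral generation: a small register spec
    # (standing in for SystemRDL/CSV input) is turned into a C firmware header.

    CTRL_BLOCK = {
        "name": "TIMER",
        "base": 0x4000_0000,
        "registers": [
            {"name": "CTRL",   "offset": 0x0, "fields": [("EN", 0, 1), ("MODE", 1, 2)]},
            {"name": "STATUS", "offset": 0x4, "fields": [("DONE", 0, 1)]},
        ],
    }

    def emit_c_header(block):
        lines = [f"#ifndef {block['name']}_REGS_H", f"#define {block['name']}_REGS_H", ""]
        for reg in block["registers"]:
            addr = block["base"] + reg["offset"]
            lines.append(f"#define {block['name']}_{reg['name']}_ADDR 0x{addr:08X}u")
            for fname, lsb, width in reg["fields"]:
                mask = ((1 << width) - 1) << lsb   # field mask from LSB position and width
                lines.append(f"#define {block['name']}_{reg['name']}_{fname}_MASK 0x{mask:08X}u")
        lines += ["", "#endif"]
        return "\n".join(lines)

    print(emit_c_header(CTRL_BLOCK))
    ```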

    Different teams work on different aspects of SoC design, dealing with different formats and a series of EDA tools. Finally, everything needs to be stitched together, integrated, and assembled into the SoC, which is then packaged and can be used in products in-house or delivered to other product companies for further development and integration into a product.

    Successful delivery of any SoC requires numerous exchanges of information and data (IPs) across in-house teams of hundreds of engineers and across vendors. It becomes important to standardize this exchange, internally and externally, for effective utilization of resources.

    A standardized approach also creates scope for automation to further reduce development cycle time. IP-XACT, an IEEE standard developed by Accellera, helps to standardize the SoC development flow. The latest version of IP-XACT (IEEE 1685-2022), along with its rich Tight Generator Interface (TGI) API, not only helps define a standard structure for your design data but also allows programmability for automation using the TGI API.
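
    As a small taste of programmatic access to IP-XACT data (illustrative only, and not the TGI API itself), the sketch below parses an IP-XACT 1685-2022 component file with Python’s standard ElementTree and lists its bus interfaces. The namespace URI is the published IP-XACT 2022 schema namespace; the file name is hypothetical.

    ```python
    # Illustrative sketch (not the TGI API): parse an IP-XACT 1685-2022 component
    # file and list its bus interfaces using the Python standard library.

    import xml.etree.ElementTree as ET

    NS = {"ipxact": "http://www.accellera.org/XMLSchema/IPXACT/1685-2022"}

    def list_bus_interfaces(component_xml_path):
        root = ET.parse(component_xml_path).getroot()
        for bif in root.findall("ipxact:busInterfaces/ipxact:busInterface", NS):
            print(bif.findtext("ipxact:name", namespaces=NS))

    # list_bus_interfaces("my_ip.xml")   # hypothetical packaged IP file
    ```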

    This workshop demonstrates effective design automation based on IP-XACT 1685-2022 with two use cases:

    • SoC integration automation based on TGI API
    • Automatic instantiation of NoC based on architect intent and IP-XACT

    Attendees of the workshop will learn the following:

    1. Introduction to IP-XACT
    2. Key features of IP-XACT
    3. What's new in IP-XACT 2022 in brief 
    4. Use case 1: Constructing SoC using IP-XACT and TGI API
      1. IP Packaging 
      2. Constructing SoC using IP-XACT
      3. Capture connectivity using busInterfaces and adhocConnections 
      4. What is TGI
      5. Using TGI for programmability
      6. Using vendor extensions for capturing user defined data
    5. Use case 2: Automatic instantiation of NoC based on architect intent and IP-XACT
      1. Flow overview
      2. System map intent definition by architects
      3. NoC automatic instantiation and connection
      4. Output generation from IP-XACT design (NoC RTL, C Header/Driver, UVM RAL, Documentation)
      5. Next design refinement steps
      6. Best practices in SoC creation using IP-XACT 
  • As advanced low-power architectures have become more pervasive in industry, the complexity of these architectures has driven new methodologies for the verification, implementation, and reuse of power intent specifications. Modern low-power designs place requirements that span from enabling more flexible IP design reuse to providing well-defined interfaces between analog and digital components in simulation. The IEEE 1801-2024 (UPF 4.0) standard provides several key enhancements that are required to keep pace with these innovations in low-power design. The workshop will provide an overview of the enhancements to the standard at both a conceptual and a command level. New concepts such as virtual supply nets, refinable macros, and UPF libraries will be introduced, as well as rearchitected features with respect to interfacing between analog and digital simulation and advanced state retention modeling. While the new IEEE 1801-2024 standard provides numerous detailed clarifications and enhancements to the previous version, this workshop will focus on the key changes that will impact most designers and the changes that enable new functionality. 
     

    In the six years since IEEE 1801-2018 was introduced, a number of design trends have emerged that require additional features in the standard. Key among these is an increasing need for co-verification of analog and mixed-signal content. Another trend is that IP providers are delivering pre-verified low-power IP to their customers but struggle to provide flexible IP for implementation in multiple design contexts while preserving the verification signoff on the blocks. To address these trends, IEEE 1801-2024 provides new features such as Value Conversion Methods (VCM) and HDL tunneling, which help bridge the gap between the analog and digital designs, and the concept of a refinable macro to address the IP reuse requirements. 
     

    Real-world usage of the previous standard (IEEE 1801-2018) has prompted clarifications and enhancements that will have a significant impact on users of the standard. For example, the modeling of state retention has been entirely reworked to provide better modeling of the retention power intent and to more accurately define requirements on the retention control signals in each phase of state retention. The new standard also codifies some common design practices to ensure a clearer, more consistent implementation across vendors. The new concept of virtual supplies is one such case. It removes the ambiguity that exists today when supplies are used to provide port constraints or to simplify power state definitions but do not imply a physical supply net in implementation. 
     

    This workshop will introduce these new concepts and their associated commands and provide an overview of the major semantic and syntax changes introduced by IEEE 1801-2024. This understanding will help attendees transition to the new standard and improve the quality of advanced low-power architectures and design environments. 

    Authors: 

    • John Decker, IEEE P1801 Workgroup Chair
    • Daniel Cross, Cadence
    • Amit Srivastava, IEEE P1801 Workgroup Vice-Chair
    • Lakshmanan Balasubramanian, IEEE P1801 Workgroup Secretary
    • Marcelo Glusman, Cadence
    • Medaramitta Jeevan, Siemens
    • Gabriel Chidolue, Siemens
    • Rick Koster, Siemens
    • Raguvaran Easwaran, Intel
    • Paul Bailey, Nordic Semiconductor
    • Progyna Khondkar, Cadence
  • CDC-RDC analysis has evolved into an essential stage of RTL quality signoff over the last two decades. Over this period, designs have grown exponentially to SoCs with 2 trillion+ transistors and chiplets containing 7+ SoCs. Today, CDC verification has become a multifaceted effort across chips designed for client, server, mobile, automotive, memory, AI/ML, FPGA, and other markets, with a focus on cleaning up thousands of clocks and constraints, integrating SVAs for constraints into the validation environment to check for correctness, looking for power-domain and DFT-logic-induced crossings, and finally signing off with netlist CDC to unearth any glitches and corrupted synchronizers introduced during synthesis.
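
    The structural heart of the analysis can be illustrated with a deliberately tiny sketch: flag any flop-to-flop path whose source and destination clock domains differ and whose destination is not a recognized synchronizer. Real tools of course operate on elaborated netlists and also qualify synchronizer depth, convergence, and glitch safety; the signal names and data below are invented for illustration.

    ```python
    # Conceptual sketch of the structural check at the heart of CDC analysis:
    # flag domain-crossing paths that do not land on a recognized synchronizer.

    flops = {
        "tx_data":   {"clock": "clk_a", "sync": False},
        "rx_meta":   {"clock": "clk_b", "sync": True},   # first stage of a 2-flop sync
        "rx_direct": {"clock": "clk_b", "sync": False},  # unsynchronized capture: a bug
    }
    paths = [("tx_data", "rx_meta"), ("tx_data", "rx_direct")]

    for src, dst in paths:
        if flops[src]["clock"] != flops[dst]["clock"] and not flops[dst]["sync"]:
            print(f"CDC violation: {src} ({flops[src]['clock']}) -> "
                  f"{dst} ({flops[dst]['clock']}) has no synchronizer")
    ```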

    As design sizes have increased with every generation, EDA tools could not handle running flat, and the only way of handling design complexity was through hierarchical CDC-RDC analysis that consumes abstracts. Hierarchical analysis also enables teams across the globe to work in parallel. Even with all this significant progress in EDA tool capabilities, the major bottleneck in CDC-RDC analysis of complex SoCs and chiplets is consuming abstracts generated by different vendor tools. Abstracts from different vendor tools are common because of multiple IP vendors; even in-house teams might deliver abstracts generated with different vendors’ tools. 

    The Accellera CDC Working-Group aims to define a standard CDC-RDC IP-XACT model to be portable and reusable regardless of the involved verification tool. 

    As the industry moves from monolithic designs, to IP/SoC flows with IPs sourced from a small set of select providers, to sourcing IPs globally (to create differentiated products), quality must be maintained while driving faster time-to-market. In areas where standards are present (SystemVerilog, OVM/UVM, LP/UPF), integration is able to meet both quality and speed goals. However, in areas where standards are not available (in this case, CDC-RDC), most options trade off quality, time-to-market, or both. Creating a standard for interoperable collateral addresses this gap. 

    This workshop will review the definitions of basic CDC-RDC concepts and constraints, describe the reference verification flow, and address the goals, scope, and deliverables of the Accellera CDC Working Group as it elaborates a specification of the standard abstract model. 

    Presenters:

    • Bill Gascoyne, Blue Pearl Software
    • Ping Yeung, Nvidia
    • Chetan Choppali Sudarshan, Marvell
    • Don Mills, Microchip Technology
    • Anupam Bakshi, Agnisys
    • Farhad Ahmed, Siemens EDA
    • Iredamola Olopade, Accellera
  • With the release of the Portable Stimulus Standard (PSS) version 3.0, and additional work ongoing in the Accellera Portable Stimulus Working Group, the PSS language has taken the next steps to maturity.

    Just as PSS has elevated the use of constrained-random verification from the transaction level in UVM to the scenario level, PSS 3.0 elevates the concept of runtime coverage from transaction-level data via covergroups, which PSS has supported for quite some time, to behavioral coverage: the tracking of sequential and concurrent behaviors to understand whether the scenarios generated from a PSS model execute the behaviors required to meet the intended deliverables. The first section of this technical Workshop will explain Runtime Behavioral Coverage in PSS and present examples to show how it is used to monitor and identify a variety of PSS behaviors as they are executed in the generated scenarios. We will also show how behavioral coverage statements may be combined with data coverage to understand the full usage and value of this new PSS feature.
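
    As a conceptual analogue only (expressed in Python rather than PSS syntax), the sketch below checks whether an execution trace of generated actions contains a required ordered sub-sequence and reports it as covered; PSS 3.0 behavioral coverage expresses this kind of sequential intent natively in the language, along with concurrency.

    ```python
    # Conceptual analogue of behavioral coverage: was a required ordered sequence
    # of actions observed in a generated scenario's execution trace?

    def sequence_covered(trace, required):
        """True if the actions in 'required' occur in order (not necessarily adjacent)."""
        it = iter(trace)
        return all(action in it for action in required)

    trace = ["init_dma", "cfg_channel", "start_xfer", "irq", "check_data"]
    print(sequence_covered(trace, ["cfg_channel", "start_xfer", "check_data"]))  # True
    print(sequence_covered(trace, ["start_xfer", "cfg_channel"]))                # False
    ```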

    The second section of the Workshop will provide an update on new and ongoing efforts in the Working Group, including a methodology library, similar to what UVM provides for SystemVerilog, to provide greater interoperability between PSS models, whether created in-house or sourced from third parties. This example-based approach will address solutions to common challenges faced in test creation and will underscore the necessity for an industry-wide PSS methodology library while showing the quality of PSS tests and the ease with which they can be created.

    You will learn:

    • How behavioral coverage can determine whether runtime concurrency matches test intent
    • How to use key behavioral coverage operators and statements
    • What new PSS features are coming
    • How a PSS methodology library supports interoperability and simplifies scenario creation

    Presenters include:

    • Hillel Miller, Synopsys
    • Prabhat Gupta, AMD

     

  • As the IEEE 1800.2 UVM standard continues to evolve, Accellera's release of the latest reference implementation (2020.3.1) introduces significant enhancements for verification engineers. This workshop focuses on the practical aspects of adopting the new version, providing an overview of its performance and functional improvements, as well as guidance for a smooth transition.

    We will examine the migration process from UVM 1.2, emphasizing how the challenges faced in earlier transitions have been addressed. Particular attention will be given to backward compatibility and resources now available, such as the public GitHub repository, which supports faster delivery of bug fixes and migration aids.

    Participants will also gain an understanding of the performance benefits introduced and how these enhancements can accelerate verification workflows compared to legacy versions. The workshop will also provide an overview of functional improvements, including enhancements that reduce the need to maintain modified versions of the UVM library. Finally, we’ll open the floor for discussion on potential future enhancements, including plans for reworking the Register Abstraction Layer (RAL).

    Presenter: Justin Refice, NVIDIA