Tutorials

Monday, March 4, 2024

  • Adnan Hamid, Breker; Tom Fitzpatrick, Siemens EDA; Matthew Ballance, AMD; Sergey Khaikin, Cadence; Prabhat Gupta, AMD

    Bringing an SoC-level system out of reset into an operational state involves configuring the component subsystems and IPs by properly programming hundreds or thousands of IP registers. Exercising runtime behavior involves programming yet more registers and in-memory descriptors. Stakeholders, including block-level DV, subsystem, and SoC verification teams, as well as silicon bring-up teams, rely on having early access to accurate programming sequences in order to shift-left their activities. Many of these stakeholders also depend on being able to efficiently modify the programming sequences to exercise different legal configurations and operations.

    Current approaches to deriving bring-up sequences often require block-level teams to create C code that captures key register-programming sequences to hand off to subsystem and SoC teams. Creating this content is an extra task that the block-level team would not normally perform, and it is often deferred until late in the verification cycle, limiting the ability of subsystem and SoC teams to shift-left their activities. The programming sequences are typically highly directed and cannot be easily modified to exercise different scenarios. Finally, because creating C-code programming sequences is disconnected from the primary work of a block-level DV team, the sequences are at high risk of becoming outdated.
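
    As a rough illustration of the kind of directed register-programming content described above, the sketch below shows a C++ bring-up sequence. All register names, offsets, and field values are hypothetical, and a plain array stands in for memory-mapped hardware so the sketch runs anywhere:

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical IP register offsets and field values -- illustrative only,
// not taken from any real device.
constexpr uint32_t CTRL_OFFSET   = 0x00;  // control register
constexpr uint32_t CLKDIV_OFFSET = 0x04;  // clock divider
constexpr uint32_t STATUS_OFFSET = 0x08;  // status register
constexpr uint32_t CTRL_ENABLE   = 0x1;
constexpr uint32_t STATUS_READY  = 0x1;

// Real bring-up code would use volatile MMIO accesses; a plain array
// stands in here so the sequence is runnable anywhere.
static uint32_t regs[16];

void write_reg(uint32_t offset, uint32_t value) { regs[offset / 4] = value; }
uint32_t read_reg(uint32_t offset) { return regs[offset / 4]; }

// A directed bring-up sequence: configure, enable, then poll for ready.
bool bring_up_ip(uint32_t clk_div) {
    write_reg(CLKDIV_OFFSET, clk_div);       // 1. configure before enabling
    write_reg(CTRL_OFFSET, CTRL_ENABLE);     // 2. enable the block
    regs[STATUS_OFFSET / 4] = STATUS_READY;  // (stub: hardware would set this)
    for (int i = 0; i < 1000; ++i)           // 3. bounded poll on status
        if (read_reg(STATUS_OFFSET) & STATUS_READY) return true;
    return false;
}
```

Because such sequences hard-code one legal ordering and one configuration, they are exactly the "highly directed" content that is difficult to adapt to other scenarios.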

    This tutorial will also provide an overview of the PSS features in development for PSS 3.0, as well as an introduction to the PSS methodology library currently under development by the working group. 


  • Francesco Lertora - Group Director, Software Engineering, Cadence Design Systems, Inc.; Mangesh Mukundrao Pande - Solutions Architect, Cadence Design Systems, Inc.; Pete Hardee - Group Director, Product Management, Cadence Design Systems, Inc.

    As the degree of automation continues to increase, the intelligent SoC content in automobiles is growing faster than the general semiconductor market. As a result, the traditional automotive electronics suppliers are being rapidly joined in the market by many other established semiconductor companies and startups, as well as OEMs seeking differentiation through increased vertical integration. This ever-growing set of players all need to meet the functional safety requirements specified in standards such as ISO 26262. A process called Failure Modes, Effects, and Diagnostic Analysis (FMEDA) is critical to meeting these requirements. FMEDA has so far largely been the preserve of consultants using spreadsheet-based approaches. But now, automotive semiconductor design projects are looking for tools that offer greater automation and consistency in the FMEDA process, as well as greater ownership of it and tighter integration with the design and verification flow. We believe the inherent ambiguity of functional safety standards will finally converge into a consolidated EDA standard enabling automation, consistent results, and information exchange between integrators across the product development lifecycle, analogous to industry standards for hardware description languages, timing, and power constraints.

    In this workshop, Cadence will present the Midas™ Safety Platform, and how it uses the Unified Safety Formal (USF) – a formalized method for safety analysis and specification of safety mechanisms proposed as a candidate standard to Accellera’s Functional Safety Working Group – to provide a holistic and automated approach to FMEDA at the architectural, design, and diagnostic coverage verification stages. We will demonstrate how USF provides the links between the FMEDA process and digital design and verification flows for both digital and analog/mixed signal, to provide a complete and consistent approach to meeting functional safety requirements.


  • Tushar Parikh, Principal Applications Engineer

    Generative AI has created a lot of interest recently and is poised to change our lives in many unimaginable ways. One significant change we already know is the huge amount of power it consumes: an AI chip delivering the functionality of GPT-3, with its 175 billion parameters, would require an estimated 1,280 megawatt-hours, equivalent to about 120 gasoline-powered cars operating for one year and around 550 tons of carbon emissions.

    Generative AI and other similar applications leverage semiconductor devices, so it is imperative that such devices consume as little power as possible. The most common power-saving schemes are clock gating, separating power domains and voltage islands, installing retention cells, and designing efficient power management units.

    These low-power techniques add circuitry to the design and potentially impact design behavior. Verifying the functional intent of the design in the presence of the low-power constructs is critical, and low-power signoff has become a new verification requirement for the chip design and verification community.

    Just as functional verification and signoff require different techniques, such as static analysis, formal verification, simulation, and hardware prototyping, to ensure design integrity, the same is true for functional verification of low-power designs.

    This tutorial offers a comprehensive overview of various low-power design techniques as well as the corresponding recommended verification steps. You will gain a complete understanding of what it takes to achieve low-power signoff. Topics covered include:

    • Generation, optimization and maintenance of UPF throughout the flow

    • Static low-power checks at different stages of the design and verification flow, from UPF creation to gate-level design connected to power-ground (PG) pins

    • Formal connectivity checking and property verification in the presence of low-power elements in the design, including optimization of clock gating and retention to improve PPA

    • Dynamic, low-power simulation to further ensure design functionality in the presence of UPF

    • Debug of issues that arise during low-power verification
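
    As a rough illustration of one of the low-power constructs listed above, an isolation cell clamps a signal leaving a powered-down domain to a known value instead of letting unknown data propagate. The C++ sketch below models that behavior; the names and clamp-to-0 polarity are illustrative assumptions, not from any specific UPF flow:

```cpp
// Behavioral sketch of an isolation cell between power domains: while the
// source domain is powered down, the crossing signal is clamped to a known
// value. Clamp polarity is a per-signal design choice; clamp-to-0 is the
// default shown here.
bool isolate(bool source_domain_on, bool data, bool clamp_value = false) {
    return source_domain_on ? data : clamp_value;  // clamp when domain is off
}
```

Dynamic low-power simulation checks, among other things, that every domain crossing has such an isolation cell and that the clamp value matches the UPF specification.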

    You will hear from low-power experts and gain a solid understanding of the steps needed to implement and verify a low-power strategy that reduces silicon power consumption. This practical knowledge will prepare you for the trend toward ever-lower-power designs.



Thursday, March 7, 2024

  • Russ Klein – HLS design and verification; Andy Meier – Hardware-assisted verification; Erik Jessen – RTL design and verification; John Hallman – Trust and Assurance verification

    Efficiently delivering a first-time-right RISC-V-based SoC requires a flow that supports every phase of the development process -- from rigorously validating the RTL with simulation and formal methods, to full SoC testing with real-world workloads, to C++ or SystemC high-level verification of specialized accelerator IP blocks.

    And all of this must come together seamlessly so that the project team has data-driven accountability of their verification coverage and completeness at all times.

    In this tutorial, using different open-source RISC-V cores and accelerators for context, we will take you on a detailed journey covering three primary areas:

    • High-Level Synthesis (HLS) design and verification: Specialized accelerators are commonly incorporated in RISC-V SoCs in the form of new instructions, co-processors, or bus-based accelerators used to off-load compute-intensive software functions into hardware. In this segment, we will show how an HLS D&V flow enables designers to rapidly create and evaluate implementation alternatives to achieve the optimal performance, power, and area (PPA) for their design. HLS with C++ or SystemC delivers both an abstract model and a low-level VHDL or Verilog RTL model that can be used in downstream verification phases. Our HLS flow includes pre-synthesis design checks to find programming errors, verification coverage gaps, and synthesis issues early, as well as both formal and dynamic simulation to verify the correctness of the synthesized RTL against the original algorithm.
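
    To make the HLS input style concrete, here is a minimal sketch of the kind of C++ typically fed to high-level synthesis: fixed trip counts, static arrays, no dynamic memory or recursion. The 4-tap filter is a made-up example, not one from the tutorial:

```cpp
// A minimal, HLS-friendly C++ kernel: the fixed trip count lets an HLS
// tool fully unroll or pipeline the loop when exploring PPA alternatives.
constexpr int TAPS = 4;

int fir(const int coeff[TAPS], const int sample[TAPS]) {
    int acc = 0;
    for (int i = 0; i < TAPS; ++i)  // bounded loop: unrollable in hardware
        acc += coeff[i] * sample[i];
    return acc;
}
```

The same function serves as the golden algorithmic reference against which the synthesized RTL is checked, formally or in simulation.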

    • RTL-level design & verification: In this segment we will cover the elements of RTL D&V flows that RISC-V projects often call for above and beyond legacy SoCs: scalable, exhaustive verification of custom processor instructions and other novel logic; validating the root of trust and overall security of the RTL design; and verifying the correct integration of peripherals via standard interfaces (e.g., AMBA, PCIe, UCIe). We will also demonstrate how recent advances in static formal verification and dynamic simulation technologies can deliver step-function gains in total user productivity.

    • Full SoC verification: In this segment of the tutorial, we will explore the complexities of verifying and validating SoCs with real-world stimulus and real workloads. We will show how the Veloce ecosystem provides hardware, software, and systems engineers alike the tools and methodologies to not only shift-left their software development efforts, but also provide visibility into, and correlation with, power profiles, performance, and hardware debug. We will discuss how the Veloce hardware-assisted verification platform provides a congruent verification experience. Specifically, we will share how to apply a mix of virtual prototyping, hardware emulation, and enterprise prototyping technologies, enhanced by easy-to-use apps and solutions, to address challenges common to RISC-V SoCs.

    Working knowledge of Verilog, SystemVerilog, VHDL, and the principles of code and functional coverage will be assumed. 



Workshops

Monday, March 4, 2024

  • Richard Weber, Fellow, Director of Engineering, Arteris; Anupam Bakshi, CEO, Agnisys

    This workshop explains the data model underlying the IP-XACT standard. This SoC data model unifies logical and physical connectivity as well as memory maps and registers, which enables the standard to be used as a single source of truth to automate large parts of SoC front-end design and verification flows.

    The workshop will address IP-XACT concepts relevant to understanding the overall SoC data model, such as:

    • Components

    • Design and Design Configurations

    • Bus and Abstraction Definition

    • Component Memory Maps and Registers

    • Component Address Spaces and Bus Interface Bridges

    • Type Definitions
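
    The bulleted concepts above form an object hierarchy: components own bus interfaces and memory maps, and memory maps own address blocks of registers. The C++ sketch below is a drastically simplified, hypothetical rendering of that hierarchy for intuition only; the real IP-XACT schema is far richer:

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Simplified, illustrative object model inspired by IP-XACT concepts.
// Names and fields are a tiny subset chosen for readability.
struct Register     { std::string name; uint64_t offset; unsigned width; };
struct AddressBlock { std::string name; uint64_t base; std::vector<Register> regs; };
struct MemoryMap    { std::string name; std::vector<AddressBlock> blocks; };
struct BusInterface { std::string name; std::string busType; };
struct Component {
    std::string vlnv;  // vendor:library:name:version identifier
    std::vector<BusInterface> busInterfaces;
    std::vector<MemoryMap>    memoryMaps;
};

// One automation the single source of truth enables: computing a
// register's absolute address from block base plus register offset.
uint64_t reg_address(const AddressBlock& b, const Register& r) {
    return b.base + r.offset;
}
```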


  • Ping Yeung, Nvidia; Iredamola Olopade, Intel; Kranthi Pamarthi, Renesas Electronics; Chetan Choppali Sudarshan, Marvell Technology; Farhad Ahmed, Siemens EDA; Bill Gascoyne, Blue Pearl; Anupam Bakshi, Agnisys

    As complexity and the number of clock domains increase in today’s ASIC designs, we are moving towards a hierarchical verification approach. This tutorial covers proven clock domain crossing (CDC) and reset domain crossing (RDC) schemes, the verification challenges, and potential risk-mitigation strategies. We will then discuss the hierarchical CDC/RDC verification methodology, the tradeoffs faced when incorporating units from multiple sources, and the challenges of integrating multiple vendor-generated abstracted blocks into an encompassing design. To address these issues, we introduce the Accellera CDC Working Group, highlight the released CDC standard, and summarize the status of the current efforts.

    Topics:

    • CDC-RDC Basic Knowledge

    • Clock domain crossing schemes

    • Reset domain crossing schemes

    • CDC-RDC Verification

    • Hierarchical CDC Verification

    • Accellera CDC Working Group
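
    One of the classic CDC schemes covered in such tutorials is Gray-coding a multi-bit counter before it crosses clock domains, so that only one bit changes per increment and a mis-sampled value is off by at most one count. A minimal C++ sketch of the encode/decode pair:

```cpp
#include <cstdint>

// Binary-to-Gray: adjacent counter values differ in exactly one bit,
// which is what makes the crossing safe to synchronize bit-by-bit.
uint32_t bin_to_gray(uint32_t b) { return b ^ (b >> 1); }

// Gray-to-binary: cumulative XOR, done here with a logarithmic
// shift-and-XOR reduction.
uint32_t gray_to_bin(uint32_t g) {
    uint32_t b = g;
    for (uint32_t shift = 1; shift < 32; shift <<= 1)
        b ^= b >> shift;
    return b;
}
```

In RTL this pattern appears in asynchronous FIFO pointers; the software model above just demonstrates the encoding property the CDC checks rely on.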


  • Freddy Nunez, Applications Engineer, Agnisys Inc.; Neena Chandawale, Applications Engineer, Agnisys Inc.

    Embedded systems projects pose significant challenges due to tight schedules and frequent hardware and software redesigns. Creating IPs/SoCs and their device drivers, especially for diverse components like image sensors and processors with tens of thousands of registers, operating modes, and configurations, is time-consuming and error-prone. Informal communication between hardware and software teams further complicates the process, exacerbated by the need for drivers compatible with various operating systems.

    To address these challenges, an innovative solution is to automatically generate not just the IP/SoC RTL but also the device-driver code. This approach enhances communication between teams by helping them collaborate using industry standards such as PSS and SystemRDL. The generated sequences support multiple environments, including UVM and C++, reducing the effort required for individual driver development and ensuring compatibility across different environments.

    A common understanding of the programming sequences not only helps the project move faster towards first-pass silicon success but also helps reach the complete system-development milestone sooner. The Programmer’s Reference Manual is a product of this automation, enabling the firmware and software teams to deliver the system software earlier in the development process.
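
    The kind of driver code such automation produces can be sketched as field-level accessors derived from a register description. The CTRL register and its MODE field below are hypothetical examples, not from any real SystemRDL specification, and an array stands in for memory-mapped hardware:

```cpp
#include <cstdint>

// Sketch of generated field-level access code: offsets, masks, and a
// read-modify-write helper, of the kind a SystemRDL-driven generator
// emits for every register field. All values here are hypothetical.
constexpr uint32_t CTRL_OFFSET     = 0x0;
constexpr uint32_t CTRL_MODE_MASK  = 0x6;  // field occupies bits [2:1]
constexpr uint32_t CTRL_MODE_SHIFT = 1;

static uint32_t mmio[4];  // stand-in for memory-mapped hardware

void ctrl_set_mode(uint32_t mode) {
    uint32_t v = mmio[CTRL_OFFSET];                   // read
    v &= ~CTRL_MODE_MASK;                             // clear the field
    v |= (mode << CTRL_MODE_SHIFT) & CTRL_MODE_MASK;  // modify
    mmio[CTRL_OFFSET] = v;                            // write
}

uint32_t ctrl_get_mode() {
    return (mmio[CTRL_OFFSET] & CTRL_MODE_MASK) >> CTRL_MODE_SHIFT;
}
```

Generating both the UVM sequence and the C accessor from one description is what keeps hardware and software views of the register map consistent.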


  • The implementation of Functional Safety standards such as ISO 26262 poses challenges during the exchange and integration of functional safety data between different work products and activities, carried out by different teams and/or different layers of the supply chain. Automation with EDA tools is now common practice in this field, but interoperability is still hampered by the lack of a Functional Safety standard that supports the data exchange. The Accellera Functional Safety Working Group has recently completed a white paper that describes the approach taken to develop the data model and a corresponding language prototype that will enable Functional Safety data exchange. As a next step, the Working Group's focus is to develop the LRM (Language Reference Manual) for the Functional Safety standard. This session will review the content of the white paper and discuss the plans for the LRM.


  • Jean-Philippe Martin, Intel; Mike Borza, Synopsys

    This workshop will demonstrate how to identify assets in intellectual property (IP) in accordance with Accellera’s Security Annotation for Electronic Design Integration (SA-EDI) standard. This guidance is planned to be documented in the IEEE P3164 Asset Identification white paper. The SA-EDI standard relies heavily on the accurate identification of assets within an IP; any errors, whether false positives or false negatives, can render the SA-EDI collateral ineffective or inapplicable. Therefore, it is essential not only to identify assets but also to classify them correctly. However, this is not as straightforward as one might think. Contextual information about the IP’s integration, such as security requirements, use cases, and surrounding IPs, is often needed to properly identify assets. These are typically defined by the integrated circuit (IC) owner well after the IP has been developed, which is when the SA-EDI collateral needs to be produced. To address these challenges, the white paper introduces a methodical and practical approach that can be applied by IP owners with limited experience in security practices. The methodology is vetted using four example IPs, ranging from simple to complex, to highlight how an IP developer can produce accurate SA-EDI collateral for the IC owner to properly consume and apply. Don’t miss this opportunity to enhance your knowledge of security.

  • Ami Pathak, Senior Staff Field Application Engineer, Arteris; Matthew Mangan, Senior Manager, Corporate Application Engineering, Arteris

    Advances in semiconductor process nodes and FinFET transistor design have benefitted System-on-Chip (SoC) technology by allowing more IP to exist on-chip. The increased amount of on-chip IP results in more complex on-chip interconnect requirements to handle the communication between all the IP blocks. Consequently, the interconnect has become a prime source of SoC design challenges in both frontend and backend flows.

    Traditional interconnect IP offerings tend to box users into a fixed switch topology, most commonly a cascaded crossbar, forcing SoC teams to design around the interconnect IP’s limitations in ways that magnify challenges rather than solve them. In this tutorial, we will explore how Network-on-Chip (NoC) interconnect IP addresses the problem of on-chip communication differently than traditional interconnects and enables SoC teams to create a topology optimized for their SoC requirements. We will also introduce a framework for how SoC teams can approach NoC topology optimization in their projects.

    Attendees will come away with:

    • Differences between NoCs and traditional interconnects

    • Tradeoffs between different interconnect topologies

    • How to approach finding an optimal interconnect topology for a given set of performance and physical requirements



Saturday, March 7, 2026

  • Brad Budlong, Cadence Design Systems; Michael Young, Cadence Design Systems

    Since the inception of hardware emulation, emulators have supported verification of digital designs using 2-state logic. The underlying hardware used for emulation has directly modeled the logic of a design with just 0 and 1 values. This workshop will show how support for 4-state logic improves emulation of digital designs and support for real numbers improves emulation for digital mixed signal designs.

    4-state logic emulation has a range of applications. Low-power verification controlled with UPF is only partially modeled when constrained to 2-state logic. Here we’ll present how the corruption, isolation, and propagation of X values in emulation leads to greater verification coverage and simplified debugging. We’ll also look at how introducing the fourth state value, Z, into emulation improves the verification of multi-driven buses.
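
    The multi-driven-bus case can be illustrated with a tiny 4-state value type and a resolution function. This is a deliberate simplification of SystemVerilog's resolution semantics, ignoring drive strength, for intuition only:

```cpp
// Four logic states: strong 0, strong 1, unknown, and high impedance.
enum Logic4 { L0, L1, LX, LZ };

// Resolution of two drivers on one net: a high-impedance driver yields
// to the other; agreeing drivers win; any remaining conflict is X.
Logic4 resolve(Logic4 a, Logic4 b) {
    if (a == LZ) return b;   // Z yields
    if (b == LZ) return a;
    if (a == b)  return a;   // agreement
    return LX;               // conflict (0 vs 1, or anything vs X)
}
```

A 2-state emulator must collapse X and Z onto 0/1, which is exactly why bus contention and power-down corruption are only partially modeled there.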

    All large digital designs must be emulated, but those designs are typically surrounded by analog content for communication with memories, high-speed interfaces, and RF. This analog content is also configured and controlled by processors in the digital design, yet it could not previously be emulated. This workshop will show how adding dedicated floating-point hardware to an emulation processor, along with extending the emulation software to handle real numbers and the timing constraints of an analog design, extends emulation into the mixed-signal domain.



Thursday, March 7, 2024

  • Shravan Belagalmath, Vayavya Labs Pvt. Ltd.; Sandeep Pendharkar, Vayavya Labs Pvt. Ltd.; Karthick Gururaj, Vayavya Labs Pvt. Ltd.; Selvin Deva Santhosh Michael, Vayavya Labs Pvt. Ltd.

    OBJECTIVE: This presentation and workshop will cover the challenges of, and effective approaches to, generating SystemC model code for hardware IP peripherals using Large Language Models (LLMs). The workshop will include a live demo of an LLM in operation, generating a SystemC model of a peripheral device.

    INTRODUCTION: With recent developments in AI, Large Language Models are used as assistants to perform various tasks. As these models accept and generate natural-language text, they are helpful in tasks that involve natural language processing, such as text classification, document summarization, question answering, language translation, chatbots, and code generation. Of relevance here is the ongoing research into the code-generation capability of LLMs to enhance the productivity of software developers. The presenters will show how the same capability can be applied to creating a SystemC Transaction Level Model from a high-level input specification.

    This abstract is organized into the following sections:

    • Challenges: the challenges faced in using LLMs for SystemC code generation.

    • Approach: the approaches that improved code-generation results.

    • Results: an analysis of the output generated by OpenAI’s GPT, the LLM used in this project.

    • Future work: the planned features of the final product.


  • Joe Marceno (Synopsys); Sivaramlingam Palaniappan (Synopsys)

    Today's design size and complexity continue to increase, putting greater pressure on meeting the compile-time and performance goals of the prototype. Together, they are growing faster than the available compute capability, leading to more challenging implementation problems and longer, unpredictable bring-up times for prototypes. To decouple design size from compute capacity, a divide-and-conquer approach that enables parallelism is needed.

    This workshop will present enhanced capabilities in Synopsys HAPS ProtoCompiler that enable parallelism, allowing different teams to solve implementation challenges independently and concurrently. In addition, we will discuss how parallelism enables reuse of such implementations for designs involving multiple cores, which are often replicated, ultimately reducing peak compute requirements and leading to faster tool execution times. This modular design approach is a framework that addresses these needs while also providing a path to faster incremental turnaround time (TAT).


  • Moshik Rubin, Sr Product Management Group Director, Cadence; Anunay Bajaj, Sr Principal Product Engineer, Cadence

    The emergence of the Universal Chiplet Interconnect Express (UCIe) standard has revolutionized the electronics industry by enabling the integration of chiplets with diverse functionalities and technology nodes into complex electronic chips. This advancement has introduced new challenges in RTL (Register-Transfer Level) design, particularly in managing protocols like PCIe, CXL, and Streaming, as well as in implementing a novel Physical Layer and intricate Die-to-Die adapters. Consequently, the verification complexity of such designs has grown exponentially.

    This workshop delves into effective strategies for optimizing verification efforts and enhancing productivity when working with UCIe-based designs, spanning from individual blocks to system-level verification. Attendees will gain practical experience in:

    1. Seamlessly integrating various UCIe blocks with a Device Under Test (DUT).

    2. Streamlining the transition from Intellectual Property (IP) to System-on-Chip (SoC) level.

    3. Measuring and analyzing UCIe-based system performance.

    4. Debugging UCIe related scenarios using advanced tools and methods.

    Join us in this workshop to uncover solutions for tackling the challenges posed by UCIe-based designs, identifying critical bottlenecks, and visualizing the performance impact, ultimately empowering engineers to create more sophisticated electronic chips efficiently.


  • Introduction:

    Simulation-based testing has long been the go-to approach in ever-evolving verification methodologies, checking nearly everything through targeted tests. Its effectiveness is undeniable, and it continues to play a critical role in the verification landscape. However, it has its limitations: regardless of the number of tests or clever techniques used, the possibilities it can cover remain limited compared to the complete set of potential behaviors.

    Today, static verification has become an indispensable companion to simulation, formal methods, and other verification techniques, excelling in various specific applications. Its scope has expanded beyond traditional timing analysis, linting, and clock domain crossing (CDC) checks. This expansion has been made possible by the continuous advancements in the power and versatility of static tools and the growing complexity of modern designs. Some verification challenges that were previously addressed by throwing more resources, such as people, licenses, and machines, at the problem have now surpassed the capabilities of conventional methods for confident signoff.

    Summary of the content of the workshop:

    This workshop explores several emerging static signoff technologies and their applications in verification flows across both large and small design and verification organizations. These technologies are shaping how various companies approach verification coverage, offering valuable insights and efficiencies that complement existing methodologies.

    By leveraging the strengths of static verification alongside other verification techniques, engineers and organizations can enhance their verification coverage, ensuring robust and reliable designs in the face of increasing design complexity. As the field continues to evolve, staying up-to-date with the latest static signoff advancements will be crucial in maintaining a competitive edge in the semiconductor industry.

    The workshop covers new verification areas where static sign-off methodologies are evolving and being deployed to increase the efficiency and coverage of verification:

    • Asynchronous logic sign-off flows beyond CDC

    • DFT compliance checking and shift-left enablement using static sign-off

    • Architectural compliance checking using static sign-off

    • Connectivity checking using static sign-off

    • Glitch-checking methodology using static sign-off

    • Efficient functional sign-off via automatic assertion generation for RTL building blocks using static methods

    • Advanced methodology to identify X-initialization source errors and fix them to prevent the errors from propagating

    The workshop then dives deep into specific industry use cases and examples, demonstrating how the industry is adopting these new approaches in the real world.

    • Nvidia’s Asynchronous Logic Sign-Off Flows Beyond Structural CDC

    • Kinara and Alif Semi – Shift Left DFT sign-off methodology for Edge AI Processor

    • Palo Alto Networks – Advanced X-propagation methodology to identify X-initialization source errors and fix them to prevent the errors from propagating

    • Renesas – Efficient functional sign-off by automatic assertion generation for RTL building blocks

    • Real Intent with multiple customers – Connectivity, glitch, and architectural compliance checking using static sign-off

    The workshop is designed to benefit audience members and enable them to enhance the state of verification practice in their workplace.

    Intended Audience:
    RTL Designers and managers, Verification engineers and managers, SoC designers and verification engineers, Chip Architects, Clocks and Reset designers and architects


  • Adnan Hamid, CTO, Breker Verification Systems; John Sotiropoulos, Principal Engineer, Breker Verification Systems

    RISC-V processor cores, particularly complex application-class devices, require verification content that explores scenarios not encountered with other blocks, however complex those blocks may be. Nevertheless, much can be learned by examining these content requirements for many verification situations. This workshop will explore a number of techniques employed on RISC-V cores today and consider how they may be applied across a range of designs.

    For complex, multi-core, out-of-order devices, instruction compliance and micro-architecture validation are not enough. Platform-level testing of asynchronous interrupts, complex load-store behavior, and other scenarios is required. Coherency requires specialized testing, as do atomic operations, paging, privilege-level switching, and PMP security validation; the examples go on and on.

    The verification content for RISC-V multi-cores may be considered in layers. A classic RISC-V (and indeed any processor core) verification environment might start with some base tests, instruction compliance, and micro-architectural validation before employing a broad range of system integration and infrastructure content that ensures the device can efficiently drive an SoC. Performance must be evaluated alongside functional areas; data coherency, load-store hazards, and interrupt mechanisms are just a few examples of areas to be tested. Of course, if security or low power are requirements, this adds more to the verification load.

    This workshop will guide attendees through verification content designed to address these and other scenarios critical for core-level verification. A range of test techniques used on real designs today, designed to fully cover these scenarios in a concurrent fashion and wring out unpredictable corner cases, will be detailed. Using the Accellera Portable Stimulus Standard, the construction of reusable, configurable SystemVIP will be examined to show how tests can be set up for a variety of scenarios across multiple projects.

    Not working on a RISC-V core? This workshop will expose verification mechanisms that can be applied to a broad range of blocks in many applications. By seeing the techniques applied to processor verification, much can be learned about verification in general.


  • During a project, subsystem and full-chip integration plays a crucial role. Integration can be particularly challenging on large SoCs with distributed teams due to the complexity of the integration process, multi-site infrastructure issues, and the need to collaborate across multiple time zones. Often, integrators must integrate design blocks delivered by distant teams, blocks in which they have no expertise. This causes long lead times in the integration process, repeated debugging of similar issues, and a perpetual state of missed integration milestones. With IP-centric design and the use of "IP aliases," we propose a methodology that provides a streamlined, controlled, quality-based, and transparent flow, helping teams reliably meet integration milestones and more easily debug integration issues.

