Dan Yu, Siemens EDA; Harry Foster, Siemens EDA; Eman El Mandouh, Siemens EDA; Waseem Raslan, Siemens EDA; Tom Fitzpatrick, Siemens EDA
This paper presents a comprehensive literature review on how Large Language Models (LLMs) can be applied to multiple aspects of verification, including requirements engineering, coverage closure, formal verification, debugging, functional safety, code generation and completion, and data augmentation, among others. To demonstrate this capability, experiments are carried out to automatically generate variants of existing designs and their verification code using prompts. Significant productivity and quality improvements are recorded compared to traditional manual data preparation. Despite the promising advancements offered by this new technology, we must be aware of an intrinsic limitation of LLMs: their tendency to make incorrect predictions, manifested as hallucinations. The paper cautions that raw LLM outputs should not be used directly in verification. Accordingly, three safeguarding mechanisms are recommended to ensure the quality of LLM outputs. Finally, the paper summarizes observed trends in LLM development and expresses optimism about their broader prospective applications in verification.