
Oral/Lecture

Caching Tool Run Results in Large-Scale RTL Development Projects

Caching is a widely used technique for improving efficiency in both software and hardware, including EDA tools and compute/storage infrastructure, but there is significant untapped potential in sharing tool run results within a large-scale RTL development team. Members of such a team often repeat tool runs simply because there is no efficient way to share the results of previous runs among themselves. These redundant runs waste compute resources and add unnecessary wait time for designers, which ultimately lengthens turnaround time (TAT) and raises the cost of the SoC. In this paper, we discuss the challenges of sharing tool run results across such large teams and explore several caching methods to address them. We weigh the pros and cons of these methods, with particular focus on the one we found to be the most balanced solution. We present data from a recent SoC project at Intel where this balanced method is saving thousands of compute hours as well as a significant amount of engineering effort.
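To make the idea concrete, the sketch below shows one common way such a shared cache can be organized: key each tool run by a hash of the tool version and its input files, and reuse a teammate's results on a hit. This is only an illustrative sketch of the general content-addressed caching concept, not the specific method described in the paper; the cache location, the run_tool callable, and the key construction are all assumptions made for the example.

```python
import hashlib
import shutil
from pathlib import Path

# Hypothetical shared cache location; a real deployment would use managed,
# access-controlled storage rather than a plain directory.
CACHE_ROOT = Path("/proj/shared/tool_run_cache")

def cache_key(tool_version: str, input_files: list[Path]) -> str:
    """Derive a key from the tool version and the content of all input files."""
    h = hashlib.sha256()
    h.update(tool_version.encode())
    for f in sorted(input_files):
        h.update(f.name.encode())
        h.update(f.read_bytes())
    return h.hexdigest()

def run_with_cache(tool_version, input_files, output_dir, run_tool):
    """Reuse a previously cached run if an identical run already exists."""
    entry = CACHE_ROOT / cache_key(tool_version, input_files)
    if entry.exists():
        # Cache hit: copy the shared results instead of rerunning the tool.
        shutil.copytree(entry, output_dir, dirs_exist_ok=True)
        return
    # Cache miss: run the tool, then publish the results for the team.
    run_tool(input_files, output_dir)
    shutil.copytree(output_dir, entry)
```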

Ashfaq Khan, Intel Corporation