Beyond Integers and Floating Point – Designing and Verifying with Alternate Number Representations
Russell Klein, HLS Program Director, Siemens EDA
Many of the algorithms implemented in hardware have their foundations in mathematics and often have reference implementations in software programming languages. Mathematics generally uses real numbers (and sometimes imaginary numbers). Software and general-purpose computers typically use 32- or 64-bit integers and IEEE floating-point representations. But for purpose-built hardware, supporting the full range and precision of these formats is not just unnecessary, it is wasteful in terms of area, power, and performance.
Examples of these types of algorithms can be found in image processing, audio processing, data communications including 5G and 6G, encryption, machine learning, data compression, and much more.
Algorithms are implemented in hardware, rather than simply run in software on a general-purpose processor, specifically to improve their performance and power consumption. So when an algorithm moves into hardware, it is important to find an appropriate numeric representation and to understand the impact of that representation on the accuracy and precision of the algorithm, as well as its effect on the power, performance, and area (PPA) of the hardware implementation.
This workshop will cover a variety of numeric representations, including fixed-point numbers, alternative floating-point formats like Google’s “brain float” (bfloat16), and exponential representations like “posits.” It will examine rounding vs. truncation, overflow/underflow, and saturating math operations, and their effects on the calculations. We will look at how to model algorithms using these alternate formats. And we will cover how to validate and verify both algorithmic implementations and Verilog or VHDL RTL, and how this fits into the overall verification process.
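To make the rounding, truncation, and saturation trade-offs concrete, here is a minimal sketch (not taken from the workshop materials; the function name, Q-format choice, and policy parameters are illustrative assumptions) of quantizing a real value into a signed fixed-point format, comparing rounding against truncation and saturating against wrapping overflow:

```python
import math

def to_fixed(x, int_bits=4, frac_bits=4, round_mode="round", overflow="saturate"):
    """Quantize real x to a signed Q(int_bits).(frac_bits) fixed-point value.

    Returns the real value actually represented after quantization, so the
    quantization error is simply x - to_fixed(x).
    """
    scale = 1 << frac_bits
    scaled = x * scale
    if round_mode == "round":
        # Round half away from zero, a common hardware rounding policy.
        stored = int(scaled + (0.5 if scaled >= 0 else -0.5))
    else:
        # Truncate toward negative infinity, like simply dropping LSBs
        # of a two's-complement value.
        stored = math.floor(scaled)

    # Representable range of the stored two's-complement integer.
    lo = -(1 << (int_bits + frac_bits - 1))
    hi = (1 << (int_bits + frac_bits - 1)) - 1
    if overflow == "saturate":
        # Clamp to the nearest representable extreme.
        stored = max(lo, min(hi, stored))
    else:
        # Wrap around, as plain two's-complement arithmetic would.
        width = int_bits + frac_bits
        stored = ((stored - lo) % (1 << width)) + lo
    return stored / scale
```

For example, in this Q4.4 format, 1.23 rounds to 1.25 but truncates to 1.1875, and an out-of-range 10.0 saturates to 7.9375 while wrapping would produce a wildly wrong negative value. This is exactly the kind of behavioral difference that must be modeled and verified before it is committed to RTL.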