Simulating a full-blown GPU is complex and challenging: sequential simulators are too slow, so parallel simulators are required. A GPU packs hundreds to thousands of processing elements on a single die, and faithfully simulating such an environment demands a carefully chosen approach.
Instead, if we leverage the host machine's CPU and GPU resources, we can offload much of this complexity to the physical GPU hardware, keeping only a minimal SystemC (or equivalent) model running on the host CPU.
One approach to achieving this is virtio-based GPU modeling. In this workshop, we dive deep into the technical details of modeling a GPU simulator using SystemC.