Industry Leaders Panel: Did We Create the Verification Gap? - Part 2



Transcript

Part 2
Male #1: Well, all right.  Let me poke around a little more then.  Why are there so many different verification approaches being explored and adopted today?  Could those be simplified as a way of helping to shrink this alleged gap?  Help me out here, guys.
 
Male #2: I have an opinion.
 
Male #1: Okay.  That's good.
 
Male #2: I always have an opinion.  Oh, by the way, the verification gap is due to Janick and Mike.
 
{Laughter}
 
Male #1: It's important to identify the blame early on.
 
{Laughter}
 
Male #2: Realistically, we're dealing with an NP-hard problem.  There is not a single solution that will solve all classes of problems.  But it's more than that.  If you just look at functionality, you've got a certain level of complexity.  But then you stack power domains on top of that, and then you've got security domains, and you've got clock domains.  You cross all that, you know.  There's not a single solution that's going to apply there.  However, there are narrow solutions that can be applied to specific aspects of that and be very effective.  So you've got to have multiple solutions.
 
Add software on top of that and, you know, like we heard earlier, you can't do software on SoCs nowadays without emulation.  So…
 
Mike Stellfox: Right.  I think it's actually key to have all these different technologies—formal, simulation, constrained random, use-case-based verification, emulation, and so on.  The key, I think, for the companies I've seen being most successful is how you integrate those flows and how you actually leverage the right approach by the right person at the right time.  Right?  Otherwise it's the if-you-have-a-hammer, everything-looks-like-a-nail kind of thing.  You really need to optimize the flow and look at how you integrate those different technologies so that the investment that's made in terms of resources—learning those tools and techniques and so on—can be optimized toward the specific task at hand.
 
Male #1: How is that done today between different tools, or even methodologies?  You know, how do they talk to one another, or at least to someone who has the system knowledge?
 
Mike Stellfox: Well, the approach we've taken with customers is actually trying to focus more on the flow aspects, you know.  So when you're using formal, for example, it applies in these domains, and this is where it can apply the best.  And we try to integrate that around a common plan and a common way to capture metrics and track your progress against that.  So, you know, generally our approach has been to take the different technologies, apply them in a flow, and then prescribe the best way to leverage each independent technology in the flow in an optimized way.
 
Male #2: There's usually also a good rule of thumb as to when particular technologies are applicable.  You wouldn't do use-case scenarios for verifying an IP core; that's where constrained-random stimulus gives excellent coverage.  Same thing with formal.  There are some areas in the core where you will apply formal, and there are some areas in the SoC where you'll apply formal, for example, to do connectivity checks.  At the SoC level, rather than constrained random, that's where a use model comes into play.  So the level at which you're working usually dictates the methodology and the technology you're going to be using, and it's all tied together eventually by a common, final coverage.  You make sure that you haven't left any holes.
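To make the kind of narrow, SoC-level formal application he mentions concrete, here is a minimal SystemVerilog sketch of a connectivity check; the module and signal names are hypothetical, not anything from the panel.  A formal tool can prove a property like this exhaustively, while the IP core itself is still exercised with constrained-random stimulus.

    // Hypothetical SoC-level connectivity checker: proves the DMA's interrupt
    // output is actually wired through to the interrupt controller's input.
    module dma_irq_connectivity_check (
      input logic clk,
      input logic rst_n,
      input logic dma_irq_out,   // interrupt output of the DMA IP
      input logic intc_irq_in    // matching input on the interrupt controller
    );
      // If the DMA raises its interrupt, it must be seen at the interrupt
      // controller one clock later, i.e. the wiring is intact.
      a_irq_connected: assert property (
        @(posedge clk) disable iff (!rst_n)
        dma_irq_out |-> ##1 intc_irq_in
      );
    endmodule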
 
Male #3: Yes, I recall a year or so ago…
 
{Cough}
 
…I was talking to a customer.  They had basically a convolution unit on a DSP, and they were trying to build this elaborate UVM environment for it.  I'm scratching my head going, why, you know.  Again, a lot of times everybody wants a single hammer that can do all of this.  But, again, we're dealing with an NP-hard problem.  A single hammer will not work.
 
Bill Grundmann: And that's where forced standardization in the design team actually breaks down, because you're forcing a standard that's not required for everything.
 
Male #3: I think it's a lack of understanding – I think, again, it starts with planning at the beginning.  What am I trying to verify?  Not how.  And there's too much emphasis on the how – people think too much right away about how am I going to verify this versus the what.
 
Jim Caravella: Yes, I totally agree.  I see it all the time where people immediately jump into a specific tool or flow without really thinking about, what are we trying to do here?  It sounds simple, right?  But I see it a lot.
 
Mike Stellfox: Right.  I think where we've seen the most success is where you actually focus first on the plan.  You know, you were talking about getting the software folks together.  You actually get the software designers, the architects, and the verifiers together, and you find a lot of bugs in the specs and so on before you actually implement things.  And the idea is capturing a plan up front that's sort of independent of how you're actually going to do the verification: the what of verification at the IP level, what are you going to verify in the mixed signal, what are you going to verify at the subsystem level when you do the use cases.  Going through and capturing the plan, which tends to be an iterative process – you don't know it all up front necessarily – that is one of the keys I've seen to people being able to really successfully apply the technologies, versus jumping in and saying, okay, hey, UVM's the latest, hottest thing, let's start building a UVM test bench.  You know, along the lines of what Harry said, I've seen a lot of people adopting UVM and then doing directed testing.  Hey, I'm learning all this really hard, complicated class-based SystemVerilog, all this stuff, and then I'm still doing directed testing with it.  Well, if you're doing that, you don't really need to adopt it – you could spend your time on other things if you don't really need the constrained-random aspect.  So I've seen that as well, where the gap is somewhat artificially created by the focus on the techniques versus the problem at hand.
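Mike's point about UVM without constrained random can be made concrete with a small sketch; the class name, fields, and address values below are hypothetical and only illustrate the contrast.  The class-based effort pays off when fields are declared rand and constrained so the solver generates the stimulus; if every field is pinned to a fixed value, it is directed testing carried out with much heavier machinery.

    // Hypothetical UVM sequence item.  Declaring the fields "rand" and
    // constraining them is what makes the class-based investment pay off.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class bus_txn extends uvm_sequence_item;
      rand bit [31:0] addr;
      rand bit [31:0] data;
      rand bit        write;

      // Keep generated transactions inside the legal address window.
      constraint c_addr { addr inside {[32'h0000_1000 : 32'h0000_1FFF]}; }

      `uvm_object_utils(bus_txn)

      function new(string name = "bus_txn");
        super.new(name);
      endfunction
    endclass

    // Constrained-random use: the solver explores the legal space for you.
    //   bus_txn t = bus_txn::type_id::create("t");
    //   assert (t.randomize());
    //
    // Directed use of the same class: every field is pinned, so the UVM
    // machinery adds little over a plain directed test.
    //   assert (t.randomize() with { addr  == 32'h0000_1000;
    //                                data  == 32'hCAFE_F00D;
    //                                write == 1; });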
 
Jim Caravella: Yes.  I think that's a great case of artificially creating a gap and then plugging it with something you're already doing, and you just created a lot more work for yourself.