The “unspoken” pains of Code-cov analysis and some fresh thoughts
Earlier this week at SNUG India 2011, Intel presented a very interesting and pragmatic paper on just how rudimentary today's code coverage analysis tools still are. The presenter, Ishwar, did a great job in all aspects – from choosing the right number of slides to the clarity and keen focus on the problem he chose to talk about.
Let’s start with the basics – below is a trivial toggle coverage report from VCS’s URG (just as a sample; every major simulator has a similar viewer).
While the color coding, hierarchical reporting etc. are useful, the “uncovered” hole analysis is where users spend a lot of time figuring out just “what is not covered and why”. Ishwar’s team has done some out-of-the-box thinking and shown how a simple, logical way of approaching the analysis can bring quicker results than the “raw” approach that many might follow – blame it on the tools not giving better assistance, if you will. He used 3 criteria to prioritize which holes to look at:
- How “close” is the net/node/coverage object to the primary inputs?
- How recently did the design/block change?
- The fan-in/fan-out count
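The three criteria above are easy to sketch as a scoring heuristic. The snippet below is purely illustrative – the `Hole` record, the `priority` weighting, and the signal names are our own assumptions, not what Ishwar's team implemented – but it shows the shape of the idea: rank holes so that nodes close to primary inputs, in recently changed RTL, with high connectivity, get analyzed first.

```python
from dataclasses import dataclass

@dataclass
class Hole:
    name: str               # hierarchical net/node name (hypothetical examples)
    input_distance: int     # logic levels away from a primary input
    days_since_change: int  # how recently the enclosing RTL block changed
    fan_count: int          # fan-in + fan-out of the node

def priority(hole: Hole) -> float:
    # Illustrative weighting only: holes closer to a primary input, in
    # recently changed code, and with higher connectivity score higher.
    closeness = 1.0 / (1 + hole.input_distance)
    recency = 1.0 / (1 + hole.days_since_change)
    return closeness + recency + 0.1 * hole.fan_count

holes = [
    Hole("top.u_alu.carry", input_distance=8, days_since_change=90, fan_count=2),
    Hole("top.mode_sel", input_distance=0, days_since_change=3, fan_count=12),
]
for h in sorted(holes, key=priority, reverse=True):
    print(h.name)  # prints top.mode_sel first: it is an input-side, fresh, well-connected hole
```

The exact weights don't matter much; the point is that even a crude combined score beats walking the uncovered list top-to-bottom.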
There was a great question from the audience on #1 above – why not focus on the “outputs” rather than the “inputs”? After all, we are verifying the design behavior, aren’t we? A great question indeed, and a matching answer from Ishwar:
Since we as “stimulus writers” have better control over the inputs, it is easier to “fix” the hole, thereby improving the coverage number faster. After all, if we didn’t even toggle the input pins fully, how can we expect the internal nodes, all the way up to the output nodes, to toggle?
Now add our TeamCVC’s own twist to it:
- Identifying a prioritized set of goals/candidates to look at is a critical next step in getting more value out of code coverage. Clearly the industry would benefit from such simple steps, especially for those teams for whom code coverage is a MUST-have sign-off criterion (most ASIC projects do demand that).
- While coverage analysis in general is a key process in DV closure, it is important to ensure we “check” what we got “covered” too. Otherwise we run into the infamous phenomenon of “We got it covered, but didn’t get it checked”. This is not always easy, especially at the code coverage level, but it is at least worth keeping in mind while doing the hole analysis. Because that’s when you realize: “OK, now that you’ve pointed out a scenario that wasn’t covered, let me explain what would make it covered, what the expected behavior of the DUT is in such cases, and why it is important.”
- A novel, emerging approach to the above problem is to “begin with the end in mind”. Look at Breker’s Trek (www.brekersystems.com) – the whole paradigm of “scenario models” and “model-based test synthesis” is to start from the expected DUT output and work backwards to what stimulus is needed to get there, and what the expected behavior shall be.
- Zocalo's Zazz Bird-dog is a tool much like what Ishwar presented: it applies deep analysis/heuristics to promote candidate nodes for assertions to be written on them. Technically speaking, one could use the same information for code coverage analysis as well. This has NOT been done/endorsed yet by the developers of the tool themselves, so it is an idea from the TeamCVC end, based on a purely technical perspective.
- NextOp (www.nextopsoftware.com) promotes a technology called "Assertion Synthesis" that infers "properties" from simulation and presents them to users/RTL designers to see if they care about these behaviors (each could be classified as an assertion or a cover hole by the designer). Arguably, there is more work involved here for the designer, but you end up with higher-quality RTL.
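To make the "infer properties from simulation" idea concrete: the toy below mines implication candidates from simulated traces, in the general spirit of invariant mining. To be clear, this is NOT NextOp's actual algorithm – the trace format, signal names, and `infer_implications` helper are all our own illustrative assumptions – it only shows the kind of candidate a designer would then accept as an assertion or reject.

```python
from itertools import product

# Toy per-cycle samples of two control signals from simulation (made up)
trace = [
    {"req": 1, "gnt": 1},
    {"req": 0, "gnt": 0},
    {"req": 1, "gnt": 1},
    {"req": 0, "gnt": 1},
]

def infer_implications(trace):
    """Propose 'a == 1 implies b == 1' candidates that held on every
    sampled cycle; a designer then reviews each as assertion vs. noise."""
    signals = list(trace[0].keys())
    candidates = []
    for a, b in product(signals, repeat=2):
        if a != b and all(s[b] == 1 for s in trace if s[a] == 1):
            candidates.append(f"{a} |-> {b}")
    return candidates

print(infer_implications(trace))  # prints ['req |-> gnt'] for these samples
```

The reverse candidate `gnt |-> req` is correctly dropped because the last cycle violates it – which is exactly the kind of filtering that makes the designer-review step tractable.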
Looking forward, one would hope for more such papers from the real “trenches” – from real end users such as Ishwar – to highlight the key challenges and guide the EDA developers toward solving them. One topic that keeps bugging us (TeamCVC) together with our customers is the ability to do incremental “exclusion” on coverage holes – say, when a few more comments are added to the source code, the old, technically/logically valid exclusion file becomes unusable/non-reusable. Any simulator expert care to comment?
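One direction a tool could take – and this is purely a TeamCVC thought experiment, not a feature of any shipping simulator – is to anchor each exclusion to the code content itself rather than to a line number. The `anchor` helper and file snippets below are hypothetical, but they show why comment-only edits would then stop invalidating the exclusion file:

```python
import hashlib

def anchor(line: str) -> str:
    """Key an exclusion to the code itself, not its line number:
    strip trailing '//' comments and surrounding whitespace, then hash."""
    code = line.split("//")[0].strip()
    return hashlib.sha1(code.encode()).hexdigest()[:12]

old_src = ["always @(posedge clk)", "  q <= d;"]
new_src = [
    "// registered output",        # new comment shifts every line number
    "always @(posedge clk)  // flop",
    "  q <= d;",
]

# Exclusion recorded against the old file (the 'q <= d;' line)
excl = {anchor(old_src[1])}

# Re-resolve against the edited file: comment-only edits don't break it
still_excluded = [ln for ln in new_src if anchor(ln) in excl]
print(still_excluded)  # the 'q <= d;' line is still matched
```

Real RTL would of course need a smarter normalizer (block comments, duplicate lines, real edits to the code), but content-based anchoring is the gist of what an incremental exclusion flow could look like.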