Running a parallel simulation may be as easy as flipping a switch with the progressive and maturing solutions available today, but do people really take full advantage of the technology? It is true that in some scenarios the overhead of communication and synchronization needed for parallel simulation can negate any substantial performance gains. However, there are scenarios where deploying parallel simulation technology can provide tremendous benefit. A long-running simulation that exercises large blocks of a design concurrently and independently is one good example. Read complete article from Mentor Graphics: The Need for Speed: Understanding Design Factors that Make Multi-core Parallel Simulations Efficient
Designers need to be aware of the factors that can inhibit the advantages of parallel simulation even in these best-case scenarios, the main factor being inflexibility in the way designs are modeled today. This article focuses on these factors and aims to educate designers on the principles and practices that maximize the advantage of simulation with parallel computing. The discussion also extends to the three fundamental features of parallel simulation, viz., load balancing, concurrency, and communication. Designers need to understand how their designs run in simulation with respect to these factors to get the maximum out of parallel simulations....
The reader of this article should be interested in predicting or monitoring the behavior of their hardware. This article reviews phase-level monitoring, transaction-level monitoring, general monitoring, and in-order and out-of-order transaction-level monitors. A protocol-specific AXI monitor written at the transaction level of abstraction will be demonstrated. Under certain AXI usages, problems arise: for example, partially written data may be read by an overlapping READ. This kind of behavior cannot be modeled by a "complete transaction" style of monitor; it must be modeled by a phase-level monitor. All of these monitoring and scoreboard discussions can be widely applied to many protocols and many monitoring situations. Read complete article from Mentor Graphics: Monitors, Monitors Everywhere – Who Is Monitoring the Monitors?
The task of a monitor is to observe activity on a set of DUT pins. This could be as simple as looking at READ/WRITE pins or as complex as a complete protocol bus, such as AXI or PCIe. In a very simple case, a monitor watches a set of pins and generates an event every time a signal value changes. The event can trigger a scoreboard or coverage collector. Such a monitor is typically very slow and not very useful, as it generates a lot of irrelevant data.....
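As a rough illustration of the idea (not the article's own code), here is a minimal sketch of such a pin-level monitor in SystemVerilog/UVM. The interface, transaction, and signal names (simple_if, simple_txn, rd, wr) are invented for this example; the point is simply that the monitor samples pins and publishes a transaction to an analysis port for scoreboards or coverage collectors to consume.

    `include "uvm_macros.svh"
    import uvm_pkg::*;

    // Hypothetical transaction and interface, invented for illustration.
    class simple_txn extends uvm_sequence_item;
      bit is_read;
      `uvm_object_utils(simple_txn)
      function new(string name = "simple_txn");
        super.new(name);
      endfunction
    endclass

    interface simple_if(input logic clk);
      logic rd, wr;
    endinterface

    class simple_monitor extends uvm_monitor;
      `uvm_component_utils(simple_monitor)
      virtual simple_if vif;                  // handle to the DUT pins
      uvm_analysis_port #(simple_txn) ap;     // feeds scoreboard/coverage

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        ap = new("ap", this);
        if (!uvm_config_db#(virtual simple_if)::get(this, "", "vif", vif))
          `uvm_fatal("NOVIF", "virtual interface not set for monitor")
      endfunction

      task run_phase(uvm_phase phase);
        forever begin
          @(posedge vif.clk);
          if (vif.rd || vif.wr) begin         // publish only relevant activity
            simple_txn t = simple_txn::type_id::create("t");
            t.is_read = vif.rd;
            ap.write(t);                      // wakes up any subscribers
          end
        end
      endtask
    endclass

Filtering on rd/wr is what distinguishes this from the naive "event on every signal change" monitor the paragraph warns about.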
Modern object-oriented testbenches using SystemVerilog and OVM/UVM have been using SystemVerilog interface constructs in the testbench and virtual interfaces in the class-based verification structure to connect the two worlds of static modules and dynamic classes. This has certain limitations, like the use of parameterized interfaces, which are overcome by using Bus Functional Models. BFMs are now increasingly adopted in UVM testbenches, but this causes other problems, particularly for complex BFMs: they cannot be configured from the test environment, thus significantly reducing code reuse. Read complete article from Mentor Graphics: Flexible UVM Components: Configuring Bus Functional Models
This article shows a way to write BFMs such that they can be configured like any other UVM component using uvm_config_db. This allows a uniform configuration approach and eases reuse. All code examples use UVM, but work equally well with the set_config_*() functions in OVM......
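The article itself should be consulted for the full approach; as a hedged approximation of the general idea, the sketch below passes a BFM handle (here an interface with a configuration task; names like axi_bfm and max_outstanding are invented) through uvm_config_db so the class-based environment can retrieve and configure it like any other resource.

    `include "uvm_macros.svh"
    import uvm_pkg::*;

    // Hypothetical BFM written as an interface with a configuration task.
    interface axi_bfm(input logic clk);
      int unsigned max_outstanding;          // example knob, invented name
      task automatic configure(int unsigned n);
        max_outstanding = n;
      endtask
    endinterface

    class bfm_agent extends uvm_agent;
      `uvm_component_utils(bfm_agent)
      virtual axi_bfm bfm;

      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction

      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        // Retrieve the BFM handle like any other UVM resource...
        if (!uvm_config_db#(virtual axi_bfm)::get(this, "", "bfm", bfm))
          `uvm_fatal("NOBFM", "BFM handle not set")
      endfunction

      task run_phase(uvm_phase phase);
        bfm.configure(4);                    // ...and configure it from the environment
      endtask
    endclass

    // In the top-level module, after instantiating the BFM:
    //   uvm_config_db#(virtual axi_bfm)::set(null, "uvm_test_top.*", "bfm", bfm_inst);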
The increase of SoC complexity with more cores, IPs, and other subsystems has led SoC architects to demand more from the main interconnect or network-on-chip (NoC), which is thus becoming a key component of the system. Power management, multiple clocks, protocol conversions, security management, virtual address spaces, and cache coherency are among the features that must be managed by the main interconnect and that demand proper verification. Read complete article from Mentor Graphics: NoC Generic Scoreboard VIP
In addition, IP reuse and NoC generation solutions have enabled the conception of new SoC architectures within months or even weeks. Simple point-to-point scoreboard methodology is taught in most good verification methodology books and tutorials. However, building a generic verification solution for an SoC interconnect that can quickly adapt to any bus protocol and SoC architecture, and can deal with advanced SoC features, requires much more than point-to-point transaction matching.....
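For reference, the point-to-point baseline the article starts from looks roughly like the following in-order scoreboard sketch (transaction class, field names, and messages are invented for illustration); the article's contribution is everything a real NoC needs beyond this.

    `include "uvm_macros.svh"
    import uvm_pkg::*;

    // Invented transaction type for the sketch.
    class noc_txn extends uvm_sequence_item;
      bit [31:0] addr, data;
      `uvm_object_utils(noc_txn)
      function new(string name = "noc_txn");
        super.new(name);
      endfunction
    endclass

    `uvm_analysis_imp_decl(_exp)
    `uvm_analysis_imp_decl(_act)

    class p2p_scoreboard extends uvm_scoreboard;
      `uvm_component_utils(p2p_scoreboard)
      uvm_analysis_imp_exp #(noc_txn, p2p_scoreboard) exp_export;
      uvm_analysis_imp_act #(noc_txn, p2p_scoreboard) act_export;
      noc_txn exp_q[$];                      // in-order expected queue

      function new(string name, uvm_component parent);
        super.new(name, parent);
        exp_export = new("exp_export", this);
        act_export = new("act_export", this);
      endfunction

      function void write_exp(noc_txn t);
        exp_q.push_back(t);                  // record what should come out
      endfunction

      function void write_act(noc_txn t);
        noc_txn e;
        if (exp_q.size() == 0)
          `uvm_error("SB", "unexpected transaction observed")
        else begin
          e = exp_q.pop_front();
          if (e.addr != t.addr || e.data != t.data)
            `uvm_error("SB", "expected/observed transaction mismatch")
        end
      endfunction
    endclass

This simple in-order matching breaks down as soon as the interconnect reorders, splits, or converts transactions, which is exactly where a generic NoC scoreboard has to do more.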
Until recently, the semiconductor industry religiously followed Moore's Law by doubling the number of transistors on a given die approximately every two years. This predictable growth allowed ecosystem partners to plan for and deal with rising demands on tools, flows, and methodologies. Then came the mobile revolution, which opened up new markets and further shifted the industry's focus to consumers. The diversified nature of this market posed many, often conflicting, development challenges: How to speed time to market while building products rich in functionality? How to boost performance while keeping both power consumption and cost at modest levels? Wading through these questions contributed to a multifold increase in verification complexity.... Read complete article from Mentor Graphics: QVM: Enabling Organized, Predictable, and Faster Verification Closure
The Universal Verification Methodology (UVM) is becoming the dominant approach for the verification of large digital designs. However, new users often express concern about the effort required to generate a complete and useful UVM testbench. The practical experience collected in numerous OVM and UVM projects over the last few years shows a different picture. The UVM is a very suitable methodology for any kind of design and implementation, i.e., ASIC and FPGA, due to the availability of the UVM library and the well-defined testbench structure. This allows the automation of essential steps in employing the UVM. Read complete article from Mentor Graphics: Making it Easy to Deploy the UVM
This article describes a UVM approach that reduces testbench implementation effort, guarantees early success, and streamlines the processing of test results. Depending on the number of functional interfaces and the design complexity, six weeks or more of implementation effort can be saved. A runnable UVM testbench is handed over to the verification team at the very beginning of the project, and the verification engineers have to implement only the corresponding drivers, monitors, and functional coverage models. Later on, the needed scoreboards have to be architected and implemented.....
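To make "runnable from day one" concrete, a generated skeleton in the spirit the article describes might be no more than the following (all names are placeholders, not the article's generated code); the empty slots are exactly where the drivers, monitors, and coverage models get filled in.

    `include "uvm_macros.svh"
    import uvm_pkg::*;

    // Placeholder skeleton: compiles and runs before any protocol code exists.
    class my_env extends uvm_env;
      `uvm_component_utils(my_env)
      // One agent per functional interface would be created here; each agent's
      // driver, monitor, and coverage model are the pieces the team fills in.
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
    endclass

    class base_test extends uvm_test;
      `uvm_component_utils(base_test)
      my_env env;
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        env = my_env::type_id::create("env", this);
      endfunction
    endclass

    module tb_top;
      initial run_test("base_test");         // runnable from the start
    endmodule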
Unknown signal values in simulation are represented as X-state logic levels, while the same X-states are interpreted as don't-care values by synthesis. This can result in the hazardous situation where silicon behaves differently than what was observed in simulation. Although the general awareness of X-state issues among designers is good, gotchas remain a risk that traditional verification flows are not well equipped to guard against. The unknown simulation semantics of X have two oft-discussed pitfalls: X-pessimism and X-optimism. Read complete article from Mentor Graphics: Confidence in the Face of the Unknown: X-state Verification
X-optimism is most worrisome as it can mask true design behavior by blocking the propagation of X-states and instead propagating a deterministic value through the design in simulation, when in reality various possible values will be seen in the hardware. If the unexplored values cause the design to malfunction, then X-optimism has masked a design bug that will only be discovered in silicon.......
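The standard textbook illustration of X-optimism (not taken from the article) is an if/else whose condition goes to X: simulation treats the unknown condition as false and deterministically takes the else branch, while real hardware could resolve either way.

    // Minimal X-optimism demo: `sel` is X, yet simulation silently picks
    // the else branch, so the X never propagates to `y`.
    module x_optimism_demo;
      logic sel, a, b, y;

      always_comb begin
        if (sel) y = a;   // not taken when sel is X...
        else     y = b;   // ...simulation deterministically lands here
      end

      initial begin
        a = 1; b = 0; sel = 1'bx;
        #1 $display("sel=%b y=%b (the X was masked)", sel, y);
      end
    endmodule

Here simulation always reports y=0, even though silicon with sel resolving to 1 would produce y=1; if that value causes a malfunction, the bug is invisible until silicon.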
A high-speed serial interconnect bus fabric is the SoC backbone, managing dataflow and keeping up with the dynamic bandwidth requirements of high-speed applications. Verification of high-speed interconnect IPs presents critical challenges, not only in terms of complying with standards but also in ensuring that the design is robust and flexible enough to handle and manage a large amount of time-critical data transfers. Acquiring such expertise requires years of verification experience. In this article, silicon IP and platform-enabled solution provider Mobiveil shares its story of verifying high-speed bus protocol standards like PCI Express and Serial RapidIO, including what considerations are required when verifying high-speed designs. In addition, Mobiveil highlights the benefits of using the Mentor Graphics Questa Verification Platform, including Questa Advanced Simulator, Questa CoverCheck, and Questa Clock-Domain Crossing (CDC) Verification, which together facilitate smart functional verification, debug, and reporting of high-speed designs.... Read complete article from Mentor Graphics: Verifying High Speed Peripheral IPs
With SoCs now supporting multi-core processors, complex ASICs, and combinations that include systems on a board, SoC implementations have become an ever-growing challenge for software development. Software development has to be supported not only by the inclusion of an RTOS; many SoC providers now also have to leverage the bare-metal concept to meet the demands of today's SoCs. However, there still exists a huge gap in the software development arena when it comes to verifying not only the software/hardware interactions but also the software itself in a hardware context. This has become almost a necessity in today's security-based systems marketplace.... Read complete article from Mentor Graphics: Non-invasive Software Verification using Vista Virtual Platforms
Today's complex designs increasingly include at least one, and often more, embedded processors. Given software's increasing role in the overall design functionality, it has become important to leverage the embedded processors in verifying hardware/software interactions during system-level verification. Comprehensively verifying low-level hardware/software interactions early in the verification process helps to uncover bugs that otherwise would be found during operating system or application bring-up – potentially in the lab. Characterizing, debugging, and correcting this type of bug is easier, faster, and thus less expensive early in the verification cycle.... Read complete article from Mentor Graphics: Power Up Hardware/Software Verification Productivity
Building a theater set is not unlike what we do as verification engineers. It involves modeling the "real world," often at a higher level of abstraction, and it has hard deadlines. "The show must go on," after all. Productivity is also key because all of us building the set are volunteers. We reuse set pieces whenever we can, whether it's something we built for a past production or something we borrowed from a neighboring group. And we often have to deal with shifting requirements when the director changes his mind about something during rehearsals. Oh, and it also involves tools – lots of tools. However, there is one important way in which set construction is different from verification. Those of us building theater sets know our work only has to stay up for two weeks and that no one in the audience is going to see it from closer than thirty feet away. In contrast, all engineers know the chips they verify must work a lot longer under much closer scrutiny..... Read complete article from Mentor Graphics: An Up-sized DAC Issue Takes the Stage
When beginning a new design, it's common to evaluate how to build a verification infrastructure in the shortest amount of time. Of course, it's never just about quick deployment; verification also has to be complete enough to improve confidence in the design. Rapid bring-up and improving the quality of your design are excellent goals. However, you should not forget that your environment should be efficient to use during the verification process. This is where you will spend most of your time, slugging it out day after day. Arguably, debugging design bugs is one of the most time-consuming tasks of any project. Transaction-Level Modeling (TLM) will change the way you think about debug productivity, especially if you have recently experienced the long and difficult task of deciphering PCIe's training sequences, data transfers, and completion codes at the pin level...... Read complete article from Mentor Graphics: Maximum Productivity with Verification IP
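To make the debug-productivity point tangible, here is a toy sketch (fields and names invented, not the Mentor VIP's actual classes) of the kind of transaction object a TLM-based environment logs: one line of convert2string() output summarizes activity that would span hundreds of pin-level toggles in a waveform.

    `include "uvm_macros.svh"
    import uvm_pkg::*;

    // Toy PCIe-like transaction for logging at the transaction level.
    class pcie_txn extends uvm_sequence_item;
      bit [63:0] addr;
      bit [9:0]  len;
      string     kind;                       // e.g. "MemRd", "MemWr", "Cpl"
      `uvm_object_utils(pcie_txn)
      function new(string name = "pcie_txn");
        super.new(name);
      endfunction
      // One readable line instead of hundreds of pin toggles.
      function string convert2string();
        return $sformatf("%s addr=%h len=%0d", kind, addr, len);
      endfunction
    endclass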