Voltus – Massive Parallelism Speeds Power Integrity Analysis and Signoff Closure


Performance, capacity, accuracy. These are the three criteria that IC design teams want most in a power analysis solution. By using a massively distributed, parallel compute engine along with a hierarchical capability, the Cadence Voltus IC Power Integrity Solution - introduced Nov. 12, 2013 - promises breakthroughs in all three areas.

When running on 16 or 32 CPUs, the Voltus solution has shown a 10X performance gain over existing power integrity analysis solutions (which generally can't scale well beyond 4 CPUs anyway). For the largest ICs, hierarchy gives Voltus another boost, allowing the analysis of designs up to a billion instances. Finally, a "SPICE-level matrix" solver promises full SPICE accuracy on the power grid network.

Voltus is tightly integrated with the Cadence Tempus Timing Signoff Solution, a static timing analysis and optimization tool that was announced in May 2013. Voltus and Tempus work together to provide a unified electrical signoff solution that reports the impact of power integrity on timing.

So what are the challenges in power integrity, and how does Voltus solve them? Here's a more detailed look at the industry's newest power integrity solution.

Why Power Analysis is Tough

Power analysis and signoff are not getting any easier. The biggest challenge, according to Jerry Zhao, director of product marketing for power signoff at Cadence, is the increase in design size. Designers are putting more and more cores and intellectual property (IP) blocks onto system-on-chip (SoC) designs. A second challenge, he observed, is accuracy - mobile devices in particular have very low power, perhaps below 1V, so the noise margin on the power grid is very small.

Yet another challenge is the pervasive use of low-power design techniques, such as power domains and power gating. While earlier designs used two or three power domains, Zhao noted, some designs today use dozens of power domains.

Also, the list of "care abouts" in power analysis is growing. Just looking at static (leakage) or dynamic power is not enough. Analysis tools must also look at the in-rush current that occurs when a power domain is turned back on, and consider how long it takes for current to stabilize. IR drop (voltage drop) affects timing and is an increasingly important part of the analysis. Electromigration (EM) is becoming more important at advanced nodes. Finally, an early power rail analysis can pinpoint potential problems before placement and routing.

Analysis alone is not enough - design teams must also fix any problems found during analysis before signing off. To enable such fixes, power integrity analysis tools should be closely integrated with IC implementation systems as well as IC packaging and PC board tools. Power analysis and optimization are not just chip problems, they are system problems.

How Voltus Can Help

Voltus supports both vector-driven and vector-less static or dynamic power calculation, as well as static and dynamic power grid simulation for IR drop and EM. Zhao said that static analysis gives a good measurement of the average behavior of a power grid, whereas dynamic analysis is good for finding transients, such as a sudden spike in current.

Voltus will replace the existing Encounter Power System (EPS) tool. While the parallel part of the EPS engine has been completely rewritten, current EPS users will see the same user interface and use the same scripting they used before. EPS has some parallel capability but, like its commercial competitors, does not scale well past four CPUs.

Voltus, which supports both distributed processing and multi-threading, scaled well to 32 CPUs in early customer engagements and "we believe it can scale up to any number of CPUs," Zhao said. Even though Voltus uses massive parallel computation, its specialized solver still performs a full SPICE matrix solution across the entire circuit.

The new offering provides many types of power analysis. For starters, it does a power distribution analysis across the chip, including leakage power and switching power. Once it has the current information, Voltus can do an IR drop analysis on the power grid. It also provides an EM analysis on the power grid, and analyzes signal EM on the signal wires.

A unique Voltus feature, according to Zhao, is early power rail analysis. With this feature users can start their power grid analysis at the floorplanning stage, before placement and routing. While not as accurate as post-route signoff analysis, this early power estimation can help designers avoid multiple iterations due to such problems as not having enough power switches. The rail analysis can also be done during the placement and routing stage with partial layouts.

An Integrated Analysis

While Voltus can work as a standalone product, it provides even more value when coupled with other Cadence tools. Voltus works with the Encounter Digital Implementation System to help users fix the problems that power integrity analysis finds. For example, if in-rush current is too strong, Voltus can recommend larger power switches and send the layout back to Encounter to fix the problem. Another common optimization would be the addition of decoupling capacitors.

Voltus links with the transistor-level Virtuoso Power System to enable the analysis of custom/analog IP in a mixed-signal SoC design. Another important interface is with the Palladium XP platform, which provides accelerated dynamic power analysis and can quickly generate stimulus that Voltus can use.

Voltus is also closely linked with the Cadence Allegro Sigrity power analysis tools, which bring in the package and the board. The Sigrity tools can generate a package model using SPICE or s-parameters, and hook it up with the chip for Voltus analysis. "That's the most accurate way to analyze chip-level power signoff," Zhao said. "Without all the pads tied to the package model, it is not accurate."

One capability that may be very important in the future is support for 2.5D and 3D IC thermal modeling. Voltus can create a power map file for the dies in a 3D stack. The Sigrity PowerDC tool will read this map to do a thermal analysis.

The most important linkage, however, may be between Voltus and Tempus. Taken together, these two products can offer a unified electrical signoff solution. That's important because power integrity directly affects timing, which is sensitive to power supply changes. As Zhao explained, "with voltage drop, delays are going to increase and paths may fail. We are able to analyze all the voltage drops at the window in which your static analysis is going to run." The alternative to this kind of analysis, he noted, is using excessively pessimistic guardbands.

You can learn more about Tempus and Voltus at the Cadence Signoff Summit Nov. 21 at Cadence San Jose headquarters. To read a feature story, click here.

Richard Goering

Related Blog Post

Tempus - Parallelized Computation Provides a Breakthrough in Static Timing Analysis


RTL Compiler Beginner’s Guides Available on Cadence Online Support

With shrinking design nodes, a significant portion of the delay is contributed by the wires rather than the cells. Traditional synthesis tools use fan-out-based wire-load models to provide wire delay information, which has led to significant differences in quality of results (QoR) between the "synthesis" and "implementation" tools. RTL Compiler Physical (RCP) as a tool allows the user to integrate the "physical" information much earlier in the flow, and this provides...

High-Level Synthesis—What Expertise Is Needed for Micro-Architecture Tradeoffs?

My most recent blog post mentioned how utilizing new algorithms together with high-level synthesis can continue to drive innovation in hardware design by balancing power consumption with performance improvements. A great example of this is what Fujitsu Semiconductor was able to accomplish —they took advantage of the high abstraction level of SystemC to explore different micro-architecture tradeoffs. They found a micro-architecture that delivered 35% faster throughput along with a 51% power...

Semico Panel on Semiconductor IP Ecosystem: Reducing Cost and Risk


Despite years of progress in both business models and technology, semiconductor intellectual property (IP) is still risky, costly, and difficult to integrate. Can the IP ecosystem relieve the pain while providing more complete hardware/software solutions? Four industry experts shared their opinions at a lively panel discussion at the Semico Impact conference Nov. 6, 2013.

The panel was titled "IP Ecosystem Solutions for Complex Systems." The moderator was Mahesh Tirupattur, executive vice president at Analog Bits. Panelists were as follows, shown left to right in the photo below:

  • Chris Rowen, fellow, Cadence (and founder and former CEO of Tensilica)
  • Jason Polychronopoulos, product manager for verification IP, Mentor Graphics
  • Warren Savage, president and CEO, IPextreme
  • Dan Kochpatcharin, deputy director of IP portfolio management, TSMC

"The premise of this panel is to talk about the high cost and risk of integrating IP," said Tirupattur in his introduction to the panel. "There is a massive chance of failure if you choose the wrong IP, the wrong supplier, the wrong fab, or the wrong process. One missing link breaks the whole chain." While every IP provider claims that their IP is pre-verified, "at the end of the day, integration is still a nightmare," Tirupattur said.

With the tone set, the panelists responded with gusto.

Opening Statements

Savage noted that IPextreme helps semiconductor companies bring their internal IP into the semiconductor supply chain. Five years ago the company started selling the Freescale ColdFire processor core, and eventually IPextreme decided to sell the whole ColdFire platform (see recent announcement here). By so doing, "we enabled the entire software ecosystem," Savage said. "The IP business is much more than selling cores - it's about selling complete solutions, and tying together the semiconductor ecosystem with the software tool providers and EDA companies."

Polychronopoulos noted that verification IP (VIP) lets companies reuse their verification infrastructure environments. This includes a test plan, test sequences, and coverage, and users can add design-specific versions of these components. Support for the Universal Verification Methodology (UVM) is key, as well as portability across engines, debugging, and silicon-proven VIP.

There's a paradox in talking about ecosystems, Rowen said. On the one hand, if an ecosystem foundation is stable for a long period of time, more and more ecosystem components will grow up around it. On the other hand, design teams want to differentiate. "You want to get all the benefits of an ecosystem, in terms of skills and tools and libraries," Rowen said. "But you don't want to just use the same foundation as everyone else."

"If Tensilica should be known for one thing," he added, "it is solving this paradox of ecosystems. You can have dramatic innovation and the creation of things never seen before, and at the same time you can leverage an effective ecosystem that has all the things you want out of an ecosystem."

At TSMC, Kochpatcharin tracks incoming tapeouts and looks at how customers are using IP. He noted that IP reuse is growing rapidly. Out of 3,500 hard IP blocks that TSMC has cataloged, 2,500 were used more than once. "That's a lot of reuse, but what are the headaches?" he said. "The big challenge is verification and the quality of the IP. If you use this IP, how do you know it will work?"

Kochpatcharin discussed a TSMC IP quality program that is "kind of like ISO 9000." It includes a physical review, design rule checking, IP validation, and silicon verification. An IP validation lab performs audits of IP customer silicon. "In summary," he said, "IP quality is number one."

Questions and Answers

Q: Where do you see the role of customers, IP companies, EDA vendors, and foundries looking three years forward?

Kochpatcharin: There will be a lot more collaboration. We need feedback from IP vendors very early, to make sure that the power and performance of chips is what they are looking for.

Savage: We're getting more involved with our customers' end customers in the specification of systems and performance profiles. Recently we've become more involved in quality aspects and safety certification, so we're getting involved with the end customer directly.

Rowen: There are three adjacent associations for IP - process know-how, automation know-how, and application know-how. It is natural for EDA and IP to grow closer together, because the EDA companies have a good bit of process know-how. You hope EDA has the automation know-how. The tricky one is the application know-how - it's increasingly valuable.

Q: As people move to more intelligent, programmable subsystems, how do ecosystems need to evolve?

Savage: What we found in subsystem IP is that people want to reuse the OSes and all the [software] things used on chips.

Rowen: Subsystems are a lively topic, first because everyone defines "subsystem" differently. Many subsystems are good, but only some make business sense when it comes to IP licensing. What a lot of people want in a subsystem is the integration of hardware and software. If you don't have the capability to reuse that particular combination of hardware and software, subsystems can look like a services business.

Polychronopoulos: When a customer is reusing a subsystem, they don't necessarily want to get involved in the verification, but they somehow have to manage the complexity of this device. There can be tens of interfaces around it. We're developing tools that will allow them to connect to multiple interfaces.

Q: IP reuse has a technical dimension and a business model dimension. What is preventing a higher level of reuse?

Savage: I think technology advances have far surpassed business-level innovation. We're now licensing to system houses, and they may have never done an IP license before. They don't understand the business terms or the concept of royalties. A salesperson who worked for me said that IP contracts are like snowflakes - no two are identical.

Tirupattur: Mixed-signal IP reuse is almost like an organ transplant. It just does not work perfectly. The specs look good but I want something different.

Chris Rowen's comment about the 2013 Cadence acquisition of Tensilica:

"The acquisition of Tensilica by Cadence is really a watershed, certainly from an IP business model perspective. It also takes the notion of automation to a new level. Traditional CAD companies manipulated gates. But now, increasingly, customers are manipulating processors and subsystems. A new generation of technology is required to do that. So now, Tensilica goes to the big stage."

Richard Goering

Related Blog Post

Semiconductor Industry Headed for Strong 2014: Forecaster

 

Semiconductor Companies’ Big Squeeze

SAN JOSE, Calif. – Call it the rumblings of re-verticalization in the electronics industry. Call it the Semiconductor Sandwich. Call it whatever you like, but semiconductor companies are experiencing a shakeout that’s altering the relationships between themselves, intellectual property (IP) providers, and system OEMs in astonishing ways.

Cadence Fellow Chris Rowen and I have talked about this a few times this year. Last week, Kurt Shuler, VP of marketing at Arteris, articulated his version of the story during a presentation at the Semico Research IP Impact event here.

Gorillas in the midst

The simple version of Shuler's argument is this: Companies including Google, Facebook, Apple, eBay, Amazon, and Microsoft are wielding enormous silicon-design influence today. They're doing it because they can; they're doing it because they have to. Their system needs are so tightly defined and requirements for optimization are so unique that they've had to assemble huge engineering teams to create customized or semi-custom designs.

They're either doing it themselves all the way to the foundry door, or their engineering teams are attached to the hips of their silicon partners.

Shuler said:

"The algorithms you need to do web search are different than what you need to do e-commerce are different than what you need to do social networking side."

In addition, they're often responsible for vast server farms that must be optimized for energy usage. In some cases, only 3% of the power flowing into these farms is used for useful data analysis, Shuler said. Referencing an Electronic Design article by Cadence's Arif Khan and Osman Javed, Shuler added that often a third of it goes to cool the system itself. That puts even more pressure on the design team.

Shuler added:

"What big data enterprise companies have is a heat dissipation problem. They are struggling to figure out how can they make servers and routers do a better job of converting power into something useful and not just heat."  

So, the traditional silicon vendors are challenged to meet these custom or semi-custom design requests within the confines of their traditional (and non-custom) manufacturing models.

In some cases, these system houses have bypassed silicon vendors and designed their own chips, leveraging commercial IP such as ARM cores.

This raises the level of fear, uncertainty, and doubt among silicon vendors. 

Sandwich play

So, in Shuler's slide nearby you can see how silicon vendors might be squeezed from cash-rich customers above and by IP vendors below (vendors who could bypass them altogether to get designed in). 

Said Shuler:

"The semiconductor vendors are sandwiched. Between these guys who have tons of money to ... design their own chips and these guys down there who are enabling anybody to build chips."

But, for silicon vendors, it's not yet time to hit the panic button. That's because some of these OEM/systems houses won't take on the cost risk of designing their own chips.

In addition, the Googles, Facebooks, and Apples of the world represent a fraction of the world's system OEMs, albeit very large and influential ones.

Shuler argues that silicon vendors need to:

  • Understand where they best fit on the design spectrum (from COTS to custom)
  • Work farther upstream (talking with a Comcast or AT&T) to understand market requirements
  • Be flexible and focus on differentiation

The future of IP, EDA

What does this mean for IP vendors? Huge opportunity. Today, IP vendors sit at one end of the innovation food chain. While their value proposition includes design flexibility, IP vendors can struggle with pricing because their semiconductor customers can always threaten to design that particular block themselves.

Implications in this brave new world for IP vendors include:

  • Working upstream to understand market and tech requirements better
  • Re-engineering business models to get licensed farther upstream
  • Reconsidering licensing models

The EDA play

And then there's the consolidation angle. It's no fluke that EDA vendors have been buying IP companies at a feverish pace in recent years.

The technology not only adds a tool to the software sales belt; it facilitates a conversation with customers farther upstream as well.

Brian Fuller

Related stories:

-- What Do Applications Dream About?

-- We Need to Move "Past EDA": Tensilica Founder Rowen

11 Things I Learned by Browsing Cadence Online Support


I guess by now most of us are already familiar with Rapid Adoption Kits (RAKs). These are packages that include a detailed instructional document and a lab database. You can browse all the available materials at http://support.cadence.com.

Rapid Adoption Kits (RAKs) - The purpose of RAKs is to demonstrate how to use Cadence tools in your design flows to improve productivity and maximize the benefits of the tools. Here are a few important RAKs and appnotes that were published recently on the support portal and might be of interest to many of us!

1. RAK - Prototyping Foundation Flat Flow EDI 13.2

This content helps us learn gigascale prototyping with FlexModels and presents a prototyping usage model. FlexModels enable gigascale design exploration using the Encounter Digital Implementation (EDI) System.

2. RAK - Introduction to EDI System 13.2 & Block Implementation Flow

This RAK can be extremely helpful for beginners who intend to learn the EDI System. It covers the EDI command and graphical user interfaces (GUI), how to set up the EDI System, and how to implement the flat flow that can be used for chip or block implementation.

3. RAK - Basic Floorplanning in EDI System 13.2

This knowledge piece helps us learn how to specify the floorplan, move and edit placement constraints, create placement and routing blockages, create power and ground rings and stripes, and route power.

4. RAK - Power and Rail Analysis Using Encounter Power System (EPS) 13.1

This material explains how to do library characterization, analyze VCD files for windows of high signal activity, run static and dynamic power analysis, analyze current/power plots, perform what-if analysis, and interpret various rail analysis plots.

5. Appnote - Time Budgeting for Very Large, Timing Critical Hierarchical Design Using ART Methodology

Time budgeting for very large, timing-critical designs using virtual optimization is not accurate enough. With design sizes growing to over 300 million instances, obtaining accurate time budgets for timing-critical designs is becoming a necessity for many design teams. This document crisply describes how to use Active-Logic Reduction Technology (ART) to obtain accurate time budgets while reducing run time and memory footprint on timing-critical designs.

6. Appnote - Constraint Implementation and Validation in Interoperability Flow

The key benefit designers derive from using a mixed-signal interoperability flow and OpenAccess (OA) database is the seamless transfer and implementation of various routing constraints from analog to digital designs. Starting from Encounter Digital Implementation (EDI) 13.2 and Virtuoso 6.1.6, it is not only simple to create these constraints, but it is also easy to import or export them from one environment to another. Once the design is implemented using these constraints, you can use the Physical Verification System-Constraint Validation (PVS-CV) utility to validate whether these constraints are implemented correctly.

7. Appnote - Context Aware Placement in EDI System

This document is targeted at users who want to apply context constraints to cells with the Encounter Digital Implementation System (EDI System). It introduces how to define edge types and the required spacing rules, how to assign the specified edge types to cells, and how to spread cells with specific edge types to satisfy the spacing rules. It includes recommendations for a successful chip implementation.

8. Appnote - How to Detect and Fix Isolated Cut Via Violations

This content is meant for users and designers needing to locate and fix isolated cut violations in their designs, and CAD engineers wishing to implement such a flow using the Encounter Digital Implementation (EDI) system. This document provides some background to the problem, and a methodology for resolving violations by inserting multi-cut vias.

9. Appnote - Current Mode Virtual Attacker

Modeling small attackers accurately and efficiently is an important factor in the accuracy of noise and noise-on-delay analysis. This application note explains how the current mode virtual attacker is formed and used.

10. Appnote - End Cap Cells Usage in Encounter Digital Implementation (EDI) System Flow 

If you want to insert end caps into a design with the Encounter Digital Implementation System, this content should help you achieve your goal. It introduces various end cap insertion and verification methodologies recommended by Cadence for a successful chip implementation.

11. Appnote - Path Exception Priority Rules

This document describes the path exception priority rules followed by the Encounter Timing System (ETS) / Encounter Digital Implementation (EDI) System for finding the effective path exception for a path when multiple path exceptions are specified for it. The priority rules are explained (in descending order) by path exception command type and the various command options.

Happy Learning !!

Mukesh Jaiswal

Virtuosity: 12 Things I Learned in October by Browsing Cadence Online Support


Lots of routing, a little AMS, and finishing off with some fun...

Application Notes

1. Constraint Implementation and Validation in Interoperability Flow

The Mixed Signal Interoperability (MSI) flow allows designers to seamlessly transfer and implement routing constraints from analog to digital designs.  This document covers the steps required to apply and implement routing constraints in Encounter and validate these constraints using the Physical Verification System-Constraint Validation (PVS-CV) utility.

Rapid Adoption Kits

All RAKs include a detailed instructional document and database.

2. Virtuoso Interconnect Routing using VSR

Describes a new use model for running VSR using the Wire Assistant and top-level signal net routing in an analog top-level design.

3. Static and Dynamic Checks

This material describes the usage of the Spectre APS/XPS static and dynamic design checks. These checks may be used to identify typical design problems including high impedance nodes, DC leakage paths, extreme rise and fall times, excessive device currents, setup and hold timing errors, voltage domain issues, or connectivity problems. While the static checks are basic topology checks, the dynamic checks are performed during a Spectre APS/XPS transient simulation.

4. Mixed-Signal Verification -- System Verilog Real Number Modeling

Introduces the new SV-RNM 1800-2012 LRM capabilities that have been made available to aid in mixed-signal verification flows. It builds on the production-level solution currently available in Incisive 12.2/13.1, which is SV-RNM 1800-2009 LRM centric, and then introduces the newer capabilities made available by the SV-RNM 1800-2012 LRM that enhance SV-RNM modeling performance and functionality. The designer will be able to explore the user-defined types and resolution functions along with the debug capabilities made available by SimVision for these new features. Also includes an overview video.

5. Parasitic-Aware Design Using Custom Cells

ADE GXL Parasitic-Aware Design (PAD) features are used to investigate the effect of parasitic devices on a circuit. This RAK has been designed to highlight the features and functionality of the PAD flow in the IC 6.1.6 release, which enable the user to incorporate parasitic estimates into their simulations using custom parasitic elements. It also includes an appendix on how to build a custom parasitic cell. This RAK pairs well with the earlier Parasitic-Aware Design Workshop, which covers the entire PAD flow, including parasitic estimation, filtering extracted parasitics, parasitic stitching from extracted views, and parasitic reporting.

Videos

6. AMS Simulation with Multiple Logic Disciplines and Power Supplies on a Single Instance

Demonstrates how to run an AMS simulation with an instance that has multiple logic disciplines and power supplies. In this particular example, the instance has both 1.2V and 3.3V ports.

7. IC 6.1.6 Pin to Trunk Block-Level Routing

Frequent browsers of the Cadence Online Support Video Library may have noticed that many video demonstrations have been organized into "channels" or playlists.  Perfect for binge-watching on a rainy afternoon.  This channel features two videos covering block-level pin-to-trunk routing basics and routing between blocks.

8. IC 6.1.6 Pin To Trunk Device-Level Routing

These three videos cover device-level pin-to-trunk routing basics, wire assistant overrides, routing scope, and via control.

Cadence Community Blogs

9. IC6.1.6 Virtuoso Space-Based Mixed-Signal Router (VSR)

Nice FAQ-style overview of the new features in the Virtuoso Space-Based Router in the context of chip/block assembly routing in mixed-signal analog-on-top (AoT) designs.

10. Spectre XPS -- Cadence Reinvents FastSPICE Simulation

Although it has been under evaluation at several Early Access Partner customers for some months, the official launch of the Spectre XPS FastSPICE simulator was announced in October.  Initially targeted at SRAM characterization, in conjunction with Cadence's Liberate MX tool, Spectre XPS uses advanced partitioning techniques to achieve tremendous performance gains.

11. EDA Consortium Extravaganza Celebrates 50 Years of Design Automation

"Engineering" and "Extravaganza" are not two words normally seen together, so you've got to gawk a bit when the geeks come out to play.  The 50-year anniversary, and the event, even made it into the Huffington Post, where, no doubt, my mother is still scratching her head wondering "what exactly is it that you do"? 

12. Unhinged

Our new Editor-in-Chief just released Episode 4 of this dynamic, off-the-wall Web show which combines geeky humor with actual news and interesting interviews in a format well-suited to short-attention-span creatures such as myself.  You may laugh, you may cringe, but you will be entertained.

User View: UVM Sequence Layering Brings IC Functional Verification to Higher Level


Engineers at a leading IC design services company recently came up with a novel approach that improves verification efficiency by "layering" sequences of transactions to reach higher levels of abstraction. The approach is based on the Universal Verification Methodology (UVM), which defines a reusable UVM Verification Component (UVC) that, among other tasks, provides stimulus to the design under test using sequences.

This methodology was presented at the recent CDNLive India 2013 conference by Gaurav Jalan (right), Director of Engineering at SmartPlay Technologies, a global leader in VLSI design services. The paper, titled "UVM Sequence Layering in Configurable Memory Controller Verification," won a best paper award. Presentation slides are available online, as described at the end of this post. Jalan also writes a functional verification blog.

In an interview, Jalan described the problem they were trying to solve and the reasons for introducing a new methodology. He noted that his team was asked to verify a highly configurable DRAM memory controller with over 400 possible configurations. They were provided with raw RTL and a specification by the customer, and their task was to define a verification strategy using SystemVerilog and UVM, come up with an executable verification plan, develop a verification environment using Cadence verification IP (VIP), and "execute verification from scratch to closure."

Traditional and New Approach

The design under test (DUT) supported multiple protocols on both the processor interface and the DRAM interface. In a conventional verification approach, engineers would develop test sequences targeting each protocol on the processor side, and develop scoreboards for each protocol on the DRAM side. In this approach each configuration works as a separate DUT, and additional time is required to develop all the sequences for different configurations. The conventional approach also poses limitations when it comes to reusability, portability, scalability, and maintainability.

In the conventional approach, Jalan said, "the overall reusability index was low. To avoid this, we decided to move a level up - define tests and sequences based on the test plan at a higher, protocol-agnostic level that would be translated at run time into the desired protocols based on configuration. This concept is called sequence layering."

With sequence layering, engineers can develop test scenarios that are protocol independent. As shown below, the layering is achieved by a "layering agent" derived from uvm_agent. The layering agent has a monitor and a sequencer, and it "bundles" (translates) protocol-level transaction items into higher-level sequence items. A lower level sequencer translates the abstract level sequences into protocol-specific sequences, and pushes them forward to a driver.

"The way it [layering agent] works," Jalan said, "is that there is a high-level sequence item associated with the layering sequencer. It would connect to the sequence of the lower level protocols using the same mechanism as is used by the sequencer and driver in an uvm_agent. The lower level sequence would have only one sequence running as a ‘forever' thread. Inside this sequence we have a get_next_item similar to what we do in a uvm_driver. The item is received from the higher level sequencer. It is translated by the lower level sequencer and given to its driver. The response is then passed back to the layered sequencer, indicating that the lower level sequencer is ready for the next item."

The layering approach allows engineers to create generic sequences that are reusable across protocol interfaces. An example of a higher-level scenario might be, "Burst Read followed by a Burst Write in the same row." Protocol-specific coverage ensures that all scenarios are covered.  

Faster Turnaround Time

The net result was a quicker turnaround time for each DUT configuration. Jalan said that the extra effort required to build the layering agent was 6%, but engineers saved almost 25% overall time later when developing sequences and tests. The overall project effort was 100 man-months. The effort included the use of Cadence ePlanner to develop an executable verification plan, Cadence eManager to track regressions and coverage, Cadence VIP for processor interfaces, and Denali models (from Cadence) for memory interfaces. The Cadence VIP team provided "excellent" support, Jalan said.

To view the CDNLive! India paper, click here. You will be asked for your Cadence log-in (or to sign up for one - a quick and easy process). You will then see a menu of proceedings from seven tracks. Choose track 5, "Functional Verification Track II," and scroll down to the paper titled "UVM Sequence Layering in Configurable Memory Controller" by Ghouse Syed Sherraj and Gaurav Jalan.

Richard Goering


Signal Integrity Analysis of Serial Data Channels—A Complete Solution Using Allegro Sigrity


Back in the day, when challenged to transfer data faster, we increased the width of the interface from 8 bits to 16 or from 16 to 32 and so on. The wider the bus got, the more challenging timing became. We added strobes for interface lanes to better manage timing, but faster and wider buses added more complexity. Somewhere around 64 bits and 500 MHz (remember PCI-X 533?), we recognized that the trend could not continue.

Today, with the exception of memory interfaces, multi-gigabit data transfers are accomplished through serial interfaces. Transceivers now include complex adaptive equalization that can only be described through software algorithms. To enable simulation, the industry extended the IBIS modeling specification standard to include an algorithmic modeling interface (AMI).

Cadence's Allegro Sigrity Serial Link Analysis is a full-featured serial analysis solution that includes model extraction, topology generation, and system signal-integrity analysis with IBIS-AMI support. The die-to-die topology is modeled in SystemSI. An impulse response of the channel is convolved with a large bitstream using high-capacity channel simulation, and then the channel analysis results can be compared to industry-standard compliance tests, such as PCI Express (PCIe) 3.0.
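The speed of this kind of channel simulation comes from linear superposition: IBIS-AMI analysis treats the passive channel as linear and time-invariant, so once the impulse response h(t) has been extracted, every transmitted bit simply adds a shifted, scaled copy of it. For bit values b_k and unit interval T, the received waveform is roughly:

y(t) = Σ_k b_k · h(t − kT)

Evaluating this sum over millions of bits is vastly cheaper than running a transient SPICE simulation of the full channel, which is what makes long-bitstream eye and compliance analysis practical.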

Grab a cup of coffee and watch the demonstration of our complete solution that addresses serial link analysis. These simulation results can be compared to PCIe 3.0 requirements using our compliance kit that ships with Allegro Sigrity Serial Link Analysis.

While many signal-integrity engineers focus on final system verification, you can see from the demonstration the advantages of using the serial link analysis technology early in the design cycle. While most would say that an 8.0Gbps channel such as PCIe 3.0 should be implemented in a flip-chip package, this demonstration shows the flip-chip versus wirebond quality differences. Given the results, an enterprising signal integrity team may want to continue tweaking the channel and transceiver in the hope that they can implement PCIe 3.0 in a lower cost package and increase profits for their company.

Tell us about your experiences using SystemSI and Cadence Sigrity modeling and extraction technology!

Team Allegro

Hardware/Software Modeling Opportunities and Strange Beasts


SAN JOSE, Calif.—Ever seen an Eierlegende Wollmilchsau? If you have, you're sitting on an electronics design automation gold mine.

That was the question Cadence's Frank Schirrmeister posed Monday to a gathering of design engineers at ICCAD at the San Jose Hilton.

"Eierlegende Wollmilchsau" is German for an animal that's part pig, sheep, and chicken; in other words, a beast that produces wool, eggs and meat—a lot of things in one package. Whether there actually is an Eierlegende Wollmilchsau we'll get to in a minute.

Increasing complexity, one solution?

Schirrmeister's point was this "one-size-fits-all" solution is a sort-of holy grail in design engineering. This is especially so because the complexity of designs is increasing by the day, time-to-market pressures are unrelenting, and the landscape of technology providers is shifting under our feet.

Consider, for example, that semiconductor vendors provide more and more software functionality as their customers move up the value chain; SoCs can incorporate as many as 100 or more IP blocks from many different vendors. Software must be modeled as hardware is developed; we know this if we're to get a design out the door before mandatory retirement age.

That's all well and good, but who models what during the design? Schirrmeister, who is Cadence group director, product marketing, for system development, said:

"Over the last 15 years or so ... the value chain has changed completely, which has resulted in a big need of models to be exchanged. You have all these providers providing models of the hardware to the next person in the chain to build systems, and then at the end to provide an environment in which the software, like the OS..., [must] be developed in parallel, otherwise you look at a very long pole in the design flow." 

Mythical beast

The challenge is there is no Eierlegende Wollmilchsau. The software development kit may be fast, "but it basically ignores hardware. At the end (of the spectrum), I have real-time speed and I'm fully accurate, but I'm really late, and it's fairly difficult to debug," Schirrmeister argued.

In between, he added, there are different engines that all require different models. In some cases, engineers trade off speed for accuracy or vice versa.

Going hybrid

Since complexity and various design demands will never abate, solutions have to be found if we're to continue the pace of innovation. That's where the notion of verification hybrids comes in. (Examples shown in image below).

For example, Schirrmeister said, "if you combine the virtual prototyping world with the RTL implementation and create hybrids, you have the potential for a good solution. That's really where the industry is going."

In that case, RTL is good for modeling complex hardware items with good hardware debug early on; virtual prototyping, on the other hand, helps engineers bring in software early and is good for software debug, Schirrmeister said.

As you move toward the end of the design flow, you can also consider a hybrid in which you bring up real RTL with software and use emulation for gate-level simulation. That can be handy to bring together your software, hardware, and power profiles and run much longer cycles, he said. For other examples of intriguing hybrid use cases, please see Richard Goering's posts "Palladium XP II—Two New Use Models for Hardware/Software Verification" and "Designer View: New Emulation Use Models Employ Virtual Targets."

In the end, it would be amazing to come across an Eierlegende Wollmilchsau, but it's not going to happen, according to Schirrmeister.

He told the ICCAD audience:

"There is no one size fits all. Each of those representations has modeling requirements, has different advantages and disadvantages. If you can combine virtual platforms with, for example, acceleration and emulation for hardware debug, it gives you a good solution to trade speed for software development in a virtual  platform with acceleration, emulation, and hardware accuracy because you can fully look into the RTL."

 

Brian Fuller

Related stories:

Designer View: New Emulation Use Models Employ Virtual Targets

Palladium XP II—Two New Use Models for Hardware/Software Verification

 

High-Level Synthesis Now Spans the Datapath-Control Spectrum

When we talk to prospective high-level synthesis (HLS) customers, one of the slides we show is a pie chart that breaks down the types of production designs (that we are aware of) for which customers have used C-to-Silicon Compiler. The current snapshot looks like this: Then we overlay this breakdown: The "primarily datapath" designs have varying amounts of control logic in them, the "primarily control" designs have some datapath content, and the "mixed" designs have...

Digital Cameras to Get New Image Sensor Technology


Fifteen years ago (at least!) we bought an HP laptop for our home and they generously threw in a digital camera. That was the first digital camera we ever had. What a revelation it was to take pictures as long as your heart desired and delete the lousy ones.

The downside? The image sensor was just 2 megapixels (MP). Color clarity was poor and almost every image was washed out, but we still loved it. Today, of course, you can get digital SLR cameras with 20 MP and excellent imagers embedded in your laptop screen for webcams and in your mobile phones for "selfies."

The quality is astonishing. But more is coming, as it always will in our industry. But just what?

In the end, "it's about the pixels," to quote Marty Agan, director of engineering with Samsung Semiconductor. Agan and his colleague, senior marketing manager Justin Ging, woke up one morning, donned the same-colored shirts, and came down to the palatial Unhinged TV studios to talk tech, specifically image-sensor technology.

One of the obvious questions to pose was "why should we care when today's image quality is astounding" (and the file sizes being captured are straining our memory cards)?

Ging said:

"Better image sensors will have better quality, be better at low light in darker environments; they're faster, have burst capture, (good) slow motion and also have less power consumption."

Agan said there are four key areas engineers need to consider when specing an image sensor in a mobile design:

  • Resolution
  • Full well capacity (the number of photons an individual pixel can collect and convert to electrons before it saturates)
  • Cross talk
  • Sensitivity

And the pair expounded on an interesting Samsung technology called Isocell, now being rolled out, which promises to make image capture in mobile devices even more eye-catching.

Check out our conversation about all this and more: 

 

Brian Fuller

Related stories:

--Unhinged: Alberto on Cyber-Physical Systems, 25 years in EDA

--Unhinged: Gary Smith Calls EDA, Tech Media on the Carpet

--25th Anniversary: Hogan on EDA History and Three Little Words 

ICCAD Keynote: Will Resolution-Challenged Lithography Improve IC Design?


First, the bad news. We may have to go all the way to the 7nm process node without extreme ultraviolet (EUV) lithography, using increasingly clever tricks—and design restrictions and constraints—so our current generation of 193nm lithography tools can print features correctly on silicon wafers.

But there may be a silver lining. According to Lars Liebmann (right), distinguished engineer at IBM and a keynote speaker at the International Conference on CAD (ICCAD) on Nov. 19, 2013, restrictions and constraints may open up new design methodologies. His talk was titled "The Escalating Design Impact of Resolution-Challenged Lithography."

While Liebmann is a renowned lithography expert, ICCAD is primarily a conference for EDA developers. Liebmann said he had come to ICCAD to "share with you a little bit of how far we're bending over backwards to keep [semiconductor] technology moving forward." He cited a growing need for deep and early collaboration among designers, EDA developers, process engineers, and lithography experts, and noted the need to "mutually understand what problems we're trying to solve."

Liebmann cited these goals for his fast-moving, technically deep, hour-long keynote:

  • Provide a tutorial background on computational lithography
  • Get sympathy from the design community
  • Highlight the need for early, deep, and sincere engagement
  • Maintain the hope of turning constraints into opportunities

"In the good old days, you had an infinite variety in the length and width of transistors, and now we're down to maybe one or two channel lengths and a 2-fin or 3-fin device," he said. "But maybe there's some opportunity to improve how you approach design and not just treat it as a restriction."

Some basics about lithography

When considering lithography issues, Liebmann doesn't pay too much attention to technology node names such as 22nm or 10nm, noting that these node names "have nothing to do with any kind of physical dimension in the technology." What's more important is the minimum pitch that has to be resolved. At the 22nm process node, that's 80nm. At the 14nm process node, that's 64nm. This is an important distinction because double patterning is generally required below an 80nm minimum pitch.

But the main variable Liebmann watches is the Rayleigh factor, abbreviated as k1. This is basically a measure of how aggressively the lithography is pushing resolution—the lower the number, as Liebmann put it, "the harder your lithography team is working." Yield, cost, and complexity all become problematic when k1 < 0.65.
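For reference, k1 is the scaling constant in the Rayleigh resolution criterion, which ties the smallest printable half-pitch to the illumination wavelength λ and the numerical aperture NA of the projection optics:

half-pitch = k1 × λ / NA

With λ fixed at 193nm and NA topping out around 1.35 for immersion tools, every further pitch shrink has to come from pushing k1 lower, toward the single-exposure limit of k1 = 0.25 that comes up later in the talk.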

When IC feature sizes shrank below the 193nm wavelength provided by steppers, lithographers employed "tricks" to keep printing features accurately on silicon. One technique was optical proximity correction (OPC). Here, Liebmann said, "we would basically build mathematical models of how features are distorted on the wafer as a function of diffraction effects. We would effectively pre-distort the mask by taking layouts and adding distortions or decorations so we could get an image that more closely resembled what the designer had in mind."

Just as the invention of the airplane was followed by the invention of the parachute, Liebmann said, OPC was quickly followed by "lithography-friendly design" in case OPC didn't quite work out. The idea here was to provide models to designers rather than presenting hundreds of restrictive design rules. Following a lithography simulation, designers could fix "hot spots" to eliminate problematic layout configurations. OPC and lithography friendly design reduced the k1 factor to 0.5.

Going off the axis

The next lithography trick, off-axis illumination, is pretty much what it sounds like. If you can tilt the illuminator on its side at exactly the right angle, Liebmann said, you can form an image that prints correctly on silicon. In addition, you can double the resolution and get close to k1=0.25.

But from a design perspective, there's a downside. "Going off axis was a genius move, but then we had to deal with non-monotonic behavior and design rules became incredibly complicated," Liebmann said. Still, he said, the industry got past three technology nodes with that technology. Around the same time equipment manufacturers started providing immersion lithography, accomplished by placing a layer of water between the lens and the wafer.

Another bag of tricks came with asymmetric illumination techniques such as double dipole lithography (DDL) and source mask optimization (SMO). These techniques involve very sophisticated optimizations that require collaboration between lithography teams and designers, who may have to design everything in one orientation (vertical or horizontal). The resulting features "are not pristine but are within spec, and you can avoid multiple exposures."

If you're doing SMO, Liebmann noted, you can get a 15% improvement in pattern variability by reducing the number of layout constructs and doing a detailed optimization. "Let's not argue over design rules," he said. "Let's argue over what are the exact shapes you need to build your design, and how we can design tools that think in terms of constructs rather than rules."

Lithography for 14nm and below

If you want to go below k1=0.25, it will have to be done with double and/or triple patterning. With the simplest approach to double patterning, litho-etch litho-etch (LELE), you are simply printing half the patterns at twice the pitch (a 64nm-pitch layer, for example, is decomposed into two interleaved 128nm-pitch masks, each printable in a single exposure). This works when the minimum pitch is between 50nm and 80nm. At the 10nm process node, however, when the minimum pitch is below 50nm, self-aligned double patterning (SADP, also called sidewall image transfer) is needed. Rather than a detailed explanation here, I'll just note that this is a much more complicated process that involves forming relief features called "mandrels," depositing sidewalls, and cutting away patterns that are not needed.

With the "first generation" of multiple patterning (14nm process node), Liebmann said, only a few metal levels need LELE. In the "second generation" (10nm process node) more levels will need LELE, a few levels will use SADP, and a few levels will need triple patterning. Why triple patterning? "We need the third color to overcome 2D violations," Liebmann said.

What about design? Liebmann emphasized that the entire IC physical design flow needs to be double-patterning aware. He presented some of the requirements for placement, routing, and extraction. For routing, he noted, it's not enough to follow the relatively simple set of LELE rules—routers must also be ready for SADP, which will impose limitations such as forbidden spaces. Liebmann said researchers came across some unexpected challenges with line-end stagger rules and via coloring.

Liebmann briefly mentioned some work he has been doing with Cadence on via coloring, line-end stagger rules, and cell flipping and color swapping. "If you want to get into this game, you have to start early," he said.

What happens if EUV isn't ready?

For many years, chip makers have awaited the commercialization of extreme ultraviolet (EUV) lithography. It has many advantages, including a 13.5nm wavelength (compared to 193nm for optical lithography). It has many challenges, including source power, image placement, and mask defects. IBM hopes to run some integrated wafers using EUV next year, but Liebmann noted that "there is still a huge possibility that this [EUV] won't be ready for 7nm and we will have to keep going with other means."

What other means? One may be block copolymer directed self-assembly (DSA). This is a material-based method that extends the patterning capability of 193nm steppers. "We take every fourth space and make it a guide pattern," Liebmann said. "We apply a copolymer film, heat the thing up, and like magic that film self-assembles into the pitches we need in our design."

Another possible approach is to take SADP and turn it into SAQP—self-aligned quadruple patterning.  It's easier to understand than DSA, but it requires even more mask layers than SADP.

"The significant outcome of all this work," Liebmann concluded, "is that once you deal with the fact that your layout is very constrained, you can take advantage of it and come up with a very optimized design environment where everything is synthesized, you no longer have custom layout, and you have smart memory compilers. You can get significant improvements in design efficiency."

This ICCAD keynote session was organized by Joel Phillips, senior architect at Cadence, on behalf of the IEEE Council on EDA (CEDA).

Related blog posts

CDNLive!—IBM Expert Quantifies Design Impact of Double Patterning

Video—Easing the Challenges of Double Patterning at 20nm

Cadence ICCAD blog post

Hardware/Software Modeling Opportunities and Strange Beasts

 

SKILL for the Skilled: Simple Testing Macros

In this post I want to look at an easy way to write simple self-testing code. This includes using the SKILL built-in assert macro and a few other macros which you can derive from it.

The assert macro

This new macro, assert, was added in SKILL version 32. You can find out which version of SKILL you are using with the SKILL function getSkillVersion, which returns a string such as "SKILL32.00" or "SKILL33.00".

Using this macro is simple. In its simplest form, you wrap any single expression within (assert ...). At evaluation time this will trigger an error if the expression evaluates to nil.

CIW> (assert 1 + 1 == 2)
nil

CIW> (assert 2 + 2 == 5)
*Error* ASSERT FAILED: ((2 + 2) == 5)
<<< Stack Trace >>>
error("ASSERT FAILED: %L\n" '((2 + 2) == 5))
unless(((2 + 2) == 5) error("ASSERT FAILED: %L\n" '(& == 5)))
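As the stack trace shows, there is no deep magic here: the macro simply rewrites the call into an unless form that evaluates the expression and, if it yields nil, signals an error quoting the original, unevaluated expression. Conceptually, the second call above expands into:

(unless ((2 + 2) == 5)
  (error "ASSERT FAILED: %L\n" '((2 + 2) == 5)))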

You can also specify the error message using printf-style arguments.

CIW> (defun testit (x "n")
       (assert x > 3 "expecting x > 3, not %L" x)
       x-3)

CIW> (testit 12)
9

CIW> (testit 2)
*Error* expecting x > 3, not 2
<<< Stack Trace >>>
error("expecting x > 3, not %L" x)
unless((x > 3) error("expecting x > 3, not %L" x))
testit(2)

What is a macro?

Macros are a feature in many lisps including emacs lisp, common lisp, and SKILL. Consequently, you can find tons of information on the Internet explaining lisp macros. A quick Google search for "lisp macro" returns pages of useful results.

In particular, a SKILL macro is a special type of function which computes and returns another piece of SKILL code in the form of an s-expression. This capability allows SKILL programs to write SKILL programs as a function of their raw, unevaluated, operands at the call-site. Although macros can certainly be abused, when used correctly SKILL macros can greatly enhance readability by abstracting away certain details, or by automating repetitive patterns.
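As a tiny, self-contained illustration (twice is not a built-in; it is just a sketch for this post), here is a macro that receives its operand unevaluated and returns a new s-expression built around it:

(defmacro twice (expr)
  ;; Build and return the s-expression (plus expr expr).
  ;; The backquote constructs the list; each comma splices the
  ;; caller's raw, unevaluated operand into it.
  `(plus ,expr ,expr))

CIW> (twice 2+3)
10

Note that the operand appears twice in the expansion and is therefore evaluated twice, a classic macro pitfall; this is exactly why the assert_test macro shown later binds each operand to a fresh gensym variable before testing it.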

You can find out more about SKILL macros by consulting the Cadence documentation. There is also an Advanced SKILL Programming class which Cadence Educational Services offers.

In-line tests

You can use assert inside your functions for run-time checks. You can also use assert at the top level of your SKILL files for load-time checks. This has the added advantage of helping the person who reads your code understand how the function you are defining is used. In the following example, anyone who reads your function definition can immediately see some examples of how to use the function.
;; Sort a list of strings case independently into alphabetical order.
(defun sort_case_independent (words "l")
  (sort words (lambda (w1 w2)
                (alphalessp (lowerCase w1)
                            (lowerCase w2)))))

(assert nil 
        ; works for empty list?
        == (sort_case_independent nil)) 

(assert '("a"); works for singleton list?
         ==  (sort_case_independent '("a")))) 

(assert '("a" "B")
         == (sort_case_independent '("B" "a")))

(assert '("a" "B")
         == (sort_case_independent '("a" "B")))

(assert '("A" "b")
        == (sort_case_independent '("b" "A")))

(assert '("A" "b")
        == (sort_case_independent '("A" "b")))

(assert '("A" "b" "c" "D")
        == (sort_case_independent '("c" "D" "A" "b")))
Writing your SKILL files to include these top-level assertions has yet another advantage: if someone later modifies your function, sort_case_independent, the tests will run the next time anyone loads the file. This means if an error has been introduced in the function, some sanity testing happens at load time. Furthermore, if someone enhances the function in a way that breaks backward compatibility, the assertion will fail at load time.

Defining assert if it is missing

If you are using a version of Virtuoso, Allegro, etc., that does not contain a definition of assert, you can define it yourself.
(unless (isCallable 'assert)
  (defmacro assert (expression @rest printf_style_args) 
    (if printf_style_args 
        `(unless ,expression
           (error ,@printf_style_args))
        `(unless ,expression
           (error "ASSERTION FAILED: %L\n" ',expression)))))

The assert_test macro

Some unit testing frameworks supply assertion functions such as assert_less, assert_greater, assert_equal, and assert_not_equal. It is possible in SKILL to define a single assertion macro, called assert_test, which provides all these capabilities in one. You don't really need assert_equal, assert_not_equal, assert_lessp, etc.

This macro is useful for building test cases. This macro attempts to output a helpful message if the assertion fails. The message includes the parameters to the testing expression, and the values they evaluate to. For example:

CIW> A = 1
1

CIW> B = 2
2

CIW> (assert_test A+B == B+2)
*Error* (A + B)
  --> 3
(B + 2)
  --> 4
FAILED ASSERTION: ((A + B) == (B + 2))
<<< Stack Trace >>>
...
CIW> (assert_test A+B > B+2)
*Error* (A + B)
  --> 3
(B + 2)
  --> 4
FAILED ASSERTION: ((A + B) > (B + 2))
<<< Stack Trace >>>
...

What is intelligent about assert_test, as can be seen from the above example, is that it constructs an error message telling you the text of the assertion that failed: ((A + B) == (B + 2)). It also shows you the arguments to the testing function in both raw and evaluated form: (A + B) --> 3 and (B + 2) --> 4.

The macro definition is not trivial, but the code is given here. You don't really need to understand how it works in order to use it.

;; ARGUMENTS:
;;   expr - an expression to evaluate, asserting that it does not return nil
;;   ?ident ident - specifies an optional identifier which will be printed with [%L] in
;;                     the output if the assertion fails.  This will help you identify the
;;                     exact assertion that failed when scanning a testing log file.
;;   printf_style_args - additional printed information which will be output if the
;;                     assertion fails.
(defmacro assert_test (expr @key ident @rest printf_style_args)
  (if (atom expr)
      `(assert ,expr)
      (let ((extra (if printf_style_args
                       `(strcat "\n" (sprintf nil ,@printf_style_args))
                       "")))
        (destructuringBind (operator @rest operands) expr
          (letseq ((vars (foreach mapcar _operand operands
                           (gensym)))
                   (bindings (foreach mapcar (var operand) vars operands
                               (list var operand)))
                   (assertion `(,operator ,@vars))
                   (errors (foreach mapcar (var operand) vars operands
                             `(sprintf nil "%L\n  --> %L" ',operand ,var))))
            `(let ,bindings
               (unless ,assertion
                 (error "%s%s%s"
                        (if ',ident
                            (sprintf nil "[%L] " ,ident)
                            "")
                        (buildString (list ,@errors
                                           (sprintf nil "FAILED ASSERTION: %L" ',expr))
                                     "\n")
                        ,extra))))))))
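As a usage sketch (the ?ident value and message here are made up for illustration), the optional ?ident argument tags the assertion so a failure can be located quickly in a testing log, and any trailing printf-style arguments append extra context to the failure message:

(assert_test (sort_case_independent '("B" "a")) == '("a" "B")
             ?ident "sort-mixed-case"
             "regression for case-independent sorting")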

The assert_fails macro

With the assertion macros presented above you can make fairly robust assertions about the return values of functions. A limitation, however, is that you cannot easily make assertions about the corner cases where your function triggers an error.

The following macro, assert_fails, provides the ability to assert that an expression triggers an error. For example, the sort_case_independent function defined above will fail, triggering an error, if given a list containing a non-string.

CIW> (sort_case_independent '("a" "b" 42 "c" "d"))
*Error* lowerCase: argument #1 should be either a string or a symbol (type template = "S") at line 112 of file "*ciwInPort*" - 42
<<< Stack Trace >>>
lowerCase(w2)
alphalessp(lowerCase(w1) lowerCase(w2))
funobj@0x2cac49a8("a" 42)
sort(words lambda((w1 w2) alphalessp(lowerCase(w1) lowerCase(w2))))
sort_case_independent('("a" "b" 42 "c" "d"))

You could fix this by enhancing the function to do something reasonable in such a situation. Or you could simply document the limitation, in which case you might want to extend the in-line test cases as well.

(assert_fails (sort_case_independent '("a" "b" 42 "c" "d")))

Here is an implementation of such an assert_fails macro.

(defmacro assert_fails (expression)
  `(assert (not (errset ,expression))
           "EXPECTING FAILURE: %L\n" ',expression))
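The macro leans on errset, which evaluates its argument while trapping any error: it returns nil when an error was triggered, and a non-nil (singleton list) result otherwise, so (not (errset expr)) is t exactly when expr fails. A quick sketch using lowerCase, which errors on non-strings:

(assert_fails (lowerCase 42))      ; passes: lowerCase triggers an error on a number
(assert_fails (lowerCase "abc"))   ; fails with EXPECTING FAILURE: no error is triggered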

Summary

In this article we looked at the assert macro, which is probably already in the version of Virtuoso or Allegro you are using; if not, you can easily define it yourself. We also looked at assert_test and assert_fails, which you can define yourself. You can use these three macros to easily improve the robustness of your SKILL programs.


Low-Power IEEE 1801 / UPF Simulation Rapid Adoption Kit Now Available


There is no better way than a self-help training kit -- a rapid adoption kit, or RAK -- to demonstrate the Incisive Enterprise Simulator's IEEE 1801 / UPF low-power features and their usage. The features include:

  • Unique SimVision debugging 
  • Patent-pending power supply network visualization and debugging
  • Tcl extensions for LP debugging
  • Support for Liberty file power description
  • Standby mode support
  • Support for Verilog, VHDL, and mixed language
  • Automatic understanding of complex feedthroughs
  • Replay of initial blocks
  • 'x' corruption for integers and enumerated types
  • Automatic understanding of loop variables
  • Automatic support for analog interconnections

 

Mickey Rodriguez, AVS Staff Solutions Engineer, has developed a low-power UPF-based RAK, which is now available on Cadence Online Support for you to download.

  • This rapid adoption kit illustrates Incisive Enterprise Simulator (IES) support for the IEEE 1801 power intent standard.
  • In addition to an overview of IES features and of the SimVision and Tcl debug features, a lab is provided to give the user an opportunity to try these out.

[Figure: Patent-pending power supply network browser (only available with the LP option to IES)]

The complete RAK and associated overview presentation can be downloaded from our SoC and Functional Verification RAK page:

Rapid Adoption Kits:

  • Introduction to IEEE-1801 Low Power Simulation -- Overview: View | RAK Database: Download (2.3 MB)

 

We are covering the following technologies through our RAKs at this moment:

  • Synthesis, Test and Verification flow
  • Encounter Digital Implementation (EDI) System and Sign-off Flow
  • Virtuoso Custom IC and Sign-off Flow
  • Silicon-Package-Board Design
  • Verification IP
  • SOC and IP level Functional Verification
  • System level verification and validation with Palladium XP

Please visit http://support.cadence.com/raks to download your copy of these RAKs.

We will continue to provide self-help content on Cadence Online Support, your 24/7 partner for learning more about Cadence tools, technologies, and methodologies as well as getting help in resolving issues related to Cadence software. If you are signed up for e-mail notifications, you're likely to notice new solutions, application notes (technical papers), videos, manuals, etc.

Note: To access the above documents, click a link and use your Cadence credentials to log on to the Cadence Online Support http://support.cadence.com website.

Happy Learning!

Sumeet Aggarwal and Adam Sherer


Optimize Your PCB Decoupling Capacitors and Remain a Person of Integrity


How much integrity is too much?  If your PCB designs apply one or more decoupling capacitors (decaps) per power pin, then you may have too much integrity - power integrity, that is. Your designs are also more expensive than necessary and your decap mounting structures have vias in areas that could be better applied for signal routing.  If you reduce the number of decaps, will you have less integrity?  Will your PCB's Power Delivery Network (PDN) performance (and your performance as a designer) be challenged?

Successful selection of the type and quantity of decaps, and of their placement locations, depends on many factors, including device switching current, target impedance profiles, capacitance and inductance (ESL) of the decaps, mounting inductance of each decap and device, and PDN inductance between device and decap.  Even with the availability of detailed simulation tools to verify PDN performance, it is often not clear how to make decap implementation tradeoffs.  Pre-layout decisions tend to add more decaps than truly needed, and selection and placement are often based on experience and "best practices".  And while it is easier to remove extra decaps than to add more during post-layout verification, over-design not only adds the cost of unneeded decaps, but may unnecessarily force use of extra PCB layers due to blocked routing channels that need not be blocked.
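As one example of how these factors combine, a widely used rule of thumb (a general PDN guideline, not something specific to any Cadence tool) derives a target impedance from the supply voltage, the allowed ripple fraction, and the expected transient current:

$$ Z_{\text{target}} = \frac{V_{\text{supply}} \times r_{\text{ripple}}}{I_{\text{transient}}} $$

Decap type, quantity, and placement are then chosen so that the PDN impedance stays below this target across the frequency band where the device draws significant switching current.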

Cadence Sigrity OptimizePI provides an analytical basis upon which to make decisions regarding PDN design tradeoffs.  Pre-layout guidance is provided for decap types and how many should be placed on the top/bottom of the design and under the devices. This helps to dramatically reduce over-design at an early stage in the design flow, where it can yield the greatest benefit to the overall design.  Post-layout analysis considers thousands of design alternatives in a completely automated manner and provides a short list of optimal decap schemes from which to select the most appropriate tradeoff for your design. PDN performance is maximized while cost, area, and emissions are simultaneously minimized.  Even for designs that have undergone pre-layout analysis, it is typical to reduce decap cost by 15% while maintaining or improving performance during post-layout optimization. For decap implementations that are over-designed from the beginning, the decap cost savings are often 50% or more with the potential for significant PDN performance improvements.

Grab a warm or frosty cold beverage and enjoy a demonstration of Cadence Sigrity OptimizePI. An 18-layer FPGA-based board is examined for which the 1.5V rail of the original design contained more than 120 decaps. This original design is observed to have impedance peaks, corresponding to high PDN noise, in the frequency range where significant energy will exist for typical switching circuits.  SPOILER ALERT: OptimizePI reduced decap cost and improved performance.

You can see from the demonstration how easily design engineers, board layout designers, and power integrity experts alike can utilize OptimizePI to provide analytical guidance for their decap implementations.

Tell us about your experiences using OptimizePI.

TeamAllegro

 

Accelerating Code Coverage Using Palladium XP Rapid Adoption Kit

Code coverage is an effective tool in the verification process, giving insights into testing completeness as well as identifying highly active or inactive areas of a design. Collecting code coverage in simulation on large designs can be a very time-consuming process. Code coverage can be collected at emulation speeds in the Palladium XP system. The Accelerating Code Coverage Rapid Adoption Kit (RAK) defines which types of code coverage can be accelerated and how to enable and configure code coverage...

Signoff Summit: An Update on OCV, AOCV, SOCV, and Statistical Timing


It's easy to be confused by the alphabet soup of acronyms that surrounds static timing analysis (STA). At the Signoff Summit at Cadence headquarters on Nov. 21, 2013, Igor Keller, senior R&D architect at the company, explained several on-chip variation (OCV) approaches that provide some of the advantages of statistical STA (SSTA) without its relatively high costs.

Ten years ago, as I well remember, SSTA was poised to become one of the next big things in IC design. It made sense. Rather than returning a single timing number, SSTA could return a statistical distribution. It could tell you, for example, that you have an 80% chance of hitting a given timing number. But the development of SSTA libraries proved to be a stumbling block.

"Statistical timing is a great approach," Keller said. "It's the most accurate I can think of, but it's also the most expensive. Except for IDMs, nobody could really deploy it in production because it required too much run time and memory."

Alternatives to SSTA

Keller reviewed several approaches for handling in-die variations at advanced nodes, starting with plain old OCV analysis. OCV provides a single derating factor for all instances. Results can be grossly optimistic or pessimistic. As Keller noted, you may not be able to close your timing without leaving a lot of performance on the table.

Keller noted that the distinction between local and global variations is very important with OCV. You can handle global variations with corners (best-case, nominal, and worst-case combinations), but corner analysis is very difficult with local variations. These variations "do not correlate statistically, and they have a profound effect on OCV," he said. "The biggest challenge in OCV variations today is handling the local uncorrelated variables."

Advanced OCV (AOCV), sometimes referred to as location-based OCV (LOCV), is aimed at reducing pessimism. It provides variable derating for min/max, cell, arc, and stage count. Libraries can be created from existing SSTA characterization tools. The graphic below shows the OCV and AOCV derates compared to an "ideal" derate.

 

AOCV, however, assumes similar statistical variability between cells regardless of slew and load. It can still be very optimistic or pessimistic compared to SSTA. "You cannot assume that all your instances on the path are the same cells," Keller said. "You cannot assume that all the input slews are the same. People realized that you cannot really reduce pessimism a lot with AOCV." In particular, he said, stage count is a "flaky number" that can generate a lot of pessimism.

Keller had relatively little to say about parametric OCV (POCV), other than noting its elimination of stage count as a parameter. It represents one more step toward SSTA but still does not resolve the delay dependency on slew and load.

Statistical OCV in the Sweet Spot?

Statistical OCV (SOCV) is a simplified approach to SSTA that uses a single local variable. It solves the major limitations of AOCV, including variation dependency on slew and load, and the assumption that the same cell, or load, is in the path. It promises near SSTA accuracy for a small additional cost of runtime and memory compared to AOCV, and it can include signoff-accurate signal integrity (SI) analysis.
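In rough terms (a sketch of the generic single-local-variable statistical delay model, not necessarily Cadence's exact formulation), each cell delay carries a mean and a sensitivity to one local random variable, and the uncorrelated local terms combine along a path in root-sum-square fashion:

$$ d_i = \mu_i + \sigma_i X_i, \qquad X_i \sim \mathcal{N}(0,1) $$

$$ \mu_{\text{path}} = \sum_i \mu_i, \qquad \sigma_{\text{path}} = \sqrt{\sum_i \sigma_i^2} $$

The three-sigma number reported at signoff is then $\mu_{\text{path}} + 3\sigma_{\text{path}}$, which grows more slowly with path depth than a fixed per-stage derate would suggest.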

"You handle global variations by going to corners," Keller said. "The corner based approach is well understood by engineers. At the same time, you push the tricky part of the variation - which is local variation - into statistical. You compress everything into one variable and that's your statistical OCV."

Keller said that SOCV is a "version of SSTA which is not as expensive as statistical timing, yet is almost as accurate." SOCV also has a "look and feel" that is familiar to users of STA. Users who want to see a single flat timing number report, as they would for STA, can continue to do so. SOCV can also provide a three-sigma statistical distribution for those who want to see it.

According to Keller, SOCV is much more accurate than AOCV, especially graph-based AOCV. The SOCV timing flow is very similar to the "regular" timing flow, and SOCV can be validated with SPICE Monte Carlo analysis.

In conclusion, Keller said, "SOCV brings you advantages over other approaches by doing a more accurate analysis in terms of dependency on slew and load." It's a proven technology, he said, and automated flows exist for library generation.

So, maybe full SSTA wasn't the "next big thing" in IC design after all - but it has clearly inspired some new and more accurate approaches to timing analysis.

Note: This was one of a number of presentations in the day-long Cadence Signoff Summit, which also included updates on the Cadence Voltus IC Power Integrity Solution, Tempus Timing Signoff Solution, signoff extraction, incremental metal fill, path-based timing analysis, physical verification signoff, and design for manufacturability. Presenters included Cadence R&D experts and customers. Presentations will be archived online at a later time.

Richard Goering

 

ICCAD 2013: The New Electrically Aware Design Paradigm


SAN JOSE, Calif.--Pop quiz: What percentage of verification time do design teams spend on re-iterating their layout design after checking electrical parameters?

If you said 30-40 percent, move to the head of the class.

And given the ceaseless increase in design complexity, you'd expect that percentage to balloon as we move deeper into ultra-deep submicron geometries. But a new methodology targeting layout-dependent effects has emerged to meet the challenge and improve engineering productivity.

EAD Overview

Electrically aware design (EAD) enables design teams to do electrical verification incrementally in real time as each physical layout design decision is made. It fundamentally moves verification earlier in the design flow. 

Cadence's David White (pictured, left), Group Director for Virtuoso EAD, laid out the new frontier during a recent presentation at ICCAD here.

He said:

"As we introduce these new silicon technologies, we're seeing difficulty in trying to maintain design intent through layout and into verification. At the same time, there's a need for more in-design verification methods to catch problems during design. The tools need to be faster and use unified common data models."

In the conventional flow, White said:

"A series of decisions are made with regards to placement and routing and not until you get to the end doing LVS and DRC can you do parasitic extraction and re-simulation and you might do something like EM checking for reliability."

Incremental Approach

Therefore, you can't know the electrical consequences of all your layout decisions until that's done, and it's very difficult to go back and make changes, White said.

The need at advanced nodes is acute because of interconnect and proximity issues, which can exacerbate electromigration problems.

"No longer can you just check the electromigration for a given shape," White told the audience. "You have to look at the shapes around it."

You could, for example at earlier nodes, look at three identical vias and conclude that they have the same EM limits. But at advanced nodes, the wires going into and out of those vias can impact the via EM characteristics and the rules you need to apply.

[Figure: electrically aware design flow]

A connectivity-driven environment enables the kind of in-design electrical feedback that teams need to tackle these advanced node issues and improve productivity, White said.

Said White:

"You need a way to continually maintain connectivity to ensure the layout is in synch with the schematic. The way to  do that is you create a connectivity-driven environment that can do this in real time."

The solution--the new flow (pictured, right)--needs to produce no noticeable lag, match the level of abstraction used by the designer, and be accurate and fast.

Shape-Based Approach

Doing that with conventional verification tools would require multiple tool invocations and many database translations. It also forces multiple scripts to be managed.

With a shape-based data model, in-design analysis can be done on a net-by-net basis, which is the way real layout is generated, White said.

At extraction, using an incremental approach, each change can be evaluated. The approach translates a geometric description of the shapes to their electrical equivalent. The extracted parasitics become available as each incremental change is made, White said.

You maintain speed by taking advantage of things like machine learning to "basically mimic the answers you'd get from a field solver," White said.

He added:

"You could use a field solver -- take the geometric descriptions of test patterns, run them through a field solver and train the pattern-matching extractor to mimic the input and output behavior of the field solver. You build up the models, which go into a techfile and that gets loaded with the layout, and that's one way you can create extraction methods that are fast and accurate."

Methodological Evolution

As electrically aware design flows evolve, early solutions will focus on parasitic and parameter extraction, White said. Electrical analysis solutions will build on core extraction technologies and initial analysis solutions will leverage fast resimulation and EM, he added.

Said White:

"We believe in-design solutions are the next major wave for EDA, especially in the physical implementation piece and being able to tie simulation much closer to layout creation and optimization... (In this way designers can be) focusing on correct by construction rather than waiting until the end ... and iterating back through."

Here are two resources that provide more information about electrically aware design as a methodology:

Brian Fuller

Related stories:

--Virtuoso Electrically Aware Design (EAD) - A New Approach to Custom/Analog Layout

 

SKILL for the Skilled: SKILL++ hi App Forms

One way to learn how to use the SKILL++ Object System is by extending an application which already exists. Once you understand how extension by inheritance works, it will be easier to implement SKILL++ applications from the ground up. That is, if you understand inheritance, you can better architect your application to prepare for it.

This episode of SKILL for the Skilled starts with an existing SKILL++ GUI application and extends it several times. This is done each time by declaring a subclass of an existing SKILL++ class and adding methods on existing generic functions.

Overview

The application presented here is a hi GUI which walks an instance of a designated SKILL++ class across a designated design hierarchy. If the class of the instance is the base class smpDescendDesign, each cellView in the hierarchy is silently visited and thus opened into virtual memory.

Please download the SKILL++ code, load the file smpGUI.ils, and call the function smpGUI().

The SKILL++ programmer is allowed to extend this base class to augment, circumvent, or modify some of its behavior.


In the following paragraphs we'll extend this application in several ways:

  • Add diagnostic messages for each cellView visited
  • Save the modified cellViews encountered
  • Accommodate descent of schematics
  • Descend schematic with diagnostic messages

The Sample GUI

What does this application do? If you trace the smpWalk function and press Apply on the form, you get an idea of what's happening.

 

|[4]smpWalk(#{smpDescendDesign} db:0x311f7d9a)
|[4]smpWalk (smpDescendDesign t)(#{smpDescendDesign} db:0x311f7d9a)
|[6]smpWalk(#{smpDescendDesign 0x2ef12410} db:0x311f7a9a ?depth 1 ?lineage ... )
|[6]smpWalk (smpDescendDesign t)(#{smpDescendDesign} db:0x311f7a9a ?depth 1 ?lineage ... )
|[6]smpWalk (smpDescendDesign t) --> nil
|[6]smpWalk --> nil

...[stuff omitted]...

|[4]smpWalk (smpDescendDesign t) --> (db:0x311f7bf3 db:0x311f7bf2 db:0x311f7b23 db:0x311f7b22 db:0x311f7b1d ... )
|[4]smpWalk --> (db:0x311f7bf3 db:0x311f7bf2 db:0x311f7b23 db:0x311f7b22 db:0x311f7b1d ... )

The SKILL++ code is given in smpGUI.ils and smpDescendDesign.ils. You can experiment with it by loading startup.ils and calling the function smpGUI().

Extending the Application

A well-written SKILL++ application is extended not by modifying the original source code, but rather by creating sub-classes of built-in classes and providing methods on generic functions. This sample application defines as extensibility points a class named smpDescendDesign and several generic functions: smpDescription, smpWalk, smpGetSwitchMaster, and smpFilterInstances.

If a SKILL++ application is documented well enough, you will be able to read the documentation to understand which classes can be extended and which methods need to be overwritten to accomplish the desired results. Lacking sufficient documentation, you can read the source code comments.

Printing the Design Hierarchy

We want to extend the application so that the Class Name field of the GUI contains both strings: smpDescendDesign and smpPrintDesign. When the user selects smpPrintDesign and presses OK/Apply, one additional thing should happen: as the hierarchy is being visited, messages should be printed to standard output with information about the cellViews visited, such as the following:
 0 -> ether_adc45n adc_sample_hold     layout: 
 1 -->    gpdk045  pmoscap2v     layout: |C3
 1 -->    gpdk045  pmoscap2v     layout: |C4
 1 --> ether_adc45n inv_2x_hv_small     layout: |I58
 2 --->    gpdk045     nmos2v     layout: |I58/|NM0
 2 --->    gpdk045     pmos2v     layout: |I58/|PM0

What is a Class?

Long-time readers may recall a series of SKILL for the Skilled articles some time ago, Introduction to Classes (parts 1 through 5).

A class in SKILL++ is an object which specifies a common structure for other objects (called instances). A class has at least one parent (called a direct super-class) and zero or more children (called direct sub-classes). The set of all classes forms a directed acyclic graph, with a special class called t at the top, running downward from super-class to sub-class and ending at leaf-level classes at the bottom. The list consisting of the class itself and all its parent classes, starting with the class and terminating at t, is called the list of super-classes. In SKILL++ this is a well-defined and ordered list.

The class hierarchy is important because a class inherits structure and behavior from all its super-classes.
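For example, given two hypothetical classes (created with the defclass macro described in the next section), the ordered list of super-classes walks the graph up to t:

(defclass myBase () ())           ; direct super-class defaults to standardObject
(defclass myDerived (myBase) ())

;; The list of super-classes of myDerived, in order:
;;   myDerived -> myBase -> standardObject -> t
;; myDerived inherits structure and behavior from every class in this list.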

Creating a Subclass

The SKILL built-in macro defclass is used to create a new class or a sub-class of an existing class. If you don't specify an explicit super-class when creating a new class, its parent will be the special built-in class called standardObject.

To create a class named smpPrintDesign inheriting from smpDescendDesign, use the following syntax:

(defclass smpPrintDesign (smpDescendDesign)
   ())

This defines a simple class hierarchy as shown in the graphic:


As far as SKILL++ is concerned, that's all that is necessary to create a class. However, to register the class with the sample application, the API function smpAddClass is provided. You need to call it with the name of the class. This tells the GUI form creation function to add the string "smpPrintDesign" to the cyclic field choices.

(smpAddClass 'smpPrintDesign)

You are free to create other building-block classes without registering them with smpAddClass. Those building-block classes won't appear in the GUI.

At this point you will have two Class Name choices in the GUI, but pressing OK/Apply will do the same thing regardless of which one is active.

What is a Generic Function?

While classes determine hierarchy and structure, generic functions and their methods implement behavior. A generic function declares the interface for all the methods of the same name. SKILL++ enforces parameter-list congruency across all the methods of a generic function; SKILL++ programmers must take care that the return values of the methods make sense for the particular application.

For example, the return value of smpDescription must be a string because that string will be used as the value of a multi-line-string field in an application form.
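As a minimal standalone sketch (the names here are hypothetical, not part of the sample application), here is a generic function, a base method, and a specializing method that chains to the base method with callNextMethod:

(defgeneric myDescribe (obj))

(defclass myBase () ())
(defclass myChild (myBase) ())

(defmethod myDescribe ((_obj myBase))
  "a base object")

(defmethod myDescribe ((_obj myChild))
  ;; callNextMethod invokes the next most specific method, here myBase's
  (strcat (callNextMethod) ", refined by myChild"))

;; (myDescribe (makeInstance 'myChild))
;; => "a base object, refined by myChild"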

The Sample API

This sample application pre-defines several generic functions which together with the smpDescendDesign class form the API for extending the application. A SKILL++ programmer is allowed to specialize methods of these generic functions on application-specific classes derived from smpDescendDesign.

The generic functions with their documentation are repeated here, but may also be found in the file smpGUI.ils.

 

smpDescription
(defgeneric smpDescription (obj))
Returns a string (with embedded \n characters) describing the action to be performed if OK/Apply is pressed on the GUI while a particular class name is selected.
Methods on this generic function should return a string, possibly strcat'ing the result with callNextMethod().

smpWalk
(defgeneric smpWalk (obj cv @key lineage (depth 0) @rest _others))
Descend the hierarchy of the given cellview. Methods on this generic function may perform some action for side effect on the given cellView. Primary methods should call callNextMethod if they wish the descent to go deeper, and should avoid calling callNextMethod to prune the descent at this point. The return value of smpWalk is not specified.
ARGUMENTS:
  obj     - the object being specialized
  cv      - the opened cellView which is being visited
  lineage - a list of instances representing the lineage of this cellView back to the top level; the first element of lineage is the immediate parent of the cellView, and the top-level instance is the last element of lineage
  depth   - an integer indicating the hierarchy level; 0 indicates the top-level cellView

smpFilterInstances
(defgeneric smpFilterInstances (obj cv))
Given a cellView, return the list of instances whose masters should be descended. The order of the list returned from smpFilterInstances is unimportant.

smpGetSwitchMaster
(defgeneric smpGetSwitchMaster (obj inst))
Given an instance in a cellView being visited by smpWalk, smpGetSwitchMaster returns the cellView to descend into. If smpGetSwitchMaster returns nil, then the descent is pruned at this point.

Updating the GUI Description Field

The GUI has a description field. We'd like this description to change as the user selects a different class name. We can do this by implementing a smpDescription method specialized on the new class. If you look at the smpDescription documentation above (the comment on the generic function definition), you'll see some instructions for implementing methods.

These instructions describe how to implement a method on smpDescription: all smpDescription methods are unary functions, and each method is expected to return a string.

(defmethod smpDescription ((_obj smpPrintDesign))
  (strcat (callNextMethod)
          "\nAnd print the names of each lib/cell/view encountered."))

Now if you interactively select the Class Name smpPrintDesign in the GUI, the description should change as shown here. As you see, the original text Descend the design hierarchy. has been augmented by the string concatenated by the new method.


Adding the Diagnostic Messages

We now want to add a method to the smpWalk generic function. For clues on how to do this, consult the smpWalk documentation above. We can see that each method must have two required arguments, and must accept some optional arguments, in particular ?lineage and ?depth.

(defmethod smpWalk ((_obj smpPrintDesign) cv @key lineage (depth 0))
  (printf "%2d " depth)
  (for _i 0 depth (printf "-"))
  (printf "> ")
  (printf "%10s %10s %10s: %s\n"
          cv~>libName cv~>cellName cv~>viewName
          (buildString (reverse lineage~>name) "/"))
  (callNextMethod))

The method smpWalk specializing on smpPrintDesign prints some information about the cellView being visited, then calls callNextMethod, which continues with the descent.

The result of pressing the OK/Apply button is now something like the following being printed to the CIWindow:

 

 0 -> ether_adc45n adc_sample_hold     layout: 
 1 -->    gpdk045  pmoscap2v     layout: |C3
 1 -->    gpdk045  pmoscap2v     layout: |C4
 1 --> ether_adc45n inv_2x_hv_small     layout: |I58
 2 --->    gpdk045     nmos2v     layout: |I58/|NM0
 2 --->    gpdk045     pmos2v     layout: |I58/|PM0

Limiting the Descent

The smpFilterInstances generic function can be implemented for a sub-class to affect which instances get considered for visitation. The documentation for smpFilterInstances is shown above.

The following code defines the class smpSaveDesign and extends the sample GUI, adding the capability to walk the design hierarchy, saving any unsaved cellViews.


(defclass smpSaveDesign (smpPrintDesign)
   ())

(defmethod smpDescription ((_obj smpSaveDesign))
  (strcat (callNextMethod)
          "\nAnd save any unsaved cellViews."))

(defmethod smpWalk ((_obj smpSaveDesign) cv @rest _otherArgs)
  (callNextMethod)
  (cond
    ((member cv~>mode '("r" "s"))
     nil)
    (cv~>modifiedButNotSaved
     (dbSave cv))))

(defmethod smpFilterInstances ((obj smpSaveDesign) cv)
  (foreach mapcan ih cv~>instHeaders
    (when ih~>instances
      (ncons (car ih~>instances)))))

(smpAddClass 'smpSaveDesign)

In this example, a method on smpWalk is defined which saves the cellView if needed. It doesn't try to save cellViews which are read-only or scratch cellViews, and it doesn't try to save anything unless it has modifiedButNotSaved set.

The smpWalk method specializing on smpDescendDesign calls smpFilterInstances to determine the list of instances to consider for descent. An smpFilterInstances method on smpSaveDesign is added here which DOES NOT call callNextMethod. Since we are descending the hierarchy to save unsaved cellViews, we don't need to visit the same master more than once. This method returns a list of instances in the given cellView, one per instance header which has an instance; sometimes ih~>instances is nil, and such headers are skipped.

Descending a Schematic Hierarchy

The classes shown above are great for walking a layout hierarchy: at each step smpWalk is called recursively on the master of the instance being examined. To descend a schematic hierarchy, we instead ignore the master of the instance (which is typically the symbol view) and explicitly open and descend into the schematic when it exists. To do this we define the class smpSchematicHierarchy and provide a method on smpGetSwitchMaster. See the smpGetSwitchMaster documentation above.


(defclass smpSchematicHierarchy (smpDescendDesign)
   ())

(defmethod smpGetSwitchMaster ((obj smpSchematicHierarchy) inst)
  (when (ddGetObj inst~>libName inst~>cellName "schematic")
    (dbOpenCellViewByType inst~>libName inst~>cellName "schematic")))

(defmethod smpDescription ((_obj smpSchematicHierarchy))
  (strcat (callNextMethod)
          "\nAnd descends a schematic hierarchy by looking for schematic cellviews"
          "\n of symbol instances."))

(smpAddClass 'smpSchematicHierarchy)

The important method implemented here is smpGetSwitchMaster, which tests whether the schematic exists; if so, it opens and returns it. The smpWalk method specializing on smpDescendDesign calls smpGetSwitchMaster to determine which cellView to descend into, if any.

Combining Classes with Multiple Inheritance

SKILL++ supports multiple inheritance. This means a class is in principle allowed to inherit from more than one class.

We can use multiple inheritance to create an option in the sample GUI, which both descends the schematic hierarchy (based on class smpSchematicHierarchy) and prints information as the visiting occurs (based on class smpPrintDesign). To do this we define the class smpPrintSchematic to inherit from both smpPrintDesign and smpSchematicHierarchy, making the graph of the class hierarchy look like the following.


Here is the code:

(defclass smpPrintSchematic (smpPrintDesign smpSchematicHierarchy)
   ())

(smpAddClass 'smpPrintSchematic)

Notice in this case that we don't define any new methods, because the methods defined thus far suffice. For example, the smpGetSwitchMaster method for class smpSchematicHierarchy will be used. And since every smpWalk method calls callNextMethod, the applicable smpWalk methods along the whole inheritance chain (smpPrintDesign, smpSchematicHierarchy, and smpDescendDesign) will be used. Finally, when selecting "smpPrintSchematic" on the GUI, we get a concatenated description from all the parent classes of smpPrintSchematic.
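The order in which those methods run follows the class precedence list; a sketch, assuming the CLOS-style left-to-right linearization used by SKILL++:

;; Class precedence list for smpPrintSchematic:
;;   smpPrintSchematic -> smpPrintDesign -> smpSchematicHierarchy
;;     -> smpDescendDesign -> standardObject -> t
;; So callNextMethod in smpPrintDesign's smpWalk method continues to
;; smpSchematicHierarchy's (if one existed) and then smpDescendDesign's.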


Conclusion

Download this sample SKILL++ application and load it by loading the startup.ils file. Again, you'll need to start the GUI by typing smpGUI() into the CIWindow.

In this article we looked at a SKILL++ application which extends a specially designed application form. The application form allows you to select between several different behaviors based on the name of a selected class.


The article shows several examples of class and method declarations which extend the given sample application in different ways. In particular it shows some simple examples of:

  • How to create a sub-class of an existing class
  • How to specialize a method on your class
  • How to use callNextMethod
  • How to use simple multiple inheritance

For more specific information on the SKILL++ Object System, please consult the Cadence online documentation. And as always, please post comments or questions below.

