69 results for "Luigi Capodieci"
Search Results
2. Advanced timing analysis based on post-OPC extraction of critical dimensions.
- Author
- Jie Yang 0010, Luigi Capodieci, and Dennis Sylvester
- Published
- 2005
- Full Text
- View/download PDF
3. Toward a methodology for manufacturability-driven design rule exploration.
- Author
- Luigi Capodieci, Puneet Gupta 0001, Andrew B. Kahng, Dennis Sylvester, and Jie Yang 0010
- Published
- 2004
- Full Text
- View/download PDF
4. Impact of RET on physical layouts.
- Author
- Franklin M. Schellenberg and Luigi Capodieci
- Published
- 2001
- Full Text
- View/download PDF
5. Adoption of OPC and the Impact on Design and Layout.
- Author
- Franklin M. Schellenberg, Olivier Toublan, Luigi Capodieci, and Bob Socha
- Published
- 2001
- Full Text
- View/download PDF
6. DFM in practice: hit or hype?
- Author
- Juan C. Rey, N. S. Nagaraj, Andrew B. Kahng, Fabian Klass, Rob Aitken, Cliff Hou, Luigi Capodieci, and Vivek Singh
- Published
- 2008
- Full Text
- View/download PDF
7. Layout pattern catalogs: from abstract algebra to advanced applications for physical verification and DFM
- Author
- Vito Dai and Luigi Capodieci
- Subjects
- Mathematical theory, Physical verification, Theoretical computer science, Algebraic structure, Computer science, Component (UML), Electronic design automation, Pattern matching, Physical design, Design for manufacturability
- Abstract
Automated generation of Layout Pattern Catalogs (LPC) has been enabled by full-chip pattern matching EDA tools, capable of searching and classifying both topological and dimensional variations in layout shapes, extracting massive datasets of component patterns from one (or more) given layouts. This work presents a novel theoretical framework for the systematic analysis of Layout Pattern Catalogs (LPC). Two algebraic structures (lattices and matroids) are introduced, allowing for the complete characterization of all LPC datasets. Technical results go beyond the general mathematical theory of combinatorial pattern spaces, demonstrating a direct path to novel physical design verification algorithms and DFM optimization applications.
- Published
- 2021
- Full Text
- View/download PDF
8. Design for Manufacturing and Design Process Technology Co-Optimization
- Author
- John L. Sturtevant and Luigi Capodieci
- Subjects
- Dennard scaling, Computer science, Semiconductor device fabrication, Process (engineering), Transistor, Design process, Integrated circuit, Manufacturing engineering, Design for manufacturability, Design technology
- Abstract
The semiconductor ecosystem has historically featured so-called integrated device manufacturers spanning both integrated circuit (IC) design and manufacturing functions, as well as semiconductor foundries that produce chips for fabless design companies. The unsung hero of the arduous challenge to keep IC performance on track has been and continues to be a heterogeneous set of computationally intensive computer-aided design methodologies collectively known as design technology co-optimization or design for manufacturing. Fundamental to the economics of semiconductor manufacturing and its relationship to electronic design is the concept of metal–oxide–semiconductor field-effect transistor device scaling, sometimes referred to as Dennard scaling. The goal of the fab process, of course, is to manufacture a given integrated circuit chip with as high a yield as possible, and to do so immediately upon introduction into manufacturing. Dose can serve as a proxy for a variety of different manufacturing process excursions, such as post-exposure bake time and temperature.
- Published
- 2020
- Full Text
- View/download PDF
9. Evolving physical design paradigms in the transition from 20/14 to 10nm process technology nodes.
- Author
- Luigi Capodieci
- Published
- 2014
- Full Text
- View/download PDF
10. Persistent homology analysis of complex high-dimensional layout configurations for IC physical designs
- Author
- Luigi Capodieci, Vito Dai, and Yacoub H. Kureh
- Subjects
- Persistent homology, Manufacturing process, Computer science, High dimensional, Integrated circuit design, Integrated circuit, Homology (mathematics), Topology, IC devices
- Abstract
Problems in simulation, physical defects, and electrical failures of IC devices generally occur at the boundaries of dimensional tolerances, such as the minimum width and space. However, for layout configurations with four or more critical dimensions, simple minimums are insufficient to characterize dimensional coverage. Persistent homology is a multi-resolution analysis technique which robustly summarizes dimensional coverage. We apply this technique to compare the dimensional coverage of IC design configurations on the same layer, on different layers, and on different designs, yielding results both expected and unexpected based on manufacturing process and design rule knowledge.
- Published
- 2019
- Full Text
- View/download PDF
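The 0-dimensional part of a persistent-homology analysis like the one in the abstract above (tracking when connected components of a distance filtration merge) can be sketched with a union-find pass over sorted pairwise distances. The critical-dimension vectors below are invented for illustration; the paper's actual analysis covers higher-dimensional features as well.

```python
from itertools import combinations

def persistence_0d(points):
    """0-dimensional persistence of a Vietoris-Rips filtration:
    each point is born at radius 0; a component dies when it merges
    into another. Returns the sorted death radii (one per merge)."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    def dist(a, b):  # Euclidean distance between CD vectors
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    edges = sorted((dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # a component dies at this radius
    return deaths

# Two clusters of (width, space) configurations far apart: the large
# final death radius exposes a coverage gap between them.
cds = [(10, 10), (11, 10), (10, 11), (40, 40), (41, 40)]
print(persistence_0d(cds))
```

A long-lived component (a death radius much larger than the rest) signals a region of the dimensional space with no nearby layout configurations, which is exactly the kind of coverage gap the abstract describes.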
11. Machine learning of IC layout 'styles' for Mask Data Processing verification and optimization (Conference Presentation)
- Author
- Luigi Capodieci
- Subjects
- Computer science, Learnability, Feature extraction, Machine learning, Integrated circuit layout, Design for manufacturability, Set (abstract data type), Computational learning theory, Optical proximity correction, Artificial intelligence, Physical design
- Abstract
Practical machine learning (ML) techniques are being deployed, at an accelerated pace, in an expanding set of application domains. The newsworthy examples in games strategy, image recognition, automated translation and autonomous driving are only the tip of the iceberg of a massive revolution in industrial manufacturing. Semiconductor IC design and manufacturing are also starting to see a number of ML applications, albeit of limited scope. In this work we present both a novel computational tool for ML of physical design layout styles (constructs, patterns) and a general technical framework for the implementation of ML solutions across the design-to-mask-to-silicon chain. ML applications derive their mathematical foundations from Computational Learning Theory, which establishes "learning" as a computational process. The quantitative characterization of the learnability space and its associated Vapnik-Chervonenkis (VC) dimension is therefore a prerequisite of any meaningful ML application. Various types of geometric constructs in the 3D Euclidean space of physical design layouts provide an ideal learnability domain. Specifically, the entire set of generalized Design Rules, silicon retargeting, Optical Proximity Correction (OPC), post-OPC verification (ORC) and mask manufacturing constraints (MRC) can be demonstrated to be in a learnable set. This means that, given a suitable feature extraction model, classifier systems can be built to perform data analytics and optimization with quantifiably higher performance (in terms of speed and scale) than any of the currently used engineering-based heuristics. Additionally, layout styles, particularly those generated by Place-and-Route (P&R) tools, and even manual layouts for custom blocks, are amenable to automated learning (and subsequent parametric-space optimization).
Two complete examples in the Design-to-Mask flow, for advanced technology node applications, will be used to validate the ML methodological framework and to illustrate extendibility to other areas, such as process and fab flow optimization. References: [1] L. Valiant, "A Theory of the Learnable", Communications of the ACM, 27 (1984). [2] M. Kearns et al., "On the Learnability of Boolean Formulae", ACM Symposium on Theory of Computing (1987). [3] Blumer et al., "Classifying learnable geometric concepts with the Vapnik-Chervonenkis dimension", ACM Symposium on Theory of Computing (1986). [4] V. Dai et al., "Optimization of complex high-dimensional layout configurations for IC physical designs using graph search, data analytics, and machine learning", Proc. SPIE 10148, DPTCO XI (April 2)
- Published
- 2017
- Full Text
- View/download PDF
12. Front Matter: Volume 10148
- Author
- Luigi Capodieci and Jason P. Cain
- Subjects
- Volume (thermodynamics), Mechanics, Geology, Front (military)
- Published
- 2017
- Full Text
- View/download PDF
13. Data Analytics and Machine Learning for Design-Process-Yield Optimization in Electronic Design Automation and IC semiconductor manufacturing
- Author
- Luigi Capodieci
- Subjects
- Computer science, Machine learning, Pipeline (software), Design for manufacturability, Analytics, Data analysis, Design process, Algorithm design, Electronic design automation, Artificial intelligence, Physical design
- Abstract
In response to the current challenges of end-of-Moore scaling, a systematic analysis of the data and information flows in the Design-to-Manufacturing pipeline highlights opportunities for the introduction of (big) data analytics and machine learning solutions. In this paper we review the ecosystem components and describe the fundamental data flows in the IC Design-to-Manufacturing chain, highlighting both the well-established and functioning sub-systems and the critical bottlenecks. A quantitative definition of physical design space coverage is proposed as the unifying abstraction available for all components of the Design-to-Manufacturing flow, allowing for the construction of a computational framework where data analytics and machine learning methodologies and tools can be successfully applied. The juxtaposition of Design-Technology Co-Optimization (DTCO) with the novel paradigm of DFM-as-Search, and their necessary integration in the DFM computational toolkit, clearly exemplifies how all the advanced IC nodes (14, 10, 7 and 5nm) require the adoption of a new class of correlation extraction algorithms for heterogeneous data sets.
- Published
- 2017
- Full Text
- View/download PDF
14. Data analytics and machine learning for continued semiconductor scaling
- Author
- Luigi Capodieci
- Subjects
- Computer science, Data analysis, Artificial intelligence, Machine learning, Data science, Scaling
- Published
- 2016
- Full Text
- View/download PDF
15. Front Matter: Volume 9781
- Author
- Luigi Capodieci and Jason P. Cain
- Subjects
- Engineering, Design process, Process engineering, Design for manufacturability
- Published
- 2016
- Full Text
- View/download PDF
16. Design layout analysis and DFM optimization using topological patterns
- Author
- Karthik Krishnamoorthy, Jason Sweis, Vito Dai, Edward Kah Ching Teoh, Luigi Capodieci, Jeff J. Xu, and Ya-Chieh Lai
- Subjects
- Metal, Cover (topology), Computer science, Product (mathematics), Process window, Node (circuits), Topology, Design for manufacturability
- Abstract
During the yield ramp of semiconductor manufacturing, data is gathered on specific design-related process window limiters, or yield detractors, through a combination of test structures, failure analysis, and model-based printability simulations. Case by case, this data is translated into design for manufacturability (DFM) checks to restrict design usage of problematic constructs. This case-by-case approach is inherently reactive: DFM solutions are created in response to known manufacturing marginalities as they are identified. In this paper, we propose an alternative, yet complementary approach. Using design-only topological pattern analysis, all possible layout constructs of a particular type appearing in a design are categorized. For example, all possible ways a via forms a connection with the metal above it may be categorized. The frequency of occurrence of each category indicates the importance of that category for yield. Categories may be split into sub-categories to align to specific manufacturing defect mechanisms. Frequencies of categories can be compared from product to product, and unexpectedly high frequencies can be highlighted for further monitoring. Each category can be weighted for yield impact, once manufacturing data is available. This methodology is demonstrated on representative layout designs from the 28 nm node. We fully analyze all possible categories and sub-categories of via enclosure such that 100% of all vias are covered. The frequency of specific categories is compared across multiple designs. The 10 most frequent via enclosure categories cover ≥90% of all the vias in all designs. KL divergence is used to compare the frequency distribution of categories between products. Outlier categories with unexpectedly high frequency are found in some designs, indicating the need to monitor such categories for potential impact on yield.
- Published
- 2015
- Full Text
- View/download PDF
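The KL-divergence comparison of category frequency distributions mentioned in the abstract above can be sketched in a few lines. The category names and counts below are invented for illustration; a small floor keeps the divergence finite when a category appears in only one product.

```python
from math import log

def kl_divergence(counts_p, counts_q, eps=1e-9):
    """D_KL(P||Q) in bits between two category-count dicts.
    Categories missing from one product get a small floor `eps`."""
    cats = set(counts_p) | set(counts_q)
    tp = sum(counts_p.values())
    tq = sum(counts_q.values())
    d = 0.0
    for c in cats:
        p = counts_p.get(c, 0) / tp or eps
        q = counts_q.get(c, 0) / tq or eps
        d += p * log(p / q, 2)
    return d

# Hypothetical via-enclosure category counts for two designs
design_a = {"sym_encl": 900, "asym_x": 80, "asym_y": 20}
design_b = {"sym_encl": 850, "asym_x": 60, "asym_y": 20, "corner": 70}
print(kl_divergence(design_a, design_b))  # > 0; 0 only for identical distributions
```

A large divergence (or a category with outsized probability in one product only, like the hypothetical `corner` above) is the kind of outlier the abstract flags for monitoring.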
17. A methodology to optimize design pattern context size for higher sensitivity to hotspot detection using pattern association tree (PAT)
- Author
- Shikha Somani, Luigi Capodieci, Piyush Pathak, Sriram Madhavan, and Piyush Verma
- Subjects
- Very-large-scale integration, Computer science, Design pattern, Hotspot (geology), Data mining, Directed graph, Physical design, Pattern search, Algorithm
- Abstract
Pattern-based design rule checks have emerged as an alternative to the traditional rule-based design rule checks in the VLSI verification flow [1]. Typically, the design-process weak points, also referred to as design hotspots, are classified into patterns of fixed size. The size of the pattern defines the radius of influence for the process. These fixed-size patterns are used to search for and detect process weak points in new designs without running computationally expensive process simulations. However, both the complexity of the pattern and different kinds of physical processes affect the radii of influence. Therefore, there is a need to determine the optimal pattern radius (size) for efficient hotspot detection. The methodology described here uses a combination of pattern classification and pattern search techniques to create a directed graph, referred to as the Pattern Association Tree (PAT). The pattern association tree is then filtered based on the relevance, sensitivity and context area of each pattern node. The critical patterns are identified by traversing the tree and ranking the patterns. This method has plausible applications in various areas such as process characterization, physical design verification and physical design optimization. Our initial experiments in the area of physical design verification confirm that a pattern deck with the radius optimized for each pattern is significantly more accurate at predicting design hotspots when compared to a conventional deck of fixed-size patterns.
- Published
- 2015
- Full Text
- View/download PDF
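One plausible reading of the filter-and-rank step in the abstract above is: walk the tree from small-radius patterns toward larger-radius expansions, and keep a larger context only when it buys a meaningful gain in hotspot sensitivity. The node names, sensitivity values, and gain threshold below are all hypothetical.

```python
# Hypothetical Pattern Association Tree: each node is a pattern at a
# given context radius; edges point to its expansions at the next
# larger radius. Sensitivities are illustrative, not measured data.
tree = {
    "p_r50":   {"sensitivity": 0.40, "children": ["p_r100a", "p_r100b"]},
    "p_r100a": {"sensitivity": 0.85, "children": ["p_r150"]},
    "p_r100b": {"sensitivity": 0.42, "children": []},
    "p_r150":  {"sensitivity": 0.86, "children": []},
}

def critical_patterns(tree, root, min_gain=0.05):
    """Keep a larger-radius pattern only if it improves hotspot
    sensitivity over its kept ancestor by at least `min_gain`;
    otherwise the smaller (cheaper) context radius suffices."""
    keep, stack = [], [(root, 0.0)]
    while stack:
        name, parent_sens = stack.pop()
        node = tree[name]
        if node["sensitivity"] - parent_sens >= min_gain:
            keep.append(name)              # this radius is worth it
            parent_sens = node["sensitivity"]
        for child in node["children"]:
            stack.append((child, parent_sens))
    return sorted(keep, key=lambda n: -tree[n]["sensitivity"])

print(critical_patterns(tree, "p_r50"))  # ['p_r100a', 'p_r50']
```

The result is a deck in which each kept pattern carries its own optimized radius, matching the paper's finding that per-pattern radii beat a fixed-size deck.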
18. A pattern-based methodology for optimizing stitches in double-patterning technology
- Author
- Sriram Madhavan, Lynn T.-N. Wang, Luigi Capodieci, and Vito Dai
- Subjects
- Matching (graph theory), Computer science, Multiple patterning, Topology (electrical circuits), Pattern matching, Layer (object-oriented design), Algorithm
- Abstract
A pattern-based methodology for optimizing stitches is developed based on identifying stitch topologies and replacing them with pre-characterized fixing solutions in decomposed layouts. A topology-based library of stitches with predetermined fixing solutions is built. A pattern-based engine searches for matching topologies in the decomposed layouts. When a match is found, the engine opportunistically applies the predetermined fixing solution: only a design rule check error-free replacement is preserved. The methodology is demonstrated on a 20nm layout design that contains over 67 million first-metal-layer stitches. Results show that a small library containing 3 stitch topologies improves the stitch area regularity by 4x.
- Published
- 2015
- Full Text
- View/download PDF
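The match-then-conditionally-replace flow in the abstract above can be sketched as a library lookup gated by a DRC check. All topology names, fix names, and the DRC stand-in below are illustrative, not the paper's actual library.

```python
# Hypothetical library: stitch topology signature -> pre-characterized fix
STITCH_LIBRARY = {
    "T_end_to_end": "fix_extend_overlap",
    "T_L_corner":   "fix_shift_stitch",
    "T_jog":        "fix_merge_segments",
}

def fix_stitches(stitches, drc_clean):
    """Replace each matched stitch with its library fix, keeping the
    replacement only if `drc_clean(fix)` reports no rule violations."""
    fixed, skipped = [], []
    for s in stitches:
        fix = STITCH_LIBRARY.get(s["topology"])
        if fix and drc_clean(fix):
            fixed.append((s["id"], fix))
        else:
            skipped.append(s["id"])  # unmatched topology or DRC-dirty fix
    return fixed, skipped

stitches = [
    {"id": 1, "topology": "T_end_to_end"},
    {"id": 2, "topology": "T_unknown"},
    {"id": 3, "topology": "T_L_corner"},
]
# Pretend every library fix except the corner shift passes DRC.
fixed, skipped = fix_stitches(stitches, lambda f: f != "fix_shift_stitch")
print(fixed, skipped)  # [(1, 'fix_extend_overlap')] [2, 3]
```

The "opportunistic" character of the method lives in the gate: an unmatched or DRC-dirty stitch is simply left as decomposed, never force-fixed.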
19. VLSI physical design analyzer: A profiling and data mining tool
- Author
- Fadi Batarseh, Sriram Madhavan, Piyush Verma, Robert C. Pack, Luigi Capodieci, and Shikha Somani
- Subjects
- Very-large-scale integration, Spectrum analyzer, Computer science, VLSI physical design, Profiling (information science), Wafer, Data mining, Physical design
- Abstract
Traditional physical design verification tools employ a deck of known design rules, each of which has a pre-defined pass/fail criteria associated with it. While passing a design rule deck is a necessary condition for a VLSI design to be manufacturable, it is not sufficient. Other physical design profiling decks that attempt to obtain statistical information about the various critical dimensions in the VLSI design lack a systematic methodology for rule enumeration. These decks are often inadequate, unable to extract all the interlayer and intralayer dimensions in a design that have a correlation with process yield. The Physical Design Analyzer is a comprehensive design analysis tool built with the objective of exhaustively exploring design-process correlations to increase the wafer yield.
- Published
- 2015
- Full Text
- View/download PDF
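The statistical profiling the abstract above contrasts with pass/fail DRC can be illustrated in one dimension: rather than checking widths and spaces against a limit, collect their full distributions. This is a drastic simplification of 2D interlayer/intralayer profiling; the segment coordinates are invented.

```python
def profile_dimensions(intervals):
    """Profile widths and spaces of non-overlapping shapes along one
    dimension, returning summary statistics instead of pass/fail."""
    iv = sorted(intervals)
    widths = [b - a for a, b in iv]
    spaces = [iv[i + 1][0] - iv[i][1] for i in range(len(iv) - 1)]

    def stats(xs):
        return {"min": min(xs), "max": max(xs), "mean": sum(xs) / len(xs)}

    return {"width": stats(widths), "space": stats(spaces)}

# Metal segments in nm along a routing track (illustrative numbers)
print(profile_dimensions([(0, 32), (78, 110), (142, 180)]))
```

Correlating such distributions with wafer yield, rather than thresholding them, is the gap the Physical Design Analyzer is described as filling.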
20. Front Matter: Volume 9053
- Author
- Luigi Capodieci and John L. Sturtevant
- Subjects
- Volume (thermodynamics), Mechanics, Geology, Front (military)
- Published
- 2014
- Full Text
- View/download PDF
21. Decomposition-aware layout optimization for 20/14nm standard cells
- Author
- Lynn T.-N. Wang, Sriram Madhavan, Luigi Capodieci, Eric Chiu, and Shobhit Malik
- Subjects
- Image stitching, Standard cell, Computer science, Decomposition (computer science), Enclosure, Multiple patterning, Design for manufacturability, Computational science
- Abstract
Decomposition-aware layout design improvements for 8, 9, 11, and 13-track 20/14nm standard cells are presented. Using a decomposition-aware scoring methodology that quantifies the manufacturability of layouts, the Double Patterning Technology (DPT)-compliant layouts are optimized for DPT-specific metrics that include: the density difference between the two decomposition mask layers, the enclosure of stitching areas, the density of stitches, and the design regularity of stitching areas. For a 9-track standard cell, eliminating the stitches from the layout design improved the composite score from 0.53 to 0.70.
- Published
- 2014
- Full Text
- View/download PDF
22. Systematic physical verification with topological patterns
- Author
- Frank E. Gennari, Vito Dai, Ya-Chieh Lai, Edward Kah Ching Teoh, and Luigi Capodieci
- Subjects
- Design rule checking, Physical verification, Theoretical computer science, Computer science, Constraint (computer-aided design), Topology, Automation, Design for manufacturability, Node (circuits), Electronic design automation, Data mining, Pattern matching
- Abstract
Design rule checks (DRC) are the industry workhorse for constraining design to ensure both physical and electrical manufacturability. Where DRCs fail to fully capture the concept of manufacturability, pattern-based approaches, such as DRC Plus, fill the gap using a library of patterns to capture and identify problematic 2D configurations. Today, both a DRC deck and a pattern matching deck may be found in advanced node process development kits. Major electronic design automation (EDA) vendors offer both DRC and pattern matching solutions for physical verification; in fact, both are frequently integrated into the same physical verification tool. In physical verification, DRCs represent dimensional constraints relating directly to process limitations. On the other hand, patterns represent the 2D placement of surrounding geometries that can introduce systematic process effects. It is possible to combine both DRCs and patterns in a single topological pattern representation. A topological pattern has two separate components: a bitmap representing the placement and alignment of polygon edges, and a vector of dimensional constraints. The topological pattern is unique and unambiguous; there is no code to write, and no two different ways to represent the same physical structure. Furthermore, markers aligned to the pattern can be generated to designate specific layout optimizations for improving manufacturability. In this paper, we describe how to do systematic physical verification with just topological patterns. Common mappings between traditional design rules and topological pattern rules are presented. 
We describe techniques that can be used during the development of a topological rule deck, such as: taking constraints defined on one rule and systematically projecting them onto other related rules; systematically separating a single rule into two or more rules, when the single rule is not sufficient to capture manufacturability constraints; creating test layouts that represent the corners of what is allowed, or not allowed, by a rule; improving manufacturability by systematically changing certain patterns; and quantifying how a design uses design rules. Performance of topological pattern search is demonstrated to be production full-chip capable.
- Published
- 2014
- Full Text
- View/download PDF
23. Systematic data mining using a pattern database to accelerate yield ramp
- Author
- Luigi Capodieci, Frank E. Gennari, Edward Kah Ching Teoh, Vito Dai, and Ya-Chieh Lai
- Subjects
- Physical verification, Database, Relational database, Computer science, Design flow, Pattern matching, Data mining, Cluster analysis, Design for manufacturability
- Abstract
Pattern-based approaches to physical verification, such as DRC Plus, which use a library of patterns to identify problematic 2D configurations, have been proven to be effective in capturing the concept of manufacturability where traditional DRC fails. As the industry moves to advanced technology nodes, the manufacturing process window tightens and the number of patterns continues to rapidly increase. This increase in patterns brings about challenges in identifying, organizing, and carrying forward the learning of each pattern from test chip designs to first product and then to multiple product variants. This learning includes results from printability simulation, defect scans and physical failure analysis, which are important for accelerating yield ramp. Using pattern classification technology and a relational database, GLOBALFOUNDRIES has constructed a pattern database (PDB) of more than one million potential yield detractor patterns. In PDB, 2D geometries are clustered based on similarity criteria, such as radius and edge tolerance. Each cluster is assigned a representative pattern and a unique identifier (ID). This ID is then used as a persistent reference for linking together information such as the failure mechanism of the patterns, the process condition where the pattern is likely to fail and the number of occurrences of the pattern in a design. Patterns and their associated information are used to populate DRC Plus pattern matching libraries for design-for-manufacturing (DFM) insertion into the design flow for auto-fixing and physical verification. Patterns are used in a production-ready yield learning methodology to identify and score critical hotspot patterns. Patterns are also used to select sites for process monitoring in the fab. In this paper, we describe the design of PDB, the methodology for identifying and analyzing patterns across multiple design and technology cycles, and the use of PDB to accelerate manufacturing process learning. 
One such analysis tracks the life cycle of a pattern from the first time it appears as a potential yield detractor until it is either fixed in the manufacturing process or stops appearing in design due to DFM techniques such as DRC Plus. Another such analysis systematically aggregates the results of a pattern to highlight potential yield detractors for further manufacturing process improvement.
- Published
- 2014
- Full Text
- View/download PDF
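The persistent-ID idea at the heart of the pattern database described above (cluster similar geometries, then key all downstream learning off one ID) can be sketched by canonicalizing a pattern before hashing it. PDB's real similarity criteria (radius, edge tolerance) are richer than the grid-snap used here, and all coordinates are invented.

```python
import hashlib

def canonical_id(pattern, grid=2):
    """Assign a persistent cluster ID to a 2D pattern (a set of (x, y)
    points): snap to a `grid`-nm tolerance, normalize the origin, and
    hash the canonical form. Patterns within tolerance share an ID that
    can key simulation results, defect scans, and occurrence counts."""
    snapped = sorted((round(x / grid), round(y / grid)) for x, y in pattern)
    x0, y0 = snapped[0]
    normal = tuple((x - x0, y - y0) for x, y in snapped)
    return hashlib.sha1(repr(normal).encode()).hexdigest()[:12]

a = [(100, 200), (104, 200), (100, 210)]
b = [(501, 300), (505, 301), (501, 310)]  # same shape, shifted, 1nm jitter
print(canonical_id(a) == canonical_id(b))  # True: one cluster, one ID
```

Because the ID is a pure function of the canonical geometry, the same pattern found in a test chip, a first product, and a later variant maps to the same database row, which is what lets the learning carry forward.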
24. Design-enabled manufacturing enablement using manufacturing design request tracker (MDRT)
- Author
- Luigi Capodieci, Vito Dai, Sarah McGowan, Rao Desineni, Kok Peng Chua, Sky Yeo, Carl P. Babcock, Akif Sultan, Eswar Ramanathan, Colin Hui, Robert Madge, Kristina Hoeppner, Jens Hassmann, and Edward Kah Ching Teoh
- Subjects
- Engineering, Product lifecycle, Product life-cycle management, Yield (finance), Suite, Systems engineering, Integrated circuit layout, Manufacturing engineering, Design for manufacturability
- Abstract
Shrinking dimensions at advanced technology nodes pose yield challenges that require continuous enhancement of yield methodologies to quickly detect and fix marginal layout features. In this paper, we present a practical approach to enhance the DFM and design-enabled manufacturing (DEM) capabilities suite provided by GLOBALFOUNDRIES for 28nm technology and beyond. The MDRT system has been implemented in the Product Lifecycle Management (PLM) system within GLOBALFOUNDRIES.
- Published
- 2013
- Full Text
- View/download PDF
25. Pattern matching for identifying and resolving non-decomposition-friendly designs for double patterning technology (DPT)
- Author
- Luigi Capodieci, Vito Dai, and Lynn T.-N. Wang
- Subjects
- Computer science, Design flow, Multiple patterning, Decomposition (computer science), Node (circuits), Pattern matching, Layer (object-oriented design), Lithography, Algorithm
- Abstract
A pattern matching methodology that identifies non-decomposition-friendly designs and provides localized guidance for layout fixing is presented for double patterning lithography. This methodology uses a library of patterns in which each pattern has been pre-characterized as impossible to decompose and annotated with a design rule for guiding the layout fixes. A pattern matching engine identifies these problematic patterns in a design, which allows layout designers to anticipate and prevent decomposition errors prior to layout decomposition. The methodology has been demonstrated on a 180 um² layout migrated from the previous 28nm technology node for the metal 1 layer. Using a small library of just 18 patterns, the pattern matching engine identified 119 out of 400 decomposition errors, a coverage of 29.8%. Keywords: pattern matching, odd-cycles, coloring conflicts, double patterning, decomposition, design flow, design rule, DRC Plus, automated decomposition algorithm, DPT
- Published
- 2013
- Full Text
- View/download PDF
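The coverage figure quoted in the abstract above is a simple ratio of library-matched error sites to all known error sites. The stand-in identifiers below are invented; only the 119-of-400 arithmetic comes from the abstract.

```python
def match_coverage(errors, library):
    """Fraction of known decomposition-error sites whose local
    topology appears in the pattern library (the coverage metric)."""
    hits = [e for e in errors if e in library]
    return len(hits), len(errors), 100.0 * len(hits) / len(errors)

# Illustrative stand-in: 400 error sites, 119 with library topologies
errors = [f"odd_cycle_{i}" for i in range(400)]
library = {f"odd_cycle_{i}" for i in range(119)}
hits, total, pct = match_coverage(errors, library)
print(hits, total, round(pct, 1))  # 119 400 29.8
```

With only 18 patterns in the library, roughly 30% coverage is the expected trade-off: each additional pre-characterized pattern raises coverage at the cost of library maintenance.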
26. A scoring methodology for quantitatively evaluating the quality of double patterning technology-compliant layouts
- Author
- Shobhit Malik, Lynn T.-N. Wang, Sriram Madhavan, Piyush Pathak, and Luigi Capodieci
- Subjects
- Computer science, Process variation, Set (abstract data type), Multiple patterning, Quality (business), Instrumentation (computer programming), Data mining, Sensitivity (control systems), Scale (map), Simulation
- Abstract
A Double Patterning Technology (DPT)-aware scoring methodology that systematically quantifies the quality of DPT-compliant layout designs is described. The methodology evaluates layouts based on a set of DPT-specific metrics that characterizes layout-induced process variation. Specific metrics include: the spacing variability between two adjacent oppositely-colored features, the density differences between the two exposure masks, and the stitching area's sensitivity to mask misalignment. These metrics are abstracted to a scoring scale from 0 to 1 such that 1 is the optimum. This methodology provides guidance for opportunistic layout modifications so that DPT manufacturability-related issues are mitigated earlier in design. Results show that by using this methodology, a DPT-compliant layout improved from a composite score of 0.66 to 0.78 by merely changing the decomposition solution so that the density distribution between the two exposure masks is relatively equal. © (2012) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
- Published
- 2012
- Full Text
- View/download PDF
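A composite score over per-metric values on a 0-to-1 scale, as described in the abstract above, can be sketched as a weighted mean. The metric names, the per-metric values, and the equal-weight assumption below are all invented; they are merely chosen so the before/after arithmetic lands on the 0.66 and 0.78 figures the abstract reports, not taken from the paper's calibrated model.

```python
def composite_score(metrics, weights=None):
    """Combine per-metric DPT scores (each already on a 0-to-1 scale,
    1 = optimum) into one weighted composite score."""
    weights = weights or {m: 1.0 for m in metrics}
    total = sum(weights.values())
    return sum(metrics[m] * weights[m] for m in metrics) / total

# Hypothetical per-metric scores before and after rebalancing the
# decomposition so the two exposure masks have similar density.
before = {"spacing_variability": 0.70, "mask_density_balance": 0.45,
          "stitch_misalign_sensitivity": 0.83}
after = dict(before, mask_density_balance=0.81)
print(round(composite_score(before), 2), round(composite_score(after), 2))
```

The sketch shows the intended use-model: a single decomposition change moves one metric, and the composite makes the improvement visible as one number.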
27. Framework for identifying recommended rules and DFM scoring model to improve manufacturability of sub-20nm layout design
- Author
- Piyush Pathak, Lynn T.-N. Wang, Luigi Capodieci, Sriram Madhavan, and Shobhit Malik
- Subjects
- Design rule checking, Page layout, Process (engineering), Computer science, Context (language use), Instrumentation (computer programming), Reliability engineering, Design for manufacturability
- Abstract
This paper addresses the framework for building critical recommended rules and a methodology for devising scoring models using simulation or silicon data. Recommended rules need to be applied to critical layout configurations (edge- or polygon-based geometric relations), which can cause yield issues depending on layout context and process variability. Determining critical recommended rules is the first step of this framework. Based on process specifications and design rule calculations, recommended rules are characterized by evaluating the manufacturability response to improvements in a layout-dependent parameter. This study is applied to critical 20nm recommended rules. In order to enable the scoring of layouts, this paper also discusses a CAD framework involved in supporting use-models for improving the DFM-compliance of a physical design.
- Published
- 2012
- Full Text
- View/download PDF
28. Smart double-cut via insertion flow with dynamic design-rules compliance for fast new technology adoption
- Author
- Sriram Madhavan, Shobhit Malik, Ahmad Abdulghany, Piyush Pathak, Luigi Capodieci, and Rami Fathy
- Subjects
- Physical verification, Computer science, Embedded system, Flow (psychology), Dynamic design
- Abstract
As IC technologies shrink and via defects remain the same size, the probability of via defects increases. Redundant via insertion is an effective method to reduce yield loss related to via failures, but a large number of extremely complex design rules make efficient automatic via insertion difficult. This paper introduces an automatic redundant via insertion flow which is capable of adopting new technologies and complex design rules extremely quickly. Runtime and efficiency are optimized through a smart insertion scheduling technique. Our experiments show that it efficiently improves redundant via percentage, making designs more robust against via defects.
- Published
- 2012
- Full Text
- View/download PDF
29. Automated yield enhancements implementation on full 28nm chip: challenges and statistics
- Author
- Luigi Capodieci, Ramy Fathy, Piyush Pathak, Shobhit Malik, Sriram Madhavan, and Ahmad Abdulghany
- Subjects
- Measure (data warehouse), Resource (project management), Computer science, Yield (finance), Statistics, Mask data preparation, Chip
- Abstract
This paper shares the details of the yield enhancements performed at full-chip level on a 28nm design, the complexity involved in implementing such a flow, and the verification challenges involved, e.g., at mask data preparation. We discuss and present the algorithm used to measure the efficiency of the tool, explaining why we chose this algorithm while noting some possible alternatives. We also share detailed statistics on run time, machine resources, data size, polygon counts, etc., and present techniques we used for efficient flow management on large, complex 28nm chips.
- Published
- 2012
- Full Text
- View/download PDF
30. Design-of-experiments based design rule optimization
- Author
-
Puneet Gupta, Swamy Muddu, Coby Zelnik, Abde Ali Kagalwalla, and Luigi Capodieci
- Subjects
Standard cell ,Reduction (complexity) ,Computer engineering ,Process (engineering) ,Computer science ,Design of experiments ,Node (circuits) ,Instrumentation (computer programming) ,Circuit minimization for Boolean functions ,Hardware_CONTROLSTRUCTURESANDMICROPROGRAMMING ,Simulation ,Abstraction (linguistics) - Abstract
Design rules (DRs) are the primary abstraction between design and manufacturing. The optimization of DRs to achieve the correct tradeoff between scaling and yield is a key step in developing a new technology node. In this work we propose a design-of-experiments based framework to optimize DRs, in which layouts are generated for different DR values using compaction. By analyzing the impact of DRs on layout scaling, we propose a novel Boolean minimization based approach to reduce the number of layouts that need to be generated through compaction. This methodology provides an automated way to analyze several DRs simultaneously and discover area-critical DRs and DR interactions. We apply it to middle-of-line (MOL) and Metal1 layer design rules for a commercial 20nm process. Our methodology results in a 10-105x reduction in the number of layouts that need to be generated through compaction, and demonstrates the impact of MOL and Metal1 DRs on the area of some standard cell layouts.
- Published
- 2012
- Full Text
- View/download PDF
31. Pattern matching for double patterning technology-compliant physical design flows
- Author
-
Vito Dai, Luigi Capodieci, and Lynn T.-N. Wang
- Subjects
Design rule checking ,Computer science ,Design flow ,Decomposition (computer science) ,Multiple patterning ,Sample (statistics) ,Pattern matching ,Physical design ,Algorithm ,Design for manufacturability - Abstract
A pattern-based methodology for guiding the generation of DPT-compliant layouts using a foundry-characterized library of difficult-to-decompose patterns with known corresponding solutions is presented. A pattern matching engine scans the drawn layout for patterns from the pattern library. If a match is found, one or more DPT-compliant solutions are provided for guiding the layout modifications. This methodology is demonstrated on a sample 1.8 mm2 layout migrated from a previous technology. A small library of 12 patterns is captured, which accounts for 59 out of the 194 DPT-compliance check violations examined. In addition, the methodology can be used to recommend specific changes to the original drawn design to improve manufacturability. This methodology is compatible with any physical design flow that uses automated decomposition algorithms. Keywords: pattern matching, double patterning, decomposition, design flow, design rule check, DRC Plus, automated decomposition algorithm, DPT
- Published
- 2012
- Full Text
- View/download PDF
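The library-lookup step described in the abstract above can be sketched with a translation-invariant pattern signature; the data model (rectangle lists keyed in a dictionary) and the example entry are assumptions for illustration, not the paper's matching engine.

```python
# Illustrative sketch of matching clipped layout patterns against a
# foundry library of known difficult-to-decompose patterns. A pattern is
# a set of rectangles normalized to the clip's lower-left corner, so
# translated occurrences hash identically.

def canonical(rects):
    """Normalize (x1, y1, x2, y2) rectangles so the pattern's bounding-box
    corner sits at the origin; translated copies then compare equal."""
    x0 = min(r[0] for r in rects)
    y0 = min(r[1] for r in rects)
    return frozenset((r[0] - x0, r[1] - y0, r[2] - x0, r[3] - y0)
                     for r in rects)

class PatternLibrary:
    """Maps known problematic patterns to DPT-compliant fix suggestions."""
    def __init__(self):
        self._fixes = {}

    def add(self, rects, fix):
        self._fixes[canonical(rects)] = fix

    def lookup(self, rects):
        """Return the suggested fix for a clipped pattern, or None."""
        return self._fixes.get(canonical(rects))

lib = PatternLibrary()
# Hypothetical library entry: a U-shaped pattern with a known solution.
u_shape = [(0, 0, 10, 40), (30, 0, 40, 40), (0, 0, 40, 10)]
lib.add(u_shape, "widen the notch so the two legs decompose onto masks A/B")

# A translated occurrence of the same pattern still matches.
shifted = [(100, 200, 110, 240), (130, 200, 140, 240), (100, 200, 140, 210)]
hint = lib.lookup(shifted)
```

A real engine would also canonicalize rotations/mirrors and tolerate small dimensional variations; exact-hash lookup is the simplest instance of the idea.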
32. Yield enhancement flow for analog and full custom designs reliability-rules automatic application
- Author
-
Rami Fathy, Shobhit Malik, Luigi Capodieci, and Ahmad Abdulghany
- Subjects
Engineering ,Reliability (semiconductor) ,Input design ,Standardization ,Flow (mathematics) ,Application-specific integrated circuit ,business.industry ,Embedded system ,Yield (finance) ,Full custom ,business ,Turnaround time ,Computer hardware - Abstract
As process variations increase at a rapid rate with shrinking technology nodes, the performance of fabricated analog and full-custom chips fluctuates markedly. This paper describes an effective automatic flow for applying reliability rules to analog and full-custom ASIC designs without introducing any new design rule check (DRC) violations into the input design. This yield enhancement flow has shown good improvements on the test designs used and ran in reasonable time. Based on the standardization methodology used, additional foundry yield-enhancement-related recommendations can also be developed as seamless extensions to this flow, providing easy and quick new-technology adoption and short turnaround time (TAT).
- Published
- 2011
- Full Text
- View/download PDF
33. Timing variability analysis for layout-dependent-effects in 28nm custom and standard cell-based designs
- Author
-
Sriram Madhavan, Philippe Hurat, Piyush Pathak, Rasit O. Topaloglu, Jac Condella, Ramez Nachman, and Luigi Capodieci
- Subjects
Standard cell ,Stress (mechanics) ,Computer science ,law ,Real-time computing ,Transistor ,Hardware_INTEGRATEDCIRCUITS ,Electronic engineering ,Process (computing) ,Integrated circuit design ,Lithography ,law.invention - Abstract
We identify the most recent sources of transistor layout dependent effects (LDE), such as stress, lithography, and well proximity effects (WPE), and outline modeling and analysis methods for 28 nm. These methods apply to custom layout, standard cell designs, and context-aware post-route analysis. We show how IC design teams can use a model-based approach to quantify and analyze variability induced by LDE. This reduces the need for guard-bands that negate the performance advantages that stress brings to advanced process technologies.
- Published
- 2011
- Full Text
- View/download PDF
34. Considerations in source-mask optimization for logic applications
- Author
-
Yi Zou, Kenji Yoshimoto, Luigi Capodieci, Harry J. Levinson, Yuansheng Ma, Yunfei Deng, Jongwook Kye, and Cyrus E. Tabery
- Subjects
Resist ,law ,Computer science ,Computational lithography ,Multiple patterning ,Process (computing) ,Electronic engineering ,Nanotechnology ,Photolithography ,Lithography process ,Random logic ,Lithography ,law.invention - Abstract
In the low-k1 regime, optical lithography can be extended further toward its limits by advanced computational lithography technologies such as Source-Mask Optimization (SMO), without applying costly double patterning techniques. By co-optimizing the source and mask together and exploiting new capabilities in advanced source and mask manufacturing, SMO promises to deliver the desired scaling with reasonable lithography performance. This paper analyzes the important considerations when applying the SMO approach to global source optimization in random logic applications. SMO needs to use realistic, practical cost functions and to model the lithography process with accurate process data. Through the concept of the source point impact factor (SPIF), this study shows how optimization outputs depend on SMO inputs, such as the limiting patterns in the optimization. The paper also discusses the modeling requirements of lithography processes in SMO and shows how resist blur affects optimization solutions. Using a logic test case as an example, the optimized pixelated source is compared in verification with the non-optimized source and other optimized parametric sources. These results demonstrate the importance of these considerations in achieving the best possible SMO results, which can then be applied successfully to the targeted lithography process.
- Published
- 2010
- Full Text
- View/download PDF
35. Evaluation of lithographic benefits of using ILT techniques for 22nm-node
- Author
-
Linyong Pang, Bob Gleason, Cyrus E. Tabery, Anthony Aadamov, Yunfei Deng, Thuc Dam, Ki-Ho Baik, Jongwook Kye, Luigi Capodieci, and Yi Zou
- Subjects
business.industry ,Computational lithography ,Computer science ,law.invention ,Process variation ,Optics ,law ,Logic gate ,Hardware_INTEGRATEDCIRCUITS ,Electronic engineering ,Node (circuits) ,Photolithography ,business ,Random logic ,Lithography ,Immersion lithography - Abstract
As the increasing complexity of design and continued scaling push lithographic imaging to its k1 limit, lithographers have been developing computational lithography solutions to extend 193nm immersion lithography to the 22nm technology node. In this paper, we investigate beneficial source and mask solutions with respect to pattern fidelity and process variation (PV) band performance for 1D through-pitch patterns, SRAM, and random logic standard cells. The performance of two different computational lithography solutions, an idealized unconstrained ILT mask and a Manhattanized mask rule constraint (MRC) compliant mask, is compared. Additionally, the performance benefits of process-window-aware hybrid assist features (AF) are gauged against traditional rule-based AF. The results of this study demonstrate the lithographic performance contribution that can be obtained from these mask optimization techniques beyond what source optimization can achieve.
- Published
- 2010
- Full Text
- View/download PDF
36. Clustering and pattern matching for an automatic hotspot classification and detection system
- Author
-
Ning Ma, Costas J. Spanos, Justin Ghan, Norma Rodriguez, Sandipan Mishra, Luigi Capodieci, and Kameshwar Poolla
- Subjects
Computer science ,ComputerSystemsOrganization_COMPUTER-COMMUNICATIONNETWORKS ,Hotspot (geology) ,Hardware_INTEGRATEDCIRCUITS ,Hardware_PERFORMANCEANDRELIABILITY ,Pattern matching ,Data mining ,Cluster analysis ,computer.software_genre ,computer - Abstract
This paper provides details of the implementation of a new design hotspot classification and detection system, and presents results of using the system to detect hotspots in layouts. A large set of hotspot snippets is grouped into a small number of clusters containing geometrically similar hotspots. A fast incremental clustering algorithm is used to perform this task efficiently on very large datasets. Each cluster is analyzed to produce a characterization of a class of hotspots, and a pattern matcher is used to detect hotspots in new design layouts based on the hotspot class descriptions.
- Published
- 2009
- Full Text
- View/download PDF
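The "fast incremental clustering" named in the abstract above can be sketched as a greedy single pass that compares each new snippet only against one representative per cluster; the similarity function and threshold here are stand-in assumptions, not the paper's algorithm.

```python
# Minimal sketch of single-pass incremental clustering: each snippet is
# assigned to the first cluster whose representative is similar enough,
# otherwise it seeds a new cluster. Cost is O(n * clusters), which is what
# makes the approach usable on very large hotspot datasets.

def incremental_cluster(snippets, similarity, threshold=0.8):
    """Group snippets greedily; `similarity` returns a value in [0, 1]."""
    clusters = []  # each cluster: {"rep": snippet, "members": [...]}
    for s in snippets:
        for c in clusters:
            if similarity(s, c["rep"]) >= threshold:
                c["members"].append(s)
                break
        else:
            clusters.append({"rep": s, "members": [s]})
    return clusters

# Toy usage with 1-D "snippets" and a distance-based similarity; real
# snippets would be layout clips compared by overlap area.
sim = lambda a, b: max(0.0, 1.0 - abs(a - b))
groups = incremental_cluster([0.0, 0.05, 0.9, 1.0, 0.1], sim, threshold=0.8)
```

Note that the result depends on input order (the first member of each cluster becomes its representative), which is the usual trade-off of incremental versus hierarchical clustering.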
37. Inverse vs. traditional OPC for the 22nm node
- Author
-
Cynthia Zhu, Yunfei Deng, Sergei B. Rodin, Marina Medvedeva, Hesham A. F. Diab, Cyrus E. Tabery, James Word, Yi Zou, Luigi Capodieci, Mohamed Gheith, Jongwook Kye, Kenji Yoshimoto, Mohamed Habib, and Yuri Granik
- Subjects
Resolution enhancement technologies ,business.industry ,Computer science ,Inverse ,law.invention ,Optics ,Optical proximity correction ,law ,Multiple patterning ,Process window ,Node (circuits) ,Photolithography ,business ,Lithography ,Algorithm - Abstract
The 22nm node will be patterned with very challenging Resolution Enhancement Techniques (RETs) such as double exposure or double patterning. Even with those extreme RETs, the k1 factor is expected to be less than 0.3. There is some concern in the industry that traditional edge-based simulate-then-move Optical Proximity Correction (OPC) may not be up to the challenges expected at the 22nm node. Previous work presented the advantages of a so-called inverse OPC approach when coupled with extreme RETs or illumination schemes. The smooth mask contours resulting from inverse corrections were shown not to be limited by topological identity, feedback locality, or fragment conformity. In short, inverse OPC can produce practically unconstrained and often non-intuitive mask shapes. The authors will expand this comparison between traditional and inverse OPC to include likely 22nm RETs such as double dipole lithography and double patterning, comparing dimensional control through process window for each OPC method. The impact of mask simplification of the inverse OPC shapes into shapes which can be reliably manufactured will also be explored.
- Published
- 2009
- Full Text
- View/download PDF
38. Developing DRC plus rules through 2D pattern extraction and clustering techniques
- Author
-
Jie Yang, Norma Rodriguez, Luigi Capodieci, and Vito Dai
- Subjects
Resolution enhancement technologies ,business.industry ,Computer science ,Design flow ,Hardware_PERFORMANCEANDRELIABILITY ,computer.software_genre ,Design for manufacturability ,Visual inspection ,Optical proximity correction ,Hardware_INTEGRATEDCIRCUITS ,Artificial intelligence ,Data mining ,Cluster analysis ,business ,Lithography ,computer - Abstract
As technology processes continue to shrink and aggressive resolution enhancement technologies (RET) and optical proximity correction (OPC) are applied, standard design rule constraints (DRC) sometimes fail to fully capture the concept of design manufacturability. DRC Plus augments standard DRC by applying fast 2D pattern matching to the design layout to identify problematic 2D patterns missed by DRC. DRC Plus offers several advantages over other DFM techniques: it offers a simple pass/no-pass criterion, it is simple to document as part of the design manual, it does not require compute-intensive simulations, and it does not require highly accurate lithographic models. These advantages allow DRC Plus to be inserted early in the design flow and enforced in conjunction with standard DRC. The creation of DRC Plus rules, however, remains a challenge. Hotspots derived from lithographic simulation may be used to create DRC Plus rules, but the process of translating a hotspot into a pattern is a difficult and manual effort. In this paper, we present an algorithmic methodology to identify hot patterns using lithographic simulation rather than hotspots. First, a complete set of pattern classes, which covers the entire design space of a sample layout, is computed. These pattern classes, by construction, can be used directly as DRC Plus rules. Next, the manufacturability of each pattern class is evaluated as a whole. This results in a quantifiable metric for both design impact and manufacturability, which can be used to select individual pattern classes as DRC Plus rules. Simulation experiments show that hundreds of rules can be created using this methodology, well beyond what is possible by hand. Selective visual inspection shows that the algorithmically generated rules are quite reasonable. In addition to producing DRC Plus rules, this methodology also provides a concrete understanding of design style, design variability, and how they affect manufacturability.
- Published
- 2009
- Full Text
- View/download PDF
39. Computational technology scaling from 32 nm to 28 and 22 nm through systematic layout printability verification
- Author
-
Luigi Capodieci and Jason P. Cain
- Subjects
Computer engineering ,Computer science ,Robustness (computer science) ,Computational lithography ,Process capability ,Real-time computing ,Scalability ,Technology scaling - Abstract
In this work, we present a novel application of layout printability verification (LPV) to assess the scalability of physical layout components from 32 nm to 28 and 22 nm with respect to process variability metrics. Starting from the description of a mature LPV flow, the paper illustrates the core methodology for deriving a metric for design scalability. The functional dependency between the scalability metric and the scaling factor can then be modeled to study the scaling robustness of a set of representative layouts. Conversely, quantitative data on scalability limits can be used to determine which design rules can be pushed and which must be relaxed in the transition from 32 to 22 nm.
- Published
- 2009
- Full Text
- View/download PDF
40. A novel methodology for hybrid mask AF generation for 22 and 15nm technology
- Author
-
Luigi Capodieci, Cyrus E. Tabery, and Yi Zou
- Subjects
Semiconductor device fabrication ,business.industry ,Computer science ,law.invention ,Set (abstract data type) ,Metal ,Optics ,Optical proximity correction ,law ,visual_art ,Hardware_INTEGRATEDCIRCUITS ,visual_art.visual_art_medium ,Process window ,Node (circuits) ,Photolithography ,business ,Algorithm ,Lithography - Abstract
Mask assist features (AF), both printable (PRAF) and non-printable, or sub-resolution (SRAF), have been part of the established lithography and RET/OPC toolkit for achieving a manufacturable process window for several technology generations. Deployment of AF onto a mask for a full product or test-chip layout has traditionally been rule-driven, i.e. based on look-up tables of critical feature sizes and corresponding AF with specified dimensions at specified distances. The number of AF per given layout main feature, the set of AF dimensions (widths, heights, etc.) and the distances (placements) from the main features are collectively referred to as the AF Rules Set. The identification of the optimal parameter values in an AF Rules Set (the optimal AF Rules, for short) is a fundamental problem in process technology development for semiconductor manufacturing, typically addressed by a mixture of heuristic parametric searches and lithographic simulations. This approach produces a limited number of usable AF rules (mainly for 1D layout configurations) and often results in insufficient coverage for 2D cases. This paper introduces a novel methodology for the determination of an optimal AF Rules Set, which results in greatly superior coverage of layout configurations (both 1D and 2D) and a quantitatively verifiable larger common process window. The methodology applies an inverse lithography simulation step, followed by computational geometry analysis and filtering, and finally automated rules extraction. The computational flow, which has been implemented for 22nm process development, delivers an order of magnitude more rules (i.e. from a few tens to several hundred), and often generates non-intuitive 2D rules which could not have been discovered by heuristic search alone. The presented results illustrate the superior performance of this technique, particularly for contact and metal layers at 22nm.
Potential extendibility of this approach for the 15nm node is also discussed.
- Published
- 2009
- Full Text
- View/download PDF
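The traditional rule-driven approach that the abstract above improves upon, a look-up table keyed by main-feature spacing, can be sketched directly; every rule value in this table is invented for illustration, since real AF rule sets are process-specific.

```python
# Sketch of a traditional 1D SRAF rule table: given the free space next to
# a main feature, return how many assist features fit, their width, and
# their offset from the main-feature edge. All numbers are hypothetical.
import bisect

# (min_space_nm, number_of_SRAFs, sraf_width_nm, sraf_offset_from_edge_nm)
AF_RULES = [
    (120, 0, 0, 0),    # space too tight: no assist feature fits
    (180, 1, 30, 60),  # one SRAF centered in the space
    (300, 2, 30, 70),  # two SRAFs, one near each main-feature edge
]

def af_rule_for_space(space_nm):
    """Return the rule row applying to a given main-feature spacing: the
    row with the largest min_space not exceeding space_nm, else None."""
    keys = [r[0] for r in AF_RULES]
    i = bisect.bisect_right(keys, space_nm) - 1
    if i < 0:
        return None  # below the smallest tabulated spacing
    return AF_RULES[i]
```

The paper's point is precisely that such hand-built tables cover mainly 1D configurations; its inverse-lithography-derived flow generates a far larger, 2D-aware rule set automatically.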
41. Automatic hotspot classification using pattern-based clustering
- Author
-
Luigi Capodieci, Sandipan Mishra, Ning Ma, Costas J. Spanos, Justin Ghan, Kameshwar Poolla, and Norma Rodriguez
- Subjects
Computer science ,Hotspot (geology) ,Hardware_INTEGRATEDCIRCUITS ,Cluster (physics) ,Hierarchical control system ,Pattern matching ,Data mining ,Snippet ,computer.software_genre ,Cluster analysis ,computer ,Design for manufacturability ,Hierarchical clustering - Abstract
This paper proposes a new design check system that works in three steps. First, hotspots such as pinching/bridging are recognized in a product layout based on thorough process simulations. Small layout snippets centered on hotspots are clipped from the layout and similarities between these snippets are calculated by computing their overlapping areas. This is accomplished using an efficient, rectangle-based algorithm. The snippet overlapping areas can be weighted by a function derived from the optical parameters of the lithography process. Second, these hotspots are clustered using a hierarchical clustering algorithm. Finally, each cluster is analyzed in order to identify the common cause of failure for all the hotspots in that cluster, and its representative pattern is fed to a pattern-matching tool for detecting similar hotspots in new design layouts. Thus, the long list of hotspots is reduced to a small number of meaningful clusters and a library of characterized hotspot types is produced. This could lead to automated hotspot corrections that exploit the similarities of hotspots occupying the same cluster. Such an application will be the subject of a future publication.
- Published
- 2008
- Full Text
- View/download PDF
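The snippet similarity in the abstract above is computed from overlapping areas with an efficient rectangle-based algorithm; a direct (quadratic, unweighted) version can be sketched as follows. The normalization by the larger snippet area is an assumption, and the optics-derived weighting mentioned in the abstract is omitted.

```python
# Sketch of rectangle-based snippet similarity: total intersection area of
# two snippets, each given as a list of non-overlapping axis-aligned
# rectangles (x1, y1, x2, y2), normalized to [0, 1].

def area(r):
    return max(0, r[2] - r[0]) * max(0, r[3] - r[1])

def overlap_area(snip_a, snip_b):
    """Total pairwise intersection area between two rectangle lists."""
    total = 0
    for a in snip_a:
        for b in snip_b:
            ix = min(a[2], b[2]) - max(a[0], b[0])
            iy = min(a[3], b[3]) - max(a[1], b[1])
            if ix > 0 and iy > 0:
                total += ix * iy
    return total

def similarity(snip_a, snip_b):
    """1.0 for identical geometry, 0.0 for disjoint geometry."""
    denom = max(sum(map(area, snip_a)), sum(map(area, snip_b)))
    return overlap_area(snip_a, snip_b) / denom if denom else 1.0
```

The paper's rectangle-based algorithm avoids the all-pairs loop by exploiting sorted rectangle edges, but the similarity value it produces is of this form.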
42. DRC Plus: augmenting standard DRC with pattern matching on 2D geometries
- Author
-
Jie Yang, Luigi Capodieci, Norma Rodriguez, and Vito Dai
- Subjects
Resolution enhancement technologies ,Optical proximity correction ,Computer science ,Design flow ,Hardware_INTEGRATEDCIRCUITS ,Pattern matching ,Integrated circuit design ,Lithography ,Reliability engineering ,Design for manufacturability - Abstract
Design rule constraints (DRC) are the industry workhorse for constraining design to ensure both physical and electrical manufacturability. However, as technology processes continue to shrink and aggressive resolution enhancement technologies (RET) and optical proximity correction (OPC) are applied, standard DRC sometimes fails to fully capture the concept of design manufacturability. Consequently, some DRC-clean layout designs are found to be difficult to manufacture. Attempts have been made to "patch up" standard DRC with additional rules to identify these specific problematic cases. However, due to the lack of specificity with DRC, these efforts often meet with mixed success. Although it typically resolves the issue at hand, quite often it is the enforcement of some DRC rule that causes other problematic geometries to be generated, as designers attempt to meet all the constraints given to them. In effect, designers meet the letter of the law, as defined by the DRC implementation code, without understanding the "spirit of the rule". This leads to more exceptional cases being added to the DRC manual, further increasing its complexity. DRC Plus adopts a different approach. It augments standard DRC by applying fast 2D pattern matching to the design layout to identify problematic 2D configurations that are difficult to manufacture. The tool then returns specific feedback to designers on how to resolve these issues. This basic approach offers several advantages over other DFM techniques: it is enforceable, it offers a simple pass/no-pass criterion, it is simple to document as part of the design manual, it does not require compute-intensive simulations, and it does not require highly accurate lithographic models that may not be available during design. These advantages allow DRC Plus to be inserted early in the design flow and enforced in conjunction with standard DRC.
- Published
- 2007
- Full Text
- View/download PDF
43. Design-driven metrology: a new paradigm for DFM-enabled process characterization and control: extensibility and limitations
- Author
-
Luigi Capodieci
- Subjects
Engineering ,Engineering drawing ,business.industry ,Semiconductor device fabrication ,computer.software_genre ,Automation ,Metrology ,Design for manufacturability ,Optical proximity correction ,Systems architecture ,Systems engineering ,Computer Aided Design ,Physical design ,business ,computer - Abstract
After more than two years of development, Design-Driven Metrology (DDM) is now being introduced into production flows for semiconductor manufacturing, with initial applications targeted at 65 nm and below, but also backward-compatible to nodes at 90 nm and above. This paper presents the fundamental components of the DDM framework and the characteristic architectural relationships among these elements. The discussion includes the current status and future prospects of this new metrology paradigm, which represents the true enabler for Design For Manufacturability (DFM) flows and applications. At the core of Design-Driven Metrology lies the simple but powerful concept of utilizing physical design layouts, and more specifically (X,Y) coordinates and polygonal shapes, to automate the generation of metrology jobs. Building on a decade of Optical Proximity Correction practice, the adoption of CAD tools for visualization and manipulation of design layouts in everyday lithography work has provided the essential infrastructure for metrology automation. The in-depth discussion of data flow and system architecture is followed by a presentation of key DDM applications, with specific emphasis on CD-SEM metrology, ranging from process development and yield optimization to circuit design. The study concludes with an analysis of the extendibility of DDM and derived flows to other metrology areas in semiconductor manufacturing.
- Published
- 2006
- Full Text
- View/download PDF
44. From poly line to transistor: building BSIM models for non-rectangular transistors
- Author
-
Wojtek J. Poppe, Andrew R. Neureuther, Joanne Wu, and Luigi Capodieci
- Subjects
Transistor model ,Engineering ,business.industry ,Transistor ,Multiple-emitter transistor ,Hardware_PERFORMANCEANDRELIABILITY ,Condensed Matter::Mesoscopic Systems and Quantum Hall Effect ,law.invention ,Threshold voltage ,Design for manufacturability ,Computer Science::Hardware Architecture ,Computer Science::Emerging Technologies ,law ,Hardware_INTEGRATEDCIRCUITS ,Electronic engineering ,Field-effect transistor ,BSIM ,business ,Hardware_LOGICDESIGN ,Static induction transistor - Abstract
Non-rectangular transistors in today's advanced processes pose a potential problem between manufacturing and design, as today's compact transistor models have only one length and one width parameter to describe the gate dimensions. The transistor model is the critical link between manufacturing and design and needs to account for across-gate CD variation as corner rounding, along with other 2D proximity effects, becomes more pronounced. This is a complex problem, as threshold voltage and leakage current have a very complex non-linear relationship with gate length. There have been efforts to model non-rectangular gates as transistors in parallel, but this approach suffers from the lack of accurate models for "slice transistors", which could potentially necessitate new circuit simulators with new sets of complex equations. This paper proposes a new approach that approximates a non-rectangular transistor with an equivalent rectangular transistor and hence does not require a new transistor model or significant changes to circuit simulators. Effective length extraction consists of breaking a non-rectangular transistor into rectangular slices and then taking a weighted average based on simulated slice currents in HSPICE. As long as a different effective length is used for delay and static power analysis, simulation results show that the equivalent rectangular transistor behaves the same as a non-rectangular transistor.
- Published
- 2006
- Full Text
- View/download PDF
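The slice-and-average extraction described in the abstract above can be illustrated with a first-order I = k·w/L current model standing in for the HSPICE slice simulations; the model, dimensions, and function name are assumptions, not the paper's method.

```python
# Sketch of equivalent-rectangular-transistor extraction: slice the gate
# into rectangles, sum per-slice currents, then pick the single length
# L_eff that reproduces the total current at the full width W. With the
# simplistic on-current model I = k * w / L (a stand-in for HSPICE),
# current-matching gives L_eff = W_total / sum(w_i / L_i).

def effective_length(slices):
    """slices: list of (width_nm, length_nm) rectangles along the gate.
    Returns (W_total, L_eff) such that W_total / L_eff equals the summed
    slice conductance sum(w_i / L_i)."""
    w_total = sum(w for w, _ in slices)
    g_total = sum(w / l for w, l in slices)  # relative on-current
    return w_total, w_total / g_total

# A gate drawn at 100nm length that necks to 80nm (e.g. from corner
# rounding) over a 20nm-wide slice:
w, l_eff = effective_length([(80, 100), (20, 80)])
```

Consistent with the abstract, this drive-current-weighted L_eff is only suitable for delay analysis; a leakage-oriented L_eff would need an exponential weighting of the short slices and comes out different.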
45. Layout verification and optimization based on flexible design rules
- Author
-
Jie Yang, Dennis Sylvester, and Luigi Capodieci
- Subjects
Optimal design ,Engineering ,Design rule checking ,business.industry ,Integrated circuit ,Integrated circuit design ,computer.software_genre ,Expert system ,Design for manufacturability ,law.invention ,Optical proximity correction ,Computer engineering ,law ,Electronic engineering ,Physical design ,business ,computer - Abstract
A methodology for layout verification and optimization based on flexible design rules is provided. This methodology is based on image-parameter-determined flexible design rules (FDRs), in contrast with restrictive design rules (RDRs), and enables fine-grained optimization of designs in the yield-performance space. Conventional design rules are developed from experimental data obtained from the design, fabrication, and measurement of a set of test structures. They are generated at an early stage of process development and used as guidelines for later IC layouts. These design rules (DRs) serve to guarantee a high functional yield of the fabricated design. Since small areas are preferred in integrated circuit designs, due to their corresponding higher speed and lower cost, most design rules focus on minimum resolvable dimensions.
- Published
- 2006
- Full Text
- View/download PDF
46. From optical proximity correction to lithography-driven physical design (1996-2006): 10 years of resolution enhancement technology and the roadmap enablers for the next decade
- Author
-
Luigi Capodieci
- Subjects
Engineering ,Resolution enhancement technologies ,business.industry ,Extreme ultraviolet lithography ,Hardware_PERFORMANCEANDRELIABILITY ,Design for manufacturability ,law.invention ,Optical proximity correction ,law ,Hardware_INTEGRATEDCIRCUITS ,Electronic engineering ,Photolithography ,Physical design ,business ,Lithography ,Immersion lithography - Abstract
The past decade has experienced a remarkable synergy between Resolution Enhancement Technologies (RET) in Optical Lithography and Optical Proximity Correction (OPC). This heterogeneous array of patterning solutions ranges from simple rule-based to more sophisticated model-based corrections, including sub-resolution assist features, partially transmitting masks and various dual mask approaches. A survey of the evolutionary development from the early introduction of the first OPC engines in 1996 to the debut of Immersion Lithography in 2006 reveals that the convergence of RET and OPC has also enabled a progressive selection and fine-tuning of Geometric Design Rules (GDR) at each technology node, based on systematic adoption of lithographic verification. This paper describes the use of "full-chip" lithography verification engines in current Design For Manufacturing (DFM) practices and extends the analysis to identify a set of key technologies and applications for the 45, 32 and 22 nm nodes. As OPC-derived tools enter the stage of maturity, from a software standpoint, their use-model is being greatly broadened from the back-end mask tape-out flow, upstream, directly integrated into physical design verification. Lithography awareness into the physical design environment, mediated by new DFM verification tools and flows, is driving various forms of manufacturable physical layout implementation: from Restricted Design Rules and Flexible Design Rules to Regular Circuit Fabrics. As new lithography solutions, such as immersion lithography and EUV, will have to be deployed within a complex technology framework, the paper also examines the trend towards "layout design regularization" and its implications for patterning and next generation lithographies.
- Published
- 2006
- Full Text
- View/download PDF
47. Platform for collaborative DFM
- Author
-
Andrew R. Neureuther, Luigi Capodieci, and Wojtek J. Poppe
- Subjects
Engineering ,Robustness (computer science) ,business.industry ,Circuit design ,Electronic engineering ,Process window ,BSIM ,business ,Engineering design process ,Aerial image ,Design for manufacturability ,Parametric statistics - Abstract
A Process/Device/Design framework called the Parametric Yield Simulator is proposed for predicting circuit variability based on circuit design and a set of characterized sources of variation. In this simulator, the aerial image of a layout is simulated across a predefined process window, and the resulting non-idealities in geometrical features are communicated through to circuit simulators, where circuit robustness and yield can be evaluated in terms of leakage and delay variability. The purpose of this simulator is to identify problem areas in a layout and quantify them in terms of delay and leakage, in a manner that lets designers and process engineers collaborate on an optimal solution to the problem. The Parametric Yield Simulator will serve as a launch pad for collaborative efforts between groups in different disciplines that are looking at variability and yield. Universities such as Berkeley offer a great advantage in exploring innovative approaches, as different centers of key expertise exist under one roof. For example, a complementary set of characterization and validation experiments has also been designed and is being executed, in a collaborative study, at Cypress Semiconductor on a 65nm NMOS process flow. This unique opportunity of having access to a cutting-edge process flow with relatively high transparency has led to a new set of experiments with contributions from six different students in circuit design, process engineering, and device physics. Collaborative efforts with the device group have also led to a new electrical linewidth metrology methodology using enhanced transistors that could prove useful for process characterization.
- Published
- 2006
- Full Text
- View/download PDF
48. Advanced DFM applications using design-based metrology on CD SEM
- Author
-
Cyrus E. Tabery, Bhanwar Singh, G. Abbott, L. Heinrichs, E. Castel, D. Stoler, A. Roberts, J. Schramm, Z. Kaliblotzky, K. Shah, A. Azordegan, S. Roling, G. F. Lorusso, Bernd Schulz, and Luigi Capodieci
- Subjects
Engineering ,Resolution enhancement technologies ,business.industry ,computer.software_genre ,Automation ,Design for manufacturability ,Metrology ,Parametric design ,Optical proximity correction ,Pattern recognition (psychology) ,Hardware_INTEGRATEDCIRCUITS ,Electronic engineering ,Computer Aided Design ,business ,computer - Abstract
Design Based Metrology (DBM) implements a novel automation flow, which allows a direct and traceable correspondence to be established between selected locations in product designs and matching metrology locations on silicon wafers. Thus DBM constitutes the fundamental enabler of Design For Manufacturability (DFM), because of its intrinsic ability to characterize and quantify the discrepancy between design layout intent and actual patterns on silicon. The evolution of the CDSEM into a DFM tool, capable of measuring thousands of unique sites, includes three essential functionalities: (1) seamless integration with the design layout and location coordinate system; (2) new design-based pattern recognition; and (3) fully automated recipe generation. Additionally, advanced SEM metrology algorithms are required for complex 2-dimensional features, Line-Edge-Roughness (LER), etc. In this paper, we consider the overall DBM flow, its integration with traditional CDSEM metrology, and the state of the art in recipe automation success. We also investigate advanced DFM applications specifically enabled by DBM, particularly OPC model calibration and verification, design-driven RET development, and parametric Design Rule evaluation and selection.
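The core of the automated recipe-generation step described above is a mapping from design-space measurement sites to metrology-tool coordinates. The sketch below illustrates that idea only; the data structures, field names, and coordinate transform are assumptions for illustration, whereas real DBM tools integrate with GDSII layouts and vendor recipe formats.

```python
# Illustrative sketch of DBM recipe generation: take measurement sites
# selected in design (layout) coordinates and emit one CD-SEM recipe
# entry per site. All names and structures here are hypothetical.
from dataclasses import dataclass


@dataclass
class DesignSite:
    name: str            # cell/net identifier in the layout
    x_um: float          # layout x coordinate
    y_um: float          # layout y coordinate
    target_cd_nm: float  # design-intent critical dimension


def layout_to_wafer(x_um, y_um, die_origin_um=(1000.0, 2000.0)):
    """Map design coordinates to wafer-stage coordinates for one die.
    A real flow also handles reticle magnification, rotation, and the
    die grid across the wafer."""
    ox, oy = die_origin_um
    return (ox + x_um, oy + y_um)


def generate_recipe(sites):
    """Fully automated recipe generation: one measurement entry per site,
    carrying the design clip used for pattern recognition on the tool."""
    recipe = []
    for s in sites:
        wx, wy = layout_to_wafer(s.x_um, s.y_um)
        recipe.append({
            "site": s.name,
            "stage_x_um": wx,
            "stage_y_um": wy,
            "target_cd_nm": s.target_cd_nm,
            "pattern_template": s.name,  # design clip for pattern matching
        })
    return recipe
```

Because each recipe entry is derived directly from the layout, every measurement remains traceable back to its design-intent location, which is the property the abstract identifies as the enabler of DFM.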
- Published
- 2006
- Full Text
- View/download PDF
49. Design-based metrology: advanced automation for CD-SEM recipe generation
- Author
-
Chen Ofek, Chris Haidinyak, Luigi Capodieci, Ariel Ben-Porath, Bhanwar Singh, Cyrus E. Tabery, O. Menadeva, M. Threefoot, B. Choo, Youval Nehmadi, and K. Shah
- Subjects
Engineering drawing ,Engineering ,business.industry ,Integrated circuit design ,Automation ,Metrology ,Design for manufacturability ,Data flow diagram ,Optical proximity correction ,Hardware_INTEGRATEDCIRCUITS ,Systems design ,Node (circuits) ,business ,Computer hardware - Abstract
The procedure for properly implementing OPC for a new technology node or chip design involves multiple steps: selection of the RET (resolution enhancement technique), selection of design rules, OPC model building, OPC verification, CD control quantification (across chip, reticle, wafer, focus, exposure, etc.), calibration of Optical Rule Checks (ORC), and other verification steps. Many of these steps require up to thousands of wafer measurements, and while state-of-the-art CD-SEM tools provide automated metrology for production, manually creating a CD recipe with thousands of unique sites is extremely tedious and error-prone. This places a practical limit on both the quality and number of measurements that can be acquired during the technology development and qualification period. At the same time, the number of measurements required to qualify a new reticle design has increased drastically due to the growing complexity of RET and diminishing tolerances. To meet this challenge, a direct and automated link from the design systems to the process metrology tools is needed. Novel methodologies must also be developed to enable automated generation of the recipe from the design inputs and to translate the flood of metrology results into information that can improve the design, mask data processing, or the patterning process. To facilitate this two-way data flow, a new framework has been created enabling true Design-Based Metrology (DBM), and an application named OPC-Check has been developed to operate within this framework. This DBM framework provides the common language and interface that facilitates the direct transfer of desired measurement locations from the design to the metrology tool. This link is a critical element in Design for Manufacturability (DFM) efforts, a central theme in many presentations at Microlithography 2005.
This article discusses the significant benefits of the tight integration of design and process metrology for OPC implementation in a new technology node, and provides some examples of the novel OPC-Check application as currently implemented at AMD SDC with Applied Materials CD-SEM tools.
- Published
- 2005
- Full Text
- View/download PDF
50. Advanced timing analysis based on post-OPC patterning process simulations
- Author
-
Dennis Sylvester, Luigi Capodieci, and Jie Yang
- Subjects
Engineering ,Resolution enhancement technologies ,business.industry ,Circuit design ,Design flow ,Static timing analysis ,Design for manufacturability ,Reliability engineering ,Process variation ,Optical proximity correction ,Hardware_INTEGRATEDCIRCUITS ,Netlist ,Electronic engineering ,business - Abstract
For current and upcoming technology nodes (90, 65, 45 nm and beyond), one of the fundamental enablers of Moore's Law is the use of Resolution Enhancement Techniques (RET) in optical lithography. While RETs allow for continuing reduction in integrated circuits' critical dimensions (CD), layout distortions are introduced as an undesired consequence of proximity effects. Complex and costly Optical Proximity Correction (OPC) is then deployed to compensate for CD variations and loss of pattern fidelity, in an effort to improve yield. This, together with other sources of CD variation, causes the actual on-silicon chip performance to be quite different from sign-off expectations. In current design optimization methodologies, process variation modeling, aimed at providing guardbands for performance analysis, is based on "worst-case scenarios" (corner cases) and yields overly pessimistic simulation results, which make meeting design targets unnecessarily difficult. Assumptions of CD distributions in Monte Carlo simulations, and statistical timing analysis in general, can be made more rigorous by considering realistic systematic and random contributions to the overall process variation. A novel methodology is presented in this paper for extracting residual OPC errors from a placed and routed full-chip layout and for deriving actual (i.e., calibrated to silicon) CD values, to be used in timing analysis and speed path characterization. The implementation of this automated flow is achieved through a combination of tagging critical gates, post-OPC layout back-annotation, and selective extraction from the global circuit netlist. This approach improves upon traditional design flow practices where ideal (i.e., drawn) CD values are employed, which leads to poor performance predictability of the as-fabricated design.
With this more accurate timing analysis, we are able to highlight the necessity of a design flow with embedded post-OPC verification by showing substantial differences in the silicon-based timing simulations, both in terms of a significant reordering of speed path criticality and a 36.4% increase in worst-case slack. Extensions of this methodology to multi-layer extraction and timing characterization are also proposed. The paper concludes by showing how the methodology implemented in this flow also provides a general design for manufacturability (DFM) tool template. In particular, by passing design intent to process/OPC engineers, selective OPC can be applied to improve CD variation control based on gate function, such as for critical gates and matching transistors. Furthermore, back-annotated process-based data can be used during early stages of circuit design verification and optimization, driving tradeoffs when significant variability is unavoidable.
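The back-annotation idea in this abstract can be reduced to a small sketch: override drawn gate CDs with extracted post-OPC CDs for tagged critical gates, recompute path delays, and re-rank speed paths. The linear delay model and the specific CD values below are illustrative assumptions, not the paper's calibrated flow.

```python
# Minimal sketch of post-OPC CD back-annotation for timing analysis.
# The delay model (delay proportional to effective channel length) and
# all numeric values are hypothetical.

def annotate(netlist_cds, post_opc_cds):
    """Override drawn CDs with silicon-calibrated post-OPC CDs where a
    tagged gate has an extracted value; other gates keep drawn CDs."""
    return {gate: post_opc_cds.get(gate, cd) for gate, cd in netlist_cds.items()}


def path_delay(path, cds, nominal_cd=65.0, gate_delay_ps=12.0):
    """First-order delay: each stage scales with its effective CD."""
    return sum(gate_delay_ps * cds[g] / nominal_cd for g in path)


def rank_paths(paths, cds):
    """Order speed paths from slowest to fastest under the given CD map."""
    return sorted(paths, key=lambda p: path_delay(p, cds), reverse=True)
```

Under drawn CDs, two paths through identical gates tie; once a residual OPC error lengthens one gate's effective CD, that path becomes the critical one, which is exactly the speed-path reordering effect the abstract reports at full-chip scale.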
- Published
- 2005
- Full Text
- View/download PDF