27 results for "Vito Dai"
Search Results
2. Layout pattern catalogs: from abstract algebra to advanced applications for physical verification and DFM
- Author
Vito Dai and Luigi Capodieci
- Subjects
Mathematical theory, Physical verification, Theoretical computer science, Algebraic structure, Computer science, Electronic design automation, Pattern matching, Physical design, Design for manufacturability
- Abstract
Automated generation of Layout Pattern Catalogs (LPC) has been enabled by full-chip pattern matching EDA tools, capable of searching and classifying both topological and dimensional variations in layout shapes and extracting massive datasets of component patterns from one or more given layouts. This work presents a novel theoretical framework for the systematic analysis of Layout Pattern Catalogs. Two algebraic structures (lattices and matroids) are introduced, allowing for the complete characterization of all LPC datasets. The technical results go beyond the general mathematical theory of combinatorial pattern spaces, demonstrating a direct path to novel physical design verification algorithms and DFM optimization applications.
- Published
- 2021
- Full Text
- View/download PDF
3. Persistent homology analysis of complex high-dimensional layout configurations for IC physical designs
- Author
Luigi Capodieci, Vito Dai, and Yacoub H. Kureh
- Subjects
Persistent homology, Manufacturing process, Computer science, High dimensional, Integrated circuit design, Integrated circuit, Homology (mathematics), Topology, IC devices
- Abstract
Problems in simulation, physical defects, and electrical failures of IC devices generally occur at the boundaries of dimensional tolerances, such as the minimum width and space. However, for layout configurations with four or more critical dimensions, simple minimums are insufficient to characterize dimensional coverage. Persistent homology is a multi-resolution analysis technique which robustly summarizes dimensional coverage. We apply this technique to compare the dimensional coverage of IC design configurations on the same layer, on different layers, and on different designs, yielding results both expected and unexpected based on manufacturing process and design rule knowledge. (A toy zero-dimensional persistence computation follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
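As a drastically simplified companion to the abstract above: the sketch below computes a zero-dimensional persistence diagram (connected components only) with a union-find over edges sorted by length, treating each layout configuration as a point whose coordinates are its critical dimensions. The point data and the restriction to H0 are illustrative assumptions; the paper's actual pipeline is not shown here.

```python
# Toy sketch (not the paper's tool chain): 0-dimensional persistent homology
# of a point set. Each merge of two components at scale d yields a (0, d) bar;
# short bars indicate densely covered dimensions, long bars indicate gaps.
import itertools, math

def persistence_h0(points):
    """Return (birth, death) pairs for connected components (H0)."""
    parent = list(range(len(points)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(len(points)), 2)
    )
    diagram = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            diagram.append((0.0, d))   # a component born at 0 dies at scale d
    diagram.append((0.0, math.inf))    # one component persists forever
    return diagram

# Example: each point is a (width, space) measurement from one configuration.
print(persistence_h0([(40, 60), (42, 61), (80, 100)]))
```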
5. Optimization of complex high-dimensional layout configurations for IC physical designs using graph search, data analytics, and machine learning
- Author
Jeff J. Xu, Bharath Rangarajan, Vito Dai, and Edward Kah Ching Teoh
- Subjects
Computer science, Integrated circuit design, Integrated circuit, Machine learning, Graph (abstract data type), Metrology, Analytics, Data analysis, Artificial intelligence, IC layout editor
- Abstract
A typical new IC design has millions of layout configurations not seen on previous product or test-chip designs. Knowing the disposition of each and every configuration, problematic or not, is the key to optimizing design for yield. In this paper, we present a method to systematically characterize the configuration coverage of any layout. Coverage can be compared between designs, and configurations for which coverage is lacking can be computed. When combined with simulation, metrology, and defect data for some configurations, graph search and machine learning algorithms can be applied to optimize designs for manufacturing yield. (A minimal set-based illustration of the coverage comparison follows this entry.)
- Published
- 2017
- Full Text
- View/download PDF
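A minimal sketch of the coverage-comparison idea in the abstract above, under the assumption (mine, not the paper's) that each layout configuration can be reduced to a hashable signature; the graph-search and machine-learning stages are not shown.

```python
# Hedged sketch of coverage comparison: represent each design as the set of
# configuration signatures it contains, then compute what has never been seen.
def coverage_gap(new_design, reference_designs):
    """Configurations in new_design never seen in any reference design."""
    seen = set().union(*reference_designs)
    return new_design - seen

test_chip = {"cfgA", "cfgB", "cfgC"}
product   = {"cfgB", "cfgC", "cfgD", "cfgE"}
print(coverage_gap(product, [test_chip]))  # {'cfgD', 'cfgE'} lack coverage
```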
6. Design layout analysis and DFM optimization using topological patterns
- Author
Karthik Krishnamoorthy, Jason Sweis, Vito Dai, Edward Kah Ching Teoh, Luigi Capodieci, Jeff J. Xu, and Ya-Chieh Lai
- Subjects
Metal, Computer science, Process window, Node (circuits), Topology, Design for manufacturability
- Abstract
During the yield ramp of semiconductor manufacturing, data is gathered on specific design-related process window limiters, or yield detractors, through a combination of test structures, failure analysis, and model-based printability simulations. Case by case, this data is translated into design for manufacturability (DFM) checks to restrict design usage of problematic constructs. This case-by-case approach is inherently reactive: DFM solutions are created in response to known manufacturing marginalities as they are identified. In this paper, we propose an alternative, yet complementary, approach. Using design-only topological pattern analysis, all possible layout constructs of a particular type appearing in a design are categorized. For example, all possible ways a via forms a connection with the metal above it may be categorized. The frequency of occurrence of each category indicates the importance of that category for yield. Categories may be split into sub-categories to align to specific manufacturing defect mechanisms. Frequencies of categories can be compared from product to product, and unexpectedly high frequencies can be highlighted for further monitoring. Each category can be weighted for yield impact once manufacturing data is available. This methodology is demonstrated on representative layout designs from the 28 nm node. We fully analyze all possible categories and sub-categories of via enclosure such that 100% of all vias are covered. The frequency of specific categories is compared across multiple designs. The 10 most frequent via enclosure categories cover ≥90% of all the vias in all designs. KL divergence is used to compare the frequency distribution of categories between products (a toy KL computation follows this entry). Outlier categories with unexpectedly high frequency are found in some designs, indicating the need to monitor such categories for potential impact on yield.
- Published
- 2015
- Full Text
- View/download PDF
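The KL-divergence comparison named in the abstract above, sketched on made-up category counts; the smoothing constant and category names are illustrative assumptions.

```python
# Toy sketch: KL divergence between category-frequency distributions of two
# designs, used to flag products whose pattern mix deviates unexpectedly.
import math

def kl_divergence(p_counts, q_counts, eps=1e-9):
    cats = set(p_counts) | set(q_counts)
    p_tot = sum(p_counts.values())
    q_tot = sum(q_counts.values())
    kl = 0.0
    for c in cats:
        p = p_counts.get(c, 0) / p_tot
        q = max(q_counts.get(c, 0) / q_tot, eps)  # smooth absent categories
        if p > 0:
            kl += p * math.log(p / q)
    return kl

design_a = {"enclosure_1": 9000, "enclosure_2": 800, "enclosure_3": 200}
design_b = {"enclosure_1": 7000, "enclosure_2": 500, "enclosure_3": 2500}
print(kl_divergence(design_a, design_b))  # large value -> distributions differ
```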
7. A pattern-based methodology for optimizing stitches in double-patterning technology
- Author
Sriram Madhavan, Lynn T.-N. Wang, Luigi Capodieci, and Vito Dai
- Subjects
Computer science, Multiple patterning, Topology (electrical circuits), Pattern matching, Algorithm
- Abstract
A pattern-based methodology for optimizing stitches is developed based on identifying stitch topologies and replacing them with pre-characterized fixing solutions in decomposed layouts. A topology-based library of stitches with predetermined fixing solutions is built. A pattern-based engine searches for matching topologies in the decomposed layouts. When a match is found, the engine opportunistically applies the predetermined fixing solution: only a design-rule-check-error-free replacement is preserved (a toy version of this loop follows this entry). The methodology is demonstrated on a 20 nm layout design that contains over 67 million first-metal-layer stitches. Results show that a small library containing 3 stitch topologies improves the stitch area regularity by 4×.
- Published
- 2015
- Full Text
- View/download PDF
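A toy rendering of the opportunistic replacement loop described above. The topology IDs, fix library, and drc_clean callback are stand-ins of my own; the real engine operates on geometric topologies in decomposed layouts, not strings.

```python
# Hedged sketch: look up each found stitch topology in a library of
# pre-characterized fixes and keep only fixes that pass the DRC check.
def fix_stitches(stitches, library, drc_clean):
    """stitches: list of topology IDs found in a decomposed layout."""
    fixes = []
    for topo in stitches:
        fix = library.get(topo)              # pre-characterized fixing solution
        if fix is not None and drc_clean(fix):
            fixes.append((topo, fix))        # keep only DRC-error-free fixes
    return fixes

library = {"T1": "fixA", "T2": "fixB", "T3": "fixC"}   # 3 stitch topologies
stitches = ["T1", "T4", "T2", "T1"]                    # T4 has no known fix
print(fix_stitches(stitches, library, drc_clean=lambda f: f != "fixB"))
```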
8. Systematic physical verification with topological patterns
- Author
Frank E. Gennari, Vito Dai, Ya-Chieh Lai, Edward Kah Ching Teoh, and Luigi Capodieci
- Subjects
Design rule checking, Physical verification, Theoretical computer science, Computer science, Constraint (computer-aided design), Topology, Automation, Design for manufacturability, Node (circuits), Electronic design automation, Data mining, Pattern matching
- Abstract
Design rule checks (DRC) are the industry workhorse for constraining design to ensure both physical and electrical manufacturability. Where DRCs fail to fully capture the concept of manufacturability, pattern-based approaches, such as DRC Plus, fill the gap using a library of patterns to capture and identify problematic 2D configurations. Today, both a DRC deck and a pattern matching deck may be found in advanced node process development kits. Major electronic design automation (EDA) vendors offer both DRC and pattern matching solutions for physical verification; in fact, both are frequently integrated into the same physical verification tool. In physical verification, DRCs represent dimensional constraints relating directly to process limitations. On the other hand, patterns represent the 2D placement of surrounding geometries that can introduce systematic process effects. It is possible to combine both DRCs and patterns in a single topological pattern representation. A topological pattern has two separate components: a bitmap representing the placement and alignment of polygon edges, and a vector of dimensional constraints (a toy sketch of this two-component representation follows this entry). The topological pattern is unique and unambiguous; there is no code to write, and there are no two different ways to represent the same physical structure. Furthermore, markers aligned to the pattern can be generated to designate specific layout optimizations for improving manufacturability. In this paper, we describe how to do systematic physical verification with just topological patterns. Common mappings between traditional design rules and topological pattern rules are presented. We describe techniques that can be used during the development of a topological rule deck, such as: taking constraints defined on one rule and systematically projecting them onto other related rules; systematically separating a single rule into two or more rules when the single rule is not sufficient to capture manufacturability constraints; creating test layouts which represent the corners of what is allowed, or not allowed, by a rule; improving manufacturability by systematically changing certain patterns; and quantifying how a design uses design rules. Performance of topological pattern search is demonstrated to be production full-chip capable.
- Published
- 2014
- Full Text
- View/download PDF
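A sketch of the two-component topological pattern representation the abstract describes: an edge-alignment bitmap plus a vector of dimensional constraints. The field layout and the matches() logic are my assumptions for illustration, not the paper's data structure.

```python
# Hedged sketch: a pattern matches a layout window when the edge bitmap is
# identical (topology) and every measured dimension falls inside its bounds.
from dataclasses import dataclass

@dataclass(frozen=True)
class TopologicalPattern:
    bitmap: tuple          # rows of 0/1 marking edge placement and alignment
    constraints: tuple     # (name, min, max) bounds on measured dimensions

    def matches(self, bitmap, dims):
        if bitmap != self.bitmap:                    # topology must be exact
            return False
        return all(lo <= dims[name] <= hi            # dimensions must fit
                   for name, lo, hi in self.constraints)

rule = TopologicalPattern(
    bitmap=((1, 0, 1), (0, 0, 0), (1, 0, 1)),
    constraints=(("space", 50, 120),),
)
print(rule.matches(((1, 0, 1), (0, 0, 0), (1, 0, 1)), {"space": 64}))  # True
```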
9. Systematic data mining using a pattern database to accelerate yield ramp
- Author
Luigi Capodieci, Frank E. Gennari, Edward Kah Ching Teoh, Vito Dai, and Ya-Chieh Lai
- Subjects
Physical verification, Database, Relational database, Computer science, Design flow, Pattern matching, Data mining, Cluster analysis, Design for manufacturability
- Abstract
Pattern-based approaches to physical verification, such as DRC Plus, which use a library of patterns to identify problematic 2D configurations, have proven effective in capturing the concept of manufacturability where traditional DRC fails. As the industry moves to advanced technology nodes, the manufacturing process window tightens and the number of patterns continues to rapidly increase. This increase in patterns brings about challenges in identifying, organizing, and carrying forward the learning of each pattern from test chip designs to first product and then to multiple product variants. This learning includes results from printability simulation, defect scans, and physical failure analysis, which are important for accelerating yield ramp. Using pattern classification technology and a relational database, GLOBALFOUNDRIES has constructed a pattern database (PDB) of more than one million potential yield detractor patterns. In PDB, 2D geometries are clustered based on similarity criteria, such as radius and edge tolerance. Each cluster is assigned a representative pattern and a unique identifier (ID). This ID is then used as a persistent reference for linking together information such as the failure mechanism of the pattern, the process condition where the pattern is likely to fail, and the number of occurrences of the pattern in a design (a toy illustration of such persistent IDs follows this entry). Patterns and their associated information are used to populate DRC Plus pattern matching libraries for design-for-manufacturing (DFM) insertion into the design flow for auto-fixing and physical verification. Patterns are used in a production-ready yield learning methodology to identify and score critical hotspot patterns. Patterns are also used to select sites for process monitoring in the fab. In this paper, we describe the design of PDB, the methodology for identifying and analyzing patterns across multiple design and technology cycles, and the use of PDB to accelerate manufacturing process learning. One such analysis tracks the life cycle of a pattern from the first time it appears as a potential yield detractor until it is either fixed in the manufacturing process or stops appearing in designs due to DFM techniques such as DRC Plus. Another analysis systematically aggregates the results of a pattern to highlight potential yield detractors for further manufacturing process improvement.
- Published
- 2014
- Full Text
- View/download PDF
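A toy illustration of the persistent-ID mechanism described above: a stable identifier derived from a canonicalized pattern links together whatever is learned about it. The canonicalization used here (sorting rectangle tuples) is a stand-in for PDB's similarity-based clustering, which this sketch does not model.

```python
# Hedged sketch: a stable hash of a canonical pattern form serves as the key
# that accumulates failure-mechanism, process, and occurrence information.
import hashlib

def pattern_id(geometry):
    canonical = repr(sorted(geometry))                 # stand-in canonical form
    return hashlib.sha1(canonical.encode()).hexdigest()[:12]

pdb = {}   # ID -> accumulated learning for that pattern cluster

def record(geometry, **info):
    entry = pdb.setdefault(pattern_id(geometry), {"occurrences": 0})
    entry["occurrences"] += 1
    entry.update(info)

record([(0, 0, 10, 10), (20, 0, 30, 10)], failure_mechanism="via open")
record([(20, 0, 30, 10), (0, 0, 10, 10)])              # same pattern, same ID
print(pdb)
```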
10. Design-enabled manufacturing enablement using manufacturing design request tracker (MDRT)
- Author
Luigi Capodieci, Vito Dai, Sarah McGowan, Rao Desineni, Kok Peng Chua, Sky Yeo, Carl P. Babcock, Akif Sultan, Eswar Ramanathan, Colin Hui, Robert Madge, Kristina Hoeppner, Jens Hassmann, and Edward Kah Ching Teoh
- Subjects
Engineering, Product lifecycle management, Yield, Systems engineering, Integrated circuit layout, Manufacturing engineering, Design for manufacturability
- Abstract
Shrinking dimensions at advanced technology nodes pose yield challenges that require continuous enhancement of yield methodologies to quickly detect and fix marginal layout features. In this paper, we present a practical approach to enhance the design-for-manufacturability (DFM) and design-enabled manufacturing (DEM) capabilities suite provided by GLOBALFOUNDRIES for 28 nm technology and beyond. The manufacturing design request tracker (MDRT) system has been implemented in the Product Lifecycle Management (PLM) system within GLOBALFOUNDRIES.
- Published
- 2013
- Full Text
- View/download PDF
11. Pattern matching for identifying and resolving non-decomposition-friendly designs for double patterning technology (DPT)
- Author
Luigi Capodieci, Vito Dai, and Lynn T.-N. Wang
- Subjects
Computer science, Design flow, Multiple patterning, Decomposition (computer science), Node (circuits), Pattern matching, Lithography, Algorithm
- Abstract
A pattern matching methodology that identifies non-decomposition-friendly designs and provides localized guidance for layout fixing is presented for double patterning lithography. This methodology uses a library of patterns in which each pattern has been pre-characterized as impossible to decompose and annotated with a design rule for guiding the layout fixes. A pattern matching engine identifies these problematic patterns in the design, which allows layout designers to anticipate and prevent decomposition errors prior to layout decomposition. The methodology has been demonstrated on a 180 µm² layout migrated from the previous 28 nm technology node for the metal 1 layer. Using a small library of just 18 patterns, the pattern matching engine identified 119 out of 400 decomposition errors, a coverage of 29.8%. Keywords: pattern matching, odd-cycles, coloring conflicts, double patterning, decomposition, design flow, design rule, DRC Plus, automated decomposition algorithm, DPT
- Published
- 2013
- Full Text
- View/download PDF
12. Pattern matching for double patterning technology-compliant physical design flows
- Author
Vito Dai, Luigi Capodieci, and Lynn T.-N. Wang
- Subjects
Design rule checking, Computer science, Design flow, Decomposition (computer science), Multiple patterning, Pattern matching, Physical design, Algorithm, Design for manufacturability
- Abstract
A pattern-based methodology for guiding the generation of DPT-compliant layouts, using a foundry-characterized library of difficult-to-decompose patterns with known corresponding solutions, is presented. A pattern matching engine scans the drawn layout for patterns from the pattern library. If a match is found, one or more DPT-compliant solutions are provided for guiding the layout modifications. This methodology is demonstrated on a sample 1.8 mm² layout migrated from a previous technology. A small library of 12 patterns is captured, which accounts for 59 out of the 194 DPT-compliance check violations examined. In addition, the methodology can be used to recommend specific changes to the original drawn design to improve manufacturability. This methodology is compatible with any physical design flow that uses automated decomposition algorithms. Keywords: pattern matching, double patterning, decomposition, design flow, design rule check, DRC Plus, automated decomposition algorithm, DPT
- Published
- 2012
- Full Text
- View/download PDF
13. Full-chip characterization of compression algorithms for direct-write maskless lithography systems
- Author
George Cramer, Vito Dai, and Avideh Zakhor
- Subjects
Lossless compression, Pixel, Semiconductor device fabrication, Computer science, Chip, Optics, Electronic engineering, Wafer, Photolithography, Throughput, Lithography, Maskless lithography, Computer hardware, Image compression, Data compression
- Abstract
Future lithography systems must produce denser microchips with smaller feature sizes, while maintaining throughput comparable to today's optical lithography systems. This places stringent data-handling requirements on the design of any maskless lithography system. Today's optical lithography systems transfer one layer of data from the mask to the entire wafer in about sixty seconds. To achieve a similar throughput for a direct-write maskless lithography system with a pixel size of 22 nm, data rates of about 12 Tb/s are required. Over the past 8 years, we have proposed a datapath architecture for delivering such a data rate to a parallel array of writers. Our proposed system achieves this data rate contingent on two assumptions: consistent 10-to-1 compression of lithography data, and implementation of a real-time hardware decoder, fabricated on a microchip together with a massively parallel array of lithography writers, capable of decoding 12 Tb/s of data. To address the compression efficiency problem, in the past few years we have developed a new technique, Context Copy Combinatorial Coding (C4), designed specifically for microchip layer images, with a low-complexity decoder for application to the datapath architecture. C4 combines the advantages of JBIG and ZIP to achieve compression ratios higher than existing techniques. We have also devised Block C4, a variation of C4 with up to a hundred times faster encoding, with little or no loss in compression efficiency. While our past work has focused on characterizing the compression efficiency of C4 and Block C4 on samples of a variety of industrial layouts, there has been no full-chip performance characterization of these algorithms. In this paper, we show compression efficiency results of Block C4 and competing techniques such as BZIP2 and ZIP for the Poly, Active, Contact, Metal1, Via1, and Metal2 layers of a complete industry 65 nm layout. Overall, we have found that compression efficiency varies significantly from design to design, from layer to layer, and even within parts of the same layer. It is difficult, if not impossible, to guarantee lossless 10-to-1 compression for all blocks within a layer, as desired in the design of our datapath architecture. Nonetheless, on the most complex Metal1 layer of our 65 nm full-chip microprocessor design, we show that an average lossless compression ratio of 5.2 is attainable, which corresponds to a throughput of 60 wafer layers per hour for a 1.33 Tb/s board-to-chip communications link (a back-of-envelope data-rate calculation follows this entry). As a reference, state-of-the-art HyperTransport 3.0 offers 0.32 Tb/s per link. These numbers demonstrate the role lossless compression can play in the design of a maskless lithography datapath.
- Published
- 2009
- Full Text
- View/download PDF
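A back-of-envelope check of the ~12 Tb/s figure quoted above. The 300 mm wafer diameter and the 5-bit grey-pixel depth are my assumptions, not stated in this abstract; with them, 22 nm pixels written in 60 seconds come out near the quoted rate.

```python
# Hedged sketch: raw data rate = (pixels on wafer) * (bits/pixel) / (write time).
import math

wafer_diameter_m = 300e-3          # assumed standard wafer size
pixel_m = 22e-9                    # pixel size from the abstract
bits_per_pixel = 5                 # assumed grey-pixel depth
layer_time_s = 60                  # one layer per minute, from the abstract

pixels = math.pi * (wafer_diameter_m / 2) ** 2 / pixel_m ** 2
rate_tbps = pixels * bits_per_pixel / layer_time_s / 1e12
print(f"{rate_tbps:.1f} Tb/s")     # ~12 Tb/s under these assumptions
```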
14. Developing DRC plus rules through 2D pattern extraction and clustering techniques
- Author
Jie Yang, Norma Rodriguez, Luigi Capodieci, and Vito Dai
- Subjects
Resolution enhancement technologies, Computer science, Design flow, Design for manufacturability, Visual inspection, Optical proximity correction, Artificial intelligence, Data mining, Cluster analysis, Lithography
- Abstract
As technology processes continue to shrink and aggressive resolution enhancement technologies (RET) and optical proximity correction (OPC) are applied, standard design rule constraints (DRC) sometimes fail to fully capture the concept of design manufacturability. DRC Plus augments standard DRC by applying fast 2D pattern matching to the design layout to identify problematic 2D patterns missed by DRC. DRC Plus offers several advantages over other DFM techniques: it offers a simple pass/no-pass criterion, it is simple to document as part of the design manual, it does not require compute-intensive simulations, and it does not require highly accurate lithographic models. These advantages allow DRC Plus to be inserted early in the design flow and enforced in conjunction with standard DRC. The creation of DRC Plus rules, however, remains a challenge. Hotspots derived from lithographic simulation may be used to create DRC Plus rules, but the process of translating a hotspot into a pattern is a difficult and manual effort. In this paper, we present an algorithmic methodology to identify hot patterns using lithographic simulation rather than hotspots. First, a complete set of pattern classes, which covers the entire design space of a sample layout, is computed (a toy pattern-classification sketch follows this entry). These pattern classes, by construction, can be directly used as DRC Plus rules. Next, the manufacturability of each pattern class is evaluated as a whole. This results in a quantifiable metric for both design impact and manufacturability, which can be used to select individual pattern classes as DRC Plus rules. Simulation experiments show that hundreds of rules can be created using this methodology, well beyond what is possible by hand. Selective visual inspection shows that the algorithmically generated rules are quite reasonable. In addition to producing DRC Plus rules, this methodology also provides a concrete understanding of design style, design variability, and how they affect manufacturability.
- Published
- 2009
- Full Text
- View/download PDF
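An illustrative sketch of 2D pattern extraction and clustering on a toy raster: every fixed-size window is a pattern instance, identical windows form a class, and class frequency ranks candidate rules. The real DRC Plus flow operates on polygon geometry with similarity rules, not exact bitmap equality.

```python
# Hedged sketch: count every w-by-w window of a 0/1 layout raster; the most
# frequent classes are candidates for rule creation and coverage analysis.
from collections import Counter

def pattern_classes(bitmap, w=3):
    rows, cols = len(bitmap), len(bitmap[0])
    classes = Counter()
    for r in range(rows - w + 1):
        for c in range(cols - w + 1):
            window = tuple(tuple(bitmap[r + i][c:c + w]) for i in range(w))
            classes[window] += 1
    return classes

layout = [
    [1, 1, 0, 1, 1],
    [0, 0, 0, 0, 0],
    [1, 1, 0, 1, 1],
]
for cls, freq in pattern_classes(layout).most_common(2):
    print(freq, cls)
```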
15. Overcoming the challenges of 22-nm node patterning through litho-design co-optimization
- Author
Jeng-Chun Chen, Yuansheng Ma, Sean D. Burns, J. Cho, Cyrus E. Tabery, Karen Petrillo, Matt Colburn, D. Horak, Erin Mclellan, Zachary Baum, Yi Zou, Stefan Schmitz, Vito Dai, Geng Han, Azalia A. Krasnoperova, S. Holmes, Chiew-seng Koay, Vamsi Paruchuri, Martin Burkhardt, R. H. Kim, L. Zhuang, Scott M. Mansfield, Christopher A. Spence, A. Klatchko, Jongwook Kye, Yunfei Deng, John C. Arnold, Scott Halle, S. Kanakasabapathy, Yunpeng Yin, and Josephine B. Chang
- Subjects
Diffraction, Computer science, Nanotechnology, Integrated circuit layout, Numerical aperture, Multiple patterning, Electronic engineering, Node (circuits), Electronics, Photolithography, Lithography, Immersion lithography
- Abstract
Historically, lithographic scaling was driven by both improvements in wavelength and numerical aperture. Recently, the semiconductor industry completed the transition to 1.35NA immersion lithography. The industry is now focusing on double patterning techniques (DPT) as a means to circumvent the limitations of Rayleigh diffraction. Here, the IBM Alliance demonstrates the extendibility of several double patterning solutions that enable scaling of logic constructs by decoupling the pattern spatially through mask design or temporally through innovative processes. This paper details a set of solutions that have enabled early 22 nm learning through careful lithography-design optimization.
- Published
- 2009
- Full Text
- View/download PDF
16. 22 nm technology node active layer patterning for planar transistor devices
- Author
Steven J. Holmes, Harry J. Levinson, Aasutosh Dave, Jason E. Meiring, Matthew E. Colburn, Vito Dai, Ryoung-han Kim, and Scott Halle
- Subjects
Materials science, Transistor, Chip, Design for manufacturability, Numerical aperture, Optoelectronics, Node (circuits), Process window, Photolithography, Lithography
- Abstract
As the semiconductor device size shrinks without a concomitant increase of the numerical aperture (NA = 1.35) or the index of the immersion fluid from the 32 nm technology node, 22 nm patterning technology presents challenges in resolution as well as process window. Therefore, aggressive resolution enhancement techniques (RET), design for manufacturability (DFM), and layer-specific lithographic process development are strongly required. In order to achieve successful patterning, co-optimization of the design, RET, and lithographic process becomes essential at the 22 nm technology node. In this paper, we demonstrate the patterning of the active layer for a 22 nm planar transistor device and discuss achievements and challenges in 22 nm lithographic printing. Key issues identified include printing tight pitches and 2D features simultaneously without sacrificing the cell size, while maintaining a large process window. As the poly-gate pitch is tightened, improved corner rounding performance is required in order to ensure proper gate length across the entire gate width. Utilizing water immersion at NA = 1.2 and 1.35, we demonstrate patterning of the active layer in a 22 nm technology node SRAM with a bit-cell size of 0.1 µm² and smaller, while providing a large process window for other features across the chip. It is shown that highly layer-specific and design-aware RET and lithographic process development are critical for the success of the 22 nm node technology.
- Published
- 2009
- Full Text
- View/download PDF
17. 32 nm logic patterning options with immersion lithography
- Author
Geng Han, Bradley Morgenfeld, Carl P. Babcock, Scott M. Mansfield, Derren N. Dunn, Jason E. Meiring, Peggy Lawson, Sean D. Burns, Haoren Zhuang, Scott Halle, Henning Haffner, W. Yan, Yi Zou, E. Geiss, Cyrus E. Tabery, Zachary Baum, L. Zhuang, Matthew E. Colburn, Vito Dai, Kafai Lai, Dario Gil, Martin Burkhardt, David R. Medeiros, Chandrasekhar Sarma, Scott D. Allen, and Len Y. Tsou
- Subjects
Computer science, Extreme ultraviolet lithography, Numerical aperture, Optics, Resist, Multiple patterning, Reticle, Optoelectronics, Photolithography, Lithography, Next-generation lithography, Immersion lithography
- Abstract
The semiconductor industry faces a lithographic scaling limit as the industry completes the transition to 1.35 NA immersion lithography. Both high-index immersion lithography and EUV lithography are facing technical challenges and commercial timing issues. Consequently, the industry has focused on enabling double patterning technology (DPT) as a means to circumvent the limitations of Rayleigh scaling. Here, the IBM development alliance demonstrates a series of double patterning solutions that enable scaling of logic constructs by decoupling the pattern spatially through mask design or temporally through innovative processes. These techniques have been successfully employed for early 32 nm node development using 45 nm generation tooling. Four different double patterning techniques were implemented. The first process illustrates local RET optimization through the use of a split reticle design. In this approach, a layout is decomposed into a series of regions with similar imaging properties and the illumination conditions for each are independently optimized. These regions are then printed separately into the same resist film in a multiple exposure process. The result is a singly developed pattern that could not be printed with a single illumination-mask combination. The second approach addresses 2D imaging with particular focus on both line-end dimension and linewidth control [1]. A double exposure-double etch (DE²) approach is used in conjunction with a pitch-filling sacrificial feature strategy. The third double exposure process, optimized for via patterns, also utilizes DE². In this method, a design is split between two separate masks such that the minimum pitch between any two vias is larger than the minimum metal pitch. This allows for final structures with vias at pitches beyond the capability of a single exposure. In the fourth method, dark-field double dipole lithography (DDL) has been successfully applied to BEOL metal structures and has been shown to be overlay tolerant [6]. Collectively, the double patterning solutions developed for early learning activities at 32 nm can be extended to 22 nm applications.
- Published
- 2008
- Full Text
- View/download PDF
18. DRC Plus: augmenting standard DRC with pattern matching on 2D geometries
- Author
Jie Yang, Luigi Capodieci, Norma Rodriguez, and Vito Dai
- Subjects
Resolution enhancement technologies, Optical proximity correction, Computer science, Design flow, Pattern matching, Integrated circuit design, Lithography, Reliability engineering, Design for manufacturability
- Abstract
Design rule constraints (DRC) are the industry workhorse for constraining design to ensure both physical and electrical manufacturability. However, as technology processes continue to shrink and aggressive resolution enhancement technologies (RET) and optical proximity correction (OPC) are applied, standard DRC sometimes fails to fully capture the concept of design manufacturability. Consequently, some DRC-clean layout designs are found to be difficult to manufacture. Attempts have been made to "patch up" standard DRC with additional rules to identify these specific problematic cases. However, due to the lack of specificity of DRC, these efforts often meet with mixed success. Although it typically resolves the issue at hand, quite often it is the enforcement of some DRC rule that causes other problematic geometries to be generated, as designers attempt to meet all the constraints given to them. In effect, designers meet the letter of the law, as defined by the DRC implementation code, without understanding the "spirit of the rule". This leads to more exceptional cases being added to the DRC manual, further increasing its complexity. DRC Plus adopts a different approach. It augments standard DRC by applying fast 2D pattern matching to the design layout to identify problematic 2D configurations which are difficult to manufacture. The tool then returns specific feedback to designers on how to resolve these issues. This basic approach offers several advantages over other DFM techniques: it is enforceable, it offers a simple pass/no-pass criterion, it is simple to document as part of the design manual, it does not require compute-intensive simulations, and it does not require highly accurate lithographic models that may not be available during design. These advantages allow DRC Plus to be inserted early in the design flow and enforced in conjunction with standard DRC.
- Published
- 2007
- Full Text
- View/download PDF
19. Reduced complexity compression algorithms for direct-write maskless lithography systems
- Author
Avideh Zakhor, Vito Dai, Borivoje Nikolić, and Hsin-I Liu
- Subjects
Lossless compression, Block code, Computer science, Golomb coding, Block diagram, Chip, Algorithm, Encoder, Maskless lithography, Data compression
- Abstract
Achieving the throughput of one wafer layer per minute with a direct-write maskless lithography system, using 22 nm pixels for 45 nm feature sizes, requires data rates of about 12 Tb/s. In our previous work, we developed a novel lossless compression technique specifically tailored to flattened, rasterized layout data, called Context-Copy-Combinatorial-Code (C4), which exceeds the compression efficiency of all other existing techniques including BZIP2, 2D-LZ, and LZ77, especially under the limited decoder buffer size required for hardware implementation. In this paper, we present two variations of the C4 algorithm. The first variation, Block C4, lowers the encoding time of C4 by several orders of magnitude while concurrently lowering the decoder complexity. The second variation, which replaces the hierarchical combinatorial coding part of C4 with Golomb run-length coding (a minimal Golomb run-length coder follows this entry), significantly reduces the decoder power and area compared to Block C4. We refer to this algorithm as Block Golomb Context Copy Code (Block GC3). We present detailed functional block diagrams of the Block C4 and Block GC3 decoders, along with their hardware performance estimates, as the first step of implementing the writer chip for maskless lithography.
- Published
- 2006
- Full Text
- View/download PDF
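A minimal Golomb run-length coder of the kind the abstract names as the low-complexity replacement for hierarchical combinatorial coding. The parameter choice, the zero-run framing, and the bitstring output are illustrative assumptions, not the Block GC3 format.

```python
# Hedged sketch: Golomb-code the lengths of zero runs between ones.
# Quotient in unary, remainder in (truncated) binary.
def golomb_encode(n, m):
    q, r = divmod(n, m)
    bits = "1" * q + "0"                       # unary quotient, then stop bit
    b = m.bit_length() - 1
    if (1 << b) == m:                          # m a power of two: plain binary
        return bits + format(r, f"0{b}b")
    cutoff = (1 << (b + 1)) - m                # truncated binary otherwise
    if r < cutoff:
        return bits + format(r, f"0{b}b")
    return bits + format(r + cutoff, f"0{b + 1}b")

def runs_of_zeros(bitstring):
    return [len(run) for run in bitstring.split("1")]

data = "0001000000010"                         # runs of zeros: 3, 7, 1
print([golomb_encode(r, 4) for r in runs_of_zeros(data)])
```

Golomb coding with parameter m is near-optimal when run lengths are geometrically distributed, which is why it suits sparse rasterized layout rows; the decoder is a counter plus a small shifter rather than a combinatorial table.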
20. Complexity reduction for C4 compression for implementation in maskless lithography datapath
- Author
Vito Dai and Avideh Zakhor
- Subjects
Lossless compression, Computer science, Datapath, Compression ratio, Electronic engineering, Throughput, Lithography, Computer hardware, Maskless lithography, Data compression
- Abstract
Achieving the throughput of one wafer per minute per layer with a direct-write maskless lithography system, using 25 nm pixels for 50 nm feature sizes, requires data rates of about 10 Tb/s. In previous work, we have shown that lossless binary compression plays a key role in the system architecture for such a maskless writing system. Recently, we developed a new compression technique, Context-Copy-Combinatorial-Code (C4), specifically tailored to lithography data, which exceeds the compression efficiency of all other existing techniques including BZIP2, 2D-LZ, and LZ77. The decoder for any chosen compression scheme must be replicated in hardware tens of thousands of times in any practical direct-write lithography system utilizing compression. As such, decoder implementation complexity has a significant impact on overall complexity. In this paper, we explore the tradeoff between compression ratio and decoder buffer size for C4. Specifically, we present a number of techniques to reduce the complexity of C4 compression. First, buffer compression is introduced as a method to reduce decoder buffer size by an order of magnitude without sacrificing compression efficiency. Second, linear prediction is used as a low-complexity alternative to both context-based prediction and binarization. Finally, we allow for copy errors, improving the compression efficiency of C4 at small buffer sizes. With these techniques in place, for a fixed buffer size, C4 achieves a significantly higher compression ratio than existing compression algorithms. We also present a detailed functional block diagram of the C4 decoding algorithm as a first step towards a hardware realization.
- Published
- 2005
- Full Text
- View/download PDF
21. Layout decompression chip for maskless lithography
- Author
Avideh Zakhor, William G. Oldham, Ben J. Wild, Vito Dai, Benjamin Warlick, Yashesh Shroff, and Borivoje Nikolić
- Subjects
Engineering, Interface (computing), Chip, Huffman coding, CMOS, Embedded system, Lithography, Throughput, Maskless lithography, Data compression
- Abstract
Future maskless lithography systems require data throughputs on the order of tens of terabits per second in order to have performance comparable to today's mask-based lithography systems. This work presents an approach to overcome the throughput problem by compressing the layout data and decompressing it on the chip that interfaces to the writers. To achieve the required throughput, many decompression paths have to operate in parallel. The concept is demonstrated by designing an interface chip for layout decompression, consisting of a Huffman decoder and a Lempel-Ziv systolic decompressor (a toy LZ77 decoder follows this entry). The 5.5 mm × 2.5 mm prototype chip, implemented in a 0.18 µm, 1.8 V CMOS process, is fully functional at 100 MHz, dissipating 30 mW per decompression row. By scaling the chip size up and implementing it in a 65 nm technology, the decompressed data throughput required for writing 60 wafers per hour in 45 nm technology is feasible.
- Published
- 2004
- Full Text
- View/download PDF
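A toy software model of the Lempel-Ziv half of such a decompression path: an LZ77 decoder expanding (offset, length, literal) tokens against its own output window. The token format is assumed for illustration; the chip's Huffman stage and systolic organization are not modeled.

```python
# Hedged sketch: each token copies `length` bytes from `offset` positions back
# in the already-decoded output, then appends one literal byte (if present).
def lz77_decode(tokens):
    out = bytearray()
    for offset, length, literal in tokens:
        for _ in range(length):                 # overlapping copies permitted
            out.append(out[-offset])
        if literal is not None:
            out.append(literal)
    return bytes(out)

tokens = [(0, 0, 0xAA), (0, 0, 0xBB), (2, 4, 0xCC)]
print(lz77_decode(tokens).hex())                # aabbaabbaabbcc
```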
22. Advanced low-complexity compression for maskless lithography data
- Author
Avideh Zakhor and Vito Dai
- Subjects
Lossless compression, Computer science, Huffman coding, Arithmetic coding, Computer engineering, Compression ratio, Electronic engineering, Photolithography, JBIG, Lithography, Maskless lithography
- Abstract
A direct-write maskless lithography system using 25 nm pixels for 50 nm feature sizes requires data rates of about 10 Tb/s to maintain the throughput of one wafer per minute per layer achieved by today's optical lithography systems. In a previous paper, we presented an architecture that achieves this data rate contingent on 25-to-1 compression of lithography data, and on implementation of a real-time decompressor fabricated on the same chip as a massively parallel array of lithography writers for 50 nm feature sizes. A number of compression techniques, including JBIG, ZIP, the novel 2D-LZ, and BZIP2, were demonstrated to achieve sufficiently high compression ratios on lithography data to make the architecture feasible, although no single technique could achieve this for all test layouts. In this paper we present a novel lossless compression algorithm called Context Copy Combinatorial Code (C4), specifically tailored for lithography data. It successfully combines the advantages of context-based modeling in JBIG and copying in ZIP to achieve higher compression ratios across all test layouts. As part of C4, we have developed a low-complexity binary entropy coding technique called combinatorial coding, which is simultaneously as efficient as arithmetic coding and as fast as Huffman coding. Compression results show C4 outperforms JBIG, ZIP, BZIP2, and 2D-LZ, achieving lossless compression ratios greater than 22 for binary layout image data and greater than 14 for grey-pixel image data. The tradeoff between decoder buffer size, which directly affects implementation complexity, and compression ratio is examined. For the same buffer size, C4 achieves higher compression than LZ77, ZIP, and BZIP2.
- Published
- 2004
- Full Text
- View/download PDF
23. Binary combinatorial coding
- Author
Avideh Zakhor and Vito Dai
- Subjects
Discrete mathematics, Shannon–Fano coding, Tunstall coding, Variable-length code, Entropy encoding, Huffman coding, Modified Huffman coding, Context-adaptive binary arithmetic coding, Arithmetic coding, Mathematics
- Abstract
Summary form only given. A novel binary entropy code, called combinatorial coding (CC), is presented. The theoretical basis for CC has been described previously in the context of universal coding, enumerative coding, and minimum description length. The code described in these references works as follows: assume the source data are binary of length M, memoryless, and generated with an unknown parameter θ (the probability that a "1" occurs). The compression efficiency and the encoding and decoding speed of CC were tested against Huffman and arithmetic coding. Over the entire test, CC achieved the compression efficiency of arithmetic coding together with the coding speed of Huffman coding. (A small enumerative-coding example follows this entry.)
- Published
- 2003
- Full Text
- View/download PDF
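A small example of the enumerative idea underlying combinatorial coding: a binary block of length M with k ones is determined by k together with its lexicographic rank among all C(M, k) such blocks, so the block body can be coded in about log2 C(M, k) bits. The ranking function below is a textbook construction, not the paper's exact code.

```python
# Hedged sketch: rank a 0/1 block among all blocks of the same length and
# weight, using the combinatorial number system.
from math import comb

def enumerative_rank(bits):
    """Lexicographic rank of a 0/1 sequence among sequences of equal weight."""
    rank, k, n = 0, bits.count(1), len(bits)
    for i, b in enumerate(bits):
        if b == 1:
            rank += comb(n - i - 1, k)  # blocks with a 0 here precede this one
            k -= 1
    return rank

block = [0, 1, 0, 0, 1, 0, 0, 0]
# weight k, rank, and the number of possible blocks C(M, k)
print(block.count(1), enumerative_rank(block), comb(len(block), block.count(1)))
```

Here the block has M = 8 and k = 2, so its body needs only about log2 C(8, 2) ≈ 4.8 bits instead of 8, which is the efficiency the abstract compares against arithmetic coding.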
24. Lossless layout compression for maskless lithography systems
- Author
Vito Dai and Avideh Zakhor
- Subjects
Lossless compression, Engineering, Optical engineering, Computer data storage, Electronic engineering, Photolithography, JBIG, Lithography, Maskless lithography, Image compression
- Abstract
Future lithography systems must produce denser chips with smaller feature sizes, while maintaining throughput comparable to today's optical lithography systems. This places stringent data-handling requirements on the design of any maskless lithography system. Today's optical lithography systems transfer one layer of data from the mask to the entire wafer in about sixty seconds. To achieve a similar throughput for a direct-write maskless lithography system with a pixel size of 25 nm, data rates of about 10 Tb/s are required. In this paper, we propose an architecture for delivering such a data rate to a parallel array of writers. In arriving at this architecture, we conclude that pixel-domain compression schemes are essential for delivering these high data rates. To achieve the desired compression ratios, we explore a number of binary lossless compression algorithms and apply them to a variety of layers of typical circuits such as memory and control. The algorithms explored include JBIG (from the Joint Bi-level Image Experts Group), Ziv-Lempel (LZ77) as implemented by ZIP, as well as our own extension of Ziv-Lempel to two dimensions. For all the layouts we tested, at least one of the above schemes achieves a compression ratio of 20 or larger, demonstrating the feasibility of the proposed system architecture.
- Published
- 2000
- Full Text
- View/download PDF
25. 22-nm-node technology active-layer patterning for planar transistor devices
- Author
Steven J. Holmes, Harry J. Levinson, Matthew E. Colburn, Scott Halle, Jason E. Meiring, Vito Dai, Aasutosh Dave, and Ryoung-han Kim
- Subjects
Materials science, Resolution enhancement technologies, Transistor, Nanotechnology, Chip, Design for manufacturability, Optical proximity correction, Optoelectronics, Process window, Photolithography, Lithography
- Abstract
As the semiconductor device size shrinks without a concomitant increase of numerical aperture (NA) and refractive index of the immersion fluid, printing 22-nm-technology devices presents challenges in resolution. Therefore, aggressive integration of resolution enhancement techniques (RET), design for manufacturability (DFM), and layer-specific lithographic process development is strongly required in 22-nm-technology lithography. We show patterning of an active layer of a 22-nm-node planar logic transistor device and discuss achievements and challenges. Key issues identified include printing tight pitches, isolated trenches, and 2-D features while maintaining a large lithographic process window across the chip as the cell size scales down. Utilizing NA = 1.2, printing of a static random access memory (SRAM) bit cell of size 0.1 µm² and of other critical features across the chip with a process window is demonstrated.
- Published
- 2010
- Full Text
- View/download PDF
26. Reduced complexity compression algorithms for direct-write maskless lithography systems
- Author
Vito Dai, Avideh Zakhor, and Hsin-I Liu
- Subjects
Lossless compression, Computer science, Block diagram, Parallel computing, Chip, Golomb coding, Lithography, Maskless lithography, Image compression, Data compression
- Abstract
Achieving the throughput of one wafer layer per minute with a direct-write maskless lithography system, using 22-nm pixels for 45-nm feature sizes, requires data rates of about 12 Tb/s. In our previous work, we developed a novel lossless compression technique specifically tailored to flattened, rasterized, layout data called context copy combinatorial code (C4), which exceeds the compression efficiency of all other existing techniques including BZIP2, 2D-LZ, and LZ77, especially under a limited decoder buffer size, as required for hardware implementation. In this work, we present two variations of the C4 algorithm. The first variation, block C4, lowers the encoding time of C4 by several orders of magnitude, concurrently with lowering the decoder complexity. The second variation, which involves replacing the hierarchical combinatorial coding part of C4 with Golomb run-length coding, significantly reduces the decoder power and area as compared to block C4. We refer to this algorithm as block Golomb context copy code (block GC3). We present the detailed functional block diagrams of block C4 and block GC3 decoders, along with their hardware performance estimates as the first step of implementing the writer chip for maskless lithography.
- Published
- 2007
- Full Text
- View/download PDF