As design rule (DR) scaling continues to push lithographic imaging to higher numerical aperture (NA) and smaller k1 factor, extensive use of resolution enhancement techniques has become general practice. These techniques not only add considerable complexity to the design rules themselves, but can also lead to undesired and/or unanticipated problematic imaging effects known as "hotspots." This is particularly common for metal layers in interconnect patterning because of the many complex random and bidirectional (2D) patterns present in typical layouts. In such situations, validating the DR set becomes challenging, and the ability to analyze large numbers of 2D layouts is paramount to generating a DR set that encodes all lithographic constraints needed to avoid hotspot formation.

Process window (PW) and mask error enhancement factor (MEEF) are the two most important lithographic constraints in defining design rules. Traditionally, characterization of PW and MEEF by simulation has been carried out using discrete cut planes. For a complex 2D pattern or a large 2D layout, this approach is intractable: the most likely locations of PW or MEEF hotspots often cannot be predicted empirically, and using large numbers of cut planes to ensure that all hotspots are detected leads to excessive simulation time. In this paper, we present a novel approach to analyzing full-field PW and MEEF using the inverse lithography technology (ILT) technique [1], in the context of restrictive design rule development for the 32-nm node. Using this technique, PW and MEEF are evaluated at every pixel within a design, thereby addressing the limitations of the cut-plane approach while providing a complete view of lithographic performance. In addition, we have developed an analysis technique using color bitmaps that greatly facilitates visualization of PW and MEEF hotspots anywhere in the design and at an arbitrary level of resolution.

We have employed the ILT technique to explore metal patterning options and their impact on 2D design rules. We show the utility of this technique for quickly screening specific rule and process choices, including illumination condition and process bias, using large numbers of parameterized structures. We further demonstrate how this technique can be used to ascertain the full 2D impact of these choices using carefully constructed regression suites based on standard random logic cells. The results of this study demonstrate how this simulation approach can greatly improve the accuracy and quality of 2D rules, while simultaneously accelerating learning cycles in the design phase.
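For reference, the two constraints at the center of this study have standard textbook definitions (stated here for the reader's convenience; they are not specific to this paper). The k1 factor comes from the Rayleigh resolution criterion, and MEEF quantifies how strongly mask CD errors are amplified on the wafer:

R = k_1 \frac{\lambda}{\mathit{NA}}, \qquad
\mathrm{MEEF} = \frac{\partial\, CD_{\text{wafer}}}{\partial\,(CD_{\text{mask}}/M)}

where R is the minimum printable half-pitch, \lambda the exposure wavelength, NA the numerical aperture, and M the mask reduction ratio (typically 4). A MEEF well above 1 marks a pattern whose printed dimensions are hypersensitive to mask errors, i.e., a likely hotspot.

The per-pixel MEEF evaluation described above can be illustrated with a first-order finite-difference sketch. The array names and the dI/|grad I| edge-displacement proxy below are our own illustrative assumptions, not the authors' implementation:

import numpy as np

def meef_map(I_nom, I_biased, mask_bias_nm, pixel_nm, eps=1e-9):
    """Approximate per-pixel MEEF from two simulated aerial images:
    I_nom for the nominal mask and I_biased for a mask whose edges are
    biased by mask_bias_nm (wafer scale). Local edge displacement is
    estimated to first order as dI / |grad I| at the print threshold."""
    gy, gx = np.gradient(I_nom, pixel_nm)          # intensity gradient (per nm)
    grad_mag = np.hypot(gx, gy) + eps              # guard against flat regions
    edge_shift_nm = (I_biased - I_nom) / grad_mag  # first-order contour motion
    return edge_shift_nm / mask_bias_nm            # dimensionless MEEF estimate

Such a map is physically meaningful only near printed contours, where the gradient is large; rendering it as a color bitmap is one way to realize the hotspot visualization described above.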
In this paper we present a method that optimizes the OPC model generation process. The elements of this optimized flow include an automated test structure layout engine, automated SEM recipe creation and data collection, and OPC model anchoring/validation software. The flow is streamlined by standardizing and automating these steps and their inputs and outputs. A major benefit of this methodology is the ability to perform multiple OPC "screening" refinement loops in a short time before embarking on final model generation. Each step of the flow is discussed in detail, as is our multi-pass experimental design for converging on a final OPC data set. Implementation of this streamlined process flow drastically reduces the time to complete OPC modeling and allows generation of multiple complex OPC models in a short time, resulting in faster release and transfer of a next-generation product to manufacturing.

Keywords: CD-SEM, OPC, pattern matching, SEM image analysis, edge placement error, automatic recipe generation

INTRODUCTION

Creating advanced OPC models for new technology nodes is an increasingly challenging aspect of lithographic process development. The increased complexity of patterns and illumination conditions, and an ever-increasing calibration space with each technology node, mean that each critical layer requires hundreds to thousands of CD-SEM measurements to characterize OPC behavior adequately and allow generation of an accurate OPC model [1]. With each new technology node, the number of levels requiring model-based OPC (MBOPC) increases significantly. As illustrated in Figure 1, the number of levels employing MBOPC has been rising with each successive node since the 180-nm node; the increase greatly accelerated between the 130-nm and 90-nm nodes and has continued at about the same rate ever since [2]. In addition to more levels needing MBOPC, use of assist features is increasing significantly, and the number of variables needed to describe and implement proximity correction adequately is escalating (Fig. 2). Sub-resolution assist feature (SRAF) optimization requires multiple placement scenarios for each line/space or hole combination, further increasing the need for a very large body of data to develop optimal OPC corrections. Beyond the large body of data required, multiple passes and iterations of OPC generation are needed to achieve optimal correction.
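To make the screening loop concrete, the sketch below ranks candidate OPC models by RMS edge placement error (EPE) against CD-SEM data and reports whether the best candidate has converged. All names here (candidate_models, predict_cd, the site objects) are hypothetical placeholders, not the actual tools used in this work:

import math

def rms_epe(model, sem_sites):
    """RMS edge placement error of a candidate model over all
    CD-SEM measurement sites (predicted CD minus measured CD)."""
    errors = [model.predict_cd(site.layout) - site.measured_cd
              for site in sem_sites]
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def screening_pass(candidate_models, sem_sites, tol_nm=1.0):
    """One pass of the refinement loop: rank candidates by RMS EPE
    and flag convergence. If not converged, the candidate set is
    refined and the pass repeated with newly collected data."""
    ranked = sorted(candidate_models, key=lambda m: rms_epe(m, sem_sites))
    best = ranked[0]
    return best, rms_epe(best, sem_sites) <= tol_nm

In a multi-pass design of experiments, each pass narrows the model form and feeds the next round of automated test-structure layout and SEM data collection, which is what allows several refinement loops before final model generation.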