DTCO and Computational Patterning 2022
DOI: 10.1117/12.2606715
Machine learning OPC with generative adversarial networks

Cited by 5 publications (3 citation statements) | References 0 publications
“…The forward process of the diffusion model describes obtaining pure noise y_T ~ N(0, I) at a large time step T by progressively adding Gaussian noise to the data y_{t-1} at each time step t-1, where t ranges from 1 to T, as illustrated in Equation (6). This equation is an equivalent formulation to Equation (5). Furthermore, Eq.…”
Section: Model Architecture and Training Algorithm for Diffusion Model
Citation type: mentioning, confidence: 99%
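The forward-process step quoted above can be illustrated with a minimal sketch (written here for illustration, not taken from the cited paper); the linear beta schedule, the step count, and the sample shape are assumptions:

# Minimal sketch of the forward diffusion process described above: noise is added
# progressively so that y_T approaches N(0, I). Schedule values are illustrative.
import numpy as np

T = 1000                                # assumed number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)      # assumed linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative products used in the closed form

def forward_diffuse(y0, t, rng=np.random.default_rng()):
    # Sample y_t directly from y_0 using the closed-form forward process:
    # y_t = sqrt(alpha_bar_t) * y_0 + sqrt(1 - alpha_bar_t) * eps, eps ~ N(0, I)
    eps = rng.standard_normal(y0.shape)
    y_t = np.sqrt(alpha_bars[t]) * y0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return y_t, eps

y0 = np.zeros((64, 64))                 # placeholder data sample
y_T, _ = forward_diffuse(y0, t=T - 1)   # near t = T, alpha_bar_t ~ 0, so y_T is essentially pure noise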
“…The conventional layout re-targeting approach involves multiple manual trial-and-error processes, with numerous iterations for the optimization of sizing values, resulting in an excessively long turn-around time (TAT). Leveraging the substantial growth of machine learning in chip manufacturing [3-5], we demonstrate the application of this technique to achieve highly accurate and efficient solutions for complex predictions and inferences on sizing values. Our proposed deep learning-assisted layout re-targeting method requires only one iteration to predict sizing values, significantly reducing TAT and compensating for errors in model or mask correction.…”
Section: Introduction
Citation type: mentioning, confidence: 99%
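As a rough illustration of the one-pass sizing-value prediction described in this excerpt (the cited paper's actual network, input features, and training setup are not given here, so everything below is an assumed stand-in), a small regressor could map local layout features to a retargeting bias in a single inference:

# Hedged sketch: a small regressor predicts sizing values in one pass, replacing
# iterative manual trial-and-error. Feature count and layer sizes are assumptions.
import torch
import torch.nn as nn

class SizingPredictor(nn.Module):
    def __init__(self, num_features: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),            # predicted sizing value (e.g., edge bias in nm)
        )

    def forward(self, x):
        return self.net(x)

model = SizingPredictor()
layout_features = torch.randn(8, 16)     # placeholder geometric features per edge/segment
sizing_values = model(layout_features)   # sizing values predicted in a single iteration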
“…Recent state-of-the-art ML-RET methods use image-based input by converting design patterns into image slices [5, 7-9]. The ML model then translates the input images into the optimized photomask domain to produce an image with the final photomask shapes.…”
Section: Image-based Photomask Correction
Citation type: mentioning, confidence: 99%
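The image-to-image setup described in this excerpt could look roughly like the sketch below (an assumed pix2pix/U-Net-style stand-in, not the cited papers' exact architecture): the design clip is rasterized into an image and a convolutional generator maps it to corrected photomask shapes.

# Illustrative sketch of image-based photomask correction as image-to-image translation.
# The tiny encoder-decoder below stands in for a GAN generator; sizes are assumptions.
import torch
import torch.nn as nn

class MaskGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),     # 32 -> 64
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Sigmoid(),   # 64 -> 128
        )

    def forward(self, design_image):
        return self.decoder(self.encoder(design_image))

design_clip = torch.rand(1, 1, 128, 128)    # rasterized design pattern slice (assumed size)
mask_image = MaskGenerator()(design_clip)   # predicted photomask shapes in image form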