2023
DOI: 10.1109/tase.2022.3164831
A Novel Methodology for Lens Matching in Compact Lens Module Assembly


Cited by 6 publications (3 citation statements)
References: 38 publications
“…In contrast, the remaining logs were used to train the predictors. For example, when the cavity number combination of a specific production log was [3,4,1,10,15,16,4], and it only appeared in the dataset once, the production log belonged to the test set. It should be noted that over-fitting could occur as our problem deals with an industrial dataset with significant information duplication due to automatic mass production.…”
Section: Baseline
Mentioning confidence: 99%
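
The excerpt above describes a duplicate-aware split: production logs whose cavity-number combination occurs only once in the dataset are held out as the test set, while all remaining logs train the predictors. A minimal sketch of that split, assuming a pandas DataFrame with a hypothetical `cavity_combination` column, could look like this:

```python
# Illustrative sketch (not the authors' code) of the split described above:
# logs whose cavity-number combination occurs only once in the dataset go to
# the test set; all other logs train the predictors. The column name
# "cavity_combination" and the DataFrame layout are assumptions.
import pandas as pd

def split_by_cavity_combination(logs: pd.DataFrame,
                                combo_col: str = "cavity_combination"):
    """Return (train, test) where test holds logs whose cavity-number
    combination appears exactly once in the full dataset."""
    # Lists are unhashable, so serialize each combination, e.g. [3,4,1,...] -> "3-4-1-..."
    keys = logs[combo_col].apply(lambda c: "-".join(map(str, c)))
    counts = keys.value_counts()
    is_unique = keys.map(counts) == 1
    return logs[~is_unique].copy(), logs[is_unique].copy()

if __name__ == "__main__":
    demo = pd.DataFrame({
        "cavity_combination": [[3, 4, 1, 10, 15, 16, 4],   # appears once -> test
                               [1, 2, 3, 4, 5, 6, 7],      # appears twice -> train
                               [1, 2, 3, 4, 5, 6, 7]],
        "yield": [0.91, 0.88, 0.90],
    })
    train, test = split_by_cavity_combination(demo)
    print(len(train), "train logs /", len(test), "test logs")
```

Serializing each combination to a string merely keeps the grouping key hashable; any stable key such as a tuple would work equally well.
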
“…Another study used the GA to find the optimum part lens sets for the combination of anomalous dispersion glass and materials inside a liquid lens by utilizing equations on their respective chromatic aberration and root mean square spot diameter [15]. Li et al [16] suggested a GA and deep neural network (DNN)-based surrogate model for lens matching in lens module assembly to replace expensive optical simulators. Our lens module assembly optimization method, which selects both cavity combinations and orientation angle combinations, has previously been submitted for patent in Korea, the United States, and China [17].…”
Section: Introduction
Mentioning confidence: 99%
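
As a rough illustration of the surrogate-assisted search the excerpt attributes to Li et al. [16], the sketch below trains a small neural-network regressor on synthetic (cavity assignment, quality) pairs and lets a genetic algorithm query it in place of the expensive optical simulator. The encoding, the surrogate target, and all GA settings are assumptions made for the example, not the published method.

```python
# Hedged sketch: a GA searches over cavity assignments while a trained
# neural-network surrogate (sklearn's MLPRegressor on synthetic data) stands
# in for the costly optical simulation when scoring candidates.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
N_LENSES, N_CAVITIES = 5, 16            # assumed: 5 lens positions, 16 cavities each

# --- 1. Fit a surrogate on (cavity assignment -> simulated quality) pairs ---
def fake_simulator(x):                   # placeholder for the expensive simulation
    return -np.sum((x - 8) ** 2, axis=1) + rng.normal(0, 0.5, len(x))

X_hist = rng.integers(1, N_CAVITIES + 1, size=(2000, N_LENSES)).astype(float)
y_hist = fake_simulator(X_hist)
surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=1000,
                         random_state=0).fit(X_hist, y_hist)

# --- 2. GA over cavity assignments, fitness taken from the surrogate only ---
def evolve(pop_size=60, generations=40, mutation_rate=0.2):
    pop = rng.integers(1, N_CAVITIES + 1, size=(pop_size, N_LENSES))
    for _ in range(generations):
        fitness = surrogate.predict(pop.astype(float))
        order = np.argsort(fitness)[::-1]
        parents = pop[order[: pop_size // 2]]              # truncation selection
        # uniform crossover between random parent pairs
        idx = rng.integers(0, len(parents), size=(pop_size, 2))
        mask = rng.random((pop_size, N_LENSES)) < 0.5
        children = np.where(mask, parents[idx[:, 0]], parents[idx[:, 1]])
        # random-reset mutation
        mut = rng.random(children.shape) < mutation_rate
        children[mut] = rng.integers(1, N_CAVITIES + 1, size=mut.sum())
        pop = children
    best = pop[np.argmax(surrogate.predict(pop.astype(float)))]
    return best

print("best cavity assignment found:", evolve())
```
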
“…The models tested in this study include SVM, k-NN, and RF. To enhance our analysis, we have introduced two additional potent models: eXtreme Gradient Boosting (XGBoost) and Binary deep neural network (BDNN) [39,40], for comparison with the stacking ensemble model proposed in this research. We will validate these models through a comprehensive process of parameter tuning, training, and validation to assess their performance while considering various combinations of selected features.…”
Section: Parameter Tuning, Training, and Validation
Mentioning confidence: 99%
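
A hedged sketch of the comparison workflow described above, kept to scikit-learn only: SVM, k-NN and RF are tuned by cross-validated grid search, two further learners are added (GradientBoostingClassifier and MLPClassifier as stand-ins for XGBoost and the binary DNN), and a stacking ensemble built from the tuned models is validated the same way. The dataset, grids, and model choices are illustrative assumptions, not the cited study's setup.

```python
# Illustrative model-comparison workflow: per-model grid search, then a
# stacking ensemble over the tuned base learners, all cross-validated.
from sklearn.datasets import make_classification
from sklearn.ensemble import (GradientBoostingClassifier, RandomForestClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, cross_val_score, train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=12, random_state=0)  # synthetic stand-in data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

candidates = {
    "SVM": (make_pipeline(StandardScaler(), SVC()),
            {"svc__C": [0.1, 1, 10], "svc__gamma": ["scale", 0.1]}),
    "kNN": (make_pipeline(StandardScaler(), KNeighborsClassifier()),
            {"kneighborsclassifier__n_neighbors": [3, 5, 9]}),
    "RF": (RandomForestClassifier(random_state=0),
           {"n_estimators": [100, 300], "max_depth": [None, 8]}),
    "GB (XGBoost stand-in)": (GradientBoostingClassifier(random_state=0),
                              {"n_estimators": [100, 300], "learning_rate": [0.05, 0.1]}),
    "MLP (BDNN stand-in)": (make_pipeline(StandardScaler(),
                                          MLPClassifier(max_iter=2000, random_state=0)),
                            {"mlpclassifier__hidden_layer_sizes": [(32,), (64, 32)]}),
}

tuned = {}
for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5, n_jobs=-1).fit(X_tr, y_tr)
    tuned[name] = search.best_estimator_
    print(f"{name:24s} test accuracy: {search.best_estimator_.score(X_te, y_te):.3f}")

# Stacking ensemble over the tuned base learners, validated the same way.
stack = StackingClassifier(estimators=list(tuned.items()),
                           final_estimator=LogisticRegression(max_iter=1000), cv=5)
print(f"{'Stacking ensemble':24s} CV accuracy:   "
      f"{cross_val_score(stack, X_tr, y_tr, cv=5).mean():.3f}")
```

In practice the synthetic data would be replaced by the study's selected feature combinations, and the true XGBoost classifier and binary DNN would slot into the same candidate dictionary.
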