2023
DOI: 10.24132/csrn.3301.58

Semi-Supervised Learning Approach for Fine Grained Human Hand Action Recognition in Industrial Assembly

Fabian Sturm, Rahul Sathiyababu, Elke Hergenroether et al.

Abstract: Industrial manual assembly has so far been impossible to imagine without humans because of their flexibility and adaptability. However, the assembly process does not always benefit from human intervention. The assembler's error-proneness due to disturbance, distraction or inattention calls for intelligent support of the employee, and the task is well suited to deep learning approaches because of the constantly recurring, repetitive data patterns. However, there is the problem that the labels of the data are no…

Cited by 1 publication (1 citation statement)
References 22 publications
“…Subsequently, the suitability for a transfer of the weights to an industrial use case is checked and evaluated on the basis of values such as the amount of labeled data, training time and performance on a test data set. In addition to this extended version of [31], the occurrence of concept drift during fine-tuning is considered and possible responses to prevent performance degradation are outlined. This phenomenon refers to changing circumstances potentially impacting the data and the performance of machine learning models consuming it [9].…”
Section: Introduction
confidence: 99%
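The cited passage describes concept drift only at a conceptual level. Purely as an illustration, and not taken from either the cited or the citing paper, the following is a minimal sketch of one possible response: a monitor that tracks a rolling accuracy during fine-tuning and flags a potential drift once it falls below the baseline measured on the original test set. The class name, parameters and thresholds (DriftMonitor, baseline_acc, tolerance, window) are hypothetical choices.

from collections import deque

import numpy as np


class DriftMonitor:
    """Flags potential concept drift by watching a performance metric
    during fine-tuning (illustrative helper, not from the cited paper)."""

    def __init__(self, baseline_acc: float, tolerance: float = 0.05, window: int = 20):
        self.baseline_acc = baseline_acc      # accuracy on the original test set
        self.tolerance = tolerance            # allowed absolute drop before flagging
        self.recent = deque(maxlen=window)    # rolling window of per-batch accuracy

    def update(self, batch_acc: float) -> bool:
        """Record one batch accuracy; return True if drift is suspected."""
        self.recent.append(batch_acc)
        if len(self.recent) < self.recent.maxlen:
            return False                      # not enough evidence yet
        return float(np.mean(self.recent)) < self.baseline_acc - self.tolerance


# Toy usage: simulated accuracies degrade slowly, so the monitor eventually fires.
monitor = DriftMonitor(baseline_acc=0.90)
rng = np.random.default_rng(0)
for step in range(200):
    acc = 0.90 - 0.002 * step + rng.normal(0, 0.01)
    if monitor.update(acc):
        print(f"possible concept drift at step {step}: "
              f"rolling accuracy {np.mean(monitor.recent):.2f} "
              f"vs baseline {monitor.baseline_acc:.2f}")
        break

A detector of this kind would only signal that the incoming data may have shifted; what response is appropriate (pausing fine-tuning, relabeling, retraining) is left open here, as the quoted statement only says that possible responses are outlined in the citing work.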