2022 7th International Conference on Machine Learning Technologies (ICMLT) 2022
DOI: 10.1145/3529399.3529436
NLP Cross-Domain Recognition of Retail Products

Cited by 4 publications (3 citation statements) | References 10 publications
“…In the other fusion models, an image-based model and a multimodal model have to be trained separately to achieve the same purpose. In [9], it is shown that text-based grocery product recognition models are robust to domain adaptation, while the performance of image-based models degrades significantly. This domain adaptation aspect is a practical problem when a system is installed in a new grocery store or if the camera type or camera position changes in an existing one.…”
Section: Discussion and Recommendations
confidence: 99%
“…For the recognition of the product package text on grocery products, BERT [85] with a small classification head [9] has shown notably better classification results than other methods using GloVe embeddings [86]. Therefore, we use three types of text models based on the Transformer architecture used in BERT: the baseline BERT model [85], the optimized DistilBERT [87] model, and the more accurate DeBERTa [88] model.…”
Section: Model Selection
confidence: 99%
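The "small classification head" named in the statement above is typically a single linear layer applied to the encoder's [CLS] embedding. A minimal sketch of that head follows; the hidden size, class count, and the random stand-in for the encoder output are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np

# Minimal sketch of a small classification head on top of a frozen text
# encoder's [CLS] embedding. The encoder output is faked with random numbers
# here; in the cited setting it would come from a pretrained BERT model.
rng = np.random.default_rng(0)

HIDDEN = 768      # BERT-base hidden size
N_CLASSES = 10    # hypothetical number of product classes

# Stand-in for the [CLS] token embedding of one product-package text.
cls_embedding = rng.normal(size=(1, HIDDEN))

# The head itself: one linear layer followed by a softmax.
W = rng.normal(scale=0.02, size=(HIDDEN, N_CLASSES))
b = np.zeros(N_CLASSES)

logits = cls_embedding @ W + b
probs = np.exp(logits - logits.max()) / np.exp(logits - logits.max()).sum()

predicted_class = int(probs.argmax())
```

In practice only `W` and `b` (plus, optionally, the encoder) are trained with a cross-entropy loss over the product classes; swapping BERT for DistilBERT or DeBERTa changes the encoder but leaves this head unchanged.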