2022 IEEE 31st International Symposium on Industrial Electronics (ISIE)
DOI: 10.1109/isie51582.2022.9831515
Person Re-Identification on a Mobile Robot Using a Depth Camera

Abstract: In this paper, we designed and implemented a real-time person re-identification API on a mobile robot, for a closed- and open-world setting, using only the IR gray value image of a depth camera. Since common datasets are not usable, we created our own dataset using the IR gray value images, the pose, and image processing techniques. We then trained a state-of-the-art neural network for person re-identification with common parameters and methods. To run it in real time, we sped up the model as well as the appl…

Cited by 5 publications (4 citation statements); references 29 publications.
“…Therefore, in Figure 2 a, to learn a cross-modal reidentification metric, our work takes different original images of N persons and takes their generated images in random poses and in random styles to randomly transform these different training images into different modals, say RGB, Grayscale, and Sketch modals. In our work, RGB images are transformed into Grayscale and Sketch modals due to the reason that in the real world, it is not necessary to always use an RGB sensor on a mobile robot; already, a large number of works have used IR modality [ 12 ]. However, a large number of public reidentification datasets have no IR images; therefore, we opted to transform RGB images into grayscale images.…”
Section: Methods
Mentioning confidence: 99%
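The citation statement above describes randomly transforming RGB training images into Grayscale and Sketch "modals" so the learned metric generalizes to non-RGB sensors. A minimal NumPy sketch of such a random modality transform is shown below; the function names, the box-blur sketch approximation, and the BT.601 luminance weights are illustrative assumptions, not the cited paper's actual implementation.

```python
import random
import numpy as np

# ITU-R BT.601 luminance weights for RGB-to-gray conversion.
LUMA = np.array([0.299, 0.587, 0.114])

def box_blur(gray: np.ndarray, k: int = 7) -> np.ndarray:
    # Separable box filter: one 1-D convolution per axis.
    kernel = np.ones(k) / k
    blur = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, gray)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blur)

def to_grayscale(img: np.ndarray) -> np.ndarray:
    # Keep 3 channels so the network input shape is unchanged.
    gray = img @ LUMA
    return np.repeat(gray[..., None], 3, axis=-1).astype(img.dtype)

def to_sketch(img: np.ndarray) -> np.ndarray:
    # Common pencil-sketch approximation: "color dodge" of the gray
    # image against a blurred, inverted copy of itself.
    gray = img @ LUMA
    inv_blur = box_blur(255.0 - gray)
    sketch = np.clip(gray * 255.0 / np.maximum(255.0 - inv_blur, 1.0), 0, 255)
    return np.repeat(sketch[..., None], 3, axis=-1).astype(img.dtype)

def random_modal(img: np.ndarray) -> np.ndarray:
    # Randomly keep RGB or switch to the Grayscale / Sketch modal.
    return random.choice([lambda x: x, to_grayscale, to_sketch])(img)
```

Keeping three channels in every modal means the same network input layer serves all modalities, which is what makes mixing them freely during training straightforward.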
“…Therefore, to overcome these shortcomings in visual trackers [ 11 ], recently, reidentification metrics have been learned and integrated with visual trackers [ 12 ] to follow the target person [ 1 , 2 ]. These reidentification metrics are learned by matching color-histograms and gait features [ 1 , 13 ], as well as extracting deep CNN features to learn deep similarity metrics [ 2 , 3 , 14 , 15 , 16 ].…”
Section: Introduction
Mentioning confidence: 99%
“…9. The detection of persons in the depth camera images and cropping them out as described in our previous research [12], stays untouched.…”
Section: B. Image Processing Pipeline
Mentioning confidence: 99%
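The statement above refers to detecting persons in the depth camera images and cropping them out before re-identification. A minimal sketch of that cropping step is given below; the `(x, y, w, h)` box format and the function name are assumptions for illustration, and the detector producing the boxes is outside this snippet.

```python
import numpy as np

def crop_persons(ir_frame: np.ndarray, boxes) -> list:
    """Crop person regions out of an IR gray-value frame.

    boxes: iterable of (x, y, w, h) bounding boxes from a person
    detector (hypothetical format). Boxes are clipped to the frame,
    and degenerate boxes are skipped.
    """
    h, w = ir_frame.shape[:2]
    crops = []
    for (x, y, bw, bh) in boxes:
        x0, y0 = max(0, x), max(0, y)
        x1, y1 = min(w, x + bw), min(h, y + bh)
        if x1 > x0 and y1 > y0:
            crops.append(ir_frame[y0:y1, x0:x1].copy())
    return crops
```

Clipping to the frame bounds matters in practice because detectors routinely emit boxes that extend past the image edge when a person is partially out of view.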
“…Once the image processing pipeline is adjusted, we need to optimize the model for the final image input in the same way as it is done for the image Fig. 2c in our previous research [12].…”
Section: Planned Next Steps
Mentioning confidence: 99%