2022 International Seminar on Application for Technology of Information and Communication (iSemantic)
DOI: 10.1109/isemantic55962.2022.9920464
Deep Learning Model In Road Surface Condition Monitoring

Cited by 4 publications (2 citation statements). References 0 publications.
“…Similarly, Robet et al [26] used a U-Net-based neural network to tackle the task of semantically segmenting the roadway for pavement type determination and defect detection. The specialty of the approach is that images for analysis are captured by a road surveillance camera (the Road Traversing Knowledge (RTK) Dataset is used for training).…”
Section: Related Work
confidence: 99%
“…With the technical progress of general-purpose deep learning methods [2], many researchers apply deep learning-based detection methods to the task of detecting road damage, for instance image classification [3,4], object detection [5][6][7], and semantic segmentation [8][9][10][11]. These algorithms are effective in detecting road damage [12][13][14]. Among them, the detection method based on image classification first segments the original image into sub-image blocks, then judges each block with a binary classification network; a final step stitches the sub-image blocks back into the original image.…”
Section: Introduction
confidence: 99%
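The block-wise pipeline described in that statement (tile the image, classify each tile, stitch the decisions back) can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: the `classify_block` threshold rule is a hypothetical stand-in for the binary classification network.

```python
import numpy as np

def split_into_blocks(image, block):
    """Tile a (H, W) image into non-overlapping block x block sub-images."""
    h, w = image.shape
    blocks = {}
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blocks[(y, x)] = image[y:y + block, x:x + block]
    return blocks

def classify_block(sub_image, threshold=0.5):
    """Stand-in for the binary (damaged / intact) network: flags a block
    as damaged when its mean intensity falls below the threshold."""
    return sub_image.mean() < threshold

def detect_damage(image, block=32):
    """Classify every block, then stitch the per-block decisions back
    into a full-size boolean damage mask."""
    mask = np.zeros_like(image, dtype=bool)
    for (y, x), sub in split_into_blocks(image, block).items():
        if classify_block(sub):
            mask[y:y + block, x:x + block] = True
    return mask

# Synthetic 64x64 "road" image with one dark (damaged) quadrant.
img = np.ones((64, 64))
img[:32, :32] = 0.1
mask = detect_damage(img, block=32)
```

In a real system the threshold rule would be replaced by a trained CNN applied per block; the tiling and stitching structure stays the same.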