2016 IEEE International Conference on Image Processing (ICIP)
DOI: 10.1109/icip.2016.7533052

Road crack detection using deep convolutional neural network

Cited by 1,116 publications (706 citation statements)
References 15 publications
“…And more recent systems, including the best systems (Wang and Lan, 2015; Oepen et al., 2016) at the recent CoNLL shared tasks on PDTB-style shallow discourse parsing (Xue et al., 2015, 2016), while not using a sequence model, still incorporate features about neighboring relations. Such systems have many applications, including summarization (Louis et al., 2010), information extraction (Huang and Riloff, 2012), question answering (Blair-Goldensohn, 2007), opinion analysis (Somasundaran et al., 2008), and argumentation (Zhang et al., 2016). This paper describes our experiments in annotating cross-paragraph implicit relations in the PDTB (Section 2), with the goal of producing a set of guidelines (Section 3) to annotate such relations reliably (Section 4) and produce a representative dataset annotated with complete sequences of inter-sentential relations. Our main findings from the experiments are as follows:
• The ratio of cross-paragraph implicit relations between non-adjacent sentences and between adjacent sentences is almost 1 to 1 (47% vs. 51% in our experiment). This is similar to the distribution of cross-paragraph explicit relations (Prasad et al., 2010).…”
mentioning
confidence: 99%
“…However, pre-processing continues to be a challenging task for several reasons: noise and cracks have the same texture [15]; road surfaces carry traces such as tire tracks and oil spills [19]; cracks do not have a uniform density; and contrast is low [21]. Road cracks constitute the continuous and darkest region of the image [10].…”
Section: Introduction (mentioning)
confidence: 99%
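The property quoted above, that cracks tend to form the continuous and darkest region of a road image, suggests a crude intensity-based baseline. The sketch below is only an illustration of that property, not the method of the cited paper or of the paper under review; the percentile threshold, blur kernel, and minimum component area are assumed values.

```python
import cv2
import numpy as np

def dark_region_baseline(gray, dark_percentile=5, min_area=200):
    """Naive crack-candidate mask: keep large connected components
    among the darkest pixels of a grayscale road image."""
    # Smooth lightly to suppress fine surface texture before thresholding.
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    # Keep only the darkest pixels (assumed percentile cutoff).
    cutoff = np.percentile(blurred, dark_percentile)
    dark = (blurred <= cutoff).astype(np.uint8)
    # Retain only sufficiently large connected components, reflecting
    # the assumption that cracks are continuous rather than speckle noise.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(dark, connectivity=8)
    mask = np.zeros_like(dark)
    for i in range(1, n):  # label 0 is the background
        if stats[i, cv2.CC_STAT_AREA] >= min_area:
            mask[labels == i] = 255
    return mask

# Example usage (file path is hypothetical):
# gray = cv2.imread("road.jpg", cv2.IMREAD_GRAYSCALE)
# crack_mask = dark_region_baseline(gray)
```

Such a baseline fails exactly where the quoted passage says it should: shadows, oil stains, and tire marks are also dark and textured, which is why learned detectors such as the CNN of the cited paper are used instead.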
“…Compared with LV segmentation methods, direct LV volume prediction methods that skip segmentation have become popular in recent years, especially with the wide use of deep learning in medical image processing [9,10]. The research group of Li Shuo proposed a series of direct LV function-index prediction methods based on machine learning [11][12][13], for example, a method based on an adapted Bayesian formulation, a method based on a linear support vector machine, and a method based on multiscale deep networks and regression forests.…”
Section: Introduction (mentioning)
confidence: 99%
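The quoted passage describes predicting LV volumes directly by regression, without an intermediate segmentation step. As a rough sketch of what such a direct-regression model can look like, the PyTorch snippet below maps an input frame straight to two volume values; the architecture, input size, and loss are illustrative assumptions, not the cited authors' models.

```python
import torch
import torch.nn as nn

class DirectVolumeRegressor(nn.Module):
    """Toy CNN that maps a single cardiac MR frame directly to
    (end-diastolic, end-systolic) volume estimates."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Two scalar outputs, no segmentation mask anywhere in the pipeline.
        self.regressor = nn.Linear(64, 2)

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.regressor(x)

# Example usage with a dummy batch of 64x64 single-channel frames:
model = DirectVolumeRegressor()
volumes = model(torch.randn(4, 1, 64, 64))          # shape: (4, 2)
loss = nn.functional.mse_loss(volumes, torch.randn(4, 2))
```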