2019
DOI: 10.1109/access.2019.2902658

Deep Learning for Risk Detection and Trajectory Tracking at Construction Sites

Abstract: This paper investigates deep learning for risk detection and trajectory tracking at construction sites. Because construction sites carry many potential risks, safety officers are typically responsible for inspecting and verifying site safety. Traditional target detection algorithms depend heavily on hand-crafted features; however, such features are difficult to design and yield poor detection accuracy. To solve these problems, this paper proposes a deep-learning-based detection algorithm that uses pedestrian wearable devices (e.g., helm…

Cited by 61 publications
(29 citation statements)
References 51 publications
“…where L_loc(T; Θ) is the localization loss of the ground truth, as shown in (14); L_obj(T; Θ) is the object confidence loss of the ground truth, as shown in (15); and L_conf(T; Θ) is the class confidence loss of the ground truth, as shown in (16). We use the sum-of-squared-errors function f_SSE to calculate the loss between the ground truth and the prediction, as shown in (13).…”
Section: Input
confidence: 99%
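The statement above describes a composite detection loss: localization, object confidence, and class confidence terms, each measured with a sum-of-squared-errors function f_SSE. A minimal sketch of that structure follows; the dictionary layout of predictions and the weighting factors are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a composite detection loss built from three SSE terms.
# The pred/target layout and the w_* weights are assumptions for illustration.

def f_sse(pred, target):
    """Sum of squared errors between two equal-length sequences."""
    return sum((p - t) ** 2 for p, t in zip(pred, target))

def detection_loss(pred, target, w_loc=1.0, w_obj=1.0, w_conf=1.0):
    """Weighted sum of localization, objectness, and class-confidence losses."""
    l_loc = f_sse(pred["box"], target["box"])    # localization loss (14)
    l_obj = f_sse(pred["obj"], target["obj"])    # object confidence loss (15)
    l_conf = f_sse(pred["cls"], target["cls"])   # class confidence loss (16)
    return w_loc * l_loc + w_obj * l_obj + w_conf * l_conf
```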
“…By reconstructing the data with minimum error, SC searches for a set of over-complete (k ≫ n) bases whose sparse linear combination can approximately represent the data. A penalty term (the second term in the cost function) restrains the combination coefficients to be sparse and controls the weight allocated between the reconstruction error and coefficient sparsity [16]. Because more than one nonzero coefficient is allowed, sparse coding represents the data more precisely than K-means.…”
Section: General Models and Neural Response: K-means, SC and ICA
confidence: 99%
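The cost function described above combines a reconstruction error with a sparsity penalty on the coefficients. A minimal sketch of evaluating that cost follows; the L1 penalty and the λ weight are a common choice assumed here for illustration, not necessarily the cited paper's exact terms.

```python
# Hedged sketch of a sparse-coding cost: reconstruction error plus a
# weighted L1 sparsity penalty. lam and the L1 form are assumptions.

def sparse_coding_cost(x, bases, coeffs, lam=0.1):
    """Cost = ||x - sum_j coeffs[j] * bases[j]||^2 + lam * ||coeffs||_1.

    bases: list of k basis vectors; k may exceed len(x), i.e. over-complete.
    coeffs: k combination coefficients (most expected to be zero).
    """
    recon = [sum(a * b[i] for a, b in zip(coeffs, bases))
             for i in range(len(x))]
    err = sum((xi - ri) ** 2 for xi, ri in zip(x, recon))   # reconstruction error
    sparsity = sum(abs(a) for a in coeffs)                  # L1 penalty term
    return err + lam * sparsity
```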
“…Additionally, we propose a constraint here on intermediate representations: features extracted in the hidden layer must be statistically independent, i.e., uncorrelated. This is motivated by the following qualitative reasoning: factors of variation are different aspects of the data that can vary separately and often independently; independent features can increase the dimension of observed vectors to obtain more information; and considerable evidence has shown the specificity and selectivity of neurons in biological nervous systems [15], [16], [18]–[20]. An intuitive illustration is that when we humans struggle to recognize an object that is not readily identifiable, we tend to observe its different parts to obtain more information to define it.…”
Section: Introduction
confidence: 99%
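The constraint above asks hidden-layer features to be uncorrelated across a batch. One common way to encode this, sketched below under the assumption of a squared off-diagonal covariance penalty (the cited paper may use a different formulation), is a term that is zero exactly when all feature pairs are uncorrelated.

```python
# Hedged sketch of a decorrelation penalty on hidden-layer features:
# the sum of squared off-diagonal entries of the batch covariance matrix.
# The squared-covariance form is an illustrative assumption.

def decorrelation_penalty(features):
    """features: list of n feature vectors, each of dimension d.
    Returns 0.0 iff every pair of distinct feature dimensions is uncorrelated."""
    n = len(features)
    d = len(features[0])
    means = [sum(f[j] for f in features) / n for j in range(d)]
    penalty = 0.0
    for j in range(d):
        for k in range(d):
            if j == k:
                continue  # only off-diagonal (cross-feature) covariances
            cov = sum((f[j] - means[j]) * (f[k] - means[k])
                      for f in features) / n
            penalty += cov ** 2
    return penalty
```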
“…In the last few years, deep learning research has drawn attention from both academia and industry [26] for its ability to process massive data with limited computing resources, extract high-dimensional features, and provide strong nonlinear fitting ability [27]–[29]. In civil engineering, DL-based methods have been widely adopted for crack identification [30], [31], micro-seismic event detection and location in underground mines [32], and safety management at construction sites [33]–[35]. To learn more patterns from past time series, researchers have recently started to use a more advanced network, namely the long short-term memory (LSTM) neural network.…”
Section: Introduction
confidence: 99%
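The LSTM mentioned above learns temporal patterns through gated updates of a cell state. A minimal single-step sketch follows, using scalar inputs and states for readability; the weight layout (`W` mapping gate name to `(w_x, w_h, b)`) is an illustrative assumption, and real networks use learned matrices over vectors.

```python
# Hedged sketch of one LSTM cell step with scalar input/state.
# W maps gate name -> (w_x, w_h, b); this layout is assumed for illustration.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, W):
    """Standard LSTM gate equations for a single timestep."""
    def gate(name, act):
        w_x, w_h, b = W[name]
        return act(w_x * x + w_h * h_prev + b)
    f = gate("forget", sigmoid)   # how much of the old cell state to keep
    i = gate("input", sigmoid)    # how much new information to write
    o = gate("output", sigmoid)   # how much of the cell state to expose
    g = gate("cand", math.tanh)   # candidate cell content
    c = f * c_prev + i * g        # updated cell state
    h = o * math.tanh(c)          # updated hidden state
    return h, c
```

With all weights zero, every sigmoid gate opens halfway and the candidate is zero, so the cell state simply halves each step — a quick sanity check of the gating arithmetic.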