2020
DOI: 10.3390/s20185030
Lightweight Driver Behavior Identification Model with Sparse Learning on In-Vehicle CAN-BUS Sensor Data

Abstract: This study focuses on driver-behavior identification and its application to finding embedded solutions in a connected car environment. We present a lightweight, end-to-end deep-learning framework for performing driver-behavior identification using in-vehicle controller area network (CAN-BUS) sensor data. The proposed method outperforms the state-of-the-art driver-behavior profiling models. Particularly, it exhibits significantly reduced computations (i.e., reduced numbers of both floating-point operations and …

Cited by 33 publications (30 citation statements)
References 38 publications
“…With this technique, certain nodes within layers can be frozen, while others are retrained. Ullah and Kim [95] apply sparse learning for TL in driver behavior identification. They prevent the forgetting of important knowledge by freezing strong nodes and retraining weaker ones in the target domain.…”
Section: M2) Partial Freezing (mentioning)
Confidence: 99%
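The quoted passage describes freezing strong nodes while retraining weak ones in the target domain. Below is a minimal PyTorch sketch of that idea, assuming a magnitude-based notion of "strong" weights; the model, the THRESHOLD value, and the checkpoint path are illustrative placeholders, not the architecture or criterion used by Ullah and Kim.

```python
import torch
import torch.nn as nn

# Hypothetical small classifier standing in for the driver-behavior model.
model = nn.Sequential(
    nn.Linear(51, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# Weights pre-trained on the source domain would be loaded here, e.g.:
# model.load_state_dict(torch.load("source_domain.pt"))  # placeholder path

THRESHOLD = 0.05  # assumed magnitude cutoff separating "strong" from "weak" weights

# Mark strong weights as frozen; biases and weak weights stay trainable.
frozen_masks = {
    name: (param.abs() >= THRESHOLD).float()
    for name, param in model.named_parameters()
    if param.dim() > 1
}

def zero_frozen_grads(model: nn.Module, masks: dict) -> None:
    """Zero the gradients of strong (frozen) weights so only weak ones are updated."""
    for name, param in model.named_parameters():
        if name in masks and param.grad is not None:
            param.grad.mul_(1.0 - masks[name])
```

In a target-domain training loop, zero_frozen_grads would be called after loss.backward() and before optimizer.step(), so parameter updates only reach the weak (unfrozen) weights.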
“…For a time series of length l with d dimensions, this leads to a matrix of size d × l. After normalizing the features' value ranges, the matrix can be interpreted as an image. This method is applied in [60], [91], [95]. Several other works transform time series data into the time-frequency domain and use the spectrogram as a visual time series representation.…”
Section: Time Series to Image Transformation (mentioning)
Confidence: 99%
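The quoted passage describes reshaping a d-dimensional time series of length l into a d × l matrix, normalizing each feature's value range, and treating the result as an image. A minimal NumPy sketch of that transformation follows; per-feature min-max scaling is an assumption, since the passage does not specify the normalization.

```python
import numpy as np

def series_to_image(x: np.ndarray) -> np.ndarray:
    """Turn a multivariate time series of shape (d, l) into an image-like
    array in [0, 1] by normalizing each feature's value range per row.
    Min-max scaling per feature is an assumption; other scalings also work."""
    mins = x.min(axis=1, keepdims=True)
    maxs = x.max(axis=1, keepdims=True)
    rng = np.where(maxs - mins == 0, 1.0, maxs - mins)  # avoid division by zero
    return (x - mins) / rng

# Example: 10 CAN-BUS signals sampled over 128 time steps.
signals = np.random.randn(10, 128)
image = series_to_image(signals)  # shape (10, 128), values in [0, 1]
```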
“…(2) Functionality to let several independent parties work with the system, providing each party potentially different data. (3) Data can be transferred from the vehicle to an external system and persisted there. (4) The external system continuously provides the received sensor data to other systems via streaming.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Besides the commercial demand identified in the insurance market, academia also has many active fields of research that depend on vehicle data. Among these fields are driver behavior identification [3], inference of lane-change intentions [4], and drowsiness detection [5]. In addition, access to vehicle data is essential for municipalities to facilitate the transition to Smart Cities [6].…”
Section: Introduction (mentioning)
Confidence: 99%
“…For instance, Long et al. [17] proposed the fully convolutional neural network (FCNN) [17], in which the fully connected layer of a common image-classification network is replaced with a convolutional layer and deconvolution is used to generate a segmented image of the same size as the original picture directly, realizing end-to-end image semantic segmentation. The SegNet convolutional network adopts a one-to-one corresponding encoder–decoder structure, in which the encoder performs max pooling and records the pooling index positions, which the decoder then uses for non-linear upsampling; with this structure [18], SegNet effectively improves segmentation accuracy [19]. The U-Net network can be used for binary semantic segmentation of medical images; its main advantage is a smaller number of model parameters, which allows it to complete training on a small-scale dataset and achieve good results [20].…”
Section: Introduction (mentioning)
Confidence: 99%
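The quoted passage explains SegNet's use of max-pooling indices recorded in the encoder for non-linear upsampling in the decoder. The sketch below illustrates that mechanism with PyTorch's MaxPool2d(return_indices=True) and MaxUnpool2d; the channel counts and the single encoder/decoder pair are illustrative and are not the published SegNet architecture.

```python
import torch
import torch.nn as nn

class TinySegNetBlock(nn.Module):
    """One encoder/decoder pair illustrating index-based unpooling;
    channel counts are illustrative, not the published SegNet layout."""
    def __init__(self, in_ch=3, mid_ch=16, num_classes=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, mid_ch, 3, padding=1),
                                 nn.BatchNorm2d(mid_ch), nn.ReLU())
        self.pool = nn.MaxPool2d(2, stride=2, return_indices=True)  # records indices
        self.unpool = nn.MaxUnpool2d(2, stride=2)                   # reuses them
        self.dec = nn.Conv2d(mid_ch, num_classes, 3, padding=1)

    def forward(self, x):
        feats = self.enc(x)
        pooled, indices = self.pool(feats)           # max pooling + index positions
        upsampled = self.unpool(pooled, indices,     # sparse, non-linear upsampling
                                output_size=feats.size())
        return self.dec(upsampled)                   # per-pixel class scores

logits = TinySegNetBlock()(torch.randn(1, 3, 64, 64))  # shape (1, 2, 64, 64)
```

Reusing the stored indices places activations back at their original spatial positions before the decoder convolutions densify them, which is the non-linear upsampling step the passage refers to.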