2013
DOI: 10.1016/j.robot.2012.08.002
Probabilistic terrain classification in unstructured environments

Cited by 35 publications (15 citation statements)
References 39 publications
“…Three-dimensional LiDARs are widely applied on ALVs for obstacle detection. The Velodyne HDL-64E LiDAR is widely adopted in mainstream ALVs, such as the Google self-driving car, KIT's AnnieWAY (Kammel et al., ), Stanford's Junior (Montemerlo et al., ), the vehicle described in (Häselich et al., ), and the vehicle of our own team (Chen et al., ) (shown in Figure ).…”
Section: A Novel Setup Method of 3-D LiDARs (mentioning)
Confidence: 99%
“…An effective approach to object segmentation from 2D images and 3D point clouds is the Markov random field (MRF) algorithm [24–30]. …”
Section: Related Work (mentioning)
Confidence: 99%
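
The MRF approach referenced above assigns each cell or point a class label by balancing per-element evidence against agreement with its neighbours. As a rough illustration only, and not the cited papers' implementation, the following Python sketch smooths noisy per-cell class costs on a 2D terrain grid with a Potts-style pairwise term, optimized by iterated conditional modes (ICM); the grid layout, parameter names, and cost scales are assumptions made for this example.

    import numpy as np

    # Illustrative sketch: MRF label smoothing on a 2D terrain grid via ICM.
    # unary[y, x, k] is the cost of assigning class k to cell (y, x), e.g.
    # derived from point-cloud features; the Potts term adds beta for every
    # neighbouring cell whose current label disagrees.
    def icm_mrf(unary, beta=1.0, iters=10):
        h, w, k = unary.shape
        labels = unary.argmin(axis=2)          # start from the unary optimum
        for _ in range(iters):
            for y in range(h):
                for x in range(w):
                    costs = unary[y, x].copy()
                    # 4-neighbourhood Potts penalty
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            costs += beta * (np.arange(k) != labels[ny, nx])
                    labels[y, x] = costs.argmin()
        return labels

    # Toy usage: 3 hypothetical terrain classes on a small grid of noisy costs.
    rng = np.random.default_rng(0)
    unary = rng.random((20, 20, 3))
    smoothed = icm_mrf(unary, beta=0.8)

ICM is chosen here only because it keeps the sketch short; graph cuts or belief propagation are the more common inference choices for MRF segmentation in the cited literature.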
“…At a low level, lidar has been fused with other range-based sensors (lidar and radar) using a joint calibration procedure (Underwood, Hill, Peynot, & Scheding, ). Additionally, lidar has been fused with cameras (monocular, stereo, and thermal) by projecting 3D lidar points onto corresponding images and concatenating either their raw outputs (Dima, Vandapel, & Hebert, ; Wellington, Courville, & Stentz, ) or precalculated features (Häselich, Arends, Wojke, Neuhaus, & Paulus, ). This approach can exploit the full potential of all sensors, but suffers from the fact that only regions covered by all modalities are defined.…”
Section: Introduction (mentioning)
Confidence: 99%
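
The projection-based fusion described above can be sketched in a few lines. Assuming known camera intrinsics K and a lidar-to-camera rigid transform T_cam_lidar (both names introduced here for illustration, not taken from the cited papers), the snippet below projects 3D lidar points into the image plane and concatenates each point's coordinates with the image values at the projected pixel, i.e. the raw-output flavour of low-level fusion; points outside the shared field of view are dropped, which is exactly why only regions covered by all modalities end up defined.

    import numpy as np

    # Illustrative low-level lidar-camera fusion: project lidar points into a
    # camera image and concatenate lidar coordinates with the pixel values.
    def fuse_lidar_camera(points_lidar, image, K, T_cam_lidar):
        """points_lidar: (N, 3) xyz in the lidar frame.
        image: (H, W, C) camera image.
        K: (3, 3) camera intrinsics.
        T_cam_lidar: (4, 4) rigid transform from lidar to camera frame.
        Returns (M, 3 + C) rows of [x, y, z, image features] for points that
        project inside the image with positive depth."""
        n = points_lidar.shape[0]
        pts_h = np.hstack([points_lidar, np.ones((n, 1))])   # homogeneous coords
        pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]           # into camera frame
        in_front = pts_cam[:, 2] > 0.1                       # keep points ahead of the camera
        pts_cam = pts_cam[in_front]
        uv = (K @ pts_cam.T).T
        uv = uv[:, :2] / uv[:, 2:3]                          # perspective divide
        u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
        h, w = image.shape[:2]
        valid = (u >= 0) & (u < w) & (v >= 0) & (v < h)
        return np.hstack([points_lidar[in_front][valid],     # raw lidar coordinates
                          image[v[valid], u[valid]]])        # concatenated image features

The feature-level variant mentioned in the quote follows the same pattern, except that precalculated per-pixel or per-point descriptors are concatenated instead of raw RGB values.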