2021 IEEE International Conference on Communications Workshops (ICC Workshops)
DOI: 10.1109/iccworkshops50388.2021.9473500
UAV Path Planning Using on-Board Ultrasound Transducer Arrays and Edge Support

Cited by 5 publications (3 citation statements)
References 12 publications
“…Different metrics are selected based on the type of problem, i.e., classification or regression. Some of the widely used loss functions to capture the learning of the DL model at the edge while training are mean absolute error [204,205,206], mean square error [207,208], negative log-likelihood [209,210], cross-entropy [211,212,213], Kullback-Leibler divergence [214,215,216] etc. Cross-entropy, also called logarithmic loss, log loss, or logistic loss, is a widely accepted loss function for classification problems.…”
Section: B. Training Loss
confidence: 99%
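The cross-entropy (log loss) the excerpt singles out for classification can be sketched in plain Python; the function name and single-sample form below are illustrative, not taken from any of the cited works:

```python
import math

def cross_entropy(probs, label_idx, eps=1e-12):
    """Cross-entropy (log loss) for one sample.

    probs:     predicted class probabilities (sum to 1)
    label_idx: index of the true class
    """
    p = max(probs[label_idx], eps)  # clamp to avoid log(0)
    return -math.log(p)

# Confident correct prediction -> small loss; uniform guess -> ln(2)
print(cross_entropy([0.99, 0.01], 0))  # ~0.01
print(cross_entropy([0.5, 0.5], 0))    # ~0.693
```

Averaging this quantity over a batch gives the training loss typically minimized for edge-deployed classifiers.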
“…These studies include obstacle avoidance based on ultrasonic, radar, and image processing [13]. Ultrasonic-based methods perform in real time but their maximum range is short [14, 15]. Radar-based methods perform well in obstacle detection.…”
Section: Related Work
confidence: 99%
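The short range of ultrasonic sensing noted above follows directly from time-of-flight ranging: distance is half the round-trip echo time multiplied by the speed of sound. A minimal sketch (the timeout value is an assumption typical of hobby-grade modules, not from the cited paper):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at ~20 degC

def distance_from_echo(echo_time_s):
    """Distance to an obstacle from a round-trip echo time, in meters."""
    return SPEED_OF_SOUND * echo_time_s / 2.0

# A module that times out after ~23 ms of waiting for an echo can
# therefore only report obstacles out to roughly 4 m:
print(distance_from_echo(0.023))  # ~3.94 m
```

This is why ultrasonic arrays are attractive for real-time, close-in avoidance but must be complemented (e.g., by radar or vision) at longer ranges.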
“…Different metrics are selected based on the type of problem, i.e., classification or regression. Some of the widely used loss functions to capture the learning of DNN at EDGE while training are Mean Absolute Error [60,64,273], Mean Square Error Loss [269,289,316], Negative Log-Likelihood Loss [138,147,219], Cross-Entropy Loss [54,57,128,162], Kullback-Leibler divergence [47,70,237,294] etc.…”
Section: Training Loss
confidence: 99%
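For the regression losses this excerpt lists, mean absolute error and mean squared error differ mainly in how heavily large errors are penalized. A pure-Python sketch (function names are illustrative):

```python
def mae(y_true, y_pred):
    """Mean absolute error: linear penalty on each residual."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error: quadratic penalty, so outliers dominate."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [1.0, 2.0, 3.0]
y_pred = [1.0, 2.5, 2.0]
print(mae(y_true, y_pred))  # 0.5
print(mse(y_true, y_pred))  # ~0.4167
```

The choice between them is one of the per-problem decisions the surveyed edge-training works make when selecting a loss.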