2017 IEEE International Conference on Mechatronics and Automation (ICMA)
DOI: 10.1109/icma.2017.8016043
The remote operation and environment reconstruction of outdoor mobile robots using virtual reality

Cited by 10 publications (7 citation statements); References 9 publications.
“…We find very appropriate the suggested use of a small number of tasks, and we wish these to follow the order given by the MANTRA as it fits nicely to robot teleguide actions. We always Overview first, while Zooming, Filtering and Details are applied on demand [31] [35]. Automatic adaptation based on speed and distance to objects is an available option useful for specific environments and situations.…”
Section: Adaptive Views
Confidence: 99%
“…Alternatively, the DJI's Inspire 2 is capable of high standard aerial filming and surveillance. The system supports the upgraded transmission of video with a dual signal frequency and channel that stream video from the main, as well as from an onboard camera simultaneously [11]. Released on 28 January 2018, DJI's Mavic Air uses advanced Visual Inertial Odometry (VIO) technology, a very effective sensing mechanism in FlightAutonomy 2.0 comprised of a main gimbal camera with dual-vision sensors with backward, forward, and downward direction movement, IMU redundancies, and a set of cores for computation [12].…”
Section: Related Work
Confidence: 99%
“…Alternatively, the operator could be shown a 3D representation of the robot’s environment. This model may be produced from a priori knowledge [26,39], or ‘virtualized’ in real-time using 3D mapping [17,29,31,32]. Rendering the environment in 3D allows the operator to separate their viewpoint from the robot’s position, allowing either egocentric, exocentric or tethered viewpoints [14].…”
Section: Introduction
Confidence: 99%
“…The primary efforts to achieve waypoint selection with a HMD-based interface that we have found are [1,26,32,36]. The target selection methods in these four implementations, which all use an exocentric viewpoint, are outlined below: Drag and drop.…”
Section: Introduction
Confidence: 99%