2016
DOI: 10.9746/jcmsi.9.33

View Point Decision Algorithm for an Autonomous Robot to Provide Support Images in the Operability of a Teleoperated Robot

Abstract: A teleoperated robot is a conceivable means of exploring a disaster environment quickly while avoiding secondary disasters. According to previous research, an image captured from behind the teleoperated robot is useful information for the operator controlling it. In this research, a teleoperation method that provides such a view from behind is proposed, using an autonomous robot. This method allows a single image to include both the teleoperated robot itself and the en…
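The paper's full decision algorithm is not quoted here, but as a rough illustration of the "view from behind" idea, the following minimal sketch (assuming a simple 2D pose model; the behind_viewpoint helper and its offset parameter are hypothetical, not from the paper) places the autonomous camera robot behind the teleoperated robot along its heading, so one camera image can show both the robot body and the environment in front of it:

import math
from dataclasses import dataclass


@dataclass
class Pose2D:
    x: float      # position [m]
    y: float      # position [m]
    theta: float  # heading [rad]


def behind_viewpoint(target: Pose2D, offset: float = 1.5) -> Pose2D:
    """Hypothetical viewpoint rule: stand `offset` metres behind the
    teleoperated robot and face along its heading, so a single image
    contains both the robot and the terrain ahead of it."""
    return Pose2D(
        x=target.x - offset * math.cos(target.theta),
        y=target.y - offset * math.sin(target.theta),
        theta=target.theta,
    )


# Example: teleoperated robot at (2 m, 1 m), heading 30 degrees.
print(behind_viewpoint(Pose2D(2.0, 1.0, math.radians(30))))

The sketch only captures the geometric intent; the paper's viewpoint decision algorithm presumably also has to account for obstacles and the camera's field of view in the disaster environment.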

Cited by 8 publications (3 citation statements); References 5 publications
“…Similar strategies have been used for UAV teleoperation, such as creating a third-person view of the UAV in the remote environment, for example, by a second robot following a primary robot [34,35]. Third-person views of UAVs can also be generated from image rendering techniques using omni-directional cameras to improve UAV navigation in enclosed or unknown environments [36,37].…”
Section: Methods
confidence: 99%
“…Len et al, 2016) left the choice of a viewpoint to humans, who were shown to pick suboptimal viewpoints (McKee et al, 2003). (3) Reactive autonomous visual assistants (Triggs and Laugier, 1995; Hershberger et al, 2000; Simmons et al, 2001; Maeyama et al, 2016; Gawel et al, 2018; Sato et al, 2016; Ji et al, 2018; Abi-Farraj et al, 2016; Rakita et al, 2018; Nicolis et al, 2018) only tracked and zoomed in on the action, ignoring the question of what the best viewpoint is. (4) Deliberative autonomous visual assistants (McKee and Schenker, 1994; Brooks and McKee, 1995; McKee and Schenker, 1995b; McKee and Schenker, 1995a; Brooks and McKee, 2001; Brooks et al, 2002; McKee et al, 2003; Rahnamaei and Sirouspour, 2014; Ito and Sekiyama, 2015; Saran et al, 2017; Samejima and Sekiyama, 2016; Samejima et al, 2018; Rakita et al, 2019; Thomason et al, 2017; Thomason et al, 2019) might be globally optimal in terms of geometry if the work envelope model is available, but the existing studies did not consider other attributes, particularly psychophysical aspects, when selecting viewpoints.…”
Section: Visual Assistance
confidence: 99%
“…In conventional work on remote-control support systems, Kamezaki et al. [9] developed a support system using multiple cameras that switched the displayed image depending on the distances between the robots. Maeyama et al. [10] proposed a visual support system in which a monitoring robot followed a remote-controlled robot to constantly monitor its rear area. Neither of these works uses selection criteria for the observation target object.…”
Section: Introduction
confidence: 99%
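The distance-based image switching mentioned in the statement above could, for instance, be realised with a rule like the following minimal sketch (the select_camera helper and its threshold are hypothetical illustrations, not taken from Kamezaki et al. [9]):

def select_camera(distances_m: dict[str, float], near_threshold_m: float = 2.0) -> str:
    """Hypothetical switching rule: prefer a camera robot within
    `near_threshold_m` of the teleoperated robot; otherwise fall back
    to the closest camera overall."""
    near = {cam: d for cam, d in distances_m.items() if d <= near_threshold_m}
    pool = near if near else distances_m
    return min(pool, key=pool.get)


# Example: three candidate camera robots at different distances [m].
print(select_camera({"cam_a": 3.2, "cam_b": 1.4, "cam_c": 5.0}))  # -> cam_b

By contrast, the follow-behind approach of Maeyama et al. [10] keeps a single monitoring robot in the rear view continuously, so no switching rule is needed.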