2013 Latin American Robotics Symposium and Competition
DOI: 10.1109/lars.2013.57
Development of a Control Platform for the Mobile Robot Roomba Using ROS and a Kinect Sensor

Cited by 20 publications (12 citation statements)
References 3 publications
“…The maximum speed of the robot is 0.5 m/s in both the forward and reverse directions [6]. The embedded robot controller is capable of controlling the basic functions of the robot, including travelling at a desired distance or speed, and handles all the sensor signals.…”
Section: iRobot Create (mentioning)
confidence: 99%
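The 0.5 m/s limit quoted above can be illustrated with a minimal sketch (the constant and function name here are our own assumptions, not part of the robot's embedded API):

```python
MAX_SPEED = 0.5  # m/s, the forward/reverse speed limit quoted for the robot


def clamp_speed(v):
    """Clamp a commanded linear velocity (m/s) to the robot's speed limit."""
    return max(-MAX_SPEED, min(MAX_SPEED, v))


print(clamp_speed(0.8))   # over-limit command is capped to 0.5
print(clamp_speed(-0.3))  # in-range command passes through unchanged
```

A controller layer like this would sit between user commands and the drive interface, so the robot is never asked to exceed its rated speed.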
“…Their dimensions are more than 30 cm × 15 cm, and consequently they are not suitable for academic purposes, especially when the testing area is limited [8]-[13]. Therefore, there is a need to design a small robot that uses an embedded Linux single-board computer.…”
Section: Introduction (mentioning)
confidence: 99%
“…On-site investigation found that the space below the fruits is open, with little occlusion and a simple background, so the authors propose that the fruits be identified, positioned, and picked from the bottom parts of the plants. The procedure is as follows: determine the sequence of fruit identification, feature-point extraction, and fruit picking using the elliptic Hough transform; acquire the feature-point coordinates of the images with Microsoft Kinect sensors; obtain the image coordinates of the feature points with the Kinect sensor, drawing on prior research on the Microsoft Kinect sensor in robot navigation [11][12] and feature recognition [13][14][15]; finally, perform coordinate conversion between the camera and the sensors and construct a mathematical model of the conversion to obtain the 3D coordinates of the feature points. As seen in Figure 1, because of the greater scene depth and complicated background, images of fruits taken from the side contain not only the leaves of nearby plant branches but also distant non-target fruits, with serious mutual occlusion between the target fruits.…”
Section: Introduction (mentioning)
confidence: 99%
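The Hough-voting idea behind the elliptic Hough step above can be sketched with a simplified circular variant in pure Python (a sketch only: real ellipse detection accumulates over five parameters, whereas here the radius is assumed known and every edge point votes for candidate centres):

```python
import math
from collections import Counter


def hough_circle_centers(edge_points, radius, n_angles=72):
    """Accumulate votes for circle centres at a known radius.

    Each edge point votes for every centre lying `radius` away from it;
    the centre of a true circle in the data collects votes from all of
    its edge points and dominates the accumulator.
    """
    votes = Counter()
    for x, y in edge_points:
        for k in range(n_angles):
            theta = 2 * math.pi * k / n_angles
            cx = round(x - radius * math.cos(theta))
            cy = round(y - radius * math.sin(theta))
            votes[(cx, cy)] += 1
    return votes


# Synthetic edge points on a circle of radius 10 centred at (50, 60).
pts = [(50 + 10 * math.cos(t / 10.0), 60 + 10 * math.sin(t / 10.0))
       for t in range(63)]
(best_cx, best_cy), _ = hough_circle_centers(pts, 10).most_common(1)[0]
```

The fruit-picking pipeline in the passage uses the elliptic generalisation of this voting scheme to rank candidate fruit outlines before feature-point extraction.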
“…Since the spatial positions of the Microsoft camera and the Kinect sensor remain unchanged, the pixel coordinates of feature point A in the image captured by the Kinect sensor can be derived from the pixel coordinates of feature point A in the image shot by the Microsoft camera. That is,

(x − 320) + X = b(x′ − 320)    (11)
(y − 180) + Y = b(y′ − 240)    (12)

From equations (11) and (12), we can obtain the following formulas:…”
mentioning
confidence: 99%
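Rearranging equations (11) and (12) for the Kinect pixel coordinates gives a direct mapping, sketched below (the function name is ours; X, Y and b are the offsets and scale factor from the passage, and the constants 320/180/240 are the image-centre values appearing in the equations):

```python
def camera_to_kinect_pixel(x, y, X, Y, b):
    """Map a feature point's camera pixel coordinates (x, y) to Kinect
    pixel coordinates (x', y') by rearranging equations (11)-(12):

        x' = ((x - 320) + X) / b + 320
        y' = ((y - 180) + Y) / b + 240

    X, Y are the fixed pixel offsets between the two views and b is the
    scale factor (must be non-zero).
    """
    x_prime = ((x - 320) + X) / b + 320
    y_prime = ((y - 180) + Y) / b + 240
    return x_prime, y_prime
```

With zero offsets and unit scale, the mapping simply shifts the vertical centre from 180 to 240, reflecting the two sensors' different image resolutions.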