2010 IEEE 9th International Conference on Cybernetic Intelligent Systems
DOI: 10.1109/ukricis.2010.5898127

Evolving cooperative neural agents for controlling vision guided mobile robots

Cited by 10 publications (10 citation statements)
References 8 publications

“…We use the n-flop as an integrating, modulable behavior initiating agent, modulated with compressed image information coming from other agents. In previous experiments this arrangement showed encouraging results [11].…”
Section: Behavior Initiating Agents
confidence: 64%
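The excerpt only gestures at how an n-flop works. As background: an n-flop is commonly described as an n-state generalization of a flip-flop, i.e. a winner-take-all circuit in which n mutually inhibiting units race until exactly one remains active. Below is a minimal Python sketch of such a stochastic winner-take-all selection biased by external modulation (the "compressed image information" of the excerpt). The function names, the Gumbel-noise race, and all parameter values are illustrative assumptions, not details taken from the cited paper.

```python
import numpy as np

# Hypothetical sketch of an n-flop: n mutually inhibiting units race
# and exactly one wins. External modulation (e.g. compressed image
# features coming from other agents) biases which behavior-initiating
# unit wins. All names/values are illustrative, not from the paper.

def nflop_select(modulation: np.ndarray, rng: np.random.Generator,
                 temperature: float = 1.0) -> int:
    """Stochastic winner-take-all: stronger modulation makes a unit
    more likely to win the mutual-inhibition race."""
    noise = rng.gumbel(size=modulation.shape)   # randomness of the race
    return int(np.argmax(modulation / temperature + noise))

rng = np.random.default_rng(0)
features = np.array([0.2, 1.5, 0.1, 0.4])       # compressed image info
print("n-flop settled on behavior", nflop_select(features, rng))
```

Adding Gumbel noise before the argmax makes the winner a softmax-weighted random choice rather than a deterministic maximum, which matches the "modulated but still stochastic" behavior the excerpt describes.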
“…End-of-run information which comes from the buffers is fed back toward the n-flops so that, in free-running conditions, the eye improves its random search (figure 2). Considerations about this search are given in [11].…”
Section: A. The Behavior Initiating Agent
confidence: 99%
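The feedback loop this excerpt describes (end-of-run buffer information nudging the n-flops so the eye's random search improves over time) can be illustrated with a small reinforcement-style sketch. The reward function, the update rule, and every identifier below are hypothetical stand-ins; the quote only hints at the paper's actual mechanism.

```python
import numpy as np

# Hypothetical sketch of the feedback loop from the excerpt:
# end-of-run scores stored after each run nudge the n-flop's bias
# weights, so in free-running mode the random search drifts toward
# previously successful saccade directions. The update rule and all
# names are assumptions made for illustration.

rng = np.random.default_rng(1)
n_directions = 8
bias = np.zeros(n_directions)              # n-flop modulation weights
lr = 0.1                                   # feedback gain

def run_episode(direction: int) -> float:
    """Stand-in for one eye-movement run; returns an end-of-run score."""
    target = 3                             # direction where the target sits
    return 1.0 if direction == target else 0.0

for _ in range(200):
    noise = rng.gumbel(size=n_directions)  # stochastic WTA race
    direction = int(np.argmax(bias + noise))
    reward = run_episode(direction)        # end-of-run information
    bias[direction] += lr * (reward - 0.5) # fed back toward the n-flop

print("learned bias:", np.round(bias, 2)) # peaks at the target direction
```

After a few hundred runs the bias vector concentrates on the rewarded direction, so the "random" search is no longer uniform: exactly the kind of improvement in free-running search the excerpt attributes to the buffer feedback.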
“…Indeed, vision has been used in numerous robotic applications to successfully achieve a task (e.g. obstacle avoidance for navigation [12], [13], [14], [15], human recognition for Human-Robot Interaction [16], [17], activity recognition for cooperative behaviour [18], [19], [20], and object identification for manipulation [21], [22], [23], to name only a few). However, despite significant achievements, the problem of detecting and recognising objects efficiently and accurately remains a scientific challenge when real scenes are considered.…”
Section: Introduction
confidence: 99%