In human-human interactions, co-representing a partner's actions is crucial for successfully adjusting and coordinating one's actions with others. Current research suggests that action co-representation is restricted to interactions between human agents, facilitating social interaction with conspecifics. In the present study, we investigated whether action co-representation, as measured by the Social Simon Effect (SSE), is present when we share a task with a real humanoid robot. Further, we tested whether the believed humanness of the robot's functional principle modulates the extent to which robotic actions are co-represented. We described the robot to participants either as functioning in a biologically inspired, human-like way or in a purely deterministic, machine-like manner. The SSE was present in the human-like, but not in the machine-like, robot condition. The present findings suggest that humans co-represent the actions of non-biological robotic agents when they begin to attribute human-like cognitive processes to the robot. Our findings provide novel evidence for top-down modulation effects on action co-representation in human-robot interaction situations.
Abstract: Since the concept of a smart city was introduced, IoT (Internet of Things) has been considered the key infrastructure of a smart city. However, there are currently no detailed explanations of the technical contributions of IoT to the management, development, and improvement of smart cities. Therefore, the current study describes the importance of IoT technologies on the technology roadmap (TRM) of a smart city. Moreover, a survey of about 200 experts was conducted to investigate both the importance and the essentiality of the detailed components of IoT technologies for a smart city. Based on the survey results, the focal points and essential elements for the successful development of a smart city are presented.
Abstract. Manipulation skills are a key issue for a humanoid robot. Here, we are interested in a vision-based grasping system able to deal with previously unknown objects in real time and in an intelligent manner. Starting from a number of feasible candidate grasps, we focus on the problem of predicting their reliability using the knowledge acquired in previous grasping experiences. A set of visual features that take into account physical properties affecting the stability and reliability of a grasp is defined. A humanoid robot acquires its grasping experience by repeating a large number of grasping actions on different objects. An experimental protocol is established in order to classify grasps according to their reliability. Two prediction/classification strategies are defined that allow the robot to predict the outcome of a grasp by analyzing only its visual features. The results indicate that these strategies are adequate for predicting the reliability of a grasp and generalize to different objects.
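To illustrate the general idea described in this abstract, the sketch below shows one way a grasp-reliability predictor of this kind could be set up. It is not the authors' implementation: the feature names, the synthetic data, and the choice of an SVM classifier are assumptions made purely for illustration; the actual visual features and the two prediction/classification strategies are defined in the paper itself.

```python
# Minimal sketch (hypothetical, not the paper's method): predict grasp
# reliability from visual features of candidate grasps, assuming feature
# vectors and success/failure labels were collected in earlier grasp trials.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder data: each row is one candidate grasp described by a handful of
# visual features (e.g. contact-region size, grasp-axis alignment, distance to
# the object's centroid); the label marks whether the executed grasp succeeded.
X = rng.normal(size=(500, 5))                   # hypothetical feature vectors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # hypothetical reliability labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One possible classification strategy: a standardized RBF-kernel SVM that
# maps visual features to a predicted reliable / unreliable outcome.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```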