Traditional industry is seeing an increasing demand for more autonomous and flexible manufacturing in unstructured settings, a shift away from the fixed, isolated workspaces where robots perform predefined actions repetitively. This work presents a case study in which a robotic manipulator, namely a KUKA KR90 R3100, is provided with smart sensing capabilities such as vision and adaptive reasoning for real-time collision avoidance and online path planning in dynamically changing environments. A machine vision module based on low-cost cameras and color detection in the hue, saturation, value (HSV) space is developed to make the robot aware of its changing environment; it enables the detection and localization of a randomly moving obstacle. Collision-free path correction for the manipulator is achieved by an adaptive path-planning module working alongside a dedicated robot control module, with the three modules running simultaneously. These smart sensing capabilities allow smooth interaction between the robot and its dynamic environment, in which the robot reacts to dynamic changes through autonomous thinking and reasoning, with reaction times below the average human reaction time. The experimental results demonstrate that effective human-robot and robot-robot interactions can be realized through the innovative integration of emerging sensing techniques, efficient planning algorithms and systematic designs.
Facial expressions are important in people's daily communications. Recognising facial expressions also has many important applications in areas such as healthcare and e-learning. Existing facial expression recognition systems suffer from problems such as background interference. Furthermore, systems using traditional approaches like the SVM (Support Vector Machine) are weak at handling unseen images, while systems using deep neural networks have drawbacks such as GPU requirements, long training times and large memory demands. To overcome the shortcomings of both pure deep neural networks and traditional facial recognition approaches, this paper presents a new facial expression recognition approach that applies image preprocessing techniques to remove unnecessary background information and combines the deep neural network ResNet50 with a traditional classifier, a multiclass Support Vector Machine, to recognise facial expressions. The proposed approach achieves better recognition accuracy than traditional approaches like the Support Vector Machine and does not require a GPU. We compared three proposed frameworks with a traditional SVM approach on the Karolinska Directed Emotional Faces (KDEF) Database, the Japanese Female Facial Expression (JAFFE) Database and the extended Cohn-Kanade dataset (CK+), respectively. The experiment results show that the features extracted from the layer-49 ReLU give the best performance on these three datasets.
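The hybrid pipeline described above (deep features fed to a multiclass SVM) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random vectors stand in for the 2048-dimensional activations that would be extracted from ResNet50's layer-49 ReLU for each preprocessed face image, and the seven classes stand in for the basic expression labels.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in for deep features: in the paper, these would be
# activation vectors taken from ResNet50's layer-49 ReLU for each face image.
rng = np.random.default_rng(0)
n_per_class, n_features, n_classes = 20, 2048, 7  # 7 basic expressions
X = np.vstack([
    rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
    for c in range(n_classes)
])
y = np.repeat(np.arange(n_classes), n_per_class)

# Multiclass SVM on the extracted features; scikit-learn's SVC handles the
# multiclass case internally, so no GPU or end-to-end network training is needed.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
clf.fit(X, y)
print(clf.score(X, y))
```

In practice the feature extractor is run once per image and only the lightweight SVM is trained, which is what removes the GPU and long-training-time requirements of a fully fine-tuned network.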
In order to extend the abilities of current robots in industrial applications towards more autonomous and flexible manufacturing, this work presents an integrated system comprising real-time sensing, path planning and control of industrial robots to provide them with adaptive reasoning, autonomous thinking and environment interaction under dynamic and challenging conditions. The developed system consists of an intelligent motion planner for a 6-degrees-of-freedom robotic manipulator, which performs pick-and-place tasks along an optimized path computed in real time while avoiding a moving obstacle in the workspace. This moving obstacle is tracked by a sensing strategy based on machine vision, operating in the HSV space for color detection in order to deal with changing conditions including a non-uniform background, lighting reflections and shadow projection. The proposed machine vision is implemented as an offboard scheme with two low-cost cameras, where the second camera is aimed at solving the problem of vision obstruction when the robot invades the field of view of the main sensor. Real-time performance of the overall system has been experimentally tested using a KUKA KR90 R3100 robot.
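HSV-space color thresholding of the kind used for obstacle tracking can be sketched as below. This is a simplified illustration under stated assumptions: the frame is assumed to be already converted to HSV (e.g., via OpenCV's `cv2.cvtColor` with `cv2.COLOR_BGR2HSV`, using OpenCV's hue range of 0-179), the color bounds for the obstacle are made up for the example, and the centroid of the thresholded blob stands in for the full localization pipeline.

```python
import numpy as np

def locate_obstacle(hsv_frame, lower, upper):
    """Threshold an HSV frame against a color range and return the
    (row, col) centroid of the matching blob, or None if no pixel matches."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    mask = np.all((hsv_frame >= lower) & (hsv_frame <= upper), axis=-1)
    pts = np.argwhere(mask)
    if pts.size == 0:
        return None  # obstacle not visible; a second camera could be polled here
    return tuple(pts.mean(axis=0))  # image-plane centroid of the obstacle

# Synthetic 100x100 HSV frame: dark, unsaturated background with a
# saturated red-ish patch (the "obstacle") covering rows/cols 40-59.
frame = np.zeros((100, 100, 3), dtype=np.uint8)
frame[..., 2] = 60                     # dim background value
frame[40:60, 40:60] = (5, 200, 220)    # hue~red, high saturation and value
centroid = locate_obstacle(frame, lower=(0, 120, 120), upper=(10, 255, 255))
print(centroid)  # -> (49.5, 49.5), the center of the patch
```

Thresholding on saturation and value in addition to hue is what makes this kind of detector robust to the non-uniform background, reflections and shadows mentioned above, since those mostly perturb brightness rather than hue.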