Traditional wheelchairs cannot actively sense the external environment during use and offer only a single control method. This paper therefore develops an intelligent IoT wheelchair with three functions, as follows. (1) Occupant-wheelchair-environment multimode sensing: a PAJ7620 sensor recognizes gesture information, while GPS (Global Positioning System) and IMU (Inertial Measurement Unit) sensors sense positioning, speed and postural information. In addition, Lidar, DHT11 and BH1750 sensors obtain environmental information such as road conditions, temperature, humidity and light intensity. (2) Fusion control scheme: a mobility control scheme based on the joystick (rocker) and gesture recognition, as well as a backrest and footrest lifting, lowering and movement control scheme based on Tencent Cloud and a mobile APP (Application). (3) Human-machine interaction: the wheelchair connects to Tencent IoT Explorer through an ESP8266 WiFi module and uploads sensory data using the MQTT (Message Queuing Telemetry Transport) protocol, while the wheelchair status can be viewed and controlled in the APP. The wheelchair designed in this paper can sense and report the status of the occupant, environment and wheelchair in real time; the user can view the sensory data in the mobile APP and control the wheelchair using the joystick, gestures and the APP.
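The MQTT telemetry path described in the abstract can be sketched as follows. This is a minimal illustration, assuming Tencent IoT Explorer's property-reporting topic convention (`$thing/up/property/{product_id}/{device_name}`) and an illustrative payload schema; the property names and client-token format are assumptions of this example, not the paper's firmware, and a real device would publish the JSON with an MQTT client such as paho-mqtt running on the ESP8266.

```python
import json
import time

# Assumed topic convention for Tencent IoT Explorer property reporting;
# verify against the product's actual topic permissions in the console.
def telemetry_topic(product_id: str, device_name: str) -> str:
    return f"$thing/up/property/{product_id}/{device_name}"

def build_payload(temperature_c: float, humidity_pct: float,
                  lux: float, speed_mps: float) -> str:
    """Pack sensor readings into a property-report message (illustrative schema)."""
    msg = {
        "method": "report",                       # property report method
        "clientToken": f"wheelchair-{int(time.time())}",
        "params": {                               # property identifiers are assumptions
            "temperature": temperature_c,         # DHT11
            "humidity": humidity_pct,             # DHT11
            "light_intensity": lux,               # BH1750
            "speed": speed_mps,                   # GPS/IMU
        },
    }
    return json.dumps(msg)

# A real device would then do roughly:
#   client.publish(telemetry_topic(PRODUCT_ID, DEVICE_NAME), build_payload(...))
```

Keeping payload construction separate from the transport makes the schema easy to test on a desktop before flashing the ESP8266.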
Studies have shown that driver fatigue or negative emotions significantly increase driving risk. Detecting drivers' emotional and fatigue states and issuing timely warnings can effectively reduce the incidence of traffic accidents. However, existing models rarely combine driver emotion and fatigue detection, and there is room to improve recognition accuracy. In this paper, we propose a non-invasive and efficient detection method for driver fatigue and emotional state which, to our knowledge, is the first to combine the two in driver-state detection. First, the captured video image sequences are preprocessed, and Dlib (an open-source image processing library) is used to locate face regions and mark key points; second, facial features are extracted, and fatigue indicators such as eye closure time (PERCLOS) and yawn frequency are calculated using the dual-threshold method and fused mathematically; third, an improved lightweight RM-Xception convolutional neural network is introduced to identify the driver's emotional state; finally, the two indicators are fused over a time series to obtain a comprehensive score for evaluating the driver's state. The results show that the proposed fatigue detection algorithm has high accuracy, and the emotion recognition network reaches 73.32% accuracy on the Fer2013 dataset. The composite score calculated from the time-series fusion can comprehensively and accurately reflect the driver's state in different environments and contributes to future research in the field of assisted safe driving.
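The PERCLOS indicator described above can be sketched as a dual-threshold (hysteresis) rule on the eye aspect ratio (EAR) computed from Dlib's eye landmarks: a frame is marked closed only when EAR drops below a lower threshold and reopened only when it rises above an upper one, and PERCLOS is the fraction of closed frames in a sliding window. The threshold values and window length below are illustrative assumptions, not the paper's calibrated parameters.

```python
from collections import deque

# Dual thresholds with hysteresis: an eye is marked closed only when EAR
# drops below CLOSE_T, and reopened only when it rises above OPEN_T.
# These values are illustrative, not the paper's calibrated parameters.
CLOSE_T = 0.20
OPEN_T = 0.25

def perclos(ear_sequence, window=30):
    """Return PERCLOS (fraction of closed frames) over the last `window` frames."""
    closed = False
    recent = deque(maxlen=window)
    for ear in ear_sequence:
        if closed and ear > OPEN_T:
            closed = False          # eye has clearly reopened
        elif not closed and ear < CLOSE_T:
            closed = True           # eye has clearly closed
        recent.append(closed)
    return sum(recent) / len(recent) if recent else 0.0
```

The hysteresis gap between the two thresholds prevents a half-open eye from flickering between states frame to frame; the paper then fuses PERCLOS with yawn frequency rather than thresholding it alone.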
Purpose Managing archives with robots rather than people can considerably enhance efficiency, but existing solutions require modifying the structure of archive shelves or installing tracks. This paper aims to develop a fully automated archive access robot that requires no such modification. Design/methodology/approach First, a mobile navigation chassis and a motion algorithm based on laser ranging and map matching are created for autonomous movement to any of the archives' locations. Second, because the existing archives are stacked vertically, a bionic manipulator is made to mimic the movement of manual archive access and is attached to the end of the robot arm to reach different layers of archives. In addition, an industrial camera is used to identify the archives' barcodes and acquire data on their location and thickness. Finally, an archive bin is created to store the retrieved archives. Findings The robot can autonomously move to, identify and access multiple archival copies 2–5 cm thick placed on layers 1–6, without modifying the archival repository or using auxiliary devices. Research limitations/implications The robot is currently able to navigate, identify and access files placed on different levels. In the future, the robot's navigation and arm movement could be made faster, while its level of visualization could be further improved and made more intelligent. Practical implications The archive access robot developed by the authors makes it possible for robots to manage archives instead of humans, while being cheaper and easier to deploy than existing robots; it has already been tested, with good results, in the archive storage room of the State Grid maintenance branch in Jiangsu, China.
Social implications The all-in-one archive access robot can replace existing robotic access solutions, promote intelligent management in the archive industry and the construction of unmanned archive repositories, and provide ideas for developing robots that access book-like materials. Originality/value This study explores the use of robots to identify and access archives without altering archive shelves or installing auxiliary devices, so the robot can be quickly deployed in a storage room to improve the efficiency of archive management.
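As a rough illustration of the identification step, the sketch below parses a hypothetical archive barcode into a location-and-thickness record and checks it against the robot's reported operating envelope (layers 1–6, thickness 2–5 cm). The `SS-L-NNN-TT` field layout is entirely an assumption of this example; the paper does not specify its barcode format.

```python
from dataclasses import dataclass

@dataclass
class ArchiveRecord:
    shelf: int
    layer: int         # vertical layer (the paper reports access to layers 1-6)
    slot: int
    thickness_mm: int  # the paper reports handling 2-5 cm (20-50 mm) archives

def parse_barcode(code: str) -> ArchiveRecord:
    """Parse a hypothetical 'SS-L-NNN-TT' barcode, e.g. '03-4-017-35'.

    The field layout (shelf-layer-slot-thickness) is an assumption
    made for illustration only.
    """
    shelf, layer, slot, thickness = code.split("-")
    rec = ArchiveRecord(int(shelf), int(layer), int(slot), int(thickness))
    if not 1 <= rec.layer <= 6:
        raise ValueError(f"layer {rec.layer} outside robot's reach (1-6)")
    if not 20 <= rec.thickness_mm <= 50:
        raise ValueError(f"thickness {rec.thickness_mm} mm outside gripper range")
    return rec
```

Validating the decoded record before planning a grasp lets the robot reject out-of-range archives at the camera stage rather than mid-manipulation.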