Recent technological developments, such as the Internet of Things (IoT), artificial intelligence, and edge and cloud computing, have paved the way for transforming traditional healthcare systems into smart healthcare (SHC) systems. SHC enhances healthcare management with increased efficiency, convenience, and personalization by using wearable devices and connectivity to access information with rapid response. Wearable devices are equipped with multiple sensors that capture a person's movements. The unlabeled data acquired from these sensors are conventionally used to train models directly on cloud servers, which requires vast memory and incurs high computational costs. To overcome this limitation in SHC, we propose a federated learning-based person movement identification (FL-PMI) framework. A deep reinforcement learning (DRL) scheme is leveraged in FL-PMI to auto-label the unlabeled data. The data are then trained using federated learning (FL), in which the edge servers pass only the model parameters to the cloud rather than vast amounts of raw sensor data. Finally, a bidirectional long short-term memory (BiLSTM) network in FL-PMI classifies the data for the various processes associated with SHC. Simulation results demonstrated the efficiency of FL-PMI, with a 99.67% accuracy score, minimized memory usage and computational costs, and a 36.73% reduction in transmitted data.
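To make the parameter-only exchange concrete, the following minimal sketch (not the authors' implementation; the model sizes, sensor-window shapes, and hyperparameters are illustrative assumptions) shows how edge clients could train a local BiLSTM on their own sensor windows and send only the resulting parameters to the cloud, which aggregates them with federated averaging.

# Minimal FL-PMI-style data flow sketch: edge clients train locally and share
# parameters only; the cloud performs federated averaging (FedAvg).
# All names, shapes, and hyperparameters below are illustrative assumptions.
import copy
import torch
import torch.nn as nn

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_features=9, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # classify from the last time step

def local_update(global_state, windows, labels, epochs=1, lr=1e-3):
    """One edge client: train locally, return parameters only (no raw data)."""
    model = BiLSTMClassifier()
    model.load_state_dict(global_state)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(windows), labels)
        loss.backward()
        opt.step()
    return model.state_dict()

def fed_avg(client_states):
    """Cloud side: average the received parameter tensors."""
    avg = copy.deepcopy(client_states[0])
    for key in avg:
        avg[key] = torch.stack([s[key].float() for s in client_states]).mean(dim=0)
    return avg

# Toy round with two simulated edge clients holding private sensor windows.
global_model = BiLSTMClassifier()
clients = [(torch.randn(8, 50, 9), torch.randint(0, 6, (8,))) for _ in range(2)]
states = [local_update(global_model.state_dict(), x, y) for x, y in clients]
global_model.load_state_dict(fed_avg(states))

In this setting the cloud never receives the raw sensor windows, only the weight tensors, which is the source of the reduced transmission volume reported above.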
The rapid development of Autonomous Vehicles (AVs) increases the need for accurate detection of nearby objects to guarantee safer journeys. Sensors such as Three-Dimensional Light Detection and Ranging (3D LiDAR) and cameras can be used to detect objects effectively. The 3D LiDAR sensor captures the 3D shape of an object and produces point cloud data that describes its geometric structure; however, LiDAR-only detectors may produce false detections, or miss objects entirely, at long range. The camera sensor captures RGB images with sufficient attributes to identify objects distinctly, and its high-resolution images benefit precise classification. However, hindrances such as the absence of depth information in images, the unstructured nature of point clouds, and cross-modality mismatches degrade environmental perception. To this end, this paper proposes an object detection mechanism that fuses the data received from the camera sensor and the 3D LiDAR sensor (OD-C3DL). The 3D LiDAR sensor provides point clouds that capture an object's distance, position, and geometric shape. OD-C3DL employs Convolutional Neural Networks (CNN) to further process the point clouds from the 3D LiDAR sensor together with the camera images to recognize objects effectively. The LiDAR point cloud is enhanced and fused with the image space over the Regions of Interest (ROI) for easier recognition of objects. The evaluation results show that OD-C3DL can process an average of 89 objects per frame in real time, achieves a recall rate of 94%, and reduces extraction time. The average processing time is 65 ms, which makes the OD-C3DL model well suited to AV perception. Furthermore, the mean accuracy of OD-C3DL for identifying automobiles and pedestrians at a moderate degree of difficulty, 79.13% and 88.76% respectively, is higher than that of previous models.
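The camera-LiDAR fusion over ROIs described above can be illustrated with the following minimal sketch (not the OD-C3DL implementation; the calibration matrices, the toy point cloud, and the ROI box are assumed placeholders): LiDAR points are projected into the camera image using the extrinsic and intrinsic calibration, and only the points falling inside a detected 2D ROI are retained to recover the object's distance and 3D extent.

# Illustrative camera-LiDAR ROI fusion step (assumed placeholder calibration).
import numpy as np

def project_lidar_to_image(points_xyz, T_cam_from_lidar, K):
    """Project Nx3 LiDAR points (meters) to Nx2 pixel coordinates.

    T_cam_from_lidar: 4x4 extrinsic transform from LiDAR to camera frame.
    K:                3x3 camera intrinsic matrix.
    """
    ones = np.ones((points_xyz.shape[0], 1))
    pts_cam = (T_cam_from_lidar @ np.hstack([points_xyz, ones]).T).T[:, :3]
    in_front = pts_cam[:, 2] > 0               # keep points in front of the camera
    pix = (K @ pts_cam[in_front].T).T
    pix = pix[:, :2] / pix[:, 2:3]             # perspective divide -> (u, v)
    return pix, pts_cam[in_front]

def points_in_roi(pix, pts_cam, roi):
    """Return 3D points whose projections land inside a 2D ROI (u1, v1, u2, v2)."""
    u1, v1, u2, v2 = roi
    mask = (pix[:, 0] >= u1) & (pix[:, 0] <= u2) & (pix[:, 1] >= v1) & (pix[:, 1] <= v2)
    return pts_cam[mask]

# Toy example: random point cloud already expressed in the camera frame
# (identity extrinsics), a plausible intrinsic matrix, and a hypothetical ROI
# returned by the camera-branch detector.
cloud = np.random.uniform([-10, -2, 1], [10, 2, 40], size=(5000, 3))
K = np.array([[720.0, 0.0, 640.0], [0.0, 720.0, 360.0], [0.0, 0.0, 1.0]])
pix, pts_cam = project_lidar_to_image(cloud, np.eye(4), K)
roi_points = points_in_roi(pix, pts_cam, roi=(600, 300, 700, 420))
# roi_points can then be used to estimate the object's distance and 3D extent.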