Recognizing the different contact patterns imposed on tactile sensors plays an important role in human–machine interaction. In this paper, a flexible tactile sensor with excellent dynamic response characteristics is designed and fabricated based on polyvinylidene fluoride (PVDF). Four contact patterns (stroking, patting, kneading, and scratching) are applied to the tactile sensor, and time-sequence data for the four patterns are collected. A fusion model combining a convolutional neural network (CNN) and a long short-term memory (LSTM) network, named CNN-LSTM, is then constructed to classify and recognize the four contact patterns loaded on the tactile sensor; the recognition accuracies of the four patterns are 99.60%, 99.67%, 99.07%, and 99.40%, respectively. Finally, a CNN model and a random forest (RF) model are built to recognize the four contact patterns on the same dataset used for the CNN-LSTM model. The average accuracies across the four contact patterns for the CNN-LSTM, CNN, and RF models are 99.43%, 96.67%, and 91.39%, respectively. All of the experimental results indicate that the proposed CNN-LSTM performs very effectively in recognizing and classifying contact patterns for the flexible tactile sensor.
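The CNN-LSTM fusion idea above — convolutional layers extracting local features from the tactile time series, an LSTM summarizing their temporal evolution, and a softmax head over the four contact classes — can be illustrated with a minimal NumPy forward pass. All sizes (sequence length, filter count, hidden units) and the random weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    # x: (T, C_in), w: (k, C_in, C_out); valid convolution along time + ReLU.
    k, _, c_out = w.shape
    T = x.shape[0] - k + 1
    out = np.empty((T, c_out))
    for t in range(T):
        out[t] = np.tensordot(x[t:t + k], w, axes=([0, 1], [0, 1])) + b
    return np.maximum(out, 0.0)

def lstm_last_state(x, Wx, Wh, b):
    # Standard LSTM cell unrolled over time; returns the final hidden state.
    H = Wh.shape[0]
    h, c = np.zeros(H), np.zeros(H)
    for xt in x:
        z = xt @ Wx + h @ Wh + b               # gates: input, forget, cell, output
        i, f, g, o = np.split(z, 4)
        i, f, o = 1/(1+np.exp(-i)), 1/(1+np.exp(-f)), 1/(1+np.exp(-o))
        c = f * c + i * np.tanh(g)
        h = o * np.tanh(c)
    return h

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: 200 time steps, 1 sensor channel, 8 conv filters,
# 16 LSTM units, 4 contact classes (stroking, patting, kneading, scratching).
T, C, F, H, K = 200, 1, 8, 16, 4
x = rng.standard_normal((T, C))                       # one tactile time series
feat = conv1d(x, 0.1 * rng.standard_normal((5, C, F)), np.zeros(F))
h = lstm_last_state(feat,
                    0.1 * rng.standard_normal((F, 4 * H)),
                    0.1 * rng.standard_normal((H, 4 * H)),
                    np.zeros(4 * H))
probs = softmax(h @ (0.1 * rng.standard_normal((H, K))))
```

In practice such a model would be trained end to end in a deep learning framework; the sketch only shows why the convolution (local spatial features) and the LSTM (temporal context) are complementary for time-sequence classification.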
Accurately recognizing the hardness and type of different objects with tactile sensors is of great significance in human–machine interaction. In this paper, a novel porous graphene flexible tactile sensor array with excellent performance is designed and fabricated, and it is mounted on a two-finger mechanical actuator. It is used to detect tactile sequence features from different objects by slightly squeezing them by 2 mm. A Residual Network (ResNet) model, with excellent adaptivity and feature extraction ability, is constructed to recognize 4 hardness categories and 12 object types based on the tactile time-sequence signals collected by the novel sensor array; the average accuracies of hardness and type recognition are 100% and 99.7%, respectively. To further verify the classification ability of the ResNet model for the tactile feature information detected by the sensor array, the Multilayer Perceptron (MLP), LeNet, Multi-Channel Deep Convolutional Neural Network (MCDCNN), and ENCODER models are built on the same dataset used for the ResNet model. The average recognition accuracies of the 4 hardness categories, based on those four models, are 93.6%, 98.3%, 93.3%, and 98.1%. Meanwhile, the average recognition accuracies of the 12 object types, based on the four models, are 94.7%, 98.9%, 85.0%, and 96.4%. All of the results demonstrate that the novel porous graphene tactile sensor array has excellent perceptual performance and that the ResNet model can very effectively and precisely recognize object hardness and type for the flexible tactile sensor array.
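The core building block of a ResNet applied to tactile time-sequence signals is the residual block: two convolutions plus a shortcut connection whose sum lets gradients flow through deep stacks. A minimal 1D NumPy version is sketched below; the kernel sizes, channel counts, and the 1×1 projection shortcut are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d_same(x, w):
    # x: (T, C_in), w: (k, C_in, C_out); zero-padded "same" convolution.
    k, c_in, c_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
                     for t in range(x.shape[0])])

def residual_block(x, w1, w2, w_skip=None):
    # Two convolutions plus an identity (or 1x1-projected) shortcut,
    # as in a 1D ResNet for time-series classification.
    y = np.maximum(conv1d_same(x, w1), 0.0)
    y = conv1d_same(y, w2)
    skip = x if w_skip is None else conv1d_same(x, w_skip)
    return np.maximum(y + skip, 0.0)

# Hypothetical sizes: 100 time steps, 8 sensor channels, 16 filters.
T, C, F = 100, 8, 16
x = rng.standard_normal((T, C))
out = residual_block(x,
                     0.1 * rng.standard_normal((3, C, F)),
                     0.1 * rng.standard_normal((3, F, F)),
                     w_skip=0.1 * rng.standard_normal((1, C, F)))
# Global average pooling over time yields one feature vector per sample,
# which a final linear layer would map to hardness or type classes.
feat = out.mean(axis=0)
```

The shortcut is why residual stacks remain trainable at depths where plain CNNs like LeNet degrade, which is consistent with the accuracy gap reported above.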
To measure three-dimensional (3D) forces efficiently and improve the sensitivity of tactile sensors, a novel piezoelectric tactile sensor with a "sandwich" structure is proposed in this paper. An array of circular truncated cone-shaped sensitive units made of polyvinylidene fluoride (PVDF) is sandwiched between two flexible substrates of polydimethylsiloxane (PDMS). Based on the piezoelectric properties of the PVDF sensitive units, finite element modelling and analysis are carried out for the sensor. The relation between the force and the voltage of a sensitive unit is obtained, and a tactile perception model is established. The model can distinguish the sliding direction and identify the material of a slider loaded on the sensor. A backpropagation neural network (BPNN) is built to predict the 3D forces applied to the tactile sensor model, decoupling the 3D forces from the voltages of the sensitive units. The BPNN is further optimized by a genetic algorithm (GA) to improve the accuracy of the 3D force prediction, and fairly good prediction results are obtained. The experimental results show that the novel tactile sensor model can effectively predict 3D forces, and that the GA-optimized BPNN predicts them with much higher precision, which also improves the intelligence of the sensor. All the prediction results indicate that the BPNN performs very effectively in 3D force prediction for the piezoelectric tactile sensor.
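The GA-optimized BPNN idea — a small feedforward network mapping sensitive-unit voltages to a 3D force vector, with a genetic algorithm searching the weight space — can be sketched in NumPy as follows. The synthetic voltage-to-force data, network size, and GA hyperparameters are all assumptions for illustration; a real pipeline would use the sensor's measured voltages and typically combine the GA with backpropagation fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy surrogate for the decoupling task: 4 sensitive-unit voltages
# mapped to a 3D force by a known linear rule plus noise (assumed data).
V = rng.standard_normal((64, 4))
A = rng.standard_normal((4, 3))
F = V @ A + 0.01 * rng.standard_normal((64, 3))

H = 8                                  # hidden units of the 4-H-3 network
n_w = 4 * H + H + H * 3 + 3            # total number of weight parameters

def unpack(w):
    i = 0
    W1 = w[i:i + 4 * H].reshape(4, H); i += 4 * H
    b1 = w[i:i + H]; i += H
    W2 = w[i:i + H * 3].reshape(H, 3); i += H * 3
    b2 = w[i:i + 3]
    return W1, b1, W2, b2

def mse(w):
    # Forward pass of the BPNN candidate; fitness is mean squared force error.
    W1, b1, W2, b2 = unpack(w)
    pred = np.tanh(V @ W1 + b1) @ W2 + b2
    return float(((pred - F) ** 2).mean())

# Minimal genetic algorithm: truncation selection, uniform crossover,
# Gaussian mutation, with elitism so the best individual is never lost.
pop = rng.standard_normal((40, n_w))
init_best = min(mse(w) for w in pop)
for gen in range(60):
    fit = np.array([mse(w) for w in pop])
    elite = pop[np.argsort(fit)[:10]]             # keep the 10 fittest
    children = []
    for _ in range(30):
        p1, p2 = elite[rng.integers(10, size=2)]
        mask = rng.random(n_w) < 0.5               # uniform crossover
        children.append(np.where(mask, p1, p2)
                        + 0.05 * rng.standard_normal(n_w))
    pop = np.vstack([elite, np.array(children)])
final_best = min(mse(w) for w in pop)
```

Thanks to elitism, `final_best` can never exceed `init_best`, which is the basic guarantee that makes the GA a safe wrapper around the network's weight search.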
Haptic force feedback is an important way for humans to perceive the surrounding environment: it estimates contact force in real time and provides appropriate feedback, which has significant research value for robot-assisted minimally invasive surgery, interactive tactile robots, and other application fields. However, most existing noncontact visual force estimation methods rely on traditional machine learning or on 2D/3D CNNs combined with LSTMs. Such methods struggle to fully extract the contextual spatiotemporal interaction semantics of consecutive image frames, so their performance is limited. To this end, this paper proposes a force estimation model based on a time-sensitive dual-resolution learning network to achieve accurate noncontact visual force prediction. First, we normalize the consecutive frames of the robot operation video captured by the camera and use hybrid data augmentation to improve data diversity; second, a deep semantic interaction model is constructed on the time-sensitive dual-resolution learning network, which automatically extracts deep spatiotemporal semantic interaction information from consecutive multiframe images; finally, we construct a simplified prediction model to realize efficient estimation of the interaction force. Results on a large-scale robot-hand interaction dataset show that our method estimates the interaction force of the robot hand more accurately and faster: the average prediction MSE reaches 0.0009 N, R² reaches 0.9833, and the average inference time for a single image is 6.5532 ms. In addition, our method generalizes well under different environments and parameter settings.
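The dual-resolution intuition — one pathway seeing every frame with a lightweight embedding to stay time-sensitive, another seeing subsampled frames with a richer embedding, then fusing both views to regress force — can be sketched with a toy NumPy forward pass. The frame size, sampling stride, embedding widths, and the linear "branches" are illustrative stand-ins, not the paper's actual network.

```python
import numpy as np

rng = np.random.default_rng(3)

def branch(frames, w):
    # Per-frame linear embedding with ReLU, then temporal averaging:
    # a stand-in for a convolutional pathway over a frame sequence.
    emb = np.maximum(frames.reshape(frames.shape[0], -1) @ w, 0.0)
    return emb.mean(axis=0)

# Hypothetical input: 16 consecutive 8x8 grayscale frames of the gripper.
frames = rng.random((16, 8, 8))

# Time-sensitive branch: every frame, small embedding (8 features).
fast = branch(frames, 0.1 * rng.standard_normal((64, 8)))
# Low-temporal-resolution branch: every 4th frame, larger embedding (32).
slow = branch(frames[::4], 0.1 * rng.standard_normal((64, 32)))

# Fuse both temporal views and regress a single scalar contact force.
fused = np.concatenate([fast, slow])
force = float(fused @ (0.1 * rng.standard_normal(40)))
```

Splitting temporal rate and feature width this way is the same trade-off that motivates dual-pathway video models: fine timing from the dense branch, richer appearance semantics from the sparse one.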