Accurate time perception is essential for the successful execution of space missions. To elucidate the effect of microgravity on time perception, we combined three types of emotional picture stimuli (neutral, fear, and disgust) with a temporal bisection task to measure 16 male participants’ time perception during 15 days of –6° head-down bed rest, a reliable simulation model for most physiological effects of spaceflight. We found that: (1) participants temporally overestimated the fear stimuli in the middle phase (day 8), suggesting that when participants’ behavioral simulations were consistent with the action implications of the emotional stimuli, the stimuli could still elicit an overestimation of time even if their subjective arousal was not high; (2) participants’ temporal sensitivity tended to worsen during the bed rest phase (days 8 and 15) and improve in the post-bed rest phase, especially for neutral and fear stimuli, suggesting that repeated presentation of brief emotional stimuli may blunt their affective effects, which would reduce the pacemaker rate and impair temporal perceptual sensitivity. This pattern may also be related to physiological changes during bed rest, such as reduced vagal excitability. These results provide new evidence supporting the theory of embodied cognition in the context of time perception during head-down bed rest and suggest important directions for future perception research in special environments such as microgravity.
A sheep-body segmentation robot can improve production hygiene, product quality, and cutting accuracy, a major advance over traditional manual segmentation. With reference to the New Zealand sheep-body segmentation specification, a vision system for a Cartesian-coordinate robot that cuts half-sheep carcasses was developed and tested. The workflow of the vision system was designed, and an image acquisition device based on an Azure Kinect sensor was developed. LabVIEW software incorporating the image processing algorithm was then integrated with the RGB-D image acquisition device to construct an automatic vision system. An image processing pipeline based on Deeplab v3+ networks was employed to locate the ribs and spine. Taking advantage of the positions of the ribs and spine in the split half-sheep, a key-point-based method was designed to determine five cutting curves. The seven key points were located at the convex points of the ribs and spine and at the root of the hind leg. Using the conversion between the depth image and spatial coordinates, the 3D coordinates of the curves were computed. Finally, the kinematics equation of the Cartesian-coordinate robot arm was established, and the 3D coordinates of the curves were converted into the corresponding motion parameters of the robot arm. The experimental results indicated that the automatic vision system located the cutting curves with a success rate of 98.4%, a processing time of 4.2 s per half-sheep, and a location error of approximately 1.3 mm. The positioning accuracy and speed of the vision system meet the requirements of a sheep-cutting production line, showing that even the most challenging processing operations currently carried out manually can potentially be automated.
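The depth-to-3D conversion step described above typically uses the pinhole camera model to back-project a depth-image pixel into camera-space coordinates. The sketch below illustrates this under assumed placeholder intrinsics (`fx`, `fy`, `cx`, `cy`); it is not the paper's implementation, and real Azure Kinect intrinsics would come from the device calibration.

```python
# Minimal sketch: back-projecting a depth pixel (u, v) to a 3D point in
# camera coordinates via the pinhole model. Intrinsics are hypothetical
# placeholders, not the actual Azure Kinect calibration values.

def depth_pixel_to_3d(u, v, depth_mm, fx, fy, cx, cy):
    """Return the camera-space point (x, y, z) in mm for pixel (u, v)."""
    z = depth_mm
    x = (u - cx) * z / fx  # horizontal offset scaled by depth
    y = (v - cy) * z / fy  # vertical offset scaled by depth
    return (x, y, z)

# A pixel at the principal point maps onto the optical axis (x = y = 0).
point = depth_pixel_to_3d(640, 360, 1000.0, 600.0, 600.0, 640.0, 360.0)
# → (0.0, 0.0, 1000.0)
```

Each point on a detected cutting curve would be converted this way before being transformed into the robot arm's base frame and turned into motion parameters.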
Scene text recognition is a popular topic and can benefit various tasks. Although many methods have been proposed for close-set text recognition, they cannot be directly applied to open-set scenarios, where the evaluation set contains novel characters that do not appear in the training set. Conventional methods require collecting new data and retraining the model to handle such novel characters, an expensive and tedious process. In this paper, we propose a label-to-prototype learning framework that handles novel characters without retraining the model. In the proposed framework, novel characters are effectively mapped to their corresponding prototypes by a label-to-prototype learning module. This module is trained on characters with seen labels and generalizes easily to novel characters. Additionally, feature-level rectification is performed via a topology-preserving transformation, yielding better alignment between visual features and the constructed prototypes while having a reasonably small impact on model speed. Extensive experiments show that our method achieves promising performance on a variety of zero-shot, close-set, and open-set text recognition datasets.