Motivation: Biomedical research findings are typically disseminated through publications. To simplify access to domain-specific knowledge while supporting the research community, several biomedical databases devote significant effort to manual curation of the literature, a labor-intensive process. The first step in biocuration is identifying articles relevant to the specific area on which the database focuses. Thus, automatically identifying publications relevant to a specific topic within a large volume of publications is an important task toward expediting the biocuration process and, in turn, biomedical research. Current methods focus on textual content, typically extracted from the title-and-abstract. Notably, images and captions are often used in publications to convey pivotal evidence about processes, experiments, and results. Results: We present a new document classification scheme that uses image and caption information in addition to titles-and-abstracts. To use the image information, we introduce a new image representation, namely Figure-word, based on class labels of subfigures. We use word embeddings to represent captions and titles-and-abstracts. To utilize all three types of information, we introduce two information integration methods. The first combines Figure-words and textual features obtained from captions and titles-and-abstracts into a single larger vector for document representation; the second employs a meta-classification scheme. Our experiments and results demonstrate the usefulness of the newly proposed Figure-words for representing images. Moreover, the results showcase the value of Figure-words, captions, and titles-and-abstracts in providing complementary information for document classification; these three sources of information, when combined, lead to overall improved classification performance. Availability and implementation: Source code and the list of PMIDs of the publications in our datasets are available upon request.
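The abstract does not give implementation details, so the following is only a minimal, hypothetical sketch of the two integration strategies it names. It reads the Figure-word representation as a bag of subfigure class labels; the classifiers, feature names, and hyperparameters are illustrative assumptions, not the authors' method.

```python
# Sketch (not the authors' code): Figure-word vectors plus two integration schemes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def figure_word_vector(subfig_labels, vocab):
    """One plausible reading of the Figure-word idea: count how often each
    subfigure class label (e.g., 'gel', 'microscopy') occurs in a document."""
    vec = np.zeros(len(vocab))
    for label in subfig_labels:
        vec[vocab[label]] += 1
    return vec

def concat_features(figure_words, captions, abstracts):
    """Integration method 1: concatenate the three per-document representations
    into a single, larger feature vector."""
    return np.hstack([figure_words, captions, abstracts])

def meta_classify(train_views, y_train, test_views):
    """Integration method 2: a simple stacking/meta-classification scheme.
    One base classifier per information source; the meta-classifier is trained
    on their out-of-fold probability estimates."""
    base_probs, test_probs = [], []
    for X_train, X_test in zip(train_views, test_views):
        clf = LogisticRegression(max_iter=1000)
        # Out-of-fold predictions avoid leaking training labels to the meta-level.
        base_probs.append(cross_val_predict(clf, X_train, y_train,
                                            cv=5, method="predict_proba")[:, 1])
        clf.fit(X_train, y_train)
        test_probs.append(clf.predict_proba(X_test)[:, 1])
    meta = LogisticRegression()
    meta.fit(np.column_stack(base_probs), y_train)
    return meta.predict(np.column_stack(test_probs))
```

Either scheme can then be evaluated with a standard relevant/irrelevant label set; the choice of logistic regression here is purely for brevity.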
Background: In virtual reality (VR) applications such as games, virtual training, and interactive neurorehabilitation, one can employ either the first-person or the third-person user perspective to perceive the virtual environment; however, applications rarely offer both perspectives for the same task. We used a targeted-reaching task in a large-scale virtual reality environment (N = 30 healthy volunteers) to evaluate the effects of user perspective on head and upper-extremity movements and on user performance. We further evaluated how different cognitive challenges modulate these effects. Finally, we obtained the user-reported engagement level under the different perspectives. Results: We found that the first-person perspective resulted in larger head movements (3.52 ± 1.3 m) than the third-person perspective (2.41 ± 0.7 m). The first-person perspective also resulted in more upper-extremity movement (30.08 ± 7.28 m compared to 26.66 ± 4.86 m) and longer completion times (61.3 ± 16.4 s compared to 53 ± 10.4 s) for more challenging tasks such as the "flipped mode", in which moving one arm causes the opposite virtual arm to move. We observed no significant effect of user perspective alone on the success rate. Subjects reported roughly the same level of engagement in the first-person and third-person perspectives (F(1.58) = 0.9, P = .445). Conclusion: User perspective and its interaction with higher cognitive-load tasks influence the extent of movement and user performance in a virtual theater environment, and may inform the choice of interface type (first- or third-person) in immersive training, depending on user conditions and exercise requirements.
Through open data portals, cities, districts, and countries are increasingly making energy consumption data available. These data have the potential to inform both policymakers and local communities; at the same time, however, such datasets are large and complicated to analyze. We present the activity-centered design, from requirements to evaluation, of a web-based visual analysis tool for exploring energy consumption in Chicago. The resulting application integrates energy consumption data and census data, making it possible for both amateurs and experts to analyze disaggregated datasets at multiple levels of spatial aggregation and to compare temporal and spatial differences. An evaluation through case studies and qualitative feedback demonstrates that the application successfully meets its goals of integrating large, disaggregated urban energy consumption datasets and of supporting analysis by both lay users and experts.
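The abstract does not describe the tool's data model, but the kind of multi-level aggregation and census join it refers to can be sketched as follows; the column names (block, tract, community_area, month, kwh, population) are hypothetical, not the actual Chicago dataset schema.

```python
# Illustrative sketch only: aggregate energy use at a chosen spatial level
# and join census data for per-capita comparisons.
import pandas as pd

def aggregate(energy: pd.DataFrame, level: str) -> pd.DataFrame:
    """Sum monthly electricity use at the chosen spatial level
    ('block', 'tract', or 'community_area')."""
    return (energy.groupby([level, "month"], as_index=False)["kwh"]
                  .sum()
                  .sort_values([level, "month"]))

def per_capita(energy: pd.DataFrame, census: pd.DataFrame, level: str) -> pd.DataFrame:
    """Join census population counts so areas of different sizes can be compared."""
    merged = aggregate(energy, level).merge(census[[level, "population"]], on=level)
    merged["kwh_per_capita"] = merged["kwh"] / merged["population"]
    return merged
```

Supporting several values of `level` is one simple way to let users move between disaggregated and aggregated views of the same data.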