The morphological analysis of dendritic spines is an important challenge for the neuroscientific community. Most state-of-the-art techniques rely on user-supervised algorithms to segment the spine surface, especially those designed for light microscopy images; processing large dendritic branches is therefore costly and time-consuming. Although deep learning (DL) models have become one of the most widely used tools in image segmentation, they have not yet been successfully applied to this problem. In this article, we study the feasibility of using DL models to automate spine segmentation from confocal microscopy images. Supervised learning, the most frequently used method for training DL models, requires large data sets of high-quality segmented images (ground truth). As mentioned above, segmenting microscopy images is time-consuming and, therefore, in most cases, neuroanatomists reconstruct only the relevant branches of a stack. Additionally, some parts of the dendritic shaft and spines are left unsegmented because of incomplete dye filling. In the context of this research, we tested the most successful architectures in the DL biomedical segmentation field. To build the ground truth, we used a large, high-quality data set that meets the standards of the field. Nevertheless, this data set alone is not sufficient to train convolutional neural networks for accurate reconstructions. We therefore implemented an automatic preprocessing step and several training strategies to deal with the problems mentioned above. As shown by our results, our system produces a high-quality segmentation in most cases. Finally, we integrated several user-supervised postprocessing algorithms into a graphical user interface application to correct any remaining artifacts.
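The abstract does not name its evaluation metrics, but segmentation quality in this field is commonly scored with the Dice coefficient between a predicted mask and the ground truth. As an illustration only (not the authors' code), a minimal NumPy sketch:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    """Dice overlap between two binary masks (1 = spine voxel, 0 = background)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return float(2.0 * intersection / (pred.sum() + truth.sum() + eps))

# Toy 2D masks standing in for one slice of a segmented spine (hypothetical data).
pred = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]])
truth = np.array([[0, 1, 1],
                  [0, 1, 1],
                  [0, 0, 0]])
print(round(dice_coefficient(pred, truth), 3))  # 2*3/(3+4) ≈ 0.857
```

A score of 1.0 means perfect overlap; values near 0 indicate the prediction misses the annotated structure.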
The analysis and exploration of complex data sets are common problems in many areas, including scientific and business domains. This need has driven substantial development in the data visualization field. However, the diversity of problems to which visual data analysis is applied hinders the implementation of a universal solution that meets the current and future needs of all disciplines. In this paper, we present VMetaFlow, a graphical meta-framework for designing interactive, coordinated-views applications for data visualization. Our meta-framework is based on data flow diagrams, since they have proved their value in simplifying the design of data visualizations. VMetaFlow operates as an abstraction layer that encapsulates and interconnects visualization frameworks in a web-based environment, providing them with interoperability mechanisms; the only requirement is that each visualization framework be accessible through a JavaScript API. We propose a novel data flow model that allows users to define both the interactions between multiple data views and how data flows between visualization and data-processing modules. In contrast with previous data-flow-based frameworks for visualization, we separate view interactions from data items, broadening the expressiveness of our model and supporting the most common types of multi-view interactions. Our meta-framework allows visualization and data analysis experts to focus their efforts on creating data representations and transformations for their applications, whereas nonexperts can reuse previously developed components to design their applications through a user-friendly interface. We validate our approach through a critical inspection with visualization experts and two case studies, carefully selected to illustrate the meta-framework's capabilities. Finally, we compare our approach with the subset flow model designed for multiple coordinated views.
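The abstract's key design point is decoupling view-interaction events from the data items flowing between modules. That separation can be sketched as two independent channels between nodes; the sketch below is illustrative Python under assumed names (`Node`, `InteractionBus`), not VMetaFlow's actual JavaScript API:

```python
from collections import defaultdict
from typing import Any, Callable

class Node:
    """A data-flow node: transforms incoming data and pushes it downstream."""
    def __init__(self, name: str, transform: Callable[[Any], Any] = lambda d: d):
        self.name = name
        self.transform = transform
        self.data_targets: list["Node"] = []
        self.received: list[Any] = []

    def connect(self, other: "Node") -> None:
        self.data_targets.append(other)

    def push(self, data: Any) -> None:
        out = self.transform(data)
        self.received.append(out)
        for target in self.data_targets:
            target.push(out)

class InteractionBus:
    """Separate channel for view interactions (selection, brushing, ...),
    so interaction events never travel through the data edges."""
    def __init__(self) -> None:
        self.handlers: dict[str, list] = defaultdict(list)

    def subscribe(self, event: str, handler: Callable[[Any], None]) -> None:
        self.handlers[event].append(handler)

    def emit(self, event: str, payload: Any) -> None:
        for handler in self.handlers[event]:
            handler(payload)

# Wiring: a filter module feeds a chart view; a brush gesture travels
# over the interaction bus instead of through the data edges.
source = Node("source")
filter_node = Node("filter", transform=lambda rows: [r for r in rows if r > 10])
chart = Node("chart")
source.connect(filter_node)
filter_node.connect(chart)

bus = InteractionBus()
selected: list[int] = []
bus.subscribe("brush", selected.extend)

source.push([5, 12, 42])
bus.emit("brush", [12, 42])
print(chart.received)  # [[12, 42]]
print(selected)        # [12, 42]
```

Because the two channels are separate, a brush in one view can drive any subscribed view without rerouting the underlying data, which is the expressiveness gain the abstract claims over models that attach interactions to data items.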
INDEX TERMS Coordinated views, data visualization, exploratory visual analysis, visual programming

I. INTRODUCTION
Technological advances enable the capture and management of complex data sets that need to be correctly understood. Visualization techniques can help in complex data analysis and exploration, but sometimes the visual channel is insufficient or unavailable. Some authors propose using the haptic channel to reinforce or substitute the visual sense, but the limited human haptic short-term memory still poses a challenge. We present the haptic tuning fork, a reference signal displayed before the haptic information to increase the discriminability of haptic icons. With this reference, the user does not depend only on short-term memory. We evaluate the usefulness of the haptic tuning fork in impedance kinesthetic devices, as these are the most common. Furthermore, since the renderable signal ranges are device-dependent, we introduce a methodology to select a discriminable set of signals, called the haptic scale. Both the haptic tuning fork and the haptic scale proved useful in the experiments performed with haptic stimuli varying in frequency.
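The text does not give the selection rule behind the haptic scale. One plausible sketch (an assumption for illustration, not the authors' method) spaces frequencies geometrically within a device's renderable range, so that each consecutive pair differs by a fixed relative step and should be roughly equally discriminable:

```python
def haptic_scale(f_min: float, f_max: float, weber_fraction: float) -> list[float]:
    """Frequencies in [f_min, f_max] separated by a constant relative step.

    weber_fraction is the assumed minimum relative change a user can
    reliably discriminate; the actual value is device- and user-dependent.
    """
    ratio = 1.0 + weber_fraction
    scale = []
    f = f_min
    while f <= f_max:
        scale.append(round(f, 2))
        f *= ratio
    return scale

# Hypothetical device range 20-300 Hz with an assumed 30% discriminable step.
print(haptic_scale(20.0, 300.0, 0.30))
```

With these assumed numbers the scale holds eleven frequencies; a smaller Weber fraction would pack more signals into the same device range at the cost of discriminability.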