Social cognition requires neural processing, yet a unifying method linking particular brain activities to social behaviors is lacking. Here, we embedded mobile edge computing (MEC) and light-emitting diodes (LEDs) on a neurotelemetry headstage so that a particular neural event of interest is processed by the MEC and an LED is then illuminated, allowing simultaneous temporospatial visualization of that neural event in multiple, socially interacting mice. As a proof of concept, we configured our system to illuminate an LED in response to gamma oscillations in the basolateral amygdala (BLA gamma) of freely moving mice. We identified (i) BLA gamma responses to a spider robot, (ii) affect-related BLA gamma during conflict, and (iii) formation of defensive aggregation under threat from the robot, with reduced BLA gamma responses in mice located toward the inside of the aggregate. Our system provides an intuitive framework for examining brain-behavior connections in various ecological situations and population structures.
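The abstract does not give implementation details of the on-headstage processing; the following Python sketch only illustrates the general idea of gating an LED flag from a streamed local field potential (LFP) chunk when gamma-band power crosses a threshold. The sampling rate, band edges, threshold rule, and all function names are assumptions for illustration, not the authors' firmware.

```python
# Illustrative sketch (not the authors' implementation): raise an LED flag when
# gamma-band power in an LFP chunk exceeds a multiple of a running baseline.
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 1000.0            # assumed LFP sampling rate (Hz)
GAMMA_BAND = (30, 80)  # assumed gamma band (Hz)

def gamma_power(lfp_chunk: np.ndarray, fs: float = FS) -> float:
    """Mean power of the chunk after band-pass filtering to the gamma band."""
    sos = butter(4, GAMMA_BAND, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, lfp_chunk)
    return float(np.mean(filtered ** 2))

def led_state(lfp_chunk: np.ndarray, baseline_power: float, k: float = 2.0) -> bool:
    """Illuminate the LED when gamma power exceeds k times a baseline estimate."""
    return gamma_power(lfp_chunk) > k * baseline_power
```

In a real headstage this decision would run on the embedded processor in (near) real time; the offline filter above is only meant to make the event-to-LED mapping concrete.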
Automatic medical image segmentation is a crucial procedure for computer-assisted surgery (CAS). In particular, three-dimensional (3-D) reconstruction of medical images of surgical targets can capture fine anatomical structures accurately when image segmentation is optimal, contributing to successful surgical results. However, the performance of automatic segmentation algorithms depends heavily on the consistency of medical image properties. To address this issue, we propose a model for standardizing computed tomography (CT) images: a CT image-to-image translation network that translates diverse CT images (non-standard images) into images with consistent features (standard images), enabling more precise U-Net segmentation. Specifically, we combine an image-to-image translation network with a generative adversarial network (GAN) consisting of a residual-block-based generator and a discriminator. We also use the feature-extraction layers of VGG-16 to extract the style of the standard image and the content of the non-standard image. Moreover, for precise diagnosis and surgery, the anatomical information of the non-standard image must be preserved during image synthesis. For performance evaluation, three methods are therefore employed: (1) visualization of the geometrical matching between the non-standard (content) and synthesized images to verify that anatomical structures are maintained; (2) quantitative assessment using image similarity metrics; and (3) evaluation of U-Net segmentation performance on the synthesized images. We show that our network can transfer the texture of standard CT images to diverse non-standard CT images acquired with different scanners and scan protocols, and we verify that the synthesized images retain the global pose and fine structures of the non-standard images. We further compare the segmentation results predicted for each non-standard image and for the corresponding synthesized image generated by our network. In addition, we compare our model with a windowing process, in which the window parameters of the standard image are applied to the non-standard image, and show that our model outperforms it.
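As an illustrative sketch of the VGG-16 style/content extraction the abstract describes, the PyTorch snippet below computes a content loss against the non-standard image and a Gram-matrix style loss against the standard image. The layer indices, loss weights, and function names are assumptions for illustration; the authors' exact loss formulation may differ.

```python
# Minimal sketch, assuming a PyTorch setup: VGG-16 based content and style
# (Gram matrix) losses for an image-to-image translation generator.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

# ImageNet-pretrained feature extractor, frozen (downloads weights on first use).
_vgg = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYERS = {15}          # assumed: relu3_3
STYLE_LAYERS = {3, 8, 15, 22}  # assumed: relu1_2 .. relu4_3

def vgg_features(x: torch.Tensor, layers: set) -> list:
    """Collect activations at the selected layers (x: B x 3 x H x W).
    Grayscale CT slices would need to be replicated to three channels first."""
    feats = []
    for i, layer in enumerate(_vgg):
        x = layer(x)
        if i in layers:
            feats.append(x)
    return feats

def gram(f: torch.Tensor) -> torch.Tensor:
    """Normalized Gram matrix used as the style representation."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def perceptual_losses(synth, content_img, style_img):
    """Content loss vs. the non-standard image, style loss vs. the standard image."""
    c_loss = sum(F.mse_loss(a, b) for a, b in
                 zip(vgg_features(synth, CONTENT_LAYERS),
                     vgg_features(content_img, CONTENT_LAYERS)))
    s_loss = sum(F.mse_loss(gram(a), gram(b)) for a, b in
                 zip(vgg_features(synth, STYLE_LAYERS),
                     vgg_features(style_img, STYLE_LAYERS)))
    return c_loss, s_loss
```

In a full training loop these terms would be weighted and added to the adversarial loss from the discriminator, so that the generator preserves the anatomy of the non-standard input while matching the texture of the standard image.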