Optical microscopy is a valuable tool for in vivo monitoring of biological structures and functions because of its non-invasiveness. However, imaging deep into biological tissue is challenging because of the scattering and absorption of light. Previous research has shown that 1300 nm and 1700 nm are the two best wavelength windows for deep brain imaging. Here, we combined long-wavelength illumination at ~1700 nm with reflectance confocal microscopy and achieved an imaging depth of ~1.3 mm with ~1 µm spatial resolution in adult mouse brains, 3–4 times deeper than conventional confocal microscopy at visible wavelengths. We showed that the method can be added to any laser-scanning microscope using simple, low-cost sources and detectors such as continuous-wave diode lasers and InGaAs photodiodes. The long-wavelength reflectance confocal imaging we demonstrated is label-free and requires low illumination power. Furthermore, the imaging system is simple and low cost, potentially creating new opportunities for biomedical research and clinical applications.
Recently, deep learning has achieved substantial breakthroughs in fields such as speech recognition, image and video classification, and natural language processing. [1-3] The explosive development of deep learning has promoted the convergence of this field with other disciplines. This progress has benefited from improved models and theories in computer science, as well as from advances in contemporary semiconductor chip technology. However, the limited bandwidth and computing resources of traditional computer systems greatly restrict running speed as deep neural networks (DNNs) grow in scale. The traditional von Neumann architecture separates data storage from computing: frequent, inefficient movement of data between the processor and memory or off-chip storage incurs latency and energy costs, and the mismatch between data transmission and data processing becomes a bottleneck for implementing deep learning in hardware.

Because deep learning demands high bandwidth and high parallelism, data-intensive artificial intelligence (AI) applications have been dominated by cloud computing; that is, edge devices act as data-collecting interfaces and pass data to clustered cloud computing centers for processing. [4] Such AI applications place high demands on network bandwidth and latency and expose users to privacy-leakage risks. [5] For example, in areas with poor network coverage, AI-based autonomous driving such as Tesla's can become unreliable and even life-threatening. With the popularization of deep learning, efficient AI applications for everyday use are becoming an urgent need.

Edge intelligence is a concept complementary to cloud intelligence. [6] Edge computing requires real-time intelligence on devices with strict energy and area budgets, such as smart watches and drones. It pushes cloud services from the network core to the network edge, closer to Internet-of-Things (IoT) devices and data sources, and then builds up an end-to-end network. Physical proximity to the information-generating sources is the characteristic most emphasized by edge computing, which is why high energy efficiency, small size, low latency, and strong privacy protection are prized in edge intelligence. [7-9]

The combination of hardware and AI has produced devices dedicated to deep learning, called neural network accelerators. Pairing traditional complementary metal-oxide-semiconductor (CMOS) technology with emerging nonvolatile memory offers a considerable wealth of possibilities for AI accelerators. [10-13] Using memory technology to store the synaptic weight matrix has laid a foundation for hardware implementations of neuromorphic computing systems, and traditional memories have already been utilized in some prominent AI chips.
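To make the compute-in-memory idea concrete: when the synaptic weight matrix is stored as device conductances in a crossbar array, a matrix-vector product is evaluated in place by Ohm's and Kirchhoff's laws instead of shuttling weights between memory and a processor. The following Python sketch is a minimal, idealized simulation of that principle; the function name, the linear weight-to-conductance mapping, and the conductance range are illustrative assumptions, not details of any cited accelerator.

import numpy as np

def crossbar_matvec(weights, x, g_min=1e-6, g_max=1e-4):
    # Idealized memristive crossbar (sketch, not a cited design):
    # weights are linearly mapped to conductances in [g_min, g_max] siemens,
    # inputs x are applied as row voltages, and each column current is an
    # analog dot product (Ohm's law per cell, Kirchhoff's law per column).
    w_min, w_max = weights.min(), weights.max()
    scale = (g_max - g_min) / (w_max - w_min)
    g = g_min + (weights - w_min) * scale      # "program" the conductances
    i_col = x @ g                              # analog column currents
    # Invert the mapping to recover the nominal matrix-vector product.
    return (i_col - x.sum() * g_min) / scale + x.sum() * w_min

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 3))    # a small synaptic weight matrix
    x = rng.normal(size=4)         # input activations (row voltages)
    print(np.allclose(crossbar_matvec(W, x), x @ W))  # True

Note that the entire weight matrix stays in the array: only the input vector and the output currents cross the memory boundary, which is precisely how such architectures sidestep the von Neumann data-movement bottleneck described above.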
Constructing hierarchical nanostructures is regarded as one of the most effective strategies for improving the cycling stability and rate performance of anodes. Herein, a hierarchical N-doped carbon combined with ultrathin WS2 nanosheets (N-C/WS2) was fabricated through a simple chelation-coordination method followed by sulfuration treatment. Ultrathin, few-layer WS2 nanosheets were embedded in the porous, conductive carbon-nanosheet framework, which not only restricted the aggregation of the WS2 nanosheets but also enhanced the ion/electron transfer kinetics during the charge–discharge process. Owing to the robust interaction between the WS2 nanosheets and the N-doped carbon, the as-prepared N-C/WS2 exhibited excellent cycling stability and rate performance for lithium-ion storage. Specifically, N-C/WS2 anodes delivered an excellent retained capacity of 600 mAh g⁻¹ after 500 cycles at 1.0 A g⁻¹. This work provides a facile strategy for fabricating three-dimensional hierarchical carbon hybrids with metal sulfides for energy storage and conversion applications.