Single particle imaging (SPI) at X-ray free-electron lasers is particularly well suited to determining the 3D structure of particles at room temperature. For a successful reconstruction, diffraction patterns originating from a single hit must be isolated from the large number of acquired patterns. It is proposed that this task can be formulated as an image-classification problem and solved using convolutional neural network (CNN) architectures. Two CNN configurations are developed: one that maximizes the F1 score and one that emphasizes high recall. The CNNs are also combined with expectation-maximization (EM) selection as well as size filtering. The CNN selections are found to have lower contrast in the power spectral density functions than the EM selection used in previous work; however, reconstructions from the CNN-based selections give similar results. Introducing CNNs into SPI experiments streamlines the reconstruction pipeline and enables researchers to classify patterns on the fly and thereby to tightly control the duration of their experiments. Incorporating non-standard artificial-intelligence-based solutions into an existing SPI analysis workflow may be beneficial for the future development of SPI experiments.
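As a rough sketch of what such a single-hit classifier can look like (the architecture, input size and threshold scan below are illustrative assumptions, not the two configurations developed in the work), a binary CNN with a decision threshold tuned either for F1 or for high recall might be set up as follows:

```python
# Minimal sketch (not the authors' architecture): a small binary CNN that labels
# detector frames as "single hit" vs "other", with the decision threshold tuned
# on a validation set either for maximum F1 or for high recall.
import numpy as np
import torch
import torch.nn as nn

class HitClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)  # single logit for "single hit"

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

def pick_threshold(probs, labels, mode="f1", min_recall=0.95):
    """Scan thresholds on a validation set: either maximize F1, or take the
    largest threshold that still keeps recall above `min_recall`."""
    best_t, best_score = 0.5, -1.0
    for t in np.linspace(0.01, 0.99, 99):
        pred = probs >= t
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        recall = tp / max(tp + fn, 1)
        precision = tp / max(tp + fp, 1)
        f1 = 2 * precision * recall / max(precision + recall, 1e-9)
        score = f1 if mode == "f1" else (t if recall >= min_recall else -1.0)
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Toy usage with random arrays standing in for preprocessed diffraction patterns.
model = HitClassifier()
frames = torch.randn(8, 1, 128, 128)
probs = torch.sigmoid(model(frames)).detach().numpy().ravel()
labels = np.random.randint(0, 2, size=8)
print("F1-oriented threshold:", pick_threshold(probs, labels, mode="f1"))
print("Recall-oriented threshold:", pick_threshold(probs, labels, mode="recall"))
```

The two configurations mentioned in the abstract then differ, in this simplified picture, only in which criterion is used to select the operating threshold of the same classifier.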
A novel 2-D fluorescence imaging technique has been developed to visualize the thickness of the aqueous mass boundary layer at a free water surface. Fluorescence is stimulated by high-power LEDs and observed from above with a low-noise, high-resolution, high-speed camera. The invasion of ammonia into the water leads to an increase in pH (from a starting value of 4), which is visualized with the fluorescent dye pyranine. The flux of ammonia can be controlled via its air-side concentration: a higher flux produces basic pH values (pH > 7) in a thicker layer at the water surface from which fluorescent light is emitted. This allows processes affecting the transport of gases at different depths within the aqueous mass boundary layer to be investigated. In this paper, the chemical system and the optical components of the measurement method are presented, and its applicability to a wind-wave tank experiment is demonstrated.
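To make the chemistry concrete, a short charge-balance calculation shows how increasing amounts of dissolved ammonia drive the pH from 4 towards basic values; the acid concentration and equilibrium constants below are standard textbook values assumed for illustration, not parameters taken from the experiment.

```python
# Back-of-the-envelope sketch (assumed values, not from the paper): how invading
# ammonia shifts the pH of initially acidic water (pH 4, e.g. set by a strong
# acid), using the charge balance  [H+] + [NH4+] = [OH-] + [A-].
from scipy.optimize import brentq
import numpy as np

KW = 1e-14          # water ion product at ~25 degC
KA_NH4 = 10**-9.25  # acid dissociation constant of NH4+ (pKa ~ 9.25)
C_ACID = 1e-4       # strong-acid anion concentration giving the initial pH of 4 [mol/L]

def ph_after_ammonia(c_nh3_total):
    """pH once a total ammonia concentration c_nh3_total [mol/L] has invaded."""
    def charge_balance(h):
        nh4 = c_nh3_total * h / (h + KA_NH4)   # protonated fraction of ammonia
        return h + nh4 - KW / h - C_ACID
    h = brentq(charge_balance, 1e-14, 1.0)
    return -np.log10(h)

for c in [0.0, 5e-5, 1e-4, 2e-4, 1e-3]:
    print(f"total ammonia {c:.0e} M  ->  pH {ph_after_ammonia(c):.2f}")
```

Once the dissolved ammonia exceeds the acid initially present, the pH crosses the basic range in which pyranine fluoresces strongly, which is the effect the imaging technique exploits.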
Challenges have become the state-of-the-art approach to benchmarking image analysis algorithms in a comparative manner. While validation on identical data sets was a great step forward, the analysis of results is often restricted to pure ranking tables, leaving relevant questions unanswered. Specifically, little effort has been put into systematically investigating what characterizes images on which state-of-the-art algorithms fail. To address this gap in the literature, we (1) present a statistical framework for learning from challenges and (2) instantiate it for the specific task of instrument instance segmentation in laparoscopic videos. Our framework relies on semantic meta-data annotation of images, which serves as the foundation for a generalized linear mixed model (GLMM) analysis. Based on 51,542 meta-data annotations performed on 2,728 images, we applied our approach to the results of the Robust Medical Instrument Segmentation (ROBUST-MIS) challenge 2019 and identified underexposure, motion and occlusion of instruments, as well as the presence of smoke or other objects in the background, as major sources of algorithm failure. Our subsequent method development, tailored to the specific remaining issues, yielded a deep learning model with state-of-the-art overall performance and specific strengths in the processing of images on which previous methods tended to fail and in the segmentation of small, crossing, moving and transparent instrument(s) (parts). Owing to the objectivity and generic applicability of our approach, it could become a valuable tool for validation in the field of medical image analysis and beyond.
Keywords: surgical data science • image-characteristics-driven algorithm development • minimally invasive surgery • endoscopic vision • grand challenges • biomedical image analysis challenges • generalized linear mixed models • instrument segmentation • deep learning • artificial intelligence
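As an illustration of the type of analysis meant here, a binomial GLMM relating per-image algorithm failure to semantic image characteristics, with a random effect per video, could be fitted along the following lines; the variable names, simulated data and random-effect structure are invented for the sketch and are not the ROBUST-MIS meta data or the authors' exact model specification.

```python
# Illustrative sketch only: a binomial GLMM linking per-image segmentation failure
# to binary image characteristics, with video identity as a variance component.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "video": rng.integers(0, 30, n).astype(str),   # grouping factor (random effect)
    "underexposed": rng.integers(0, 2, n),          # binary image characteristics
    "motion_blur": rng.integers(0, 2, n),
    "smoke": rng.integers(0, 2, n),
})
# Simulate failures: characteristics raise the failure odds, plus per-video offsets.
video_offset = dict(zip(map(str, range(30)), rng.normal(0, 0.5, 30)))
logit = (-2.0 + 1.2 * df.underexposed + 0.8 * df.motion_blur + 0.6 * df.smoke
         + df.video.map(video_offset))
df["failure"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Fixed effects: image characteristics; variance component: video identity.
model = BinomialBayesMixedGLM.from_formula(
    "failure ~ underexposed + motion_blur + smoke",
    {"video": "0 + C(video)"},
    df,
)
result = model.fit_vb()   # variational Bayes fit
print(result.summary())
```

The fitted fixed-effect coefficients then quantify how strongly each image characteristic shifts the odds of failure, while the video-level random effect absorbs correlation between frames of the same procedure.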
A holistic picture of the spatially resolved delivery and biokinetics of nanoparticles (NPs) in the lung, together with the mobility of tissue-resident macrophages (TRMs) and their role in regulating NP cellular fate, is still lacking. Multimodal imaging and deep learning were applied to elucidate the longitudinal inter- and intra-acinar deposition features and regional dosimetry of NPs. The initial NP distribution patterns depended significantly on the pulmonary delivery route and were most uniform for aerosol inhalation. Artificial-intelligence-driven 3D airway segmentation enabled direct determination of bronchial and acinar NP doses. Longitudinal imaging uncovered an intra-acinar NP kinetics profile that was independent of the delivery route. Contrary to the traditional notion of passive diffusion, this study reveals that long-term NP retention in the lung is facilitated by intra-acinar NP transport mediated by phagocytosis and patrolling by TRMs. Overall, this study elucidates how NP delivery features and TRM immunity in the lung shape the fate of biopersistent NPs.
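As a schematic of how a 3D airway segmentation makes regional dosimetry possible, the bronchial and acinar doses reduce to summing a co-registered NP signal within each labelled region; the label convention and array names below are assumptions for illustration, not the study's actual pipeline.

```python
# Schematic sketch: derive regional NP dose from a labelled 3D segmentation and a
# co-registered NP signal volume (e.g. calibrated fluorescence intensity).
import numpy as np

BRONCHIAL, ACINAR = 1, 2   # example label values in the segmentation volume

def regional_np_dose(segmentation, np_signal):
    """Sum the NP signal per airway region and report its fraction of the total."""
    assert segmentation.shape == np_signal.shape
    total = np_signal.sum()
    doses = {}
    for name, label in {"bronchial": BRONCHIAL, "acinar": ACINAR}.items():
        region_dose = np_signal[segmentation == label].sum()
        doses[name] = (region_dose, region_dose / total if total > 0 else 0.0)
    return doses

# Toy volumes standing in for a segmented lung and a co-registered NP channel.
seg = np.random.randint(0, 3, size=(64, 64, 64))
npc = np.random.rand(64, 64, 64)
for region, (dose, frac) in regional_np_dose(seg, npc).items():
    print(f"{region}: dose={dose:.1f}, fraction of total={frac:.2%}")
```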