Colorization for X-ray material discrimination is a key phase in X-ray baggage inspection systems: it supports the detection of contraband and hazardous materials by displaying different materials in distinct colors. Material discrimination identifies materials based on their effective atomic number. In practice, however, the images are inspected and labeled by human operators, which can slow the verification process. Researchers have therefore applied computer vision and machine learning methods to expedite the examination process and improve the accuracy of material identification. This study proposes a color-based material discrimination method for single-energy X-ray images based on dual-energy colorization. We use a convolutional neural network to discriminate materials into several classes, such as organic substances, inorganic substances, and metals. Unlike commonly used segmentation methods, which suppress object detail, our method preserves the details of objects, including occluded ones. We trained and tested our model on three X-ray datasets: a Korean dataset acquired with three kinds of scanners (Rapiscan, Smith, and Astrophysics), SIXray, and COMPASS-XP. The results show that the proposed method achieves high colorization performance in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and learned perceptual image patch similarity (LPIPS). Finally, we applied the trained models to single-energy X-ray images and compared the results obtained from each model.
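As a point of reference for the first of the reported metrics, the sketch below shows how PSNR is conventionally computed between a colorized image and its dual-energy reference. This is a generic implementation of the standard formula, not code from the paper; the function name and 8-bit dynamic range are assumptions.

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between two images (higher is better).

    Assumes both images share the same shape and an 8-bit dynamic range
    unless max_val is overridden.
    """
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        # Identical images: distortion is zero, PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

For example, two images differing by exactly one gray level everywhere have an MSE of 1 and hence a PSNR of 20·log10(255) ≈ 48.13 dB; SSIM and LPIPS complement this by measuring structural and perceptual similarity rather than raw pixel error.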
Light field (LF) technology has attracted great interest due to its use in many applications, especially since the introduction of consumer LF cameras, which facilitated the acquisition of LF images. Obtaining densely sampled LF images remains costly, however, because of the trade-off between spatial and angular resolution. Accordingly, in this research, we propose a learning-based solution to this challenging problem: reconstructing dense, high-quality LF images. Instead of training our model on several separate images of the same scene, we use raw LF (lenslet) images. The raw LF format encodes several views of the same scene in a single image, which helps the network learn the relationships between views and produces higher-quality results. Our model consists of two successive modules: LF reconstruction (LFR) and LF augmentation (LFA). Each module is implemented as a convolutional neural network (CNN) with residual connections. We train the network to minimize the absolute error between the novel and reference views. Experimental results on real-world datasets show that the proposed method performs excellently and outperforms state-of-the-art approaches.
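The training objective and the residual design mentioned above can be sketched as follows. This is an illustrative NumPy sketch under stated assumptions: the function names are hypothetical, the "transform" stands in for a module's learned convolutional layers, and the loss is the mean absolute (L1) error between synthesized and reference views, as described in the abstract.

```python
import numpy as np

def l1_loss(novel_views: np.ndarray, reference_views: np.ndarray) -> float:
    """Mean absolute error between synthesized novel views and ground truth.

    This is the training objective described in the abstract; arrays are
    assumed to have matching shapes, e.g. (num_views, H, W, channels).
    """
    return float(np.mean(np.abs(novel_views - reference_views)))

def residual_forward(x: np.ndarray, transform) -> np.ndarray:
    """Residual connection: the module learns a correction added to its input.

    `transform` is a placeholder for a module's learned layers; the skip
    connection means the module only has to model the residual detail.
    """
    return x + transform(x)
```

With this structure, each module (e.g. LFA refining the output of the first module) starts from the identity mapping and learns only the residual correction, which is a common reason residual CNNs converge more easily on restoration tasks.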