In this paper, a Modified Capsule Neural Network (Mod-CapsNet) with a pooling layer but without the squash function is used to recognise indoor home scenes represented in grayscale. The Mod-CapsNet achieved 70% accuracy, compared with 17.2% for a standard CapsNet. Because large datasets of indoor home scenes are scarce, obtaining good accuracy from a smaller dataset is also an important aim of the paper: 20,000 images were used for training and 5,000 for testing, all of dimension 128×128. The analysis shows that, for indoor home scene recognition, capsules without a squash function combined with max-pooling layers work better than capsules combined with convolutional layers. Indoor home scenes were chosen specifically to analyse capsule performance on datasets whose images share similarities but are nonetheless quite different; for example, tables may be present in both living rooms and dining rooms even though these are quite different rooms.
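For reference, the squash non-linearity that a standard CapsNet applies to each capsule's output vector, and that Mod-CapsNet omits, can be sketched as follows (a minimal pure-Python illustration; the function name and vector representation are illustrative, not the paper's code):

```python
import math

def squash(s, eps=1e-8):
    """Standard CapsNet squash non-linearity (the step Mod-CapsNet omits).

    Rescales a capsule's output vector s so its length lies in [0, 1)
    while preserving its direction:
        v = (||s||^2 / (1 + ||s||^2)) * (s / ||s||)
    """
    norm_sq = sum(x * x for x in s)
    norm = math.sqrt(norm_sq)
    scale = norm_sq / (1.0 + norm_sq) / (norm + eps)
    return [scale * x for x in s]

# A long vector keeps its direction but is squashed to length < 1:
# for s = [3, 4], ||s|| = 5, so the squashed length is 25/26 ≈ 0.96.
v = squash([3.0, 4.0])
```

The squashed length acts as a probability that the entity the capsule represents is present; removing it, as Mod-CapsNet does, leaves the capsule outputs unnormalised.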
Novel deep-learning-based network architectures are investigated for advanced brain tumor image classification and segmentation. Variation in brain tumor characteristics, together with limited labelled datasets, poses a significant challenge for automatic brain tumor segmentation. In this paper, we present a novel U-Net-based architecture that incorporates both global and local feature extraction paths to improve segmentation accuracy. The results included in the paper show that, on the large BraTS 2018 dataset, the novel segmentation network outperforms other approaches across five tumor regions.
A novel encoder-decoder deep learning network called TwoPath U-Net is presented for the multi-class automatic brain tumor segmentation task. The network uses cascaded local and global feature extraction paths in its down-sampling path, which allows it to learn different aspects of both low-level and high-level features. The proposed architecture, using full-image and patch-based inputs, was trained on the BraTS 2020 training dataset. We tested the network on the BraTS 2019 validation dataset and obtained mean Dice scores of 0.76, 0.64, and 0.58 and 95th-percentile Hausdorff distances of 25.05, 32.83, and 37.57 for the whole tumor, tumor core, and enhancing tumor regions, respectively.
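The Dice score used to report segmentation quality above measures the overlap between a predicted region and the ground truth. A minimal sketch of how it could be computed for one tumor region, assuming binary masks flattened to 0/1 sequences (the function name and mask encoding are illustrative):

```python
def dice_score(pred, truth):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks,
    given as flat sequences of 0/1 voxel labels."""
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * intersection / total

pred  = [1, 1, 0, 1, 0, 0]
truth = [1, 0, 0, 1, 1, 0]
dice_score(pred, truth)  # 2*2 / (3+3) ≈ 0.667
```

A score of 1.0 means perfect overlap and 0.0 means none, so the reported values of 0.76, 0.64, and 0.58 indicate decreasing overlap from the whole tumor down to the enhancing tumor region.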
This work presents a technique for recognising indoor home scenes using object detection. Object detection is performed by a pre-trained Mask-RCNN (Regional Convolutional Neural Network), while scene recognition is performed by a Convolutional Neural Network (CNN). The output of the Mask-RCNN is fed as input to the CNN, providing the CNN with information about the objects detected in a scene, so the CNN recognises the scene from the combination of objects detected. Training the CNN on a variety of Mask-RCNN detection outputs helps it learn the various combinations of objects that a scene can contain. The CNN was trained using 500 Mask-RCNN-generated combinations of 5 different indoor home scenes (bathroom, bedroom, kitchen, living room, and dining room), and the trained network was tested on 24,000 indoor home scene images. The final accuracy produced by the CNN is 97.14%.
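One simple way to picture how detected objects can drive scene recognition is to summarise a detector's output as a presence vector over an object vocabulary. This is only an illustrative simplification, not the paper's pipeline (which feeds the Mask-RCNN output itself to the CNN); the vocabulary and function below are hypothetical:

```python
# Hypothetical object vocabulary; a real Mask-RCNN pre-trained on COCO
# would detect objects from COCO's 80 classes.
VOCAB = ["bed", "toilet", "sink", "oven", "couch", "dining table", "tv"]

def detections_to_vector(detected_labels):
    """Encode a list of detected object labels as a multi-hot vector,
    one plausible way to summarise detector output for a scene classifier."""
    present = set(detected_labels)
    return [1 if obj in present else 0 for obj in VOCAB]

# A scene containing a sink and a toilet suggests "bathroom":
detections_to_vector(["sink", "toilet"])  # → [0, 1, 1, 0, 0, 0, 0]
```

Under this view, the scene classifier learns which object combinations co-occur in each room type, e.g. {sink, toilet} for bathrooms versus {dining table, couch} patterns for dining or living rooms.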