Recently, image-to-image translation has made significant progress owing to the success of conditional Generative Adversarial Networks (cGANs), and unpaired methods based on a cycle-consistency loss, such as DualGAN, CycleGAN, and DiscoGAN, have become popular. However, translation tasks that require high-level visual information conversion remain challenging, such as photo-to-caricature translation, which demands satire, exaggeration, lifelikeness, and artistry. We present an approach for learning to translate faces in the wild from the source photo domain to the target caricature domain in different styles, which can also be applied to other high-level image-to-image translation tasks. To capture global structure together with local statistics during translation, we design a dual-pathway model with one coarse discriminator and one fine discriminator. For the generator, we add a perceptual loss to the adversarial and cycle-consistency losses to achieve representation learning for the two domains, and style is learned through an auxiliary noise input. Experiments on photo-to-caricature translation of faces in the wild show a considerable performance gain of the proposed method over state-of-the-art translation methods, as well as its potential for real applications.
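To make the dual-pathway discriminator concrete, the following is a minimal PyTorch sketch, not the authors' released code: a fine PatchGAN-style discriminator sees the full-resolution image (local statistics), while a coarse one sees a downsampled copy (global structure). The depths, channel widths, and LSGAN-style loss are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    """Conv -> InstanceNorm -> LeakyReLU, the usual PatchGAN building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.LeakyReLU(0.2, inplace=True),
    )

class PatchDiscriminator(nn.Module):
    """PatchGAN discriminator; its receptive field grows with depth."""
    def __init__(self, in_ch=3, base=64, n_layers=3):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2, inplace=True)]
        ch = base
        for _ in range(n_layers - 1):
            layers.append(conv_block(ch, ch * 2))
            ch *= 2
        layers.append(nn.Conv2d(ch, 1, 4, 1, 1))  # one logit per image patch
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

class DualPathwayDiscriminator(nn.Module):
    """Fine pathway on the full-resolution image, coarse pathway on a
    2x-downsampled copy; both judgments are combined in the GAN loss."""
    def __init__(self):
        super().__init__()
        self.fine = PatchDiscriminator(n_layers=4)    # deeper -> larger patches
        self.coarse = PatchDiscriminator(n_layers=3)

    def forward(self, x):
        return self.fine(x), self.coarse(F.avg_pool2d(x, kernel_size=2))

if __name__ == "__main__":
    d = DualPathwayDiscriminator()
    fake = torch.randn(1, 3, 256, 256)
    f, c = d(fake)
    # Sum the least-squares GAN losses from both pathways (fake label = 0).
    loss = F.mse_loss(f, torch.zeros_like(f)) + F.mse_loss(c, torch.zeros_like(c))
    print(f.shape, c.shape, loss.item())
```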
This paper proposes a strategy in which a structure is developed to recognize and classify the tumor type. Over the years, numerous researchers have examined this domain and proposed techniques for it. A brain tumor segmentation approach is developed based on efficient deep learning techniques implemented in a unified system, achieving accurate appearance and spatial results through Conditional Random Fields (CRF) and Heterogeneous Convolutional Neural Networks (HCNN). The deep learning model is trained on 2D image patches and image slices. The proposed method has the following steps: 1) train the HCNN on image patches; 2) train the CRF with a CRF-Recurrent Regression-based Neural Network (RRNN) on image slices with the HCNN parameters fixed; 3) fine-tune the HCNN and CRF-RRNN jointly on image slices. In total, three segmentation models are trained using axial, coronal, and sagittal image patches and slices, and their outputs are then assembled into brain tumor segmentations using a voting fusion technique; the resulting system can be examined on an Internet of Medical Things (IoMT) platform. The experimental results show that the approach can build a segmentation model from Flair, T1c, and T2 scans alone and achieve performance comparable to using Flair, T1, T1c, and T2 scans.
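The voting fusion step can be illustrated with a short sketch, assuming each view-specific model (axial, coronal, sagittal) outputs an integer label volume of the same shape; the function name and the five-class label space are our own illustrative assumptions, not the paper's code.

```python
import numpy as np

def voting_fusion(axial, coronal, sagittal, n_classes=5):
    """Majority-vote fusion of three view-specific label volumes.

    Inputs are integer label volumes of identical shape (D, H, W).
    Ties are broken in favor of the lowest class index.
    """
    stacked = np.stack([axial, coronal, sagittal])           # (3, D, H, W)
    votes = np.zeros((n_classes,) + axial.shape, dtype=np.int32)
    for c in range(n_classes):
        votes[c] = (stacked == c).sum(axis=0)                # votes per class
    return votes.argmax(axis=0)                              # winner per voxel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, c, s = (rng.integers(0, 5, size=(8, 16, 16)) for _ in range(3))
    fused = voting_fusion(a, c, s)
    print(fused.shape)  # (8, 16, 16)
```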
Intelligent detection of marine organisms plays an important part in the marine economy, and detecting marine organisms quickly and accurately in complex marine environments is essential for intelligent marine equipment. Existing object detection models do not work well underwater. This paper improves the structure of the EfficientDet detector and proposes EfficientDet-Revised (EDR), a new marine organism object detection model. Specifically, the MBConvBlock is reconstructed by adding a Channel Shuffle module to enable the exchange of information between the channels of the feature layer. The fully connected layer of the attention module is removed, and convolution is used instead to cut down the number of network parameters. An Enhanced Feature Extraction module is constructed for multi-scale feature fusion to strengthen the network's ability to extract features of different objects. Experimental results demonstrate that the mean average precision (mAP) of the proposed method reaches 91.67% and 92.81% on the URPC dataset and the Kaggle dataset, respectively, outperforming other object detection models. At the same time, the processing speed reaches 37.5 frames per second (FPS) on the URPC dataset, meeting real-time requirements. The model can provide a useful reference for underwater robots performing tasks such as intelligent grasping.
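As a rough PyTorch sketch of the two modifications described above: channel shuffle is the standard ShuffleNet-style operation, and one common way to replace the attention module's fully connected layers with convolution is an ECA-style 1-D convolution across channels. The exact design in EDR may differ; everything below is an assumption for illustration.

```python
import torch
import torch.nn as nn

class ChannelShuffle(nn.Module):
    """ShuffleNet-style channel shuffle: regroups channels so information
    can be exchanged between channel groups of the feature layer."""
    def __init__(self, groups: int):
        super().__init__()
        self.groups = groups

    def forward(self, x):
        n, c, h, w = x.shape
        assert c % self.groups == 0, "channels must divide evenly into groups"
        # (N, g, C//g, H, W) -> swap group/channel axes -> flatten back
        x = x.view(n, self.groups, c // self.groups, h, w)
        return x.transpose(1, 2).contiguous().view(n, c, h, w)

class ECAAttention(nn.Module):
    """Channel attention without fully connected layers: the usual
    squeeze-and-excitation FC pair is replaced by a single 1-D convolution
    across channels, shrinking the parameter count to k weights."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):
        # (N, C, 1, 1) -> (N, 1, C): treat channels as a 1-D sequence
        w = self.pool(x).squeeze(-1).transpose(1, 2)
        w = self.sigmoid(self.conv(w)).transpose(1, 2).unsqueeze(-1)
        return x * w  # reweight each channel

if __name__ == "__main__":
    feat = torch.randn(2, 32, 64, 64)
    out = ECAAttention()(ChannelShuffle(groups=4)(feat))
    print(out.shape)  # torch.Size([2, 32, 64, 64])
```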