Agricultural sectors play an important role in the economic development of a country, especially in developing ones. Vietnam is an emerging market whose population depends directly on agriculture-related activities for its livelihood, and rural credit access remains a confounding problem there. This paper focuses on the characteristics of rural credit markets, the determinants of farmers' access to those markets, and the socio-economic impacts of credit access in Vietnam, and briefly compares them with those of other developing countries. The question is addressed by reviewing existing literature and empirical evidence, followed by a comprehensive case study of Vietnam. A comprehensive literature review, secondary data collection, and key informant interviews are the methods applied in this research. The results indicate that the features of Vietnamese markets are participation constraints, government intervention, and segmentation. Further results reveal the significant determinants of credit accessibility. The impacts of credit access on output production, household income, and poverty reduction are highlighted in this paper. Some managerial implications are recommended: for households, participation in lending networks; for financial institutions, expanding target clients and capital allocation; and for policy-makers, ensuring market competitiveness and sustainable development in the long run.
Skin lesion classification has recently attracted significant attention. Physicians often need considerable time to analyze skin lesions because of the high similarity between lesion types. An automated classification system using deep learning can assist physicians in detecting the skin lesion type and improve patient care. Skin lesion classification has become a hot research area with the evolution of deep learning architectures. In this study, we propose a novel method using a new segmentation approach and wide-ShuffleNet for skin lesion classification. First, we calculate the entropy-based weighting and first-order cumulative moment (EW-FCM) of the skin image. These values are used to separate the lesion from the background. Then, we input the segmentation result into a new deep learning structure, wide-ShuffleNet, to determine the skin lesion type. We evaluated the proposed method on two large datasets: HAM10000 and ISIC2019. Based on our numerical results, EW-FCM and wide-ShuffleNet achieve higher accuracy than state-of-the-art approaches. Additionally, the proposed method is extremely lightweight and suitable for small systems such as mobile healthcare systems.
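As a rough illustration of the idea behind EW-FCM, the sketch below computes an entropy-weighted histogram of a grayscale skin image and splits it using first-order cumulative moments, Otsu-style. The exact EW-FCM formulation follows the paper, not this code; `ew_fcm_threshold` and all variable names are our own.

```python
import numpy as np

def ew_fcm_threshold(gray):
    """Illustrative entropy-weighted, first-order-cumulative-moment threshold.

    `gray` is a 2D uint8 array. This sketches the general idea (weight each
    gray level by its entropy contribution, then split the histogram using
    cumulative moments); it is not the paper's exact EW-FCM formula.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                    # normalized histogram
    eps = 1e-12
    w = -p * np.log(p + eps)                 # entropy-based weight per gray level
    pw = p * w                               # entropy-weighted mass
    pw /= pw.sum() + eps
    levels = np.arange(256)
    cum_mass = np.cumsum(pw)                 # zeroth-order cumulative moment
    cum_moment = np.cumsum(levels * pw)      # first-order cumulative moment
    total_moment = cum_moment[-1]
    # Otsu-style between-class variance on the entropy-weighted histogram
    denom = cum_mass * (1.0 - cum_mass) + eps
    between = (total_moment * cum_mass - cum_moment) ** 2 / denom
    return int(np.argmax(between))

# Usage sketch: mask = gray > ew_fcm_threshold(gray) separates lesion from background.
```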
A vital and challenging task in computer vision is 3D object classification and retrieval, with many practical applications such as intelligent robots, autonomous driving, multimedia content processing and retrieval, and augmented/mixed reality. Various deep learning methods have been introduced for solving classification and retrieval problems for 3D objects. View-based methods perform best among current techniques (view-based, voxelization, and point cloud methods), but almost all of them require many views to compensate for the loss of spatial information. Many views make the network structure more complicated due to the parallel Convolutional Neural Networks (CNNs) they require. In this paper, we propose a novel method that combines a Global Point Signature Plus with a Deep Wide Residual Network, namely GPSP-DWRN. Global Point Signature Plus (GPSPlus) is a novel descriptor that captures more shape information of the 3D object from a single view. First, an original 3D model is converted into a colored one by applying GPSPlus. Then, the 2D projection of this colored 3D model is stored in a 32 × 32 × 3 matrix. This matrix is the input data of a deep wide residual network, which uses a single CNN structure. We evaluated GPSP-DWRN on a retrieval task using the ShapeNetCore55 dataset and on a classification task using two well-known datasets, ModelNet10 and ModelNet40. Based on our experimental results, our framework performed better than the state-of-the-art methods.
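To make the projection step concrete, the sketch below renders a colored point set into a 32 × 32 × 3 matrix via a single-view orthographic projection. The GPSPlus coloring itself is treated as a precomputed per-point RGB array; `project_colored_points` and its details are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def project_colored_points(points, colors, size=32):
    """Orthographically project colored 3D points onto a size x size x 3 image.

    `points`: (N, 3) float array; `colors`: (N, 3) floats in [0, 1], e.g. a
    GPSPlus-style per-point signature mapped to RGB (assumed precomputed).
    A sketch of the view-generation step only, not the full GPSP-DWRN pipeline.
    """
    # Normalize x, y into [0, size - 1]; z is used only for depth ordering.
    xy = points[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    uv = ((xy - lo) / (hi - lo + 1e-9) * (size - 1)).astype(int)

    img = np.zeros((size, size, 3), dtype=np.float32)
    depth = np.full((size, size), -np.inf)
    for (u, v), z, c in zip(uv, points[:, 2], colors):
        if z > depth[v, u]:            # keep the point nearest the viewer (largest z)
            depth[v, u] = z
            img[v, u] = c
    return img  # 32 x 32 x 3 input for the deep wide residual network
```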
3D shape recognition has become necessary due to the growing availability of 3D data resources. This paper introduces a new method, a hybrid deep learning network combining a convolutional neural network and a support vector machine (CNN-SVM), for 3D recognition. The vertices of the 3D mesh are interpolated to convert it into a Point Cloud; the Point Clouds are then rotated for 3D data augmentation. We obtain the 2D projections of this augmented 3D data and store them in a 32 × 32 × 12 matrix, the input data of CNN-SVM. An eight-layer CNN is used for feature extraction, and an SVM is then applied to classify the extracted features. Two large 3D model datasets, ModelNet40 and ModelNet10, are used for validation. Based on our numerical experimental results, CNN-SVM is more accurate and efficient than other methods. The proposed method is 13.48% more accurate than PointNet on ModelNet10 and 8.5% more accurate than 3D ShapeNets on ModelNet40. The proposed method works both with 3D models in augmented/virtual reality systems and with 3D Point Clouds, the output of the LiDAR sensor in autonomous driving cars.

… connected information [4]. Hence, the right choice for the classification task is to generate 3D Point Clouds directly from the original 3D shape. 3D shape recognition is a basic step widely used by other tasks in intelligent electronic systems, such as 3D object tracking in intelligent robots or 3D object detection in autonomous driving cars. This paper presents a hybrid deep learning method, a combination of a CNN and a polynomial-kernel support vector machine (SVM) classifier, with high accuracy in 3D shape recognition. The CNN is used as the feature-extraction algorithm. Related studies are presented in Section 2 before the method is described in Section 3. Then we compare against other methods based on the numerical results. Finally, the conclusion is given.

Related Studies

Existing methods for the recognition of 3D shapes follow two approaches: hand-crafted shape descriptors and CNN-based methods.

Descriptors of Hand-Crafted Shape

Features of a hand-crafted shape consist of local and global features [5]. Global shape features, for example the viewpoint histogram [6] and shape distributions [7], process the whole shape but are inappropriate for recognizing occluded shapes in cluttered scenes. In contrast, 3D local shape features, for example the spin image [8], rotational projection statistics (RoPS) [9], heat kernel signatures (HKS) [10], and the fast point feature histogram (FPFH) [11], as well as extensions of 2D image features such as 3D SURF [12] and 2.5D SIFT [13], outperform global features in cluttered scenes. These methods have been applied successfully in various areas, including 3D shape matching, shape recognition, and 3D shape retrieval, but they depend heavily on human design and domain experience. As a result, such shape features struggle with massive 3D repositories containing diverse objects from a variety of domains.

CNN-Based Method
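The CNN-SVM pipeline described in the abstract above can be sketched as follows, assuming PyTorch and scikit-learn: a small CNN acts as a fixed feature extractor, and a polynomial-kernel SVM classifies the extracted features. The layer sizes and all names (`FeatureCNN`, `fit_cnn_svm`) are placeholders, not the paper's exact eight-layer architecture.

```python
import torch
import torch.nn as nn
from sklearn.svm import SVC

class FeatureCNN(nn.Module):
    """Toy CNN feature extractor; the paper's eight-layer design is not reproduced."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(12, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),                  # 64 * 8 * 8 features from a 32 x 32 input
        )

    def forward(self, x):                  # x: (B, 12, 32, 32) projection stacks
        return self.net(x)

def fit_cnn_svm(train_x, train_y):
    """Extract CNN features, then fit a polynomial-kernel SVM on them."""
    cnn = FeatureCNN().eval()
    with torch.no_grad():
        feats = cnn(train_x).numpy()       # fixed features for the SVM stage
    svm = SVC(kernel="poly", degree=3)     # polynomial kernel, as named in the paper
    svm.fit(feats, train_y)
    return cnn, svm
```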
Light Detection and Ranging (LiDAR), which uses light in the form of a pulsed laser to measure the distance between the LiDAR sensor and objects, is an effective remote sensing technology. Many applications use LiDAR, including autonomous vehicles, robotics, and virtual and augmented reality (VR/AR). 3D point cloud classification is now a hot research topic with the evolution of LiDAR technology. This research aims to provide a high-performance method for 3D point cloud classification that is compatible with real-world data. More specifically, we introduce a novel framework for 3D point cloud classification, namely GSV-NET, which uses a Gaussian Supervector and enhanced region representation. GSV-NET extracts and combines both global and regional features of the 3D point cloud to further enrich the point cloud features for 3D point cloud classification. Firstly, we input the Gaussian Supervector description into a 3D wide-inception convolutional neural network (CNN) structure to define the global feature. Secondly, we convert the regions of the 3D point cloud into a color representation and capture region features with a 2D wide-inception network. These extracted features are the inputs of a 1D CNN architecture. We evaluate the proposed framework on the ModelNet point cloud dataset and the Sydney LiDAR dataset. The ModelNet dataset was developed by Princeton University (New Jersey, United States), while the Sydney dataset was created by the University of Sydney (Sydney, Australia). Based on our numerical results, our framework achieves higher accuracy than the state-of-the-art approaches.
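The Gaussian Supervector branch can be illustrated with the classic GMM-supervector encoding, sketched below using scikit-learn's GaussianMixture. GSV-NET's exact formulation and normalization may differ; `gaussian_supervector` and the diagonal-covariance background model are our assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gaussian_supervector(points, ubm):
    """Encode an (N, 3) point cloud against a fitted GaussianMixture `ubm`.

    Classic GMM-supervector idea: stack the posterior-weighted mean offsets
    of all components. A sketch of the global-feature branch only; GSV-NET's
    exact formulation may differ.
    """
    post = ubm.predict_proba(points)              # (N, K) responsibilities
    nk = post.sum(axis=0) + 1e-9                  # soft counts per component
    mean_k = (post.T @ points) / nk[:, None]      # (K, 3) posterior-weighted means
    offset = mean_k - ubm.means_                  # deviation from the background model
    # Whiten each component by its (diagonal) variance and weight, then flatten.
    sv = np.sqrt(ubm.weights_)[:, None] * offset / np.sqrt(ubm.covariances_ + 1e-9)
    return sv.ravel()                             # length K * 3 supervector

# Usage sketch: fit the background model on pooled training points first, e.g.
# ubm = GaussianMixture(n_components=16, covariance_type="diag").fit(all_points)
```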