Convolutional Neural Networks (CNNs) have achieved remarkable success on many real-world problems in recent years. However, the performance of CNNs highly depends on their architectures. For state-of-the-art CNNs, the architectures are typically hand-crafted with expertise in both CNNs and the investigated problems. As a result, it is difficult for researchers without extensive expertise in CNNs to explore CNNs for their own problems of interest. In this paper, we propose an automatic architecture design method for CNNs using genetic algorithms, which is capable of discovering a promising CNN architecture for image classification tasks. The proposed algorithm requires neither pre-processing before it works nor post-processing of the discovered CNN, which means it is completely automatic. The proposed algorithm is validated on widely used benchmark datasets against state-of-the-art peer competitors covering eight manually designed CNNs, four semi-automatically designed CNNs, and four further automatically designed CNNs. The experimental results indicate that the proposed algorithm consistently achieves the best classification accuracy among the manually and automatically designed CNNs. Furthermore, the proposed algorithm achieves classification accuracy competitive with the semi-automatic peer competitors while using roughly 10 times fewer parameters. In addition, on average the proposed algorithm consumes only about one percent of the computational resource used by the other architecture-discovering algorithms.
Index Terms: Convolutional neural network, genetic algorithm, neural network architecture optimization, evolutionary deep learning.
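To make the kind of search this abstract describes concrete, here is a minimal sketch, in Python, of a genetic algorithm over variable-length CNN architecture encodings. It is not the authors' exact method: the layer vocabulary, the truncation selection, the placeholder fitness function, and all names (LAYER_CHOICES, evolve, etc.) are illustrative assumptions; in the real algorithm, fitness would come from training the decoded CNN and measuring validation accuracy.

```python
import random

# Hypothetical layer vocabulary; the paper's building blocks differ.
LAYER_CHOICES = ["conv3x3", "conv5x5", "maxpool", "avgpool"]

def random_architecture(min_len=3, max_len=10):
    """An individual is a variable-length list of layer genes."""
    return [random.choice(LAYER_CHOICES)
            for _ in range(random.randint(min_len, max_len))]

def fitness(arch):
    """Placeholder: the real algorithm trains the decoded CNN and
    returns its validation accuracy on the classification task."""
    return random.random()  # stand-in for trained accuracy

def crossover(p1, p2):
    """One-point crossover with independent cut points, so offspring
    depth can differ from both parents (variable-length encoding)."""
    c1 = random.randint(1, len(p1) - 1)
    c2 = random.randint(1, len(p2) - 1)
    return p1[:c1] + p2[c2:], p2[:c2] + p1[c1:]

def mutate(arch, rate=0.1):
    """Point mutation: replace, insert, or delete a layer gene."""
    arch = arch[:]
    if random.random() < rate:
        op = random.choice(["replace", "insert", "delete"])
        i = random.randrange(len(arch))
        if op == "replace":
            arch[i] = random.choice(LAYER_CHOICES)
        elif op == "insert":
            arch.insert(i, random.choice(LAYER_CHOICES))
        elif len(arch) > 2:
            del arch[i]
    return arch

def evolve(pop_size=20, generations=10):
    pop = [random_architecture() for _ in range(pop_size)]
    for _ in range(generations):
        survivors = sorted(pop, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            children.extend(mutate(c) for c in crossover(a, b))
        pop = survivors + children[:pop_size - len(survivors)]
    return max(pop, key=fitness)

print(evolve())
```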
Evolutionary computation methods have been successfully applied to neural networks for more than two decades, but those methods do not scale well to modern deep neural networks because of their complicated architectures and large numbers of connection weights. In this paper, we propose a new method using genetic algorithms for evolving both the architectures and the connection weight initialization values of a deep convolutional neural network to address image classification problems. In the proposed algorithm, an efficient variable-length gene encoding strategy is designed to represent the different building blocks and the unpredictable optimal depth of convolutional neural networks. In addition, a new representation scheme is developed for effectively initializing the connection weights of deep convolutional neural networks, which is expected to prevent networks from becoming trapped in local minima, typically a major issue in backward gradient-based optimization. Furthermore, a novel fitness evaluation method is proposed to speed up the heuristic search with substantially less computational resource. The proposed algorithm is examined and compared with 22 existing algorithms, including state-of-the-art methods, on nine widely used image classification tasks. The experimental results demonstrate the remarkable superiority of the proposed algorithm over the state-of-the-art algorithms in terms of both classification error rate and the number of parameters (weights).
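The distinctive idea here is that a gene carries not only a building block but also statistics for initializing that block's connection weights. The following sketch, under stated assumptions, shows one way such a chromosome could look; the field names (out_channels, mean, std) and value ranges are hypothetical, not the paper's exact representation.

```python
import random
from dataclasses import dataclass

@dataclass
class ConvGene:
    out_channels: int
    mean: float   # Gaussian mean for this layer's weight initialization
    std: float    # Gaussian std for this layer's weight initialization

@dataclass
class PoolGene:
    kind: str     # "max" or "mean" pooling

def random_gene():
    if random.random() < 0.7:
        return ConvGene(out_channels=random.choice([32, 64, 128]),
                        mean=random.uniform(-0.1, 0.1),
                        std=random.uniform(0.01, 0.5))
    return PoolGene(kind=random.choice(["max", "mean"]))

def random_chromosome():
    """Depth is part of the genotype (variable length), so the optimal
    depth is searched rather than fixed in advance."""
    return [random_gene() for _ in range(random.randint(2, 12))]

# Decoding (conceptually): each ConvGene becomes a conv layer whose weights
# are drawn from N(mean, std^2) before gradient training begins, biasing the
# search toward good initialization basins; each PoolGene downsamples.
chromosome = random_chromosome()
print(len(chromosome), chromosome[0])
```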
With a global search mechanism, Particle Swarm Optimisation (PSO) has shown promise in feature selection. However, most current PSO-based feature selection methods use a fixed-length representation, which is inflexible and limits the performance of PSO for feature selection. When applied to high-dimensional data, these methods not only consume a significant amount of memory but also incur a high computational cost. Overcoming this limitation enables PSO to work on data with much higher dimensionality, which has become increasingly common with the advance of data collection technologies. In this study, we propose the first variable-length PSO representation for feature selection, enabling particles to have different and shorter lengths, which defines a smaller search space and therefore improves the performance of PSO. By rearranging features in descending order of their relevance, we help particles with shorter lengths achieve better classification performance. Furthermore, with the proposed length-changing mechanism, PSO can jump out of local optima, further narrow the search space, and focus its search on smaller and more fruitful areas. These strategies enable PSO to reach better solutions in a shorter time. Results on ten high-dimensional datasets of varying difficulty show that the proposed variable-length PSO can achieve much smaller feature subsets with significantly higher classification performance in much shorter time than the fixed-length PSO methods. The proposed method also outperformed the compared non-PSO feature selection methods in most cases.
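As a rough illustration of the representation, consider the sketch below: features are assumed to be pre-ranked by relevance so that dimension i of every particle refers to the i-th most relevant feature, letting particles of different lengths still share a global best. The threshold of 0.6, the placeholder fitness, and the shorten() method are assumptions for clarity, not the authors' exact formulation.

```python
import random

def evaluate(selected_indices):
    """Placeholder for the classification accuracy of a feature subset."""
    return random.random()

class Particle:
    def __init__(self, length):
        self.pos = [random.random() for _ in range(length)]  # length varies per particle
        self.vel = [0.0] * length
        self.best = self.pos[:]
        self.best_fit = evaluate(self.selected())

    def selected(self, threshold=0.6):
        """Dimension i > threshold means the i-th most relevant feature is kept."""
        return [i for i, p in enumerate(self.pos) if p > threshold]

    def step(self, gbest, w=0.7, c1=1.5, c2=1.5):
        for i in range(len(self.pos)):
            g = gbest[i] if i < len(gbest) else 0.0  # particles of unequal length still align
            self.vel[i] = (w * self.vel[i]
                           + c1 * random.random() * (self.best[i] - self.pos[i])
                           + c2 * random.random() * (g - self.pos[i]))
            self.pos[i] = min(1.0, max(0.0, self.pos[i] + self.vel[i]))
        fit = evaluate(self.selected())
        if fit > self.best_fit:
            self.best, self.best_fit = self.pos[:], fit

    def shorten(self, new_length):
        """Length-changing mechanism: truncate to the most relevant prefix,
        shrinking the search space and helping escape local optima."""
        self.pos, self.vel, self.best = (v[:new_length]
                                         for v in (self.pos, self.vel, self.best))

swarm = [Particle(random.randint(20, 100)) for _ in range(10)]
for _ in range(5):
    gbest = max(swarm, key=lambda p: p.best_fit).best
    for p in swarm:
        p.step(gbest)
swarm[0].shorten(15)  # e.g. restrict one particle to the 15 most relevant features
print(max(p.best_fit for p in swarm))
```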
The performance of Convolutional Neural Networks (CNNs) depends heavily on their architectures. Designing a CNN with promising performance requires extensive expertise in both CNNs and the investigated problem domain, which is not necessarily available to every interested user. To address this problem, we propose to automatically evolve CNN architectures using a genetic algorithm based on ResNet and DenseNet blocks. The proposed algorithm is completely automatic in designing CNN architectures: neither pre-processing before it starts nor post-processing of the resulting CNNs is needed, and users are not required to have domain knowledge of CNNs, the investigated problem, or even genetic algorithms. The proposed algorithm is evaluated on the CIFAR10 and CIFAR100 benchmark datasets against 18 state-of-the-art peer competitors. Experimental results show that the proposed algorithm outperforms both state-of-the-art hand-crafted CNNs and CNNs designed by automatic peer competitors in terms of classification performance, and achieves competitive classification accuracy against semi-automatic peer competitors. In addition, the proposed algorithm consumes much less computational resource than most peer competitors in finding the best CNN architectures.
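What distinguishes this encoding from a layer-by-layer one is that the genes are whole ResNet and DenseNet blocks. The sketch below illustrates that idea under assumptions: the unit parameters (amount, out_channels, growth_rate) and the mutation operator set are hypothetical simplifications, not the published design.

```python
import random

def random_unit():
    """A gene is a whole block, not an individual layer."""
    kind = random.choice(["resnet", "densenet", "pool"])
    if kind == "resnet":
        return {"type": "resnet", "amount": random.randint(1, 4),
                "out_channels": random.choice([64, 128, 256])}
    if kind == "densenet":
        return {"type": "densenet", "amount": random.randint(1, 4),
                "growth_rate": random.choice([12, 20, 40])}
    return {"type": "pool", "kind": random.choice(["max", "mean"])}

def random_individual():
    """Variable-length sequence of block/pool units."""
    return [random_unit() for _ in range(random.randint(3, 10))]

def mutate(ind):
    """Mutation adds, removes, or replaces a whole unit, so both the
    depth and the block composition are subject to evolution."""
    ind = [dict(u) for u in ind]
    op = random.choice(["add", "remove", "replace"])
    i = random.randrange(len(ind))
    if op == "add":
        ind.insert(i, random_unit())
    elif op == "remove" and len(ind) > 2:
        del ind[i]
    else:
        ind[i] = random_unit()
    return ind

print(mutate(random_individual()))
```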
Aim: To study the effect of gum mastic, a natural resin, on the proliferation of androgen‐independent prostate cancer PC‐3 cells, and to further investigate the mechanisms involved in this regulatory system, taking the nuclear factor κB (NF‐κB) signal as the target. Methods: The 3‐(4,5‐dimethylthiazol‐2‐yl)‐2,5‐diphenyltetrazolium bromide (MTT) assay and flow cytometry were used to detect the effect of gum mastic on the proliferation of PC‐3 cells. Then, reporter gene assays, RT‐PCR, and Western blotting were carried out to study the effects of gum mastic on the NF‐κB protein level and the NF‐κB signal pathway. The expression of genes involved in the NF‐κB signal pathway, including cyclin D1, inhibitor of κB α (IκBα), and phosphorylated Akt (p‐AKT), was measured. In addition, transient transfection assays with the 5 × NF‐κB consensus sequence promoter were also performed to test the effects of gum mastic. Results: Gum mastic inhibited PC‐3 cell growth and blocked the PC‐3 cell cycle in the G1 phase. Gum mastic also suppressed NF‐κB activity in the PC‐3 cells. The expression of cyclin D1, a crucial cell cycle regulator and an NF‐κB downstream target gene, was reduced as well. Moreover, gum mastic decreased the p‐AKT protein level and increased the IκBα protein level. Conclusion: Gum mastic inhibited proliferation and blocked cell cycle progression in PC‐3 cells by suppressing NF‐κB activity and the NF‐κB signal pathway.