Alseodaphne changchangensis sp. nov. (Lauraceae), represented by a perfectly preserved fossil leaf, was found in the Eocene Changchang Formation of the Changchang Basin, Hainan Island, China. This is the earliest and lowest-latitude occurrence of the genus Alseodaphne, and it offers important fossil evidence for further investigation of the origin and evolution of the genus and of the Eocene paleoclimate of Hainan Island. Based on leaf morphology and cuticle characteristics, our fossil specimen is closest to the living species A. hainanensis Merrill, which is distributed mainly in tropical lowland rain forests, tropical montane rain forests, and subtropical montane evergreen broad-leaved forests on Hainan Island, and is also found on the Wanshan Islands of Zhuhai, Guangdong Province, and in northern Vietnam. These areas share similar climatic conditions: a mean annual temperature of 20-22.6 °C, a mean annual temperature range of 12-12.6 °C, and mean annual precipitation of 1784-2500 mm. Based on nearest living species analysis, we conclude that the Eocene climate of the Changchang Basin on Hainan Island was close to that of the present distribution areas of the living A. hainanensis.
<p>In this paper, we propose a new clustering module that can be trained jointly with existing neural network layers. Specifically, we designed a generic clustering module with a competitive update mechanism. The module consists of a Gaussian unit and a max-pooling layer. The forward propagation of the Gaussian unit follows a joint Gaussian distribution and involves two sets of trainable parameters. It requires no tedious setup and is plug-and-play. To improve the representation capability of the network, we use an auto-encoder to extract the hidden semantics of the input features and combine the clustering module with the auto-encoder to construct an end-to-end unsupervised clustering neural network, which we call HGL_CAE (high-dimensional Gaussian distribution layers combined with convolutional auto-encoders). The network adapts well to different input feature dimensions and can cope with situations where the number of clusters cannot be determined in advance. We conducted experiments on the MNIST and Fashion_MNIST datasets, achieving clustering accuracies of 93.38% and 72.83%, respectively, which is highly competitive with existing methods.</p>
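The abstract's core mechanism (per-cluster Gaussian scoring followed by max pooling to pick a winning cluster) can be sketched as follows. This is an illustrative numpy sketch, not the paper's implementation: the class name, the diagonal-covariance simplification, and the hand-placed means are all assumptions for demonstration.

```python
import numpy as np

class GaussianClusteringModule:
    """Sketch of a competitive clustering module: each cluster holds a
    trainable mean and (diagonal) log-variance -- the two parameter
    sets -- and a max-pooling step keeps only the best-scoring cluster
    per sample (the competitive "winner")."""

    def __init__(self, n_clusters, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.means = rng.normal(size=(n_clusters, dim))  # parameter set 1
        self.log_vars = np.zeros((n_clusters, dim))      # parameter set 2

    def log_likelihood(self, x):
        # Per-cluster diagonal-Gaussian log-density for each sample.
        var = np.exp(self.log_vars)                       # (K, D)
        diff = x[:, None, :] - self.means[None, :, :]     # (N, K, D)
        return -0.5 * np.sum(diff**2 / var + self.log_vars
                             + np.log(2 * np.pi), axis=-1)  # (N, K)

    def forward(self, x):
        scores = self.log_likelihood(x)
        winners = scores.argmax(axis=1)  # max pooling over clusters
        return winners, scores

# Two well-separated blobs; with means near the blob centers (as joint
# training would arrange), the winners separate the two groups.
rng = np.random.default_rng(1)
a = rng.normal(loc=-3.0, size=(50, 2))
b = rng.normal(loc=+3.0, size=(50, 2))
mod = GaussianClusteringModule(n_clusters=2, dim=2)
mod.means = np.array([[-3.0, -3.0], [3.0, 3.0]])  # stand-in for training
labels, _ = mod.forward(np.vstack([a, b]))
```

In the full HGL_CAE setup these parameters would be updated by backpropagation alongside the auto-encoder; here the means are set by hand purely to show the forward pass.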
<p>In this paper, we propose an idea for improving various types of loss functions. It differs from the current approach of balancing errors by increasing the number of input samples per batch: we directly mask the top values of the per-sample error ranking to zero. During error back-propagation, this means the samples corresponding to those losses do not affect the parameter update of the network. In other words, even if a small number of samples are artificially mislabeled, this should not, in theory, have much impact on the performance of the network. Instead, deliberately discarding anomalous losses helps smooth the training of the network. We conduct experiments on several regression and classification tasks, and the results show that the proposed method can effectively improve the expected performance of the network.</p>
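The masking idea described above is simple enough to sketch directly: zero out the k largest per-sample losses before averaging, so those samples contribute no gradient. This is a minimal numpy illustration of the general principle, not the paper's code; the function name and the choice to keep the full-batch denominator are assumptions.

```python
import numpy as np

def masked_mean_loss(per_sample_losses, k):
    """Zero out the k largest per-sample losses, then average.

    Masked samples contribute nothing to back-propagation, so a few
    anomalous (e.g. mislabeled) samples cannot dominate the update.
    """
    losses = np.asarray(per_sample_losses, dtype=float)
    if k > 0:
        top_k = np.argsort(losses)[-k:]  # indices of the k largest losses
        losses = losses.copy()
        losses[top_k] = 0.0              # mask: these samples are ignored
    return losses.mean()

# Example: one wildly anomalous loss, as from a mislabeled sample.
batch = [0.1, 0.2, 0.15, 9.0]
plain = masked_mean_loss(batch, k=0)   # outlier dominates the mean
masked = masked_mean_loss(batch, k=1)  # outlier masked to zero
```

In a real training loop the same masking would be applied to the loss tensor before calling the framework's backward pass.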
In coal mine production, the mine hoist plays a very important role in the overall mine transportation system, and its safety and stability directly affect both the production efficiency of the mine and the safety of its staff. In view of this, a fault diagnosis method for mine hoists based on MFCC-SVDD is proposed. Audio signals of the hoist were collected from multiple channels, and Mel-frequency cepstral coefficients (MFCCs) were extracted from them to serve as fault characteristic parameters. Based on the one-class classifier SVDD, a hypersphere was constructed around the training sound signals of the hoist to test and recognize new signals, completing the classification and recognition of hoist fault types. The MFCC characteristic parameters of 600 randomly selected training samples were used as input to train the model, and 200 test samples were then identified. The fault identification accuracy reached 85%-96%, providing a safeguard for mine production safety.
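The SVDD step described above can be illustrated with a deliberately simplified hypersphere: enclose the normal-condition feature vectors in a sphere and flag anything outside it as a fault. Real SVDD solves a quadratic program (optionally with kernels) to find a minimal enclosing sphere; the centroid-plus-quantile-radius rule below, the function names, and the random stand-in "MFCC" vectors are all illustrative assumptions.

```python
import numpy as np

def fit_hypersphere(train_feats, quantile=0.95):
    """Minimal stand-in for SVDD: center = mean of the training
    features, radius = a high quantile of training distances."""
    center = train_feats.mean(axis=0)
    dists = np.linalg.norm(train_feats - center, axis=1)
    radius = np.quantile(dists, quantile)
    return center, radius

def is_fault(center, radius, feat):
    # A feature vector falling outside the hypersphere is flagged.
    return np.linalg.norm(feat - center) > radius

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(600, 13))  # 600 stand-in MFCC vectors
center, radius = fit_hypersphere(normal)
far_sample = np.full(13, 5.0)   # far from the training distribution
near_sample = np.zeros(13)      # well inside it
```

With a trained SVDD (or even this sketch), `is_fault` gives the accept/reject decision; the paper additionally classifies which fault type a rejected signal belongs to.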