Fractional calculus has attracted considerable attention in recent years. Researchers have discovered that processes in various fields follow fractional rather than integer-order dynamics, meaning that the corresponding differential equations feature derivatives of non-integer order. There are several arguments for why this is the case, one of which is that fractional derivatives carry spatiotemporal memory and/or the ability to express complex naturally occurring phenomena. Another prominent contemporary topic is machine learning, i.e., learning behavior and patterns from historical data. In our ever-changing world with ever-increasing amounts of data, machine learning is a powerful tool for data analysis, problem-solving, modeling, and prediction, and it has yielded many insights and discoveries across scientific disciplines. As these two topics hold considerable potential for combined approaches to describing complex dynamics, this review article surveys past work combining fractional derivatives and machine learning, puts it into context, and thus provides a list of possible combined approaches and the corresponding techniques. Note, however, that this article does not deal with neural networks, as there is already extensive literature on neural networks and fractional calculus. We sorted past combined approaches from the literature into three categories: preprocessing, machine learning and fractional dynamics, and optimization. The contributions of fractional derivatives to machine learning are manifold: they provide powerful preprocessing and feature-augmentation techniques, can improve physics-informed machine learning, and can improve hyperparameter optimization. This article thus serves to motivate researchers dealing with data-based problems, specifically machine learning practitioners, to adopt new tools and to enhance their existing approaches.
Iron ore is the most mined metal and the second most mined mineral in the world. The mining of iron ore and the processing of iron and steel increased sharply during the 20th century and peaked at the beginning of the 21st century. Processes along the iron ore cycle (mining, processing, recycling, weathering), such as the massive displacement of rock, the emission of waste and pollutants, and the weathering of products, have resulted in long-term environmental and stratigraphic changes. Key findings link the iron ore industry to 170 gigatons of rock overburden, a 7.6% share of global CO2 emissions, a 7.4% share of mercury emissions, and a variety of other metals, pollutants, and residues. These global changes have left physical, chemical, biological, magnetic, and sequential markers, which are used to justify the designation of the Anthropocene. The potential markers vary significantly in their persistence and measurability, but key findings can be summarised as technogenic magnetic particles (TMPs), spheroidal carbonaceous fly-ash particles (SCPs), persistent organic pollutants (POPs), heavy metals (vanadium, mercury, etc.), as well as steel input and steel corrosion residues.
In this article, we investigate the applicability of quantum machine learning to classification tasks using two quantum classifiers from the Qiskit Python environment: the Variational Quantum Classifier (VQC) and the Quantum Kernel Estimator (QKE). We evaluate the performance of these classifiers on six widely known and publicly available benchmark datasets and analyze how their performance varies with the number of samples in artificially generated test classification datasets. Our results demonstrate that the VQC and QKE outperform basic machine learning algorithms such as regularized linear regression models (Ridge and Lasso). However, they do not match the accuracy and runtime performance of sophisticated modern boosting classifiers like XGBoost, LightGBM, or CatBoost. We therefore conclude that while quantum machine learning algorithms have the potential to surpass classical machine learning methods in the future, especially once physical quantum infrastructure becomes widely available, they currently lag behind classical approaches. Furthermore, our findings highlight the significant impact of different quantum simulators, feature maps, and quantum circuits on the performance of the employed quantum estimators. This observation emphasizes the need for researchers to provide detailed explanations of their hyperparameter choices for quantum machine learning algorithms, an aspect currently overlooked in many studies within the field. To facilitate further research in this area and ensure the transparency of our study, we have made the complete code available in a linked GitHub repository.