2018
DOI: 10.1109/mc.2018.3620973

Toward Anthropomorphic Machine Learning

Abstract: In this paper, we introduce and discuss the concept of anthropomorphic machine learning as an emerging direction for future development in artificial intelligence (AI) and data science. We start by outlining the research challenges and opportunities that the contemporary landscape offers. We focus on machine learning, statistical learning, deep learning, and computational intelligence as the theoretical and methodological areas of greatest promise for breakthrough results and for underpinning the future …

Cited by 28 publications (18 citation statements). References 19 publications.
“…AI has been closely related to automated reasoning and mimicking human intelligence from its inception (Angelov & Gu, 2018). ANNs, as a branch of AI that is closely related to ML, went through a roller-coaster cycle of development starting in the middle of the past century, while the Second World War was still raging, with the introduction of the computational perceptron, the model of a single neural cell (neuron), by Warren McCulloch and Walter Pitts in 1943 (Bien & Tibshirani, 2011).…”
Section: Brief Historical Perspective (mentioning)
confidence: 99%
“…The reason it is Cauchy is not arbitrary [4]. It can be demonstrated theoretically that if Euclidean or Mahalanobis types of distance in the feature space are considered, the data density reduces to the Cauchy type given in equation (5). It can also be demonstrated that the so-called typicality, τ, is the weighted average of the data density, D, with weights representing the frequency of occurrence of a data sample [6].…”
Section: Concept and Basic Algorithm (mentioning)
confidence: 99%
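The quoted statement refers to equation (5) of the citing paper, which is not reproduced here. Below is a minimal Python sketch of the idea, assuming the Euclidean-distance formulation of data density used in the empirical data analytics literature this work builds on; the function names and the tiny synthetic data set are illustrative only.

```python
import numpy as np

def data_density(X):
    """Cauchy-shaped data density of every sample in X (rows are samples).

    Sketch of D(x_i) = 1 / (1 + ||x_i - mu||^2 / sigma^2), where mu is the
    global mean and sigma^2 is the average scatter of the data; with Euclidean
    distance this expression has the Cauchy form mentioned in the quote.
    """
    mu = X.mean(axis=0)                       # global mean of the data
    dist2 = np.sum((X - mu) ** 2, axis=1)     # squared Euclidean distance to the mean
    sigma2 = max(dist2.mean(), 1e-12)         # average scatter, guarded against zero
    return 1.0 / (1.0 + dist2 / sigma2)

def typicality(X):
    """Typicality: the data density normalised so that it sums to one."""
    D = data_density(X)
    return D / D.sum()

# Usage: an outlying sample receives a visibly lower typicality.
X = np.array([[0.0, 0.0], [0.1, -0.2], [-0.1, 0.1], [5.0, 5.0]])
print(typicality(X))
```

With unique samples and equal frequencies of occurrence, this normalised density corresponds to the weighted-average reading of typicality in the quote; for repeated samples the weights would be their relative frequencies.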
“…Density per feature f is obtained according to equation (5), where D_i^f denotes the density for the f-th feature of the x_i sample.…”
Section: Concept and Basic Algorithm (mentioning)
confidence: 99%
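As a companion to the sketch above, the per-feature density D_i^f can be illustrated by applying the same Cauchy form to each feature independently; this is a hedged illustration, not the citing paper's exact equation (5).

```python
import numpy as np

def per_feature_density(X):
    """Cauchy-shaped density computed independently for every feature.

    Returns an (n_samples, n_features) array whose [i, f] entry plays the
    role of D_i^f: the density of the f-th feature of sample x_i.
    """
    mu = X.mean(axis=0)                         # per-feature mean
    sigma2 = np.maximum(X.var(axis=0), 1e-12)   # per-feature scatter, guarded against zero
    return 1.0 / (1.0 + (X - mu) ** 2 / sigma2)
```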
“…The reason it is Cauchy is not arbitrary [4]. It can be demonstrated theoretically that if Euclidean or Mahalanobis types of distance in the feature space are considered, the data density reduces to the Cauchy type given in equation (5). It can also be demonstrated that the so-called typicality, τ, is the weighted average of the data density, D, with weights representing the frequency of occurrence of a data sample [6].…”
(mentioning)
confidence: 99%
“…So-called reinforcement learning offers some departure from complete labeling, but it still requires user input for each individual data sample. The most powerful approaches, such as deep learning and support vector machines (SVM), suffer from a lack of interpretability [5,11,25,30], are extremely hungry for power, time, and computational resources, and are like dinosaurs, unable to adapt and change with agility. They require complete retraining even for a single or a few new data samples.…”
Section: Introduction (mentioning)
confidence: 99%