2019
DOI: 10.1613/jair.1.11388

AI Generality and Spearman’s Law of Diminishing Returns

Abstract: Many areas of AI today use benchmarks and competitions with larger and wider sets of tasks. This aims to deter AI systems (and research effort) from specialising to a single task, and to encourage them to be prepared to solve previously unseen tasks. It is unclear, however, whether the methods with the best performance are actually the most general ones and, in perspective, whether the trend moves towards more general AI systems. This question has a striking similarity with the analysis of…

Cited by 3 publications (3 citation statements)
References 55 publications
“…The analysis of cognitive trade-offs also draws attention to the issue of similarities and differences between biological and artificial intelligence systems with regard to trade-off architectures (e.g., Hernández-Orallo, 2016). For example, advanced neural networks optimized for accuracy and/or speed in object recognition can be surprisingly fragile against extremely small perturbations, sometimes consisting of a single pixel, that have been specifically engineered to "fool" their algorithms, but may be invisible to human observers (Akhtar & Mian, 2018; Moosavi-Dezfooli et al., 2017).…”
Section: Discussion
Confidence: 99%
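The fragility this citing work describes can be illustrated with a minimal sketch. All values below (the weight vector, bias, input, and step size) are illustrative choices, not the models or attacks from the cited papers; the perturbation follows the familiar gradient-sign idea, which for a linear classifier reduces to stepping each coordinate against the sign of the corresponding weight.

```python
import numpy as np

# Fixed linear classifier: predict class 1 iff w.x + b > 0.
# w, b, and x are illustrative values only.
w = np.array([2.0, -1.0, 0.5, 1.5])
b = -0.1

def predict(x):
    return int(w @ x + b > 0)

x = np.array([0.2, 0.3, 0.1, 0.05])
print(predict(x))            # classified as 1; margin w.x + b = 0.125

# Gradient-sign-style perturbation: move each coordinate against sign(w).
# Each coordinate changes by at most eps, yet the margin drops by
# eps * sum(|w|) = 0.05 * 5 = 0.25, flipping the prediction.
eps = 0.05
x_adv = x - eps * np.sign(w)
print(predict(x_adv))        # now classified as 0
print(np.max(np.abs(x_adv - x)))  # max-norm of the perturbation: 0.05
```

The same mechanism scales up: in high-dimensional image space, the margin shift grows with the number of coordinates while the per-pixel change stays imperceptibly small.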
“…x₁ ≪ ⋯ exhibiting diminishing returns, in the sense that the measurements of xᵢ and xⱼ are approximately equal for all large enough i, j. If human intelligence is non-Archimedean, this could potentially shed light on a psychometrical phenomenon called Spearman's Law of Diminishing Returns (Spearman, 1927; Blum and Holling, 2017; Hernández-Orallo, 2019), the empirical tendency of cognitive ability tests to be less correlated in high-intelligence populations. Even tiny measurement errors would eventually dominate the test result differences as the true measurements plateau.…”
Section: Generalized Archimedean Structures
Confidence: 99%
“…This suggests a general law of diminishing returns: any time a non-Archimedean significantly-ordered structure (X, ≪) is measured using real numbers, if the measurement does not blatantly violate ≪ (in other words, if there are no x ≪ y such that x is given a larger real-number measurement than y), then there will inevitably be elements x₀ ≪ x₁ ≪ ⋯ exhibiting diminishing returns, in the sense that the measurements of xᵢ and xⱼ are approximately equal for large i, j. If human intelligence is non-Archimedean, this could potentially shed light on a psychometrical phenomenon called Spearman's Law of Diminishing Returns [29], [8], [13], the empirical tendency of cognitive ability tests to be less correlated in high-intelligence populations. Even tiny measurement errors would eventually dominate the test result differences as the true results plateau.…”
Section: Generalized Archimedean Structures
Confidence: 99%
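The plateau argument in these citing works can be sketched numerically. The measurement function and noise level below are hypothetical choices for illustration only: a bounded, strictly increasing real-valued measurement of a chain x₀ ≪ x₁ ≪ ⋯ must have consecutive gaps that shrink toward zero, so beyond some index the gaps fall below any fixed noise level and noise dominates the observed differences.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical true measurement of the chain x_0 << x_1 << ...:
# strictly increasing but bounded by 1, with gap 2^-(i+1) at index i.
def true_measure(i):
    return 1.0 - 2.0 ** (-i)

sigma = 1e-3  # hypothetical standard deviation of measurement noise

def observed(i):
    return true_measure(i) + rng.normal(0.0, sigma)

# Early in the chain the true gap dwarfs the noise; far along the
# chain the gap is below sigma, so observed differences between
# consecutive elements are mostly noise.
for i in (2, 10, 20):
    gap = true_measure(i + 1) - true_measure(i)
    print(i, gap, gap < sigma)
```

With this choice the crossover happens near i = 10 (gap 2⁻¹¹ ≈ 4.9e-4 < sigma), matching the quoted claim that even tiny measurement errors eventually dominate once the true measurements plateau.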