2021
DOI: 10.3390/app112411991

Essential Features in a Theory of Context for Enabling Artificial General Intelligence

Abstract: Despite recent Artificial Intelligence (AI) advances in narrow task areas such as face recognition and natural language processing, the emergence of general machine intelligence continues to be elusive. Such an AI must overcome several challenges, one of which is the ability to be aware of, and appropriately handle, context. In this article, we argue that context needs to be rigorously treated as a first-class citizen in AI research and discourse for achieving true general machine intelligence. Unfortunately, …



Cited by 5 publications (2 citation statements)
References 68 publications
“…Ultimately, such language representation models would be used in the real world, not only in multiple‐choice settings, but also in so‐called generative settings where the model may be expected to generate answers to questions (without being given options). Even in the multiple‐choice setting, without robust commonsense, the model will likely not be usable for actual decision making unless we can trust that it is capable of generalization (Kejriwal, 2021; Misra, 2022; Wahle et al, 2022). One option for implementing such robustness in practice may be to add a ‘decision‐making layer’ on a pre‐trained language representation model rather than aim to modify the model's architecture from scratch (Hong et al, 2021; Tang & Kejriwal, 2022; Zaib et al, 2020).…”
Section: Discussion
mentioning
confidence: 99%
“…To conclude the issue, Kejriwal [5] offers a perspective on how context will prove critical to building robust artificial general intelligence (AGI) architectures. In particular, I argue that context needs to become a focal point of the conceptual landscape, rather than a vague topic of discussion in AI papers.…”
mentioning
confidence: 99%