2023
DOI: 10.1098/rsos.221159

Did AI get more negative recently?

Abstract: In this paper, we classify scientific articles in the domain of natural language processing (NLP) and machine learning (ML), as core subfields of artificial intelligence (AI), into whether (i) they extend the current state-of-the-art by the introduction of novel techniques which beat existing models or whether (ii) they mainly criticize the existing state-of-the-art, i.e. that it is deficient with respect to some property (e.g. wrong evaluation, wrong datasets, misleading task specification). We refer to contr…

Cited by 2 publications (3 citation statements)
References 63 publications
“…We use the dataset released by Beese et al (2023), which contains title-abstract pairs and corresponding meta-information such as the publication year and venue. Beese et al (2023) extracted the data from two sources: ACL Anthology (from 1984 to 2021) and machine learning conferences (from 1989 to 2021); we refer to the datasets from these two sources as NLP and ML, respectively. After filtering (described in Appendix A), 32,952 abstract-title pairs remain in our dataset.…”
Section: Data (mentioning)
confidence: 99%
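The quoted description of the Beese et al (2023) data can be made concrete with a minimal sketch. The two-source split (ACL Anthology vs. machine learning conferences) and the year ranges follow the quote; the record fields, example rows, and helper names are illustrative assumptions, and the actual filtering from Appendix A of the citing paper is not reproduced here.

```python
# A minimal, self-contained sketch of the record layout described in the quote.
# Field names, example rows, and the year filter are illustrative assumptions,
# not the filtering actually specified in Appendix A of the citing paper.
from dataclasses import dataclass

@dataclass
class PaperRecord:
    title: str
    abstract: str
    year: int
    venue: str
    source: str  # "NLP" (ACL Anthology) or "ML" (machine learning conferences)

records = [
    PaperRecord("A hypothetical NLP paper", "…", 2019, "ACL", "NLP"),
    PaperRecord("A hypothetical ML paper", "…", 1995, "NeurIPS", "ML"),
]

# Partition into the NLP and ML subsets, mirroring the two sources named in the quote.
nlp = [r for r in records if r.source == "NLP" and 1984 <= r.year <= 2021]
ml = [r for r in records if r.source == "ML" and 1989 <= r.year <= 2021]
print(len(nlp), len(ml))
```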
“…Stage 2: To find more funny title candidates to annotate, the two annotators annotated the funniest 396 titles in the original dataset from Beese et al (2023), predicted by the Stage 1 ensemble classifier; 75.8% (300 titles) were judged as FUNNY or FUNNY_med, which is substantially higher than the proportion of funny titles in the annotated data of Stage 1 (7.3%). Thus, the annotated data expands to 2,441 titles (= 1,730 + 315 + 396), where 1,893 are labeled as ¬FUNNY, 492 as FUNNY_med and 56 as FUNNY.…”
Section: Humorous Title Generation (mentioning)
confidence: 99%
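The counts in the quote can be verified with a few lines of arithmetic; the variable names below are interpretive labels for the stated numbers, not terminology from the cited paper.

```python
# Arithmetic check of the counts quoted above; no external data involved.
part_a, part_b, stage2_candidates = 1730, 315, 396  # components of the stated sum
total = part_a + part_b + stage2_candidates
assert total == 2441  # "the annotated data expands to 2,441 titles"

# The label breakdown after Stage 2 covers all annotated titles.
not_funny, funny_med, funny = 1893, 492, 56
assert not_funny + funny_med + funny == total

# 300 of the 396 Stage 2 candidates were judged FUNNY or FUNNY_med.
print(f"{300 / stage2_candidates:.1%}")  # -> 75.8%
```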
“…Such models could be applied in completely unsupervised settings. One novel pre-trained model for sentiment analysis is SiEBERT, whose potential is demonstrated in [16,17,18,19].…”
Section: Recent Advances in Sentiment Analysis and Topic Modeling of ... (mentioning)
confidence: 99%
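SiEBERT is distributed as an off-the-shelf English sentiment classifier, so the "completely unsupervised" use mentioned in the quote amounts to running the released checkpoint without any further training. A minimal sketch, assuming the publicly available siebert/sentiment-roberta-large-english checkpoint on the Hugging Face Hub and the transformers pipeline API; the example sentences are hypothetical.

```python
# A minimal sketch of zero-shot (unsupervised) sentiment scoring with SiEBERT,
# assuming the publicly released checkpoint on the Hugging Face Hub.
from transformers import pipeline

# Binary sentiment classifier (POSITIVE / NEGATIVE); no task-specific training is added.
sentiment = pipeline(
    "sentiment-analysis",
    model="siebert/sentiment-roberta-large-english",
)

# Hypothetical abstract-style sentences used purely for illustration.
examples = [
    "Our model sets a new state of the art on three benchmarks.",
    "Current evaluation practice in this area is fundamentally flawed.",
]
for text, pred in zip(examples, sentiment(examples)):
    print(f"{pred['label']:>8}  {pred['score']:.3f}  {text}")
```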