Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (ACL-IJCNLP 2021)
DOI: 10.18653/v1/2021.acl-long.2

How Did This Get Funded?! Automatically Identifying Quirky Scientific Achievements

Abstract: Humor is an important social phenomenon, serving complex social and psychological functions. However, despite being studied for millennia, humor is computationally not well understood and is often considered an AI-complete problem. In this work, we introduce a novel setting in humor mining: automatically detecting funny and unusual scientific papers. We are inspired by the Ig Nobel prize, a satirical prize awarded annually to celebrate funny scientific achievements (example past winner: "Are cows more likely to lie do…

Cited by 2 publications (3 citation statements); references 37 publications.
“…Transformer-based LMs and MLMs (Peters et al., 2018; Devlin et al., 2018) have revolutionized NLP in the past couple of years. While most of the impact has been achieved using these pretrained models as a source of meaningful contextual embeddings, recent works use these models for the task they were pretrained for: masked language modeling (Petroni et al., 2019; Kushilevitz et al., 2020; Lazar et al., 2021; Shani et al., 2021; Jiang et al., 2020).…”
Section: Related Work (mentioning)
confidence: 99%
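To make the mask-filling usage described in this statement concrete, here is a minimal, illustrative sketch using the Hugging Face transformers fill-mask pipeline. The bert-base-uncased checkpoint and the example sentence are arbitrary choices for illustration and are not taken from any of the cited works.

```python
# Minimal sketch: querying a pretrained masked language model for the task it
# was pretrained on (mask filling), rather than using it as an embedding source.
# The checkpoint and the input sentence are illustrative assumptions.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# The pipeline returns candidate fills for [MASK], ranked by model probability.
for candidate in fill_mask("Humor is an important [MASK] phenomenon."):
    print(f"{candidate['token_str']!r}  p={candidate['score']:.3f}")
```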
“…The majority were labeled as non-funny, and annotators exhibited low agreement. Shani et al. (2021) classify scientific titles as funny or not using humor-theory-inspired features and scientific language models such as SciBERT (Beltagy et al., 2019), building on a dataset of Ig Nobel winners and humorous papers discussed in online forums.…”
Section: Related Work (mentioning)
confidence: 99%
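As a rough sketch of the kind of SciBERT-based title classifier this statement describes — not the authors' actual pipeline — the snippet below wires a binary classification head onto a public SciBERT checkpoint. The checkpoint name, label layout, and example title are illustrative assumptions, and the head would still need fine-tuning on a labeled title dataset before its scores mean anything.

```python
# Illustrative sketch of a binary funny/not-funny title classifier on top of
# SciBERT. This is an assumption-laden example, not the method of Shani et al.
# (2021): the head below is randomly initialized and must be fine-tuned on a
# labeled dataset of paper titles first.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "allenai/scibert_scivocab_uncased"  # public SciBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def funny_probability(title: str) -> float:
    """Probability assigned to the hypothetical 'funny' class (index 1)."""
    inputs = tokenizer(title, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return torch.softmax(logits, dim=-1)[0, 1].item()

# Arbitrary example title, used only to show the call signature.
print(funny_probability("On the aerodynamics of toast falling butter-side down"))
```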
“…To generate humorous titles, we first need a dataset of humor-annotated titles in our domain (NLP and ML papers). We cannot resort to the data of Shani et al. (2021) or Heard et al. (2022), as those leverage papers from other scientific fields. As a consequence, we build our own dataset.…”
Section: Humorous Title Generation (mentioning)
confidence: 99%