Findings of the Association for Computational Linguistics: EMNLP 2023
DOI: 10.18653/v1/2023.findings-emnlp.563

Injecting structural hints: Using language models to study inductive biases in language learning

Isabel Papadimitriou,
Dan Jurafsky

Abstract: Both humans and large language models are able to learn language without explicit structural supervision. What inductive biases make this learning possible? We address this fundamental cognitive question by leveraging transformer language models: we inject inductive bias into language models by pretraining on formally-structured data, and then evaluate the biased learners' ability to learn typologically-diverse natural languages. Our experimental setup creates a testbed for hypotheses about inductive bias in hu…

Cited by 2 publications · References 25 publications