2023
DOI: 10.1109/taslp.2022.3226333
On Robustness and Sensitivity of a Neural Language Model: A Case Study on Italian L1 Learner Errors

Abstract: The outstanding performance recently reached by Neural Language Models (NLMs) across many Natural Language Processing (NLP) tasks has fostered the debate towards understanding whether NLMs implicitly learn linguistic competence. Probes, i.e. supervised models trained using NLM representations to predict linguistic properties, are frequently adopted to investigate this issue. However, it is still questioned whether probing classification tasks really enable such investigation or whether they simply hint at surface pattern…

Cited by 1 publication
References 68 publications