Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2022
DOI: 10.18653/v1/2022.acl-long.470
Interpreting Character Embeddings With Perceptual Representations: The Case of Shape, Sound, and Color

Cited by 2 publications (4 citation statements, published 2023–2024) · References: 0 publications
“…Since the coefficient drops through the training of the weight matrix in the baseline and the Proposed (Trainable) methods, this comparison points out that the embeddings learned from the phoneme co-occurrences in sentences do not clearly distinguish consonants and vowels. This contradicts the fact that such neural embeddings are known to represent phonetic relationships quite well [2,13]. Yet, since even the baseline performs moderately in the mAP and rank correlation metrics, we can also confirm that the neural embeddings can learn relationships within consonants and within vowels even without any explicit prior.…”
Section: Results
confidence: 66%
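The mAP and rank-correlation evaluation mentioned in this statement can be made concrete with a small sketch. The code below is a hypothetical illustration, not the cited paper's implementation: the phoneme inventory, embeddings, and articulatory feature vectors are random placeholders. It computes the Spearman rank correlation between embedding similarities and feature similarities, and mean average precision (mAP) for retrieving phonemes of the same consonant/vowel class.

```python
# Hypothetical sketch of the mAP and rank-correlation evaluation described
# above; embeddings and feature vectors are random placeholders.
import numpy as np
from scipy.stats import spearmanr
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)

phonemes = ["p", "b", "t", "d", "k", "a", "e", "i", "o", "u"]
is_vowel = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
emb = rng.normal(size=(10, 16))                         # stand-in for learned embeddings
feat = rng.integers(0, 2, size=(10, 5)).astype(float)   # stand-in for articulatory features

def cosine_sim(x):
    x = x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-9)
    return x @ x.T

emb_sim, feat_sim = cosine_sim(emb), cosine_sim(feat)

# Spearman rank correlation between the two pairwise-similarity orderings,
# taken over all distinct phoneme pairs (upper triangle of the matrices).
iu = np.triu_indices(len(phonemes), k=1)
rho, _ = spearmanr(emb_sim[iu], feat_sim[iu])

# mAP: for each query phoneme, rank the others by embedding similarity and
# score a hit when the retrieved phoneme shares the consonant/vowel class.
aps = []
for i in range(len(phonemes)):
    mask = np.arange(len(phonemes)) != i
    y_true = (is_vowel[mask] == is_vowel[i]).astype(int)
    aps.append(average_precision_score(y_true, emb_sim[i, mask]))

print(f"Spearman rho: {rho:.3f}, mAP: {np.mean(aps):.3f}")
```

With real embeddings in place of the random arrays, a high mAP with a modest rank correlation would match the statement's observation: relationships within consonants and within vowels are learned, while the consonant/vowel boundary itself is not sharply separated.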
“…Recent Natural Language Processing (NLP) techniques also obtain neural phoneme embeddings that reflect phonetic similarity without explicit prior and supervision [2,13]. Kolachina and Magyar [13] evaluate if Word2vec [15,16] can learn the phonetic relationships among phonemes.…”
Section: Computational Approaches To Phonetics
confidence: 99%
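As a concrete illustration of the setup this statement describes, here is a minimal, hypothetical sketch of training Word2vec on phoneme-segmented sequences with gensim. The toy corpus and phoneme tokens are invented, and the hyperparameters are illustrative rather than those of the cited studies.

```python
# Minimal sketch: train Word2vec on phoneme "sentences" and inspect the
# nearest neighbours of a phoneme. Toy data, invented for illustration.
from gensim.models import Word2Vec

# Each "sentence" is a list of phoneme tokens (hypothetical toy corpus);
# a real study would use a phonemically transcribed corpus.
corpus = [
    ["dh", "ax", "k", "ae", "t", "s", "ae", "t"],
    ["dh", "ax", "d", "ao", "g", "r", "ae", "n"],
    ["ax", "b", "ih", "g", "k", "ae", "t"],
] * 100  # repeat so the model sees enough co-occurrence counts

model = Word2Vec(
    sentences=corpus,
    vector_size=16,  # small embedding dimension (gensim 4.x parameter name)
    window=2,        # narrow context: phoneme co-occurrence is local
    min_count=1,
    sg=1,            # skip-gram
    seed=0,
)

# Phonemes that occur in similar contexts should end up close in the space.
print(model.wv.most_similar("k", topn=3))
```

The skip-gram objective only ever sees phoneme co-occurrence, so any phonetic structure in the resulting space is learned implicitly, which is exactly what these evaluations test.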
“…This implementation makes the baseline models inaccessible to the phonetic relationship among phonemes, while the proposed models can. However, it should also be noted that character/phoneme-level language models can learn such relationships to some extent implicitly from the phonological restrictions of a language compiled in the training data [36]- [38]. For instance, in every language, since consonants and vowels usually occur in different contexts, language models can implicitly learn which characters/phonemes represent consonants or vowels.…”
Section: Baseline and Model Settings
confidence: 99%
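One common way to check the implicit learning this statement describes is a diagnostic probe: train a simple linear classifier on a model's character embeddings and see whether it recovers the consonant/vowel split. The sketch below is hypothetical; the random matrix stands in for the embedding table of a trained character-level language model.

```python
# Hypothetical probe: does a linear classifier recover the consonant/vowel
# distinction from character embeddings? Embeddings are random placeholders
# for a trained character-level language model's embedding matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
chars = list("abcdefghijklmnopqrstuvwxyz")
emb = rng.normal(size=(26, 32))  # placeholder for learned character embeddings
is_vowel = np.array([int(c in "aeiou") for c in chars])

# Cross-validated linear probe; accuracy well above the majority baseline
# (21/26, about 0.81) would indicate the split is linearly decodable.
probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, emb, is_vowel, cv=5)
print(f"Consonant/vowel probe accuracy: {scores.mean():.2f}")
```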