2018
DOI: 10.1111/cgf.13417

Interactive Analysis of Word Vector Embeddings

Abstract: Word vector embeddings are an emerging tool for natural language processing. They have proven beneficial for a wide variety of language processing tasks. Their utility stems from the ability to encode word relationships within the vector space. Applications range from components in natural language processing systems to tools for linguistic analysis in the study of language and literature. In many of these applications, interpreting embeddings and understanding the encoded grammatical and semantic relations be…
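The "word relationships within the vector space" the abstract refers to are commonly illustrated with vector-offset analogies. The sketch below uses tiny hand-chosen 3-d vectors purely for illustration (real embeddings are learned and typically 100-300 dimensional); the words and values are assumptions, not data from the paper.

```python
import numpy as np

# Toy 3-d "embeddings" chosen by hand to illustrate the vector-offset idea;
# real embeddings (word2vec, GloVe) are learned from corpora.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The classic analogy test: king - man + woman should land near queen.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda w: cosine(vectors[w], target))
print(best)  # → queen
```

With learned embeddings the same offset arithmetic recovers many grammatical and semantic relations, which is what makes interpreting the space worthwhile.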

Cited by 70 publications (59 citation statements)
References 50 publications
“…Interaxis [KCPE16] and Explainers [Gle13] allow users to define semantic axes and subsequently project data points onto the axes. Applying this idea to word embeddings, related work [HG18, BCZ∗16a, BCZ∗16b] has mapped a set of words to two user‐defined concept axes. We similarly enable users to construct a semantic axis from two opposing concepts, but unlike methods that simply lay out two user‐defined axes, we provide additional projection strategies to ensure orthogonality and highlight variations.…”
Section: Related Work
confidence: 99%
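The semantic-axis construction this excerpt describes can be sketched as follows. The word list, random stand-in vectors, and the Gram-Schmidt step used to "ensure orthogonality" are all assumptions for illustration, not the cited papers' exact method.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in word vectors; in practice these come from a trained embedding.
dim = 50
emb = {w: rng.normal(size=dim) for w in
       ["good", "bad", "formal", "casual", "cheap", "elegant"]}

def semantic_axis(pos, neg):
    """Unit axis pointing from the 'neg' concept toward the 'pos' concept."""
    axis = emb[pos] - emb[neg]
    return axis / np.linalg.norm(axis)

# Two user-defined concept axes built from opposing concept pairs.
x_axis = semantic_axis("good", "bad")
y_axis = semantic_axis("formal", "casual")

# One possible orthogonality strategy (an assumption, not necessarily the
# paper's): Gram-Schmidt the second axis against the first.
y_axis -= (y_axis @ x_axis) * x_axis
y_axis /= np.linalg.norm(y_axis)

# Project remaining words onto the two axes for a 2-d layout.
coords = {w: (float(emb[w] @ x_axis), float(emb[w] @ y_axis))
          for w in ["cheap", "elegant"]}
print(coords)
```

Without the orthogonalization step, two user-chosen axes are generally correlated, so a word's position on one axis partly duplicates its position on the other; projecting onto orthogonal axes removes that redundancy.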
“…Researchers have also developed interactive visual analysis tools for latent spaces [STN∗16, JSL∗17, HG18, LBT∗18, LNH∗18]. Some tools focus on a subset of tasks [STN∗16, LBT∗18] in word embeddings, which we extend and bring to a broader range of latent spaces.…”
Section: Related Work
confidence: 99%
“…Gleicher noted that a simplified class of models could be used in a VA application to trade off some performance in exchange for a more explainable analysis [Gle13]. Many other systems and techniques purport to render various types of models interpretable, including deep learning models [LSC∗18, LSL∗17, SGPR18, YCN∗15, BJY∗18], topic models [WLS∗10], word embeddings [HG18], regression models [MP13], classification models [PBD∗10, RSG16, ACD∗15], and composite models for classification [LXL∗18]. While model explanation can be very useful in EMA, it does not help a user discover models; it only helps interpret them.…”
Section: Related Work
confidence: 99%