2017
DOI: 10.1016/j.knosys.2016.12.013

Interpretable semantic textual similarity: Finding and explaining differences between sentences

Abstract: User acceptance of artificial intelligence agents might depend on their ability to explain their reasoning, which requires adding an interpretability layer that facilitates users to understand their behavior. This paper focuses on adding an interpretable layer on top of Semantic Textual Similarity (STS), which measures the degree of semantic equivalence between two sentences. The interpretability layer is formalized as the alignment between pairs of segments across the two sentences, where the relation between…
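The formalization in the abstract — segment pairs aligned across two sentences, each alignment carrying a relation type and a similarity score — can be sketched as a small data structure. This is an illustrative sketch only, not the authors' code; the relation labels follow the SemEval iSTS task conventions, and all names here (`Alignment`, the example chunks) are hypothetical.

```python
# Minimal sketch of the iSTS interpretability layer: each alignment links a
# segment (chunk) of sentence 1 to a segment of sentence 2, labeled with a
# relation type and scored for similarity. All names are illustrative.
from dataclasses import dataclass
from typing import Optional

# Relation labels as used in the SemEval iSTS task formulation (assumed here):
# EQUI = equivalent, OPPO = opposite, SPE1/SPE2 = one side more specific,
# SIMI = similar, REL = related, NOALI = unaligned chunk.
RELATIONS = {"EQUI", "OPPO", "SPE1", "SPE2", "SIMI", "REL", "NOALI"}

@dataclass
class Alignment:
    segment1: Optional[str]  # chunk from sentence 1 (None if unaligned)
    segment2: Optional[str]  # chunk from sentence 2 (None if unaligned)
    relation: str            # one label from RELATIONS
    score: int               # similarity score: 0 (unrelated) .. 5 (equivalent)

    def __post_init__(self):
        assert self.relation in RELATIONS, f"unknown relation {self.relation}"
        assert 0 <= self.score <= 5

# Hypothetical example: two headlines that share an equivalent chunk and
# differ in another, explaining *why* the overall STS score is moderate.
pair = [
    Alignment("12 killed", "12 dead", "EQUI", 5),
    Alignment("in bus accident", "in train crash", "SIMI", 3),
]
print([(a.relation, a.score) for a in pair])
```

The per-alignment labels and scores are exactly the "explanation" the interpretable layer adds on top of a single sentence-level STS score.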

Cited by 39 publications (12 citation statements)
References 24 publications
“…For example, by detecting synsets that are frequently involved in problems classified as incompatible. Finally, we plan to evaluate knowledge resources such as ontology domains [64] and the Multilingual Central Repository (MCR) [33], and to check the utility of Adimen-SUMO v2.6 in Natural Language Processing (NLP) tasks that involve reasoning on commonsense knowledge [18], such as Recognizing Textual Entailment (RTE) [1], [19], [21], Natural Language Inference (NLI) [20] or Interpretable Semantic Textual Similarity (iSTS) [45].…”
Section: Discussion
confidence: 99%
“…For example, by detecting synsets that are frequently involved in problems classified as incompatible. Finally, we plan to evaluate the knowledge in the Multilingual Central Repository (MCR) [23] and to check the utility of Adimen-SUMO v2.6 in Natural Language Processing (NLP) tasks that involve reasoning on commonsense knowledge [11], such as Recognizing Textual Entailment (RTE) [12,14,1], Natural Language Inference (NLI) [13] or Interpretable Semantic Textual Similarity (ISTS) [29].…”
Section: Discussion
confidence: 99%
“…The core STS task was extended to include the interpretability of similarity scores (Agirre et al, 2015;Lopez-Gazpio et al, 2017). The goal of the interpretable STS is to provide reasoning behind the assigned similarity scores by identifying the alignment between pairs of segments across the two sentences, assigning to each alignment a relation type and a similarity score.…”
Section: Related Work
confidence: 99%