Proceedings of the 30th ACM International Conference on Information & Knowledge Management (CIKM 2021)
DOI: 10.1145/3459637.3482099
Evaluating Fairness in Argument Retrieval

Cited by 12 publications (3 citation statements)
References 27 publications
“…Viewpoint Diversity in Ranked Outputs. Previous research has shown that search results across topics and domains (e.g., politics [54], health [65,66]) may not always be viewpoint-diverse and that highly-ranked search results are often unbalanced with respect to query subtopics [30,50]. Limited diversity, or bias, can be rooted in the overall search result index but be amplified by biased queries and rankings [30,56,66].…”
Section: Related Work
confidence: 99%
“…Participants in the TREC Fair Ranking track have also tested various techniques for producing fair rankings. Many of them use diversification-based methods, such as MMR [9], PM-2 and rank fusion [8], and heuristic approaches [14]. Vardasbi et al. [1] leveraged LambdaMART [2], ListNet [3], and logistic regression to maximize evaluation metrics by swapping positions.…”
Section: Related Work
confidence: 99%
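
For intuition about the diversification-based methods this statement cites, here is a minimal sketch of greedy MMR (Maximal Marginal Relevance) re-ranking. It is a sketch under stated assumptions only: the `relevance` scores and `similarity` function are hypothetical placeholders, and this is not the implementation used in any of the cited papers.

```python
# Minimal MMR re-ranking sketch: greedily pick the document that best
# trades off query relevance against redundancy with already-selected docs.

def mmr_rerank(doc_ids, relevance, similarity, lam=0.5, k=10):
    """Return up to k document ids ordered by MMR score.

    doc_ids:    list of candidate document ids
    relevance:  dict mapping id -> relevance score for the query (assumed precomputed)
    similarity: function (id, id) -> similarity in [0, 1] (assumed given)
    lam:        trade-off in [0, 1]; lam=1.0 reduces to pure relevance ranking
    """
    selected = []
    candidates = set(doc_ids)
    while candidates and len(selected) < k:
        def mmr_score(d):
            # Redundancy = highest similarity to anything already selected.
            redundancy = max((similarity(d, s) for s in selected), default=0.0)
            return lam * relevance[d] - (1 - lam) * redundancy
        best = max(candidates, key=mmr_score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

Lowering `lam` shifts weight from relevance toward novelty, which is the basic mechanism diversification-based fair-ranking approaches exploit.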
“…With respect to the evaluation of argument annotations, we could integrate into ARGAEL diverse evaluation methodologies and procedures [40], as well as metrics such as the fairness and diversity of arguments [41], which allow measuring the quality of identified arguments beyond accuracy and topic relevance, as is commonly done [2].…”
Section: Impact and Conclusion
confidence: 99%
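
As one purely illustrative way to quantify the “diversity of arguments” this statement mentions, normalized entropy over stance labels in a result list could serve as a simple proxy. The stance labels and the entropy formulation below are assumptions for the sketch, not the metric defined in [41].

```python
import math
from collections import Counter

def stance_entropy(stances):
    """Normalized Shannon entropy of stance labels in a result list.

    stances: list of labels, e.g. ["pro", "con", "pro"] (hypothetical labels)
    Returns a value in [0, 1]; 1.0 means stances are evenly balanced,
    0.0 means a single stance dominates entirely.
    """
    counts = Counter(stances)
    n = len(stances)
    if n == 0 or len(counts) < 2:
        return 0.0  # no diversity with zero or one distinct stance
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return h / math.log2(len(counts))  # normalize by max possible entropy
```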