2022
DOI: 10.1007/978-3-031-16075-2_11
Gauging Biases in Various Deep Learning AI Models

Cited by 10 publications (3 citation statements)
References 12 publications
“…To further understand the model, the researchers then proceeded with bias analyses. The analysis was built upon previously developed strategies and applied to Meta's DETR transformer families [55,56].…”
Section: Model Training
Citation type: mentioning, confidence: 99%
“…Huang et al.'s work on breast cancer using transformers adds valuable insights into early detection methods [14][15]. Addressing bias in Transformers, crucial across neural networks, has been intensively tackled with models like DETR and Deformable DETR [16][17][18]. Kumar et al.'s testFAILS framework guides responsible LLM apps [19].…”
Section: Related Work
Citation type: mentioning, confidence: 99%
“…Research on chatbot evaluation capitalizes on a profound background in Natural Language Processing (NLP) and an all-encompassing understanding of AI models, particularly within the Python programming ecosystem [13,14]. Recently, focus has veered towards Transformer Neural Networks, with the aim of unearthing and comprehending the biases embedded within their computational layers [15,16]. The emergence of Transformer Neural Networks has been recognized as a new frontier in both natural language processing and computer vision.…”
Section: Research Background and Related Work
Citation type: mentioning, confidence: 99%