2023 ACM Conference on Fairness, Accountability, and Transparency (FAccT '23)
DOI: 10.1145/3593013.3593982

In the Name of Fairness: Assessing the Bias in Clinical Record De-identification

Cited by 7 publications (2 citation statements)
References 21 publications
“…We have released a related implementation using the targets R package for evaluating algorithmic bias in deidentification systems [37]. Xiao et al [38] provide an alternate approach using 100 synthetic templates imitating realistic contexts for PII as it occurs in unstructured clinical notes. For researchers who do not have the infrastructure or privacy needs of data sets with PII, other reproducible pipelines and shared evaluation frameworks, such as NLP Sandbox, exist [39].…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
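
The template-filling approach described in this statement can be made concrete with a short sketch. The Python below is a minimal illustration, not the cited authors' released code: the deidentify function is a hypothetical stand-in for whatever system is under evaluation, and the templates and name lists are arbitrary examples.

# Probe a de-identification system with synthetic templates: inject
# surrogate names from different groups and measure how often each
# group's names are fully removed from the output.

def deidentify(note: str) -> str:
    # Hypothetical stand-in: the real system should return `note` with
    # detected PII replaced (e.g., by "[REDACTED]").
    raise NotImplementedError("plug in the system under evaluation")

# Synthetic templates imitating contexts where names occur in clinical notes.
TEMPLATES = [
    "Patient {name} was admitted with chest pain.",
    "Spoke with {name}'s daughter about discharge planning.",
    "{name} tolerated the procedure well.",
]

# Illustrative surrogate-name groups; a real probe would use curated lists.
NAME_GROUPS = {
    "group_a": ["Emily Walsh", "Greg Baker"],
    "group_b": ["Lakisha Washington", "Jamal Jones"],
}

def recall_by_group(name_groups, templates):
    """Fraction of injected names fully removed, per group."""
    results = {}
    for group, names in name_groups.items():
        caught, total = 0, 0
        for name in names:
            for template in templates:
                redacted = deidentify(template.format(name=name))
                total += 1
                # Count a name as caught only if no part of it survives.
                if all(part not in redacted for part in name.split()):
                    caught += 1
        results[group] = caught / total
    return results

A large gap in this recall between groups would indicate the kind of demographic bias the paper assesses.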
“…Finally, our own work and many of the other researchers and developers cited in this study have relied on the same standard deidentification data sets released via the i2b2 or CEGS N-GRID shared tasks. For less common but still publicly available data sets, the developers of NeuroNER evaluated their system against the CoNLL 2003 shared task on named entity recognition [23], and Xiao et al [38] recently released a new smaller set of notes based on MIMIC-IV. We are not aware of any other study with a publicly released evaluation framework that includes all steps from initial corpus processing through plotting evaluation results.…”
Section: Discussion
Citation type: mentioning (confidence: 99%)
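
Evaluation against such gold-standard corpora ultimately reduces to comparing predicted PII spans with annotated ones. As a minimal illustration of that scoring step, assuming spans are given as (start, end) character offsets, the function below is a sketch rather than the API of any released framework:

# Span-level recall for de-identification: a gold PHI span counts as
# caught only if some predicted span fully covers it.

def span_recall(gold_spans, predicted_spans):
    """Fraction of gold spans fully covered by a predicted span."""
    covered = sum(
        1
        for g_start, g_end in gold_spans
        if any(p_start <= g_start and g_end <= p_end
               for p_start, p_end in predicted_spans)
    )
    return covered / len(gold_spans) if gold_spans else 1.0

# Example: the second prediction truncates its gold span, so recall is 0.5.
gold = [(8, 18), (42, 51)]
pred = [(8, 18), (45, 51)]
print(span_recall(gold, pred))  # 0.5

Requiring full coverage rather than partial overlap is the stricter convention: a name that is only half redacted still leaks PII.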