2019
DOI: 10.1016/j.csl.2019.06.001
Preserving privacy in speaker and speech characterisation

Cited by 99 publications (65 citation statements)
References 48 publications
“…But since inferred information can be misused in countless ways [17,18], robust data protection mechanisms are needed in order to reap the benefits of voice and speech analysis in a socially acceptable manner. At the technical level, many approaches have been developed for privacy protection at different stages of the data life cycle, including operations over encrypted data, differential privacy, data anonymization, secure multiparty computation, and privacy-preserving data processing on edge devices [46,72,106]. Various privacy safeguards have been specifically designed or adjusted for audio mining applications.…”
Section: Discussion
confidence: 99%
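The excerpt above lists differential privacy among the protection mechanisms for voice and speech analysis. As a generic illustration (not a method from the cited paper), the classic Laplace mechanism releases an aggregate statistic about a speech corpus with noise calibrated to the query's sensitivity and a chosen privacy budget epsilon; the function and parameter names below are hypothetical:

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a query answer with epsilon-differential privacy by
    adding Laplace noise with scale sensitivity / epsilon."""
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_value + noise

# e.g. privatise a count of recordings containing some keyword:
# one user changes the count by at most 1, so sensitivity = 1
private_count = laplace_mechanism(true_value=42, sensitivity=1.0, epsilon=0.5)
```

Smaller epsilon means a larger noise scale and stronger privacy, at the cost of a less accurate released statistic.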
“…These include voice binarization, hashing techniques for speech data, fully homomorphic inference systems, differential private learning, the computation of audio data in separate entrusted units, and speaker de-identification by voice transformation [72,73]. A comprehensive review of cryptography-based solutions for speech data is provided in [72]. Privacy risks can also be moderated by storing and processing only the audio data required for an application's functionality.…”
Section: Discussion
confidence: 99%
“…Learning privacy-preserving representations of speech data is relatively unexplored [46]. In [61], Nautsch et al. investigate the development of privacy-preserving technologies for protecting speech signals and highlight the importance of applying them to safeguard speaker and speech characterisation in recordings. Some recent works have sought to protect speaker identity [67], gender identity [33], and emotion [2].…”
Section: Related Work
confidence: 99%
“…This method incorporates a language model during the hybrid ASR system's training, to which user-level DP can be applied to obscure identity, at the cost of transcription performance. The speaker and speech characterisation process is protected in [23] by inserting noise during the learning process. However, due to strict black-box access and the lack of output probability information, our auditor's performance remains unknown when auditing an ASR model trained with DP.…”
Section: Threats To Auditors' Validity
confidence: 99%
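The excerpt above refers to protecting the learning process by inserting noise during training. One common way to realise this, shown here only as a generic sketch and not as the cited paper's exact method, is a DP-SGD-style update: clip each per-example gradient to a fixed norm, then add Gaussian noise calibrated to that clipping bound before applying the averaged update. All names below are illustrative:

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, clip_norm,
                noise_multiplier, lr, rng=None):
    """One differentially private SGD step: clip each per-example
    gradient to clip_norm, sum, add Gaussian noise scaled to the
    clipping bound, average, and take a gradient step."""
    rng = rng or np.random.default_rng()
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down any gradient whose norm exceeds the clipping bound.
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    batch_size = len(clipped)
    noisy_sum = (np.sum(clipped, axis=0)
                 + rng.normal(0.0, noise_multiplier * clip_norm,
                              size=weights.shape))
    return weights - lr * noisy_sum / batch_size
```

Clipping bounds each example's influence on the update, which is what lets the added Gaussian noise yield a user- or example-level privacy guarantee when tracked with a privacy accountant.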