2023
DOI: 10.3233/faia230614
Privacy-Enhanced AI Assistants Based on Dialogues and Case Similarity

Xiao Zhan,
Ştefan Sarkadi,
Jose Such

Abstract: Personal assistants (PAs) such as Amazon Alexa, Google Assistant and Apple Siri are now widespread. However, without adequate safeguards and controls their use may lead to privacy risks and violations. In this paper, we propose a model for privacy-enhancing PAs. The model is an interpretable AI architecture that combines 1) a dialogue mechanism for understanding the user and getting online feedback from them, with 2) a decision-making mechanism based on case-based reasoning considering both user and scenario s…
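The abstract describes the architecture only at a high level. As a rough, hypothetical illustration (not the authors' implementation), the Python sketch below pairs case-based retrieval over past privacy decisions with a dialogue fallback when no stored case is similar enough; the Case structure, the feature encoding, the similarity metric, and the threshold are all assumptions.

```python
from dataclasses import dataclass

# Illustrative sketch only: case-based reasoning over past privacy
# decisions, with a dialogue fallback for novel scenarios. Feature
# names, the similarity metric, and the threshold are assumptions,
# not the paper's actual design.

@dataclass
class Case:
    features: dict   # encoded scenario attributes (e.g. data sensitivity, recipient)
    decision: str    # privacy decision taken in that scenario ("allow" / "deny")

def similarity(a: dict, b: dict) -> float:
    """Inverse-distance similarity over shared numeric features (assumed metric)."""
    keys = a.keys() & b.keys()
    if not keys:
        return 0.0
    dist = sum((a[k] - b[k]) ** 2 for k in keys) ** 0.5
    return 1.0 / (1.0 + dist)

def decide(case_base: list, scenario: dict, threshold: float = 0.8) -> str:
    """Reuse the most similar past decision, or ask the user via dialogue."""
    if case_base:
        best = max(case_base, key=lambda c: similarity(c.features, scenario))
        if similarity(best.features, scenario) >= threshold:
            return best.decision
    # Dialogue step: get online feedback from the user and retain it as a new case.
    answer = input("Share this data in the current scenario? (allow/deny): ").strip().lower()
    case_base.append(Case(scenario, answer))
    return answer
```

In this reading, retaining the user's answer corresponds to learning new cases from dialogue, and the similarity threshold trades off acting autonomously against asking the user, which is the feedback loop the abstract alludes to.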

Cited by 1 publication (1 citation statement)
References 39 publications
“…Researchers are actively working on a range of solutions, such as mechanisms for providing users with meaningful verbal consent [31,162] and automated systems to identify policy breaches [52,207]. Additionally, significant efforts are being directed towards creating models that aid users in making privacy decisions consistent with their preferences [208,209] and crafting explanations that incorporate elements of trust and privacy, which are intended to diminish user concerns and strengthen trust in VAs [163,174]. Moreover, how users perceive the security and privacy of these underlying platforms is very important for HVAs, as users consider healthcare-related data more sensitive and prefer to share less health information with voice assistants [3]. As the information they share with them is often protected by additional regulation (such as HIPAA in the U.S.), the security of those HVAs is therefore highly dependent on the security of the voice assistants on top of which they are built.…”
Section: Implications Unique To Healthcare Voice Assistants