2021
DOI: 10.1145/3478101
Demystifying the Vetting Process of Voice-controlled Skills on Markets

Abstract: Smart speakers, such as Google Home and Amazon Echo, have become popular. They execute user voice commands via their built-in functionalities together with various third-party voice-controlled applications, called skills. Malicious skills have brought significant threats to users in terms of security and privacy. As a countermeasure, only skills passing a strict vetting process can be released onto markets. However, malicious skills have been reported to exist on markets, indicating that the vetting process …

Cited by 10 publications (3 citation statements) · References 40 publications
“…Work looking at voice assistant platforms suggests that there are also problems with the way that skills are certified and moderated. Current work on the certification process suggests that initial checks miss much of the skill conversation tree, and that skills can be crafted to minimise testing coverage by human and automated checks [50]. Similar work examining the efficacy of skill certification showed that policy-violating skills were approved for public use in over 60% of cases across the Alexa and Google Assistant skill stores [7].…”
Section: Alexa
confidence: 99%
“…However, it is apparent that testing of third-party applications' back-end code running on developers' servers is limited to dynamic black-box testing, as providers do not have direct access to this code. Thus, the security of these vetting processes is inadequate [2,10,17,21,43]. Aside from issues around the effectiveness of security vetting processes for third-party voice applications, developers can also modify the back-end code of the voice applications to implement malicious functionality after such applications have been vetted and published [10,27,39].…”
Section: Threat Model
confidence: 99%
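The post-certification threat described above can be illustrated with a minimal sketch. The handler, date, and responses are all hypothetical; the point is only that because the skill's back end runs on the developer's server and is vetted by black-box testing alone, the same endpoint can silently change behavior after approval:

```python
from datetime import date

# Hypothetical certification date for an illustrative skill.
CERTIFICATION_DATE = date(2021, 6, 1)

def skill_response(today: date) -> str:
    """Back-end handler on the developer's own server.

    During the vetting window it returns only benign content, so
    black-box testing observes nothing suspicious.
    """
    if today <= CERTIFICATION_DATE:
        return "Here is today's weather forecast."
    # After publication, the developer can redeploy the back end to
    # serve policy-violating content the vetting process never saw.
    return "Before your forecast, a word from our sponsor..."

print(skill_response(date(2021, 5, 30)))  # benign while being vetted
print(skill_response(date(2021, 7, 1)))   # changed after publication
```

Since the platform never re-inspects the server-side code, detecting such a switch requires continuous post-publication monitoring rather than one-time certification.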
“…For example, skills should not have advertisements or promote alcohol. Although these policies are checked during the skill certification process (which rejects a skill if it violates any of the pre-defined policies), prior work demonstrated the ease with which policy-violating skills can be certified [32,55]. Several recent works [39,41,42,53,59] developed tools to measure the policy compliance of skills on the Amazon Alexa platform through a dynamic analysis approach (i.e., by exploring the outcomes of skills).…”
Section: Introduction
confidence: 99%
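The dynamic-analysis approach mentioned above can be sketched in miniature: explore a skill's conversation tree and flag any reachable utterance that matches a content policy. The conversation tree, the banned-term list, and the traversal below are all hypothetical simplifications, not any cited tool's actual implementation:

```python
from collections import deque

# Toy conversation tree: state -> (skill utterance, next states).
CONVERSATION_TREE = {
    "start": ("Welcome! Ask for news or a drink tip.", ["news", "drink"]),
    "news": ("Here are today's headlines.", []),
    "drink": ("Try our sponsor's new whiskey!", []),  # policy-violating branch
}
# Hypothetical policy: no advertisements, no alcohol promotion.
BANNED_TERMS = ["whiskey", "sponsor"]

def find_violations(tree, banned):
    """Breadth-first exploration of reachable skill outputs."""
    violations, queue, seen = [], deque(["start"]), set()
    while queue:
        state = queue.popleft()
        if state in seen:
            continue
        seen.add(state)
        utterance, next_states = tree[state]
        if any(term in utterance.lower() for term in banned):
            violations.append((state, utterance))
        queue.extend(next_states)
    return violations

print(find_violations(CONVERSATION_TREE, BANNED_TERMS))  # flags the "drink" state
```

This also hints at the coverage problem raised in the first citation statement: a checker only catches violations on branches it actually reaches, so skills crafted to hide deep or conditional branches evade it.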