Proceedings of the 5th International Conference on Information Systems Security and Privacy 2019
DOI: 10.5220/0007309500750087
Nonsense Attacks on Google Assistant and Missense Attacks on Amazon Alexa

Cited by 14 publications (18 citation statements) · References: 0 publications
“…Carlini and Wagner also demonstrate that it is possible using the same technique to hide target transcriptions in music recordings. Unlike the attack using white noise presented by Carlini et al [14] and the attacks using nonsensical word sounds presented by Bispham et al [8], the attacks demonstrated by Carlini and Wagner are demonstrated in relation to a separate speech transcription system rather than a voice-controlled system as such. Also unlike the other two attacks, these attacks are shown to be effective only as audio file input to the target system rather than as over-the-air input via a microphone, and are white-box attacks requiring inside knowledge of the target system, rather than black-box attacks.…”
Section: Prior Work On The Security Of The Speech Interface
confidence: 82%
“…The attack demonstrated by Carlini et al was a black-box attack requiring no inside knowledge of the target system, and was shown to be effective when played over the air to the Google Now assistant on a smartphone. Bispham et al [8] demonstrate another type of attack in which malicious voice commands are masked in nonsensical word sounds that rhyme with words of a target command. The authors show that the nonsensical word sounds are recognised by Google Assistant as a valid command, whilst human listeners do not detect the target command in the nonsensical word sounds when hearing them out of context.…”
Section: Prior Work On The Security Of The Speech Interface
confidence: 99%
“…These adversarial utterances are crafted by embedding homophones of target command words in a different sense context. This paper is an extended version of an earlier paper that presented the results of a pilot experiment and of a proof-of-concept study [3]. The pilot experiment presented in the earlier paper represented initial results on attacks on speech recognition in Google Assistant using nonsensical word sounds.…”
Section: Specifically Our Experimental Work Demonstrates An Attack O…
confidence: 99%
“…English has around 44 phonemes. The line between phoneme combinations that carry meaning within a language and phoneme combinations that are meaningless is subject to change over time and place, as new words evolve and old words fall out of use (see Nowak and Krakauer [6]). The space of meaningful word sounds within a language at a given point in time is generally confirmed by the inclusion of words in an established reference work, such as, in the case of English, the Oxford English Dictionary.…”
Section: Description And Context
confidence: 99%