Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track, 2022
DOI: 10.18653/v1/2022.naacl-industry.17
FPI: Failure Point Isolation in Large-scale Conversational Assistants

Abstract: Large-scale conversational assistants such as Cortana, Alexa, Google Assistant and Siri process requests through a series of modules for wake word detection, speech recognition, language understanding and response generation. An error in one of these modules can cascade through the system. Given the large traffic volumes in these assistants, it is infeasible to manually analyze the data, identify requests with processing errors and isolate the source of error. We present a machine learning system to address th…

Cited by 6 publications (2 citation statements)
References 11 publications
“…Such similarities motivate us to draw parallels between the NLP robustness literature and HCI perspectives of system failures. By understanding how different types of failures affect trust in voice assistants overall, we can then try to pinpoint the underlying NLP components that are the root cause of the most critical failures that erode trust [30]. Technical solutions can then be leveraged to improve the robustness of the most critical parts of the system in order to increase user trust and long-term engagement most efficiently.…”
Section: NLP Approaches to Voice Assistant Failures
Mentioning confidence: 99%
“…the survey in (Hedderich et al., 2020)). A number of works identify utterances with processing errors through offline analysis (Sethi et al., 2021; Gupta et al., 2021; Chada et al., 2021; Khaziev et al., 2022). These approaches, however, still need human annotation in an active learning loop to improve production models.…”
Section: Related Work
Mentioning confidence: 99%