Avaya Conversational Intelligence™ (ACI) is an end-to-end, cloud-based solution for real-time Spoken Language Understanding for call centers. It combines large-vocabulary, real-time speech recognition, transcript refinement, and entity and intent recognition to convert live audio into a rich, actionable stream of structured events. These events can be further leveraged with a business rules engine, thus serving as a foundation for real-time supervision and assistance applications. After ingestion, calls are enriched with unsupervised keyword extraction, abstractive summarization, and business-defined attributes, enabling offline use cases such as business intelligence, topic mining, full-text search, quality assurance, and agent training. ACI comes with a pretrained, configurable library of hundreds of intents and a robust intent training environment that allows for efficient, cost-effective creation and customization of customer-specific intents.
Natural language processing of conversational speech requires the availability of high-quality transcripts. In this paper, we express our skepticism towards the recent reports of very low Word Error Rates (WERs) achieved by modern Automatic Speech Recognition (ASR) systems on benchmark datasets. We outline several problems with popular benchmarks and compare three state-of-the-art commercial ASR systems on an internal dataset of real-life spontaneous human conversations and on the public HUB'05 benchmark. We show that WERs are significantly higher than the best reported results. We formulate a set of guidelines which may aid in the creation of real-life, multi-domain datasets with high-quality annotations for training and testing of robust ASR systems.
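WER, the metric at the center of the abstract above, is the word-level edit distance between a reference transcript and an ASR hypothesis, divided by the reference length. The paper itself does not include code; the following is a minimal, generic sketch of the standard dynamic-programming computation.

```python
def wer(ref: str, hyp: str) -> float:
    """Word Error Rate: Levenshtein distance over word tokens,
    normalized by the number of reference words."""
    r, h = ref.split(), hyp.split()
    # d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i          # deletions
    for j in range(len(h) + 1):
        d[0][j] = j          # insertions
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = d[i - 1][j - 1] + (r[i - 1] != h[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(r)][len(h)] / len(r)

# One substitution ("sit") and one deletion ("the") over 6 reference words:
print(wer("the cat sat on the mat", "the cat sit on mat"))  # ≈ 0.333
```

Note that WER can exceed 1.0 when the hypothesis contains many insertions, which is one reason benchmark comparisons require care about normalization and text preprocessing.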
Recommender systems have become ubiquitous over the last decade, providing users with personalized search results, video streams, news excerpts, and purchasing hints. Human emotions are widely regarded as important predictors of behavior and preference. They are a crucial factor in decision making, but until recently, relatively little has been known about the effectiveness of using human emotions in personalizing real-world recommender systems. In this paper we introduce the Emotion Aware Recommender System (EARS), a large-scale system for recommending news items using users' self-assessed emotional reactions. Our original contributions include the formulation of a multi-dimensional model of emotions for news item recommendations, the introduction of affective item features that can be used to describe recommended items, the construction of affective similarity measures, and the validation of EARS on a large corpus of real-world Web traffic. We collect over 13,000,000 page views from 2,700,000 unique users of two news sites and gather over 160,000 emotional reactions to 85,000 news articles. We discover that incorporating pleasant emotions into collaborative filtering recommendations consistently outperforms all other algorithms. We also find that targeting recommendations by selected emotional reactions presents a promising direction for further research. As an additional contribution, we share our experiences in designing and developing a real-world emotion-based recommendation engine, pointing to various challenges posed by the practical aspects of deploying emotion-based recommenders.
A CSS-sprite packing problem is considered in this article. CSS-sprite is a technique of combining many pictures of a web page into one image for the purpose of reducing network transfer time. The CSS-sprite packing problem is formulated here as an optimization challenge. The significance of geometric packing, image compression, and communication performance is discussed. A mathematical model for constructing multiple sprites and optimizing load time is proposed. The impact of PNG-sprite aspect ratio on file size is studied experimentally. Benchmarking of real users' web-browser communication performance covers latency, bandwidth, the number of concurrent channels, and the speedup from parallel download. Existing software for building CSS-sprites is reviewed. A novel method, called Spritepack, is proposed and evaluated. Spritepack outperforms current software.
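To make the CSS-sprite technique concrete: the individual images are placed at known offsets inside one sprite sheet, and each original picture is then displayed by setting a negative `background-position` on its element. The sketch below is a deliberately naive vertical-strip packer, not Spritepack's method (which optimizes geometric packing, compression, and load time jointly); all class and file names are illustrative.

```python
def pack_vertical(images: dict) -> tuple:
    """Naive vertical-strip packing: stack images top to bottom and
    emit the CSS that addresses each one inside the combined sheet.
    `images` maps a CSS class name to its (width, height) in pixels."""
    css, y = [], 0
    sheet_w = max(w for w, _ in images.values())
    for name, (w, h) in images.items():
        # A negative vertical offset selects this image's slice.
        css.append(
            f".{name} {{ background: url(sprite.png) 0 {-y}px; "
            f"width: {w}px; height: {h}px; }}"
        )
        y += h
    return "\n".join(css), (sheet_w, y)

rules, (sheet_w, sheet_h) = pack_vertical({"icon_a": (16, 16), "icon_b": (32, 8)})
print(rules)
print(sheet_w, sheet_h)  # 32 24
```

Even this trivial layout illustrates the trade-off the article studies: the strip's aspect ratio and the ordering of images change the sprite's compressed size, while the number of sprites changes how many HTTP channels the browser can use in parallel.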