The Virtual Personal Assistant (VPA) is one of the most successful applications of Artificial Intelligence, giving humans a new way to have their work done by a machine. This paper gives a brief survey of the methodologies and concepts used in building a Virtual Personal Assistant (VPA) and applying it in different software applications. Speech recognition systems, also known as Automatic Speech Recognition (ASR), play an important role in virtual assistants by allowing the user to hold a conversation with the system. In this project, we build a Virtual Personal Assistant, ERAA, which includes the important features needed to assist with a user's needs. Keeping the user experience in mind, we aim to make it as appealing as other VPAs. Various natural language understanding (NLU) platforms, such as IBM Watson and Google Dialogflow, were studied for this purpose. In our project, Google Dialogflow is used as the NLU platform for implementing the software application, and the user interface is designed with the Flutter framework. All the models used for this VPA are designed to work as efficiently as possible, and some of the common features available in most VPAs are included. ERAA is implemented as a smartphone application; as future work, we aim to implement it in a desktop environment. This paper presents the methodologies used to develop the application, reports the outcomes of the features developed within it, and shows how available natural language understanding platforms can reduce the burden on the user, leading to a robust software application.
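As a minimal sketch of how the assistant's client could send a user utterance to Google Dialogflow for intent detection, the snippet below uses the standard google-cloud-dialogflow Python client; the project ID, session ID, and agent intents are placeholders for illustration and are not values from the actual ERAA application.

```python
from google.cloud import dialogflow


def detect_intent(project_id: str, session_id: str, text: str,
                  language_code: str = "en"):
    """Send one text query to a Dialogflow ES agent and return the result.

    project_id and session_id are hypothetical placeholders; in practice they
    come from the Google Cloud project hosting the agent and the app's user
    session.
    """
    session_client = dialogflow.SessionsClient()
    session = session_client.session_path(project_id, session_id)

    # Wrap the raw user utterance in the request types Dialogflow expects.
    text_input = dialogflow.TextInput(text=text, language_code=language_code)
    query_input = dialogflow.QueryInput(text=text_input)

    response = session_client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    return response.query_result


if __name__ == "__main__":
    result = detect_intent("my-agent-project", "demo-session-1",
                           "What is the weather today?")
    # The matched intent name and the agent's reply text can be surfaced
    # back to the smartphone UI (e.g. a Flutter front end).
    print(result.intent.display_name, "->", result.fulfillment_text)
```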
One of the major issues faced by blind people is detecting and recognizing objects. The objective of this project is to help blind people, for whom mobility is always a great problem: moving through an unknown environment is nearly impossible without external help, because they have no proper idea of their surroundings. We are therefore developing an electronic eye that informs them about their surroundings and guides them while travelling. The system is based on image processing with a deep neural network (DNN) that labels objects using the OpenCV and TensorFlow libraries, converts the labelled text into speech, and produces audio output to make the blind person aware of the object in front of him or her. The scope of the system also includes measuring the distance of the object from the person and reporting it. Object detection is performed using image processing and machine learning to locate objects in the scene. We also aim to extend the system so that the hearing sense can be used to understand objects in real time, and to track blind people in a real-time environment for security purposes.
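A minimal sketch of the described pipeline (detect objects with OpenCV's DNN module, then speak the labels) is given below; the MobileNet-SSD model files, the class list, and the pyttsx3 speech engine are assumptions made for illustration, since the abstract names only OpenCV and TensorFlow without specifying the exact network or text-to-speech tool.

```python
import cv2        # OpenCV DNN module for inference
import pyttsx3    # offline text-to-speech (an assumed choice)

# Hypothetical pre-trained MobileNet-SSD model files and its class labels.
PROTOTXT = "MobileNetSSD_deploy.prototxt"
WEIGHTS = "MobileNetSSD_deploy.caffemodel"
CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle",
           "bus", "car", "cat", "chair", "cow", "diningtable", "dog",
           "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe(PROTOTXT, WEIGHTS)
tts = pyttsx3.init()

cap = cv2.VideoCapture(0)  # camera worn or held by the user
while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Resize to the network's 300x300 input and run a forward pass.
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()

    for i in range(detections.shape[2]):
        confidence = detections[0, 0, i, 2]
        if confidence > 0.5:
            label = CLASSES[int(detections[0, 0, i, 1])]
            # Announce the detected object; a distance estimate (e.g. from a
            # known object width and camera focal length) could be added here.
            tts.say(f"{label} ahead")
            tts.runAndWait()

cap.release()
```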