The subject of the study is the organization of virtual reality interfaces. The author examines in detail such aspects as user immersion in the virtual environment, the various ways and scenarios of user interaction with virtual reality, user safety in the virtual environment, and the phenomenon of cyberbullying and ways to prevent it. The study also considers voice control as an alternative to manual control. Particular attention is paid to the classification of virtual reality interfaces, which distinguishes and examines in detail sensory interfaces, interfaces based on user motor skills, sensorimotor interfaces, and interfaces for modeling and developing virtual reality. The main conclusion of the study is that a virtual reality interface should be designed with user ergonomics in mind, to prevent muscle fatigue and cybersickness. In addition, user safety is essential when designing virtual environment interfaces: using a virtual reality interface must not lead to injury. Creating an ergonomic and safe virtual reality interface often requires a combination of different interface types, through which the user gains access to an alternative control method or improved navigation. The author's particular contribution to the study of the topic is the classification of virtual reality interfaces.
The subject of this research is the key methods for designing the architecture of information aggregators, methods for increasing the scalability and effectiveness of such systems, and methods for reducing the delay between the publication of new content by a source and the appearance of its copy in the aggregator. In this research, a content aggregator means a distributed, high-load information system that automatically collects information from various sources, processes it, and displays it on a dedicated website or in a mobile application. Particular attention is given to the basic principles of content aggregation: the key stages of aggregation and the criteria for data sampling, the automation of aggregation processes, content copying strategies, and content aggregation approaches. The author's contribution is a detailed description of web crawling and fuzzy duplicate detection systems. The main result of the research is a high-level architecture for a content aggregation system. Recommendations are given on the choice of architectural styles and of specialized software, namely distributed database management systems and message brokers. The presented architecture aims to provide high availability, scalability under high query volumes, and big-data performance. To increase the performance of the proposed system, caching, load balancers, and message queues should be used extensively. The storage layer of the content aggregation system should use replication and partitioning to improve availability, latency, and scalability. As for architectural styles, microservice, event-driven, and service-based architectures are the most suitable approaches for such a system.
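The fuzzy duplicate detection mentioned above can be illustrated with a minimal sketch (in Python rather than any implementation from the research; the shingling approach, function names, and the 0.8 threshold are assumptions chosen for illustration): near-duplicate copies of an article collected from different sources are caught by comparing word k-gram sets.

```python
def shingles(text, k=3):
    """Split normalized text into overlapping word k-grams (shingles)."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Jaccard similarity of two shingle sets: |A ∩ B| / |A ∪ B|."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def is_fuzzy_duplicate(doc, corpus, threshold=0.8):
    """Flag doc as a near-duplicate if it overlaps any stored item enough."""
    s = shingles(doc)
    return any(jaccard(s, shingles(other)) >= threshold for other in corpus)
```

In a production aggregator the pairwise comparison would be replaced by a sub-linear scheme such as MinHash/LSH, but the similarity criterion stays the same.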
The subject of the study is the architecture of an RSS feed aggregation system. The author considers in detail such aspects as choosing the right data aggregation strategy, an approach to scaling a distributed system, and the design and implementation of the system's main modules: an aggregation strategy definition module, a content aggregation module, a data processing module, and a search module. Particular attention is given to a detailed description of the libraries and frameworks chosen for the implementation of the system, as well as of the databases. The main part of the system is implemented in the C# programming language (.NET Core) and is cross-platform. The study describes interaction with the main data stores used in the development of the aggregation system, PostgreSQL and Elasticsearch. The main conclusion of the study is that, before developing an aggregation system, it is necessary to analyze the publication activity of the data sources; on that basis an appropriate strategy for updating the search index can be formed, saving a significant amount of computing resources. Content aggregation systems such as the one considered in this study should be distributed and built on event-driven and microservice architectures. This approach makes the system resistant to high loads and failures, as well as easily extensible. The author's particular contribution to the study of the topic is a detailed description of the high-level architecture of an RSS aggregator designed to process 50,000 feeds.
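The idea of deriving an update strategy from publication activity can be sketched as follows (a minimal illustration in Python rather than the system's actual C#; the function name and the halve-the-average-gap heuristic are assumptions): feeds that publish frequently are polled often, while quiet feeds are polled rarely, so crawler capacity and index updates are not wasted.

```python
from datetime import datetime

def polling_interval(publish_times, min_minutes=5, max_minutes=1440):
    """Derive a feed's polling interval (minutes) from its publication history.

    Poll at roughly half the average gap between recent publications,
    clamped to [min_minutes, max_minutes].
    """
    if len(publish_times) < 2:
        return max_minutes  # no history yet: poll conservatively
    times = sorted(publish_times)
    gaps = [(b - a).total_seconds() / 60 for a, b in zip(times, times[1:])]
    avg_gap = sum(gaps) / len(gaps)
    return int(min(max(avg_gap / 2, min_minutes), max_minutes))
```

Across 50,000 feeds, a scheduler built on such a function concentrates crawling on the small fraction of sources that produce most of the new content.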
The subject of this research is the architecture of an expert system for a distributed content aggregation system, the main purpose of which is the categorization of aggregated data. The author examines the advantages and disadvantages of expert systems, the tools for their development, the classification of expert systems, and the application of expert systems to data categorization. Special attention is given to the architecture of the proposed expert system, which consists of a spam filter, a component that determines the main category for each type of processed content, and components that determine subcategories, one of which is based on domain rules while the other uses machine learning methods and complements the first. The conclusion is that an expert system can be applied effectively to data categorization problems in content aggregation systems. The author establishes that hybrid solutions, which combine a knowledge-base-and-rules approach with neural networks, reduce the cost of the expert system. The novelty of this research lies in the proposed system architecture, which is easily extensible and adaptable to workloads by scaling existing modules or adding new ones. The proposed spam detection module adapts a behavioral algorithm for detecting spam in e-mail; the proposed module for determining the key content categories uses two types of algorithms, fuzzy fingerprints and Twitter topic fuzzy fingerprints, which were initially applied to categorizing messages in the social network Twitter. The module that determines subcategories from keywords works in conjunction with a thesaurus database. The final classifier uses the support vector machine (SVM) algorithm for the final determination of subcategories.
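The hybrid rules-plus-ML arrangement described above can be sketched in a few lines (an illustrative Python stub, not the system's actual components; the function signature and the keyword-rule format are assumptions): the cheap, explainable rule pass runs first, and the machine-learning classifier handles only the items no rule covers.

```python
def categorize(text, rules, ml_classifier):
    """Hybrid subcategory assignment: domain rules first, ML as a complement.

    `rules` maps a keyword to a subcategory; `ml_classifier` is any
    callable (e.g. a trained SVM's predict wrapper) used as a fallback.
    Returns (subcategory, source) so the decision stays auditable.
    """
    lowered = text.lower()
    for keyword, subcategory in rules.items():
        if keyword in lowered:
            return subcategory, "rule"
    return ml_classifier(text), "ml"
```

Because the rule pass resolves the common cases, the expensive model is invoked only for ambiguous content, which is one way such hybrids reduce the cost of the overall expert system.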
The subject of the research is methods for building the user interface of university websites based on their intended purpose, the needs of the user audience, and user limitations, including sensorimotor and cognitive-psychological limitations. As a starting point for studying the target audience and compliance with accessibility standards, an analysis of eight university websites is carried out based on data from open sources. The main violations that, to varying degrees, prevent the use of a university website are considered, as well as the best-known and most frequently used approaches in interface design that make interfaces more convenient without overloading the user's short-term memory and without causing premature fatigue. As a result of the research, the basic requirements for the design of a university website interface are formulated. According to the main conclusion of this study, in order to adapt a university website to the limitations of users' capabilities, it is necessary to follow the main usability and accessibility standards considered, such as GOST R 52872-2019, WCAG 2.1, and GOST R ISO 9241-20-2014, and to take into account the legislative requirements that affect the structure of the site's sections and its accessibility for people with disabilities. It is also necessary to adhere to such principles of interface organization and information presentation as Hick's Law, the Gestalt principles, Miller's Law, Jakob's Law, and usability heuristics. The author's particular contribution to the research is the analysis that checked eight Russian university websites for compliance with accessibility standards. This analysis showed that even the visually impaired versions of the reviewed sites do not meet accessibility standards, which makes it difficult for people with disabilities to access information and underscores the importance of the study.
The novelty of the research lies in the formulation of the basic requirements for the user interface of university websites. The results of the study can be used further in the construction of such information systems.
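Of the principles listed above, Hick's Law is the one with a simple quantitative form: decision time grows logarithmically with the number of equally likely choices, T = a + b * log2(n + 1). A tiny sketch (the constants a and b here are illustrative placeholders, not values measured in the study) shows why trimming a cluttered university-site menu shortens the time a visitor needs to choose:

```python
import math

def hicks_law_time(n_choices, a=0.2, b=0.15):
    """Estimated decision time in seconds for n equally likely choices.

    Hick's Law: T = a + b * log2(n + 1), where a is a base reaction
    time and b scales with the per-choice processing cost.
    """
    return a + b * math.log2(n_choices + 1)
```

For example, the model predicts that a menu of 7 items takes measurably longer to scan than a menu of 3, supporting the recommendation to group navigation into a few clear sections.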