We propose a perceptually based system for pattern retrieval and matching. The central idea is that similarity judgment has to be modeled along perceptual dimensions. Hence, we detect the basic visual categories that people use in their judgments of similarity and design a computational model that accepts patterns as input and, depending on the query, produces a set of choices that follows human behavior in pattern matching. There are two major research aspects to our work. The first addresses how humans perceive and measure similarity within the domain of color patterns. To understand and describe this mechanism, we performed a subjective experiment that yielded five perceptual criteria used in comparisons between color patterns (the vocabulary), as well as a set of rules governing the use of these criteria in similarity judgment (the grammar). The second research aspect is the implementation of the perceptual criteria and rules in an image retrieval system. Following the processing typical of human vision, we design a system to: (1) extract the perceptual features of the vocabulary and (2) compare patterns according to the grammar rules. Our modeling of the human perception of color patterns is new, starting with a new color codebook design, a compact color representation, and a texture description based on multi-scale edge distributions along different directions. Moreover, we propose new color and texture distance functions that correlate with human performance. The performance of the system is illustrated with numerous examples from image databases drawn from different application domains.
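To give a concrete sense of the kind of features and distances such a system might compute, the following is a minimal sketch, not the authors' actual implementation: it builds a coarse color histogram and a multi-scale, multi-orientation edge histogram for a pattern and combines them with a weighted distance. The bin counts, scales, weights, and the use of OpenCV/NumPy are illustrative assumptions.

```python
# Hypothetical sketch: coarse color + multi-scale oriented-edge features
# and a combined distance. Bin counts, scales, and weights are illustrative
# assumptions, not the paper's actual parameters.
import numpy as np
import cv2

def color_histogram(img_bgr, bins=8):
    """Coarse color histogram (bins per channel), L1-normalized."""
    hist = cv2.calcHist([img_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256] * 3)
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)

def edge_orientation_histogram(img_bgr, scales=(1, 2, 4), n_orient=8):
    """Histograms of gradient orientations at several Gaussian scales."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    feats = []
    for s in scales:
        blurred = cv2.GaussianBlur(gray, (0, 0), s)
        gx = cv2.Sobel(blurred, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(blurred, cv2.CV_32F, 0, 1)
        mag = np.hypot(gx, gy)
        ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientations in [0, pi)
        hist, _ = np.histogram(ang, bins=n_orient,
                               range=(0, np.pi), weights=mag)
        feats.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(feats)

def pattern_distance(img_a, img_b, w_color=0.5, w_texture=0.5):
    """Weighted L1 distance over color and texture features."""
    d_color = np.abs(color_histogram(img_a) - color_histogram(img_b)).sum()
    d_tex = np.abs(edge_orientation_histogram(img_a)
                   - edge_orientation_histogram(img_b)).sum()
    return w_color * d_color + w_texture * d_tex
```

In a retrieval setting, the query pattern would be compared against each database pattern with `pattern_distance` and the results ranked by increasing distance; the weights stand in for the grammar rules that determine which perceptual criterion dominates a given comparison.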
As part of an ongoing project, our team is actively developing new synthetic assistant (SA) technologies to assist in training combat medics and medical first responders. It is critical that medical first responders be well trained to deal with emergencies effectively, which requires real-time monitoring of and feedback to each trainee. We therefore introduce a voice-based SA to augment the training process of medical first responders and enhance their performance in the field. The potential benefits of SAs include reduced training costs and improved monitoring mechanisms. Despite the growing use of voice-based personal assistants (PAs) in day-to-day life, their human-factors effects are rarely studied. This paper therefore focuses on a performance analysis of the developed voice-based SA in emergency care provider training for a selected emergency treatment scenario. The research follows a design science methodology: we describe the architecture and development of the voice-based SA and present working results. Empirical testing was conducted as a user study on two groups, one trained with conventional methods and the other with the help of the SA, and the results were examined with statistical analysis tools. The statistical results demonstrate the gain in training efficacy and in the performance of medical responders supported by the SA. The paper also discusses the accuracy and time of task execution (t) and concludes with guidelines for resolving the identified problems.
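A typical analysis for this kind of two-group user study is an independent-samples comparison of task performance. The following is a minimal sketch under that assumption; the completion-time values, group sizes, and the use of SciPy's Welch t-test are illustrative placeholders, not the study's actual data or analysis code.

```python
# Hypothetical sketch of a two-group comparison for a user study.
# The numbers below are made-up placeholders, not the study's data.
import numpy as np
from scipy import stats

# Task completion times (seconds) per trainee, one array per group.
conventional = np.array([412.0, 398.5, 450.2, 430.1, 405.7, 441.3])
sa_assisted  = np.array([365.4, 372.1, 340.8, 358.9, 377.5, 349.2])

# Welch's t-test (does not assume equal variances between groups).
t_stat, p_value = stats.ttest_ind(conventional, sa_assisted, equal_var=False)

print(f"mean (conventional) = {conventional.mean():.1f} s")
print(f"mean (SA-assisted)  = {sa_assisted.mean():.1f} s")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```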
We determine the basic categories and the hierarchy of rules used by humans in judging the similarity and matching of color patterns. The categories are: (1) overall color; (2) directionality and orientation; (3) regularity and placement; (4) color purity; and (5) complexity and heaviness. These categories form the pattern vocabulary, which is governed by the grammar rules. Both the vocabulary and the grammar were obtained from a subjective experiment. The experimental data were interpreted using multidimensional scaling, which yielded the vocabulary, and hierarchical clustering analysis, which yielded the grammar rules. Finally, we give a short overview of existing techniques that can be used to extract and measure the elements of the vocabulary.
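To make the analysis pipeline concrete, here is a minimal sketch of how subjective dissimilarity ratings can be processed: multidimensional scaling recovers perceptual dimensions (the vocabulary), and hierarchical clustering recovers grouping structure (the grammar). The matrix values, number of dimensions, cluster count, and the use of scikit-learn/SciPy are illustrative assumptions, not the experiment's actual data or settings.

```python
# Hypothetical sketch: MDS + hierarchical clustering on subjective
# pairwise dissimilarity ratings. Values and parameters are illustrative.
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Symmetric pairwise dissimilarity matrix for 5 example patterns
# (e.g., averaged over subjects); zeros on the diagonal.
D = np.array([
    [0.0, 0.2, 0.7, 0.8, 0.6],
    [0.2, 0.0, 0.6, 0.7, 0.5],
    [0.7, 0.6, 0.0, 0.3, 0.4],
    [0.8, 0.7, 0.3, 0.0, 0.2],
    [0.6, 0.5, 0.4, 0.2, 0.0],
])

# Multidimensional scaling: embed patterns so that Euclidean distances
# approximate the judged dissimilarities; axes suggest perceptual dimensions.
mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)

# Hierarchical clustering on the same dissimilarities; the dendrogram
# levels suggest which criteria dominate at which stage of judgment.
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")

print("MDS coordinates:\n", np.round(coords, 2))
print("cluster labels:", labels)
```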