Recognising the same human reaction (for example, strong excitement) in different contexts requires taking the customary behaviours of those contexts into account; e.g. a happy sport audience may cheer for a long time, while a happy theatre audience may produce only short bursts of laughter so as not to interrupt the performance. Tailoring recognition algorithms to contexts can be achieved by building either a context-specific or a generic system. The former is trained individually for each context to recognise its set of characteristic responses, whereas the latter adapts to a context via a significantly more lightweight modification of parameters. This paper follows the latter approach and proposes a simple modification of a hidden Markov model (HMM) classifier that lets end users adapt the generic system to a context, or to an annotator's personal perception, by labelling a fairly small number of data samples from each context. For better adaptability to the limited number of user annotations, the proposed semi-supervised HMM classifier employs the maximum posterior marginal decision rule rather than the more conventional maximum a posteriori rule. The proposed user- and context-adaptable semi-supervised HMM classifier was tested on recognising the excitement of a show audience in three contexts (a concert hall, a circus, and a sport event) that differ in how excitement is expressed. In our experiments the proposed classifier recognised reactions of a non-neutral audience with 10% higher accuracy than conventional HMM and support vector machine based classifiers.
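The two decision rules contrasted in the abstract can be illustrated on a toy HMM. The sketch below is a minimal, self-contained comparison of the maximum a posteriori (MAP, i.e. Viterbi) rule, which picks the single jointly most probable state sequence, against the maximum posterior marginal (MPM) rule, which picks the most probable state at each time step via the forward–backward algorithm. All model parameters and the observation sequence are illustrative assumptions, not the paper's actual model.

```python
# Minimal sketch contrasting MAP (Viterbi) and maximum posterior marginal (MPM)
# decoding for a tiny two-state HMM. All numbers are illustrative assumptions.

pi = [0.6, 0.4]                      # initial state probabilities
A = [[0.7, 0.3], [0.4, 0.6]]         # transitions: A[i][j] = P(x_t = j | x_{t-1} = i)
B = [[0.9, 0.1], [0.2, 0.8]]         # emissions:   B[s][o] = P(o_t = o | x_t = s)
obs = [0, 1, 0]                      # observed symbol sequence

n_states, T = len(pi), len(obs)

# Forward pass: alpha[t][s] = P(o_1..o_t, x_t = s)
alpha = [[pi[s] * B[s][obs[0]] for s in range(n_states)]]
for t in range(1, T):
    alpha.append([sum(alpha[t - 1][i] * A[i][s] for i in range(n_states)) * B[s][obs[t]]
                  for s in range(n_states)])

# Backward pass: beta[t][s] = P(o_{t+1}..o_T | x_t = s)
beta = [[1.0] * n_states for _ in range(T)]
for t in range(T - 2, -1, -1):
    for s in range(n_states):
        beta[t][s] = sum(A[s][j] * B[j][obs[t + 1]] * beta[t + 1][j]
                         for j in range(n_states))

# MPM rule: at each t independently, maximise the posterior marginal
# P(x_t = s | o_1..o_T), which is proportional to alpha[t][s] * beta[t][s].
mpm_path = [max(range(n_states), key=lambda s: alpha[t][s] * beta[t][s])
            for t in range(T)]

# MAP rule (Viterbi): maximise the joint probability of the whole sequence.
delta = [[pi[s] * B[s][obs[0]] for s in range(n_states)]]
psi = [[0] * n_states]
for t in range(1, T):
    delta.append([0.0] * n_states)
    psi.append([0] * n_states)
    for s in range(n_states):
        best = max(range(n_states), key=lambda i: delta[t - 1][i] * A[i][s])
        psi[t][s] = best
        delta[t][s] = delta[t - 1][best] * A[best][s] * B[s][obs[t]]
map_path = [max(range(n_states), key=lambda s: delta[T - 1][s])]
for t in range(T - 1, 0, -1):
    map_path.insert(0, psi[t][map_path[0]])

print("MPM path:", mpm_path)
print("MAP path:", map_path)
```

On this tiny example the two rules agree, but in general they can differ: MPM minimises the expected number of per-step state errors, which is why it can be better suited to sparse, per-sample user annotations than a rule optimised for whole-sequence correctness.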
A male infant with a mosaic interstitial deletion of 15q is described. He had some dysmorphic features and complex congenital right-sided heart disease.
State-of-the-art automatic analysis tools for personal audio content management are discussed in this paper. Our main target is to create a system of several co-operating management tools for an audio database that improve each other's results. A Bayesian network based audio classification algorithm provides classification into four main audio classes (silence, speech, music, and noise) and serves as the first step for the subsequent analysis tools. For speech analysis we propose an improved Bayesian information criterion (BIC) based speaker segmentation and clustering algorithm, together with a combined gender and emotion detection algorithm utilizing prosodic features. For the other main classes it is often hard to devise any general, well-functioning pre-categorization that would fit the unforeseeable types of user-recorded data. To compensate for the absence of analysis tools for these classes we propose an efficient audio similarity measure and a query-by-example algorithm with database clustering capabilities. The experimental results show that the combined use of the algorithms is feasible in practice.
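The BIC-based speaker segmentation mentioned above hypothesises a speaker change point inside a window and compares modelling the window with one Gaussian against two. A positive ΔBIC favours the two-Gaussian (change-point) hypothesis. The sketch below is a stdlib-only, one-dimensional illustration of that test; real systems operate on MFCC vectors with full covariances, and the feature values and the `lam` penalty weight here are illustrative assumptions, not the paper's configuration.

```python
# Minimal, stdlib-only sketch of BIC-based speaker change detection on a
# 1-D feature stream. Illustrative only: practical systems use MFCC vectors
# and full covariance matrices.
import math
import statistics

def delta_bic(window, split, lam=1.0):
    """ΔBIC for hypothesising a speaker change at index `split` in `window`.

    Positive ΔBIC favours two separate Gaussians (a change point);
    negative favours one Gaussian (no change). With d = 1 feature
    dimension, each Gaussian has 2 free parameters (mean, variance).
    """
    left, right = window[:split], window[split:]
    n, n1, n2 = len(window), len(left), len(right)
    v = statistics.pvariance(window)
    v1 = statistics.pvariance(left)
    v2 = statistics.pvariance(right)
    # Log-likelihood gain of the two-model hypothesis ...
    gain = 0.5 * (n * math.log(v) - n1 * math.log(v1) - n2 * math.log(v2))
    # ... minus the BIC penalty for the extra Gaussian's 2 parameters.
    penalty = lam * 0.5 * 2 * math.log(n)
    return gain - penalty

# Synthetic stream: "speaker A" around 0.0, then "speaker B" around 5.0.
stream = [0.1, -0.2, 0.05, 0.3, -0.1, 5.2, 4.9, 5.1, 4.8, 5.3]

# Slide a candidate split through the window and keep the best ΔBIC.
best_split = max(range(2, len(stream) - 1), key=lambda s: delta_bic(stream, s))
print("detected change at index", best_split,
      "with ΔBIC =", round(delta_bic(stream, best_split), 2))
```

In a full segmentation pass, this test would be slid over the audio with a growing window, and segments judged to belong to the same speaker would later be merged by the clustering stage.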