Local space-time features capture local events in video and can be adapted to the size, the frequency and the velocity of moving patterns. In this paper we demonstrate how such features can be used for recognizing complex motion patterns. We construct video representations in terms of local space-time features and integrate such representations with SVM classification schemes for recognition. For the purpose of evaluation, we introduce a new video database containing 2391 sequences of six human actions performed by 25 people in four different scenarios. The presented results of action recognition justify the proposed method and demonstrate its advantage compared to other related approaches for action recognition.
In this paper we address the problem of motion recognition using event-based local motion representations. We assume that similar patterns of motion contain similar events with consistent motion across image sequences. Under this assumption, we formulate motion recognition as a matching of corresponding events in image sequences. To enable the matching, we present and evaluate a set of motion descriptors that exploit the spatial and the temporal coherence of motion measurements between corresponding events in image sequences. As motion measurements may depend on the relative motion of the camera, we also present a mechanism for local velocity adaptation of events and evaluate its influence when recognizing image sequences subjected to different camera motions. When recognizing motion, we compare the performance of a nearest neighbor (NN) classifier with that of a support vector machine (SVM). We also compare event-based motion representations to motion representations by global histograms. An experimental evaluation on a large video database with human actions demonstrates the advantage of the proposed scheme for event-based motion representation in combination with SVM classification. The advantage of event-based representations and velocity adaptation is particularly pronounced when recognizing human actions in unconstrained scenes with complex and non-stationary backgrounds.
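The NN baseline in the comparison above can be sketched with plain numpy. The histogram layout, the action labels, and the chi-square dissimilarity are illustrative assumptions, not the paper's actual descriptors: event-based representations are often quantized into histograms of local feature types and then compared with a histogram distance.

```python
import numpy as np

def chi2_distance(h1, h2, eps=1e-10):
    # Chi-square distance, a common choice for comparing feature histograms.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def nn_classify(query, train_hists, train_labels):
    # Nearest-neighbor rule: assign the label of the closest training histogram.
    dists = [chi2_distance(query, h) for h in train_hists]
    return train_labels[int(np.argmin(dists))]

# Hypothetical toy data: normalized histograms of local event types for two
# actions; real descriptors are built from quantized space-time features.
train_hists = [np.array([0.8, 0.1, 0.1]),   # "walking"-like distribution
               np.array([0.1, 0.1, 0.8])]   # "handwaving"-like distribution
train_labels = ["walking", "handwaving"]
query = np.array([0.7, 0.2, 0.1])
label = nn_classify(query, train_hists, train_labels)
```

An SVM classifier can reuse the same histogram representation by plugging a kernel derived from such a distance into a standard SVM implementation.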
Adaptive filters for echo cancellation generally need update control schemes to avoid divergence in the case of significant disturbances. The two-path algorithm avoids the problem of unnecessary halting of the adaptive filter when the control scheme gives an erroneous output. Versions of this algorithm have previously been presented for echo cancellation. This paper presents a transfer logic that improves the convergence speed of the two-path algorithm for acoustic echo cancellation while retaining its robustness. Results from simulations show improved performance, and a fixed-point DSP implementation verifies the performance in real time.
Parallel adaptive filters have been proposed for echo cancellation to solve the deadlock problem, which occurs when the echo is detected as near-end speech after a severe echo-path change, causing the updating of the adaptive filter to halt. To control the parallel filters and monitor their performance, estimates of the filter deviation (i.e., the squared norm of the filter mismatch vector) are typically used. This paper presents a modification of a filter mismatch estimator. The proposed modification requires slightly more computational resources than the original measure, but provides a significant improvement in terms of robustness during double-talk. This is shown both analytically and through simulations.
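The two-path idea behind both papers above can be illustrated with a minimal numpy sketch: a background filter adapts continuously while a foreground filter does the actual cancellation, and coefficients are transferred only when the background filter demonstrably cancels more echo. The NLMS update, the smoothed-power transfer test, and the noiseless echo model are simplifying assumptions for illustration; the papers' transfer logic and deviation estimators are more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a short "true" echo path and a white-noise far-end signal.
h_true = np.array([0.5, -0.3, 0.2, 0.1])
N = len(h_true)
x = rng.standard_normal(4000)

w_bg = np.zeros(N)          # background (adaptive) filter
w_fg = np.zeros(N)          # foreground (fixed) filter used for cancellation
p_bg = p_fg = 1.0           # smoothed residual powers for the transfer test
alpha, mu, eps = 0.99, 0.5, 1e-8

for n in range(N, len(x)):
    x_buf = x[n - N + 1:n + 1][::-1]        # most recent samples first
    d = np.dot(h_true, x_buf)               # microphone signal (echo only)

    e_fg = d - np.dot(w_fg, x_buf)          # foreground residual
    e_bg = d - np.dot(w_bg, x_buf)          # background residual
    # NLMS update of the background filter only.
    w_bg = w_bg + mu * e_bg * x_buf / (np.dot(x_buf, x_buf) + eps)

    # Smoothed residual powers; transfer the coefficients when the background
    # filter cancels the echo clearly better than the foreground filter.
    p_fg = alpha * p_fg + (1 - alpha) * e_fg ** 2
    p_bg = alpha * p_bg + (1 - alpha) * e_bg ** 2
    if p_bg < 0.5 * p_fg:
        w_fg = w_bg.copy()

# Filter deviation: squared norm of the filter mismatch vector.
deviation = np.sum((w_fg - h_true) ** 2)
```

In practice `h_true` is unknown, which is exactly why the deviation must be estimated from observable signals; the transfer test here stands in for such an estimate.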