The Visual Object Tracking challenge VOT2018 is the sixth annual tracker benchmarking activity organized by the VOT initiative. Results of over eighty trackers are presented; many are state-of-the-art trackers published at major computer vision conferences or in journals in recent years. The evaluation included the standard VOT and other popular methodologies for short-term tracking analysis and a "real-time" experiment simulating a situation where a tracker processes images as if provided by a continuously running sensor. A long-term tracking sub-challenge has been introduced to the set of standard VOT sub-challenges. The new sub-challenge focuses on long-term tracking properties, namely coping with target disappearance and reappearance. A new dataset has been compiled, and a performance evaluation methodology that focuses on long-term tracking capabilities has been adopted. The VOT toolkit has been updated to support both the standard short-term and the new long-term tracking sub-challenges. Performance of the tested trackers typically far exceeds standard baselines. The source code for most of the trackers is publicly available from the VOT page. The dataset, the evaluation kit and the results are publicly available at the challenge website.
The Visual Object Tracking challenge 2015, VOT2015, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 62 trackers are presented. The number of tested trackers makes VOT2015 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2015 challenge that go beyond its VOT2014 predecessor are: (i) a new VOT2015 dataset twice as large as that of VOT2014, with full annotation of targets by rotated bounding boxes and per-frame attributes, and (ii) an extension of the VOT2014 evaluation methodology by the introduction of a new performance measure. The dataset, the evaluation kit as well as the results are publicly available at the challenge website.
A PAR-1–mediated bias in microtubule organization in the Drosophila oocyte underlies posterior-directed mRNA transport.
In this paper, we study a general optimization model, which covers a large class of existing models for many applications in imaging sciences. To solve the resulting possibly nonconvex, nonsmooth and non-Lipschitz optimization problem, we adapt the alternating direction method of multipliers (ADMM) with a general dual step-size to solve a reformulation that contains three blocks of variables, and analyze its convergence. We show that for any dual step-size less than the golden ratio, there exists a computable threshold such that if the penalty parameter is chosen above this threshold and the sequence thus generated by our ADMM is bounded, then any cluster point of the sequence gives a stationary point of the nonconvex optimization problem. We achieve this via a potential function specifically constructed for our ADMM. Moreover, we establish global convergence of the whole sequence if, in addition, this special potential function is a Kurdyka-Lojasiewicz function. Furthermore, we present a simple strategy for initializing the algorithm to guarantee boundedness of the sequence. Finally, we perform numerical experiments comparing our ADMM with the proximal alternating linearized minimization (PALM) proposed in [5] on the background/foreground extraction problem with real data. The numerical results show that our ADMM with a nontrivial dual step-size is efficient. The bridge penalty [27,28] and the logistic penalty have also been considered in [13]. Finally, the linear map A can be suitably chosen to model different scenarios. For example, A can be chosen to be the identity map for extracting L and S from noisy data D, and the blurring map for blurred data D. The linear map B can be the identity map or some "dictionary" that spans the data space (see, for example, [34]), and C can be chosen to be the identity map or the inverse of a certain sparsifying transform (see, for example, [40]).
More examples of (1.1) can be found in [8-10, 13, 41, 47]. One representative application that is frequently modeled by (1.1) via a suitable choice of Φ, Ψ, A, B and C is the background/foreground extraction problem, which is an important problem in video processing; see [6,7] for recent surveys. In this problem, one attempts to separate the relatively static information called "background" and the moving objects called "foreground" in a video. The problem can be modeled by (1.1), and such models are typically referred to as RPCA-based models. In these models, each image is stacked as a column of a data matrix D; the relatively static background is then modeled as a low rank matrix, while the moving foreground is modeled as sparse outliers. The data matrix D is then decomposed (approximately) as the sum of a low rank matrix L ∈ R m×n modeling the background and a sparse matrix S ∈ R m×n modeling the foreground. Various approximations are then used to induce low rank and sparsity, resulting in different RPCA-based models, most of which take the form of (1.1). One example is to set Ψ to be the nuclear norm of L, ...
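The RPCA-style decomposition D ≈ L + S described above can be sketched with a minimal ADMM loop: the L-update is singular value thresholding (the proximal map of the nuclear norm), the S-update is elementwise soft-thresholding (the proximal map of the ℓ1 norm), and the multiplier update carries a dual step-size γ, echoing the general dual step-size discussed in the abstract. This is a simplified convex instance with A, B, C taken as identity maps, not the paper's exact (possibly nonconvex) algorithm; the parameter heuristics for `lam` and `beta` are illustrative assumptions.

```python
import numpy as np

def soft_threshold(X, tau):
    # Elementwise soft-thresholding: prox of tau * ||.||_1 (induces sparsity in S).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svd_threshold(X, tau):
    # Singular value thresholding: prox of tau * nuclear norm (induces low rank in L).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def rpca_admm(D, lam=None, beta=None, gamma=1.0, max_iter=500, tol=1e-7):
    """Decompose D into low-rank L plus sparse S via a basic ADMM sketch.

    gamma is a dual step-size; lam and beta defaults are common heuristics
    (assumptions for illustration, not the paper's prescribed choices).
    """
    m, n = D.shape
    if lam is None:
        lam = 1.0 / np.sqrt(max(m, n))               # standard RPCA weight
    if beta is None:
        beta = 0.25 * m * n / (np.abs(D).sum() + 1e-12)  # heuristic penalty parameter
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    Y = np.zeros_like(D)                              # dual variable
    for _ in range(max_iter):
        L = svd_threshold(D - S + Y / beta, 1.0 / beta)
        S = soft_threshold(D - L + Y / beta, lam / beta)
        R = D - L - S                                 # constraint residual
        Y = Y + gamma * beta * R                      # dual update with step-size gamma
        if np.linalg.norm(R) <= tol * max(np.linalg.norm(D), 1.0):
            break
    return L, S
```

On synthetic data (a random low-rank matrix plus a few large sparse corruptions), the loop recovers a pair (L, S) whose sum reconstructs D to high accuracy; the background/foreground analogy maps columns of D to video frames.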