Modern methods mainly treat lane detection as a pixel-wise segmentation problem, which struggles with efficiency and with challenging scenarios such as severe occlusions and extreme lighting conditions. Inspired by human perception, we observe that recognizing lanes under severe occlusion and extreme lighting relies mainly on contextual and global information. Motivated by this observation, we propose a novel, simple, yet effective formulation aimed at ultra-fast speed and challenging scenarios. Specifically, we treat lane detection as an anchor-driven ordinal classification problem using global features. First, we represent lanes with sparse coordinates on a series of hybrid (row and column) anchors. With the help of this anchor-driven representation, we then reformulate lane detection as an ordinal classification problem to obtain the lane coordinates. The anchor-driven representation significantly reduces the computational cost, and the large receptive field of the ordinal classification formulation lets us handle challenging scenarios. Extensive experiments on four lane detection datasets show that our method achieves state-of-the-art performance in both speed and accuracy; a lightweight version even reaches 300+ frames per second (FPS). Our code is at https://github.com/cfzd/Ultra-Fast-Lane-Detection-v2.
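To make the anchor-driven classification idea concrete, below is a minimal sketch in PyTorch, not the authors' implementation (that is in the linked repository), showing how per-row-anchor class distributions over horizontal grid cells can be turned into continuous lane coordinates via a soft expectation. All shapes and names here are hypothetical.

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: B images, L lanes, R row anchors, C grid cells per row.
# `logits` would come from a classification head over global features; cell c
# corresponds to horizontal position (c + 0.5) / C in the image. The actual
# method also predicts whether a lane exists at each anchor, omitted here.
def row_anchor_expectation(logits: torch.Tensor) -> torch.Tensor:
    """Convert per-row-anchor class logits (B, L, R, C) into continuous
    x-coordinates in [0, 1] by taking the expectation of the softmax
    distribution over grid cells."""
    B, L, R, C = logits.shape
    probs = F.softmax(logits, dim=-1)                          # (B, L, R, C)
    centers = (torch.arange(C, device=probs.device,
                            dtype=probs.dtype) + 0.5) / C      # cell centers
    return (probs * centers).sum(dim=-1)                       # (B, L, R)

# Toy usage: random logits standing in for network output.
logits = torch.randn(2, 4, 72, 100)  # 2 images, 4 lanes, 72 rows, 100 cells
x_coords = row_anchor_expectation(logits)
print(x_coords.shape)  # torch.Size([2, 4, 72])
```

Classifying among C cells per row anchor is far cheaper than dense per-pixel segmentation, which is the source of the speedup the abstract claims; the hybrid anchors additionally use column anchors for near-horizontal lanes.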
Gait recognition has developed rapidly in recent years. However, current work focuses primarily on ideal laboratory scenes, leaving gait in the wild largely unexplored. One main reason is the difficulty of collecting in-the-wild gait datasets, which must cover diverse intrinsic and extrinsic factors of human gait. To remedy this, we propose constructing a large-scale gait dataset via controllable computer simulation. Specifically, to diversify the intrinsic factors of gait, we generate numerous characters with diverse attributes and associate them with various walking styles. To diversify the extrinsic factors, we build a complex scene with a dense camera layout. We then design an automatic generation toolkit in Unity3D to simulate the walking scenarios and capture the gait data. The result is a dataset that simulates in-the-wild conditions, called VersatileGait, containing more than one million silhouette sequences of 10,000 subjects across diverse scenarios. VersatileGait has several desirable properties: large size, diverse pedestrian attributes, a complex camera layout, high-quality annotations, a small domain gap with real data, good scalability for new demands, and no privacy issues. Through a series of experiments, we first explore the effects of different factors on gait recognition. We further show that pre-training models on our dataset yields considerable performance gains on CASIA-B, OU-MVLP, and CASIA-E. Finally, we demonstrate the potential of its fine-grained labels, beyond the ID label, to improve the efficiency and effectiveness of models. Our dataset and its generation toolkit are available at https://github.com/peterzpy/VersatileGait.
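As a rough illustration of the factor-combination idea, and not the actual Unity3D toolkit (whose code is in the linked repository), one could pair intrinsic configurations (character, walking style) with extrinsic ones (camera pose) in a sampling loop like the hypothetical sketch below. All factor pools and names here are invented for illustration.

```python
import random

# Hypothetical factor pools; the real toolkit drives Unity3D with far richer
# character attributes, walking-style animations, and a dense camera layout.
characters = [f"char_{i:05d}" for i in range(10_000)]
walking_styles = ["normal", "fast", "slow", "bag_carry"]
camera_poses = [(yaw, pitch) for yaw in range(0, 360, 30)
                for pitch in (10, 30, 60)]

def sample_capture_jobs(n: int, seed: int = 0) -> list[dict]:
    """Sample n (character, style, camera) combinations, pairing each
    intrinsic configuration with a random extrinsic viewpoint."""
    rng = random.Random(seed)
    return [{
        "character": rng.choice(characters),
        "style": rng.choice(walking_styles),
        "camera": rng.choice(camera_poses),
    } for _ in range(n)]

print(sample_capture_jobs(3))
```

Enumerating factors this way is what lets a simulated dataset control diversity explicitly, which is difficult to guarantee when collecting real in-the-wild footage.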