Computer vision tasks such as motion estimation, depth estimation, and object detection benefit from light field images, which carry more structural information than traditional 2D monocular images. However, real-world light field images are hard to obtain because the acquisition instruments are costly and difficult to calibrate. Most available static light field datasets are modest in size and cannot support methods such as Transformers that require large-scale data to fully exploit local and global correlations. Moreover, studies of dynamic scenarios, such as object tracking and motion estimation based on 4D light field images, remain rare, although superior performance can be expected there. In this paper, we first propose a new static light field dataset that contains up to 50 scenes with 8 to 10 viewpoints per scene, and whose ground truth includes disparities, depths, surface normals, segmentations, and object poses. This dataset is larger than current mainstream datasets for depth estimation refinement, and it focuses on indoor and some outdoor scenarios. Second, to provide optical flow ground truth that captures the 3D motion of objects, beyond the ground truth obtainable in static scenes, and thus enable more precise pixel-level motion estimation, we release a light field scene flow dataset with dense 3D motion ground truth, with 150 frames per scene. Third, using DistgDisp and DistgASR, which disentangle the angular and spatial domains of the light field, we perform disparity estimation and angular super-resolution to evaluate our dataset. Experimental results demonstrate the performance and potential of our dataset for disparity estimation and angular super-resolution.
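The disparity ground truth mentioned above is tied to the standard 4D light field parameterization L(u, v, s, t): a scene point with disparity d appears shifted by d pixels per unit of angular offset between sub-aperture views. A minimal NumPy sketch of this view-warping relation (the function name and nearest-neighbor sampling are illustrative choices, not part of the paper's method or of DistgDisp):

```python
import numpy as np

def warp_to_center(view, du, dv, disparity):
    """Warp a sub-aperture view toward the central view.

    In a 4D light field L(u, v, s, t), a scene point with per-pixel
    disparity d appears shifted by (d*du, d*dv) pixels in the view at
    angular offset (du, dv) from the center. `disparity` is an H x W
    map aligned with `view`; nearest-neighbor sampling keeps the
    sketch short (real pipelines interpolate sub-pixel shifts).
    """
    h, w = view.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_y = np.clip(np.round(ys + disparity * dv).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs + disparity * du).astype(int), 0, w - 1)
    return view[src_y, src_x]

# Toy example: constant disparity of 2 px, one view to the right (du=1).
view = np.arange(25, dtype=float).reshape(5, 5)
warped = warp_to_center(view, du=1, dv=0, disparity=np.full((5, 5), 2.0))
```

Disparity estimators exploit this relation in reverse: the disparity map that makes all warped views photo-consistent with the central view is the estimate.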
Metaverses have brought significant changes to industry, and their academic foundation can be traced back to the concept of cyber-physical-social systems (CPSS), proposed in 2010. Radar is an important sensor in sensing systems and is widely applied in many fields, especially autonomous driving. To cope with complex environments, smart radars with real-time information processing capabilities are required. Human factors play a critical role in the operation and management of radar systems; thus, digital-twin radars in cyber-physical systems (CPS) cannot achieve intelligence in CPSS because they consider human involvement only incompletely. With this in mind, we propose RadarVerses, a novel framework for smart radars in metaverses based on ACP-based parallel intelligence, also known as cyber-physical-social intelligence (CPSI). RadarVerses consist of five main parts: physical radars, descriptive radars, predictive radars, prescriptive radars, and deep radars. To construct RadarVerses at the technical level, we introduce four main technical foundations: 1) communication technology; 2) scenarios engineering; 3) foundation models; and 4) digital workers. In addition, we provide a case study on predictive maintenance of accumulated snow on LiDARs in RadarVerses.