Synthetic imagery used for training and evaluating visual search and detection tasks should yield the same observer performance as obtained in the field. Generating synthetic imagery generally involves a range of computational approximations and simplifications of the physical processes involved in image formation, in order to meet the update rates of real-time systems or simply to achieve reasonable computation times. These approximations reduce the fidelity of the resulting imagery, which in turn affects observer performance. We recently introduced visual conspicuity as an efficient task-related measure that can be deployed to calibrate synthetic imagery for use in human visual search and detection tasks. Target conspicuity determines mean visual search time: targets in synthetic imagery with the same visual conspicuity as their real-world counterparts give rise to observer performance in simulated search and detection tasks similar to that in equivalent real-world scenarios. In the present study, we compare the conspicuity and detection ranges of real and simulated targets with different degrees of shading. When ambient occlusion is taken into account and the contrast ratios in a scene are calibrated, the detection ranges and conspicuity values of simulated targets are equivalent to those of their real-world counterparts, for different degrees of shading. When no shading, or incorrect shading, is applied in the simulation, this is not the case, and the resulting imagery cannot be deployed for training visual search and detection tasks.
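The link between conspicuity and mean search time can be illustrated with a toy model. The inverse-proportional form below, and the constants in it, are assumptions chosen purely for illustration; they are not the relation measured in the study:

```python
def mean_search_time(conspicuity, t_min=0.5, k=2.0):
    """Toy model: mean search time falls as target conspicuity rises.

    conspicuity -- dimensionless conspicuity measure, > 0 (assumed scale)
    t_min       -- floor on search time in seconds (assumed constant)
    k           -- scaling constant in seconds (assumed constant)
    """
    if conspicuity <= 0:
        raise ValueError("conspicuity must be positive")
    return t_min + k / conspicuity

# Under this model, a simulated target calibrated to the same
# conspicuity as its real-world counterpart predicts the same
# mean search time, which is the calibration goal described above.
real_target_time = mean_search_time(4.0)
simulated_target_time = mean_search_time(4.0)
```

The specific functional form matters less than the monotonic relationship: matching conspicuity between real and simulated targets is what equalizes predicted search performance.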
Procedural generation of virtual worlds is a promising alternative to classical manual modelling approaches, which usually require a large amount of effort and expertise. However, it suffers from a number of issues, most importantly a lack of user control over the generation process and its outcome. Because of this, the result of a procedural method is highly unpredictable, rendering it almost unusable for virtual world designers. This paper focuses on providing user control to deliver an outcome consistent with the designer's intent. For this, we introduce semantic constraints, a flexible concept for expressing high-level designer intent in intuitive terms, such as line of sight. Our constraint evaluation method is capable of detecting the context in which such a constraint is specified, automatically adapting to surrounding features of the virtual world. From experiments performed within our prototype modelling system, we conclude that semantic constraints are another step forward in making procedural generation of virtual worlds more controllable and accessible to non-specialist designers.
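A line-of-sight constraint of the kind mentioned above could, for instance, be evaluated by sampling terrain heights along the ray between two points on a grid heightmap. This is a minimal sketch under assumed names and a simplistic visibility test, not the constraint evaluation method of the paper:

```python
def has_line_of_sight(heightmap, a, b, eye_height=1.8, samples=100):
    """Check whether point b is visible from point a over a grid heightmap.

    heightmap  -- 2D list of terrain heights, indexed [row][col]
    a, b       -- (row, col) integer grid positions (names assumed)
    eye_height -- observer/target height above the ground (assumed value)
    """
    (r0, c0), (r1, c1) = a, b
    h0 = heightmap[r0][c0] + eye_height
    h1 = heightmap[r1][c1] + eye_height
    for i in range(1, samples):
        t = i / samples
        # Interpolate position and sight-line height along the ray.
        r = r0 + t * (r1 - r0)
        c = c0 + t * (c1 - c0)
        ground = heightmap[int(round(r))][int(round(c))]
        ray = h0 + t * (h1 - h0)
        if ground > ray:  # terrain blocks the sight line
            return False
    return True
```

A constraint solver could call such a predicate repeatedly while placing or perturbing features, rejecting candidate placements that violate the designer's stated line-of-sight requirement.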
This paper discusses a rapid workflow for the automated generation of geospecific terrain databases for military simulation environments. Starting from photogrammetric data products of an oblique aerial camera, the process comprises deterministic terrain extraction from digital surface models and semantic building reconstruction from 3D point clouds. Further, an efficient supervised technique requiring little training data is applied to recover land classes from the true-orthophoto of the scene, and visual artifacts from parked vehicles, which are to be modeled separately, are suppressed through inpainting based on generative adversarial networks. As a proof of concept for the proposed pipeline, a data set of the Altmark/Schnoeggersburg training area in Germany was prepared and transformed into a ready-to-use environment for the commercial Virtual Battlespace Simulator (VBS). The result was compared to another automatically derived database and to a semi-manually crafted scene with respect to visual accuracy, functionality and required time effort.
While the potential of Virtual Environments (VEs) for training simulators has been recognized from the start of the technology's emergence, to date most VE systems that claim to be training simulators have been developed in an ad-hoc fashion. Based on requirements of the Royal Netherlands Army and Air Force, we have recently developed VE-based training simulators following basic systems engineering practice. This paper reports on our approach in general and specifically focuses on two examples. The first is a distributed VE system for training Forward Air Controllers (FACs). This system comprises an immersive VE for the FAC trainee, as well as a number of other components, all interconnected in a network infrastructure utilizing the DIS/HLA standard protocols for distributed simulation. The prototype VE FAC simulator is currently being used in the training program of the Netherlands Integrated Air/Ground Operations School. Feedback from the users is being collected as input for a follow-on development activity. The second development is aimed at evaluating VE technology for training gunnery procedures with the Stinger man-portable air-defense system. In this project, a system is being developed that enables us to evaluate a number of different configurations with respect to both human and systems performance characteristics.