A key challenge to tomorrow's real-time 3D rendering engines has been the memory bandwidth barrier that limits how fast an image can be painted onto image memory. Another has been the floating-point throughput that limits how much world-coordinate data can be mapped to the screen. Other traditional challenges remain: the 3D engine must be small, modular, and affordable so it can be integrated easily into tomorrow's vision systems. These barriers are being transcended, and these goals realized, by implementing novel 3D renderer prototypes on a new "software breadboard." The breadboard, implemented in C and VHDL, permits rapid evaluation of promising concepts using off-the-shelf models and high-level structures. With a processor at each pixel in an array of small image-memory tiles, the 3D system model is a streamlined version of UNC's Pixel-Planes, designed to render more than 500,000 smooth-shaded triangles per simulated second on a 1280 x 1024 pixel screen. It texture-maps vertices to generate digital maps and stores previously computed transformations in a small buffer to avoid redundant operations; the effect is an efficient augmentation of scene detail. The breadboard, being developed under the auspices of Wright Laboratory, will help determine which functions should be implemented using ASICs. All system parameters, including the anti-aliasing method and the depth of the Z and color buffers, are programmable. Breadboard benchmarks and associated analyses show that a single card capable of rendering useful real-time "out-the-window" scenes is feasible today. The "software breadboard" is being used to design tomorrow's real-time 3D renderers.

DISPLAY SYSTEM AND GRAPHICS TECHNOLOGY TRENDS

A review of the capabilities of past programs reveals that the mixing and merging of all types of data has been a traditional task for cockpit display systems. Since electronic multifunction displays were introduced into the cockpit in the late sixties and early seventies, the electronic display system has become the primary information funnel to the pilot/operator for charts, maps, manuals, television, radar, infrared, instrumentation, and other sources. Much of this information was drawn in the form of 2D symbolic presentations. Though lines, arcs, alphanumerics, icons, and instrumentation readings are effective for communicating various classes of information, the aerospace community has indicated a need for communicating the surround more realistically to the pilot/operator [1, 2, 3]. This is especially true when visual or sensor-acquired scenes, normally in use, are prevented by mission requirements or hindered by events such as weather. Where terrain and other potential incursionary objects are positioned with respect to the viewer can, in many instances, be conveyed more effectively using images rendered in sun-angle-shaded perspective rather than in symbolic form. This eliminates a major part of the viewer workload associated with spatially manipulating, overlaying, integrating, and interpreting 2D images inside the viewer's mind.
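To make the processor-per-pixel idea concrete, here is a minimal sketch of how such an array rasterizes a triangle by evaluating its three linear edge expressions at every pixel of an image-memory tile simultaneously, in the manner of Pixel-Planes. NumPy vectorization stands in for the hardware array; the tile size, vertex winding, and function names are illustrative assumptions, not details from the breadboard.

```python
# Sketch only: evaluate a triangle's edge expressions A*x + B*y + C
# at every pixel of a tile at once, as a per-pixel processor array would.
import numpy as np

TILE = 16  # assumed tile edge length in pixels

def edge_coeffs(p0, p1):
    """Linear edge expression A*x + B*y + C that is >= 0 on the
    triangle's interior side for counter-clockwise vertex order."""
    (x0, y0), (x1, y1) = p0, p1
    return y0 - y1, x1 - x0, x0 * y1 - x1 * y0

def rasterize_tile(tri, origin):
    """Return a boolean coverage mask for one tile. Every pixel
    evaluates all three edge expressions in parallel, mirroring a
    processor-per-pixel array."""
    ox, oy = origin
    ys, xs = np.mgrid[oy:oy + TILE, ox:ox + TILE]
    inside = np.ones((TILE, TILE), dtype=bool)
    for p0, p1 in zip(tri, tri[1:] + tri[:1]):
        a, b, c = edge_coeffs(p0, p1)
        inside &= (a * xs + b * ys + c) >= 0
    return inside

# Example: one counter-clockwise triangle in the tile at the origin.
mask = rasterize_tile([(2, 2), (14, 3), (6, 13)], (0, 0))
print(mask.sum(), "pixels covered")
```

Because each edge test is a single evaluation broadcast over the whole tile, a true per-pixel array finishes a triangle in time independent of its screen area, which is the property that makes a target like 500,000 triangles per second plausible on modest hardware.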
A critical part of automatic classification algorithms is the extraction of features that distinguish targets from background noise and clutter. The focus of this paper is the use of variational methods for improving the classification of sea mines in both side-scan sonar and laser line-scan images. These methods are based on minimizing a functional of the image intensity. Examples include Total Variation Minimization (TVM), which is very effective for reducing the noise of an image without compromising its edge features, and Mumford-Shah segmentation, which, in its simplest form, provides an optimal piecewise constant partition of the image. For the side-scan sonar images it is shown that a combination of these two variational methods (first reducing the noise using TVM, then segmenting) outperforms the use of either one individually for the extraction of mine-like features. Multichannel segmentation based on a wavelet decomposition is also used effectively to declutter a sonar image. Finally, feature extraction and classification using segmentation is demonstrated on laser line-scan images of mines on a cluttered sea floor.
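For reference, the two functionals named above can be written in their standard forms from the literature (notation is the conventional one; the paper's exact weights and discretization are not reproduced here):

```latex
% Total Variation Minimization (Rudin-Osher-Fatemi form): u is the
% denoised image, f the observed image, \lambda a fidelity weight.
\min_{u} \int_{\Omega} |\nabla u| \, dx \;+\; \frac{\lambda}{2} \int_{\Omega} (u - f)^2 \, dx

% Piecewise constant Mumford-Shah (two-phase form): the contour C
% splits \Omega into regions of constant intensity c_1 and c_2.
\min_{c_1, c_2, C} \; \mu\, \mathrm{Length}(C)
  \;+\; \int_{\mathrm{inside}(C)} (f - c_1)^2 \, dx
  \;+\; \int_{\mathrm{outside}(C)} (f - c_2)^2 \, dx
```

A compact sketch of the combination the paper reports working best, TVM first and segmentation second, using off-the-shelf scikit-image implementations of both models (the synthetic image and all parameter values below are assumptions for demonstration, not the paper's data or settings):

```python
# Denoise with total variation, then segment with a piecewise
# constant (Chan-Vese style) two-phase model.
import numpy as np
from skimage.restoration import denoise_tv_chambolle
from skimage.segmentation import chan_vese

rng = np.random.default_rng(0)
image = np.zeros((128, 128))
image[50:70, 60:90] = 1.0                    # bright "mine-like" blob
noisy = image + 0.8 * rng.standard_normal(image.shape)

denoised = denoise_tv_chambolle(noisy, weight=0.2)  # TVM step
mask = chan_vese(denoised)                   # two-phase partition

print("pixels in one phase of the partition:", int(mask.sum()))
```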
Non-pictorial computer image generation has been used to amplify vision, to see through fog, rain, and other poor-visibility conditions. Recent human factors research states there is a need for pictorial computer image generation (CIG) and that the need is highest during peak-workload phases of flight. However, because even modest levels of image fidelity consume hundreds of MIPS and MFLOPS, the benefits of using CIG in the cockpit are often overshadowed by costs. This paper presents a low-cost alternative for presenting photo-realistic imagery during the final approach, which is often a peak-workload phase of flight. The method capitalizes on "a priori" information. It accesses out-the-window "snapshots" from a mass storage device, selecting the snapshots that deliver the best match for a given aircraft position and runway scene. It then warps the snapshots to align them more closely with the current viewpoint. The individual snapshots, stored as highly compressed images, are decompressed and interpolated to produce a "clear-day" video stream. The paper shows how this warping, when combined with other compression methods, saves considerable amounts of storage; compression factors from 1000 to 3000 were achieved. Thus, a CD-ROM today can store reference snapshots for thousands of different runways. Dynamic scene elements not present in the snapshot database can be inserted as separate symbolic or pictorial images. When underpinned by an appropriate suite of sensor technologies, the methods discussed indicate that an all-weather virtual video camera is possible.

BACKGROUND

Imaging sensor technologies, like radar and infrared, have shown excellent capabilities for penetrating all manner of foul-weather conditions. But each has been shown to have serious drawbacks. Infrared cameras do not penetrate some common fog conditions, and crisp, clear images also disappear when temperatures have had time to become uniform. Ground-mapped radar images, on the other hand, tend to be noisy and lack detail. Both types of sensor technologies may require considerable practice and deliberation to interpret what is happening; the unfamiliar contrast, noise, and lighting effects, and, in the case of radar, loss of detail, confuse viewers. Because of sensor difficulties like these, many in the industry have felt compelled to embrace a combination of technologies for solving Synthetic Vision System (SVS) problems. One approach combines a sensor suite with a graphics generator. The idea is to use sensors to ascertain precise aircraft position and attitude and to detect any incursionary vehicles or objects as they move through the field-of-view or field-of-regard. The sensors can be imaging or non-imaging. A graphics generator can be used to show a refined and fused version of what the sensors tell. The scene presented may range from minimalist stroke formats to highly realistic raster images. Today, simple calligraphic (stroke) formats are preferred for see-through display technologies, specifically the Head-Up Display (HUD) and the Head-Mounted Display (HMD).
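The abstract does not specify the warp itself, so the sketch below assumes a planar-runway homography as one plausible instance of "warping the snapshot toward the current viewpoint." The OpenCV calls are real library functions; the snapshot database layout, the pose-distance metric, and the helper names are hypothetical.

```python
# Sketch of the "select nearest snapshot, then warp it toward the
# current viewpoint" step. Database schema and metric are assumed.
import numpy as np
import cv2

def nearest_snapshot(pose, snapshots):
    """Pick the stored snapshot whose recorded pose is closest to the
    current aircraft pose (simple Euclidean metric, assumed)."""
    return min(snapshots, key=lambda s: np.linalg.norm(s["pose"] - pose))

def warp_to_viewpoint(snap, current_corners):
    """Warp the snapshot so the runway quad it contains lands on the
    quad predicted for the current viewpoint (planar-scene assumption)."""
    H, _ = cv2.findHomography(snap["runway_corners"], current_corners)
    h, w = snap["image"].shape[:2]
    return cv2.warpPerspective(snap["image"], H, (w, h))

# Usage with a dummy one-entry database (all values assumed):
db = [{
    "pose": np.array([0.0, 0.0, 300.0]),            # x, y, altitude
    "image": np.zeros((480, 640, 3), np.uint8),     # stand-in snapshot
    "runway_corners": np.float32([[200, 300], [440, 300],
                                  [480, 460], [160, 460]]),
}]
snap = nearest_snapshot(np.array([5.0, -3.0, 290.0]), db)
frame = warp_to_viewpoint(snap, np.float32([[210, 310], [430, 310],
                                            [470, 455], [170, 455]]))
```

In a full system, the decompressed, warped frames would then be interpolated over time to yield the "clear-day" video stream the paper describes, with sensor-detected dynamic objects composited on top.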