The standard procedure for wireframe drawing with hidden line removal on a graphics card has not changed for a long time: first the filled polygons are drawn, laying down the depth buffer; next, the polygon edges are drawn as lines with a small depth offset to ensure that polygons do not occlude their own edges. The depth offset is required because the procedure for rasterizing lines is not exactly the same as the one for rasterizing polygons. Consequently, when a polygon edge is rasterized as a line, a given fragment may have a depth value that differs from the one produced when the polygon itself is rasterized, and this leads to stippling artefacts. However, adding an offset is not an ideal solution, since the offset can disocclude lines that should be hidden. Moreover, stippling usually persists near steep slopes in the mesh, where a very large offset would be required. The only real fix is a slope-dependent offset, but that tends to make the disocclusion problem much worse. A few authors have proposed improved techniques, but these are either not intended for modern graphics hardware [Wang et al.] or incur a performance hit [Herrel et al.].

Our solution does not use the line primitive at all. Instead, polygon edges are drawn directly as part of polygon rasterization. For each fragment, we compute the shortest distance, d, to the edges of the polygon and map that distance to an intensity value, I, as shown in the figure (right). This mapping is not a step function but a smooth falloff, I = 2^(−2d²), which amounts to antialiasing by prefiltering.

The method works for convex polygons (most likely triangles or quads) and suffers from none of the artefacts associated with the offset-based methods, but it does have one drawback. If a polygon has no neighbouring polygon (e.g. near a hole or a silhouette), the line is drawn from one side only, which means that silhouette lines are thinner and not antialiased. In practice, however, the quality is still far better than that of the offset-based method, and the performance is almost invariably better: on a GeForce 7800 GTX, the Happy Buddha mesh was rendered at 25 fps using our method and only 5 fps using the offset-based method. Thus, our method seems particularly well suited for rendering dense triangle meshes such as the increasingly common laser-scanned models. Furthermore, many variations of the method are possible. In fact, the two images in the figure show the method using attenuation of line intensity (left) and of line thickness (center). In the center image, alpha testing was used to remove the interior of the quads.

Implementation

Observe that the 2D distance from a point to a polygon edge is an affine function of the point's position. Such functions are reproduced exactly by linear interpolation. For this reason, we can compute the distance at each vertex and simply interpolate linearly. Thus, for each vertex of, say, a triangle we must send the other two vertices as attribut...
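Although the implementation description above is cut off, the computation it outlines can be sketched in a few lines. The following is a minimal CPU-side illustration in Python/NumPy, not the authors' shader code; the triangle coordinates and fragment position are hypothetical, and explicit barycentric interpolation stands in for the linear interpolation that the rasterizer would perform on per-vertex attributes. It checks that interpolating the per-vertex edge distances reproduces the true point-to-edge distances, and then applies the intensity mapping I = 2^(−2d²).

```python
import numpy as np

def edge_distance(p, a, b):
    """Unsigned 2D distance from point p to the line through a and b."""
    ab = b - a
    n = np.array([-ab[1], ab[0]]) / np.linalg.norm(ab)  # unit edge normal
    return abs(np.dot(p - a, n))

def barycentric(p, v0, v1, v2):
    """Barycentric coordinates of p with respect to triangle (v0, v1, v2)."""
    T = np.array([[v0[0] - v2[0], v1[0] - v2[0]],
                  [v0[1] - v2[1], v1[1] - v2[1]]])
    l0, l1 = np.linalg.solve(T, p - v2)
    return np.array([l0, l1, 1.0 - l0 - l1])

# Screen-space triangle (hypothetical coordinates).
v = [np.array([10.0, 10.0]), np.array([200.0, 30.0]), np.array([80.0, 150.0])]

# Per-vertex distances: vertex i's distance to the edge opposite it.
# These are the values that would be sent down as vertex attributes.
d_vertex = np.array([edge_distance(v[0], v[1], v[2]),
                     edge_distance(v[1], v[2], v[0]),
                     edge_distance(v[2], v[0], v[1])])

# A fragment position inside the triangle.
p = np.array([90.0, 60.0])
lam = barycentric(p, *v)

# Because the distance to each edge line is an affine function of position,
# linearly interpolating the per-vertex distances reproduces the true
# distances from the fragment to the three edges.
d_interp = lam * d_vertex
d_true = np.array([edge_distance(p, v[1], v[2]),
                   edge_distance(p, v[2], v[0]),
                   edge_distance(p, v[0], v[1])])
assert np.allclose(d_interp, d_true)

# Shortest distance to any edge, mapped to a line intensity I = 2^(-2 d^2).
d = d_interp.min()
I = np.exp2(-2.0 * d * d)
print(f"d = {d:.2f} px, I = {I:.4f}")
```

In a GPU implementation the per-vertex distances would be computed in a vertex or geometry shader and passed down as interpolated attributes, so the fragment stage only needs the minimum and the exp2 mapping.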
Figure 1: The left image shows the Stanford Bunny drawn using (a) the offset method with a constant offset and (b) our single-pass method. The right image shows the Utah Teapot drawn using (c) a constant offset, (d) our ID buffer method, and (e) a slope-dependent offset. The white arrows indicate artifacts introduced by the slope-dependent offset. Notice the stippling in the constant-offset images.

Abstract

Two novel and robust techniques for wireframe drawing are proposed. Neither suffers from the well-known artifacts associated with the standard two-pass, offset-based techniques for wireframe drawing. Both methods draw prefiltered lines and produce high-quality antialiased results without super-sampling. The first method is a single-pass technique well suited for convex N-gons with small N (in particular quadrilaterals or triangles). It is demonstrated that this method is more efficient than the standard techniques and ideally suited for implementation using geometry shaders. The second method is completely general and suited for arbitrary N-gons, which need not be convex. Lastly, it is described how our methods can easily be extended to support various line styles.
Distributed VR promises to change the way we work and collaborate [Singhal and Zyda 1999]. In this sketch we extend the accessibility of the virtual world originally developed in [Larsen and Eriksen 1998] by introducing the modern cellular phone as a platform for primitive interfaces to VR applications. We believe that our use of a cellular phone has led to the first completely pocketable platform for VR user interfaces.

While a cellular phone offers only primitive access to the virtual environment, we expect that being able to access the VE at all times will spawn new applications of VR and enhance existing ones. For instance, existing online multi-user games may provide limited access through a cellular phone, enabling people who are either in transit or simply not near a PC to participate through a more limited interface. An important question is "how limited?". We believe that a "cellular interaction mode" need not be very restrictive as long as careful use is made of the available technology.

Unfortunately, existing cellular phones have no hardware support for 3D graphics, and the screen is much smaller (typically 176 × 208 or 208 × 320 pixels). Furthermore, the only input comes from a limited number of push buttons. However, a cellular phone does have a high-speed TCP/IP connection to any server on the Internet using the current GPRS (General Packet Radio Service) protocol (40.2 kb/s) or the upcoming and much faster UMTS (Universal Mobile Telecommunications System) protocol. Currently, the producers of cellular phones have not agreed on common operating system standards. However, many of these mobile devices support a subset of Java (J2ME, Java 2 Micro Edition). This subset contains no floating-point math, and the only supported network protocol is HTTP, which implies that the only way to obtain duplex data streams on the cellular phone is to poll the server continuously.

We have chosen to construct a LEGO™ building application, and our task was to enable people to build LEGO™ structures collaboratively using either a traditional workstation or a cellular phone. The system is based on a client-server model. The server contains the 3D models and maintains connections to clients; in our case, the clients may be either workstations or cellular devices. In the following discussion, we focus on the case where the client is a cellular device.

The lack of support for a 3D graphics API and floating-point arithmetic makes it very important to divide the work between the server and the cellular device in an intelligent way. In our system, we have chosen to perform high-quality rendering on the server, which uploads images to the device on demand. Since LEGO™ bricks manipulated interactively cannot be rendered locally in real time, they are rendered in wireframe. In the absence of floating-point numbers, we have implemented a simple facility for fixed-point arithmetic. As the user moves in the environment, new images are generated on the server and transferred to the cellular phone. When the user selects ...
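The sketch mentions a simple fixed-point facility but gives no details. Below is a minimal illustration of the general idea in Python, assuming a 16.16 fixed-point format stored in integers; the format, constants, and function names are hypothetical and are not taken from the original system.

```python
# Minimal 16.16 fixed-point sketch (hypothetical format; the original
# implementation details are not given in the text).
FRAC_BITS = 16
ONE = 1 << FRAC_BITS          # 1.0 in fixed point

def to_fixed(x: float) -> int:
    """Convert a float to 16.16 fixed point (only needed off-device)."""
    return int(round(x * ONE))

def to_float(f: int) -> float:
    return f / ONE

def fx_mul(a: int, b: int) -> int:
    """Multiply two 16.16 values; the raw product carries 32 fractional
    bits, so shift back down by FRAC_BITS."""
    return (a * b) >> FRAC_BITS

def fx_div(a: int, b: int) -> int:
    """Divide two 16.16 values; pre-shift the numerator to keep precision."""
    return (a << FRAC_BITS) // b

# Example: scale a brick coordinate by 1.5 using only integer arithmetic.
x = to_fixed(2.25)
s = to_fixed(1.5)
print(to_float(fx_mul(x, s)))   # 3.375
```

On a J2ME device the intermediate product in fx_mul would overflow a 32-bit int and would have to be widened to a 64-bit long before shifting; Python's arbitrary-precision integers hide that concern in this sketch.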