Today's CPUs are capable of supporting realtime audio for many popular applications, but some compute-intensive audio applications require hardware acceleration. This article looks at some realtime sound-synthesis applications and shares the authors' experiences implementing them on GPUs (graphics processing units).

APPROACHES TO SOFTWARE SYNTHESIS

Software synthesizers, which use software to generate audio in realtime, have been around for decades. They allow the use of computers as virtual instruments, to supplement or replace acoustic instruments in performance. Groups of software instruments can be aggregated into virtual orchestras. Similarly, in a video game or virtual environment with multiple sound sources, software synthesizers can generate each of the sounds in an auditory scene. For example, in a racing game, a separate software synthesizer may generate the sound of each car, with the resulting sounds combined to construct the auditory scene of the race.

Traditionally, because of limited computing power, approaches to realtime audio synthesis have focused on techniques that compute simple waveforms directly (e.g., additive and FM synthesis), use sampling and playback (e.g., wavetable synthesis), or apply spectral modeling techniques (e.g., modal synthesis) to generate audio waveforms. While these techniques are widely used and well understood, they work primarily with a model of the abstract sound produced by an instrument or object, not a model of the instrument or object itself. A more recent approach is physical modeling-based audio synthesis, in which the audio waveforms are generated by detailed numerical simulation of physical objects or instruments.

In physical modeling, a detailed numeric model of the behavior of an instrument or sound-producing object is built in software and then virtually "played" as it would be in the real world: the "performer" applies an excitation to the modeled object, analogous, for example, to a drumstick striking a drumhead. This triggers the computer to compute detailed simulation steps and generate the vibration waveforms that represent the output sound. By simulating the physical object and parameterizing the physical properties of how it produces sound, the same model can capture the realistic sonic variations that result from changes in the object's geometry, construction materials, and modes of excitation.

Suppose you are simulating a metallic plate to generate gong- or cymbal-like sounds. Varying a parameter that corresponds to the stiffness of the material may allow you to produce sounds ranging from a thin, flexible plate to a thicker, stiffer one. By changing the surface area of the same object, you can generate sound corresponding to cymbals or gongs of different sizes. Using the same model, you may also vary the way in which you excite the metallic plate, to generate sounds that result from hitting the plate with a soft mallet or a hard drumstick, or from bowing it. By changing these
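To make the contrast between the two families of techniques concrete, here is a minimal sketch (not the authors' code) of the direct-waveform style mentioned above: a software synthesizer filling an audio buffer sample by sample with simple FM synthesis. The function name, carrier and modulator frequencies, and modulation index are arbitrary illustrative choices.

#include <cmath>
#include <cstddef>
#include <vector>

// Minimal FM-synthesis sketch: fill a buffer with samples of a carrier sine
// whose phase is modulated by a second sine wave. Parameter names and
// default values are illustrative only.
std::vector<float> render_fm(std::size_t num_samples,
                             float sample_rate  = 44100.0f,
                             float carrier_hz   = 220.0f,
                             float modulator_hz = 110.0f,
                             float mod_index    = 2.0f)
{
    const float two_pi = 6.283185307179586f;
    std::vector<float> out(num_samples);
    for (std::size_t n = 0; n < num_samples; ++n) {
        float t = static_cast<float>(n) / sample_rate;
        // Classic FM: the carrier's phase is offset by a scaled modulator sine.
        float phase = two_pi * carrier_hz * t
                    + mod_index * std::sin(two_pi * modulator_hz * t);
        out[n] = std::sin(phase);
    }
    return out;
}

Note that each output sample costs only a couple of trigonometric evaluations, which is why such direct techniques fit comfortably within the realtime budget of a CPU. The model, however, describes the abstract sound, not any physical object that produces it.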
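The physical-modeling approach can be illustrated with a much simpler object than the stiff plate described above: an ideal vibrating string advanced with an explicit finite-difference scheme. The idea is the same, though: excite the model, step the simulation, and read the vibration at one point as the output waveform. This is a hedged sketch under assumed parameters; the grid size, Courant number, excitation shape, and pickup position are illustrative values, not the authors' model.

#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

// Toy physical model: an ideal string discretized on N points and advanced
// with an explicit finite-difference (leapfrog) update of the 1D wave
// equation. After a pluck-like excitation, the displacement at a "pickup"
// point is recorded each time step as the output audio signal.
std::vector<float> render_string(std::size_t num_samples,
                                 std::size_t N = 100,    // grid points
                                 float lambda  = 0.9f)   // Courant number c*dt/dx (<= 1 for stability)
{
    std::vector<float> prev(N, 0.0f), cur(N, 0.0f), next(N, 0.0f);

    // Excitation: a smooth "pluck" bump placed near one end of the string,
    // with zero initial velocity (prev == cur).
    for (std::size_t i = 1; i + 1 < N; ++i) {
        float x = static_cast<float>(i) / static_cast<float>(N - 1);
        cur[i] = prev[i] = std::exp(-300.0f * (x - 0.2f) * (x - 0.2f));
    }

    const float lam2 = lambda * lambda;
    const std::size_t pickup = N / 3;   // where we "listen" to the string
    std::vector<float> out(num_samples);

    for (std::size_t n = 0; n < num_samples; ++n) {
        // Interior update; the fixed endpoints remain 0.
        for (std::size_t i = 1; i + 1 < N; ++i) {
            next[i] = 2.0f * cur[i] - prev[i]
                    + lam2 * (cur[i + 1] - 2.0f * cur[i] + cur[i - 1]);
        }
        out[n] = next[pickup];
        std::swap(prev, cur);
        std::swap(cur, next);
    }
    return out;
}

In a full physical model such as the plate, the one-dimensional grid becomes a two-dimensional one with stiffness and damping terms, and the cost per output sample grows accordingly; the per-point updates inside the time loop, however, are independent of one another, which is the kind of data-parallel work this article is concerned with accelerating on GPUs.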