Computers are often used in the performance of popular music, but most often in very restricted ways: keyboard synthesizers, where musicians remain in complete control, or pre-recorded or sequenced music, where musicians follow the computer's drums or click track. An interesting and as yet little-explored possibility is the computer as a highly autonomous performer of popular music, capable of joining a mixed ensemble of computers and humans. Considering the skills and functional requirements of musicians leads to a number of predictions about future human-computer music performance (HCMP) systems for popular music. We describe a general architecture for such systems, some early implementations, and our experience with them.

Sound and music computing research has made a tremendous impact on music through sound synthesis, audio processing, and interactive systems. In "high art" experimental music, we have also seen important advances in interactive music performance, including computer accompaniment, improvisation, and reactive systems of all sorts. In contrast, few if any systems can claim to support "popular" music performance in genres such as rock, jazz, and folk. Here we see digital instruments, sequencers, and audio playback, but not autonomous interactive machine performers. Although the practice of popular music may not be a very active topic for research in music technology, it is arguably the dominant form of live music. For example, a recent weekly listing of concerts in Pittsburgh includes 24 "classical" concerts, 1 experimental or electro-acoustic performance, and 98 listings for rock, jazz, open stage, and acoustic music.

In this article, we explore approaches to interactive popular music performance with computers. We present a vision for such systems in the form of predictions about future performance