This thesis describes three interrelated projects that cut across the author's interests in musical information representation and retrieval, programming language theory, machine learning, and human/computer interaction.

I. Optical music recognition. The first part introduces an optical music interpretation (OMI) system that derives musical information from the symbols on sheet music. The first chapter is an introduction to OMI's parent field of optical music recognition (OMR), and to the present implementation as created for the Levy project. Because OMI requires a standard representation in which to express its output, the second chapter is a somewhat tangential but necessary study of computer-based musical representation languages, with particular emphasis on GUIDO and Mudela. The third and core chapter describes the processes involved in the present optical music interpretation system. While there are some details related to its implementation in the Python programming language, most of the material involves issues surrounding music notation rather than computer programming. The fourth chapter demonstrates how the logical musical data generated by the OMI system can be used as part of a musical search engine.

II. Tempo extraction. The second part presents a system that automatically obtains tempo and rubato curves from recorded performances by aligning them to strict-tempo MIDI renderings of the same piece of music. The usefulness of such a system in the context of current musicological research is explored.

III. Realtime digital signal processing programming environment. Lastly, a portable and flexible system for realtime digital signal processing (DSP) is presented. This system is both easy to use and powerful, in large part because it takes advantage of existing mature technologies. The framework provides a foundation for easier experimentation in new directions in audio and video processing, including physical modeling and motion tracking.