Music is an integral part of high school students' daily lives, and most use digital music devices and services. The one-week Summer Music Technology (SMT) program at Drexel University introduces high school underclassmen to music technology to reveal the influence and importance of engineering, science, and mathematics. By engaging participants' affinity for music, we hope to motivate and catalyze curiosity in science and technology. The curriculum emphasizes signal processing concepts, tools, and methods through hands-on activities and individual projects, and it leverages computer-based learning and open-source software in most activities. Since the program began in 2006, SMT has enrolled nearly 100 high school students and further developed the communication and teaching skills of nearly 20 graduate and undergraduate engineering students serving as core instructors. The program also serves to attract students from backgrounds under-represented in engineering, math, and science who may not have considered these fields.
In this work we introduce the concept of modeling musical instrument tones as dynamic textures. Dynamic textures are multidimensional signals that exhibit certain temporally stationary characteristics, such that they can be modeled as observations from a linear dynamical system (LDS). Previous work on dynamic textures has shown that sequences exhibiting such characteristics can, in many cases, be re-synthesized by an LDS with high accuracy. In this work we demonstrate that the short-time Fourier transform (STFT) coefficients of certain instrument tones (e.g., piano, guitar) are well modeled under this assumption. We show that these instruments can be re-synthesized from an LDS with high fidelity, even with low-dimensional models. Looking ultimately to develop models that can be altered to control pitch and articulation, we analyze the connections between musical qualities such as articulation and the parameters of the linear dynamical system. Finally, we present preliminary experiments in altering such musical qualities through model re-parameterization.
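As a rough illustration of the pipeline this abstract describes, the sketch below fits a low-dimensional LDS to the STFT magnitudes of a synthetic decaying tone using the standard SVD-based dynamic texture estimation; the synthetic tone, state dimension, and FFT size are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch, assuming the standard SVD-based dynamic texture estimation
# (observation matrix C from the leading singular vectors, transition matrix A by
# least squares); the synthetic tone and parameters below are illustrative only.
import numpy as np
from scipy.signal import stft

fs = 16000
t = np.arange(0, 2.0, 1 / fs)
# Stand-in for a recorded plucked tone: a decaying sum of harmonics at 220 Hz
tone = sum(np.exp(-3 * (k + 1) * t) * np.sin(2 * np.pi * 220 * (k + 1) * t)
           for k in range(6))

freqs, frame_times, Z = stft(tone, fs=fs, nperseg=1024)
Y = np.abs(Z)                          # observations: frequency bins x time frames

n = 10                                 # low-dimensional state (assumed, not from the paper)
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
C = U[:, :n]                           # LDS observation matrix
X = np.diag(s[:n]) @ Vt[:n, :]         # estimated state trajectory

# Transition matrix via least squares: X[:, 1:] ~= A @ X[:, :-1]
A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])

# Re-synthesize the magnitude spectrogram by rolling the state forward from x_0
x = X[:, 0]
Y_hat = np.zeros_like(Y)
for i in range(Y.shape[1]):
    Y_hat[:, i] = C @ x
    x = A @ x

print("relative reconstruction error:",
      np.linalg.norm(Y - Y_hat) / np.linalg.norm(Y))
```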
Access to hardware and software tools for producing music has become commonplace in the digital landscape. While the means to produce music are widely available, significant time must be invested to attain professional results. Mixing multi-channel audio requires techniques and training far beyond the knowledge of the average music software user. Achieving balance and clarity in a mixture comprising many instrument layers requires experience in evaluating and modifying the individual elements and their sum. Creating a mix involves many technical concerns (level balancing, dynamic range control, stereo panning, spectral balance) as well as artistic decisions (modulation effects, distortion effects, side-chaining, etc.). This work proposes methods to model the relationships between a set of multi-channel audio tracks based on short-time spectral-temporal characteristics and long-term dynamics. The goal is to create a parameterized space based on high-level perceptual cues to drive processing decisions in a multi-track audio setting.
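As a loose sketch of the kind of short-time spectral-temporal analysis this abstract alludes to, the example below extracts frame-level energy and spectral centroid from two synthetic stems and correlates their energy envelopes; the stems, feature choices, and frame size are assumptions for illustration, not the method proposed in the work.

```python
# A hedged sketch of per-track short-time feature extraction (frame energy and
# spectral centroid) followed by a simple cross-track summary; the synthetic stems
# and feature choices are assumptions for illustration, not the proposed method.
import numpy as np
from scipy.signal import stft

fs = 44100
t = np.arange(0, 2.0, 1 / fs)
# Stand-in "multi-track" session: a bass-like tone and a noise-like percussion bed
tracks = {
    "bass": 0.5 * np.sin(2 * np.pi * 80 * t),
    "perc": 0.1 * np.random.randn(t.size),
}

features = {}
for name, x in tracks.items():
    freqs, frame_times, Z = stft(x, fs=fs, nperseg=2048)
    mag = np.abs(Z)
    energy = np.sqrt((mag ** 2).mean(axis=0))        # frame-level energy envelope
    centroid = (freqs[:, None] * mag).sum(axis=0) / (mag.sum(axis=0) + 1e-12)
    features[name] = np.vstack([energy, centroid])   # 2 x frames per track

# One crude inter-track relationship: correlation of the two energy envelopes
r = np.corrcoef(features["bass"][0], features["perc"][0])[0, 1]
print(f"energy-envelope correlation between stems: {r:.3f}")
```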