Melody generation from lyrics is a challenging research problem at the intersection of artificial intelligence and music, as it requires learning and discovering the latent relationships between lyrics and their accompanying melodies. Unfortunately, the limited availability of paired lyrics–melody datasets with alignment information has hindered research progress. To address this problem, we create a large dataset of 12,197 MIDI songs, each with paired lyrics and melody alignment, by leveraging different music sources and extracting the alignment between syllables and musical attributes. Most importantly, we propose a novel deep generative model, a conditional Long Short-Term Memory (LSTM) Generative Adversarial Network (GAN), for melody generation from lyrics, consisting of a deep LSTM generator and a deep LSTM discriminator, both conditioned on lyrics. In particular, the lyrics-conditioned melody and the alignment between the syllables of the given lyrics and the notes of the predicted melody are generated simultaneously. Extensive experimental results demonstrate the effectiveness of the proposed lyrics-to-melody generative model, which infers plausible and tuneful melody sequences from lyrics.
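As a rough illustration only (not the authors' implementation), the sketch below shows one way a lyrics-conditioned LSTM generator and discriminator pair could be structured in PyTorch; all dimensions, note-attribute choices, and names are assumptions introduced for the example.

```python
# Minimal sketch of a lyrics-conditioned LSTM generator/discriminator pair.
# Dimensions, attribute choices, and names are illustrative assumptions,
# not the architecture from the paper.
import torch
import torch.nn as nn

class LyricsConditionedGenerator(nn.Module):
    def __init__(self, syllable_emb_dim=128, noise_dim=32, hidden_dim=256, note_attrs=3):
        super().__init__()
        # Each step predicts a small set of note attributes
        # (e.g. pitch, duration, rest) for the corresponding syllable.
        self.lstm = nn.LSTM(syllable_emb_dim + noise_dim, hidden_dim,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, note_attrs)

    def forward(self, syllable_embs, noise):
        # syllable_embs: (batch, seq_len, syllable_emb_dim)
        # noise:         (batch, seq_len, noise_dim)
        x = torch.cat([syllable_embs, noise], dim=-1)
        h, _ = self.lstm(x)
        return self.out(h)  # (batch, seq_len, note_attrs)

class LyricsConditionedDiscriminator(nn.Module):
    def __init__(self, syllable_emb_dim=128, note_attrs=3, hidden_dim=256):
        super().__init__()
        # Scores a (lyrics, melody) pair as real or generated.
        self.lstm = nn.LSTM(syllable_emb_dim + note_attrs, hidden_dim,
                            num_layers=2, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, syllable_embs, melody):
        x = torch.cat([syllable_embs, melody], dim=-1)
        h, _ = self.lstm(x)
        return self.score(h[:, -1])  # one real/fake score per sequence
```

In such a setup the generator and discriminator would be trained adversarially on the aligned syllable–note pairs, with the syllable embeddings acting as the conditioning signal for both networks.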
There has been a recent surge in methods targeting the recovery of a speed-of-sound map from pulse-echo ultrasound measurements. In this work, we focus on a particular technique and identify a drawback shared by similar methods, namely the need for a large number of insonifications, which we aim to circumvent. To do so, we first propose a beamforming procedure based on the transformation of radio-frequency data into a receive plane-wave basis. We then present a method for extracting the local aberration phase differences caused by speed-of-sound variations. The speed-of-sound map can ultimately be recovered by inverting a measurement model. Preliminary results show that the proposed method recovers quantitatively similar results, with a slight reduction of the root-mean-square error from 9.35 m/s to 6.95 m/s on simulated data compared with an implementation of the reference method, while requiring 21 plane waves instead of 111.
Index Terms: pulse-echo ultrasound, speed-of-sound recovery, beamforming, inverse problem
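As a schematic illustration of the final inversion step only (not the paper's exact formulation), the sketch below solves a regularized least-squares problem that maps extracted aberration phase differences back to a slowness perturbation and then to a speed-of-sound map; the forward operator L, the Tikhonov regularizer, the background speed, and all names are assumptions introduced for the example.

```python
# Schematic sketch of the inversion step: recover a speed-of-sound map from
# aberration phase-difference measurements via regularized least squares.
# The forward matrix L, regularization, and variable names are illustrative
# assumptions, not the exact measurement model of the paper.
import numpy as np

def recover_speed_of_sound(L, delta_phi, c0=1540.0, reg_weight=1e-2):
    """Solve min_x ||L x - delta_phi||^2 + reg ||x||^2 for a slowness
    perturbation x, then convert it to a speed-of-sound map.

    L         : (n_meas, n_pix) linear measurement model (phase difference
                per unit slowness deviation along each propagation path)
    delta_phi : (n_meas,) extracted aberration phase differences
    c0        : assumed background speed of sound [m/s]
    """
    n_pix = L.shape[1]
    D = np.eye(n_pix)  # simple Tikhonov regularizer (a gradient operator is another option)
    A = L.T @ L + reg_weight * (D.T @ D)
    b = L.T @ delta_phi
    slowness_dev = np.linalg.solve(A, b)    # deviation from background slowness 1/c0
    return 1.0 / (1.0 / c0 + slowness_dev)  # per-pixel speed of sound [m/s]
```

Under these assumptions, reducing the number of plane-wave insonifications shrinks the number of rows of L (fewer phase-difference measurements), which is why regularization is needed to keep the inversion well posed.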