Designing intelligent expert systems capable of answering diverse human queries is a challenging and emerging area of research. A huge amount of data is available on the web, the majority of which takes the form of unstructured documents such as articles, online news, corporate reports, medical records, and social media communications. A user who needs certain information has to sift through all the relevant documents to find the exact answer to their query, which is time consuming and tedious. Moreover, it is often difficult to obtain the exact information from a list of documents quickly unless each document is read in full. This paper presents a rule-based information extraction system for unstructured web data that accesses document contents quickly and provides relevant answers to user queries in a structured format. A number of tests were conducted to determine the overall performance of the proposed model, and the results obtained across all the experiments show the model's effectiveness in quickly providing the required answers to different user queries.
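To make the idea of rule-based extraction concrete, the following is a minimal sketch in Python, assuming regex patterns stand in for the extraction rules and plain-text strings stand in for the documents; the actual rule set, document model, and query-matching logic of the described system are not specified here, and the rule names and example documents are hypothetical.

```python
# Minimal sketch of rule-based information extraction: each "rule" is a regex
# whose named groups form the structured answer returned for a query intent.
import re

# Hypothetical rules mapping a query intent to an extraction pattern.
RULES = {
    "founding_year": re.compile(r"(?P<org>[A-Z][\w&. ]+?) was founded in (?P<year>\d{4})"),
    "headquarters": re.compile(r"(?P<org>[A-Z][\w&. ]+?) is headquartered in (?P<city>[A-Z][\w ]+)"),
}

def extract(query_intent: str, documents: list[str]) -> list[dict]:
    """Scan every document with the rule for the given intent and return
    structured answers instead of whole documents."""
    rule = RULES[query_intent]
    answers = []
    for doc in documents:
        for match in rule.finditer(doc):
            answers.append(match.groupdict())
    return answers

if __name__ == "__main__":
    docs = [
        "Acme Corp was founded in 1999 and makes anvils.",
        "Globex Inc is headquartered in Springfield.",
    ]
    print(extract("founding_year", docs))  # [{'org': 'Acme Corp', 'year': '1999'}]
    print(extract("headquarters", docs))   # [{'org': 'Globex Inc', 'city': 'Springfield'}]
```

The point of the sketch is only to show how rules can turn free text into query-specific structured records, so the user receives an answer rather than a list of documents to read.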
Generating music is an interesting and challenging problem in the field of machine learning. Mimicking human creativity has become popular in recent years, especially in the fields of computer vision and image processing. With the advent of GANs, it is possible to generate new images similar to the training data, but the same approach does not carry over directly to music, which has an extra temporal dimension. It is therefore necessary to understand how music is represented in digital form. Models that perform this generative task typically learn and generate in a high-level representation such as MIDI (Musical Instrument Digital Interface) or scores. This paper proposes a bi-directional LSTM (Long Short-Term Memory) model with an attention mechanism capable of generating music of a similar type from MIDI data. The music generated by the model follows the theme and style of the music the model is trained on. Moreover, owing to the nature of MIDI, the tempo, instrument, and other parameters can be defined and changed after generation.
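As a rough illustration of the named architecture, here is a minimal Keras sketch of a bi-directional LSTM with attention for next-note prediction, assuming the notes have already been extracted from MIDI files and encoded as integer tokens; the layer sizes, vocabulary, and the exact attention variant used in the paper are assumptions made for illustration only.

```python
# Sketch: bi-directional LSTM + additive attention pooling for predicting
# the next note token from a fixed-length context of previous notes.
import tensorflow as tf
from tensorflow.keras import layers, models

SEQ_LEN = 64      # length of the input note sequence (assumed)
VOCAB_SIZE = 128  # e.g. MIDI pitch numbers 0-127 (assumed)

inputs = layers.Input(shape=(SEQ_LEN,))
x = layers.Embedding(VOCAB_SIZE, 96)(inputs)                        # token -> dense vector
h = layers.Bidirectional(layers.LSTM(256, return_sequences=True))(x)

# Simple additive attention: score each time step, softmax over time,
# and pool the LSTM outputs into a single context vector.
score = layers.Dense(1, activation="tanh")(h)                       # (batch, time, 1)
weights = layers.Softmax(axis=1)(score)                             # attention weights
context = layers.Lambda(lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([h, weights])

outputs = layers.Dense(VOCAB_SIZE, activation="softmax")(context)   # next-note distribution
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

At generation time such a model would typically be used autoregressively: sample a note from the predicted distribution, append it to the context window, and repeat, writing the resulting token sequence back out as a MIDI file whose tempo and instrument can then be set independently.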