This project implements a music generator based on LSTM networks, built with Keras and the Music21 library.
The Music Generator uses a Long Short-Term Memory (LSTM) network to generate new musical pieces based on an existing corpus of music. The model is trained on a dataset of MIDI files, which are then converted to music21 streams for processing.
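The sketch below shows one way this pipeline can look: MIDI files are parsed into note and chord tokens with music21, sliced into fixed-length sequences, and used to train a small stacked LSTM in Keras. The directory name, sequence length, and layer sizes are illustrative assumptions, not values taken from this project.

```python
import glob
import numpy as np
from music21 import converter, instrument, note, chord
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense, Activation
from keras.utils import to_categorical

SEQUENCE_LENGTH = 100  # length of the input window fed to the LSTM (illustrative)

def load_notes(midi_dir="midi_songs"):
    """Parse every MIDI file in midi_dir into a flat list of note/chord tokens."""
    notes = []
    for path in glob.glob(f"{midi_dir}/*.mid"):
        midi = converter.parse(path)
        parts = instrument.partitionByInstrument(midi)
        elements = parts.parts[0].recurse() if parts else midi.flat.notes
        for element in elements:
            if isinstance(element, note.Note):
                notes.append(str(element.pitch))  # single note, e.g. "C4"
            elif isinstance(element, chord.Chord):
                # encode a chord as its pitch classes joined by dots, e.g. "0.4.7"
                notes.append(".".join(str(n) for n in element.normalOrder))
    return notes

def build_sequences(notes):
    """Slice the token list into fixed-length input windows and one-hot targets."""
    pitch_names = sorted(set(notes))
    note_to_int = {p: i for i, p in enumerate(pitch_names)}
    inputs, targets = [], []
    for i in range(len(notes) - SEQUENCE_LENGTH):
        window = notes[i:i + SEQUENCE_LENGTH]
        inputs.append([note_to_int[t] for t in window])
        targets.append(note_to_int[notes[i + SEQUENCE_LENGTH]])
    n_vocab = len(pitch_names)
    x = np.reshape(inputs, (len(inputs), SEQUENCE_LENGTH, 1)) / float(n_vocab)
    y = to_categorical(targets, num_classes=n_vocab)
    return x, y, n_vocab

def build_model(n_vocab):
    """Stacked LSTM that predicts the next note token from a window of previous ones."""
    model = Sequential()
    model.add(LSTM(256, input_shape=(SEQUENCE_LENGTH, 1), return_sequences=True))
    model.add(Dropout(0.3))
    model.add(LSTM(256))
    model.add(Dense(n_vocab))
    model.add(Activation("softmax"))
    model.compile(loss="categorical_crossentropy", optimizer="rmsprop")
    return model

if __name__ == "__main__":
    notes = load_notes()
    x, y, n_vocab = build_sequences(notes)
    model = build_model(n_vocab)
    model.fit(x, y, epochs=50, batch_size=64)
```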
The generated music is saved as a MIDI file using music21 and can then be converted to an audio format. A demo of the generated music is included as music.mp3.
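As an illustration, the sketch below turns predicted tokens back into a music21 stream and writes a MIDI file. It assumes the model's output has already been mapped back to the same note/chord token strings used during training; the output path and fixed note spacing are assumptions, not values from this project.

```python
from music21 import stream, note, chord, instrument

def tokens_to_midi(tokens, out_path="generated.mid"):
    """Convert predicted note/chord tokens into a music21 stream and write a MIDI file."""
    offset = 0.0
    elements = []
    for token in tokens:
        if "." in token or token.isdigit():
            # chord token such as "0.4.7": rebuild it from its pitch classes
            pitches = [note.Note(int(p)) for p in token.split(".")]
            element = chord.Chord(pitches)
        else:
            element = note.Note(token)  # plain note token such as "C4"
        element.offset = offset
        element.storedInstrument = instrument.Piano()
        elements.append(element)
        offset += 0.5  # fixed spacing between events; timing is not modeled here
    midi_stream = stream.Stream(elements)
    midi_stream.write("midi", fp=out_path)
```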
To run this project, you will need the following software (an example install command is shown after this list):
- Python 3.6 or later
- Keras
- TensorFlow
- Music21
- NumPy
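If these packages are not already installed, one way to get them is via pip; this project does not pin exact versions, so the command below is only a starting point.

```bash
pip install tensorflow keras music21 numpy
```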
Note: you may need to convert the generated MIDI file to MP3 if your player does not support MIDI playback. Free online converters are available.