Music emotion classification using a hybrid CNN-LSTM model
DOI: https://doi.org/10.15276/aait.06.2023.28

Keywords: deep learning, music emotion classification, neural network, spectrum analysis, convolutional neural network

Abstract
The emotional content of music, interwoven with the intricacies of human affect, poses a unique challenge for computational recognition and classification. As digital music libraries expand exponentially, there is a pressing need for precise, automated tools capable of navigating and categorizing vast musical repositories by emotional context. This study advances music emotion classification in the field of music information retrieval by developing a deep learning model that accurately predicts emotional categories in music. Its contribution is a refined approach built on a hybrid architecture designed to capture the intricate emotional nuances in music: convolutional neural networks provide robust feature detection, long short-term memory networks provide effective sequence learning that addresses the temporal dynamics of musical features, and both are combined with careful preprocessing of the Emotify dataset for a deeper and more accurate analysis of musical emotions. The Emotify dataset, comprising tracks annotated with nine emotion categories, was expanded by segmenting each track into 20 parts, thereby enriching the variety of emotional expressions. The synthetic minority oversampling technique (SMOTE) was applied to counter class imbalance and ensure equitable representation of the emotions, and the spectral characteristics of the samples were analyzed with the fast Fourier transform for a more comprehensive understanding of the data. Through careful fine-tuning, including dropout to prevent overfitting and learning-rate adjustments, the developed model achieved a notable accuracy of 94.7 %. This level of precision underscores the model's potential for application in digital music services, recommendation systems, and music therapy. Future enhancements include expanding the dataset and refining the model architecture for even more nuanced emotional analysis.
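The abstract does not specify implementation details, so the two sketches below are illustrative only. The first shows one plausible preprocessing pipeline matching the steps described above (splitting each track into 20 segments, FFT-based spectral features, SMOTE rebalancing); the sample rate, FFT size, and the feature-matrix loader are assumptions, not taken from the paper, and librosa/imbalanced-learn are assumed tool choices.

```python
# Preprocessing sketch (assumptions, not the paper's exact pipeline):
# split each track into 20 equal segments, compute an FFT-based magnitude
# spectrum per segment, then rebalance emotion classes with SMOTE.
import numpy as np
import librosa
from imblearn.over_sampling import SMOTE

def track_to_segments(path, n_segments=20):
    # Hypothetical sample rate; the paper does not state one.
    y, _sr = librosa.load(path, sr=22050, mono=True)
    seg_len = len(y) // n_segments
    return [y[i * seg_len:(i + 1) * seg_len] for i in range(n_segments)]

def segment_to_spectrum(segment, n_fft=2048):
    # Magnitude spectrum via the fast Fourier transform, averaged over frames
    # to yield one feature vector per segment.
    spec = np.abs(librosa.stft(segment, n_fft=n_fft))
    return spec.mean(axis=1)

# X: feature vectors for all segments, y: integer emotion labels (0..8)
# X, y = build_feature_matrix(...)   # hypothetical loader over the Emotify tracks
# X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
```

The second sketch outlines how a hybrid CNN-LSTM classifier for nine emotion categories with dropout might be assembled, assuming a Keras/TensorFlow stack and spectrogram-like inputs. The input shape, layer widths, dropout rate, and learning rate are placeholder values chosen for illustration, not the authors' reported configuration.

```python
# Minimal sketch of a hybrid CNN-LSTM emotion classifier.
# Assumptions (not confirmed by the paper): Keras/TensorFlow backend,
# spectrogram-like inputs of shape (time, frequency, 1), 9 output classes.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 9              # nine emotion categories in the Emotify annotations
INPUT_SHAPE = (128, 128, 1)  # hypothetical spectrogram size

def build_cnn_lstm(input_shape=INPUT_SHAPE, num_classes=NUM_CLASSES):
    inputs = layers.Input(shape=input_shape)
    # CNN block: local spectral feature detection
    x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), activation="relu", padding="same")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    # Collapse the frequency axis so the time axis becomes a sequence for the LSTM
    x = layers.Reshape((x.shape[1], x.shape[2] * x.shape[3]))(x)
    # LSTM block: temporal dynamics of the extracted features
    x = layers.LSTM(128)(x)
    x = layers.Dropout(0.5)(x)   # dropout to limit overfitting
    outputs = layers.Dense(num_classes, activation="softmax")(x)
    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn_lstm()
model.summary()
```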