Browsing by author "Malheiro, Ricardo"
Showing 1 - 14 of 14
- Item: Bi-modal music emotion recognition: Novel lyrical features and dataset (9th International Workshop on Music and Machine Learning – MML’2016 – in conjunction with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases – ECML/PKDD 2016, October 2016). Malheiro, Ricardo; Panda, Renato; Gomes, Paulo; Paiva, Rui.
  This research addresses the role of audio and lyrics in music emotion recognition. Each dimension (e.g., audio) was studied separately, as well as in a bimodal analysis context. We perform classification by quadrant categories (4 classes). Our approach is based on several state-of-the-art audio and lyric features, as well as novel lyric features. To evaluate the approach, we created a ground-truth dataset. The main conclusion is that, unlike in most similar works, lyrics performed better than audio, which suggests the importance of the newly proposed lyric features; in addition, bimodal analysis was consistently better than either dimension alone.
- Item: Classification and regression of music lyrics: Emotionally-significant features (8th International Conference on Knowledge Discovery and Information Retrieval, 2016-01). Malheiro, Ricardo; Panda, Renato; Gomes, Paulo; Paiva, Rui Pedro.
  This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state-of-the-art features complemented by novel stylistic, structural and semantic features. To evaluate it, we created a ground-truth dataset containing 180 song lyrics, annotated according to Russell's emotion model. We conduct four types of experiments: regression, and classification by quadrant, arousal and valence categories. Compared to the state-of-the-art baseline features (n-grams), adding the other features, including the novel ones, improved the F-measure from 68.2%, 79.6% and 84.2% to 77.1%, 86.3% and 89.2%, respectively, for the three classification experiments. To study the relation between features and emotions (quadrants), we performed experiments to identify the features that best describe and discriminate between arousal hemispheres and valence meridians. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, achieving a 73.6% F-measure in the classification by quadrants. Regarding regression, the results show that, compared to similar studies for audio, we achieve similar performance for arousal and much better performance for valence.
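To make the setup above concrete, here is a minimal sketch of the kind of n-gram baseline this abstract compares against, assuming a scikit-learn pipeline; the toy lyrics, quadrant labels and classifier choice are illustrative assumptions, not the 180-song dataset or the exact models used in the paper.

```python
# Minimal sketch of an n-gram baseline for quadrant classification of lyrics.
# The toy lyrics and labels below are placeholders, not the paper's dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

lyrics = [
    "i feel so alive we dance all night",        # Q1: positive valence, high arousal
    "rage and fire burn through these streets",  # Q2: negative valence, high arousal
    "tears fall alone in the empty room",        # Q3: negative valence, low arousal
    "quiet morning gentle light and peace",      # Q4: positive valence, low arousal
]
quadrants = ["Q1", "Q2", "Q3", "Q4"]

baseline = Pipeline([
    ("ngrams", TfidfVectorizer(ngram_range=(1, 3))),  # unigram-to-trigram features
    ("svm", LinearSVC()),                             # linear classifier over n-grams
])
baseline.fit(lyrics, quadrants)
print(baseline.predict(["alone with my tears in this empty room"]))
```

The novel stylistic, structural and semantic features described in the abstract would be added alongside these n-gram features; they are not reproduced here.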
- Item: Classification of Recorded Classical Music: a methodology and a comparative study (University of Stirling, 2004-09). Malheiro, Ricardo; Paiva, R. P.; Mendes, A. J.; Mendes, T.; Cardoso, A.
  As a result of recent technological innovations, there has been a tremendous growth in the Electronic Music Distribution industry. Consequently, tasks such as automatic music genre classification address new and exciting research challenges. Automatic music genre recognition involves issues like feature extraction and development of classifiers using the obtained features. We use the number of zero crossings, loudness, spectral centroid, bandwidth and uniformity for feature extraction. These features are statistically manipulated, making a total of 40 features. Regarding the task of genre modeling, we follow three approaches: the K-Nearest Neighbors (KNN) classifier, Gaussian Mixture Models (GMM) and feedforward neural networks (FFNN). A taxonomy of sub-genres of classical music is used. We consider three classification problems: in the first one, we aim at discriminating between music for flute, piano and violin; in the second problem, we distinguish choral music from opera; finally, in the third one, we seek to discriminate between all five genres. The best results were obtained using FFNNs: 85% classification accuracy in the three-class problem, 90% in the two-class problem and 76% in the five-class problem. These results are encouraging and show that the presented methodology may be a good starting point for addressing more challenging tasks.
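As a rough illustration of the feature-extraction-plus-classifier pipeline described in these classical-music items, the sketch below uses librosa and scikit-learn; the synthetic sine waves, the use of RMS energy as a loudness proxy, spectral flatness as a stand-in for "uniformity", and an MLP in place of the paper's FFNN are all assumptions, and the full 40-feature statistical manipulation is not reproduced.

```python
# Sketch of frame-level feature extraction plus a feedforward classifier.
# Assumptions: synthetic signals instead of real recordings, RMS energy as a
# loudness proxy, spectral flatness as a "uniformity" proxy, mean/std only.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def describe(y, sr):
    """Summarize one excerpt as mean/std of a few frame-level features."""
    frames = [
        librosa.feature.zero_crossing_rate(y),
        librosa.feature.rms(y=y),                       # loudness proxy
        librosa.feature.spectral_centroid(y=y, sr=sr),
        librosa.feature.spectral_bandwidth(y=y, sr=sr),
        librosa.feature.spectral_flatness(y=y),         # "uniformity" proxy
    ]
    return np.array([(f.mean(), f.std()) for f in frames]).ravel()

sr = 22050
t = np.arange(sr * 2) / sr                              # two-second excerpts
excerpts = [np.sin(2 * np.pi * f0 * t) for f0 in (440.0, 880.0, 1320.0)]
labels = ["flute", "piano", "violin"]                   # toy stand-in classes

X = np.array([describe(y, sr) for y in excerpts])
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)
clf.fit(X, labels)
print(clf.predict(X))                                   # sanity check on training data
```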
- Item: Classification of Recorded Classical Music: a methodology and a comparative study (BICS, 2004-08). Malheiro, Ricardo; Paiva, R.; Mendes, A.; Mendes, T.; Cardoso, A.
  As a result of recent technological innovations, there has been a tremendous growth in the Electronic Music Distribution industry. In this way, tasks such as automatic music genre classification appear as new and exciting research challenges. Automatic music genre recognition involves issues like feature extraction and development of classifiers using the obtained features. As for feature extraction, we use the number of zero crossings, loudness, spectral centroid, bandwidth and uniformity. These features are statistically manipulated, making a total of 40 features. Regarding the task of genre modeling, we train a feedforward neural network (FFNN) with the Levenberg-Marquardt algorithm. A taxonomy of subgenres of classical music is used. We consider three classification problems: in the first one, we aim to discriminate between music for flute, piano and violin; in the second problem, we distinguish choral music from opera; finally, in the third one, we aim to discriminate between all the above-mentioned five genres together. We obtained 85% classification accuracy in the three-class problem, 90% in the two-class problem and 76% in the five-class problem. These results are encouraging and show that the presented methodology may be a good starting point for addressing more challenging tasks.
- Item: Emotion-based analysis and classification based on music lyrics (Universidade de Coimbra, 2016-08). Malheiro, Ricardo.
  Music emotion recognition (MER) is gaining significant attention in the Music Information Retrieval (MIR) scientific community. In fact, searching for music through emotions is one of the main criteria used by users. Real-world music databases from sites like AllMusic or Last.fm grow larger and larger on a daily basis, which requires a tremendous amount of manual work to keep them updated. Unfortunately, manually annotating music with emotion tags is normally a subjective, expensive and time-consuming task. This should be overcome with the use of automatic systems. Besides automatic music classification, MER has several applications related to emotion-based retrieval tools such as music recommendation or automatic playlist generation. MER is also used in areas such as game development, cinema, advertising and health. Most early-stage automatic MER systems were based on audio content analysis. Later on, researchers started combining audio and lyrics, leading to bimodal MER systems with improved accuracy. This research addresses the role of lyrics in the music emotion recognition process. Feature extraction is one of the key stages of Lyrics Music Emotion Recognition (LMER). We follow a learning-based approach using several state-of-the-art features complemented by novel stylistic, structural and semantic features. To evaluate our approach, we created a ground-truth dataset containing 180 song lyrics, annotated according to Russell's emotion model. We conduct four types of experiments: regression, and classification by quadrant, arousal and valence categories. To validate these systems, we created a validation dataset composed of 771 song lyrics. To study the relation between features and emotions (quadrants), we performed experiments to identify the features that best describe and discriminate each quadrant. We also conducted experiments to identify interpretable rules that show the relation between features and emotions and the relations among features. This research also addresses the role of the lyrics in the context of music emotion variation detection. To accomplish this task, we created a system to detect the predominant emotion expressed by each sentence (verse) of the lyrics. The system employs Russell's emotion model with four sets of emotions (quadrants). To detect the predominant emotion in each verse, we proposed a novel keyword-based approach, which receives a sentence (verse) and classifies it in the appropriate quadrant. To tune the system parameters, we created a 129-sentence training dataset from 68 songs. To validate the system, we created a separate ground truth containing 239 sentences (verses) from 44 songs. Finally, we measured the efficiency of the lyric features in a context of bimodal (audio and lyrics) analysis, using almost all the state-of-the-art features we are aware of for both dimensions, as well as the new lyric features proposed by us.
- Item: Emotionally-relevant features for classification and regression of music lyrics (IEEE Transactions on Affective Computing, 2016-08-08). Malheiro, Ricardo; Panda, Renato; Gomes, Paulo; Paiva, Rui Pedro.
  This research addresses the role of lyrics in the music emotion recognition process. Our approach is based on several state-of-the-art features complemented by novel stylistic, structural and semantic features. To evaluate it, we created a ground-truth dataset containing 180 song lyrics, annotated according to Russell's emotion model. We conduct four types of experiments: regression, and classification by quadrant, arousal and valence categories. Compared to the state-of-the-art baseline features (n-grams), adding the other features, including the novel ones, improved the F-measure from 69.9%, 82.7% and 85.6% to 80.1%, 88.3% and 90%, respectively, for the three classification experiments. To study the relation between features and emotions (quadrants), we performed experiments to identify the features that best describe and discriminate each quadrant. To further validate these experiments, we built a validation set comprising 771 lyrics extracted from the AllMusic platform, achieving a 73.6% F-measure in the classification by quadrants. We also conducted experiments to identify interpretable rules that show the relation between features and emotions and the relations among features. Regarding regression, the results show that, compared to similar studies for audio, we achieve similar performance for arousal and much better performance for valence.
- Item: Keyword-Based Approach for Lyrics Emotion Variation Detection (8th International Conference on Knowledge Discovery and Information Retrieval, 2016-01). Malheiro, Ricardo; Oliveira, Hugo Gonçalo; Gomes, Paulo; Paiva, Rui Pedro.
  This research addresses the role of the lyrics in the context of music emotion variation detection. To accomplish this task, we create a system to detect the predominant emotion expressed by each sentence (verse) of the lyrics. The system employs Russell's emotion model and contains four sets of emotions, one associated with each quadrant. To detect the predominant emotion in each verse, we propose a novel keyword-based approach, which receives a sentence (verse) and classifies it in the appropriate quadrant. To tune the system parameters, we created a 129-sentence training dataset from 68 songs. To validate the system, we created a separate ground truth containing 239 sentences (verses) from 44 songs, annotated manually with an average of 7 annotations per sentence. The system attains an F-measure of 67.4%.
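A toy illustration of the keyword-based idea described above might look like the following; the word lists, scoring and tie-breaking are invented for illustration and are not the dictionaries or tuned parameters used in the paper.

```python
# Toy keyword-based quadrant classifier for a single verse.
# The keyword sets and the max-hits rule below are illustrative assumptions.
QUADRANT_KEYWORDS = {
    "Q1 (valence+, arousal+)": {"joy", "dance", "alive", "party", "shine"},
    "Q2 (valence-, arousal+)": {"rage", "fight", "scream", "hate", "fire"},
    "Q3 (valence-, arousal-)": {"tears", "lonely", "grave", "cold", "sorrow"},
    "Q4 (valence+, arousal-)": {"calm", "gentle", "home", "peace", "sleep"},
}

def classify_verse(verse: str) -> str:
    """Return the quadrant whose keyword set has the most hits in the verse."""
    tokens = set(verse.lower().split())
    scores = {q: len(tokens & kw) for q, kw in QUADRANT_KEYWORDS.items()}
    return max(scores, key=scores.get)

print(classify_verse("I scream my rage into the fire tonight"))
```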
- Item: Multi-Modal Emotion Music Recognition (MER): a new dataset, methodology and comparative analysis (IADIS, 2011-03). Reis, Francisco; Malheiro, Ricardo.
  There are currently two major trends where higher education learning is concerned. The first is to respond to an increasingly mobile student population, including those who are already working and need to improve their skills. The second is to model new pedagogical theories and practices that can contribute to better learning outcomes. Both try to adapt to a more distributed vision of knowledge and to take advantage of how collaborative tools are shaping our work and lives in general. Instead of developing new theories, the Umniversity platform was developed to evaluate existing ones. The tools developed were motivated by current thinking about the future of learning, and hopefully they will contribute to enabling innovative practices that help shape future thinking. Four integration vectors are presented to accomplish this.
- Item: Music Emotion Recognition from Lyrics: a comparative study (6th International Workshop on Machine Learning and Music (MML13), held in conjunction with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML/PKDD13), 2013-09). Malheiro, Ricardo; Panda, Renato; Gomes, Paulo; Paiva, Rui Pedro.
  We present a study on music emotion recognition from lyrics. We start from a dataset of 764 samples (audio + lyrics) and perform feature extraction using several natural language processing techniques. Our goal is to build classifiers for the different datasets, comparing different algorithms and using feature selection. The best results (44.2% F-measure) were attained with SVMs. We also perform a bi-modal analysis that combines the best feature sets of audio and lyrics. The combination of the best audio and lyrics features achieved better results than the best feature set from audio only (63.9% F-measure against 62.4% F-measure).
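The bimodal combination mentioned above can be pictured as simple feature concatenation before training an SVM; the sketch below assumes scikit-learn and uses random placeholder matrices instead of the 764-sample dataset, so its printed scores carry no meaning and only the mechanics are shown.

```python
# Sketch of a bimodal combination: concatenate audio and lyric feature vectors
# and train an SVM. Random placeholders stand in for the real feature matrices.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_songs = 40
audio_feats = rng.normal(size=(n_songs, 12))        # placeholder audio features
lyric_feats = rng.normal(size=(n_songs, 8))         # placeholder lyric features
quadrant = np.repeat(["Q1", "Q2", "Q3", "Q4"], 10)  # placeholder quadrant labels

bimodal = np.hstack([audio_feats, lyric_feats])     # audio + lyrics concatenation
clf = SVC(kernel="rbf")
print("lyrics only :", cross_val_score(clf, lyric_feats, quadrant, cv=5, scoring="f1_macro").mean())
print("audio+lyrics:", cross_val_score(clf, bimodal, quadrant, cv=5, scoring="f1_macro").mean())
```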
- Item: A Prototype for Classification of Classical Music Using Neural Networks (Proceedings of the Eighth IASTED International Conference, 2004-09). Malheiro, Ricardo; Paiva, Rui Pedro; Mendes, A. J.; Mendes, T.; Cardoso, A.
  As a result of recent technological innovations, there has been a tremendous growth in the Electronic Music Distribution industry. In this way, tasks such as automatic music genre classification address new and exciting research challenges. Automatic music genre recognition involves issues like feature extraction and development of classifiers using the obtained features. As for feature extraction, we use features such as the number of zero crossings, loudness, spectral centroid, bandwidth and uniformity. These are statistically manipulated, making a total of 40 features. As for the task of genre modeling, we train a feedforward neural network (FFNN). A taxonomy of subgenres of classical music is used. We consider three classification problems: in the first one, we aim at discriminating between music for flute, piano and violin; in the second problem, we distinguish choral music from opera; finally, in the third one, we aim at discriminating between all five genres. Preliminary results are presented and discussed, which show that the presented methodology may be a good starting point for addressing more challenging tasks, such as using a broader range of musical categories.
- Item: Sistemas de Classificação Automática em Géneros Musicais (Engenharia Informática, Universidade de Coimbra, 2004-04). Malheiro, Ricardo.
  As a result of the widespread adoption of the computer, the general increase in available bandwidth and the universal reach of the Internet, the electronic music distribution industry has grown enormously in recent years. This growth is also related to the ease with which, at the speed of a click, large music databases can be accessed. These databases must be kept up to date with all the music produced daily and must be organized according to the defined taxonomies in order to respond as well as possible to users' searches. Cataloguing musical pieces according to the taxonomies in use is increasingly difficult to do manually, due to the time and efficiency constraints of those who do it. Hence the need to use the computer to create automatic classification systems. This type of system involves tasks such as extracting features from each song and developing classifiers that use the extracted features. For feature extraction, this work uses the zero-crossing rate (ZCR), loudness, spectral centroid, bandwidth and uniformity. These features are statistically manipulated, making a total of 40 features per song. Three classifiers are then used: KNN, GMM and MLP. Classification consisted of three problems, all related to classical music. The first aimed to discriminate between music for flute, piano and violin. The second aimed to distinguish choral music from opera. Finally, the third classified pieces into one of the five previous musical genres.
- Item: Sistemas de classificação musical com redes neuronais (2004). Malheiro, Ricardo; Paiva, Rui Pedro; Mendes, António José; Mendes, Teresa; Cardoso, Amílcar.
  As a result of technological evolution and innovation, the electronic music distribution industry has grown enormously. Accordingly, tasks such as automatic music genre classification become a strong motive for increased research in the area. Automatic music genre recognition involves tasks such as extracting features from songs and developing classifiers that use those features. In this study, the aim was to classify pieces of classical music through three independent classification problems. A prototype for a real classification system was built, in which ten segments of six seconds each were automatically extracted from a set of uncatalogued songs. Each musical segment was classified individually using neural networks, with 40 features extracted per segment for that purpose. Each song was
- Item: Umniversity Virtual World Platform for Massive Open Online Courses University Platform (VII International Conference on ICT in Education, 2011-05). Reis, Francisco; Malheiro, Ricardo.
  As lifelong learning takes off and, simultaneously, financing for education is reduced, new educational practices are being explored, namely through the use of information and communication technologies. Massive Open Online Courses (MOOC) are a new way to achieve two main goals: 1) reach as many potential students as possible; 2) use few resources, namely where teachers/facilitators are concerned. To achieve this, a MOOC must rely on appropriate technology and on a coherent pedagogical framework. The Umniversity virtual world platform aims to better support the particular needs of MOOC. Prepared to manage hundreds or even thousands of students in each course, with asynchronous as well as synchronous tools, it is in the dynamics of the course and in the motivation/evaluation of such large numbers of participants that Umniversity makes a difference, seamlessly integrating learning analytics for student self-improvement and relying on a connectivist pedagogical approach.
- Item: Using Information Retrieval Techniques for Keyword and Evaluation Extraction (IADIS International Conference for Information Systems, 2008-04). Burrows, Christopher; Malheiro, Ricardo.
  With the growth of online businesses, it is necessary for consumers to have easy access to the desired product. This access is usually achieved through search features that associate lists of keywords with the available products, or by browsing through the different categories. Using Information Retrieval techniques such as indexing and searching, this paper shows how to create wordlists from the collections of documents sold by an online publisher and how to compare the lists of associated keywords with the indexes so as to evaluate their completeness; if new keywords are obtained, they are proposed for addition to the existing lists. This is particularly useful for consumers, whose access to the documents is simplified, and for the business itself, which gains in customer satisfaction.
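As a rough sketch of this idea, one can index the document collection, rank terms by TF-IDF, and flag highly ranked terms that are missing from the existing keyword lists as candidate additions; the documents, keyword list and ranking rule below are illustrative assumptions, not the publisher's data or the paper's exact method.

```python
# Sketch: index documents, rank terms by TF-IDF, and propose missing keywords.
# The documents and existing keyword list are made-up placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "financial report on quarterly revenue and market growth",
    "market analysis of consumer behaviour and pricing strategy",
    "annual revenue forecast and risk analysis for investors",
]
existing_keywords = {"revenue", "market"}

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(documents)
terms = vectorizer.get_feature_names_out()

# Score each term by its highest TF-IDF weight across the collection.
scores = tfidf.toarray().max(axis=0)
ranked = sorted(zip(terms, scores), key=lambda pair: -pair[1])

# Top-ranked terms not yet in the keyword list become candidate additions.
proposed = [term for term, _ in ranked[:5] if term not in existing_keywords]
print("proposed additions:", proposed)
```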