Department of Computational Perception
Johannes Kepler Universität Linz




Social Media Mining for Multimodal Music Retrieval

Project Title: Social Media Mining for Multimodal Music Retrieval

Sponsor: Austrian Science Fund (Fonds zur Förderung der wissenschaftlichen Forschung, FWF)

Project Number: P25655

Duration: 48 months (July 2013 – June 2017)

Persons involved:

Markus Schedl (Project leader)
Bruce Ferwerda (PostDoc researcher)
Andreu Vall (PhD student)

Marcin Skowron (former PostDoc researcher)
Katayoun Farrahi (former PostDoc researcher)

Abstract

Online social media have seen an incredible boom in recent years. Yet even though platforms such as Twitter, Facebook, or YouTube are heavily researched by the Data Mining and Information Retrieval communities, mining, analyzing, and exploiting the user-generated data they offer is still underrepresented in the context of Music Information Retrieval (MIR).

Hence, in this project we propose to comprehensively investigate the use of a large number of different user-generated data sources for several innovative tasks in MIR research. To this end, we have identified three goals that will be thoroughly addressed:

    1. Inferring Geospatial Listening Patterns from User-generated Data

    2. Multimodal Modeling for Serendipitous Music Access 

    3. Building Intelligent, Adaptive User Interfaces to Music Collections 

In the first challenge, we will investigate whether social media can be used as a reliable source for deriving music listening patterns on a worldwide scale. Upon identification of such patterns, we will be able to study geospatial aspects of music consumption and, in turn, look into methods to predict the popularity and trendiness of music items by fusing data from different sources. We will further research methods to detect novelty (e.g., new releases, up-and-coming artists).
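
As a toy illustration of this first goal, the Python sketch below aggregates hypothetical (country, artist) listening events, as might be mined from geotagged #nowplaying posts, into per-country listening distributions. The input format, the example data, and the normalization are illustrative assumptions, not the project's actual pipeline.

    # Hypothetical sketch: per-country listening patterns from mined events.
    from collections import Counter, defaultdict

    # Each event: (country_code, artist), e.g., extracted from geotagged
    # social media posts (the extraction itself is out of scope here).
    events = [
        ("AT", "Falco"), ("AT", "Falco"), ("AT", "Parov Stelar"),
        ("BR", "Caetano Veloso"), ("BR", "Falco"),
    ]

    def listening_patterns(events):
        """Relative listening frequency of each artist, per country."""
        counts = defaultdict(Counter)
        for country, artist in events:
            counts[country][artist] += 1
        return {
            country: {artist: n / sum(ctr.values()) for artist, n in ctr.items()}
            for country, ctr in counts.items()
        }

    print(listening_patterns(events))
    # e.g., {'AT': {'Falco': 0.67, 'Parov Stelar': 0.33}, 'BR': {...}}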

In the second challenge, we will develop multimodal approaches to model both the user and the music items. User models will be built from a listener's digital traces in social media, using machine learning techniques dedicated to user-generated data. Multimodal representations of music items (such as video clips, song lyrics, cover artwork, and audio files) will be generated by developing novel feature extractors and fusion techniques.
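
As one possible reading of such fusion, the sketch below combines per-modality feature vectors (audio, lyrics, artwork) into a single item descriptor via weighted late fusion. The dimensionalities, weights, and the fusion scheme itself are assumptions for illustration; the project does not prescribe this particular method.

    # Hypothetical sketch: weighted late fusion of multimodal item descriptors.
    import numpy as np

    def l2_normalize(v, eps=1e-12):
        """Scale a feature vector to unit length so modalities are comparable."""
        return v / (np.linalg.norm(v) + eps)

    def fuse(modalities, weights):
        """Concatenate weighted, L2-normalized per-modality feature vectors."""
        return np.concatenate(
            [w * l2_normalize(v) for v, w in zip(modalities, weights)]
        )

    audio   = np.random.rand(8)   # e.g., timbre/rhythm descriptors (assumed)
    lyrics  = np.random.rand(5)   # e.g., topic weights from the lyrics (assumed)
    artwork = np.random.rand(4)   # e.g., a color histogram of the cover (assumed)

    item_vec = fuse([audio, lyrics, artwork], weights=[0.5, 0.3, 0.2])
    print(item_vec.shape)  # (17,)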

Integrating multimodal descriptors of users and music items, we will then, in the third challenge, research ways to create serendipitous music access models and systems. Incorporating these different modalities will enable us to develop personalized retrieval schemes, paying particular attention to the aspects of similarity, diversification, novelty, and familiarity/popularity. Based on these retrieval models, we will eventually create prototypical intelligent user interfaces for exploring music collections.
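
One standard technique for balancing similarity against diversification in such a retrieval scheme is greedy maximal marginal relevance (MMR) re-ranking; the sketch below applies it to fused user and item vectors (dimensionality follows the fusion sketch above). This is an illustrative stand-in, not the project's actual access model.

    # Hypothetical sketch: MMR re-ranking to balance relevance and diversity.
    import numpy as np

    def cosine(a, b):
        """Cosine similarity between two vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    def mmr_rerank(user_vec, item_vecs, k=3, lam=0.7):
        """Greedily pick k items, trading user relevance against redundancy."""
        remaining = list(range(len(item_vecs)))
        selected = []
        while remaining and len(selected) < k:
            def score(i):
                relevance = cosine(user_vec, item_vecs[i])
                redundancy = max(
                    (cosine(item_vecs[i], item_vecs[j]) for j in selected),
                    default=0.0,
                )
                return lam * relevance - (1 - lam) * redundancy
            best = max(remaining, key=score)
            selected.append(best)
            remaining.remove(best)
        return selected

    rng = np.random.default_rng(0)
    user = rng.random(17)                  # fused user profile (illustrative)
    items = [rng.random(17) for _ in range(10)]
    print(mmr_rerank(user, items))         # indices of a relevant-but-diverse top 3

The parameter lam steers the trade-off: values near 1 favor pure relevance to the user profile, values near 0 favor diversity among the returned items.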

Expected results of the project include: 

Publications


