Tutorial on "Music Information Retrieval 2.0"
 
held at the 34th European Conference on Information Retrieval (ECIR 2012)

Date: 1st April 2012

>> Slides <<

> Motivation
> Learning Objectives
> Schedule
> Speakers


Motivation

Music is an omnipresent topic on the Web, and everyone enjoys listening to his or her favorite tunes. Music information retrieval (MIR) is a research field that aims, among other things, at automatically extracting semantically meaningful information from various representations of music entities, such as a digital audio file, a band's Web page, a song's lyrics, or a tweet about a microblogger's current listening activity.
A key approach in MIR is to describe music via computational features, which can be broadly categorized into three classes: music content, music context, and user context. The music content refers to features extracted from the audio signal, while information about musical entities that is not encoded in the signal (e.g., an artist's image or a song's political background) is referred to as music context. The user context, in contrast, comprises environmental aspects as well as the physical and mental activities of the music listener.
MIR research has been undergoing a paradigm shift over the last couple of years, as an increasing number of recently published approaches focus on the contextual feature categories, or at least combine "classical" signal-based techniques with data mined from Web sources or the user's context.

In this tutorial, we first summarize the ideas behind the three categories of computational features and discuss the advantages and disadvantages of each. We then briefly review some standard content-based feature extraction techniques, before focusing on the contextual aspects of music that are accessible through Web technology.
To this end, we give an introduction to the field of Web-based MIR and an overview of popular data sources (e.g., Web pages, (micro-)blogs, social networks, user tags, lyrics). Then we present approaches that exploit these sources to (a) mine descriptive and relational metadata (e.g., band members and instrumentation, country, album covers, genres, related artists), (b) construct similarity measures for music artists and songs based on collaborative and cultural knowledge, and (c) automatically index and retrieve music.
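As a concrete illustration of point (b), the minimal Python sketch below estimates artist similarity from collaborative tags by representing each artist as a tag-weight vector and comparing vectors with the cosine measure. The artists, tags, and counts are invented placeholders, not data used in the tutorial.

    import math

    # Invented placeholder data: per-artist tag counts as one might mine them
    # from a collaborative tagging platform (e.g., Last.fm).
    artist_tags = {
        "Artist A": {"rock": 40, "indie": 25, "guitar": 10},
        "Artist B": {"rock": 30, "alternative": 20, "indie": 15},
        "Artist C": {"electronic": 50, "ambient": 20},
    }

    def cosine_similarity(tags_a, tags_b):
        # Cosine similarity between two sparse tag-count vectors.
        shared = set(tags_a) & set(tags_b)
        dot = sum(tags_a[t] * tags_b[t] for t in shared)
        norm_a = math.sqrt(sum(v * v for v in tags_a.values()))
        norm_b = math.sqrt(sum(v * v for v in tags_b.values()))
        return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

    # Rank the remaining artists by their similarity to "Artist A".
    query = "Artist A"
    for name, tags in artist_tags.items():
        if name != query:
            print(name, round(cosine_similarity(artist_tags[query], tags), 3))

In practice, raw tag counts are often replaced by TF-IDF-style weights to down-weight overly common tags; the cosine comparison itself stays the same.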

All presented concepts are illustrated and discussed using exemplary applications and case studies. After this tutorial, participants will have a solid knowledge of current research in MIR with respect to content-based and Web-based methods, their potential and limitations, and future directions.
 

Learning Objectives

This half-day tutorial reports on the state of the art in mining music-related information from the Web and gives the interested audience an introduction to content-based feature extraction. The main goal is to provide a sound, comprehensive, yet easy-to-understand introduction to exploiting Web- and community-based media in the music domain. The presented approaches are highly valuable for tasks and applications such as automated music playlist generation, personalized Web radio, music recommender systems, and intelligent user interfaces to music. Participants will leave the tutorial with a solid knowledge of current research in MIR with respect to content-based and Web-based methods, their potential and limitations, and future directions.

Tentative Schedule

1. Introduction to the field of MIR
motivation, application scenarios, content vs. context

2. State of the art in content-based feature extraction
basics in signal processing, MFCCs (a brief extraction sketch follows this schedule), block-level framework, semantic descriptors

3. State of the art in context-based similarity estimation from the Web
tags, social media, p2p networks

4. Future Directions
personalization, user-aware recommendation, semantic knowledge extraction from content
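To complement schedule item 2, the following minimal Python sketch shows one possible way to extract MFCCs and aggregate them into a song-level summary, here using the open-source librosa library; the file name and parameter values are placeholders, not settings prescribed by the tutorial.

    import numpy as np
    import librosa

    # Placeholder file name; any audio file readable by librosa works.
    y, sr = librosa.load("example_track.wav", sr=22050, mono=True)

    # 13 MFCCs per frame, computed with ~93 ms windows and ~23 ms hop at 22.05 kHz.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13, n_fft=2048, hop_length=512)

    # A common song-level summary: mean vector and covariance matrix over all frames,
    # which can then be fed into a distance measure for content-based similarity.
    mean_vec = np.mean(mfcc, axis=1)
    cov_mat = np.cov(mfcc)
    print(mean_vec.shape, cov_mat.shape)  # (13,), (13, 13)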

Speakers

Markus Schedl
Markus graduated in Computer Science from the Vienna University of Technology. He earned his Ph.D. in Computational Perception from the Johannes Kepler University Linz, where he is an assistant professor at the Department of Computational Perception. He further holds a Master's degree in International Business Administration from the Vienna University of Economics and Business Administration. Schedl has (co-)authored more than 50 refereed conference papers and journal articles.
Furthermore, he serves on various program committees and has reviewed submissions to several conferences and journals. His main research interests include social media mining, music and multimedia information retrieval, information visualization, and intelligent/personalized user interfaces. He is a co-founder of the International Workshop on Advances in Music Information Research and a co-organizer of the 3rd International Workshop on Search and Mining User-generated Contents. Among other lectures, Markus has for the past four years been teaching a course on theoretical and practical aspects of music information retrieval, with a focus on Web-based methods.

Peter Knees
Peter is an assistant professor at the Department of Computational Perception at the Johannes Kepler University Linz. He holds a Master's degree in Computer Science from the Vienna University of Technology and a Ph.D. degree from the Johannes Kepler University Linz. Since 2004, he has co-authored over 50 peer-reviewed conference and journal publications and has served as a program committee member and reviewer for several conferences and journals relevant to the fields of music, multimedia, and text IR. Since 2009, he has been organizing the International Workshop on Advances in Music Information Research series. In addition to music and Web information retrieval, his research interests include multimedia, user interfaces, and recommender systems. Peter is also engaged in the digital media arts. For the project sound/tracks, he recently received a Jury Recommendation at the 14th Japan Media Arts Festival.

