AdMIRe: International Workshop on Advances in Music Information Research 2009
In Conjunction with the IEEE International Symposium on Multimedia 2009
San Diego, California, USA, December 16, 2009
The International Workshop on Advances in Music Information Research (AdMIRe) 2009 will serve as a forum for theoretical and practical discussions of cutting-edge research in the fields of Web mining for music information extraction, retrieval, and recommendation, as well as in mobile applications and services. Research on multimodal extraction, retrieval, and presentation with a focus on the music and audio domain is especially welcome, as are submissions addressing concrete implementations of systems and services by both academic institutions and industrial companies. The Call for Papers is also available as a PDF.
2009-12-21: Photos of the talks are now available in the Program section
2009-11-09: Final Program released
2009-10-01: Masataka Goto will give a keynote speech at AdMIRe
2009-09-08: The list of accepted papers can be found in the Program section
2009-08-31: Notifications have been sent
2009-08-10: Malcolm Slaney will give a keynote speech at AdMIRe
Music information retrieval (MIR), a subfield of multimedia information retrieval, has been a fast-growing field of research during the past decade. In traditional MIR research, music-related information was extracted from the audio signal using signal processing techniques. These methods, however, cannot capture semantic information that is not encoded in the audio signal but is nonetheless essential to many consumers, e.g., the meaning of a song's lyrics or the political motivation or background of a singer. In recent years, the emergence of various Web 2.0 platforms and services dedicated to the music and audio domain, such as last.fm, MusicBrainz, or Discogs, has provided novel and powerful, albeit noisy, sources of high-level semantic information on music artists, albums, songs, and other entities. The abundance of such information, provided by the power of the crowd, can therefore contribute considerably to MIR research and development. On the other hand, the wealth of newly available, semantically meaningful information offered on Web 2.0 platforms also poses new challenges, e.g., dealing with the sheer amount and the noisiness of this kind of data, various user biases, hacking, or the cold-start problem.

Another recent trend, attributable not least to platforms like Apple's iPhone or Google's Android, is the development of intelligent user interfaces for accessing the large amounts of music available on today's mobile music players and the corresponding services. Mobile devices with high-speed Web access allow even more music to be consumed via Web services. Dealing with these vast amounts of music requires intelligent services on mobile devices that provide, for example, personalized and context-aware music recommendations. The current emergence and confluence of these challenges make this an interesting field for researchers and industry practitioners alike.
The workshop solicits regular technical papers of up to 6 pages (IEEE double-column format). The proceedings of the workshop will be published together with the main symposium proceedings by IEEE CS Press. Workshop papers will be official IEEE publications, included in IEEE Xplore and printed as part of the conference proceedings. Papers must be original and must not have been submitted to or accepted by any other conference or journal. Papers must be submitted in electronic form as a PDF file. All submissions to this workshop will be peer-reviewed by three members of the Program Committee. To ensure maximum fairness and objectivity in the reviewing process, a double-blind review strategy will be adopted; authors should therefore conceal their identity as far as possible. High-quality submissions that have not been published and are not under review elsewhere, addressing one or more of the following topics, are welcome.
Topics of Interest
Music Information Systems
Multimodal User Interfaces
Context-aware Music Applications
User Modeling and Personalization
Social Networks and Collaborative Tagging in the Music and Audio Domain
Web Mining and Information Extraction in the Music Domain
Combination of Web-based and Signal-based Information Extraction Methods
Music Recommendation
Semantic Web, Linking Open Data and Open Web Services for the Music and Audio Domain
Ontologies, Semantics and Reasoning in the Music and Audio Domain
Evaluation, Mining of Ground Truth and Data Collections
Music Indexing and Retrieval Techniques
Exploration and Discovery in Large Music Collections
Multimodal Semantic Content Analysis
Full Paper Submission Deadline: August 2, 2009
Notification of Results: August 30, 2009
Camera-Ready Submission: September 25, 2009
Program Chairs
Markus Schedl | Department of Computational Perception, Johannes Kepler University, Linz, Austria
Òscar Celma | Barcelona Music and Audio Technologies, Barcelona, Spain
Peter Knees | Department of Computational Perception, Johannes Kepler University, Linz, Austria
Local Organizer
Luke Barrington | University of California, San Diego, CA, USA
Program Committee
Submissions will be managed by EasyChair. Please create a user account if you have not already done so, log in, and follow the instructions to submit a new paper.
Final Program
Session AdMIRe (1) | Room: Belmont, Chair: Peter Knees
10:30 - 10:40 | Opening Remarks
10:40 - 11:40 | Keynote Speech by Malcolm Slaney: Music at the Speed of the Internet
11:40 - 12:00 | Geraint Wiggins: Semantic Gap?? Schemantic Schmap!! Methodological Considerations in the Scientific Study of Music
12:00 - 13:00 | Lunch
Session AdMIRe (2) | Room: Belmont, Chair: Markus Schedl
13:00 - 14:00 | Keynote Speech by Masataka Goto: Augmented Music-Understanding Interfaces: Toward Music Listening in the Future
14:00 - 14:20 | Sten Govaerts, Nik Corthaut, Erik Duval: Using Search Engines for Classification: Does it really work?
14:20 - 14:40 | Ching-Hua Chuan: Pop-Rock Musical Style as Defined by Two-Chord Patterns at Segmentation Points in the Melody and Lyrics
14:40 - 15:00 | Shih-Chuan Chiu, Man-Kwan Shan, Jiun-Long Huang: Automatic System for the Arrangement of Piano Reductions
15:00 - 15:30 | Coffee Break
Session AdMIRe (3) | Room: Belmont, Chair: Geraint Wiggins
15:30 - 15:50 | Dmitry Bogdanov, Joan Serrà, Nicolas Wack, Perfecto Herrera: From Low-level to High-level: Comparative Study of Music Similarity Measures
15:50 - 16:10 | Yuval Shavitt, Udi Weinsberg: Songs Clustering Using Peer-to-Peer Co-occurrences
16:10 - 16:30 | Noam Koenigstein, Yuval Shavitt, Noa Zilberman: Predicting Billboard Success Using Data-Mining in P2P Networks
16:30 - 17:00 | Concluding Remarks and Open Discussion
Malcolm Slaney | Yahoo! Research and Stanford CCRMA
Music at the Speed of the Internet

Abstract: The wealth of data available on the Internet changes the way we think about audio. Never before has there been so much music data available for training models and answering questions. But these new riches bring with them a change in the problems we must think about. The data is noisy and largely unlabeled; we must make sense of it, often returning an answer in hundreds of milliseconds. How do we understand our auditory environment, especially when it extends across the world? How do we take context into account and do it at the scale of the Internet? In this talk I'd like to share with you Yahoo's experiences in this brave new world of multimedia everywhere, describe promising new technologies, and discuss open research directions. I will describe the need for better audio models, the kinds of algorithms needed for today's large databases, and how the Internet is changing multimedia retrieval.

Biography: Malcolm Slaney is a principal scientist at Yahoo! Research Laboratory. He received his PhD from Purdue University for his work on computed imaging. He is a coauthor, with A. C. Kak, of the IEEE book "Principles of Computerized Tomographic Imaging," which was recently republished by SIAM in its "Classics in Applied Mathematics" series. He is coeditor, with Steven Greenberg, of the book "Computational Models of Auditory Function." Before Yahoo!, Dr. Slaney worked at Bell Laboratories, Schlumberger Palo Alto Research, Apple Computer, Interval Research, and IBM's Almaden Research Center. He is also a (consulting) Professor at Stanford's CCRMA, where he organizes and teaches the Hearing Seminar. His research interests include auditory modeling and perception, multimedia analysis and synthesis, compressed-domain processing, music similarity and audio search, and machine learning. For the last several years he has led the auditory group at the Telluride Neuromorphic Workshop.
Masataka Goto | National Institute of Advanced Industrial Science and Technology (AIST)
Augmented Music-Understanding Interfaces: Toward Music Listening in the Future

Abstract: One of our research goals is to enrich music listening experiences by deepening each person's understanding of music. Even if listeners want to better understand music or want to improve their ability to understand music, methods to realize those wishes have not yet been established and will have to be discovered. We therefore proposed a research approach, "Augmented Music-Understanding Interfaces," that facilitates deeper understanding of music by using automatic music-understanding technologies based on signal processing. First, visualization of music content plays an important role in augmenting people's understanding of music because understanding is deepened through seeing. Second, "music touch-up" (personalization or customization by making small changes to elements in existing music) also helps music understanding because understanding is deepened through editing. In this talk I'd like to introduce some examples of such interfaces, share lessons learned, and then discuss our perspectives on Internet-based music listening with shared semantic information in the future.

Biography: Masataka Goto is the Leader of the Media Interaction Group at the National Institute of Advanced Industrial Science and Technology (AIST). He received the Doctor of Engineering degree from Waseda University, Japan, in 1998. He then joined AIST (the Electrotechnical Laboratory before its reorganization in 2001). He serves concurrently as a Visiting Professor in the Department of Statistical Modeling, The Institute of Statistical Mathematics, and as an Associate Professor (Cooperative Graduate School Program) in the Department of Intelligent Interaction Technologies, Graduate School of Systems and Information Engineering, University of Tsukuba. Over the past 17 years, he has received 24 awards, including the Commendation for Science and Technology by the Minister of MEXT "Young Scientists' Prize", the DoCoMo Mobile Science Awards "Excellence Award in Fundamental Science", the IPSJ Nagao Special Researcher Award, and the IPSJ Best Paper Award.
Markus Schedl
Department of Computational Perception
Johannes Kepler University (JKU) Linz
Altenberger Str. 69, A-4040 Linz, Austria
Tel: +43 732 2468 1512