CLIENT

Google DeepMind

ROLES

UX Designer

UX Researcher

COLLABORATORS

Ari Carden

Sherry Chen

Harvin Park

Victoria Qian

TIMELINE

4 months

CONTEXT

High-level research helping build multimodal generative AI for music

For our capstone, our team was approached by a researcher on Google DeepMind's Magenta team with a proposal to create an application that rapidly aligns sheet music and audio recordings. With such a tool, our client would be able to train their generative music model on the tool's output data. Our client also intends to make the application open-source, so that anyone may use it.

SOLUTION

An open-source, accessible application for rapid sheet music and audio alignment

SECONDARY RESEARCH

700k+ scores and 80k+ recordings available on IMSLP that are NOT ALIGNED

The International Music Score Library Project (IMSLP) is a digital library of music scores. It is one of the largest collections of public domain music, containing 775,000+ scores and 86,000+ recordings as of September 2024. This is great for musicians, as the collection is quite comprehensive; however, there is still a large gap in aligned data: mappings between pixels in sheet music and the corresponding timestamps in associated performances. Our client also provided us with a paper written for the ISMIR 2022 conference that gives more context on prior work in sheet music and audio alignment.

“While several existing MIR datasets contain alignments between performances and structured scores (formats like MIDI and MusicXML), no current resources align performances with more commonplace raw-image sheet music...”
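
To make the missing data concrete, here is a minimal sketch of what a single measure-level alignment record could look like, written in Python. The class and field names are my own illustration for this write-up, not our client's or MeSA's actual data format.

  # Hypothetical record linking a region of pixels on a scanned score page
  # to a time span in a recording of the same piece.
  from dataclasses import dataclass

  @dataclass
  class MeasureAlignment:
      page: int          # page number within the scanned score
      bbox: tuple        # (x, y, width, height) of the measure, in pixels
      start_sec: float   # where the measure begins in the recording
      end_sec: float     # where the measure ends in the recording

  example = MeasureAlignment(page=3, bbox=(120, 840, 410, 180),
                             start_sec=62.4, end_sec=65.1)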

Prototype application MeSA developed to provide missing service

MeSA, or Measure to Sound Annotator, was made as an early proof of concept for a tool that would provide this service. The lack of resources that align performances and sheet music can be attributed to several factors:


  1. Music elements such as expressive timing and repeats make annotation slow and difficult.

  2. Varying levels of granularity when it comes to alignment:

    1. Alignment at the piece level is too coarse for any useful application beyond piece recognition.

    2. Note-level alignment would be useful but expensive.

    3. At the line level, there can be failures due to repeat signs.

  3. Real-time alignment is difficult due to expressive timing and annotators' unfamiliarity with a piece.

“To overcome these obstacles, we developed an interactive system, MeSA, which leverages off-the-shelf measure and beat detection software to aid musicians in quickly producing measure-level alignments...”

Here you can access a small dataset created using MeSA, MeSA-13.
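
As a rough illustration of the approach described in the quote above, the sketch below pairs measure bounding boxes from a measure-detection tool with downbeat times from a beat-tracking tool to produce measure-level alignments. This is my own simplified example rather than MeSA's actual code, and it assumes exactly one detected downbeat per measure and no repeats; handling the cases where those assumptions break is precisely what the interactive tool lets a musician do.

  # Simplified, hypothetical pairing of detected measures and downbeats.
  # measure_boxes: list of (x, y, width, height) boxes, in score order
  # downbeat_times: list of downbeat timestamps in seconds, in playback order
  def align_measures(measure_boxes, downbeat_times):
      alignments = []
      for i, box in enumerate(measure_boxes):
          if i + 1 >= len(downbeat_times):
              break  # not enough detected downbeats to close this measure
          alignments.append({
              "bbox": box,
              "start_sec": downbeat_times[i],
              "end_sec": downbeat_times[i + 1],
          })
      return alignments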

COMPETITIVE ANALYSIS

Competitors had NO alignment feature, or alignment was not their intended goal

Although our client claimed there were no competitors, our team conducted an analysis to verify this. Of the 6 competitors we found, none provided an automatic alignment service; where alignment did occur, it was never the intended goal of the application.

USER INTERVIEWS

Musicians used both sheet music and audio recordings for practice, but separately

Since our research revealed a large gap in aligned data and no service that aligns sheet music and audio, my team interviewed musicians our client was working with at the CMU School of Music.

RESEARCH QUESTIONS:

  1. Could you walk us through your practice routine?

  2. How can you tell that you are improving?

  3. Have you tried recording yourself and playing it back?

  4. Are there any resources or tools you used?

THE MAIN INSIGHT

None of the tools used by our interviewees included recorded performances aligned with sheet music

Based on the trends in the affinity map, we noticed that in addition to sheet music, recordings of performances (of themselves or of others) are essential reference material for learning repertoire, but musicians find them through other sources.

TESTING AND IMPROVEMENTS

3 main design improvements

Over the span of 4 weeks, we iterated on our designs using feedback from 9 peers and our advisor. In that time, we made 3 major improvements:

THE FINAL SCREENS

The Final Product

CONCLUSION AND LESSONS LEARNED

What I would do differently next time

As someone who used to study music and has some exposure to AI/ML, I found this project to be an interesting intersection of the two domains. I am immensely proud not only of my team’s output, but also of the opportunity to work with a client who operates specifically in this space of music and generative AI. Some of my takeaways:

  • No such thing as too many iterations. In the beginning stages, our team explored many different options to find the right solution for musicians. We drew up several concepts and from there branched out into 4 iterations to ensure that every aspect of the application was crafted with intention.

  • Tradeoffs are always present. One example came early in the project, when our early iterations still displayed audio waveforms. Showing the changes and clearly communicating what they meant for the user helped not only my team members but also our advisors and client understand the rationale behind certain design decisions. I hope to continue this practice and develop this skill in future projects.

  • Be insight-driven. This case study lived in a Google Doc for quite a while before eventually being published here. That document was about 14 pages long, filled with unnecessary text that didn’t answer the question: “How does this fit into the bigger picture?” Later iterations of the case study involved a significant culling (about 60%) of content, focusing mainly on the major points of the project. Storytelling is an ability I am still grappling with, and by homing in on the insights and influential moments, I can create more cohesive narratives.

The exchanges that occurred were incredibly valuable: our client gained an application that benefits not only their research but also the broader music generative AI community, and our team was able to carry out the entire UX process while learning best practices for designing with AI/ML in mind. In the end, I believe I pushed the application to its best state and never let my own thinking stop me from questioning whether a decision was truly best for the user.

For work inquiries or to chat with me, email me at anthonywudesign@gmail.com

Thanks for reading~

© 2024 Anthony Wu.
