TAKING MUSIC TECH IDEAS TO MARKET
“Really happy to be joining the awesome #MusicBricks project. The whole concept of making tools freely available, providing space for makers and hackers and then supporting what’s made with them is just brilliant. This is a new cooperative Win strategy. Looking forward to see what comes out of our #MusicBricks offerings. Onwards!”
Matt Black, Coldcut / Ninja Tune
“I have managed to code the machine learning to recognise 3 different gestures with accuracy close to 100% and I’m so excited about it. I used your suggestions… and now, the machine recognizes the static data up to 100% accurate… It is so cool. I am loving these Machine Learning algorithms. Thank you for your help…”
Rojan Gharibpour, Incubatee
What is #MusicBricks?
#MusicBricks is a set of interoperable tools designed for hackers and creative developers. It gives creators and digital content makers easy access to the core building blocks of music.
Built upon high-level research coming out of Europe’s finest institutions, the technologies are bundled and packaged into simple-to-deploy APIs, GUIs and TUIs that enable creative and technical minds to create new music projects, products and performances that might otherwise have been unimaginable.
Gesture Sensors for Music Performance
The R-IoT sensor module embeds a 9-axis, 16-bit sensor combining a 3-axis accelerometer, a 3-axis gyroscope and a 3-axis magnetometer. It provides 3D acceleration, 3-axis angular velocity and absolute orientation at a frame rate of 200 Hz over WiFi. The core of the board is a Texas Instruments WiFi module with a 32-bit ARM Cortex processor that executes the program and handles the network stack. It is compatible with TI’s Code Composer and with Energia, a port of the Arduino environment for TI processors. The sensor module is complemented by a series of Max/MSP analysis modules, based on the MuBu & Co Max library, that facilitate its use. This collection of analysis tools allows for filtering and analysis, computing scalar intensity from the accelerometer or gyroscope, kick detection, and detection of motion patterns such as freefall, spinning, shaking and slow motion. Further motion recognition tools are available in the MuBu & Co library.
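The sensor streams its frames over WiFi, typically as OSC messages over UDP. As an illustration of what receiving such a frame involves, here is a minimal Python sketch that parses an OSC message containing float arguments. The port number and address pattern below are assumptions (they depend on the R-IoT configuration), and a real application should use a full OSC library such as python-osc rather than this hand-rolled parser:

```python
import socket
import struct

def parse_osc_message(data: bytes):
    """Parse a simple OSC message whose arguments are all floats.

    Returns (address, [floats]). Minimal parser for illustration only.
    """
    def read_padded_string(buf, offset):
        end = buf.index(b"\x00", offset)
        s = buf[offset:end].decode("ascii")
        # OSC strings are null-terminated and padded to a 4-byte boundary
        offset = (end + 4) & ~3
        return s, offset

    address, offset = read_padded_string(data, 0)
    typetags, offset = read_padded_string(data, offset)
    values = []
    for tag in typetags.lstrip(","):
        if tag == "f":  # 32-bit big-endian float argument
            (v,) = struct.unpack_from(">f", data, offset)
            values.append(v)
            offset += 4
    return address, values

def receive_frames(port=8888):
    """Print incoming sensor frames forever.

    The port is an assumption — check your R-IoT's configured
    destination port.
    """
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", port))
    while True:
        packet, _ = sock.recvfrom(1024)
        print(parse_osc_message(packet))
```

At 200 Hz each frame arrives every 5 ms, so any per-frame processing on the receiving side needs to stay well under that budget.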
Pitch Tracking and Melody Transcription
This module includes a number of pitch tracking and melody transcription algorithms implemented in the Essentia library. Applications include visualization of the predominant melody, pitch tracking, tuning rating and source separation.
Real-time Onset Description
This module detects onsets in real time and provides a number of audio descriptors. It is part of essentiaRT~, a real-time subset of Essentia (MTG’s open-source C++ library for audio analysis and audio-based music information retrieval) implemented as an external for Pd and Max/MSP. As such, the current version does not yet include all of Essentia’s algorithms, but it offers a number of features to slice audio and provide on-the-fly descriptors for real-time classification. Once an onset is reported, a number of extractors analyse instantaneous features such as the onset strength, the spectral centroid and the MFCCs over a fixed-size window of 2048 points. Furthermore, essentiaRT~ can perform estimations over larger, user-defined time frames, reporting finer descriptions in terms of noisiness, f0, temporal centroid and loudness.
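As a rough illustration of what onset detection means (deliberately much simpler than the spectral methods essentiaRT~ actually uses), an energy-flux detector can be sketched in a few lines of Python: flag any analysis frame whose energy jumps well above the previous frame's.

```python
def onset_frames(samples, frame_size=512, threshold=4.0):
    """Detect onsets as sudden jumps in frame energy.

    A toy energy-flux detector for illustration; real-time tools like
    essentiaRT~ use more robust spectral onset-strength functions.
    Returns the indices of frames whose energy exceeds `threshold`
    times the previous frame's energy.
    """
    energies = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        energies.append(sum(x * x for x in frame) / frame_size)
    onsets = []
    for i in range(1, len(energies)):
        # small epsilon avoids division-by-zero logic on silent frames
        if energies[i] > threshold * (energies[i - 1] + 1e-12):
            onsets.append(i)
    return onsets
```

Feeding it two seconds of silence followed by a burst of signal reports a single onset at the frame where the burst begins.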
Sonarflow
A slick UI for browsing music by zooming into a colourful world of bubbles representing genres, artists or moods, letting users discover new music online from various sources. It is available for iOS and Android, with APIs connecting to 7digital, last.fm, YouTube, Spotify and more. A demo app is in the Google Play Store. See sonarflow.com and the source code at github.com/spectralmind.
Synaesthesia
A colour detection tool for triggering and controlling audio. Based on OpenCV, it includes a colour detection engine and an object tracking algorithm to control music and sound using plain colours or coloured objects; using the camera, it can trigger musical events. It includes an example app with a dedicated GUI and is available for OSX and iOS. Find it on GitHub: https://github.com/stromatolite/Synaesthesia
Rhythm and Timbre Analysis
This library takes audio data as input and analyzes its spectral, rhythmic and timbral information to describe its acoustic content. The captured rhythm and timbre features can be stored or processed directly to compute acoustic similarity between two audio segments, find similar-sounding songs (or song segments), create playlists of music in a certain style, detect the genre of a song, make music recommendations and much more. Depending on your needs, a range of audio features is available: Rhythm Patterns, Rhythm Histograms (i.e. a rough BPM peak histogram), Spectrum Descriptors and more. The library is available for Python, Matlab and Java.
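To give a feel for what a BPM peak histogram captures, here is a toy Python sketch that builds a coarse histogram from inter-onset intervals and picks the dominant tempo bin. Note this is only an analogy: the library's actual Rhythm Patterns and Rhythm Histograms are computed from modulation spectra of the audio signal, not from discrete onset times.

```python
def bpm_histogram(onset_times, bpm_min=60, bpm_max=200, bin_width=5):
    """Build a coarse BPM histogram from inter-onset intervals.

    Toy analogue of a Rhythm Histogram, for illustration only.
    `onset_times` are onset timestamps in seconds; each consecutive
    pair votes for the tempo bin matching its interval.
    """
    bins = {}
    for a, b in zip(onset_times, onset_times[1:]):
        interval = b - a
        if interval <= 0:
            continue
        bpm = 60.0 / interval
        if bpm_min <= bpm <= bpm_max:
            key = int(bpm // bin_width) * bin_width
            bins[key] = bins.get(key, 0) + 1
    return bins

def dominant_bpm(onset_times):
    """Return the lower edge of the most-voted BPM bin, or None."""
    hist = bpm_histogram(onset_times)
    return max(hist, key=hist.get) if hist else None
```

A steady click every half second, for instance, lands all its votes in the 120 BPM bin.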
MusicBricks Transcriber (Melody & Bass Transcription + Beat, Key & Tempo Estimation)
The MusicBricksTranscriber provided by Fraunhofer IDMT is an executable that transcribes the main melody and bass line of a given audio file. It also estimates the beat times, the key and the average tempo. The results can be output as MIDI, MusicXML or plain XML files, and a Python wrapper is included for further processing of the analysis results.
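Since the transcriber can emit plain XML, its results are straightforward to post-process in Python. The element and attribute names in this sketch are hypothetical — consult the documentation shipped with the tool for the actual output schema:

```python
import xml.etree.ElementTree as ET

def parse_transcription(xml_text):
    """Extract melody notes from a transcriber XML result.

    The <note> element and its onset/duration/pitch attributes are
    assumptions for illustration, not the tool's documented schema.
    """
    root = ET.fromstring(xml_text)
    notes = []
    for note in root.iter("note"):
        notes.append({
            "onset": float(note.get("onset")),        # seconds
            "duration": float(note.get("duration")),  # seconds
            "pitch": int(note.get("pitch")),          # MIDI note number
        })
    return notes
```

From such a note list it is then a short step to, say, re-rendering the melody as MIDI or comparing it against a reference performance.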
POF: Pd + OpenFrameworks
POF is a set of openFrameworks externals for Pure Data, providing OpenGL multithreaded rendering and advanced multitouch event management. A recent addition to #MusicBricks by Antoine Rousseau, in collaboration with Matt Black of Ninja Tune, it makes building cross-platform Pd music apps much easier. It also feeds the Mobile Orchestra through SyncJams. Find it on GitHub.
Musimap
Musimap’s algorithm applies fifty-five weighted variables to each music unit (e.g. tracks, genres, labels) to model the world’s discography as a multi-layered system of cross-matched influences, based on a musicological, lexicological and socio-psychological approach. The granular, proprietary database includes over 3B data points and 2B relations, with 50M songs soon to follow. Its neural music network is the result of a unique combination of in-depth human curation and the latest AI technologies.
Real-time Pitch Detection
The real-time pitch detection estimates the predominant melody note (monophonic) or multiple notes (polyphonic) from consecutive blocks of audio samples. This makes it possible to transcribe the note pitches currently being played or sung from a recorded instrument or vocal performance. The monophonic version also estimates the exact fundamental frequency values. Typical applications are music games and music learning applications. Fraunhofer IDMT provides a C++ library as well as sample projects that show how to include the functionality.
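To illustrate what monophonic fundamental frequency estimation involves, here is a plain autocorrelation sketch in Python. This is a textbook baseline, not Fraunhofer's method: it picks the lag at which the signal best matches a shifted copy of itself within a plausible pitch range.

```python
import math

def estimate_f0(samples, sample_rate, f_min=80.0, f_max=1000.0):
    """Estimate the fundamental frequency of a monophonic block.

    Naive autocorrelation sketch for illustration: the lag with the
    highest correlation in [sample_rate/f_max, sample_rate/f_min]
    is taken as the period.
    """
    lag_min = int(sample_rate / f_max)
    lag_max = int(sample_rate / f_min)
    n = len(samples)
    best_lag, best_corr = lag_min, float("-inf")
    for lag in range(lag_min, min(lag_max, n - 1) + 1):
        corr = sum(samples[i] * samples[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sample_rate / best_lag

# Example input: a 440 Hz test tone at 44.1 kHz (one 2048-sample block)
tone = [math.sin(2 * math.pi * 440 * t / 44100) for t in range(2048)]
```

Estimating on the test tone returns a value within a few Hz of 440; the lag resolution (integer samples) limits accuracy, which is one reason production libraries interpolate around the peak.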
Search by Sound Music Similarity
The Search by Sound online system is based on the Rhythm and Timbre Analysis (see above) and can be used via a REST Web API (the SMINT API) to upload songs and find acoustically similar ones in terms of rhythm and timbre, without installing any prerequisites or running the analysis yourself. It works with your own custom music dataset, or with the readily available content from freemusicarchive.org that has already been pre-analyzed for rhythm and timbre, to find music matching a particular rhythm or timbre from that archive.
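Querying a REST API like this from Python amounts to building a request and decoding the JSON response. The base URL, endpoint path and parameter names below are placeholders for illustration — consult the SMINT API documentation for the real ones:

```python
import urllib.request

# Placeholder — substitute the real SMINT API base URL here.
API_BASE = "https://example.org/smint/v1"

def build_similarity_request(track_id, max_results=10):
    """Build (but do not send) a similar-songs query.

    The /similar path and the track_id/limit parameters are
    assumptions, not the documented SMINT endpoints.
    """
    url = "{}/similar?track_id={}&limit={}".format(
        API_BASE, track_id, max_results)
    return urllib.request.Request(
        url, headers={"Accept": "application/json"})

# Sending it would look like:
#   import json
#   results = json.load(urllib.request.urlopen(build_similarity_request("123")))
```

Separating request construction from sending, as here, also makes the client easy to unit-test without network access.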
Goatify
The Goatify tool provided by Fraunhofer IDMT is an executable that automatically replaces the main melody in a song with a given sample. To do so, the main melody is extracted and removed from the song; the sample is then placed and pitched according to the melody notes. For proper pitching, the pitch of the sample itself is extracted beforehand. The tool ships with free sound samples (goat, etc.) from www.freesound.org for direct use.
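The "pitch the sample to each melody note" step can be illustrated with naive resampling: once you know the sample's own pitch and the target note, repitching is a matter of reading through the sample faster or slower. Unlike the tool's actual processing, this crude approach also changes the sample's duration:

```python
def pitch_shift_resample(samples, semitones):
    """Repitch a sample by linear-interpolation resampling.

    Illustration only: shifting up by `semitones` reads through the
    sample faster, so the result is also proportionally shorter
    (a side effect proper pitch-shifters avoid).
    """
    ratio = 2 ** (semitones / 12.0)  # frequency ratio per equal temperament
    out = []
    pos = 0.0
    while pos < len(samples) - 1:
        i = int(pos)
        frac = pos - i
        # linear interpolation between neighbouring input samples
        out.append(samples[i] * (1 - frac) + samples[i + 1] * frac)
        pos += ratio
    return out
```

Shifting up one octave (12 semitones) doubles the read rate, so the output is half as long; combining this with time-stretching is one classic way to repitch without changing duration.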
A recent addition to #MusicBricks, SyncJams is an open source standard to allow wireless inter music app sync and communication of key/scale between players in a ‘mobile orchestra’, authored by Chris McCormick in collaboration with Matt Black of Ninja Tune. Also defined as: “Zero-configuration network-synchronised metronome and state dictionary for music application”. Currently Pure Data and Python are supported. Find it on Github.
Real-time Pitch-Shifting and Time-Stretching
The real-time pitch shifting library allows users to change the pitch of audio material while keeping the tempo. It also enables changing the tempo without changing the pitch. Typical applications are music games and music learning applications, as well as real-time performances. Fraunhofer IDMT provides a C++ library as well as sample projects that show how to include the functionality.
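The time-stretching half of this can be illustrated with a naive overlap-add sketch in Python: windowed frames are read from the input at one hop size and written to the output at another, so duration changes while the local waveform (and hence pitch) is preserved. Production libraries such as Fraunhofer's use phase vocoders or synchronised overlap-add to avoid the artefacts this simple version produces.

```python
import math

def time_stretch(samples, factor, frame_size=1024, hop=256):
    """Stretch audio to `factor` times its duration, keeping pitch.

    Naive overlap-add for illustration: frames are taken from the
    input every `hop / factor` samples and laid down in the output
    every `hop` samples, with Hann windowing and renormalisation.
    """
    out_len = int(len(samples) * factor)
    out = [0.0] * (out_len + frame_size)
    norm = [0.0] * (out_len + frame_size)
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / frame_size)
              for i in range(frame_size)]
    analysis_hop = int(hop / factor)
    in_pos, out_pos = 0, 0
    while in_pos + frame_size <= len(samples):
        for i in range(frame_size):
            out[out_pos + i] += samples[in_pos + i] * window[i]
            norm[out_pos + i] += window[i]
        in_pos += analysis_hop
        out_pos += hop
    # divide out the summed window envelope where it is non-zero
    return [o / n if n > 1e-9 else 0.0
            for o, n in zip(out, norm)][:out_len]
```

Because frames are pasted without aligning their phases, transients smear audibly; that artefact is exactly what the real-time libraries' more sophisticated algorithms are designed to suppress.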
“#MusicBricks implicitly initiates a large network of people across many disciplines, with backgrounds as researchers, creators, artists, entrepreneurs, etc. This network provokes exchange of ideas far beyond the traditionally technology centred aspects that a technical university usually focuses on and therefore opens up inter-disciplinary research aspects that were not considered before.”
Thomas Lidy, TU Wien
Routes to market
“In some cases it’s ‘just’ entertainment, in some cases this revolutionises how music is performed, in others it produces a new world-wide business…”
#MusicBricks are designed to be the building blocks of future entertainment, performance and products. By putting top academic research and industry technologies in the hands of creative developers, and with the support of commercial partners and investors, #MusicBricks has supported 11 ideas with great potential to find routes to market.
We received an email from a singer-songwriter in Hawaii, asking exactly what the #MusicBricks “toolkit” is. How does it relate to performing or writing music? What exactly are the tools? Are they digital instruments of some kind? How can academic research contribute to excellent songwriting? All fair questions.
At #MTFScandi in Umeå back in May, the brilliant Fanni Fazakas (aka Rumex) pointed her camera at pretty much everything else – and this wonderful short film is the result.
The success of the #MusicBricks project has been overwhelming. We’re going to be showcasing some of the most incredible projects that have emerged since it began, and you’ll get to see the ideas that have been supported and developed to commercial prototype at #MTFCentral this weekend.
We’re just a week away from #MTFCentral in Ljubljana. Looking forward to seeing you there. If you’re wondering what to expect, we’ve put together a taste of the festival in this highlights video.
We’re delighted to announce that Philips have now joined the Music Tech Fest family, and will be challenging our hackers to develop using their Hue lighting system. For the first time we’ll be able to combine sound and light to generate immersive musical experiences, new types of performance, health and mood interventions, new forms of communication, and composition with music and light. They’re opening up their API at #MTFCentral, providing us with their developer kits (shown above) to test your concepts with, and bringing some of their top engineers to partner with hacker teams.
More artists and musicians, more music and technology companies, more inventors and their incredible machines, and more makers, developers and designers are joining us all the time. From music-making robots to instruments that remove barriers to participation for musicians with disabilities; from new formats for music to music made with chemistry and brainwaves.
And now there’s more to the hack camp too. You don’t have to be a programmer, an electronics expert or a maker to get involved. Artists – we want you too…
The hack camp at Music Tech Fest is known for its focus on physical objects, inventing noise-making machines, new types of performance, new musical instruments, interfaces and products – as well as working with more intangible materials like APIs, software, and concepts.
#MTFCentral brings together a phenomenal community of artists, makers, hackers, musicians, designers, industry pros, engineers, producers, inventors, entrepreneurs and researchers from right across Central Europe and beyond.
#MTFCentral is the festival of music ideas: a weekend of musical discovery and experimentation, held at the incredible Cankarjev Dom and +MSUM, the Museum of Modern Art in the heart of Ljubljana from the 18th to the 21st of September.
We’re just six months into the #MusicBricks project, and already eight new product concepts are being developed into brand new startups and projects. The tools have proven so popular and have generated such a buzz across social media that news of #MusicBricks has now reached over half a million people worldwide.
The Music Tech Fest team are in Barcelona this week for the Music Hack Day event at Sonar+D.
It’s time to put our skills to good use. We’re hacking music technology back in time in order to help kill the most dangerous kung fu master criminal of all time: Adolf Hitler.
For more information join our newsletter
#MusicBricks has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement 644871