Details of a PureTech project that will serve as a hub for studies evaluating the impact of music on the brain were revealed at the South by Southwest Interactive conference this week. The project offers a look at how clinical trials could be run remotely, and it seeks to provide a framework that would make it easier to run large studies examining the effect of music on vulnerable populations, from children with autism spectrum disorder to adults with Alzheimer’s disease.
In a conversation, Alexis Kopikis, co-founder and CEO of The Sync Project and a serial technology entrepreneur, said that seeing the impact music therapy had on his son, who was diagnosed with autism spectrum disorder, motivated him to change his career path and head up the project. “You can’t get these kids into a lab. You have to provide [music therapy] in their own environment to do it at scale.”
Although the project will initially be used to study music’s ability to ease pain and anxiety, it has much more ambitious plans to use The Sync Project as a platform for research studies and clinical trials across a wide array of patient populations.
In an email, Sync Project co-founder Ketki Karanam provided a description of the project.
Participants in a study would download the project platform as an app. The app plays music and connects with wearable sensors to track participants’ biological responses. For example, participants in a sleep study would play music through the app and connect an EEG monitor to track brain-wave activity.
The platform maps the music data to the physiological data and, with the help of some math and machine learning, identifies correlations between different songs or types of music and physiological changes. Those changes could include a song that corresponds with a decreased heart rate for a participant or a tune that improves their concentration.
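The Sync Project has not published its analysis code, but the "mapping" step described above can be sketched in a few lines: align a listening log with timestamped sensor samples so each reading is attributed to the track playing at that moment. All names and data below are invented for illustration.

```python
# Hypothetical sketch: attributing wearable heart-rate samples to the
# track playing when each sample was taken. All data is invented.
from bisect import bisect_right

# (start_second, track) pairs from the app's listening log, sorted by start time
play_log = [(0, "Slow Water"), (180, "City Pulse"), (360, "Night Drift")]
# (second, heart_rate_bpm) samples streamed from a wearable sensor
hr_samples = [(30, 62), (200, 71), (210, 74), (400, 60)]

starts = [t for t, _ in play_log]

def track_at(second):
    """Return the track playing at a given second of the session."""
    return play_log[bisect_right(starts, second) - 1][1]

# Group the physiological readings by the track they occurred under
by_track = {}
for second, bpm in hr_samples:
    by_track.setdefault(track_at(second), []).append(bpm)

for track, readings in by_track.items():
    print(track, sum(readings) / len(readings))
```

Once readings are grouped per track, per-song summaries (mean heart rate, variability, and so on) become the raw material for the correlation analysis the platform performs.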
The co-founders envision scientists assigning music playlists to study participants using the platform’s library of musical signatures. If researchers have a hypothesis about the musical signatures effective for the condition they are studying, such as music with a low tempo and no syncopation to improve sleep onset, they can select and test music with those characteristics. They could also play different types of music and identify effective signatures using machine learning and statistics.
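Selecting music by signature, as described above, amounts to filtering a track library on acoustic features. The feature names and thresholds below are assumptions for illustration, not The Sync Project's actual schema.

```python
# Hypothetical track library with invented acoustic features.
library = [
    {"title": "Night Drift",  "tempo": 58,  "syncopation": 0.05},
    {"title": "City Pulse",   "tempo": 124, "syncopation": 0.80},
    {"title": "Slow Water",   "tempo": 64,  "syncopation": 0.10},
    {"title": "Offbeat Walk", "tempo": 70,  "syncopation": 0.65},
]

def sleep_onset_playlist(tracks, max_tempo=80, max_syncopation=0.2):
    """Select tracks matching a 'low tempo, minimal syncopation' signature."""
    return [t["title"] for t in tracks
            if t["tempo"] <= max_tempo and t["syncopation"] <= max_syncopation]

print(sleep_onset_playlist(library))  # only the slow, unsyncopated tracks
```

A researcher testing a different hypothesis would simply swap in different feature thresholds rather than curating playlists by hand.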
The app also integrates with Spotify, iTunes and other online music providers to play music to a user and to track the songs and audio the user is listening to.
An activity monitor like a Jawbone or Fitbit device would be used to track sleep patterns. Scientists can analyze the collected data to determine which types of music, and which properties of that music, are most effective for improving sleep onset or maintenance. They can run these studies in controlled lab settings as well as at large scale in the real world.
Additionally, the app integrates with consumer and clinical-grade wearables and sensors, such as heart rate monitors, consumer-grade EEG monitors, the Empatica band that can track autonomic arousal, and many others. This allows the platform to collect high-resolution data on the user’s physiological responses in real time while they listen to music.
The platform also integrates with The Echo Nest, which can break down the acoustic properties of music, such as tempo, timbre, rhythmic structure, and pitch variation, at a deep level. It can also identify correlations between specific musical elements and physiology.
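The article doesn't detail how those correlations are computed, but a minimal version is to screen each acoustic property against a physiological measure with a standard correlation coefficient. The feature names below mimic the kinds of properties an Echo Nest-style analysis exposes; all values are invented.

```python
# Hypothetical sketch: screening acoustic features for a relationship
# with heart-rate change. Data and feature names are invented.
import statistics

def zscores(xs):
    """Standardize a list of values (population standard deviation)."""
    mu, sd = statistics.mean(xs), statistics.pstdev(xs)
    return [(x - mu) / sd for x in xs]

def corr(xs, ys):
    """Pearson correlation, computed from standardized values."""
    zx, zy = zscores(xs), zscores(ys)
    return sum(a * b for a, b in zip(zx, zy)) / len(zx)

# One row per listening session: acoustic features of the track played,
# plus the observed change in heart rate (bpm) during playback.
sessions = [
    {"tempo": 62,  "pitch_variation": 0.15, "hr_delta": -5.2},
    {"tempo": 70,  "pitch_variation": 0.40, "hr_delta": -3.1},
    {"tempo": 110, "pitch_variation": 0.30, "hr_delta": 1.4},
    {"tempo": 135, "pitch_variation": 0.70, "hr_delta": 4.8},
]

hr = [s["hr_delta"] for s in sessions]
for feature in ("tempo", "pitch_variation"):
    xs = [s[feature] for s in sessions]
    print(f"{feature}: r = {corr(xs, hr):+.2f}")
```

A real analysis on thousands of participants would need confidence intervals and corrections for multiple comparisons, but the core idea, one correlation per acoustic feature, is the same.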
by Stephanie Baum, www.medcitynews.com