Meanwhile, in Narre Warren

Thursday 25 November 2010

I come home to London next week, after having a great three weeks in Melbourne. More updates will follow then, with news about the Music For Bionic Ears project and other cool stuff, but right now I’m having too much fun catching up with friends and watching the Ashes. In the meantime, I’ll leave you with this:

Music For Bionic Ears: A Moment of Truth

Wednesday 17 November 2010

It’s a strange experience, having to play your music to an audience of one and waiting to find out their response, face to face. Even stranger, when they know nothing about your music; stranger still when you know they’re not hearing what you’re hearing.

On Monday I got to meet four cochlear implant wearers at the Bionic Ear Institute, as part of the Music For Bionic Ears project. They had differing levels of ability in perceiving music, and of experience in hearing and playing music. I played each of them my Study No. 2 and finally got some feedback on whether or not my experiments would have any positive effect.

The new tuning system seemed to work surprisingly well. The types of chords, and the processed organ sound I had used, weren’t as cluttered and muddy as I feared they might be. All four reported that they could hear chords and harmonies clearly, and that the sounds were, for the most part, pleasant to hear. (By pleasant, I mean free of clutter: too much muddled sonic information tends to sound like white noise to implant wearers.)

It seemed almost too good to be true when a couple of listeners responded that they could identify the organ sound, hear distinct chords and harmonies, and moreover enjoy them. Previously, they had not found these types of sounds pleasant. This was a much better reaction than I had hoped for. It seems that using a just intonation scale instead of standard equal temperament makes a big difference to how implant wearers hear music. This could be a useful path of inquiry to follow: examining whether equal temperament itself is an obstacle to music perception, and which tuning systems come through most clearly.
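To give a sense of how the two tunings differ in raw frequency terms, here’s a quick sketch comparing a 5-limit just intonation major scale with 12-tone equal temperament. The tonic of 220 Hz and these particular just ratios are my own assumptions for illustration; they aren’t the scale used in the project.

```python
import math

# Compare a 5-limit just intonation major scale with 12-tone equal
# temperament (12-TET), relative to an assumed tonic of 220 Hz (A3).
TONIC_HZ = 220.0

# Frequency ratios of a classic 5-limit just major scale
JUST_RATIOS = [1/1, 9/8, 5/4, 4/3, 3/2, 5/3, 15/8, 2/1]

# The corresponding 12-TET degrees (semitone steps of a major scale)
TET_STEPS = [0, 2, 4, 5, 7, 9, 11, 12]

def just_freq(ratio, tonic=TONIC_HZ):
    """Frequency of a just-intoned note as a simple ratio of the tonic."""
    return tonic * ratio

def tet_freq(steps, tonic=TONIC_HZ):
    """Each equal-tempered semitone multiplies frequency by 2**(1/12)."""
    return tonic * 2 ** (steps / 12)

for ratio, steps in zip(JUST_RATIOS, TET_STEPS):
    jf, tf = just_freq(ratio), tet_freq(steps)
    # Difference in cents: 1200 * log2(just / tempered)
    cents = 1200 * math.log2(jf / tf)
    print(f"{jf:8.2f} Hz (just)  vs {tf:8.2f} Hz (12-TET)  diff {cents:+6.1f} cents")
```

The largest gaps show up on the thirds and sixths: the just major third (5/4) sits about 14 cents flat of its equal-tempered counterpart, which is exactly the sort of beating-versus-pure distinction worth testing on implant listeners.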

All the listeners could identify the organ sound, although some also heard other instruments in the mix. This may have been due to the synthesised nature of the sound, and to the other electronic treatments I had made. Whether or not timbral recognition will be an issue in the finished piece has further aesthetic and philosophical implications, which I should follow up in a separate post shortly.

The piece I played was not focussed too much on melody, relying instead on presenting a succession of distinct sounds with varied loudness, duration, and harmonic complexity. Implant wearers often have trouble detecting the small steps between notes that usually make up a melody, so it will be interesting to see whether a different tuning has any effect. Alternatively, my piece may continue to work in a way that is less reliant on melody.

The Hearing Organised Sound blog has more information about the meeting, with further details about what the other composers in the project are up to. Their approaches are all quite different, and they are uncovering other findings that I am now trying to take on board.

Meanwhile, in Melbourne

Friday 12 November 2010

I’m back in Melbourne for a few weeks. On Monday I finally get to visit the Bionic Ear Institute and meet some other people working on the Music For Bionic Ears project.

Music For Bionic Ears: One Sight, Two Sounds

Wednesday 3 November 2010

There was a little segment about the Music For Bionic Ears project on Australian TV recently, which can be watched online. (I can see it in the UK, so I guess everyone can.)

I’ve uploaded two of the studies I’ve made for the project, both working with the 16-tone tuning system, for you to listen to:
Bionic Ear Study No. 1
Bionic Ear Study No. 2

Study No. 1 was made by filtering white noise into the 22 frequency bands used in the design of a cochlear implant. This was done using a filtered granular synthesis contraption in AudioMulch. The filtered sounds produced were mimicked by a (virtual) piano, retuned to the 16-tone scale. The sounds you can hear in the study are a mix of the white noise, the piano, and either or both sounds reproduced through the cochlear implant simulator devised by Robin Fox.

Study No. 2 examines the various harmonies that can be produced with the scale. Using only one instrument (electric organ), a sequence of chords and single tones is played in a variable rhythm. Certain pitches, whose frequencies straddled a pair of electrodes, were shifted up or down an octave. This sequence was fed back into the same AudioMulch filter used in Study No. 1, which plays back differing amounts of the original and filtered organ.
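The octave-shift rule can be sketched in code: if a pitch falls too close to the boundary between two analysis bands (that is, it straddles a pair of electrodes), move it an octave so it sits more squarely inside a single band. The band layout and the “too close” tolerance here are illustrative assumptions, not the project’s actual values.

```python
import math

def log_spaced_edges(n=22, lo=200.0, hi=8000.0):
    """Assumed band layout: n + 1 log-spaced edges over lo..hi Hz."""
    step = (math.log(hi) - math.log(lo)) / n
    return [lo * math.exp(i * step) for i in range(n + 1)]

def near_boundary(freq, edges, tolerance=0.02):
    """True if freq is within `tolerance` (as a ratio) of an interior
    band edge, i.e. it straddles a pair of electrodes."""
    return any(abs(freq / e - 1.0) < tolerance for e in edges[1:-1])

def resolve_pitch(freq, edges):
    """Shift a straddling pitch up or down an octave, if that helps."""
    if not near_boundary(freq, edges):
        return freq
    for candidate in (freq * 2, freq / 2):
        if edges[0] <= candidate < edges[-1] and not near_boundary(candidate, edges):
            return candidate
    return freq  # no clean octave available; leave it alone
```

With a logarithmic band layout an octave jump does not land neatly one-whole-number of bands away, so the shift genuinely can move a pitch off a boundary rather than onto the next one.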

What next? Study No. 1 is very rudimentary and serves as a preliminary map of the type of soundworld I am dealing with. Study No. 2 was a demonstration of the harmonic combinations that are possible. In the latter piece, I suspect that the combination of chords used and the organ sound will come across as too cluttered in the more rigidly defined sound structure of the implants. The piece I am working on now uses the following principles:

  • Implant wearers report being able to understand speech very well. I’m using a speaking voice as a sort of key, or guide, to the music. This includes filtering and processing the voice in different ways, and deriving melody and rhythm from speech patterns.
  • Using lighter instrumental timbres with simpler sounds.
  • Building textures that sound active, without becoming dense.