A Late Anthology of Early Music Vol. 1: Ancient to Renaissance

Sunday 15 March 2020

I listened to this new tape by Jennifer Walshe and had a whole bunch of ideas about what to write about it. Then I listened to it again and immediately forgot everything I was going to say. To collect my thoughts, I listened to some of Bach’s lute suites, played on guitar. They weren’t really written for lute either, but they were almost certainly written by Bach. All cultural transmission is distortion. On A Late Anthology of Early Music Vol. 1: Ancient to Renaissance, Walshe sings a selection of compositions dating from the 2nd century to the 16th. They are arranged in chronological order. She has worked on these recordings in collaboration with CJ Carr and Zack Zukowski, a duo collectively known as Dadabots. They work with neural network machine learning technology and produced multiple iterations of Walshe’s voice reinterpreted by artificial intelligence. In an imitation of the chronological approach, each piece is presented in a progressively more advanced iteration.

As Walshe observes in her sleeve notes, this progressive approach parodies the meliorist, evolutionary narrative so commonly given in the history of Western music (a narrative she herself had taught for years). It’s a false narrative, of course: art never improves – only the material of art changes. In this parody, chants and motets alike are rendered as a garbled melange of whispers, croaks and whistles. Over time, melody starts to emerge, a voice begins to be heard. At one point a trumpet suddenly appears out of the blue. As each piece becomes more recent to our time, a more recognisable identity can be heard; or perhaps we’ve simply been listening long enough for things to start making sense to us. It may seem crude now but it is, we are assured, the future.

Heard without any knowledge of the backstory, this is fascinatingly detailed electronic music with an erratic logic of its own: complex sounds moving both towards and away from acoustic sound, even dipping into an uncanny valley representation of the human voice. Would it sound more coherent with each successive piece, were we not informed of the process? Perhaps the parody is taking place on a deeper level. The premise is the same as the “we trained an AI bot to write fan fiction” jokes that have made the rounds in recent years. Are we kidding ourselves when we hear an improvement in the music’s faithfulness to the model? We’ve been leading generations of students to believe that music develops over time.

It’s easy to imagine such a project eventually succeeding, producing a replica of a singing human voice. It would be perfectly accurate, and as recognisably authentic to us as Bach’s music would be to him, were he to hear it played today.