When I was in a composition class, I used a music notation program that could play back what I had notated. It was a pretty nifty program, but the music sounded awful. Always. I blame this program for stamping out my dream of being the next Hans Zimmer. However, occasionally the piece didn’t sound half bad when I took it to the piano. No Hans Zimmer of course, but maybe something that someone might listen to.
What was so different about my muddling performance that made it sound more ‘musical’ than the rendition by the program? What made the program sound inexpressive?
First of all, it is helpful to look at a definition of “expressivity” in music:
Expressivity in music refers to the production and perception of variation in musical parameters. Music is inexpressive when it is uniform and mechanical, whereas music described as expressive communicates through dynamic fluctuations of acoustic and visual information. (Music in the Social and Behavioral Sciences: An Encyclopedia, 2014).
Perhaps the program sounded inexpressive because it played back the notes in a ‘uniform and mechanical’ manner. But could a computer program ever perform music expressively?
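To make the ‘uniform and mechanical’ idea concrete, here is a minimal sketch of the difference. It is not what any real notation program does internally; the function names, the arch-shaped dynamic curve, and the timing jitter are all my own illustrative assumptions. The point is just that a mechanical rendering repeats identical parameters, while an ‘expressive’ one varies loudness and timing from note to note:

```python
import math
import random

def mechanical_render(pitches, beat_ms=500, velocity=80):
    """Every note gets identical onset spacing and loudness: 'uniform and mechanical'."""
    return [(i * beat_ms, p, velocity) for i, p in enumerate(pitches)]

def expressive_render(pitches, beat_ms=500, base_velocity=80, seed=0):
    """Hypothetical 'expressive' rendering: phrase-shaped dynamics plus small timing jitter."""
    rng = random.Random(seed)
    n = len(pitches)
    events = []
    onset = 0.0
    for i, p in enumerate(pitches):
        # Assumed arch-shaped dynamic curve: louder toward the middle of the phrase
        arch = math.sin(math.pi * i / max(n - 1, 1))
        loudness = int(base_velocity + 20 * arch)
        # Assumed rubato-like jitter: each onset deviates slightly from the grid
        jitter = rng.uniform(-0.04, 0.04) * beat_ms
        events.append((round(onset + jitter, 1), p, loudness))
        onset += beat_ms
    return events

melody = [60, 62, 64, 65, 67, 65, 64, 62]  # a C-major fragment as MIDI note numbers
print(mechanical_render(melody)[:3])
print(expressive_render(melody)[:3])
```

Of course, random jitter and a fixed dynamic arch are a caricature of expression, which is rather the point: the interesting question is what would have to replace them.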
Regardless of whether it’s possible for a program to perform expressively, what would be the point of such a program? Perhaps it would finally eliminate the need for musicians. No more endless practicing or endless guilt over not practicing. No need to pay for rehearsal time, concert halls, conductors, or big time soloists. And finally we could be rid of those pesky instruments, forever going out of tune, breaking strings, losing reeds, missing keys.
Needless to say, I doubt that such a program would stop people from wanting to make music. After all, we have more music at our fingertips today than at any time in human history, yet people continue to labour at learning an instrument, pay to attend live concerts, and enjoy out-of-tune singing around a campfire.
On a more interesting level, a computer program that could fool people into thinking it was an expressive human performer might teach us something about what constitutes musical expression. But it is important to consider whether we can make the claim that such a computer program could be seen as a cognitive model of human music performance.
If the point of a music rendering program is to render music that sounds convincingly human, then we could say that the program is successful when it can fool us into thinking its performance was done by a human. If, however, the point of creating such a program is to create a cognitive model of human performance, the standard for success is much more complicated. In fact, some would say that success is not possible. This is the question I’m going to focus on in this post: can a music performance program that sounds convincingly human be said to model human performance experiences?
I’m taking advantage of the fact that this blog post assignment is open-ended and ungraded to take a break from scientific writing and dabble in some philosophical musings. I do not pretend that my two undergraduate philosophy courses have given me the ability to make an informed philosophical argument; I like to think of this as more of a sounding board for ideas.
In what is perhaps an alarmingly subjective way to look at this topic, I will begin by sharing with you one of my own performance experiences.
A few years ago, I decided to get a performance diploma in addition to my music degree. It was the only way I could have a graduating recital and I had some peculiar idea that having a graduating recital was the best way to finish off a music degree. This recital had to be about forty minutes long, and all the music had to be played from memory.
To be honest, I can’t remember what I was thinking during most of the recital. Maybe I was ‘lost in the music’. But I do remember a few moments vividly – the moments when I made a big mistake and had to decide what to do about it. One particularly poignant moment was when I messed up my favorite part of La soirée dans Grenade by Claude Debussy. Here is what I was thinking, as well as I can remember:
Shit. Where does my left hand go? Shit shit shit. No, don’t think about it. Let your hands remember. I never mess up this part. Maybe I’ll just play it again. But my teacher and the professor are looking at the score. Are they looking at the score? My teacher will know every mistake, with that perfect pitch of his. But this professor isn’t very strict. I think she’d rather I play it again expressively than be accurate. Thank goodness it isn’t that other professor judging me. And my teacher said I’m an artist tonight, not a student, maybe he won’t mind. Most people here have no idea what the score says. Am I playing for them, or for a grade? But maybe I should just keep going. I tried to go back to fix a mistake in the Beethoven, and I made the mistake again the second time. But I’ve wanted to play this piece for years. What was the point if I mess up the best part?
But where do I go back? I can picture where I am on the page of the score. What key am I in? I’m playing a lot of black keys. But are they sharps or flats? Why can I never just memorize what key a piece is in? One of my old piano teachers always made me memorize key signatures. I cried so many times in lessons with her. I hope I’m a better teacher. Gah, some of my students are here. I always tell them to keep going if they make a mistake.
I’ll go back. Should I do a cadence? The cadence didn’t work so well in the Beethoven. It’s probably more Debussy-like to just fade in and out of sections anyway. Shit, how does the section begin? Right, right. Don’t think. Stop thinking. Here we go.
Obviously I don’t remember the exact words of what I was thinking (and was I thinking in words, anyway?). But I do remember considering all these things while trying to decide to play the section again, all the while still playing.
Music rendering programs today might be able to output a performance that sounds better than mine did, but they aren’t complicated enough to model everything that contributed to my performance decisions. That said, this doesn’t necessarily mean that such a complicated model couldn’t be developed in the future.
What would a program that realistically models performance expression look like? It might be helpful to think about a performance in this way:
“W expresses X by doing Y in context Z”, where
W is the individual performer
X is the object of expression
Y is how the expression is accomplished
Z is the context
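The schema above can be sketched as a simple data structure. This is only a way of making the four variables tangible, not a proposed model; the example values (the piece, the expressive aim, the means) are hypothetical, loosely inspired by the recital story above:

```python
from dataclasses import dataclass, field

@dataclass
class PerformanceAct:
    """One expressive act: 'W expresses X by doing Y in context Z'."""
    performer: str                 # W: the individual performer
    expressed: str                 # X: the object of expression
    means: str                     # Y: how the expression is accomplished
    context: dict = field(default_factory=dict)  # Z: the context

# A hypothetical instance (the values are illustrative, not from any real model)
act = PerformanceAct(
    performer="pianist",
    expressed="wistfulness of the opening theme",
    means="rubato and a softened accompaniment",
    context={
        "setting": "graduating recital",
        "audience": ["teacher", "examining professor", "students"],
        "stakes": "graded performance, played from memory",
    },
)
print(f"{act.performer} expresses {act.expressed} by {act.means}")
```

Even this toy structure hints at the difficulty: the context variable Z alone would have to capture everything from the professor’s strictness to my old teacher’s key-signature drills.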
I hope that the description above of my thought process during performance shows that each of these general variables is involved in a performance and that there are many considerations within each variable. Some would say that a purely cognitivist or computational perspective falls short in modelling the experience of performing music, and would argue for a perspective that acknowledges the situatedness and embodiment of cognition. Indeed, it seems as though the embodied music cognition framework is gaining recognition in the field of music psychology.
As far as general cognition goes, there are highly intelligent and informed people with opposing perspectives on whether we can computationally model human experience. Most people might just say it’s not worth losing sleep over (I did actually lose some sleep over this – I was writing an essay on this topic and procrastinated too much to get it done before bedtime). But it is an interesting question, and perhaps an important topic to consider when doing research on human experience.