The speaker was Aniruddh Patel of The Neurosciences Institute in San Diego and his topic was "The Music of Language and the Language of Music." His talk focused on two aspects of the similarity between music and language - rhythm and syntax - and discussed experiments related to each.
The rhythm work had to do with whether the rhythm of a given composer's music reflects the rhythm of his (or her) native language. Dr. Patel played two samples of music and asked the audience which one sounded English and which sounded French. Surprisingly, this was easy. He then discussed one unsuccessful theory before getting to more recent research. The issue is how to measure rhythm in language, and the successful approach focused on the regularity of the length of vowel sounds. He discussed a metric called the "normalized Pairwise Variability Index," or nPVI, which measures how much the durations of vowel sounds in adjacent syllables differ. For music, the nPVI is computed over note durations instead. That is, a piece alternating quarter notes and whole notes would have a much higher nPVI than a piece consisting entirely of quarter notes. It turns out that English has a significantly higher nPVI than French. They analyzed music by several English composers and several French composers and, sure enough, the music by English composers had a higher nPVI. The difference in the music was smaller than in the language, but was still pretty obvious.
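To make the metric concrete, here is a minimal sketch of an nPVI calculation. It assumes the standard formulation - the mean, over successive pairs of durations, of the absolute difference normalized by the pair's average, scaled by 100 - applied to note durations; the function name and example values are my own, not from the talk.

```python
def npvi(durations):
    """normalized Pairwise Variability Index: for each pair of successive
    durations, take |a - b| divided by the pair's mean, then average over
    all pairs and multiply by 100."""
    pairs = list(zip(durations, durations[1:]))
    return 100 * sum(abs(a - b) / ((a + b) / 2) for a, b in pairs) / len(pairs)

# All quarter notes: no durational contrast, so nPVI is 0.
print(npvi([1, 1, 1, 1]))   # → 0.0

# Alternating quarter and whole notes: high contrast between neighbors.
print(npvi([1, 4, 1, 4]))   # → 120.0
```

As the talk described, a higher score means more contrast between adjacent durations - the pattern that distinguishes English (and English composers' music) from French.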
For syntax, the question was what the musical analog would be. Essentially, he used the "closeness" of chords (how near or far two chords are on the circle of fifths) as a measure of how syntactically jarring a chord change would be. The experiment had people do self-paced reading (clicking a key to advance one phrase at a time) through a sentence with a syntactically "difficult" phrase in the middle. The reading was accompanied by chords, and the experimenters measured how long subjects took to advance past each phrase. The idea was that, if the same part of the brain handles both types of syntax, the reaction to the jarring phrase (which is slower than to the other phrases) would be slowed even further when a jarring chord was played at the same time. That is, indeed, what happened. They also varied other aspects of the chord accompanying that phrase (e.g. switching from a piano chord to an organ chord) and found that those changes had no effect. From all of this they concluded that there is some sort of "interference" in the brain between syntax in language and musical syntax.
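The "closeness" notion can be sketched as the minimal number of steps between two chord roots around the circle of fifths. This is my own illustrative simplification (real harmonic-distance measures also account for chord quality and key context), with hypothetical names throughout:

```python
# Roots arranged in circle-of-fifths order; each step is a perfect fifth.
CIRCLE = ["C", "G", "D", "A", "E", "B", "F#", "C#", "G#", "D#", "A#", "F"]

def fifths_distance(root_a, root_b):
    """Minimal number of steps between two roots around the circle of
    fifths, wrapping in either direction."""
    i, j = CIRCLE.index(root_a), CIRCLE.index(root_b)
    d = abs(i - j)
    return min(d, 12 - d)

print(fifths_distance("C", "G"))   # → 1: closely related, sounds natural
print(fifths_distance("C", "F#"))  # → 6: maximally distant, sounds jarring
```

In the experiment's terms, a chord at distance 1 from the prevailing key would be unremarkable, while one at distance 5 or 6 would be the "jarring" stimulus paired with the difficult phrase.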
Dr. Patel did an excellent job of explaining this work to a highly varied audience and stimulated a lot of discussion among the crowd. I was also really impressed by how much he seems to enjoy his research.
Eventually, the LoC will put the lectures (which they record) up on their web site, so those of you not in the D.C. area can get to hear them too. But, if you are here (or will be), I highly recommend attending in person. Next up is Daniel Levitin on 18 November, who will also be signing his new book, The World in Six Songs: How the Musical Brain Created Human Nature.