Can we see speech?

Of course we can! But let us start at the beginning!
Remember the scene in 2001: A Space Odyssey in which the two characters Dave Bowman and Frank Poole conspire against HAL 9000, the omnipotent AI of the Discovery One? They isolate themselves in a shuttle, assuming that, since HAL cannot hear them anymore, they can talk freely. Well, they obviously forgot about HAL's ability to read lips.

 

Naively, we know that we can read lips and somehow map their movements onto something like a "meaning". Deaf people are said to rely on this technique. This means that our brains not only contain an articulatory representation of speech, i.e. how we need to move our lips, tongue and jaw in order to speak. We also have a representation of what our mouth looks like when we speak. You can easily check this on your own: next time you watch a movie in a foreign language, count how often you deliberately look at the characters' lips in order to enhance perception!

In the 1970s, Harry McGurk and his research assistant John MacDonald showed what has now become known as the McGurk effect: mismatching the visual and auditory information for a syllable, e.g. [ba], makes you hear something completely different. Check out the following video:

What happens in the video above is the following: you see the mouth movements for the syllable [ga] but hear the auditory signal for [ba]. Your brain tries to resolve this conflicting information, and what you hear (or are supposed to hear) is {da}. If this works for you, the effect clearly shows that the brain stores both the visual and the auditory information and uses both to process speech. If that were not the case, there would be no conflict to resolve, and you would simply perceive the auditory [ba].

Massaro and Stork (1998), "Speech Recognition and Sensory Integration", explain this effect on the basis of cue similarity, cues being pieces of information in the signal that the brain processes: [ba] and [da] share auditory cues, [ga] and [da] share visual cues. When you mix these, i.e. auditory [ba] + visual [ga], the brain chooses the output, or percept, that matches a category with the highest probability. In the case of the McGurk effect, that category is {da}. Massaro and Stork used this probability- and similarity-based algorithm to create a computer program that can read lips, a precursor to HAL. At the same time, the McGurk effect shows us that the brain stores and uses all the physical information contained in the input.
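The integration step itself fits into a few lines. Here is a minimal sketch of this kind of multiplicative, probability-based cue integration; the support values are invented for this illustration and are not the values Massaro and Stork estimated from data.

```r
# Toy cue integration in the spirit of Massaro's model; the support values
# below are made up for illustration, not fitted to any data.

aud <- c(ba = 0.60, da = 0.35, ga = 0.05)  # support from the auditory [ba] signal
vis <- c(ba = 0.05, da = 0.45, ga = 0.50)  # support from the visual [ga] signal

# integrate: multiply the supports per category and renormalise
p <- (aud * vis) / sum(aud * vis)
round(p, 2)
#>   ba   da   ga
#> 0.14 0.74 0.12   -- the fused percept {da} wins
```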

Therefore: Yes, we can see speech. It is nothing special, quite the contrary. The information is there and the brain uses it!

Together with my colleague Daniel Duran from the Institut für Maschinelle Sprachverarbeitung, Stuttgart, I am investigating in a modeling project how cue overlap and frequency differences between [ba], [da] and [ga] might be a source of the McGurk effect. We do this by comparing the "Naive Discriminative Learner", an algorithm capturing human and animal learning behavior, with an "Exemplar Model", a model that captures the creation of phonemic categories on the basis of similarity.
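To give an idea of the discriminative side of this comparison, here is a toy sketch of the Rescorla-Wagner update rule that underlies naive discriminative learning. This is not our actual simulation code; the cue coding, syllable frequencies and learning parameters are invented for illustration.

```r
# Toy Rescorla-Wagner learner: weights from audiovisual cues to syllable
# outcomes. Cue coding and frequencies are invented for this sketch.

cues     <- c("aud_ba", "aud_da", "aud_ga", "vis_ba", "vis_da", "vis_ga")
outcomes <- c("ba", "da", "ga")
W <- matrix(0, nrow = length(cues), ncol = length(outcomes),
            dimnames = list(cues, outcomes))

rw_update <- function(W, active_cues, outcome, eta = 0.01, lambda = 1) {
  for (o in colnames(W)) {
    target <- if (o == outcome) lambda else 0   # outcome present or absent
    pred   <- sum(W[active_cues, o])            # current summed activation
    W[active_cues, o] <- W[active_cues, o] + eta * (target - pred)
  }
  W
}

# train on congruent audiovisual tokens with made-up syllable frequencies
set.seed(1)
for (i in 1:5000) {
  syl <- sample(outcomes, 1, prob = c(0.45, 0.35, 0.20))
  W   <- rw_update(W, c(paste0("aud_", syl), paste0("vis_", syl)), syl)
}

# outcome activations for the McGurk stimulus: auditory [ba] + visual [ga]
colSums(W[c("aud_ba", "vis_ga"), ])
```

With cues this crude, the fused /da/ of course receives no support; the interesting predictions only arise once the cues overlap between categories and the categories differ in frequency, which is exactly what our modeling project looks at.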

We presented our preliminary results during the second Simphon.net Workshop at Schloss Dagstuhl in our talk “Modelling multimodal integration – The case of the McGurk Effect”. This time, the workshop was attended by Bernd Möbius, Laurence White, Frank Zimmerer, Uwe Reichel, Ingmar Steiner, James Kirby, Antje Schweitzer, Katrin Schweitzer, Mike Walsh and our invited guest, Janet Pierrehumbert. The results are promising, and also to a certain extent surprising.

[Photo: Simphon.net workshop at Schloss Dagstuhl]

 


Phonetics goes Rocket Science

Last semester, I wanted to show my students the turbulence in the air around the mouth during articulation. Unfortunately, I was not able to find any video material that nicely illustrates such effects. We therefore started a cooperation with the "Deutsches Zentrum für Luft- und Raumfahrt" (DLR) in Lampoldshausen. Friedolin Strauss from the DLR, Denis Arnold, Tino Sering and I recorded a total of 45 seconds of speech using schlieren photography captured with a high-speed camera.


You might be wondering: "Why only 45 seconds of speech?" Well, the camera recorded at a sampling rate of 10,000 frames per second. After recording roughly 5 seconds of speech, the system had to upload the data (20-30 GB) to the server, which took around 30 minutes. In total, we spent four hours recording and, as you can see, shooting nice pictures.
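For the curious, the numbers roughly add up. Here is a quick back-of-the-envelope check, assuming the 20-30 GB refer to a single take of about 5 seconds:

```r
# rough sanity check of the recording numbers (assumed: ~25 GB per ~5 s take)
fps    <- 10000        # frames per second
frames <- fps * 5      # ~50,000 frames per take
25e9 / frames / 1e6    # ~0.5 MB of raw data per frame
1 / fps * 1e6          # 100 microseconds between frames
```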

 


Schlieren photography allows you to track density changes in a medium, in our case the air. With a temporal resolution of 100 microseconds (0.1 milliseconds), the camera allows us to investigate subtle changes in the airflow out of the mouth and the nose. Already during the screening of the material, we saw effects we had not anticipated. We are looking forward to the analysis once the data are on our servers.

If you want to know more about our visit to Lampoldshausen, the DLR published a blog post here (in German).

Introduction to R – rewritten

I have completely rewritten my Introduction to R. The premise of the new manuscript is that you want to process a written or spoken linguistic corpus with R in order to perform statistical analyses.

I reused a lot of text from the previous script, especially the static introductions about what works how. But now it is embedded in a narrative: "preprocess a corpus". The idea is to teach you the necessary tools only at the point where you actually need them.
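To give you a flavour of the kind of pipeline the manuscript builds up, here is a minimal sketch in base R; the file name and the one-utterance-per-line layout are invented for this example, not taken from the manuscript.

```r
# minimal corpus preprocessing sketch (file name and layout are invented)

# read a plain-text corpus, one utterance per line
corpus <- readLines("my_corpus.txt", encoding = "UTF-8")

# normalise and tokenise
corpus <- tolower(corpus)
corpus <- gsub("[[:punct:]]+", " ", corpus)
tokens <- unlist(strsplit(corpus, "[[:space:]]+"))
tokens <- tokens[tokens != ""]

# word frequency table as a starting point for statistical analysis
freqs <- sort(table(tokens), decreasing = TRUE)
head(freqs, 10)
```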

You can download the current version here.

Phonetik und Phonologie im deutschsprachigen Raum 2016

Last week I visited the P&P in Munich. Felicitas Kleber and Christoph Draxler did a wonderful job organizing this beautiful conference. The great thing about the P&P is that it addresses not only phoneticians and phonologists who have been working in the field for a while, but also undergraduate and PhD students. It provides a friendly environment where one can find constructive criticism and great inspiration for one's work.

I met wonderful friends there, among them Daniel Duran, who presented his revolutionary learning-and-testing environment; Adrian Leemann, who presented his app "Dialäkt Äpp"; Cornelia Heyde, who studies the articulation of stutterers by means of ultrasound; and Mathias Scharinger, who is investigating neural mechanisms of perception.

This year, I presented the Karl-Eberhards-Corpus of spontaneously spoken southern German, work I have been doing together with Denis Arnold for the last three years. The corpus consists of one-hour conversations between two friends. It is annotated and corrected at the word level; segment annotations are provided by a forced aligner.
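As a small illustration of what such word-level annotations let you do once exported to a plain table, here is a sketch; the file name and column names are hypothetical and do not reflect the corpus's actual format.

```r
# hypothetical word-level export of the corpus: one row per word token,
# with columns speaker, word, start, end (times in seconds)
words <- read.csv("kec_words.csv", stringsAsFactors = FALSE)

words$dur <- words$end - words$start               # word durations
tapply(words$dur, words$speaker, mean)             # mean word duration per speaker
sort(table(words$word), decreasing = TRUE)[1:10]   # ten most frequent word forms
```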

Left: myself; right: Daniel Duran

Text to speech on your phone

There is plenty of stuff, like books and articles, that I would like to read but simply don't have the time for. On the other hand, I use the car to drive to work, thinking: what a waste of time.
I found a solution: a text-to-speech system for the smartphone called "@voice aloud reader".

It is capable of processing PDF files, text files, doc files and much more. It uses the text-to-speech system already installed on your phone, but you can install many more voices as well as languages.

Articulography

Three and a half years ago, I started working at the Department of Quantitative Linguistics at the University of Tübingen. My new field of research was articulography, a method to monitor and record the movements of the tongue and lips during speech production.

Together with my colleagues Denis Arnold and Martijn Wieling, I made a movie to advertise our experiments. Recently, I found the movie again on YouTube: