English Orthography – The capitalization of nouns in the 17th and 18th centuries

I recently attended the “39. Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft” (DGfS, the 39th annual meeting of the German Linguistic Society) in Saarbrücken, of which I am officially a new member (yeah, it was about time!). The weather was terrible; it poured during my entire visit. Less time to procrastinate and walk through Saarbrücken, more time to listen to interesting talks. And there were plenty of them.

One talk, namely Stefania Degaetano-Ortlieb’s on discourse connectors in scientific writing, inspired me to write this post. It was interesting, indeed, but I thought: connectors, schmectors, what is up with the nouns in those scientific texts from the early 18th century? Almost all of them were written with an uppercase initial letter.

I couldn’t believe it. Uppercase in English? That is like German or… actually, it seems that this is a very German peculiarity. Nouns in Danish and Norwegian also used to be written in uppercase, but the practice was abolished in 1869 in Norway and in 1948 in Denmark. That leaves Germany, the only country in the world expecting its kids to know and understand the difference between nouns and all the other word classes. How dare they…

Back to English. I went to Google’s Ngram Viewer. The website is amazing and allows you to inspect word frequencies across time on the basis of modern and historical texts. I checked the spelling of one of the most common words, house, from 1500 to 2000. Check out what I found in the following figure. The red line represents lowercase “house”, the blue line uppercase “House”.

[Figure: Ngram frequencies, 1500–2000. Red line: lowercase “house”. Blue line: uppercase “House”.]

There is a clear dip from roughly 1650 to 1750, where lowercase “house” is in the minority and uppercase “House” is in the majority. Before and after that period, the lowercase spelling dominates.


Ok, that was intriguing, so I checked six of the most common nouns in almost every language: “House, Houses, Man, Woman, Child, Children”, in upper and lower case. The following figure shows the ratio between uppercase and lowercase spellings, with values above 100% indicating that the uppercase spelling is in the majority.
(if you want to test it on your own, the formula was: (House+Houses+Man+Woman+Child+Children) / (house+houses+man+woman+child+children))
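
If you want to reproduce this yourself from the raw data, the computation is simple. Here is a minimal sketch in R, assuming you have exported the yearly counts into a data frame counts with a year column and one column per spelling variant (this layout is my assumption, not the Ngram Viewer’s export format):

```r
# Hypothetical layout: one row per year, one column per spelling variant.
upper <- with(counts, House + Houses + Man + Woman + Child + Children)
lower <- with(counts, house + houses + man + woman + child + children)
ratio <- 100 * upper / lower  # values above 100% mean uppercase dominates
plot(counts$year, ratio, type = "l", xlab = "Year", ylab = "Ratio (%)")
abline(h = 100, lty = 2)      # parity line
```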

[Figure: Ratio of uppercase to lowercase spellings, American English (left) and British English (right). Values above 100% mean uppercase is in the majority.]

This supported my assumption that something happened in the English writing world in the one hundred years between 1650 and 1750. The phenomenon appears in both American English (left panel) and British English (right panel).

Well, that said, I had to find out what actually happened. I started with an entry on Wikipedia, which was more than boring: it stated only the contemporary capitalization rules, nothing historical. Then I googled the topic and found roughly four million hits. Checking out the top 20, I found none of them conclusive. There are some theories; one, for example, states that the script written at that time did not allow a proper distinction between uppercase letters because they were easily confused, and that capitalization was abandoned as a result. But that was not very rewarding, because I still urgently wanted to know: WHY!? Why did capitalization start, and why did it end?

Most German students think capitalization is a mean thing teachers made up to torture them. However, reading studies show that readers seem to benefit from capitalization, as the additional “cue” facilitates comprehension (read e.g. here). But if that were really the case, I wonder why only German sticks to the capitalization of nouns while all other alphabet-based orthographies do without it.

On Google Scholar I found Richard Venezky’s book “The American Way of Spelling”. Unfortunately, the entire chapter on the writing system itself was not available on Google Books, and my university’s library does not have the book in stock. I thus decided to contact Dr. Venezky, only to be informed by the email daemon that his address was no longer valid.

I have ordered the book from another library and will hopefully find something in a couple of weeks. And if not… well, I doubt that I will work my way through historical texts on spelling.


Can we see speech?

Of course we can! But let us start at the beginning!
Remember the scene in 2001: A Space Odyssey in which the two characters Dave Bowman and Frank Poole conspire against HAL 9000, the omnipotent AI of the Discovery One? They isolate themselves in a shuttle, assuming that since HAL cannot hear them anymore, they can talk freely. Well, they obviously forgot about HAL’s ability to read lips.


Intuitively, we know that we can read lips and somehow map their movements onto something like a “meaning”. Deaf people are said to use this technique. This means that our brain not only contains an articulatory representation of speech, i.e. how we need to move our lips, tongue, and jaw in order to speak; we also have a representation of what our mouth looks like when we speak. You can easily check this on your own: next time you watch a movie in a foreign language, count how often you deliberately look at the characters’ lips in order to enhance perception!

In the 1970s, Harry McGurk and his research assistant John MacDonald were able to show what has now become known as the McGurk effect: mismatching the visual and auditory information for a syllable, e.g. [ba], makes you hear something completely different. Check out the following video:

What happens in the movie above is the following: you see the mouth movement of the [ga] syllables but hear the auditory signal for [ba]. Your brain tries to resolve this conflicting information, and what you hear (or are supposed to hear) is {da}. If this is the case for you, then the effect clearly shows that the brain stores both the visual and the auditory information and uses them to process speech. If it did not, there would be no conflict to resolve and you would simply perceive the auditory [ba].

In “Speech Recognition and Sensory Integration” (1998), Massaro and Stork explain this effect on the basis of cue similarity, with cues being the pieces of information in the signal that the brain processes. [ba] and [da] share auditory cues; [ga] and [da] share visual cues. When you mix these, i.e. auditory [ba] + visual [ga], the brain chooses the output, or percept, that matches a category with the highest probability: in the case of the McGurk effect, {da}. Massaro and Stork used this probability- and similarity-based algorithm to create a computer program that can read lips, a precursor to HAL. At the same time, the McGurk effect shows us that the brain stores and uses all the physical information contained in the input.
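
The integration step is easy to illustrate with a toy computation. The following R snippet is only a sketch of the idea, not Massaro and Stork’s actual implementation, and the support values are invented for the purpose of illustration:

```r
# Invented support values between 0 and 1 for each candidate syllable:
auditory <- c(ba = 0.80, da = 0.60, ga = 0.10)  # the sound is a clear [ba]
visual   <- c(ba = 0.10, da = 0.60, ga = 0.80)  # the lips articulate [ga]

combined <- auditory * visual  # integrate the two modalities
p <- combined / sum(combined)  # renormalise into probabilities
round(p, 2)                    # ba: 0.15, da: 0.69, ga: 0.15
names(which.max(p))            # "da" -- the McGurk percept
```

With these numbers, {da} wins simply because it is the only candidate that both modalities support reasonably well.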

Therefore: Yes, we can see speech. It is nothing special, quite the contrary. The information is there and the brain uses it!

Together with my colleague Daniel Duran from the Institut für Maschinelle Sprachverarbeitung in Stuttgart, I investigate in a modeling project how cue overlap and frequency differences between [ba], [da] and [ga] might be a source of the McGurk effect. We want to do this by comparing the “Naive Discriminative Learner”, an algorithm capturing human and animal learning behavior, with an “Exemplar Model”, which captures the creation of phonemic categories on the basis of similarity.
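
For those unfamiliar with the former: the Naive Discriminative Learner is built on the Rescorla-Wagner equations, which strengthen the association between a cue and an outcome whenever the cue correctly predicts that outcome, and weaken it otherwise. Here is a bare-bones sketch in R; the learning rate, the cue coding and the single learning event are illustrative choices, not our actual setup:

```r
# One Rescorla-Wagner step: for every outcome, the weights of the cues
# present in the event are nudged towards lambda (if the outcome occurred)
# or towards 0 (if it did not).
rw_update <- function(w, cues, outcome, eta = 0.01, lambda = 1) {
  for (o in colnames(w)) {
    target <- if (o == outcome) lambda else 0
    w[cues, o] <- w[cues, o] + eta * (target - sum(w[cues, o]))
  }
  w
}

cues     <- c("aud_ba", "aud_da", "aud_ga", "vis_ba", "vis_da", "vis_ga")
percepts <- c("ba", "da", "ga")
w <- matrix(0, length(cues), length(percepts),
            dimnames = list(cues, percepts))

# One learning event: a congruent [da] is heard and seen at the same time.
w <- rw_update(w, c("aud_da", "vis_da"), "da")
```

An exemplar model, by contrast, stores the experienced events themselves and categorizes new input by its similarity to the stored traces.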

We presented our preliminary results at the second Simphon.net workshop at Schloss Dagstuhl in our talk “Modelling multimodal integration – The case of the McGurk Effect”. This time, the workshop was attended by Bernd Möbius, Laurence White, Frank Zimmerer, Uwe Reichel, Ingmar Steiner, James Kirby, Antje Schweitzer, Katrin Schweitzer, Mike Walsh and our invited guest, Janet Pierrehumbert. The results are promising, and to a certain extent also surprising.

[Photo: Simphon.net workshop at Schloss Dagstuhl]


Phonetics goes Rocket Science

Last semester, I wanted to show my students the turbulence in the air around the mouth during articulation. Unfortunately, I was not able to find any video material that nicely illustrates such effects. We therefore started a cooperation with the “Deutsches Zentrum für Luft- und Raumfahrt” (DLR, the German Aerospace Center) in Lampoldshausen. Together with Friedolin Strauss from the DLR, Denis Arnold, Tino Sering and I recorded a total of 45 seconds of speech using schlieren photography captured by a high-speed camera.


You might be wondering: “Why only 45 seconds of speech?” Well, the camera recorded at a sampling rate of 10 000 frames per second. After recording roughly 5 seconds of speech, that is, about 50 000 frames or 20-30 GB of data (roughly half a megabyte per frame), the system had to upload everything to the server, which took around 30 minutes. In total, we spent four hours recording and, as you can see, shooting nice pictures.



Schlieren photography allows you to track density changes in a medium, in our case the air. With a temporal resolution of 100 microseconds (0.1 milliseconds), the camera allows us to investigate subtle changes in the airflow out of the mouth and the nose. Already during the screening of the material, we saw effects we had not anticipated. We are looking forward to the analysis once the data are on our servers.

If you want to know more about our visit to Lampoldshausen, the DLR published a blog post here (in German).

Introduction to R – rewritten

I completely rewrote my Introduction to R. The premise of the new manuscript is that you want to process a written or spoken linguistic corpus with R in order to perform statistical analyses.

I reused a lot of text from the previous script, especially the static introductions explaining what works how. But now it is embedded in a narrative: “preprocessing a corpus”. The idea is to teach you all the necessary material only when it actually becomes necessary.
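
To give you an idea of where the narrative begins, here is the kind of first step the manuscript builds towards. A minimal sketch; the file name and the tokenizing pattern are placeholders, not taken from the script:

```r
# Read a text file, lowercase it, split it into word tokens and count them.
txt    <- tolower(readLines("corpus.txt", encoding = "UTF-8"))
tokens <- unlist(strsplit(txt, "[^a-zäöüß]+"))  # very crude tokenizer
tokens <- tokens[tokens != ""]
freq   <- sort(table(tokens), decreasing = TRUE)
head(freq, 10)  # the ten most frequent word forms
```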

You can download the current version here.

Phonetik und Phonologie im deutschsprachigen Raum 2016

Last week I visited the PundP in Munich. Felicitas Kleber and Christoph Draxler did a wonderful job organizing this beautiful conference. The great thing about PundP is that it addresses not only phoneticians and phonologists who have been working in the field for a while, but also undergraduate and PhD students. It provides a friendly environment where one can find constructive criticism and great inspiration for one’s work.

I met wonderful friends there, among them Daniel Duran, who presented his revolutionary learning-and-testing environment; Adrian Leemann, who presented his app “Dialäkt Äpp”; Cornelia Heyde, who studies the articulation of stutterers by means of ultrasound; and Mathias Scharinger, who investigates the neural mechanisms of perception.

This year, I presented the Karl-Eberhards-Corpus of spontaneously spoken southern German, work I have been doing together with Denis Arnold for the last three years. The corpus consists of one-hour conversations between two friends. It is annotated and corrected at the word level; segment annotations come from a forced aligner.

[Photo. Left: myself; right: Daniel Duran]

Text to speech on your phone

There is plenty of stuff, like books and articles, that I would like to read but simply don’t have the time for. On the other hand, I use the car to drive to work, thinking: what a waste of time.
I found a solution: a text-to-speech system for the smartphone called “@Voice Aloud Reader”.

It is capable of processing PDF files, text files, doc files and much more. It uses the text-to-speech engine already installed on your phone, and you can install many more voices as well as languages.