How do appearances of female characters in TV shows affect viewer ratings?

 

Introduction

Of course, your first reaction is: Come on, that show is like a hundred years old. It is from the last century; people were different back then, in the dark ages. My response to anyone who really thinks this: Really? Are we really that different, in light of the current political situation (sorry, different topic) and in light of the reactions to a woman taking over as the main character of a science-fiction/fantasy/time-travel show whose main character can change into every shape possible? Are we really that different? Then prove me wrong. I will continue here with Star Trek: The Next Generation (TNG). And by the way, this happens to be the dataset I have available. If you can provide any other, more “modern” dataset, I will analyze it (but only if you preprocess the data).

The current material

I collected the data in 2015 (yes, it’s been a while, but science takes time). I downloaded the transcripts of all TNG episodes from http://www.chakoteya.net/ (many thanks to whoever transcribed the episodes), which, I’m just realizing, also has transcripts for Doctor Who (any volunteers?). I could have taken the average rating for each episode, but I wanted more fine-grained data. This is why I contacted imdb.com, who provided me with information on the sex and age of the raters for all episodes of all Star Trek franchises (thanks again!).

I decided to calculate the average rating per episode depending on the sex of the rater, as I was interested in whether women would rate episodes with a higher proportion of female dialogue better than men do (any guesses?). This gives me two ratings per episode: one for female raters and one for male raters. For the analysis, I transformed IMDb’s 1-to-10 scale to range between 0 and 1.

I am focusing here on the dialogue of the main characters: Wesley, Worf, Dr. Crusher, Dr. Pulaski, La Forge, Troi, Riker, Data and Picard. I ignored all the guest characters, sidekicks, Klingons, Vulcans and whoever else appears on the show, mainly because it was too complicated to tag each of these characters according to their sex.

I calculated a simple measure: the main characters’ women-to-men ratio in terms of the number of words spoken in each episode. I am ignoring the identity of the character (everyone loves Data) and, most importantly, I am ignoring the plot. This is important, because only in this way can we assess whether viewers have a preference for the sex of the characters. I would expect female raters to favor episodes in which female characters appear more.

This gives us 176 values for 176 episodes. The larger the value, the more text (measured by the number of words) the main female characters have in that episode. A ratio of 1 means that men and women have an equal amount of text; above 1, women have more text; below 1, men have more text.
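In R, this boils down to a grouped word count. Here is a minimal sketch, assuming a hypothetical data frame transcripts with one row per utterance and the columns episode, character and words (the actual preprocessing of the transcripts is not shown):

# main characters, split by sex (hypothetical labels matching the transcripts)
female <- c("Crusher", "Pulaski", "Troi")
main   <- c(female, "Wesley", "Worf", "LaForge", "Riker", "Data", "Picard")

d <- subset(transcripts, character %in% main)
# word counts per episode, split by the sex of the character
counts <- tapply(d$words, list(d$episode, d$character %in% female), sum)
counts[is.na(counts)] <- 0    # episodes in which one group says nothing
ratio  <- counts[, "TRUE"] / counts[, "FALSE"]   # women-to-men ratio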

The following figure illustrates the distribution of the women-to-men ratio across all episodes. It becomes very apparent that men have more text than women in TNG. There are only 9 episodes in which men have less text. On average, women have roughly 34% of the text that men have (median of 22%). For the analysis, I am excluding those 9 episodes in which women have more text, because excluding them makes the data better distributed (don’t worry, I have also performed the analysis including them, and it does not change a thing).

I used the betareg library, which allows us to model values that range between 0 and 1. (The betareg library is designed to model ratios by means of beta regression, which maps probabilities ranging between 0 and 1 onto logits. Hence, all statistical results presented below (i.e. beta = X) are logits. If you want to know more about logits, click here.) The summary of a beta regression gives us coefficients (beta estimates), standard errors (sde), z-values (absolute values larger than 2 indicate significance) and p-values (values smaller than 0.05 indicate significance).

I fitted the ratings with three predictors in one model.
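In code, the model could look like the following sketch (the column names are my own invention; rating is assumed to be the IMDb rating already rescaled into the open interval between 0 and 1):

library(betareg)

# one row per episode and rater sex: rating in (0, 1), episode number,
# sex of the rater, and the women-to-men text ratio computed above
m <- betareg(rating ~ episode_number + rater_sex + wm_ratio, data = d)
summary(m)   # the beta estimates reported below are on the logit scale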

The first predictor was the episode number, included in order to investigate how ratings evolved across the seasons. The effect was significantly positive (beta = 0.028, standard error = 0.0005, z = 5.7, p < 0.001): TNG’s ratings increased with every season. This effect is illustrated in the next plot. Although there is strong variation within each season, the ratings got better and better. Nicely done, Star Trek producers.

 

Unsurprisingly, male raters gave TNG higher ratings than female raters: on a scale between 0 and 1, female raters gave on average 0.66 (beta = 0.86, sde = 0.07, z = 12.7, p < 0.001), male raters on average 0.71 (beta = 0.23, sde = 0.8, z = 3, p = 0.003).

Now, what about the women-to-men text ratio? Well, the effect is strongly negative. The more text women had in an episode, the worse the episode was rated (beta = -0.86, sde = 0.17, z = -5.1, p < 0.001). The following plot illustrates this effect.

Two insights follow from the figure. First, the distribution of the women-to-men text ratio is skewed: there are fewer episodes with more text for women. Second, the variability in ratings is very large for episodes with a high percentage of text for male characters. One potential interpretation of this finding is that since there are fewer female-oriented episodes in the series, there is also a smaller probability of a good episode among those with higher women-to-men text ratios.

What is more important: female and male raters did not differ in their ratings. This means that female viewers have the same opinions about how often female characters should appear on television as male viewers. In my opinion, this result is devastating. Not only does it mean that viewers do not accept the presence of female characters on television. It also means that this opinion is shared by those who are represented by these characters: the women themselves. Given that today’s television has a strong influence on public opinion and on how each person defines their role in this society, this is absolutely unacceptable. Of course female viewers rate episodes with more female appearances worse than those with more male appearances. This is what they have learned to be the status quo by watching television.

Coming back to Doctor Who (I have not forgotten it). I claim that the reason why the latest season of Doctor Who has significantly worse ratings than all those before is simple: the main character is portrayed by a woman, and the plots center around female characters. For example, the episode “Rosa” focuses on Rosa Parks. In the episode “The Tsuranga Conundrum” there is a female general. And so forth.

I fully acknowledge that the current finding has to be regarded with caution, because Star Trek targets a male audience. Not only should it be replicated with Doctor Who (which started this whole idea), but also with TV shows that target both sexes. Maybe someone will provide this data to me.


Statistical investigation of Mass Shootings in the US in 2018

After the recent mass shooting in Thousand Oaks, California, some colleagues and I were discussing potential reasons for these outbreaks of violence in the US. We had a look at the Wikipedia list of mass shootings in 2018. We were shocked to see that as many as 106 shootings were listed. On abc15.com, we found a map illustrating the location of every mass shooting in 2018:

It becomes immediately apparent that most of the shootings happened in the eastern part of the US. Given that there is already structure in the data, we were wondering whether there is more.
For example, is unemployment one reason for these shootings? Or population density? Concretely, the question arose whether the number of casualties can be predicted by such information. I expected the number of casualties to increase with population size, population growth and unemployment.

Methods

On the basis of the Wikipedia list of mass shootings in 2018, I collected information about the city/town/village in which each mass shooting occurred. The list contains information on 106 mass shootings. I used Wikipedia because it allowed me to easily obtain additional information on the locality of each mass shooting by following the links on the page. I collected the following pieces of information:

  • Population of the locality (city) in 2010 and 2016 (as provided by Wikipedia)
  • Population of the state in 2010 and 2016
  • Unemployment rate of the state

The 2010 data was based on the 2010 United States Census. I would have liked to have information on the unemployment rate of each locality; however, this surpassed my abilities. I also calculated the percentage of population growth between 2010 and 2016 for both locality and state. In total, I used five variables to predict the number of casualties (NumberOfCasualties).

  • PopulationCity (in 2016, ranging from 3365 to 8622698)
  • PopulationState (in 2016, ranging from 601723 to 37253956 )
  • PopulationGrowthCity (between 2010 and 2016, ranging from 0.875 to 1.315)
  • PopulationGrowthState (between 2010 and 2016, ranging from 0.994 to 1.35)
  • UnemploymentRateState (in 2016)

For 9 of the localities, population size was available only for 2016. Those were excluded from the analysis. In pilot analyses, including these 9 localities did not change the overall results. The shootings in the Bronx and in Brooklyn were tagged as shootings in New York City.

All of these pieces of information were collected automatically from Wikipedia pages and their data tables using a custom-made script in R. The download of the HTML pages was performed with the function getURL() from the RCurl package.
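For illustration, the core of such a script might look like this (the URL is just an example; the actual script looped over the locality links extracted from the Wikipedia list):

library(RCurl)   # getURL()
library(XML)     # readHTMLTable()

# download one locality page and extract its tables (e.g. the infobox
# containing the 2010 and 2016 population figures)
html   <- getURL("https://en.wikipedia.org/wiki/Thousand_Oaks,_California")
tables <- readHTMLTable(html, stringsAsFactors = FALSE)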

Analysis and Results

City population size in 2016 had to be log-transformed in order to obtain an approximately normal distribution. All predictors were centered and scaled for the analysis (z-scaled). I first performed a standard Spearman rank correlation analysis between the predictors (a sketch of these steps follows the list below). I want to highlight two results:

  • PopulationGrowthState and PopulationGrowthCity were strongly correlated (R = 0.56). This is not surprising at all, as the state population depends on the city population.
  • PopulationGrowthState was negatively correlated with UnemploymentRateState (R = -0.48). The same, weaker, effect held for PopulationGrowthCity (R = -0.32). This means that where the population grew, the unemployment rate went down. While this might not be news for people working in demographics and economics, it surprised me, as I would have expected these two variables to be positively correlated. I would like to know whether this can be observed on a larger scale.

NumberOfCasualties ranged between 0 (N = 40) and 17 (N = 1).

I fit a generalized linear model to predict NumberOfCasualties (function glm, family = poisson). The analysis is quite tricky because of the high collinearity in the data, i.e. some predictors can be used to predict other predictors, as becomes apparent above. This is why I used a step-by-step inclusion procedure and checked for indicators of collinearity in the model. If you want to read more about how to address collinearity in regression analyses: together with two colleagues, I have published a paper here. I did not test any interactions between the predictors because of the small sample size.
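As a sketch, the full model and one possible collinearity check (the variance inflation factor from the car package; my choice here, not necessarily the exact diagnostic used) would look like this:

m_full <- glm(NumberOfCasualties ~ PopulationCity + PopulationState +
                PopulationGrowthCity + PopulationGrowthState +
                UnemploymentRateState,
              data = d, family = poisson)
summary(m_full)
car::vif(m_full)   # large values (rules of thumb: > 5 or > 10) flag collinearity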

As it turned out, UnemploymentRateState and PopulationGrowthState were not significantly predictive of NumberOfCasualties (meaning that their effects cannot be used to support our initial hypothesis). I found this surprising, as I would indeed have predicted that unemployment drives people to commit horrible things.
The three remaining predictors were significant. The model’s intercept was 0.271 (std = 0.09, z = 2.9, p = 0.003). This is the estimated logit, where 0 equals 50% (in R, you can transform these values back using the function inv.logit() from the boot package; I did not apply this because it changes the visualization).

  • PopulationState (estimate = 0.2, std = 0.07, z = 2.9, p = 0.004)
  • PopulationCity (estimate = -0.4, std = 0.08, z = -4.8, p < 0.001)
  • PopulationGrowthCity (estimate = 0.3, std = 0.07, z = 4.4, p < 0.001)

The effects are illustrated in the following figure (plotted with the function visreg() from the package visreg; I restricted the y-axis, which is why not all data points are shown). The y-axis represents the partial effect of the estimated logit, i.e. how the intercept (logit 0.271 = probability of 0.57 that a large number of people are killed) has to be changed depending on the predictor. 0 therefore represents no change (horizontal gray dashed line); -2 means that the intercept has to be lowered to logit -1.729 = 0.15 probability; +2 means that the intercept has to be increased to logit 2.271 = 0.90 probability of a high number of casualties.
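The back-transformation mentioned above is easy to verify; inv.logit(x) is simply 1 / (1 + exp(-x)), and base R’s plogis() gives the same result:

library(boot)
inv.logit(0.271)       # intercept: probability of roughly 0.57
inv.logit(0.271 - 2)   # partial effect of -2: roughly 0.15
inv.logit(0.271 + 2)   # partial effect of +2: roughly 0.90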

[Figure MS_figure.png: partial effects of the three significant predictors, panels (A)-(C).]

Panel (A) represents the population size of the state in which the mass shooting occurred. The larger the state, the larger the probability of a high number of casualties. This supports my hypothesis. Surprisingly, and contrary to my hypothesis, the effect is reversed for the population size of the village/town/city where the shooting occurred: the smaller the city, the larger the probability of a high number of casualties; the larger the city, the smaller that probability. Finally, the probability of a high number of casualties increases when the city population grew strongly between 2010 and 2016. This makes sense: when cities grow, the total state population increases.

I will avoid any further interpretation of the data and the analysis. I have also not included other, potentially very interesting variables in this analysis, in order to keep things as simple as possible.

Latest manuscript on using R for corpus studies.

It’s been almost a year since I updated the introduction to programming in R for corpus studies. Download the latest script here.

There are plenty of changes; among others, there is…
* … a new link to getting the data for the introduction
* … an introduction to writing functions
* … a short introduction to preparing and running regression analyses
* … a short introduction to visualizing your models

Regarding the formatting:
* there are hyperlinks in the file
* section names are in the header

And most importantly, thanks to many readers, some typos got corrected.
Have fun!

DGfS 2018 Workshop: “Variation and phonetic detail in spoken morphology”

Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft
Stuttgart, Germany

March 7-9, 2018

Call For Abstracts

Submission deadline 20.8.2017
Invited Speaker
Sharon Peperkamp, Laboratoire de Sciences Cognitives et Psycholinguistique, Paris

Description

The relation between phonetics, phonology and morphology is much more complex than is assumed in current theories. For example, stress preservation in derived words is more variable than hitherto assumed. A word like orìginálity preserves main stress in its base oríginal as secondary stress, but other words have variable secondary stress (e.g. antìcipátion ~ ànticipátion, derived from antícipate, e.g. Collie 2008). In addition, there is evidence suggesting that acoustic and articulatory detail may play a role in the realization of morphologically complex words. For example, an [s] in American English is longer if it is part of a stem than when it is a plural marker or a clitic (cf. Plag et al. 2017). Pertinent work on both issues springs from different linguistic disciplines, in particular psycholinguistics, theoretical linguistics, phonetics, phonology, morphology, computational and quantitative linguistics, and has led to novel proposals regarding the general architecture of the morphology-phonology-phonetics interface. Different theories have been proposed on the basis of lexical listing vs. computation, analogical models or discriminative learning.

Within different linguistic disciplines, we see an increasing body of empirical work that addresses problems of variation and phonetic detail in morphology with the help of spoken data (e.g. Cohen 2015; Ben Hedia & Plag 2017, Strycharczuk & Scobbie 2017). Furthermore, there is more and more work testing theoretical proposals with the help of computational simulations (e.g. Arnold et al. 2017).

This workshop aims to bring together work from different disciplines that study and model variation and phonetic detail on the basis of spoken data. Relevant issues include: What new insights can spoken data bring to our knowledge about morphophonological variation? Are speakers sensitive to and/or aware of systematic subphonemic differences? Which cognitively plausible computational and psycholinguistic models best account for this variability? How can our theories of morphology deal with variation within and between speakers? What is the status of morphophonological and morphophonetic variation in grammar?

References

Arnold, Denis, Fabian Tomaschek, Konstantin Sering, Florence Lopez & R. Harald Baayen. 2017. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit. PLOS ONE, e0174623.

Ben Hedia, Sonia & Ingo Plag. 2017. Gemination and degemination in English prefixation: Phonetic evidence for morphological organization. Journal of Phonetics 62, 34-49.

Cohen, Clara P. 2014. Probabilistic reduction and probabilistic enhancement. Morphology, 24(4), 291-323.

Collie, Sarah. 2008. English stress preservation: the case for ‘fake cyclicity’. English Language and Linguistics 12(3). 505–532.

Plag, Ingo, Julia Homann & Gero Kunter. 2017. Homophony and morphology: The acoustics of word-final S in English. Journal of Linguistics 53(1), 181–216.

Strycharczuk, Patrycja and James M. Scobbie. 2017. Whence the fuzziness? Morphological effects in interacting sound changes in Southern British English. Laboratory Phonology: Journal of the Association for Laboratory Phonology 8(1): 7, 1–21.

Important Dates

Abstract submission deadline: Aug. 20, 2017
Notification of acceptance: Sept. 3, 2017
Submission of final abstract: Nov. 1, 2017
Conference: March 7-9, 2018

Requirements
Abstracts should be 300-400 words (1 page) and may contain additional material, such as examples, figures and references on another page. The uploaded file must be in PDF format.

Submission https://easychair.org/conferences/?conf=dgfsag1

Organization and Programme Committee
Sabine Arndt-Lappe (University of Trier)
Gero Kunter (University of Düsseldorf)
Ruben van de Vijver (University of Düsseldorf)
Fabian Tomaschek (University of Tübingen)

Predicting word categorization using NDL

Together with Denis Arnold, Konstantin Sering, Florence Lopez and R. Harald Baayen, I published a paper in PLOS ONE on applying Naive Discriminative Learning (NDL) to acoustic cues. NDL allows us to model and simulate human learning by means of a two-layer input-output network.
 
We derive our input cues directly from the acoustics of the speech signal, independently of its spectral and temporal variation. By these means, we show that NDL can predict the auditory categorization of words with human-like accuracy without assuming a pipeline via phones.
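To give a flavor of what such a network does, here is a toy version of discriminative learning with Rescorla-Wagner updates in plain R. It only illustrates the learning rule on made-up cues; it is not the acoustic cue extraction or the scale of the actual simulations:

cues     <- c("c1", "c2", "c3")
outcomes <- c("word_A", "word_B")
W <- matrix(0, length(cues), length(outcomes),
            dimnames = list(cues, outcomes))    # two-layer network weights
rate <- 0.01
# two learning events: a cue set paired with the word that was heard
events <- list(list(cues = c("c1", "c2"), outcome = "word_A"),
               list(cues = c("c2", "c3"), outcome = "word_B"))
for (i in 1:5000) {
  ev  <- events[[sample(length(events), 1)]]
  act <- colSums(W[ev$cues, , drop = FALSE])    # predicted outcome support
  target <- as.numeric(outcomes == ev$outcome)  # 1 for the heard word
  # strengthen under-predictive links, weaken over-predictive ones
  W[ev$cues, ] <- W[ev$cues, ] +
    rate * outer(rep(1, length(ev$cues)), target - act)
}
round(W, 2)   # c1 ends up cueing word_A, c3 word_B; c2 remains ambiguous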
 
By the way, anyone who is interested in learning how to use NDL is invited to visit my workshop at Phonetics and Phonology in Europe (PaPE) 2017.
[Figure 1 of the PLOS ONE paper (journal.pone.0174623.g001)]

How to start a manuscript (and finish it at some point)

“Publish or perish” is a common saying among scientists. But in order to publish, we must, after the experiments have been performed and the analysis is done, write. Of course I enjoy writing, but it is extremely hard and painful. Literally painful. Not only is it very hard to get into the writing zone; staying there is even harder when you stare at an empty page. This is when I procrastinate most: I check emails, repeat analyses, improve the format of plots, watch TED talks. You probably know what I’m talking about.

The hardest job, at least in my experience, is to start a manuscript: the first attempt to put down the first thousand words forming a concise text. Over the years I came to realize that there is no such thing as a perfect first draft. Nor a perfect second one. Furthermore, I came up with several techniques to overcome the first painful hours of filling pages. Here, I would like to share some of my insights with you. The techniques are based on the idea of focusing on one problem at a time. Some of these things may seem very obvious, but believe me, repetition is the key to proficiency.

The baseline of a successful writing start is preparation

Whatever you are doing, writing a term paper or a journal paper, you are presenting knowledge to a reader. Before you put any words to paper (or your screen) and formulate sentences, you need to know what you are talking about. Whenever you want to start writing but have only vague ideas about what to write, you will fail. That is the simple truth. Depending on what part of your paper you want to write, you need to prepare differently.

For an introduction, you need to do the research on your topic: find papers, read them, skim through the results and discussion, in order not only to report the background of your paper but also to formulate precise hypotheses.

For a results section, you need to have the results of your analysis (be it analytical, statistical, or whatever).

For a discussion, you need to have a results section; otherwise there is nothing to discuss. Also, start working on the discussion only when you have finished the text of the introduction and the results.

Technique 1: Keywords

Never, I repeat, never start a new text by trying to write correctly formulated sentences. Embarking on a writing journey with correctly formulated sentences is the road to procrastination and frustration.
Do you remember what the aim of a paper is? Right: to provide knowledge to your reader. However, when you start writing a paper by trying to formulate complete sentences, you shift the focus of your task from transferring knowledge to finding the appropriate words and syntactic structure. Do not waste your energy on such a thing. Start by writing down your knowledge in the form of keywords. The keywords need to convey most of your knowledge, but it is not important which exact words you use. Again, how you perform this task depends on the section you are writing.

Introduction
Write a keyword-based summary of the papers you read. Provide a short phrase for each of these topics: Who did it, what was the point, what was the method (experiment, analysis, etc.), what was the finding, and what was the interpretation? Write down your hypotheses.

Results
Start putting down keywords about your analysis. If you have a figure, describe the figure. If you have a summary table for a regression (ANOVA, ANCOVA, LMER, GAMM, Bayes, etc.), describe your effects in general language (e.g. “In level A, Y was slower than in level B”, “An increase in Y was associated with an increase in X”). Do not waste your time and energy on exact numbers. This is the first draft. Regard your first draft as the placeholder for exact numbers, precise formulations, and sparkling-from-intelligence metaphors.

Discussion
You have probably guessed it already: start with keywords. Reread your introduction and results and write keywords about your main findings. Write keywords about how they are in line with or differ from the literature you consulted. Write keywords about how you interpret your results. Write keywords about the problems you encountered.

Technique 2: Talk to an audience

The next step seems easy: take all these keywords and formulate sentences out of them. However, it is still hard to take all those unconnected and sometimes contradictory pieces of knowledge and bind them together into a coherent and readable text. When you begin to write right away, you might end up struggling with correct formulations, nice words and nice syntactic structures, instead of doing what needs to be done: formulating a text. I came up with two ways to circumvent this struggle.
The first possibility is to tell a friend the things you want to write. Tell him or her about your problem, what you have done and what results you found. The second possibility is to dictate and record this onto a recording device. Or even better, do both. If you don’t have anyone at hand or don’t want to bore your friends with your scientific ideas, take your recording device and go for a walk (we need to do that more often anyway).
Here is what happens when you start talking: instead of struggling to find cool-sounding words and sentences, your brain will put in all the effort it takes to create a coherent and understandable version of what you want to say, so that your audience understands your message. Humans communicate; humans try to put sense into their messages. Use that power.
Once you have recorded your text, transcribe it (e.g. using transcription software such as F4). And voilà, you have your text. You will be surprised how much you suddenly have to say about your topic. Furthermore, during transcription you start to think about whether what you said actually makes sense. However, do not rewrite your transcription. Literally put down every word you have spoken.

Technique 3: Rewrite

Now you have something to work on. Once you have a text with complete sentences, you can start to invest your effort into cool words and nice sentences. You do not need to focus on remembering the knowledge you want to include in your paper; you do not need to focus on performing an analysis; rather, you can focus and invest all of your mental energy on correcting your text. And the surprising thing is: correcting and rewriting is a lot easier than writing down new sentences.
This is the time when you start including precise information about your analysis, i.e. exact numbers such as slopes, t- and p-values, and everything else connected to your analysis. This is also the time when you can redo your research on questions that came up during the first draft.

Technique 4: Write regularly

OK, this is not my technique. It is from “How to Write a Lot” by Paul Silvia. He suggests, and I can only recommend this, making room for writing every day. At least one hour. It is not important whether you do this after waking up or before going to bed. But save one hour (or two if you are in the mood) only for writing. Turn off your mobile phone, your internet connection and your TV, shut your door, and start to write. Silvia presents data showing that this is the only technique by which you can make progress. If you wait for the muse to kiss you, you can keep on waiting forever. You need to sit down and write. Every day!

Technique 5: Perform different tasks at different places

I once heard that Walt Disney used three different rooms for his work – one room for coming up with ideas, one room for writing and one room for drawing. Whether this story is true or not, it makes perfect sense to adapt this technique for writing papers.
A short excursion: think about your getting-up habits. What is the order of going to the bathroom, making coffee (or tea), brushing your teeth, etc.? It is probably the same every day. And try to recall how unhappy you are when this order is broken. The reason for this is that we have habits about how things have to happen. Make it a habit to write every day! Make it a habit to think about problems in one place (e.g. in the shower, on your way home, on your bike, while exercising), to write down keywords in another place (e.g. in the library, the pub, the cool coffee house next door), and to rewrite in your office (or at home, if you cannot work in the office).
Furthermore, develop habits for starting to write. For example, I always drink a cup of coffee before I start to write. It works like a mental switch.
Make it a habit to plan what you want to do in your writing hour. For example: write 1000 words. Rewrite one chapter.
Make it a habit to reward yourself for the work you have done. Not only does it make you happy to have done the work you planned; it also makes you happy that you can (finally) watch the YouTube video you were eager to see. Rewarding yourself keeps you from procrastinating, as you change your habit from watching YouTube for procrastination to watching YouTube as a reward.

Technique 6: Take your time

Rome was not built in a day. Neither is a scientific paper written in one. Take your time. After having written your first and second drafts, leave the manuscript alone for a couple of days (sometimes even a week or two). Work on other things. In this way you clear your head of the current project, allowing you to reread your manuscript more critically and to find errors in the writing, the structure and the argumentation. Do this at least twice before you pass your manuscript on to someone else.

Good writing!

English Orthography – The capitalization of nouns between the 17th and 18th century

I recently visited the “39. Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft” (DGfS) in Saarbrücken, of which I am officially a new member (yeah, it was about time!). The weather was terrible; it was pouring during my entire visit. Less time to procrastinate and walk through Saarbrücken, more time to listen to interesting talks. And there were plenty of them.

One talk, namely the one by Stefania Degaetano-Ortlieb on discourse connectors in scientific writing, inspired me to write this post. It was interesting, indeed, but I thought: connectors, schmectors. What about the nouns in those scientific texts from the early 18th century? Almost all of them were written with an uppercase initial letter.

I couldn’t believe it. Uppercase in English? That is like German or… actually, it seems that this is a very German peculiarity. Nouns in Danish and Norwegian also used to be written in uppercase, but the practice was abolished in 1869 in Norway and in 1948 in Denmark. That leaves Germany, the only country in the world expecting its kids to know and understand the difference between nouns and all the rest of the word classes. How dare they…

Back to English. I went to Google’s Ngram Viewer. The website is amazing and allows you to inspect word frequencies across time on the basis of modern and historic texts. I checked the spelling of one of the most common words, house, starting at 1500 and ending in 2000. Check out what I found in the following figure. The red line represents “house” in lowercase, the blue line “House” in uppercase.

[Figure: Google Ngram frequencies for “house” vs. “House”, 1500-2000. Red line: house. Blue line: House.]

There is a clear dip in the period from roughly 1650 to 1750, where “house” in lowercase is in the minority and “House” in uppercase is in the majority. Before and after that period, the lowercase spelling is in the majority.

 

OK, that was intriguing, so I checked six of the most common nouns in almost every language, “House, Houses, Man, Woman, Child, Children”, in upper and lower case. The following figure shows the ratio between uppercase and lowercase spellings, with values above 100% indicating that the uppercase spelling is in the majority.
(If you want to test it on your own, the formula was: (House+Houses+Man+Woman+Child+Children) / (house+houses+man+woman+child+children).)

[Figure: Ratio of uppercase to lowercase spellings in American English (left) and British English (right). Values above 100% indicate that uppercase is in the majority.]

This supported my assumption that something happened in the English writing world in the one hundred years between 1650 and 1750. The phenomenon appears in both American English (left) and British English (right).

That said, I had to find out what actually happened. I started with an entry on Wikipedia, which was more than boring. It stated only the contemporary capitalization rules. Nothing historic. Then I googled the topic. I found roughly four million results. Checking out the top 20, I found that none were conclusive. There are some theories. For example, one states that the script written at that time did not allow a proper distinction between words written in uppercase, because the letters were easily confused; as a result, capitalization was abolished. But that was not very rewarding, because I was still desperate to know: WHY!? Why did capitalization start, and why did it end?

Most German students think capitalization is a mean thing teachers made up to torture them. However, studies on reading show that readers seem to benefit from capitalization, as comprehension is facilitated by the additional “cue” (read e.g. here). But if that is really the case, I wonder why only German sticks to the capitalization of nouns while all other alphabet-based orthographies don’t use it.

On Google Scholar I found Richard Venezky’s book “The American Way of Spelling”. Unfortunately, the chapter on the writing system itself was not available on Google Books, and the library of my university does not have the book in stock. I thus decided to contact Dr. Venezky, only to be informed by the email daemon that his address was no longer valid.

I ordered the book from another library and will hopefully find something in a couple of weeks. And if not… well, I doubt that I will work myself through historic texts on spelling.

Can we see speech?

Of course we can! But let us start at the beginning!
Remember the scene in 2001: A Space Odyssey in which the two characters Dave Bowman and Frank Poole conspire against HAL 9000, the omnipotent AI of the Discovery One. They isolate themselves in a shuttle, assuming that since HAL cannot hear them anymore, they can talk freely. Well, they obviously forgot HAL’s ability to read lips.

 

Naively, we know that we can read lips and somehow map their movements onto something like a “meaning”. It is said that deaf people use this technique. This means that our brain not only contains an articulatory representation of speech, i.e. how we need to move our lips, tongue and jaw in order to speak; we also have a representation of what our mouth looks like when we speak. You can easily check this on your own: next time you watch a movie in a foreign language, count how often you deliberately look at the characters’ lips in order to enhance perception!

In the 1970s, Harry McGurk and his research assistant John MacDonald were able to show what has now become known as the McGurk effect: mismatching the visual and auditory information for a syllable, e.g. [ba], makes you hear something completely different. Check out the following video:

What happens in the movie above is the following: you see the mouth movements for the syllable [ga] while hearing the auditory signal for [ba]. Your brain tries to resolve this conflicting information, and what you hear (or are supposed to hear) is {da}. If this is the case for you, then this effect clearly shows that the brain stores both the visual and the auditory information and uses both to process speech. If this were not the case, there would be no conflict to resolve, and you would simply perceive the auditory signal.

Massaro and Stork (1998), “Speech Recognition and Sensory Integration”, explain this effect on the basis of cue similarity, with cues being pieces of information in the signal that the brain processes. [ba] and [da] share auditory cues; [ga] and [da] share visual cues. When you mix these, i.e. auditory [ba] + visual [ga], the brain chooses the output, or percept, that matches a category with the highest probability; in the case of the McGurk effect, that is {da}. Massaro and Stork used this probability- and similarity-based algorithm to create a computer program which can read lips – the precursor to HAL. At the same time, the McGurk effect shows us that the brain stores and uses all the physical information contained in the input.
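A toy version of this integration scheme makes the prediction concrete (the support values below are made up for illustration):

# degree of support each modality lends to the candidate syllables
support_aud <- c(ba = 0.60, da = 0.35, ga = 0.05)  # auditory [ba]
support_vis <- c(ba = 0.05, da = 0.35, ga = 0.60)  # visual [ga]

# multiply the supports and normalise to probabilities
combined <- support_aud * support_vis
combined <- combined / sum(combined)
round(combined, 2)           # {da} wins with ~0.67
names(which.max(combined))   # "da"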

Therefore: yes, we can see speech. And it is nothing special, quite the contrary. The information is there, and the brain uses it!

Together with my colleague Daniel Duran from the Institut für Maschinelle Sprachverarbeitung, Stuttgart, I am investigating in a modeling project how cue overlap and frequency differences between [ba], [da] and [ga] might be a source of the McGurk effect. We want to investigate this by comparing the “Naive Discriminative Learner”, an algorithm capturing human and animal learning behavior, with an “Exemplar Model”, a model that captures the creation of phonemic categories on the basis of similarities.

We presented our preliminary results during the second Simphon.net Workshop at Schloss Dagstuhl in our talk “Modelling multimodal integration – The case of the McGurk Effect”. This time, the workshop was attended by Bernd Möbius, Laurence White, Frank Zimmerer, Uwe Reichel, Ingmar Steiner, James Kirby, Antje Schweitzer, Katrin Schweitzer, Mike Walsh and our invited guest, Janet Pierrehumbert. The results are promising, and also to a certain extent surprising.


 

Phonetics goes Rocket Science

Last semester, I wanted to show my students the turbulence in the air around the mouth during articulation. Unfortunately, I was not able to find any video material that nicely illustrates such effects. We therefore started a cooperation with the “Deutsches Zentrum für Luft- und Raumfahrt” (DLR) in Lampoldshausen. Together with Friedolin Strauss from the DLR, Denis Arnold, Tino Sering and I recorded a total of 45 seconds of speech using schlieren photography captured by a high-speed camera.


You might be wondering: “Why only 45 seconds of speech?” Well, the camera recorded at a sampling rate of 10,000 frames per second. After recording roughly 5 seconds of speech, the system had to upload the data (20-30 GB) to the server, which took around 30 minutes. In total, we spent four hours recording and, as you can see, shooting nice pictures.

 


Schlieren photography allows you to track density changes in a medium, in our case the air. With a temporal resolution of 100 microseconds (0.1 milliseconds), the camera allows us to investigate subtle changes in the airflow out of the mouth and the nose. Already during the screening of the material, we saw effects we had not anticipated. We are looking forward to the analysis, once the data is on our servers.

If you want to know more about our visit to Lampoldshausen, the DLR published a blog post here (in German).

Introduction to R – rewritten

I completely rewrote my Introduction to R. The premise of the new manuscript is that you want to process a written or spoken linguistic corpus with R in order to perform statistical analyses.

I reused a lot of text from the previous script, especially the static introductions explaining what works how. But now it is embedded in a narrative: “preprocess a corpus”. The idea is to teach you all the necessary stuff only when it becomes necessary.

You can download the current version here.