Good Scientific Writing is Writing for Readers

A while ago I gave some tips and tricks on how to start a scientific manuscript (click here). The idea was to record yourself while talking about your research, transcribe the recording, and then edit the transcription.

I have not tackled, however, how to actually write and structure your text, build an argument, tell a story. And you know why? Because it is such a complicated process that I do not feel qualified enough to talk about it. At least not yet.

Yes, a scientific text is a story. Scientists are writers, and well-written papers are key to letting the world know about your world. Thanks to a colleague of mine, I discovered the teachings of Judy Swan, Associate Director for Writing in Science and Engineering. One cornerstone of her career is communicating about scientific writing. Her key insight is that texts do not have a fixed interpretation. “Words do not have meanings, they have interpretations”, she states. The interpretation of words, sentences, and entire texts depends on their context. Therefore, once a text gets published, its interpretation depends on the readers: on how much they know and how much they understand of what we write.

Good scientific writing manages the readers’ expectations. It begins at the level of the paper’s structure – introduction, methods, results, discussion – but it carries on at the level of paragraphs and sentences. In a well-written scientific text, important pieces of information are placed in the locations where readers expect to find them.

Enjoy the video.

Does morphology affect phonetic characteristics?

Yes, is the short answer. The long answer: it depends on how well you learned the connection between a morphological function and the word, the sound, and the context in which the word is located.

Long story short: the better you have learned this connection, and consequently the more sure you are about the message you want to convey, the longer you make the phonetic signal.

Together with my colleagues Mirjam Ernestus, Ingo Plag and Harald Baayen, I was able to show this relation for word-final [s] or [z] in American English and the morphological function it expresses, e.g. plural in ‘dogs’ or third person singular in ‘takes’.

To simulate learning with a computer, we used Naive Discriminative Learning (NDL), a computational formalization of Discriminative Learning. Discriminative learning assumes that speakers learn to discriminate their environment by means of language. Listeners, on the other hand, learn to use the perceived signal to predict the message the speaker intended. The theory predicts that the amount of experience speakers and listeners have with the relation between message and signal will affect their behavior. In speakers, the behavior is the production of the speech signal which encodes the message; in listeners, the behavior is how they respond to the signal. Here, we studied the speakers.

We operationalized the intended message as the morphological function a word has to convey and the cues as the intended acoustic signal. NDL thus learned to discriminate the morphological function on the basis of the acoustic cues. Crucially, our model used cues not only from the target word, such as ‘dogs’ or ‘takes’, but also from the words in its context. The reason for this is that we assume that a word’s morphological function is not only encoded by the acoustics of the word itself, but also by the acoustics of the words surrounding it.

NDL further allows us to calculate measures of 1) how well the morphological function is supported by a set of cues and 2) how uncertain the model is about the lexical information. Take driving on the highway as an example. Let’s say you are waiting for an exit to leave the highway and finally a sign appears. The larger the exit sign, the more support you have that an exit is coming up. However, the size of the sign does not say anything about how certain (or uncertain) you are that it is actually the exit you should take. Both will affect how you behave on the road.
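To give a flavor of what such a simulation looks like in R, here is a minimal sketch. It assumes the estimateWeights() interface of the ndl package; the toy cue and outcome codings are made up for illustration (the actual study derived its cues from the target word and its context, as described above), and the entropy-based uncertainty measure is just one possible operationalization of the second quantity.

```r
# Minimal NDL sketch: learn cue-outcome weights, then compute support and uncertainty.
library(ndl)

# Each row is one learning event: cues (separated by "_") paired with the
# morphological function they support, plus an event frequency (toy data).
events <- data.frame(
  Cues      = c("d_o_g_z", "d_o_g_z", "t_ei_k_s", "d_o_g"),
  Outcomes  = c("PLURAL",  "PLURAL",  "THIRD_SG", "SINGULAR"),
  Frequency = c(50, 30, 40, 60),
  stringsAsFactors = FALSE
)

# Equilibrium weights of the Rescorla-Wagner equations: cues (rows) x outcomes (columns)
w <- estimateWeights(events)

# 1) Support (activation) for each outcome = sum of the weights of the cues
#    that are present in the signal
cues_present <- c("d", "o", "g", "z")
activation   <- colSums(w[cues_present, , drop = FALSE])

# 2) Uncertainty: here, the Shannon entropy over the rectified, normalized activations
a <- pmax(activation, 0)
p <- a / sum(a)
uncertainty <- -sum(p[p > 0] * log2(p[p > 0]))

activation
uncertainty
```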

To sum it up, NDL allows you to simulate how morphological function and the phonetic signal are connected and therefore to investigate how the processes guiding speech production affect the phonetic signal.

[Figure: Interaction_Adiv_Act_LastDiphone.png]
The Figure shows how [s] duration is affected by the interaction between bottom-up support (x-axis) and uncertainty (y-axis) about a word’s morphological function. Blue regions represent short durations, yellow regions represent long durations.

The entire study will be published under the title:

Phonetic effects of morphology and context: Modeling the duration of word-final S in English with Naïve Discriminative Learning

It has been accepted by the “Journal of Linguistics” and is currently being prepared for publication. A pre-publication version can be downloaded from PsyArXiv here.

The paper also provides a very good introduction to Discriminative Learning, how it is performed and how it can be used for predicting phonetic characteristics. If you want to perform an NDL learning simulation on your own, you can find an introduction to this technique in my R introduction here.

How do appearances of female characters in TV shows affect viewer ratings?

 

Introduction

Recently, I saw a YouTube video reviewing the ratings of the eleventh Doctor Who season, starring Jodie Whittaker as The Doctor. While the season is highly acclaimed among critics, according to rottentomatoes.com, fans regard it as one of the worst. It has the worst average imdb.com ratings of the entire show and the worst rating for a season finale.
Well, I have seen the show. I have seen all the Doctor Who seasons. And I must say: I liked the new season a lot. Apart from one episode (The Witchfinders), all of them were great. But this post is not about writing a review of Doctor Who. This post is about why fans regard this specific season of Doctor Who as one of the worst. My hypothesis: because The Doctor is played by a woman. I do not have to repeat all the hateful responses when the BBC announced that Jodie Whittaker would star as The Doctor. But this reaction is the basis of my hypothesis: viewers do not like women to be the main characters of TV shows. In this post, I am going to test this hypothesis using the ratings of Star Trek: The Next Generation (1987 to 1994).

Of course, your first reaction is: Come on, that show is like a hundred years old. It is from the last century; people were different back then, in the dark ages. My response to anyone who really thinks this: Really? Are we really that different, in light of the current political situation (sorry, different topic) and in light of the reactions to a woman taking over as the main character of a Science-Fiction/Fantasy/Time-travel show whose main character can change into every shape possible? Are we really that different? Then prove me wrong. I will continue here with Star Trek: The Next Generation (TNG). And by the way, this happens to be the dataset I have available. If you can provide any other, more “modern” dataset, I will analyze it (but only if you preprocess the data).

The current material

TNG is also interesting for one special reason. It is a show that advocated reason, liberal values, non-violent solutions (unlike its successors), and women’s rights, and it fought injustice on every level of society. A little bit like Doctor Who today, but with Klingons. So, let’s put it to the test: How did fans respond to women in TNG? Concretely, I am interested in the women-to-men ratio of the dialogue in each episode.

I collected the data in 2015 (yes, it’s been a while, but science takes time). I downloaded the transcripts of all the TNG episodes from http://www.chakoteya.net/ (many thanks to whoever transcribed the episodes), which, I’m just realizing, also has transcripts for Doctor Who (any volunteers?). I could have taken the average ratings for each episode, but I wanted more fine-grained data. This is why I contacted imdb.com, who provided me with information on the sex and age of the raters for all the episodes of all Star Trek franchises (thanks again!).

I decided to calculate the average rating per episode depending on the sex of the rater, as I was interested in whether women would rate episodes with a higher proportion of women’s dialogue better than men would (any guesses?). This gives me two ratings per episode: one for female raters and one for male raters. For the analysis, I transformed imdb’s 1 to 10 scale to range between 0 and 1.

I am focusing here on the dialogue of the main characters: Wesley, Worf, Dr. Crusher, Dr. Pulaski, Laforge, Troi, Riker, Data and Picard. I ignored all the guest characters, sidekicks, Klingons, Vulcans and whoever else appears on the show, mainly because it was too complicated to tag each of these characters according to their sex.

I calculated a simple measure: the main characters’ women-to-men ratio in terms of the number of words spoken in each episode. I am ignoring the name of the character (everyone loves Data) and, most importantly, I am ignoring the plot. This is important, because only in this way can we assess whether viewers have a certain preference for the sex of the characters. I would expect female raters to favor episodes in which female characters appear more.

This gives us 176 values for 176 episodes. The larger the value, the more text (measured by the number of words) the main female characters have in that episode. A ratio of 1 would mean that men and women have an equal amount of text. Above 1, women have more text; below 1, men have more text.
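For the curious, here is roughly how such a ratio can be computed in R. This is only a sketch: the data frame, its column names, and the character-to-sex mapping are my own illustration, not the original script, and the toy transcript lines are made up.

```r
# Toy example of the assumed transcript format: one row per spoken line
lines <- data.frame(
  episode   = c(1, 1, 1, 2, 2),
  character = c("PICARD", "TROI", "DATA", "CRUSHER", "RIKER"),
  text      = c("Make it so", "I sense great anger", "Intriguing",
                "He needs rest", "Red alert"),
  stringsAsFactors = FALSE
)

female <- c("CRUSHER", "PULASKI", "TROI")
male   <- c("WESLEY", "WORF", "LAFORGE", "RIKER", "DATA", "PICARD")

# count words per line and tag the speaker's sex (main characters only)
lines$n_words <- lengths(strsplit(lines$text, "\\s+"))
lines$sex <- ifelse(lines$character %in% female, "female",
             ifelse(lines$character %in% male,   "male", NA))

# total words per episode and sex
words <- aggregate(n_words ~ episode + sex,
                   data = lines[!is.na(lines$sex), ], FUN = sum)

# one row per episode, then the women-to-men ratio
wide <- reshape(words, idvar = "episode", timevar = "sex", direction = "wide")
wide$wm_ratio <- wide$n_words.female / wide$n_words.male
wide
```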

The following figure illustrates the distribution of the women-to-men ratio across all episodes. It becomes very apparent that men have more text than women in TNG. There are only 9 episodes in which men have less text. On average, women have roughly 34% of the text that men have (median of 22%). For the analysis, I am ignoring those episodes in which women have more text, because excluding them makes the data better distributed (don’t worry, I have also performed the analysis including them and it does not change a thing).

[Figure: ST_density.png]

The analysis

I used the betareg library, which allows us to model values that range between 0 and 1. The betareg library is designed to model proportions by means of beta regression, which relates values between 0 and 1 to the predictors via a logit link. Hence, all statistical results presented below (i.e. beta = X) are logits. (If you want to know more about logits, click here.) The summary of a beta regression gives us coefficients (beta estimates), standard errors (sde), z-values (larger than 2 roughly indicates significance) and p-values (smaller than 0.05 indicates significance).

I fitted the ratings with three predictors in one model.
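Below is a sketch of what such a model might look like in betareg syntax. The data frame and column names are hypothetical, and the squeeze away from exact 0 and 1 is one common convention for beta regression, not necessarily the transformation used here.

```r
library(betareg)

# 'ratings' is assumed to hold one row per episode and rater sex, with the raw
# imdb rating, the episode number, the rater sex and the women-to-men text ratio.
ratings$rating01 <- (ratings$imdb_rating - 1) / 9           # rescale 1-10 to 0-1
n <- nrow(ratings)
ratings$rating01 <- (ratings$rating01 * (n - 1) + 0.5) / n  # squeeze off exact 0 and 1

m_tng <- betareg(rating01 ~ episode_number + rater_sex * wm_ratio, data = ratings)
summary(m_tng)   # estimates are on the logit scale of the mean rating
```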

The first predictor was the episode number, included in order to investigate how ratings evolved across the seasons. The effect was significantly positive (beta = 0.028, standard error = 0.0005, z = 5.7, p < 0.001): TNG’s ratings increased with every season. This effect is illustrated in the next plot. Although there is strong variation within each season, the ratings got better and better. Nicely done, Star Trek producers.

 

[Figure: ST_episodenumber.png]

Using an interaction between rater sex and the women-to-men text ratio, I wanted to inspect whether female raters would rate an episode differently than male raters. However, the interaction was not significant (beta = -0.13, standard error = 0.24, z = -0.53, p = 0.6), which means that there was no detectable difference between female and male raters in how the ratio affected their ratings.

Unsurprisingly, male raters gave TNG higher ratings than female raters: on a scale between 0 and 1, female raters gave on average 0.66 (beta = 0.86, sde = 0.07, z = 12.7, p < 0.001), male raters gave on average 0.71 (beta = 0.23, sde = 0.8, z = 3, p = 0.003).

Now, what about the women-to-men text ratio? Well, the effect is strongly negative. The more text women had in an episode, the worse the episode was rated (beta = -0.86, sde = 0.17, z = -5.1, p < 0.001). The following plot illustrates this effect.

 

[Figure: ST_rating.png]

Two insights follow from the figure. First, the distribution of the women-to-men text ratio is skewed: there are fewer episodes with more text for women. Second, the variability in ratings is really large for episodes with a high percentage of text for male characters. One potential interpretation of this finding is that since there are fewer female-oriented episodes in the series, there is also a smaller probability that there will be a good episode with a higher women-to-men text ratio.

Another question that arises is whether the ratio changed across the seven years TNG was airing. It did not. The Spearman rank correlation between the women-to-men text ratio and the episode number is -0.006 when calculated on the data set excluding the nine strong outliers (i.e. the episodes with a ratio larger than 1), and 0.065 when the strong outliers are kept in the data set. The following figure illustrates this by means of a dotplot.

[Figure: ST_development.png]
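As a side note, the correlation reported above can be computed in one line each. The sketch below reuses the hypothetical wide data frame from the ratio sketch and assumes that episode is a running episode number.

```r
# excluding the nine strong outliers (ratio > 1) ...
with(subset(wide, wm_ratio <= 1), cor(wm_ratio, episode, method = "spearman"))
# ... and including them
with(wide, cor(wm_ratio, episode, method = "spearman"))
```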

In defense of the producers: they realized that they had very few female-oriented episodes and started to produce more of them at some point. But across the entire lifetime of the series, this did not change the overall trend of neglecting the main female characters.

 

Discussion

The effect is obvious: the more text the main female characters had in an episode, the worse the episode was rated. Given that the interaction between the women-to-men text ratio and the sex of the rater was not significant, there was no difference between female and male raters with respect to how they responded to the amount of female dialogue. Note that this finding is independent of the plot of the episode. It is based simply on the number of words a female character speaks (which roughly corresponds to screen time).

What is more important: female and male raters did not differ in their ratings. This means that female viewers hold the same opinions about how often female characters should appear on television as male viewers. In my opinion, this result is devastating. Not only does it mean that viewers do not accept female characters being present on television. It also means that this opinion is shared by those who are represented by these characters: the women themselves. Given that today’s television has a strong influence on public opinion and on how each person defines their role in this society, this is absolutely unacceptable. Of course female viewers rate episodes with more female appearances worse than those with more male appearances. This is what they have learned to be the status quo by watching television.

Coming back to Doctor Who (I have not forgotten it). I claim that the reason why the latest season of Doctor Who has significantly worse ratings than all those before is simple: the main character is portrayed by a woman, and the plots center around female characters. For example, the episode “Rosa” focuses on Rosa Parks. In the episode “The Tsuranga Conundrum” there is a female general. And so forth.

Again, someone might argue that people have changed. More and more female characters appear on TV shows. I see and acknowledge that. However, TV consumption is a vicious circle. Producers and authors want to sell a product, and they create it such that broadcasting companies will buy it. The bosses of the broadcasting companies will only buy products that attract money, in the form of advertisers. The advertisers will only buy ads for a show if that show sells, i.e. has high ratings and a high percentage of viewers. Viewers will only watch what they like, and they only like, and this is the crucial point here, what they are used to. And viewers are not used to TV shows in a traditionally male domain being female oriented, even though there are lots of female Star Trek viewers. Note that this logic still applies even though viewing habits have changed in the time of Netflix and other online streaming companies.

The outrage that Jodie Whittaker had to endure after she was announced as the first female Doctor shows that viewers’ expectations have not changed in the 30 years since TNG first aired. We clearly need to work on that. We need to break the vicious circle.

I fully acknowledge that the current finding has to be regarded with caution because Star Trek targets a male audience. Not only should it be replicated with Doctor Who (which started this whole idea) but also with TV shows that target both sexes. Maybe someone will provide this data to me.

Acknowledgements

My thanks go to Jessie Nixon who provided interesting input to this blog post.

Statistical investigation of Mass Shootings in the US in 2018

After the recent mass shooting in Thousand Oaks, California, some colleagues and I were discussing potential reasons for these outbreaks of violence in the US. We had a look at the Wikipedia list of mass shootings in 2018. We were shocked to see that as many as 106 shootings were listed. On abc15.com, we found a map illustrating the location of every mass shooting in 2018:

It becomes immediately apparent that most of the shootings happened in the eastern part of the US. Given that there is already structure in the data, we were wondering whether there is more.
For example, is unemployment one reason for the shootings? Or population density? Concretely, the question arose whether the number of casualties can be predicted from such information. I expected the number of casualties to increase with population size, population growth, and unemployment.

Methods

On the basis of the Wikipedia list of mass shootings in 2018, I collected information about the city/town/village in which each mass shooting occurred. The list contains information on 106 mass shootings. I used Wikipedia because it allowed me to easily obtain additional information on the locality of each mass shooting by following the links on the page. I collected the following pieces of information:

  • Population of the locality (city) in 2010 and 2016 (as provided by Wikipedia)
  • Population of the state in 2010 and 2016
  • Unemployment rate of the state

The 2010 data was based on the 2010 United States Census. I would have liked to have information on the unemployment rate of each locality; however, this surpassed my abilities. I also calculated the percentage of population growth between 2010 and 2016 for both locality and state. In total, I used five variables to predict the number of casualties (NumberOfCasualties):

  • PopulationCity (in 2016, ranging from 3365 to 8622698)
  • PopulationState (in 2016, ranging from 601723 to 37253956 )
  • PopulationGrowthCity (between 2010 and 2016, ranging from 0.875 to 1.315)
  • PopulationGrowthState (between 2010 and 2016, ranging from 0.994 to 1.35)
  • UnemploymentRateState (in 2016)

For 9 of the localities, population size was available only for 2016. Those were excluded from the analysis. In pilot analyses, the inclusion of those 9 localities did not change the overall results of the present analysis. The shootings in the Bronx and in Brooklyn were tagged as shootings in New York City.

All of these pieces of information were collected automatically from Wikipedia and its data tables using a custom-made script in R. The download of the HTML pages was performed with the function getURL() from the RCurl package.
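Below is a minimal sketch of what such a scraping step can look like. The example URL and the idea of inspecting all HTML tables on the page are illustrative only; the actual script and the tables it targeted may differ.

```r
library(RCurl)
library(XML)

# example page for one locality; the real script followed the links from the
# Wikipedia list of 2018 mass shootings
url  <- "https://en.wikipedia.org/wiki/Thousand_Oaks,_California"
html <- getURL(url)

# parse every HTML table on the page; which table holds the census population
# figures differs from page to page and has to be inspected
tables <- readHTMLTable(html, stringsAsFactors = FALSE)
length(tables)
```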

Analysis and Results

City population size in 2016 had to be log-transformed in order to approximate a normal distribution. All predictors were centered and scaled for the analysis (z-scaled). I first performed a standard Spearman rank correlation analysis between the predictors (a short sketch of these steps follows the list below). I want to highlight two results here:

  • PopulationGrowthState and PopulationGrowthCity were strongly correlated (R = 0.56). This is not surprising at all, as state population growth partly depends on city population growth.
  • PopulationGrowthState was negatively correlated with UnemploymentRateState (R = -0.48). The same, albeit weaker, effect was found for PopulationGrowthCity (R = -0.32). This means that when the population grew, the unemployment rate went down. While this might not be new to people working in demographics and economics, it surprised me, as I would have expected these two variables to be positively correlated. I would like to see whether this holds on a larger scale.
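Here is the short preprocessing and screening sketch referred to above, assuming a data frame shootings with the predictor columns named as in the list; the data frame itself and the z-scaled variable names are my own.

```r
# z-scale the predictors; city population is log-transformed first
shootings$logPopCity    <- as.numeric(scale(log(shootings$PopulationCity)))
shootings$zPopState     <- as.numeric(scale(shootings$PopulationState))
shootings$zGrowthCity   <- as.numeric(scale(shootings$PopulationGrowthCity))
shootings$zGrowthState  <- as.numeric(scale(shootings$PopulationGrowthState))
shootings$zUnemployment <- as.numeric(scale(shootings$UnemploymentRateState))

# Spearman rank correlations between the predictors
preds <- c("logPopCity", "zPopState", "zGrowthCity", "zGrowthState", "zUnemployment")
round(cor(shootings[, preds], method = "spearman"), 2)
```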

NumberOfCasualties ranged between 0 (N = 40) and 17 (N = 1).

I fit a generalized linear model to predict NumberOfCasualties (function glm, family = poisson). The analysis is quite tricky because of the high collinearity in the data, i.e. some predictors can be used to predict other predictors, as becomes apparent above. This is why I used a step-by-step inclusion procedure and checked for indicators of collinearity in the model. If you want to read more about how to address collinearity in regression analyses, I have published a paper on the topic together with two colleagues here. I did not test any interactions between the predictors because of the small sample size.
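A sketch of the final model is shown below, reusing the z-scaled predictors from the previous sketch. The VIF check via car::vif() is my own choice of collinearity diagnostic here, not necessarily the one used in the original analysis.

```r
library(car)   # for vif()

# Poisson regression on the number of casualties with the three retained predictors
m_shoot <- glm(NumberOfCasualties ~ logPopCity + zPopState + zGrowthCity,
               family = poisson, data = shootings)
summary(m_shoot)
vif(m_shoot)   # values near 1 suggest little residual collinearity
```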

As it turned out, UnemploymentRateState and PopulationGrowthState were not significantly predictive of NumberOfCasualties (this means that their effects cannot be used to support the initial hypothesis). I found this surprising, as I indeed would have predicted that unemployment drives people to commit horrible things.
The three remaining predictors were significant. The model’s intercept was 0.271 (std = 0.09, z = 2.9, p = 0.003). This is the estimated logit, where 0 equals 50% (in R, you can transform those values back using the function inv.logit() from the boot package; I did not apply this because it changes the visualization):

  • PopulationState (estimate = 0.2, std = 0.07, z = 2.9, p = 0.004)
  • PopulationCity (estimate = -0.4, std = 0.08, z = -4.8, p < 0.001)
  • PopulationGrowthCity (estimate = 0.3, std = 0.07, z = 4.4, p < 0.001)

The effects are illustrated in the following figure (plotted with the function visreg() from the package visreg; I restricted the y-axis, which is why not all data points are shown). The y-axis represents the partial effect on the estimated logit, i.e. how the intercept (logit 0.271 = probability of 0.57 that a large number of people are killed) has to be changed depending on the predictor. 0 therefore represents no change (horizontal gray dashed line); -2 means that the intercept has to be lowered to logit -1.729 = 0.15 probability; +2 means that the intercept has to be increased to logit 2.271 = 0.90 probability of a high number of casualties.
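A sketch of the corresponding visreg() calls, assuming the model m_shoot from the sketch above, might look like this; visreg() plots one predictor at a time while holding the others constant.

```r
library(visreg)
library(boot)

visreg(m_shoot, "zPopState",   ylim = c(-2, 2))   # state population size
visreg(m_shoot, "logPopCity",  ylim = c(-2, 2))   # city population size
visreg(m_shoot, "zGrowthCity", ylim = c(-2, 2))   # city population growth

inv.logit(0.271)   # back-transforming the intercept as described in the text
```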

[Figure: MS_figure.png]

Panel (A) represents the population size of the state in which the mass shooting occurred. The larger the state, the larger the probability of a high number of casualties. This supports my hypothesis. Surprisingly, and contrary to my hypothesis, the effect is reversed for the population size of the village/town/city where the shooting occurred: the smaller the city, the larger the probability of a high number of casualties; the larger the city, the smaller that probability. Finally, the probability of a high number of casualties increases when the city population grew strongly between 2010 and 2016. This makes sense: when cities grow, the total state population increases as well.

I will avoid any further interpretation of the data and the analysis. I also have not included other, potentially very interesting, variables in this analysis, in order to keep things as simple as possible.

Latest manuscript on using R for corpus studies.

It’s been almost a year since I updated the introduction to programming R for corpus studies. Download the latest script here.

There are plenty of changes; among others, there is…
* … a new link to getting the data for the introduction
* … an introduction to writing functions
* … a short introduction to preparing and running regression analyses
* … a short introduction to visualizing your models

Regarding the formatting:
* there are hyperlinks in the file
* section names are in the header

And most importantly, thanks to many readers, some typos were corrected.
Have fun!

DGfS 2018 Workshop: “Variation and phonetic detail in spoken morphology”

Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft
Stuttgart, Germany

March 7-9, 2018

Call For Abstracts

Submission deadline 20.8.2017
Invited Speaker
Sharon Peperkamp, Laboratoire de Sciences Cognitives et Psycholinguistique, Paris

Description

The relation between phonetics, phonology and morphology is much more complex than is assumed in current theories. For example, stress preservation in derived words is more variable than hitherto assumed. A word like orìginálity preserves main stress in its base oríginal as secondary stress, but other words have variable secondary stress (e.g. antìcipátion ~ ànticipátion, derived from antícipate, e.g. Collie 2008). In addition, there is evidence suggesting that acoustic and articulatory detail may play a role in the realization of morphologically complex words. For example, an [s] in American English is longer if it is part of a stem than when it is a plural marker or a clitic (cf. Plag et al. 2017). Pertinent work on both issues springs from different linguistic disciplines, in particular psycholinguistics, theoretical linguistics, phonetics, phonology, morphology, computational and quantitative linguistics, and has led to novel proposals regarding the general architecture of the morphology-phonology-phonetics interface. Different theories have been proposed on the basis of lexical listing vs. computation, analogical models or discriminative learning.

Within different linguistic disciplines, we see an increasing body of empirical work that addresses problems of variation and phonetic detail in morphology with the help of spoken data (e.g. Cohen 2015; Ben Hedia & Plag 2017, Strycharczuk & Scobbie 2017). Furthermore, there is more and more work testing theoretical proposals with the help of computational simulations (e.g. Arnold et al. 2017).

This workshop aims to bring together work from different disciplines that study and model variation and phonetic detail on the basis of spoken data. Relevant issues include: What new insights can spoken data bring to our knowledge about morphophonological variation? Are speakers sensitive to and/or aware of systematic subphonemic differences? Which cognitively plausible computational and psycholinguistic models best account for this variability? How can our theories of morphology deal with variation within and between speakers? What is the status of morphophonological and morphophonetic variation in grammar?

References

Arnold, D., Tomaschek, F., Sering, K., Lopez, F. & Baayen, R. H. 2017. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit. PLOS ONE.

Ben Hedia, Sonia & Ingo Plag. 2017. Gemination and degemination in English prefixation: Phonetic evidence for morphological organization. Journal of Phonetics 62, 34-49.

Cohen, Clara P. 2014. Probabilistic reduction and probabilistic enhancement. Morphology, 24(4), 291-323.

Collie, Sarah. 2008. English stress preservation: the case for ‘fake cyclicity’. English Language and Linguistics 12(3). 505–532.

Plag, Ingo, Julia Homann & Gero Kunter. 2017. Homophony and morphology: The acoustics of word-final S in English. Journal of Linguistics 53(1), 181–216.

Strycharczuk, Patrycja and James M. Scobbie. 2017. Whence the fuzziness? Morphological effects in interacting sound changes in Southern British English. Laboratory Phonology: Journal of the Association for Laboratory Phonology 8(1): 7, 1–21.

Important Dates

Abstract submission deadline: Aug. 20, 2017
Notification of acceptance: Sept. 3, 2017
Submission of final abstract: Nov. 1, 2017
Conference: March 7-9, 2018

Requirements
Abstracts should be 300-400 words (1 page) and may contain additional material, such as examples, figures and references on another page. The uploaded file must be in PDF format.

Submission https://easychair.org/conferences/?conf=dgfsag1

Organization and Programme Committee
Sabine Arndt-Lappe (University of Trier)
Gero Kunter (University of Düsseldorf)
Ruben van de Vijver (University of Düsseldorf)
Fabian Tomaschek (University of Tübingen)

Predicting word categorization using NDL

Together with Denis Arnold, Konstantin Sering, Florence Lopez and R. Harald Baayen, I published a paper on applying Naive Discriminative Learning to acoustic cues in PLOS ONE. NDL allows us to model and simulate human learning by means of a two-layer input-output network.
 
We derive our input cues directly from the acoustics of the speech signal, independently of its spectral and temporal variation. By these means we show that NDL can predict auditory categorization of words with human-like accuracy without assuming a pipeline via phones.
 
By the way, anyone who is interested in learning how to use NDL is invited to visit my workshop at Phonetics and Phonology in Europe (PaPE) 2017.
[Figure: journal.pone.0174623.g001.PNG]