Larry McEnerney, Director of the University of Chicago’s Writing Program, talks about what is important in a scientific text. He discusses how to properly formulate a problem for the community of readers a paper is aiming at. Very insightful.
A while ago I gave some tips and tricks on how to start a scientific manuscript (click here). The idea was to record yourself while talking about your research, transcribe the recording, and then edit the transcription.
I have not tackled, however, how to actually write and structure your text, build an argument, tell a story. And you know why? Because it is such a complicated process that I do not feel qualified enough to talk about it. At least not yet.
Yes, a scientific text is a story. Scientists are writers, and well-written papers are key to letting the world know about your work. Thanks to a colleague of mine, I discovered the teaching of Judy Swan, Associate Director for Writing in Science and Engineering. One cornerstone of her career is communicating about scientific writing. Her key insight is that texts do not have a fixed interpretation. “Words do not have meanings, they have interpretations”, she states. The interpretation of words, sentences, and entire texts depends on their context. Therefore, once a text is published, its interpretation depends on the readers: on how much they know and how much they understand of what we write.
Good scientific writing manages the readers’ expectations. It begins at the level of the paper’s structure – introduction, methods, results, discussion – and carries on at the level of paragraphs and sentences. In a well-written scientific text, important pieces of information are placed in the locations where readers expect them.
Enjoy the video.
Yes, is the short answer. The long answer: it depends on how well you have learned the connection between a morphological function and the word, its sound, and the context in which the word occurs.
Long story short: the better you have learned this connection, and consequently the more certain you are about the message you want to convey, the longer you make the phonetic signal.
Together with my colleagues Mirjam Ernestus, Ingo Plag and Harald Baayen, I was able to show this relation for word-final [s] and [z] in American English and the morphological function it expresses, e.g. the plural in ‘dogs’ or the third person singular in ‘takes’.
To simulate learning with a computer, we used Naive Discriminative Learning (NDL), a computational formalization of Discriminative Learning. Discriminative Learning assumes that speakers learn to discriminate their environment by means of language. Listeners, on the other hand, learn to use the perceived signal to predict the message the speaker intended. The theory predicts that the amount of experience speakers and listeners have with the relation between message and signal will affect their behavior. In speakers, the behavior is the production of the speech signal which encodes the message; in listeners, it is how they respond to the signal. Here, we studied the speakers.
We operationalized the intended message as the morphological function a word has to convey, and the cues as the intended acoustic signal. NDL thus learned to discriminate the morphological function on the basis of the acoustic cues. Crucially, our model used cues not only from the target word, such as ‘dogs’ or ‘takes’, but also from the words in its context. The reason is that we assume a word’s morphological function is encoded not only by the acoustics of the word itself, but also by the acoustics of the words surrounding it.
NDL further allows us to calculate measures of 1) how well the morphological function is supported by a set of cues and 2) how uncertain the model is about the lexical information. Take driving on the highway as an example. Let’s say you are waiting for an exit to leave the highway and finally a sign appears. The larger the exit sign, the more support you have that an exit is coming up. However, the size of the sign says nothing about how certain (or uncertain) you actually are that it is the exit you should take. Both will affect how you behave on the road.
To sum up, NDL allows you to simulate how morphological function and the phonetic signal are connected, and therefore to investigate how the processes guiding speech production affect the phonetic signal.
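Under the hood, NDL rests on the Rescorla-Wagner learning rule. The following Python sketch (our actual models were fitted with dedicated NDL tooling in R; the cue names and toy learning events here are invented for illustration) shows how cue-to-outcome weights come to encode morphological functions, and why an ambiguous cue like word-final [s] ends up with split support:

```python
# Minimal Rescorla-Wagner sketch of discriminative learning, the update rule
# underlying NDL. Cue and outcome names are invented for illustration.

def rw_update(weights, cues, outcomes, all_outcomes, eta=0.01, lam=1.0):
    """Update cue-outcome weights for a single learning event."""
    for o in all_outcomes:
        # total support for outcome o from the cues present in this event
        act = sum(weights.get((c, o), 0.0) for c in cues)
        target = lam if o in outcomes else 0.0
        delta = eta * (target - act)  # error-driven adjustment
        for c in cues:
            weights[(c, o)] = weights.get((c, o), 0.0) + delta

weights = {}
all_outcomes = {"PLURAL", "THIRD_SG"}
# toy events: the cues of 'dogs' signal PLURAL, the cues of 'takes' THIRD_SG;
# the word-final 's#' cue is shared between the two words
events = [({"d", "o", "g", "s#"}, {"PLURAL"}),
          ({"t", "ei", "k", "s#"}, {"THIRD_SG"})] * 500
for cues, outs in events:
    rw_update(weights, cues, outs, all_outcomes)

# an unambiguous cue like 'g' gains stronger support for PLURAL than the
# ambiguous 's#', whose support is split between the two functions
print(round(weights[("g", "PLURAL")], 2))
print(round(weights[("s#", "PLURAL")], 2))
```

Because the final-[s] cue occurs in both plural and third-person events, its weight for each function settles lower than that of unambiguous cues; this split support is exactly what the activation and uncertainty measures described above build on.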
The entire study will be published under the title:
Phonetic effects of morphology and context: Modeling the duration of word-final S in English with Naïve Discriminative Learning
It has been accepted for publication by the “Journal of Linguistics”. A pre-publication version can be downloaded from PsyArXiv here.
The paper also provides a very good introduction to Discriminative Learning, how it is performed and how it can be used for predicting phonetic characteristics. If you want to perform an NDL learning simulation on your own, you can find an introduction to this technique in my R introduction here.
Recently, I saw a YouTube video reviewing the ratings of the eleventh Doctor Who season, starring Jodie Whittaker as The Doctor. While the season is highly acclaimed among critics, according to rottentomatoes.com, fans regard it as one of the worst. It has the worst average imdb.com ratings of the entire series and the worst rating for a season finale.
Well, I have seen the show. I have seen all the Doctor Who seasons. And I must say: I liked the new season a lot. Apart from one episode (The Witchfinders), all of them were great. But this post is not about writing a review of Doctor Who. This post is about why fans regard this specific season of Doctor Who so poorly. My hypothesis: because The Doctor is played by a woman. I do not have to repeat all the hateful responses when the BBC announced that Jodie Whittaker would star as The Doctor. But this reaction is the baseline of my hypothesis: viewers do not like women to be the main characters of TV shows. In this post, I am going to test this hypothesis using the ratings of Star Trek: The Next Generation (1987 to 1994).
Of course, your first reaction is: Come on, that show is like a hundred years old. It is from the last century; people were different back then, in the dark ages. My response to anyone who really thinks this: Really? Are we really that different, in light of the current political situation (sorry, different topic) and in light of the reactions to a woman taking over as the main character of a science-fiction/fantasy/time-travel show whose main character can change into any shape possible? Are we really that different? Then prove me wrong. I will continue here with Star Trek: The Next Generation (TNG). And by the way, this happens to be the dataset I have available. If you can provide any other, more “modern” dataset, I will analyze it (but only if you preprocess the data).
The current material
TNG is also interesting for one special reason. It is a show that advocated reason, liberal values, non-violent solutions (unlike its successors) and women’s rights, and fought injustice at every level of society. A little bit like Doctor Who today, but with Klingons. So, let’s put it to the test: How did fans respond to women in TNG? Concretely, I am interested in the women-to-men ratio of the dialogue in each episode.
I collected the data in 2015 (yes, it’s been a while, but science takes time). I downloaded the transcripts of all the TNG episodes from http://www.chakoteya.net/ (many thanks to whoever transcribed the episodes), which, I’m just realizing, also has transcripts for Doctor Who (any volunteers?). I could have taken the average rating for each episode, but I wanted more fine-grained data. This is why I contacted imdb.com, who provided me with information on the sex and age of the raters for all the episodes of all Star Trek franchises (thanks again!).
I decided to calculate the average rating per episode depending on the sex of the rater, as I was interested in whether women would rate episodes with a higher proportion of women’s dialogue better than men would (any guesses?). This gives me two ratings per episode: one for female raters and one for male raters. For the analysis, I transformed imdb’s 1-to-10 scale to range between 0 and 1.
I am focusing here on the dialogue of the main characters: Wesley, Worf, Dr. Crusher, Dr. Pulaski, Laforge, Troi, Riker, Data and Picard. I ignored all the guest characters, sidekicks, Klingons, Vulcans and whoever else appears on the show, mainly because it was too complicated to tag each of these characters according to their sex.
I calculated a simple measure: the main characters’ women-to-men ratio in terms of the number of words spoken in each episode. I am ignoring the name of the character (everyone loves Data) and, most importantly, I am ignoring the plot. This is important, because only in this way can we assess whether viewers have a preference for the sex of the characters. I would expect female raters to favor episodes in which more female characters appear.
This gives us 176 values for 176 episodes. The larger the value, the more text (measured by the number of words) the main female characters have in that episode. A ratio of 1 would mean that men and women have an equal amount of text. Above 1, women have more text; below 1, men have more text.
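As a small illustration, here is how such a ratio could be computed in Python (the actual analysis was done in R, and the per-character word counts below are invented, not real transcript counts):

```python
# Hypothetical sketch: women-to-men word-count ratio for one episode.
FEMALE = {"Crusher", "Pulaski", "Troi"}
MALE = {"Picard", "Riker", "Data", "Worf", "Laforge", "Wesley"}

def women_to_men_ratio(word_counts):
    """Words spoken by female main characters divided by words spoken by male ones."""
    women = sum(n for name, n in word_counts.items() if name in FEMALE)
    men = sum(n for name, n in word_counts.items() if name in MALE)
    return women / men

# invented word counts for a single episode
episode = {"Picard": 1200, "Riker": 800, "Data": 600,
           "Crusher": 400, "Troi": 300}
print(round(women_to_men_ratio(episode), 2))  # 700 / 2600 ≈ 0.27
```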
The following figure illustrates the distribution of the women-to-men ratio across all episodes. It becomes very apparent that men have more text than women in TNG. There are only 9 episodes in which men have less text. On average, women have roughly 34% of the text that men have (median of 22%). For the analysis, I am excluding those episodes in which women have more text, because excluding them makes the data better distributed (don’t worry, I have also performed the analysis including them and it does not change a thing).
I used the betareg library, which allows us to model values ranging between 0 and 1. It is designed to model ratios by means of beta regression, which transforms proportions between 0 and 1 into logits. Hence, all statistical results presented below (i.e. beta = X) are logits; if you want to know more about logits, click here. The summary of a beta regression gives us coefficients (beta estimates), standard errors (sde), z-values (an absolute value larger than 2 indicates significance) and p-values (smaller than 0.05 indicates significance).
I fitted the ratings with three predictors in one model.
The first predictor was the episode number, in order to investigate how ratings evolved across the seasons. The effect was significantly positive (beta = 0.028, standard error = 0.0005, z = 5.7, p < 0.001): TNG’s ratings increased with every season. This effect is illustrated in the next plot. Although there is strong variation within each season, the ratings got better and better. Nicely done, Star Trek producers.
Using an interaction between rater sex and the women-to-men text ratio, I wanted to inspect whether female raters would rate an episode differently than male raters. However, the interaction was not significant (beta = -0.13, standard error = 0.24, z = -0.53, p = 0.6), meaning there was no detectable difference between female and male raters.
Unsurprisingly, male raters gave TNG higher ratings than female raters: on a scale between 0 and 1, female raters gave on average 0.66 (beta = 0.86, sde = 0.07, z = 12.7, p < 0.001), male raters on average 0.71 (beta = 0.23, sde = 0.8, z = 3, p = 0.003).
Now, what about the women-to-men text ratio? Well, the effect is strongly negative. The more text women had in an episode, the worse the episode was rated (beta = -0.86, sde = 0.17, z = -5.1, p < 0.001). The following plot illustrates this effect.
Two insights follow from the figure. First, the distribution of the women-to-men text ratio is skewed: there are fewer episodes with more text for women. Second, the variability in ratings is very large for episodes with a high percentage of text for male characters. One potential interpretation of this finding is that, since there are fewer female-oriented episodes in the series, there is also a smaller probability that there will be a good episode with a high women-to-men text ratio.
Another question that arises is whether the ratio changed across the seven years TNG was airing. It did not. The Spearman rank correlation between the women-to-men text ratio and the episode number is -0.006 when calculated on the data set excluding the nine strong outliers (i.e. the episodes with a ratio larger than 1), and 0.065 when the strong outliers are kept in the data set. The following figure illustrates this by means of a dotplot.
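For readers who want to reproduce this kind of check: a Spearman rank correlation is simply a Pearson correlation computed on ranks. A minimal Python sketch (in R this is cor(x, y, method = "spearman"); the data below is invented and tie-free to keep the ranking simple):

```python
# Spearman rank correlation for tie-free data: rank both variables,
# then compute the Pearson correlation of the ranks.
def ranks(xs):
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# invented toy data: episode number vs. women-to-men text ratio
episode_number = [1, 2, 3, 4, 5]
ratio = [0.25, 0.10, 0.40, 0.22, 0.31]
print(round(spearman(episode_number, ratio), 2))  # 0.3
```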
In defense of the producers: they realized that they had very few female-oriented episodes and started to produce them at some point. But this did not change the overall trend, across the entire lifetime of the series, of neglecting main female characters.
The effect is obvious: the more text the main female characters had in an episode, the worse the episode was rated. Given that the interaction between the women-to-men text ratio and the sex of the rater was not significant, there was no difference between female and male raters with respect to how they perceived the amount of female presence. Note that this finding is independent of the plot of the episode. The measure is simply the number of words female characters speak (which is closely related to screen time).
What is more important: female and male raters did not differ in their ratings. This means that female viewers hold the same opinions as male viewers about how often female characters should appear on television. In my opinion, this result is devastating. Not only does it mean that viewers do not accept a strong presence of female characters on television. It also means that this opinion is shared by those who are represented by these characters: the women themselves. Given that today’s television has a strong influence on public opinion and on how each person defines their role in society, this is absolutely unacceptable. Of course female viewers rate episodes with more female presence worse than those with more male presence: this is what they have learned to be the status quo by watching television.
Coming back to Doctor Who (I have not forgotten it). I claim that the reason why the latest season of Doctor Who has significantly worse ratings than all those before is simple: the main character is portrayed by a woman, and the plots center around female characters. For example, the episode “Rosa” focuses on Rosa Parks. In the episode “The Tsuranga Conundrum” there is a female general. And so forth.
Again, someone might argue that people have changed. More and more female characters appear on TV shows. I see and acknowledge that. However, TV consumption is a vicious circle. Producers and authors want to sell a product, and they create it such that broadcasting companies will buy it. The bosses of the broadcasting companies will only buy products that will attract money in the form of advertisers. The advertisers will only buy ads for a show if that show sells, i.e. has high ratings and a high percentage of viewers. Viewers will only watch what they like, and they like only, and this is the crucial point here, what they are used to. And viewers are not used to TV shows in a traditionally male domain being female-oriented, even though there are lots of female Star Trek viewers. Note that this logic still applies even though viewing habits have changed in the time of Netflix and other online streaming companies.
The outrage that Jodie Whittaker had to endure after she was announced as the first female Doctor shows that viewers’ expectations have not changed in the 30 years since TNG first aired. We clearly need to work on that. We need to break the vicious circle.
I fully acknowledge that the current finding has to be regarded with caution, because Star Trek targets a male audience. It should be replicated not only with Doctor Who (which started this whole idea) but also with TV shows that target both sexes. Maybe someone will provide this data to me.
My thanks go to Jessie Nixon who provided interesting input to this blog post.
After the recent mass shooting in Thousand Oaks, California, some colleagues and I were discussing potential reasons for these outbreaks of violence in the US. We had a look at the Wikipedia list of mass shootings in 2018. We were shocked to see that as many as 106 shootings were listed. On abc15.com, we found a map illustrating the location of every mass shooting in 2018:
It becomes immediately apparent that most of the shootings happened in the Eastern part of the US. Given that there is already structure in the data, we were wondering whether there is more.
For example, is unemployment one reason for these shootings? Or population density? Concretely, the question arose whether the number of casualties can be predicted by such information. I expected the number of casualties to increase with the size of the population, with population growth, and with the unemployment rate.
On the basis of the Wikipedia list of mass shootings in 2018, I collected information about the city/town/village in which each mass shooting occurred. The list contains information on 106 mass shootings. I used Wikipedia because it allowed me to easily obtain additional information on the locality of each mass shooting by following the links on the page. I collected the following pieces of information:
- Population of the locality (city) in 2010 and 2016 (as provided by Wikipedia)
- Population of the state in 2010 and 2016
- Unemployment rate of the state
The 2010 data was based on the 2010 United States Census. I would have liked to have information on the unemployment rate of each locality; however, this surpassed my abilities. I also calculated the percentage of population growth between 2010 and 2016 for both locality and state. In total, I used five variables to predict the number of casualties (NumberOfCasualties):
- PopulationCity (in 2016, ranging from 3365 to 8622698)
- PopulationState (in 2016, ranging from 601723 to 37253956 )
- PopulationGrowthCity (between 2010 and 2016, ranging from 0.875 to 1.315)
- PopulationGrowthState (between 2010 and 2016, ranging from 0.994 to 1.35)
- UnemploymentRateState (in 2016)
For 9 of the localities, population size was available only for 2016. Those were excluded from the analysis. In pilot analyses, the inclusion of those 9 localities did not change the overall results of the present analysis. The shootings in the Bronx and in Brooklyn were tagged as shootings in New York City.
All of these pieces of information were collected automatically from Wikipedia and its data tables using a custom-made script in R. The download of the html pages was performed with the function getURL() from the RCurl package.
Analysis and Results
City population size in 2016 had to be log-transformed in order to obtain a normal distribution. All predictors were centered and scaled (z-scaled) for the analysis. I first performed a standard Spearman rank correlation analysis between the predictors. I want to highlight here two results:
- PopulationGrowthState and PopulationGrowthCity were strongly correlated (R = 0.56). This is not surprising at all, as the state population depends on the city population.
- PopulationGrowthState was negatively correlated with UnemploymentRateState (R = -0.48). The same, albeit weaker, effect held for PopulationGrowthCity (R = -0.32). This means that where the population grew, the unemployment rate went down. While this might be nothing new for people working in demographics and economics, it surprised me, as I would have expected these two variables to be positively correlated. I would like to see whether this holds on a larger scale.
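The preprocessing steps mentioned above, log-transforming the city population and then z-scaling each predictor, can be sketched as follows (in R this is simply scale(log(x)); the Python version below uses invented population values within the observed range):

```python
import math

def z_scale(xs):
    """Center and scale a predictor, using the sample standard
    deviation as R's scale() does."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / (n - 1)) ** 0.5
    return [(x - mean) / sd for x in xs]

# invented city population sizes within the range reported above
population_city = [3365, 25000, 400000, 8622698]
scaled = z_scale([math.log(p) for p in population_city])

# after z-scaling, the values are centered on 0 with unit variance
print([round(x, 2) for x in scaled])
```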
NumberOfCasualties ranged between 0 (N = 40) and 17 (N = 1).
I fit a generalized linear model to predict NumberOfCasualties (function glm, family = poisson). The analysis is quite tricky because of the high collinearity in the data, i.e. some predictors can be used to predict other predictors, as becomes apparent above. This is why I used a step-by-step inclusion procedure and checked for indicators of collinearity in the model. If you want to read more about how to address collinearity in regression analyses, together with two colleagues I have published a paper here. I did not test any interactions between the predictors because of the small sample size.
As it turned out, UnemploymentRateState and PopulationGrowthState were not significantly predictive of NumberOfCasualties (meaning their effects cannot be used to support our initial hypothesis). I found this surprising, as I would indeed have predicted that unemployment drives people to commit horrible things.
The three remaining predictors were significant. The model’s intercept was 0.271 (std = 0.09, z = 2.9, p = 0.003). This is the estimated logit, where a logit of 0 equals a probability of 50% (in R, you can transform these values back using the function inv.logit() from the boot package; I did not apply this because it changes the visualization).
- PopulationState (estimate = 0.2, std = 0.07, z = 2.9, p = 0.004)
- PopulationCity (estimate = -0.4, std = 0.08, z = -4.8, p < 0.001)
- PopulationGrowthCity (estimate = 0.3, std = 0.07, z = 4.4, p < 0.001)
The effects are illustrated in the following figure (plotted with the function visreg() from the visreg package; I restricted the y-axis, which is why not all data points are shown). The y-axis represents the partial effect on the estimated logit, i.e. how the intercept (logit 0.271 = probability of 0.57 of a high number of casualties) has to be changed depending on the predictor. 0 therefore represents no change (horizontal gray dashed line); -2 means that the intercept has to be lowered to logit -1.729 = probability 0.15; +2 means that the intercept has to be increased to logit 2.271 = probability 0.91 of a high number of casualties.
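These logit-to-probability conversions can be double-checked with a few lines of Python, mirroring what boot::inv.logit does in R:

```python
import math

def inv_logit(x):
    """Map a logit back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

print(round(inv_logit(0.271), 2))      # intercept          -> 0.57
print(round(inv_logit(0.271 - 2), 2))  # partial effect -2  -> 0.15
print(round(inv_logit(0.271 + 2), 2))  # partial effect +2  -> 0.91
```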
Panel (A) represents the population size of the state in which the mass shooting occurred. The larger the state, the larger the probability of a high number of casualties. This supports my hypothesis. Surprisingly, and contrary to my hypothesis, the effect is reversed for the population size of the village/town/city in which the shooting occurred: the smaller the city, the larger the probability of a high number of casualties; the larger the city, the smaller that probability. Finally, the probability of a high number of casualties increases when the city population grew strongly between 2010 and 2016. This makes sense: when cities grow, the total state population increases as well.
I will avoid any further interpretation of the data and the analysis. I also have not included other, potentially very interesting, variables into this analysis, in order to keep things as simple as possible.
It’s been almost a year since I updated the introduction to programming R for corpus studies. Download the latest script here.
There are plenty of changes; among others, there is…
* … a new link to getting the data for the introduction
* … an introduction to writing functions
* … a short introduction to preparing and running regression analyses
* … a short introduction to visualizing your models
Regarding the formatting:
* there are hyperlinks in the file
* section names are in the header
And most importantly, thanks to many readers some typos got corrected.
Jahrestagung der Deutschen Gesellschaft für Sprachwissenschaft
March 7-9, 2018
Call For Abstracts
Submission deadline 20.8.2017
Sharon Peperkamp, Laboratoire de Sciences Cognitives et Psycholinguistique, Paris
The relation between phonetics, phonology and morphology is much more complex than is assumed in current theories. For example, stress preservation in derived words is more variable than hitherto assumed. A word like orìginálity preserves main stress in its base oríginal as secondary stress, but other words have variable secondary stress (e.g. antìcipátion ~ ànticipátion, derived from antícipate, e.g. Collie 2008). In addition, there is evidence suggesting that acoustic and articulatory detail may play a role in the realization of morphologically complex words. For example, an [s] in American English is longer if it is part of a stem than when it is a plural marker or a clitic (cf. Plag et al. 2017). Pertinent work on both issues springs from different linguistic disciplines, in particular psycholinguistics, theoretical linguistics, phonetics, phonology, morphology, computational and quantitative linguistics, and has led to novel proposals regarding the general architecture of the morphology-phonology-phonetics interface. Different theories have been proposed on the basis of lexical listing vs. computation, analogical models or discriminative learning.
Within different linguistic disciplines, we see an increasing body of empirical work that addresses problems of variation and phonetic detail in morphology with the help of spoken data (e.g. Cohen 2014; Ben Hedia & Plag 2017; Strycharczuk & Scobbie 2017). Furthermore, there is more and more work testing theoretical proposals with the help of computational simulations (e.g. Arnold et al. 2017).
This workshop aims to bring together work from different disciplines that studies and models variation and phonetic detail on the basis of spoken data. Relevant issues include: What new insights can spoken data bring to our knowledge about morphophonological variation? Are speakers sensitive to and/or aware of systematic subphonemic differences? Which cognitively plausible computational and psycholinguistic models best account for this variability? How can our theories of morphology deal with variation within and between speakers? What is the status of morphophonological and morphophonetic variation in grammar?
Arnold, D., Tomaschek, F., Sering, K., Lopez, F. & Baayen, R.H. 2017. Words from spontaneous conversational speech can be recognized with human-like accuracy by an error-driven learning algorithm that discriminates between meanings straight from smart acoustic features, bypassing the phoneme as recognition unit. PLOS ONE.
Ben Hedia, Sonia & Ingo Plag. 2017. Gemination and degemination in English prefixation: Phonetic evidence for morphological organization. Journal of Phonetics 62, 34-49.
Cohen, Clara P. 2014. Probabilistic reduction and probabilistic enhancement. Morphology, 24(4), 291-323.
Collie, Sarah. 2008. English stress preservation: the case for ‘fake cyclicity’. English Language and Linguistics 12(3). 505–532.
Plag, Ingo, Julia Homann & Gero Kunter. 2017. Homophony and morphology: The acoustics of word-final S in English. Journal of Linguistics 53(1), 181–216.
Strycharczuk, Patrycja and James M. Scobbie. 2017. Whence the fuzziness? Morphological effects in interacting sound changes in Southern British English. Laboratory Phonology: Journal of the Association for Laboratory Phonology 8(1): 7, 1–21.
Abstract submission deadline: Aug. 20, 2017
Notification of acceptance: Sept. 3, 2017
Submission of final abstract: Nov. 1, 2017
Conference: March 7-9, 2018
Abstracts should be 300-400 words (1 page) and may contain additional material, such as examples, figures and references on another page. The uploaded file must be in PDF format.
Organization and Programme Committee
Sabine Arndt-Lappe (University of Trier)
Gero Kunter (University of Düsseldorf)
Ruben van de Vijver (University of Düsseldorf)
Fabian Tomaschek (University of Tübingen)
“Publish or Perish” is a common saying among scientists. But in order to publish, we must, after the experiments have been performed and the analysis is done, write. Of course I enjoy writing, but it is extremely hard and painful. Literally painful. Not only is it very hard to get into the writing zone, staying there is even harder when you stare at an empty page. This is the time when I procrastinate most: I check emails, repeat analyses, improve the formatting of plots, watch TED talks. You probably know what I’m talking about.
The hardest job, at least in my experience, is starting a manuscript: the first attempt to put down the first thousand words forming a concise text. Over the years I came to realize that there is no such thing as a perfect first draft. Nor a perfect second one. Furthermore, I came up with several techniques to overcome the first painful hours of filling pages. Here, I would like to share some of my insights with you. The techniques are based on the idea of focusing on one problem at a time. Some of these things may seem very obvious, but believe me, repetition is the key to proficiency.
The baseline of a successful writing start is preparation
Whatever you are doing, writing a term paper or a journal paper, you are presenting knowledge to a reader. Before you put any words on paper (or your screen) and formulate sentences, you need to know what you are talking about. Whenever you want to start writing but have only vague ideas about what to write, you will fail. That is the simple truth. Depending on which part of your paper you want to write, you need to prepare differently.
For an introduction, you need to do the research on your topic: find papers, read them, and skim through their results and discussions, in order not only to report the background of your paper but also to formulate precise hypotheses.
For a results section, you need to have the results of your analysis (be it analytical, statistical, or whatever).
For a discussion, you need to have a results section, otherwise there is nothing to discuss. Also, start working on the discussion only when you have finished the text on the introduction and the results.
Technique 1: Keywords
Never, I repeat, never start a new text by trying to write correctly formulated sentences. Embarking on a writing journey with correctly formulated sentences is the road to procrastination and frustration.
Do you remember what the aim of a paper is? Right: to provide knowledge to your reader. However, when you start writing a paper by trying to formulate complete sentences, you shift the focus of your task from transferring knowledge to finding the appropriate words and syntactic structures. Do not waste your energy on such a thing. Start by writing down your knowledge in the form of keywords. The keywords need to convey most of your knowledge; it is not important which exact words you use. Again, how you perform this task depends on the section you are writing.
Write a keyword-based summary of the papers you read. Provide a short phrase for each of the topics: Who has done it, what was the point, what was the method (experiment, analysis, etc.), what was the finding, and what was the interpretation? Write down your hypotheses.
Start putting down keywords about your analysis. If you have a figure, describe the figure. If you have a summary table for a regression (ANOVA, ANCOVA, LMER, GAMM, BAYES, etc.), describe your effects in general language (e.g. “In level A, Y was slower than in level B”, “An increase in Y was associated with an increase in X”). Do not waste your time and energy on exact numbers. This is the first draft. You need to regard your first draft as a placeholder for exact numbers, precise formulations and sparkling-from-intelligence metaphors.
You have probably guessed it already: Start with keywords. Reread your introduction and results and write keywords about your main findings. Write keywords about how they are in line or differ from the literature you consulted. Write keywords about how you interpret your results. Write keywords about the problems you encountered.
Technique 2: Talk to an audience
The next step seems easy: take all these keywords and turn them into sentences. However, it is still hard to take all those unconnected and sometimes contradictory pieces of knowledge and bind them together into a coherent and readable text. If you begin with sentences right from the start, you might end up struggling with correct formulations, nice words and nice syntactic structures, instead of doing what needs to be done: formulating a text. I came up with two ways to circumvent this struggle.
The first possibility is to tell a friend what you want to write. Tell him or her about your problem, what you have done and what results you found. The second possibility is to dictate this onto a recording device. Or even better, do both. If you don’t have anyone at hand or don’t want to bore your friends with your scientific ideas, take your recording device and go for a walk (we need to do that more often anyway).
Here is what happens when you start talking: instead of struggling to find cool-sounding words and sentences, your brain will put in all the effort it takes to create a coherent and understandable version of what you want to say, so that your audience understands your message. Humans communicate; humans try to put sense into their messages. Use that power.
Once you have recorded your text, transcribe it (e.g. using transcription software such as F4). And voilà, you have your text. You will be surprised how much you suddenly have to say about your topic. Furthermore, during transcription you start to notice whether what you said actually makes sense. However, do not rewrite your transcription yet. Put down literally every word you have spoken.
Technique 3: Rewrite
Now you have something to work with. Once you have a text with complete sentences, you can start to invest your effort in cool words and nice sentences. You do not need to focus on remembering the knowledge you want to include in your paper; you do not need to focus on performing an analysis; rather, you can invest all of your mental energy in correcting your text. And the surprising thing is: correcting and rewriting is a lot easier than writing new sentences.
This is the time to start including precise information about your analysis, i.e. exact numbers such as slopes, t- and p-values, and everything else connected to your analysis. This is also the time to redo your research on questions that came up during the first draft.
Technique 4: Write regularly
OK, this is not my technique. It is from “How to Write a Lot” by Paul Silvia. He suggests, and I can only recommend this, making room for writing every day. At least one hour. It is not important whether you do this after waking up or before going to bed. But save one hour (or two if you are in the mood) only for writing. Turn off your mobile phone and your internet connection, shut your door, turn off your TV, and start to write. Silvia presents data showing that this is the only technique by which you make steady progress. If you wait for the muse to kiss you, you can keep on waiting forever. You need to sit down and write. Every day!
Technique 5: Perform different tasks at different places
I once heard that Walt Disney used three different rooms for his work: one for coming up with ideas, one for writing and one for drawing. Whether this story is true or not, it makes perfect sense to adapt this technique for writing papers.
A short excursion: think about your morning routine. What is the order of going to the bathroom, making coffee (or tea), brushing your teeth, etc.? It is probably the same every day. And recall how unhappy you are when this order is broken. The reason is that we build habits for how things have to happen. So make it a habit to write every day! Make it a habit to think about problems in one place (e.g. in the shower, on your way home, on your bike, while exercising), to write down keywords in another place (e.g. in the library, the pub, the cool coffee house next door), and to rewrite in your office (or at home, if you cannot work in the office).
Furthermore, develop habits for starting to write. For example, I always drink a cup of coffee before I start to write. It works like a mental switch.
Make it a habit to plan what you want to do in your writing hour. For example: Write 1000 words. Rewrite one chapter.
Make it a habit to reward yourself for the work you have done. Not only does it make you happy to have done the work you planned; it also makes you happy that you can (finally) watch the YouTube video you were eager to see. Rewarding yourself keeps you from procrastinating, as you change your habit from watching YouTube as procrastination to watching YouTube as a reward.
Technique 6: Take your time
Rome was not built in a day. Neither is a scientific paper. Take your time. After writing your first and second drafts, leave the manuscript alone for a couple of days (sometimes even a week or two). Work on other things. In this way you clear your head of the current project, allowing you to reread your manuscript more critically and to find errors in the writing, the structure and the argumentation. Do this at least twice before you pass your manuscript on to someone else.
I recently visited the 39th Annual Meeting of the Deutsche Gesellschaft für Sprachwissenschaft (DGfS) in Saarbrücken, of which I am officially a new member (yeah, it was about time!). The weather was terrible; it was pouring during my entire visit. Less time to procrastinate and walk through Saarbrücken, more time to listen to interesting talks. And there were plenty of them.
One talk, namely the one by Stefania Degaetano-Ortlieb on discourse connectors in scientific writing, inspired me to write this post. It was interesting indeed, but I thought: connectors, schmectors, what is going on with the nouns in those scientific texts from the early 18th century? Almost all of them were written with an uppercase initial letter.
I couldn’t believe it. Uppercase in English? It is like German or… actually, it seems that this is a very German peculiarity now. Nouns in Danish and Norwegian also used to be written in uppercase, but the practice was abolished in 1869 in Norway and in 1948 in Denmark. That leaves Germany, the only country in the world expecting its kids to know and understand the difference between nouns and all the rest of the word classes. How dare they…
Back to English. I went to Google’s Ngram Viewer. The website is amazing and allows you to inspect word frequencies across time on the basis of modern and historic texts. I checked the spelling of one of the most common words, house, starting at 1500 and ending in 2000. Check out what I found in the following figure. The red line represents “house” in lowercase, the blue line “House” in uppercase.
There is a clear dip from roughly 1650 to 1750, where the lowercase “house” is in the minority and the uppercase “House” is in the majority. Before and after that period the lowercase spelling is in the majority.
OK, that was intriguing, so I checked six of the most common nouns in almost every language: “House, Houses, Man, Woman, Child, Children”, in upper and lower case. The following figure shows the ratio between uppercase and lowercase spellings, with values above 100% indicating that the uppercase spelling is in the majority.
(If you want to test it on your own, the formula was: (House + Houses + Man + Woman + Child + Children) / (house + houses + man + woman + child + children).)
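Out of curiosity, this ratio is also easy to compute yourself. Below is a minimal Python sketch of the formula above; the counts are made up purely for illustration (real values would come from Google’s downloadable Ngram datasets), and the function name is my own.

```python
# Sketch of the uppercase/lowercase ratio used above.
# NOTE: the counts below are invented for illustration only;
# real counts would come from the Google Books Ngram datasets.

NOUNS = ("house", "houses", "man", "woman", "child", "children")

def uppercase_ratio(counts):
    """Ratio of uppercase to lowercase spellings, as a percentage.

    Values above 100 mean the uppercase spellings are in the majority.
    """
    upper = sum(counts[w.capitalize()] for w in NOUNS)  # House, Houses, ...
    lower = sum(counts[w] for w in NOUNS)               # house, houses, ...
    return 100 * upper / lower

# Hypothetical counts for a single year, e.g. 1700:
counts_1700 = {
    "House": 120, "Houses": 40, "Man": 300,
    "Woman": 50, "Child": 60, "Children": 80,
    "house": 90, "houses": 30, "man": 250,
    "woman": 45, "child": 55, "children": 70,
}

print(round(uppercase_ratio(counts_1700)))  # above 100: uppercase dominates
```

Plotting this ratio per year for the two corpora would reproduce the figure described above.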
This supported my assumption that something happened in the English writing world in the one hundred years between 1650 and 1750. The phenomenon appears in both American English (left) and British English (right).
Well, that said, I had to find out what actually happened. I started with an entry on Wikipedia, which was more than boring: it stated only the contemporary capitalization rules, nothing historic. Then I googled the topic and found roughly four million results. Checking out the top 20, I found that none were conclusive. Some theories exist; one, for example, states that the script in use at the time did not allow a proper distinction between uppercase letters, because they were easily confused, and that capitalization was abolished as a result. But that was not very rewarding, because I still wanted to know: WHY!? Why did capitalization start, and why did it end?
Most German students think capitalization is a mean thing teachers made up to torture them. However, reading studies show that readers seem to benefit from capitalization, as comprehension is facilitated by the additional “cue” (read e.g. here). But if that were really the case, I wonder why only German sticks to the capitalization of nouns while all other alphabet-based orthographies do without it.
On Google Scholar I found Richard Venezky’s book “The American Way of Spelling“. Unfortunately, the entire chapter on the writing system itself was not available on Google Books, and my university library does not have a copy. I thus decided to contact Dr. Venezky, only to be informed by the email daemon that his address was no longer valid.
I have ordered the book from another library and will hopefully find something in a couple of weeks. And if not… well, I doubt that I will work my way through historic texts on spelling.