The Positivist Pitfall

Humanists first describe what is, and only in a second step answer why it is that way. Qualitative Data Analysis and many similar research techniques often follow a simple scheme: creating categories/variables/topics, applying them to the text, and then analysing meaningful groups through qualitative and quantitative comparison. In doing so, we easily fall into the “Positivist Pitfall” of merely describing our data instead of explaining it. Yet we must strive to create meaningful interpretations that can be disputed or even proven wrong.

As the humanities usually work with process-generated data, the first step of any analysis typically consists of a close description (quantitative or qualitative) of that data. First and foremost, we want to know what our documents or sources say, usually about a specific topic. Digital research methodologies in the humanities therefore tend to follow a similarly descriptive scheme: we read, write summaries, structure text, extract variables, count word frequencies, or use topic modelling. Hence, this proper description of our data is easily mistaken for the outcome of our research.
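
To make this descriptive step concrete, here is a minimal Python sketch of a word-frequency count. The example sources, stopword list, and tokenizer are invented for illustration, not a prescribed recipe:

```python
import re
from collections import Counter

def word_frequencies(documents, stopwords=frozenset({"the", "a", "and", "of", "to", "was"})):
    """Count word occurrences across a list of plain-text documents."""
    counts = Counter()
    for doc in documents:
        tokens = re.findall(r"[a-z]+", doc.lower())  # crude ASCII-only tokenizer
        counts.update(token for token in tokens if token not in stopwords)
    return counts

# Invented example sources; in a real project these would be read from files.
sources = [
    "The council debated the grain tax and the failed harvest.",
    "After the harvest failed, the grain tax was suspended.",
]
print(word_frequencies(sources).most_common(5))
```

Such a frequency table is a perfectly good description of the data – and nothing more than that.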

Qualitative Data Analysis also follows this almost universal scheme: we find some data, read the text, split our documents into meaningful groups, identify the most important themes, and turn them into codes/categories. We then apply these categories and compare our document groups qualitatively (comparative reading of content) and quantitatively (comparing frequencies of topics, codes, or words). Finally, we create a report with some numbers and diagrams and turn the summary of each category into a chapter of our paper. It was so easy after all… to fall right into the Positivist Pitfall!

[Figure: A common scheme for QDA, based on Mayring (2014), leading straight into the Positivist Pitfall.]
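
To illustrate the apply-and-compare steps of this scheme, here is a deliberately naive Python sketch that assigns codes by keyword matching and tallies code frequencies per document group. The codebook and document groups are hypothetical, and keyword matching only crudely approximates real interpretative coding:

```python
import re
from collections import Counter

# Hypothetical codebook mapping codes to trigger keywords. Real QDA coding
# is interpretative; keyword matching only roughly approximates it.
CODEBOOK = {
    "economy": {"tax", "grain", "trade"},
    "conflict": {"war", "dispute", "revolt"},
}

def code_document(text):
    """Return the set of codes whose keywords occur in the text."""
    tokens = set(re.findall(r"[a-z]+", text.lower()))
    return {code for code, keywords in CODEBOOK.items() if tokens & keywords}

def code_frequencies(group):
    """Count how many documents in a group carry each code."""
    counts = Counter()
    for doc in group:
        counts.update(code_document(doc))
    return counts

# Invented document groups.
group_1 = ["The grain tax caused a revolt.", "Trade flourished after the war."]
group_2 = ["A dispute arose over trade routes.", "Taxes rose during the war."]
print(code_frequencies(group_1))
print(code_frequencies(group_2))
```

Comparing these tallies is still description; the interpretative work only begins when we ask why the distributions differ.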

What is the Positivist Pitfall?

I use the term “Positivist Pitfall” to label any research that remains largely descriptive in its results. I stumbled upon the phenomenon because I fell straight into it myself (with my master’s thesis, Müller 2017). The Positivist Pitfall means that we simply follow a standard procedure that results in a close description of our data but does not produce a meaningful interpretation. It answers only “What is the data like?” and not “Why is it that way?” This does not mean that descriptive approaches are invalid per se, but in many cases their results will be largely unsatisfying.

Why must we avoid the Positivist Pitfall?

Although falling into the Positivist Pitfall does not necessarily produce invalid results, it is much like stopping a race just short of the finish line. When working with more descriptive methods such as Qualitative Content Analysis (Mayring 2015, Kuckartz 2014), this danger may be larger than with more interpretative, theory-building approaches such as Grounded Theory (e.g. Strauss and Corbin 2010).

As an early-career scholar in the humanities, one is especially vulnerable to it. Commonly, we don’t work with sources that we created ourselves (like interviews or surveys) but with process-generated data that we rather “find” as something “given” (Bauernschmidt 2014, pp. 417-419). Often we dive deep into the texts, bring some order into them, and create a detailed description of what we find. But we must go the last mile and create a meaningful interpretation that can be disputed and even proven wrong.

[Figure: The White Knight has it worse than us. Luckily, we don’t even need an Alice to help us out.]

How do we get out of the Positivist Pitfall?

The good news is: it is not a pitfall after all. As soon as we are aware that we are inside it, we are already halfway out. All the work we have done is not invalid; on the contrary, it is a great basis to build upon. We just have to ask ourselves one important question: “Why is the data the way it is?”

  • Why is group 1 different from group 2?
  • What do our results mean in the bigger picture?
  • Can we make reasonable predictions about similar cases?

To answer these questions, we have to leave safe ground behind and make well-argued interpretations that could nonetheless be proven wrong by someone else. If we cannot be proven wrong – because we only described what is and what is not – our research is only half as meaningful.
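
One modest way to turn “group 1 differs from group 2” into a claim others can dispute is to test it statistically. The following sketch uses SciPy’s chi-squared test on invented code counts; note that a small p-value only says the difference is unlikely to be chance, while explaining why the groups differ remains our interpretative task:

```python
from scipy.stats import chi2_contingency

# Invented counts: how many documents in each group carry each code.
#             "economy"  "conflict"
observed = [[34, 12],   # group 1
            [18, 29]]   # group 2

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
# A small p-value suggests the code distributions really differ between
# the groups; saying WHY they differ is still up to us.
```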

Conclusion

Although humanists work with process-generated data that is often hard to understand and summarize, describing our data and sources must never be the end point of our research. We must not only ask “What does the data say about topic X?” but push forward and ask: “Why are our sources the way they are?” Historical sources are never something given. They are the product of complex processes and influences. Therefore, we must not only look at the outcomes of these processes (the documents) but also draw inferences about what those processes were and how they shaped and created the data we have before us today.


