During a long night in April 2018, the idea for a Digital Humanities regulars’ table in Halle was born. Now, 20 meetings later, I personally have learnt a lot, found new friendships and had a ton of fun. For me, the key elements of its ongoing success were keeping the regulars’ table informal, avoiding institutional boundaries, keeping the barrier to entry low and not giving up too early.
First and foremost, Humanists describe what is; only in a second step do they answer why it is that way. Qualitative Data Analysis and many similar research techniques often follow a simple scheme: creating categories/variables/topics, applying them to the text and then analysing meaningful groups by qualitative and quantitative comparison. In doing so, we easily fall into the “Positivist Pitfall” of only describing our data instead of explaining it. Instead, we must strive to create meaningful interpretations that can be disputed or even proven wrong.
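To make that scheme concrete, here is a minimal sketch in Python; the documents, category names and counts are invented for illustration and stand in for the output of manual coding:

```python
from collections import Counter

# Hypothetical coded segments: (document, category) pairs produced by coding.
coded_segments = [
    ("interview_01", "identity"),
    ("interview_01", "work"),
    ("interview_02", "identity"),
    ("interview_02", "identity"),
    ("interview_03", "work"),
]

# Steps 1 and 2 (creating categories, applying them) happen during coding.
# Step 3: compare how often each category occurs per document.
counts = Counter(coded_segments)
for (doc, category), n in sorted(counts.items()):
    print(f"{doc}: {category} -> {n}")

# The numbers only describe the data; the interpretation of *why* these
# patterns appear still has to be argued for, and can be disputed.
```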
It’s the year 2040: are the Digital Humanities still around? In general, I see four possible scenarios: DH replaces the Humanities, DH becomes its own discipline, DH turns into IT assistance, or DH fades away. According to a short survey on Twitter, the second scenario is considered most likely. Whatever the outcome, I argue that the future of DH will depend on what results they deliver and what lasting contribution they can make to the Humanities as a whole.
The Universal Lexicon is not the Universal Lexicon: that is the central finding of the following paper, which I presented at ISECS2019 in Edinburgh. By analysing the metadata of this 68-volume lexicon and taking a stratified random sample of 680 biographical articles, I could show how considerably the lexicon changed during its 23-year production period. When using the Universal-Lexicon as a historical source, we cannot simply compare articles from “A” and “Z”. Instead, we must be careful to consider the different contents and contexts of its early, middle and late phases.
Continue reading A Changing Giant: The Impact of Time on Zedler’s Universal-Lexicon (1731-54)
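As an aside, the stratified sampling mentioned above can be illustrated with a short Python sketch. The assumption that the strata are the 68 volumes with 10 articles each, as well as the placeholder article IDs, are mine, not the paper’s:

```python
import random

# Placeholder corpus: 68 volumes, each with some biographical articles.
articles_by_volume = {vol: [f"vol{vol}_article{i}" for i in range(500)]
                      for vol in range(1, 69)}

random.seed(42)  # make the sample reproducible

# Stratified random sample: 10 articles from every volume, 680 in total.
sample = [article
          for articles in articles_by_volume.values()
          for article in random.sample(articles, 10)]

print(len(sample))  # 680
```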
On the surface, MAXQDA and ATLAS.ti seem almost identical. Yet when we look under the hood, we see strong differences: one follows the logic of a relational database and sorts everything into neat categories, while the other operates like a graph database that links different entities to form a large network. The implicit potentials and constraints of each (and any) software commonly drive our research, because we too often follow the road we already know best.
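To illustrate the contrast, here is a minimal sketch, with invented example data, of the same coded material modelled once as a relational table and once as a graph:

```python
# Relational logic: every coded segment is a row in one neat table.
segments_table = [
    {"id": 1, "document": "interview_01", "code": "identity", "text": "..."},
    {"id": 2, "document": "interview_01", "code": "work",     "text": "..."},
]

# Graph logic: segments, codes and memos are nodes; links are edges.
edges = [
    ("seg:1", "code:identity"),      # segment tagged with a code
    ("seg:2", "code:work"),
    ("memo:a", "seg:1"),             # memo linked to a segment
    ("code:identity", "code:work"),  # codes linked to each other
]

# A table invites sorting, filtering and counting by column:
work_rows = [row for row in segments_table if row["code"] == "work"]
print(len(work_rows))  # 1

# A graph invites traversing links, e.g. everything connected to seg:1:
linked = {b for a, b in edges if a == "seg:1"} | {a for a, b in edges if b == "seg:1"}
print(linked)  # {'code:identity', 'memo:a'}
```

Each model makes a different kind of analysis feel natural, which is exactly how the software quietly shapes our research.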
Mixed Methods is a methodology that attempts to bridge the qualitative-quantitative divide by integrating aspects of both approaches. The two methods are not just juxtaposed, but used to create combined results. In this, Mixed Methods Research (MMR) often follows a pragmatic doctrine that puts the research question above epistemological or methodological considerations.
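What “combined results” could mean in practice is sketched below; the participants, codes and scale values are hypothetical and only show one common pattern, merging qualitative code counts with quantitative scores per case:

```python
# Hypothetical qualitative strand: code counts per participant.
qual_codes = {"p1": {"burnout": 4, "support": 1},
              "p2": {"burnout": 0, "support": 3}}

# Hypothetical quantitative strand: scores on a satisfaction scale.
quant_scores = {"p1": 2.1, "p2": 4.5}

# Integration: both strands feed one combined, case-level result.
combined = {
    p: {"burnout_mentions": codes.get("burnout", 0),
        "satisfaction": quant_scores[p]}
    for p, codes in qual_codes.items()
}
print(combined)
```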
The following is a – not 100% serious – mini-speech I prepared for the Walpurgis-Night-Special (30th of April) of the Digital Humanities regulars’ table in Halle (#DHAL). It draws its conclusions by looking at the many commonalities between Digital Humanities researchers in the present and people accused of witchcraft in the Early Modern period.
To make your conference poster more engaging, it is helpful to include several points of entry. No one will read your poster from beginning to end. No one. Therefore, offering small, easy-to-digest parts and quick takeaways can greatly improve the impact of your poster. However, making a good conference poster requires some serious investment; to make it pay off, use the poster to promote your research by giving it a digital afterlife.
Getting rid of trivial content in the feed of your academic or professional Twitter account is a continuous task. In order not to subject yourself to an AI-driven filter bubble, you need to manually mute the accounts responsible for uploading this content. By following a few simple steps, you can regain control over your Twitter feed fairly quickly without disrupting your professional network.
If we want to quantify the results of our coding in QDA, we have to think about two aspects first: scope and quantity. The scope forces us to consider what length our text segments should have (word, sentence, paragraph), whereas the quantity determines what we are allowed to do with these segments later in the analysis (numerical, boolean or non-numerical analysis). To quantify codes properly, both must be taken into consideration long before we complete our coding. Continue reading Scope & Quantity: Defining Codes for Proper Quantification
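The difference between numerical and boolean quantification of the same coding can be sketched in a few lines of Python; the code names and counts are invented for illustration:

```python
# Hypothetical coding result: code frequencies per document.
codings = {
    "doc_a": {"identity": 3, "work": 1},
    "doc_b": {"identity": 0, "work": 2},
}

# Numerical quantification: how often does a code occur? Counts matter.
identity_counts = {doc: codes.get("identity", 0)
                   for doc, codes in codings.items()}

# Boolean quantification: does the code occur at all? Only presence matters.
identity_present = {doc: codes.get("identity", 0) > 0
                    for doc, codes in codings.items()}

print(identity_counts)   # {'doc_a': 3, 'doc_b': 0}
print(identity_present)  # {'doc_a': True, 'doc_b': False}
```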
This book combines a MAXQDA manual with Kuckartz (2014), yet it creates a value that is greater than the sum of its parts. Although it sometimes tries too hard to be brief, it does well in combining methodological knowledge with technical questions. Unfortunately, the book is only available in German. Continue reading Review: “Analyse qualitativer Daten mit MAXQDA” (2019)
Analytical units force us to be specific about what we do. How much text do we code? What context do we take into consideration? Which documents or text segments do we include in or exclude from our analysis? Thinking about these units can definitely help us improve the quality of our analysis. Continue reading Sampling Unit, Coding Unit, Context Unit? Analytical Units in QDA
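A minimal sketch, with invented data and function names, of how the three units might be made explicit in code:

```python
# Hypothetical corpus: sampling unit = document, coding unit = sentence,
# context unit = the paragraph surrounding the coded sentence.
corpus = {
    "doc_a": ["First sentence. Second sentence.", "Third sentence."],
    "doc_b": ["Only sentence here."],
}  # documents mapped to lists of paragraphs

def coding_units(paragraphs):
    """Split each paragraph (context unit) into sentences (coding units)."""
    for paragraph in paragraphs:
        for sentence in paragraph.split(". "):
            yield sentence.rstrip("."), paragraph  # coding unit + its context

# Sampling unit: decide which documents enter the analysis at all.
sampled_docs = ["doc_a"]

for doc in sampled_docs:
    for sentence, context in coding_units(corpus[doc]):
        print(f"{doc}: code '{sentence}' in context '{context}'")
```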