
Why We Should Stop Using Mediation Analysis

  • Raffael Heiss
  • 12 Feb
  • 3 min read

“Imagine that a medical research group were to conduct an experimental test of a new drug for dementia, and use mediation analysis…”


Unfortunately, no magic in mediation analysis. Weak data, biased results.

In recent years, I have reviewed dozens of papers from well-respected journals where authors employ mediation analysis – often in the context of small experiments. Typically, these researchers find an effect on a theoretically proximal mediator (e.g., media exposure influencing emotion or attitude), but not on a more distal outcome (e.g., intended behavior). They then claim a causal model in which A (e.g., media exposure) leads to B (e.g., emotion or attitude), and B leads to C (e.g., a behavioral intention).


Despite their lack of empirical rigor, such models have become a dominant procedure in communication research and psychology, earning acceptance among editors, authors, and reviewers. One reason for their widespread adoption may be the absence of dedicated comment and analysis sections in social science journals, where methodological issues could be discussed critically and transparently.


The key methodological problem is that these models are almost always based on inappropriate data: as soon as one path in the mediation model relies on cross-sectional data, the indirect effect becomes prone to confounding bias and is often uninformative [1,2,3,4].


Consider this example: Exposure to a critical report about vaccine side effects might evoke a negative emotional reaction toward vaccination. Even if the study does not find that such exposure reduces people’s willingness to get vaccinated, it will likely reveal that negative emotional reactions correlate with vaccination intentions in the sample.


Based on this observation, researchers can calculate an indirect effect linking their experimental manipulation (the media report) to behavior via emotion. However, the link between emotion and behavior is purely correlational, and the negative emotions measured in the sample are most likely not caused solely by the experimental manipulation. In fact, they are likely confounded by deeper attitudes, past experiences, and social-environment factors.
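To make the mechanism concrete, here is a minimal simulation sketch (all variable names and effect sizes are invented for illustration). By construction, the randomized exposure X has no effect at all on the outcome Y; an unobserved confounder U (think: prior attitudes) drives both the mediator M (emotion) and Y. The standard product-of-coefficients mediation analysis nevertheless produces a clearly nonzero indirect effect:

```python
import numpy as np

# Hypothetical simulation: X is randomized, M responds to X, but the
# outcome Y is driven ONLY by an unobserved confounder U that also
# drives M. The true indirect effect of X on Y is therefore zero.
rng = np.random.default_rng(0)
n = 1000
U = rng.normal(size=n)                        # unobserved confounder
X = rng.integers(0, 2, size=n).astype(float)  # randomized exposure
M = 0.5 * X + 0.8 * U + rng.normal(size=n)    # mediator: emotion
Y = 0.8 * U + rng.normal(size=n)              # outcome: no X effect at all

# a-path: effect of X on M (valid, because X is randomized)
a = np.polyfit(X, M, 1)[0]

# b-path: regress Y on M and X (the standard mediation regression);
# the M-Y association is cross-sectional and confounded by U
coefs, *_ = np.linalg.lstsq(np.column_stack([M, X, np.ones(n)]), Y, rcond=None)
b = coefs[0]

# total effect of X on Y: correctly estimated near zero
c = np.polyfit(X, Y, 1)[0]

indirect = a * b
print(f"a = {a:.2f}, b = {b:.2f}, total effect = {c:.2f}, indirect = {indirect:.2f}")
```

The product a·b comes out clearly positive even though the true indirect effect is exactly zero; the bias enters entirely through the confounded b-path.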


In summary, indirect effects calculated from one or more cross-sectional paths are, in most cases, uninformative. It is far more informative to separately examine the effect of the experimental manipulation on (a) the emotional response and (b) the behavior. Thus, researchers could define a primary outcome (e.g., behavior) and secondary outcomes (e.g., emotional reaction) and report how these outcomes are causally affected by the experimental manipulation.
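A sketch of this reporting strategy, with hypothetical data and invented effect sizes: estimate the randomized treatment effect on each pre-defined outcome separately, with an uncertainty interval, instead of multiplying paths together.

```python
import numpy as np

# Hypothetical experiment: report the randomized effect of the
# manipulation on each pre-defined outcome separately.
rng = np.random.default_rng(1)
n = 400
X = rng.integers(0, 2, size=n)             # randomized exposure
emotion = 0.5 * X + rng.normal(size=n)     # secondary outcome
intention = 0.05 * X + rng.normal(size=n)  # primary outcome (near-null)

def ate(y, x):
    """Difference-in-means treatment effect with a 95% CI half-width."""
    d = y[x == 1].mean() - y[x == 0].mean()
    se = np.sqrt(y[x == 1].var(ddof=1) / (x == 1).sum()
                 + y[x == 0].var(ddof=1) / (x == 0).sum())
    return d, 1.96 * se

eff_emotion, ci_emotion = ate(emotion, X)
eff_intention, ci_intention = ate(intention, X)
print(f"emotion (secondary):  {eff_emotion:+.2f} ± {ci_emotion:.2f}")
print(f"intention (primary):  {eff_intention:+.2f} ± {ci_intention:.2f}")
```

Both estimates are causally interpretable because each rests only on the randomization; no cross-sectional path is needed, and a null result on the primary outcome is reported as exactly that.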


Of course, researchers should feel free to report correlations between theoretically linked outcome variables and speculate about potential mechanisms. However, scientific research is about evidence, and researchers must avoid biasing their conclusions through inappropriate modeling or by “overselling” weak data.


In social science, the consequences of such sloppy procedures are not as severe – or as directly observable – as in other disciplines. Imagine that a medical research group were to conduct an experimental test of a new drug for dementia, and use mediation analysis. The study might find that the drug enhances puzzle-solving performance and that individuals who solve puzzles faster are less likely to develop dementia. The indirect effect would most likely be statistically significant. 


But is this good evidence to recommend the drug for dementia prevention – even if it has side effects that lower patients’ quality of life? Probably not. So why use such a procedure in the first place? The answer is clear: we shouldn’t, and the same caution applies to social sciences.



References:

  1. Chan, M., Hu, P., & Mak, M. K. F. (2022). Mediation analysis and warranted inferences in media and communication research: Examining research design in communication journals from 1996 to 2017. Journalism & Mass Communication Quarterly, 99(2), 463-486. https://doi.org/10.1177/1077699020961519 

  2. Coenen, L. (2022). The indirect effect is omitted variable bias: A cautionary note on the theoretical interpretation of products-of-coefficients in mediation analyses. European Journal of Communication, 37(6), 679-688. https://doi.org/10.1177/02673231221082244

  3. Kline, R. B. (2015). The mediation myth. Basic and Applied Social Psychology, 37(4), 202-213. https://doi.org/10.1080/01973533.2015.1049349 

  4. Rohrer, J. M., Hünermund, P., Arslan, R. C., & Elson, M. (2022). That’s a lot to process! Pitfalls of popular path models. Advances in Methods and Practices in Psychological Science, 5(2). https://doi.org/10.1177/25152459221095827


The graphic was generated with OpenAI's DALL-E.


