Blog
Science, Health, Technology
- How Social Media Platforms Influence Academic Research, and How to Address It
Imagine if food or drug companies were allowed to hide the ingredients of their products from independent researchers. In a new comment published in Humanities and Social Sciences Communications, Isabelle Freiling and I argue that this is exactly the situation with social media platforms — and it can exacerbate forms of industry influence that have long been documented in other fields.

Social media platforms are no longer just objects of research — they are increasingly data gatekeepers and research funders. This creates a structural conflict of interest, especially when platforms engage in research on issues in which they have a direct stake, such as whether they contribute to political polarization or misinformation, and how these problems should be addressed. A recent preprint shows that many of these material connections are in fact not disclosed, which is highly problematic (Bak-Coleman et al., 2026). However, the conflicts created often cannot be resolved by disclosure alone. In our comment, we argue that the problem goes beyond disclosure (Heiss & Freiling, 2026): even research in which conflicts are formally disclosed can still be subject to systematic bias, and there is substantial evidence that this is indeed the case.

While large-scale private funding of social science research is relatively new, other disciplines have been dealing with industry influence for decades. In response, they have developed a strong tradition of meta-research examining how funding sources shape research agendas and results. The findings are striking. For instance, a meta-analysis in nutrition research on the relationship between sugar-sweetened beverages and obesity or type 2 diabetes found that almost all industry-funded studies concluded that sugary drinks do not increase obesity or diabetes — while nearly all independent studies reported the opposite (Schillinger et al., 2016).
How Industry Influence Operates

Industry influence on research can operate at any stage of the research cycle (Bero, 2023). For example, through targeted funding calls, industry actors can shape research agendas and even define the questions that academics work on. They can also selectively fund proposals they perceive as most aligned with their strategic interests. Furthermore, funding itself and personal interactions between researchers and funders can trigger feelings of reciprocity, making researchers less willing to interpret data in ways that might harm the funding industry partner. This effect is likely to be even stronger when researchers interact directly with industry representatives. However, researchers are often unaware of the biases that may result from such interactions, and even when influence is present, it is often difficult to detect and prove. Beyond agenda setting and interpretation, industry influence may also operate at the level of data analysis, for example through choices of variables, model specifications, inclusion and exclusion criteria, or robustness checks that are shaped by structural dependencies. Once industry-aligned research results are published, industry players can also bias science communication by selectively highlighting specific findings and disseminating them in a one-sided manner through powerful marketing channels. In fact, there are reasons to assume that industry influence can be exerted at any stage of the research process, from agenda setting to dissemination, as illustrated in Figure 1. For social media platforms, the gatekeeping role in data access is particularly critical.

Figure 1. Industry influence in social media research can operate at multiple, interconnected stages of the research process, including funding, control over data access, data analysis, interpretation of results, dissemination, and sustained relationships between platforms and researchers.
Data Access as a Unique Issue

There is no reason to assume that social media platforms do not follow similar industry interests. Compared to other industries, however, platforms have a unique advantage: they host and control access to their own data. This gives them an unprecedented form of epistemic power. In our comment, we illustrate this problem with a simple analogy: “Imagine if researchers were denied access to the ingredients of food products or drugs to study their effects on human health. However, when it comes to social media products, access to their ‘ingredients,’ such as algorithms and input data, is limited.” While in pharmaceutical or food research scientists can generate independent data, this is not the case for platform research. Access to core information — including algorithms, content exposure, and engagement dynamics — is controlled by the companies themselves.

This issue became especially visible in the major industry–academia collaboration between Facebook and independent researchers around the 2020 US presidential election. In this collaboration, Meta provided exclusive access to internal data, and Meta researchers actively participated in the research process. This setting opened the door to potential industry influence, and it later emerged that Meta had altered key algorithmic features during the field study. Nevertheless, the results of the studies were highlighted on Meta’s website as robust evidence that the platform’s algorithm does not increase political polarization. The media echoed this frame.

Figure 2. Examples of media coverage following the release of the 2020 U.S. presidential election studies from the Meta–researcher collaboration.

While there is strong evidence that industry influence has biased scientific results in other fields — such as nutrition science, tobacco research, or fossil fuel studies — there is no convincing reason to assume that social media research should be immune to similar dynamics.
Institutionalization of Influence

Beyond individual projects, platforms also seek to institutionalize their influence through long-term funding schemes, fellowships, and research centers. Examples include Meta’s global early career fellowships, the Social Science One partnership, the Chan Zuckerberg Initiative’s large-scale funding of AI research, or Google’s Jigsaw. While these initiatives may support valuable research, they also create permanent dependencies and intermediary organizations that can subtly align research agendas with corporate interests. This makes it especially important to understand the extent and impact of industry influence now — before such institutional structures become so entrenched that they are difficult to reverse.

Challenging Industry Influence

In light of this, we argue that new policies are needed. These include limiting industry collaborations in areas with clear conflicts of interest (such as misinformation, polarization, and health), and reforming how industry funding is distributed — for example, by establishing independent agencies to manage and allocate such funds. If platforms are genuinely interested in advancing knowledge independent of corporate interests, there is little reason not to delegate funding decisions to independent bodies. In addition, stronger regulation is needed to ensure that researchers can access social media data independently. Imagine if researchers were unable to study the ingredients of food products or drugs in order to assess potential harms. In the case of social media, we face precisely this situation: access to core data is controlled by the companies themselves, forcing researchers into dependency relationships. This is highly problematic. The EU’s Digital Services Act attempts to address this issue by requiring very large platforms to provide vetted researchers with access to internal data in order to study systemic risks such as misinformation or political polarization.
While early evidence suggests that platforms often resist or delay meaningful access, the DSA represents the first serious attempt to structurally reduce researchers’ dependency on platform goodwill.

Finally — and most importantly — we need a more critical research community that systematically examines industry influence. The core risk is not only biased findings, but the gradual capture of the entire knowledge production process around social media — from research questions and data access to funding structures and institutional infrastructures. To understand and counteract this process, we need robust meta-analyses and investigative “research on research” that make industry influence visible, measurable, and contestable (see Table 1 for a summary, from Heiss & Freiling, 2026).

References

Bak-Coleman, J., West, J., O’Connor, C., & Bergstrom, C. T. (2026). Industry influence in high-profile social media research [Preprint]. https://doi.org/10.48550/arXiv.2601.11507

Bero, L. (2023). Industry influence on research: A cycle of bias. In N. Maani, M. Petticrew, & S. Galea (Eds.), The commercial determinants of health (pp. 185–196). Oxford University Press. https://doi.org/10.1093/oso/9780197578742.003.0019

Heiss, R., & Freiling, I. (2026). Addressing social media platforms’ influence on academic research. Humanities and Social Sciences Communications, 13, 192. https://doi.org/10.1057/s41599-026-06690-6

Schillinger, D., Tran, J., Mangurian, C., & Kearns, C. (2016). Do sugar-sweetened beverages cause obesity and diabetes? Annals of Internal Medicine, 165, 895–897. https://doi.org/10.7326/L16-0534

Title graph is generated with OpenAI's DALL-E.
- Four Key Biases in Influencers’ Medical Advice: The Risks and How to Reduce Them
Medical advice from social media influencers reaches millions of people every day—often far more than traditional news media—yet influencers are still perceived as private individuals and therefore evade any form of editorial responsibility. While influencers have become an important information source, their guidance is often incomplete, biased, or misleading. In our new analysis, published in The BMJ, we outline the key biases in influencers’ messaging and how to respond in order to mitigate the associated risks.

Four key biases that shape influencer health advice

Our analysis identifies four recurring sources of bias in influencer content:

- Lack of medical expertise: Many influencers do not have formal training in medicine or health sciences, yet they recommend diagnostic tests, treatments, or supplements as if they were experts. This increases the risk of promoting ineffective or harmful interventions.
- Industry influence: Influencers often collaborate with brands that provide payments, free products, or affiliate commissions. These commercial incentives can shape the content they share, especially when advertising is not properly disclosed.
- Entrepreneurial interests: Some influencers market their own health products, including supplements or tests. The desire to boost sales can lead to exaggerated claims or the promotion of unproven interventions.
- Personal beliefs: Health recommendations may also reflect personal ideologies—such as alternative medicine, anti-vaccine views, or lifestyle philosophies—that are not supported by scientific evidence.

These biases are amplified by parasocial relationships: one-sided emotional connections that make followers feel they “know” the influencer. This sense of familiarity increases trust and makes persuasive health messaging more effective. Examples illustrate how this plays out in practice.
A widely discussed case is Kim Kardashian’s promotion of full-body MRI screening—a test without proven health benefit and with considerable risks of overdiagnosis. Another example is “Dr. Eric Berg,” a US chiropractor with a large online following who promotes high-dose supplements and sells his own branded products, some of which have received regulatory warnings due to concerning ingredient levels (for more examples see Table 1, from the BMJ analysis).

Representative survey results show strong influence on young people

The risks highlighted in the BMJ analysis are supported by findings from our recent study published in the Journal of Adolescent Health. This study examined how 15- to 25-year-olds in Austria interact with influencers and how this shapes their health decisions. Key findings include:

- Young people estimate that over 50% of the content in their feeds comes from influencers.
- More than 80% encounter influencer health content at least occasionally.
- 53% have already purchased a product because of influencer recommendations, including supplements (31%), addictive substances (15%), medications (13%), and medical self-tests (11%).
- 78% say they trust the influencers they follow “at least somewhat”—a similar level of trust as in traditional media.
- Problematic social media use and strong parasocial relationships are associated with higher susceptibility to influencer marketing.
- Digital health literacy has a protective effect: young people with higher perceived competence are less likely to buy health-related products based on influencer endorsement.

These findings indicate that influencers hold substantial persuasive power—power that is often underestimated in public health discussions.

A need for coordinated action

While some influencers contribute positively by explaining medical topics or correcting misinformation, the overall digital environment remains difficult to navigate.
Our BMJ analysis highlights the need for coordinated action among governments, digital platforms, health institutions, and educators. The EU’s Digital Services Act requires major platforms to assess and mitigate systemic health risks, such as the spread of misleading medical content. National governments can assign more editorial responsibility to high-reach influencers and restrict harmful health advertising. Strengthening digital and health literacy, particularly among young people, is essential for helping users recognise commercial, ideological, or manipulative content. Together, these strategies can create a safer digital information environment in which reliable medical advice is easier to identify and harmful content is less likely to spread.

References

Heiss, R., Woloshin, S., Dave, S., Engel, E., Gell, S., & Willis, E. (2025). Responding to public health challenges of medical advice from social media influencers. The BMJ. https://www.bmj.com/content/391/bmj-2025-086061

Engel, E., Gell, S., Karsay, K., & Heiss, R. (2025). Engagement with influencers as sources of health information and product promotions: A cross-sectional survey of Austrian youth aged 15–25. Journal of Adolescent Health. https://www.jahonline.org/article/S1054-139X(25)00415-X/fulltext

Animated graph is generated with OpenAI's DALL-E.
- Why We Should Stop Using Mediation Analysis
"Imagine that a medical research group were to conduct an experimental test of a new drug for dementia, and use mediation analysis…"

Unfortunately, there is no magic in mediation analysis: weak data produce biased results.

In recent years, I have reviewed dozens of papers from well-respected journals in which authors employ mediation analysis – often in the context of small experiments. Typically, these researchers find an effect on a theoretically proximal mediator (e.g., media exposure influencing emotion or attitude), but not on a more distal outcome (e.g., intended behavior). They then claim a causal model in which A (e.g., media exposure) leads to B (e.g., emotion or attitude), and B leads to C (e.g., a behavioral intention). Despite their lack of empirical rigor, such models have become a dominant procedure in communication research and psychology, earning acceptance among editors, authors, and reviewers. One reason for their widespread adoption may be the absence of dedicated comment and analysis sections in social science journals, where methodological issues could be discussed critically and transparently.

The key methodological problem is that these models are almost always based on inappropriate data. As soon as one path in the mediation model relies on cross-sectional data, the indirect effect becomes prone to confounding biases and is often uninformative [1,2,3,4].

Consider this example: Exposure to a critical report about vaccine side effects might evoke a negative emotional reaction toward vaccination. Even if the study does not find that such exposure reduces people’s willingness to get vaccinated, it will likely reveal that negative emotional reactions correlate with vaccination intentions in the sample. Based on this observation, researchers can calculate an indirect effect linking their experimental manipulation (the media report) to behavior via emotion.
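The vaccination scenario can be made concrete with a small simulation. This is a hedged sketch with made-up coefficients, not data from any real study: a randomized "report" shifts emotion, an unmeasured prior attitude drives both emotion and intention, and the report has zero causal effect on intention. The product-of-coefficients indirect effect nevertheless comes out clearly nonzero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Unmeasured confounder: prior attitude toward vaccination (hypothetical)
attitude = rng.normal(0, 1, n)
# Randomized exposure to the critical media report
report = rng.integers(0, 2, n)
# Mediator: negative emotion, driven by the report AND the confounder
emotion = 0.3 * report + 0.8 * attitude + rng.normal(0, 1, n)
# Outcome: vaccination intention, driven ONLY by the confounder --
# by construction there is NO causal path from emotion to intention
intention = -0.8 * attitude + rng.normal(0, 1, n)

# a-path: effect of the report on emotion (valid: report is randomized)
a = np.polyfit(report, emotion, 1)[0]

# b-path: OLS of intention on emotion, adjusting for the report
X = np.column_stack([np.ones(n), emotion, report])
b = np.linalg.lstsq(X, intention, rcond=None)[0][1]

indirect = a * b  # product-of-coefficients "indirect effect"
print(f"a = {a:.2f}, b = {b:.2f}, indirect = {indirect:.2f}")
```

Because the regression of intention on emotion picks up their shared dependence on attitude, the estimated b-path is substantially negative even though the true mediator-to-outcome effect is zero, so a*b looks like evidence of mediation where none exists.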
However, the link between emotion and behavior is purely correlational, and the negative emotions measured in the sample are most likely not solely caused by the experimental manipulation. In fact, they are likely to be confounded by deeper attitudes, past experiences, and social environment factors.

In summary, indirect effects calculated from one or more cross-sectional paths are, in most cases, uninformative. It is far more informative to separately examine the effect of the experimental manipulation on (a) the emotional response and (b) the behavior. Thus, researchers could define a primary outcome (e.g., behavior) and secondary outcomes (e.g., emotional reaction) and report how these outcomes are causally affected by the experimental manipulation. Of course, researchers should feel free to report correlations between theoretically linked outcome variables and speculate about potential mechanisms. However, scientific research is about evidence, and researchers must avoid biasing their conclusions through inappropriate modeling or by “overselling” weak data.

In social science, the consequences of such sloppy procedures are not as severe – or as directly observable – as in other disciplines. Imagine that a medical research group were to conduct an experimental test of a new drug for dementia and use mediation analysis. The study might find that the drug enhances puzzle-solving performance and that individuals who solve puzzles faster are less likely to develop dementia. The indirect effect would most likely be statistically significant. But is this good evidence to recommend the drug for dementia prevention – even if it has side effects that lower patients’ quality of life? Probably not. So why use such a procedure in the first place? The answer is clear: we shouldn’t, and the same caution applies to the social sciences.

References:

[1] Chan, M., Hu, P., & Mak, M. K. F. (2022). Mediation analysis and warranted inferences in media and communication research: Examining research design in communication journals from 1996 to 2017. Journalism & Mass Communication Quarterly, 99(2), 463–486. https://doi.org/10.1177/1077699020961519

[2] Coenen, L. (2022). The indirect effect is omitted variable bias: A cautionary note on the theoretical interpretation of products-of-coefficients in mediation analyses. European Journal of Communication, 37(6), 679–688. https://doi.org/10.1177/02673231221082244

[3] Kline, R. B. (2015). The mediation myth. Basic and Applied Social Psychology, 37(4), 202–213. https://doi.org/10.1080/01973533.2015.1049349

[4] Rohrer, J. M., Hünermund, P., Arslan, R. C., & Elson, M. (2022). That’s a lot to process! Pitfalls of popular path models. Advances in Methods and Practices in Psychological Science, 5(2). https://doi.org/10.1177/25152459221095827

Graph is generated with OpenAI's DALL-E.


