April 29, 2024

Navigating False Narratives in Scholarly Research and Publishing

Generative artificial intelligence (GenAI) technologies such as large language models (LLMs) are rapidly automating tasks across business functions, and adoption is accelerating across industries. Scholarly publishing is no exception. These models can generate near-human text, making them useful for rote tasks such as drafting manuscripts and summarizing research, and they can even respond to reader queries and comments.

A survey by Nature found that 28 percent of researchers surveyed use LLMs to help write manuscripts, and 32 percent reported that these tools helped them write manuscripts faster.

The Complexities of False Narratives:

The benefits of GenAI are undeniable, but it is also crucial to address the elephant in the room: these tools can distort facts and findings in ways that mislead readers or engineer public opinion.

Academic publishing prides itself on accuracy, integrity, and credibility. Research papers in STM journals advance our understanding of how the world works, so there is no room for bias or incorrect facts. A simple example: a GenAI model produces a flawed summary of a research paper, misrepresents its findings, and thereby spreads misinformation.

False narratives can take many forms in scholarly research, ranging from subtle inaccuracies to outright misinterpretations of data. Bias can also seep in through the data used to train GenAI models.

The Impact on Scholarly Publishing:

The potential for false narratives poses a major challenge for the publishing industry as a whole, STM publishing included. Beyond factual error itself, the larger issue is the breach of the trust that readers place in published content, which can have prolonged and widespread consequences.

For example, imagine a research paper reports a potential treatment for reversing alopecia, but the findings are limited to mice. A GenAI model could misinterpret the results and claim that the treatment would also work in humans.

Protecting the integrity of STM publications is therefore of paramount importance. Readers who lose trust in the accuracy of a publisher's content will stop reading, and lost readership means lost revenue.

Solutions to Mitigate Risks:

One way to mitigate the risk of GenAI spreading false narratives is to implement a robust fact-checking and review system, with human intervention at some point in the process. Organizations and publishing houses also need to invest in training their staff to work effectively with GenAI, including understanding its capabilities and limitations. Ethical and bias considerations must likewise be part of any review of GenAI-generated content.
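As a minimal sketch of such a human-in-the-loop gate, an AI-generated draft could be blocked from publication until a human fact-checker approves it. The types and function names here are hypothetical illustrations, not a description of any publisher's actual workflow:

```python
# Hypothetical human-in-the-loop publication gate: AI-generated drafts
# require explicit human approval before they can be published.

from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    ai_generated: bool         # whether GenAI produced (any of) the text
    human_approved: bool = False

def approve(draft: Draft, reviewer: str) -> Draft:
    """Record that a human reviewer has fact-checked the draft."""
    draft.human_approved = True
    return draft

def can_publish(draft: Draft) -> bool:
    """Human-written drafts pass; AI-generated drafts need human approval."""
    return draft.human_approved or not draft.ai_generated
```

The key design choice is that the gate fails closed: absent an explicit human sign-off, AI-generated content simply cannot be published.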

In addition, several other steps can help mitigate the risks associated with GenAI:

  • Authentication protocols, such as those based on blockchain technology, to verify the identity of users and distinguish humans from AI.
  • Disclaimers that inform audiences and readers whether the content they are reading was generated using AI.
  • Content tagging that labels material to distinguish AI-generated content from human-generated content.
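The disclaimer and tagging steps above can be sketched as a simple provenance label attached to each piece of content. The schema, field names, and disclaimer text below are hypothetical illustrations, not an established industry standard:

```python
# Hypothetical content-labeling sketch: a minimal provenance tag a
# publisher might attach to each piece of content it distributes.

from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Provenance(Enum):
    HUMAN = "human-generated"
    AI = "ai-generated"
    AI_ASSISTED = "ai-assisted"

@dataclass
class ContentLabel:
    provenance: Provenance
    model_name: Optional[str] = None   # which model produced the text, if any
    disclaimer: Optional[str] = None   # reader-facing notice, if AI was involved

def label_content(text: str, provenance: Provenance,
                  model_name: Optional[str] = None) -> dict:
    """Attach a provenance label, adding a disclaimer for AI content."""
    disclaimer = (None if provenance is Provenance.HUMAN
                  else "This content was generated with the assistance of AI.")
    return {"text": text, "label": ContentLabel(provenance, model_name, disclaimer)}
```

Carrying the label alongside the text, rather than embedding it in the text itself, lets downstream systems render the disclaimer wherever the content is displayed.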


As a technology-focused supplier, Apex CoVantage is a strong strategic partner for the scientific and academic community as it navigates the challenges and opportunities presented by GenAI. Apex sees the value of the human in the loop. We support STM publishers with robust review processes, investment in staff training, and mitigation of GenAI risks. If you are interested in how Apex CoVantage can support your STM publishing needs in the era of GenAI, do not hesitate to reach out to us.

