Maintaining Information Integrity in a Post-Truth, AI-Dominated World
- regularforcesyee
- Jun 4, 2024
- 2 min read

The term “epistemic security” refers to protecting the integrity of knowledge and access to reliable information sources. It provides reasonable certainty that what we know is in fact true and that our sources of information are valid. It also helps us to identify fake news items, disinformation campaigns, and other threats to information integrity.
Epistemic security is vital to the protection of individual, business, and national interests. With the rise of AI, an all-too-real threat has quickly come to dominate our daily interaction with the world and to undermine our information security. By 2026, up to 90% of online content could be AI-generated. Although AI offers a host of benefits for humanity, it also presents several challenges. With AI, we are not only further removing ourselves from direct interaction with our observable surroundings, but we are also outsourcing our critical thinking skills to machines.
Within academia, professors once worried about students plagiarizing other (human) authors or copying information from questionable human-generated sources, such as Wikipedia. Now, the same student is highly likely to use a generative AI platform (such as ChatGPT) to produce a paper that “plagiarizes” AI-generated content while citing non-existent, AI-generated sources.
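One practical countermeasure is simply to verify that cited sources exist at all. The sketch below is a hypothetical illustration, not a tool referenced in this post: it checks whether each cited DOI is registered, assuming network access and relying on the public doi.org resolver, which (to the best of my understanding) redirects registered DOIs and returns 404 for unknown ones. The example DOIs are placeholders.

```python
# Hypothetical sketch: flag cited DOIs that do not resolve.
# Assumes network access; uses the public doi.org resolver, which
# redirects registered DOIs and returns 404 for unregistered ones.
import requests


def doi_exists(doi: str, timeout: float = 10.0) -> bool:
    """Return True if the DOI appears to be registered with doi.org."""
    resp = requests.head(f"https://doi.org/{doi}",
                         allow_redirects=False, timeout=timeout)
    # Registered DOIs respond with a redirect (3xx); unknown DOIs return 404.
    return 300 <= resp.status_code < 400


# Example: screen a reference list produced by a generative AI tool.
cited_dois = ["10.1000/182", "10.9999/nonexistent.citation"]
for doi in cited_dois:
    status = "resolves" if doi_exists(doi) else "NOT FOUND - verify manually"
    print(f"{doi}: {status}")
```

A check like this only catches fabricated identifiers; a resolving DOI can still be cited for claims it does not support, so human review remains essential.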
While AI provides alarming new avenues for the rapid spread of disinformation, a more insidious effect lies in the creation of “hallucinations.” Large language models (LLMs) are deep learning models trained on massive datasets to perform a variety of tasks, including understanding and generating new content.
These hallucinations are generated outputs that, while appearing superficially valid, are not grounded in the model’s training data or source inputs, and they can have serious real-world consequences. News bots can spread misinformation about a developing emergency, inciting panic or compromising disaster-management efforts. In the healthcare realm, LLM hallucinations can lead to misdiagnoses.
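To make the idea concrete, here is a toy sketch (my illustration, not a method described in this post) of a naive grounding check: it flags generated sentences that share few words with the supplied source material as candidates for hallucination. The function names, threshold, and example text are assumptions for illustration only; real fact-checking pipelines are far more sophisticated.

```python
# Toy illustration (not a production method): flag generated sentences with
# little lexical overlap with the source material as possible hallucinations.
import re


def tokens(text: str) -> set[str]:
    """Lowercased word tokens for a rough overlap comparison."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))


def flag_ungrounded(generated: str, sources: list[str], threshold: float = 0.5):
    """Yield (sentence, score) pairs whose best overlap with any source is below threshold."""
    source_tokens = [tokens(s) for s in sources]
    for sentence in re.split(r"(?<=[.!?])\s+", generated.strip()):
        sent_tokens = tokens(sentence)
        if not sent_tokens:
            continue
        best = max(len(sent_tokens & st) / len(sent_tokens) for st in source_tokens)
        if best < threshold:
            yield sentence, best


# Hypothetical example: one sentence is supported by the source, one is not.
sources = ["The storm made landfall on Tuesday and power was restored by Friday."]
generated = ("The storm made landfall on Tuesday. "
             "Officials confirmed two hundred casualties in the capital.")
for sentence, score in flag_ungrounded(generated, sources):
    print(f"Possible hallucination ({score:.0%} overlap): {sentence}")
```

Simple word overlap misses paraphrase and cannot judge truth, but it illustrates the core problem: fluent text and grounded text are not the same thing.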
Imagine manufactured content designed to discredit a rival, or content purporting to show that a particular group suffered an atrocity at the hands of its enemies when no such attack ever occurred. Imagine this content being repeated thousands of times almost instantly, with little to no effort to check its veracity, particularly among those who are primed to believe it’s true. These deepfakes may be artificial, but the consequences are real. This is certainly the case in regimes where the state tightly controls access to media, but it is also true in free societies, where citizens have access to reputable sources yet choose to believe information that promotes their preferred narrative.