Artificial Intelligence

AI and the War on Truth: A New Era of Deception

2025-08-21
10 min

AI’s adaptability has made it a force to be reckoned with. Across the globe, AI is enriching various sectors to great benefit. On the other hand, malicious actors recognize that AI’s versatility makes it a potent tool, particularly in the realm of truth manipulation. This phenomenon is not novel. In ancient Rome, through a practice called damnatio memoriae, the state condemned the memory of a person perceived as a tyrant, traitor, or enemy of the state by systematically destroying their images and erasing their names from inscriptions. Similarly, in the Soviet Union, Joseph Stalin often had photographs altered to remove people who had died or been removed from office, erasing all traces of them, almost as if they had never existed. These historical practices of truth manipulation have evolved, aided in part by AI’s ability to generate disinformation.

What is disinformation?
Disinformation (DI) is false, inaccurate, or misleading information shared with the intent to deceive the recipient. DI appears in media such as deepfake audio and video, and fabricated news. A deepfake, a portmanteau of “deep learning” and “fake,” is highly realistic, digitally manipulated audio or visual material. Manipulation of information, particularly through deepfakes, not only amplifies the spread of DI but also poses significant challenges in detecting and countering it.

Why is it hard to fight?
In the era of AI, DI is exceedingly difficult to fight, for several reasons examined below.

Hyperrealism of AI-generated content. A few years ago, people could easily distinguish between real and AI-generated content, but not anymore. AI’s ability to “learn” means that every iteration gains a more lifelike quality, making it harder for the naked eye or ear to detect. In 2024, a video of President Ibrahim Traore of Burkina Faso criticising the portrayal of Africa in Western media was widely acclaimed and shared on social media; experts consider it almost certainly a deepfake. A graver example of AI’s hyperrealism concerns Arup, a British design and engineering company. In 2024, an employee of the firm was duped into attending a video call with people he mistakenly believed were the CFO and other members of staff. Deceived by AI-crafted deepfakes of familiar voices and faces, the employee sanctioned a USD 25.6 million transfer, only later discovering the fraud. The Arup case exemplifies AI’s ability to churn out media whose artificiality is increasingly undetectable to the naked eye or ear, greatly complicating our ability to sift what is real from what isn’t.

The emergence of Social Media Platforms (SMPs). Beyond the realism of AI-generated content, DI is amplified by changes in how information propagates, in two main ways. Firstly, before the advent of SMPs, we relied on traditional media such as newspapers and television, which filtered the information we received, for better or worse. SMPs have grown in popularity and gradually eroded the dominance of traditional media as sources of information, yet they lack the editorial safeguards of traditional media, so content can only be policed after the fact. Secondly, sharing on SMPs is instantaneous: content can be posted and reach millions in seconds, at any time, without verification. Together, these factors enable DI to proliferate unchecked.

Deployment of highly sophisticated AI tools. SMPs and websites have amplified the spread of DI through the tools they use to attract users and generate revenue, chiefly targeted advertisements that keep users engaged. AI analyses data from tracking tools like cookies to understand users’ online activity and compile detailed profiles of their interests, beliefs, likes, and dislikes, enabling personalised advertisements that drive profits. These same tools are now repurposed to send specific groups false information tailored to their beliefs or demographics, much as platforms like Facebook or YouTube use AI to suggest what to watch based on your interests. This creates echo chambers in which everyone receives information that reinforces their existing beliefs, complicating the detection of DI. Additionally, malicious actors use fake, AI-powered social media accounts, commonly known as bots, to spread DI. Bots behave like real people by liking and sharing posts, flooding SMPs with information that appears widely endorsed and thereby creating a false sense of credibility.
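To make this feedback loop concrete, below is a deliberately simplified Python sketch of engagement-driven recommendation. The posts, topics, and scoring rule are all hypothetical; production recommenders rely on large-scale machine learning, but the self-reinforcing dynamic is the same: the more a user engages with a topic, the more of that topic they are shown.

```python
from collections import Counter

# Hypothetical feed of posts, each tagged with a topic.
posts = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "sport"},
    {"id": 3, "topic": "politics"},
    {"id": 4, "topic": "cooking"},
    {"id": 5, "topic": "politics"},
]

def recommend(engagement_history, posts, k=3):
    """Rank posts by how often the user engaged with each topic."""
    interest_profile = Counter(engagement_history)  # the user "profile"
    ranked = sorted(posts,
                    key=lambda p: interest_profile[p["topic"]],
                    reverse=True)
    return ranked[:k]

# A user who has clicked two political posts and one sport post now sees
# mostly politics; each further click narrows the profile (an echo chamber).
print(recommend(["politics", "politics", "sport"], posts))
```

Swap “politics” for any belief or demographic marker and the same loop can deliver tailored DI to the groups most receptive to it.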

The democratisation of content-creation tools. More people have access to content-creation tools like Grok, ChatGPT, AI video generators, and AI-powered voice-cloning tools, which lower the barriers to creating and spreading DI. The ease of access to these technologies enables virtually anyone, with or without technical skills, to create content at minimal cost in a short time, significantly widening the base from which DI can be generated.

End-to-end encryption in private messaging applications (PMAs). PMAs like WhatsApp boast end-to-end encryption: a form of security that ensures that only the sender and intended recipient(s) of a message can access its contents. End-to-end encryption allows content to be shared with one user or within closed groups with large numbers of users, without regulatory oversight. This lack of oversight gives users carte blanche to circulate all kinds of information, even information that is patently false. For example, an imposter used AI to clone US Secretary of State Marco Rubio’s voice and then sent voice messages to three foreign ministers, a governor, and a member of Congress. Because of Rubio’s public standing, the voice messages were easy to debunk; the same cannot be done for the volume of information shared on PMAs daily. It also bears noting that with user bases rivalling or exceeding those of many public SMPs, PMAs amplify the potential for DI to spread.
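For readers unfamiliar with the mechanics, the sketch below illustrates the core guarantee of end-to-end encryption using the PyNaCl library. It is a minimal, illustrative example only: real messaging apps such as WhatsApp implement the far more elaborate Signal protocol, with per-message key ratcheting, but the principle that the relaying server never holds a decryption key is the same.

```python
# Minimal end-to-end encryption sketch using PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

# Each party generates a key pair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"Meet at noon.")

# Any server relaying `ciphertext` sees only unreadable bytes: decryption
# requires Bob's private key, which only Bob holds.
receiving_box = Box(bob_key, alice_key.public_key)
print(receiving_box.decrypt(ciphertext))  # b'Meet at noon.'
```

The regulatory difficulty described above follows directly: because the platform relays only ciphertext, no intermediary can inspect a message to verify or police it.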

Owing to the above factors, humanity faces an uphill task in fighting AI-generated DI. As such, we find ourselves in the throes of its far-reaching impacts on society, some of which include:

Engendering a mistrust of journalism. AI’s ability to craft highly realistic deepfakes that are increasingly indistinguishable from genuine recordings undermines journalistic credibility: they misrepresent people or situations and spread false narratives. For instance, a fake audio clip posted on X falsely depicted British PM Keir Starmer abusing party staffers; a second clip purportedly captured him criticising the city of Liverpool. Although both were debunked, many who heard them were convinced it was his voice. Consequently, people are finding it harder to distinguish what is real from what is fake, so even credible information is liable to be regarded with suspicion. Conversely, deepfakes provide a plausible pretext for individuals seeking to deflect accountability for their actions.

Influencing the political landscape and posing a threat to national security. Deepfakes could potentially manipulate voters, sway elections, and compromise politicians. For instance, in Slovakia in 2023, a two-minute audio recording in which the leader of the Progressive Slovakia party, Mr. Michal Šimečka, appeared to be discussing bribing voters was shared on various social media sites just two days before the election. Fact-checkers concluded that the audio was generated by an AI tool trained on samples of Šimečka’s voice. Šimečka went on to lose the election. There is no definitive proof that the recording swung the result in his opponent’s favour, but the incident highlights the growing threat of DI in world politics.

Deepfakes could also be used to fabricate scandals, falsify records of public statements, and deepen political divides. Authoritarian governments and their political opponents alike could use AI technologies to fabricate evidence of political crimes, undermining democracy: the state would gain a pretext to prosecute and convict its opponents, while opponents could trigger violence in pursuit of regime change.

Unchecked, deepfakes could erode trust in political institutions, fuel polarisation, and destabilise societies.

Pornography. The pornography industry has not escaped AI’s reach. AI can generate pornographic content that is entirely synthetic or convincingly overlay the face of a real person, especially a famous one, onto a pornographic actor’s body. The latter can trigger psychological harm, ruin reputations, damage relationships, and cause economic losses through extortion or lost employment. Game of Thrones star Maisie Williams and Harry Potter star Emma Watson were falsely depicted on the pornography site Pornhub. In 2024, X was flooded with sexually explicit images of world-famous songstress Taylor Swift, amassing millions of views; the images reportedly originated on Telegram before making their way to X, and were reported but not before millions had seen them. This year, Meta took down sexualised images of Maria Sharapova, Miranda Cosgrove, and Scarlett Johansson that had circulated on Facebook. Even more alarming is the fact that these tools are being used to create child pornography, which is then circulated widely on pornographic sites and public forums. Undoubtedly, many will fall prey to AI-generated pornography, with women and children disproportionately the victims.

To conclude, AI is revolutionising the way we do things, driving progress in fields like energy, medical care, and education. But in unscrupulous hands, it generates DI on a scale never seen before, with numerous deleterious effects. Gone are the days when we could blindly trust the information presented to us; now, because of AI, we must ask whether what we are seeing is real or fake. AI is like fire: tended wisely in the hearth, it provides warmth, well-cooked meals, and progress; in the wrong hands, it razes entire forests and devastates lives. We must be careful how we use it.
