The World of Post-Truth. How Information Became a Battleground

Victor Taran / Tyzhden

Information no longer comes to us in separate blocks throughout the day. It has no clear boundaries and does not give time for reflection. The informational space has stopped being just a way to deliver news. It has become an environment in which decisions, moods, and behaviors can be influenced, altering the course of political, social, and economic events.

According to World Economic Forum estimates, disinformation today is the greatest short-term global risk to society. At the same time, data from the Reuters Institute shows that the trust level in news worldwide is about 40%, indicating a systemic crisis of confidence in information.

News updates continuously, creating the impression that it is important to react immediately. You open a feed and see a video with a loud caption about an event that supposedly just happened. The person on the screen speaks confidently, the picture looks clear, and the comments under the video reinforce the impression that it is true. You feel anxiety or outrage and almost automatically press the "forward" button. Likes, reposts, comments. Everything seems perfect. However, a few hours later, a clarification appears: the video is old, altered, or simply AI-generated. As Sam Altman rightly points out, people today should no longer automatically assume that what they see or hear is real.

Of course, you write a refutation, but it’s too late. The message has already spread to hundreds of chats and embedded itself in others’ beliefs. Congratulations — you have become a victim and participant in someone else’s information campaign. This is how the informational environment we have been living in for the past few years works.

Sometimes the goal is not to persuade, but to confuse.

Seeing several mutually contradictory versions, a person begins to doubt everything. This reduces trust and makes us more vulnerable to subsequent messages. It is not necessary to create outright falsehoods for this. It is enough to change the emphasis, take a fact out of context, or repeat the same thesis from different sources. Over time, this creates the feeling that a certain opinion is generally accepted.

The scale of this flow is hard to even imagine. According to official releases, more than 500 hours of video are uploaded to YouTube every minute. On TikTok, billions of videos are watched daily, with tens of millions of new ones added. On Facebook and Instagram, hundreds of millions of posts, stories, and reels are created every day. According to the company's own open data, Facebook alone has billions of active users constantly generating and consuming content.

Modern content has learned to imitate the truth well. High-quality imagery, confident tone, and a large number of reactions create a sense of credibility. For most, this is enough to believe. Emotions accelerate this process.

Research at the Massachusetts Institute of Technology showed that false news spreads faster than true news on social networks precisely because it more often evokes a strong emotional reaction, such as fear or outrage, which drives people to pass on unverified information.

Algorithms adapt to user behavior and gradually create a separate informational reality for them. If a person often reacts to alarming or provocative news, they receive even more similar content. Over time, this starts to seem like an objective view of the world, although it’s actually just a filtered stream.

As a result, two people can look at the same event and see completely different versions. Both will appear convincing but will be based on different sets of information.

When Appearance Replaces Content: How Artificial Intelligence Produces Convincing Falsehoods

A few years ago, creating fake information required time and effort. One had to write texts, edit images, and invent details on their own. Now, a significant part of this work is done by artificial intelligence, and faster than a person can read the result. Tools for generating texts, images, and videos (such as ChatGPT or Midjourney) allow the creation of large volumes of material in seconds. In 2023–2025, researchers recorded a rapid increase in the number of AI-generated images on social media, and platforms began marking such content.

As a result, the amount of visual information (videos, images, short clips) is growing faster than ever before. And increasingly more of it is not directly created by people, but with the help of algorithms. This means that the user is faced not just with a large volume of information, but with a stream that can be scaled almost without limits.

You might come across news that looks like a regular media piece. It has a logical structure, confident presentation, and even quotes. But this text can be completely generated. It doesn’t necessarily contain blatant falsehoods: often it’s a combination of truthful fragments and fictional details, which are hard to notice at once.

A deepfake is a piece of media in which a person's face or voice is synthetically altered so that it appears genuine. You see a familiar person and automatically trust what they say, even if it is not actually them in the video.

According to companies that study synthetic media, the number of deepfakes grows every year, and the tools for creating them are becoming more accessible to the general public, including for use against political and economic rivals. For example, a Freedom House report notes that artificial intelligence is already being used to manipulate elections in various countries, including Romania.

The reason for the increase in all these materials is simple: it has become cheaper to create and easier to scale them. One tool can generate dozens of texts, images, or videos in a short time, which will look truthful and realistic. Technology companies openly admit that distinguishing artificially created content is becoming increasingly difficult even for specialists. This means that the amount of convincing but unreliable content is growing faster than the consumer’s ability to verify it.

Cyber Component and the Effect of Distrust

Information does not spread through human activity alone. Many processes on the web appear organic but are in fact artificially amplified. Data leaks, hacked profiles, and forged documents add a further layer of complexity. Some messages are spread by automated accounts that mimic human activity: so-called "bots" and impostor profiles (fake accounts posing as real individuals). They can automatically create and disseminate messages, imitating the behavior of real users.

People pick up these messages and add their own reactions and thoughts. Algorithms see increased activity and begin to show this content to more users. As a result, the impression of mass participation is created, even if the initial impulse was artificial. This mechanism operates unnoticed. And when seeing the final result, a person rarely thinks about how it was formed.

Guidelines in an Altered Reality, or What to Do When Fact-Checking Cannot Keep Up

The classical approach to fact-checking remains important but does not account for the lightning speed of the modern environment. Information spreads faster than it can be verified. Another issue is that with the help of artificial intelligence, a more complex model of reality distortion is formed. Part of the information is genuine, part is altered, and distinguishing one from the other becomes much harder.

This creates the effect of alternative truth. A person sees familiar facts, but they are presented in a different context or supplemented with fictitious details. As a result, a sense of instability appears, and trust in any sources gradually declines. A person starts doubting even what seemed obvious before.

There is also a psychological aspect confirmed by research in cognitive processes. The first impression is more strongly imprinted than subsequent clarifications. Even after refutation, people often remember the initial version of the message. Added to this is the tendency to believe information that confirms one’s own views. This means that even verified facts do not always change opinions.

Against this background, the approach to counteraction is also changing. States and large technology platforms already recognize the scale of the problem.

The documents of the European Commission explicitly state that disinformation poses a threat to democratic processes. Therefore, new regulations and requirements for platform transparency are emerging. Social networks are implementing labels for content created with the help of artificial intelligence, limiting the reach of blatantly manipulative materials, and developing verification tools. Regulations aimed at algorithm transparency and responsibility for disseminating disinformation are appearing at the state level.

However, these steps do not produce immediate results and do not relieve the user of responsibility. The information environment has become so fast and vast that no system can fully control it in real time.

This is why the key change occurs at the individual behavior level. A person is not obliged to verify every fact but can learn to notice moments when information tries to make them act automatically. This already provides an advantage in an environment overloaded with information. The main skill becomes not the speed of consumption, but the ability to stop and take a closer look. This pause seems insignificant, but it is what separates reaction from decision.

The issue of personal information hygiene becomes the next logical step. How not to get lost in the flow, which habits help reduce the impact of manipulation, and how to verify information under conditions of overload will be discussed in the following articles.


Collage: Tyzhden