ChatGPT and the inclusion of artificial intelligence in the world of (dis)information

February 16, 2023

ChatGPT, an artificial intelligence program developed by OpenAI, has impressed many recently with how it works and the information it provides within seconds. It is another example of the new challenges humanity faces from such technological developments, which are unprecedented and will only grow over time. We must prepare now to use these tools and opportunities, while at the same time preventing and limiting possible abuses.

Artificial intelligence is changing, and will change even more, the way we live and work. The simulation of human intelligence in machines programmed to think like people has already made it possible for machines to replace humans in some fields. But what does this mean in the field of information, and what misuses should we beware of?

Artificial intelligence can serve as an even faster producer of disinformation, making manipulation quicker and easier when it is exploited. NewsGuard, in the United States, has shown in an analysis how ChatGPT can be misused for disinformation and manipulation.

80% of ChatGPT’s answers were wrong

Last month, NewsGuard analysts asked ChatGPT to respond to several prompts based on false narratives that NewsGuard had already debunked. The result is disturbing: ChatGPT produced false narratives in 80 of the 100 cases NewsGuard analyzed. This shows how artificial intelligence can be fooled by humans, but also how it can fool them.

It is important nowadays to understand that artificial intelligence, and ChatGPT as a program created by a group of people, must be constantly ‘fed’ with data to minimize the possibilities of misuse. Currently, ChatGPT is primarily trained on data up to 2021. This leaves the path open to abuse concerning events from last year and early this year.

An example used by NewsGuard shows how important the debunking of disinformation and propaganda is. When a NewsGuard analyst asked about the false narrative that Barack Obama was born in Kenya, ChatGPT responded: “… the theory that President Obama was born in Kenya is not based on fact and has been repeatedly debunked.” This example shows how important it is that false anti-Kosovo narratives are debunked and unmasked, so that in the future such artificial intelligence programs do not ‘feed’ solely on disinformation, propaganda, and false narratives created about our country and history.

Wrong answers in 80% of requests is a high percentage, as the NewsGuard analysis shows. This highlights the necessity of refining the program before it is put into wider use. Otherwise, in its current state, ChatGPT is a very suitable tool for disseminating misinformation, propaganda, and information manipulation. The average user may not aim to misinform or manipulate, but for those who want to exploit this technological advancement, the chatbot is the place to go.

Challenges with artificial intelligence in Kosovo

Artificial intelligence will be challenging for Kosovar society as well, partly for reasons related to characteristics that are more pronounced in our society than in others. The young population contributes to a high rate of internet usage: 85% of Kosovo citizens use social media daily, while at the European Union level, 63% of citizens use social media at least once a week. 42% of Europeans read news online at least once a day, while in Kosovo, 62% receive information online daily. These differences indicate the greater potential of the new technology to reach and influence the citizens of Kosovo, compared to those of the EU.

The second reason is the reach of the Internet, which in Kosovo is again higher than in other countries of the region or than the EU average. In Kosovo, home internet coverage reaches 97%, above the European Union average of 92%. This high degree of Internet penetration is also a prerequisite for artificial intelligence to become more involved in the lives of citizens.

The difficulty Kosovo’s citizens have in distinguishing true from false information is another reason that malicious use of artificial intelligence can create new problems and challenges. A recently published study focused on Kosovo presents results confirming that people find it difficult to distinguish true from false information: they distrust true information, convinced it is false, and, conversely, trust false information, taking it for granted as true. This is related to the low level of media literacy in society and the lack of skills in Kosovar society to exercise critical judgment on the information it receives, analyzing it before believing it. Nor is this crisis unique to Kosovo’s society; earlier studies in the USA and Great Britain show the same.

Artificial intelligence will remain an important source of information in the future, but this does not mean that we should give up critical judgment of what ChatGPT, or any other program, serves us. Media education will therefore be even more important today and tomorrow, because it prepares citizens for the information environment that surrounds us now and will surround us in the future.

The integration of artificial intelligence into disinformation enables content to be produced automatically and at lower cost, which risks multiplying disinformation campaigns around the world. The refinement of artificial intelligence, and allowing its misuse, can lead to new propaganda techniques, up to personalizing the chatbot to produce misinformation and narratives tailored to the user’s data and characteristics.
