Breaking News: The Government & AI

Karly Richman, The University of California – Santa Barbara


Abstract 

Media platforms and tactical strategies have evolved to reflect ongoing technological innovations such as artificial intelligence (AI). According to NASA, “artificial intelligence refers to computer systems that can perform complex tasks normally done by human reasoning, decision making, [creativity], etc.” This technology has become part of everyday life as it is integrated into media worldwide, making it both an innovative contribution to society and a liability. The use of AI in the media has shaped global public opinion, yet much of that use falls under the public’s radar. AI’s influence on the media is embedded in all aspects of global affairs, including the political sphere. Most notably, governments, public actors that also serve as our highest authorities, abuse their access to this technology; this article brings awareness to that abuse. AI has been used to promote propaganda, spread false information and censor information. It is important to acknowledge AI’s role in influencing public opinion across the globe and to address its potentially damaging implications. Analysis of current national and international responses to AI misuse shows that existing efforts are not sufficient. This article calls upon organizations and governments around the world to promote the ethical use of AI in place of malpractice, inaction and lack of regulation. In practice, this means centralizing international and national laws around the use of AI, implementing AI detection systems, and regulating governmental use of AI, which can often be misleading and unethical, in order to promote transparency. 

Keywords: Artificial intelligence, Public opinion, Government, Electoral interference, Censorship

I. Introduction

Some of the most widespread and shocking media reports of today are not fully created by a human, but instead made using AI. Reporting on the government, and by the government, has become increasingly confusing and misleading, eroding public trust and having adverse persuasion effects on public opinion. Government entities around the world are using AI in malicious ways to spread propaganda and to censor and manipulate information. The implications of the use of AI by and about governments have been seen to affect issues, institutions and events, such as the shaping of elections, the spread of misinformation about historical events and the promotion of biases, among other impacts. Artificial intelligence can take many forms in the media, such as the generation of deepfake videos, images and text, or even the customization of content to push a certain message onto users. While some uses of AI can be obvious to users, others can be misleading and create a false sense of truth, which affects global and national public opinion. It is important to promote knowledge, understanding and awareness of this topic, particularly how dialogue can advance meaningful regulation addressing ethical concerns. However, despite widespread discussion, there seems to be inaction on national and international levels. The time is now. As AI becomes more integrated into everyday life, and increasingly accessible to unrestrained actors, it is imperative to set guidelines on proper use before machine-learning programs acquire too much power. It is the role of governments and international actors to set the stage for such regulation. Through the exploration and analysis of this timely topic, awareness is spread to everyday people who may not realize how artificial intelligence is being used to influence their media, which in turn influences their opinions and beliefs. 
With AI being such a fast-moving field, it is important to create discussion not just around its innovations and advantages, but also around its disadvantages and ethical concerns, especially among users. This article addresses how governmental bodies around the globe have been manipulating AI in unethical ways, as seen through the examination of AI-driven propaganda, false information and censorship. From this examination, and a basic understanding of AI tools, it becomes clear that actionable steps need to be taken to address this issue, as not enough is currently being done. 

II. Human Role in Training AI

How does AI work? Many people assume AI “thinks on its own” and do not understand the ways humans control and shape its content and systems. Behind much of modern AI lies a specific technique called supervised machine learning, in which an algorithm analyzes large amounts of “training data.” By employing this method, a large data sample is used to help create the most accurate predictions possible. This can be thought of as a teaching model: the “training data” are known examples paired with correct answers, from which the system learns to make predictions on unseen data, recognizing patterns and extending them to new inputs. Put simply, this learning process is similar to teaching a child to name and label foods. After being shown repeated photos of fruits (the data) and being told what each one is (the labels), the child eventually starts recognizing the differences on their own. AI does this too, but at an accelerated pace. One notable difference is that humans can decide for themselves what information to learn from, whereas an AI system learns only from the data its developers choose and label. With specific data inputs, AI can become biased and manipulated, creating an echo chamber effect in which biases are reinforced rather than corrected. AI can be trained to prioritize certain content over other content, continuing a harmful cycle of prejudice as it draws on information that becomes increasingly biased. As AI engineers race to make their algorithms faster and better than their competitors’, the repercussions of bias can be lost. These flaws, whether intentional or not, can impact the accuracy of AI outputs and reinforce harmful preconceptions. 
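The supervised-learning process described above can be sketched in a few lines of code. The toy example below is purely illustrative: the fruit data, the feature names and the nearest-neighbor rule are all invented for this sketch, and real systems train on millions of examples. The point it demonstrates is the one made above: the model’s “knowledge” is nothing more than the labeled examples its developers supply, so changing or skewing those labels changes the predictions.

```python
# Toy supervised learning: a 1-nearest-neighbor "fruit" classifier.
# The model only "knows" what its labeled training data tells it;
# biased or mislabeled data produces biased predictions.

def dist(a, b):
    """Squared distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbor(train, query):
    """Predict the label of the training example closest to the query."""
    features, label = min(train, key=lambda ex: dist(ex[0], query))
    return label

# Features: (redness, roundness) on a 0-1 scale, each paired with its label.
training_data = [
    ((0.9, 0.8), "apple"),
    ((0.2, 0.9), "lime"),
    ((0.8, 0.3), "chili"),
]

# A new, unseen item that is red and round is matched to the apple example.
print(nearest_neighbor(training_data, (0.85, 0.75)))  # apple
```

If the developers removed or relabeled the “apple” example, the same query would be classified differently; this is precisely the lever through which human choices shape an ostensibly autonomous system.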

III. Government and Propaganda 

As explained above, AI is heavily influenced by humans, despite the popular notion that its outputs are entirely out of human control. Governments are one group that has taken advantage of this tool. One reason governments use AI is to spread propaganda that carries out their political agenda. A report from Freedom House found that in “at least 16 countries,” AI was used to “sow doubt, smear opponents, or influence public debate.” This can take the form of AI-generated text, images and videos that reflect harmfully on an opponent in order to sway public opinion. In one specific example, state media outlets in Venezuela used AI-manipulated images and videos to portray alleged news anchors from an international English-language channel who, in reality, do not exist, according to MIT Technology Review. These fictional anchors were produced by a commercial deepfake company to spread pro-government messages, which constitutes direct manipulation of information. What is even more alarming is that political propaganda is not limited to authoritarian regimes. Not only is AI being used by government officials to persuade the public, but political officials’ own likenesses are being manipulated to spread false information and push a certain narrative. In the United States, for example, Reuters Fact Check found that a circulating clip of President Joe Biden making transphobic remarks may look authentic but is a deepfake, as no evidence of Biden making those remarks was discovered. Fake videos like these can be used as leverage against political parties and figures, in this case Joe Biden and the Democratic party. It is important to bring awareness to how complex and intricate AI-generated videos can be, because as AI advances in skill, it becomes more difficult to distinguish fiction from non-fiction, and thus harder for the public to develop informed opinions based on accurate information. 
Another example can be seen during the 2024 election, when AI-generated videos, images and audio of the Democratic candidate, Kamala Harris, were spread to the public. Politically prominent figures such as Elon Musk even reposted these fake videos to promote their personal agendas. The eventual winner and sitting president, Donald Trump, has used similar tactics, such as using AI to spread insensitive and derogatory propaganda about the conflict in Gaza. An AI-generated video posted on his account depicted the war-torn Gaza Strip transformed into what was called “Trump Gaza.” The video alluded to a reinvented Gaza Strip under Trump’s control, illustrated with lavish resorts, golden Trump statues and balloons. The video could clearly be distinguished as AI-generated, but it was nevertheless highly controversial and sent a specific message praising Trump. It implied that if Trump were in control of the Gaza Strip, this historically complex and war-torn area would turn into a rich utopia. In posting a video like this on social media, Trump ridiculed a deeply catastrophic issue and manipulated the rhetoric for his own benefit. His use of AI distracts from the reality of war and death in the highly unstable region, replacing the scene with fictional statues of himself, lavish medals and glorified dancers: the perfect utopia. For the President of the United States to imply that U.S. involvement, led by Trump, would be the ultimate solution to a deeply rooted historical conflict is highly troubling. This example captures the misuse of AI by government representatives and officials who use the technology to spread political propaganda and boost their domestic standing. In many cases, this strategic kind of AI usage shapes voter decisions about who to vote into office based on falsified perceptions of candidates seen in the media. 

IV. Government and Censorship 

Not only do government officials use AI to spread ideas or falsify information online, but AI can also be used to selectively censor information that is deemed unfavorable to a specific political agenda. By leveraging AI, censorship can be made more widespread and even more effective. For example, chatbots in China have been programmed not to answer questions about the 1989 Tiananmen Square massacre that followed student-led protests against the Chinese authoritarian regime. Direct training of the AI is used to censor information and produce biased responses that adhere to a certain political agenda, effectively cutting off access to history and truthful education about one’s own government. Similarly, the Indian government ordered platforms like YouTube and Twitter to remove a documentary portraying Prime Minister Modi in an unfavorable light, also recommending that they use AI-based moderation tools. This form of AI usage severely restricts freedom of expression and limits the information available to the public. As these concrete examples show, governments use AI tools to their own advantage to promote political agendas, particularly through censorship, taking away the public’s right to education and informed political decision-making. 
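The kind of topic-based refusal described above can be illustrated with a short sketch. Everything here is hypothetical: the blocklist contents, the refusal wording and the function name are invented for illustration, and real deployments use trained classifiers rather than keyword matching. The effect, however, is the same: answers touching a blocked topic are replaced before the user ever sees them.

```python
# Toy sketch of topic-based chatbot censorship (hypothetical blocklist).
# The filter sits between the model and the user and silently replaces
# answers to questions that touch a banned topic.

BLOCKED_TOPICS = {"example-banned-topic"}  # hypothetical list of banned topics

REFUSAL = "I cannot answer that question."

def filter_response(question: str, answer: str) -> str:
    """Return the model's answer unless the question touches a blocked topic."""
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL
    return answer

# The factual answer is suppressed; an unrelated question passes through.
print(filter_response("What happened at example-banned-topic?", "A factual answer"))
print(filter_response("What is the weather today?", "Sunny"))
```

Because the filter operates invisibly, the user has no way of knowing whether a refusal reflects a genuine gap in the model’s knowledge or a deliberate policy choice, which is what makes this form of censorship so effective.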

V. Political Impact 

Censorship and propaganda can have an immense impact on the outcome of elections and can disrupt democratic voting processes as voters are misled, confused and ill-informed. AI-generated images have been used as a tool to convey a particular narrative about politicians and parties, despite the images not containing any real content. One such example was the circulation of images from Hurricane Helene, where AI-generated images were used to depict the disaster in order to criticize a political opponent’s handling of the situation. An image of a crying child holding a dog on a boat elicited intense emotional responses across the internet, despite being credited as having been made with AI. While the public was split over whether the image was fake, many Republican political opponents pushed the image to admonish the Biden administration for its response to the disaster. Republican Senator Mike Lee even deleted his post about it after X added a community note flagging the image. With an upcoming election in mind, the Republican party saw the natural disaster and the capabilities of generative AI as an ideal opportunity to sway public opinion against the current administration. The controversy over publicizing a likely AI image makes it evident that government officials shared AI-generated content either intentionally or negligently; either way, their actions resulted in confusion and deception among the public. 

Governments are not only affecting their own nations’ public opinion through media intervention, but are also making an impact on a global scale. In July 2024, the United States Department of Justice issued a press release announcing “the seizure of two domain names and the search of 968 social media accounts used by Russian actors to create an AI-enhanced social media bot farm that spread disinformation in the United States and abroad. The social media bot farm used elements of AI to create fictitious social media profiles — often purporting to belong to individuals in the United States — which the operators then used to promote messages in support of Russian government objectives.” Russian bot farms, enhanced by artificial intelligence with the intent of confusing Americans and other individuals, promote Russian propaganda transnationally. Fake online personas were created for social media accounts through which an operator could share information on a large scale in the “AI-enabled propaganda campaign to use a bot farm to spread disinformation in the United States and abroad.” In response, the Department of Justice stated that it “will not tolerate Russian government actors and their agents deploying AI to sow disinformation and fuel division among Americans…as malign actors accelerate their criminal misuse of AI, the Justice Department will respond and prioritize disruptive actions with our international partners and the private sector.” This example raises questions about cross-border AI misuse and about emerging AI strategies that could have undemocratic effects such as electoral interference, or even be considered a modern form of warfare and terrorism. The manipulation of AI and its potential large-scale impacts underscores the demand for immediate attention to setting international rules. 

VI. Looking Forward/Conclusion

Government actors are entrusted to protect their citizens’ interests, but many are actually using artificial intelligence to undermine those interests in pursuit of political objectives. While various solutions could address the rising threat of AI interference in public opinion, it is unclear whether governments would approve strict rules surrounding AI, since the technology may serve their personal motives and propaganda. Yet international actors have recognized the need to find concrete solutions, starting with raising awareness. For example, the United Nations published Governing AI for Humanity, a report that provides recommendations for addressing issues arising from AI. The report emphasizes the need for global governance of AI, with a focus on inclusivity, accountability and equity, while recognizing the technology’s evolving nature. It concludes with a broader recommendation to establish a global AI data framework with support from relevant agencies and organizations. The report also supports a claim made at the start of this article: humans are still in control of AI, despite uncertain rhetoric to the contrary. Ultimately, humans are in a position to restrain AI usage, and should start by outlining definitions, setting common principles for AI training data, and judging the use of AI through a lens of transparency and rights-based accountability across jurisdictions. This serves as a stepping stone toward establishing global laws on AI and generative technologies, and provides a glimpse of hope in combating government abuse of technology against its citizens. A sense of accountability and transparency needs to be created in the governance of artificial intelligence. This can take many forms, such as requiring disclaimers whenever AI is used. The United Nations hopes to create unity among nations and a sense of common ground, and seeks scientific professionals to promote an ethical approach to the use of AI. 
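The disclaimer requirement suggested above could, in practice, take the form of a machine-readable disclosure attached to published content. The sketch below is purely illustrative: the schema fields and helper function are invented for this example, and real provenance standards such as C2PA define far richer structures. It simply shows how an “AI was used here” label could travel alongside a piece of content so platforms and readers can check it.

```python
import json

# Illustrative sketch of a machine-readable AI-use disclosure (hypothetical
# schema). The label is bundled with the content so that platforms and
# readers can check whether, and with what tool, AI was involved.

def with_ai_disclosure(content: str, ai_used: bool, tool: str = "") -> str:
    """Bundle content with an AI-use disclosure record as a JSON document."""
    record = {
        "content": content,
        "disclosure": {
            "ai_used": ai_used,  # was generative AI involved at all?
            "tool": tool,        # which generator, if any (may be empty)
        },
    }
    return json.dumps(record)

labeled = with_ai_disclosure("Campaign clip transcript...", True, "hypothetical-model")
print(json.loads(labeled)["disclosure"]["ai_used"])  # True
```

A mandate of this kind only works if the label is tamper-evident and if platforms actually surface it to users, which is why the article’s call for enforcement, not just drafting, matters.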
However, the United Nations is limited in its courses of action. Although it is trying to address these pressing and evolving issues surrounding an increasingly widespread technology, there is only so much it can do as an international organization. The United Nations’ AI Advisory Body has urged action and generated ideas on AI ethics, yet countries can still refuse, or simply avoid, taking action to address these issues. It is concerning that those with the power to regulate AI are also the ones trying to use and manipulate it to their advantage. It is important to promote ideas of peace, equality and responsibility when it comes to AI, but even more important that concrete action is taken. The United Nations is creating common ground on regulation and ethics, along with space, ideas and initiatives to combat misuse; however, much more needs to be done, as what has been done so far is insufficient. Individual countries need stringent regulation of AI use, with standards that even government officials cannot circumvent. There is a lack of regulation and law in the United States, and more needs to be done to tackle this issue in both the public and private sectors. Even the numerous examples outlined in this article are only those known to the public through media reporting. There could still be many ways public opinion is being manipulated that the public does not even realize. MIT Technology Review’s article, “How generative AI is boosting the spread of disinformation and propaganda,” notes that “the affordability and accessibility of generative AI is lowering the barrier of entry for disinformation campaigns, and automated systems are enabling governments to conduct more precise and more subtle forms of online censorship.” There are countless ways governments could be using AI to send subtle messages to the public that, in small ways, affect large events such as the outcome of elections.

Some regional and country-specific actions are being taken. For example, the European Parliament has adopted a legislative resolution proposing a regulation on artificial intelligence and amending certain Union legislative acts, expected to come into force over the next few years. In China, draft legislation has also been in the works. According to the Washington Post, China is creating draft regulations that reflect the country’s core socialist values, along with restrictions on the sourcing of training data that promote intellectual property rights. The draft says that AI must be designed to generate accurate content, and these proposed regulations build on already existing legislation around deepfakes. Overall, much of the action being brought forth is only draft legislation. Most jurisdictions trying to combat this issue have only drafted ideas and have not yet concretely created and enforced laws. More official action needs to be taken, but the generation of ideas and the drafting of legislation are at least important steps toward this goal. Awareness and research are important to help decipher what is true and what is merely propaganda. OpenAI has just released its new 4o Image Generator, which can make even more complex and realistic images than ever before. Regulation and awareness need to catch up with the fast-growing technological advancement of AI. People can now impact governmental organizations and figures like never before, and governments themselves can shape information like never before. The public needs to be informed when AI is used in content. 

It is clear that more needs to be done, and that there needs to be more discussion around governmental use of artificial intelligence. AI should be used as a tool, not a weapon. It can be difficult to address and regulate the use of AI in the media, especially as the technology evolves at such a rapid pace and changes how intellectual property and technology are regulated. This makes it important to sustain ongoing conversation and research so that regulation, legislation and awareness evolve as AI does. This article calls upon the governments of each independent nation to act in the best interest of their people, and upon the international community as a whole to uphold agreed-upon moral and ethical standards for the use of AI and to hold governments accountable for their use of artificial intelligence. Using AI for mass manipulation should not be tolerated or encouraged. The United Nations needs to do more to address this issue, countries need more legislation of their own, and nations need to collaborate more multilaterally on the problem; hopefully, with the help of this article, the public can become more aware of the ways in which their information is being manipulated with AI.
