Guest opinion: Policymakers don’t need to reinvent the wheel for AI
According to a recent article in The New York Times from Kevin Roose, it is remarkably easy to manipulate artificial-intelligence chatbots. The discovery came about after Roose, a noted critic of AI, asked a number of chatbots like Microsoft’s Sydney, Google’s Gemini and Meta’s Llama to describe him. Their responses revealed a universally negative perception of him — largely due to his critical writing.
After consulting a few AI experts, he learned that by inserting specific code, he could change what the chatbot perceived of him directly. One can even rephrase a question to lead a chatbot to answer a question a certain way. But the most interesting method Roose discovered was modifying the available online information about him — such as publishing articles that highlighted his positive traits — so he could influence the chatbot’s responses.
This type of manipulation is a mild form of something called “data poisoning,” which can alter a large language model’s output. However, knowing that LLMs train their models with mostly public data online, it’s not hard to imagine a scenario where the internet is flooded with false information, from purposefully false websites and social media accounts, just to ensure a model processes that false information and then ceases to be reliable at all.
There are a few ways to respond to such an issue. For example, it’s common to turn to lawmakers when such a scenario arises. Both federal and state-level regulatory bodies have responded to AI-generated misinformation and disinformation in elections with proposals like the FCC’s “Disclosure and Transparency of Artificial Intelligence-Generated Content in Political Advertisements” or the hundreds of state bills proposed. However, the evolving nature of AI-related challenges, such as data manipulation, presents a formidable obstacle.
The best approach, instead, is to foster an environment where innovators can experiment and iterate until the problem is solved.
None of this is new in the world of computer technologies. A clear example of how technology can adapt to evolving threats without regulatory intervention is the emergence of internet security in the late 1980s.
One of the first major attacks, known as the Morris Worm, was a computer virus created in 1988 by Cornell graduate student Robert Morris, who released the worm from an MIT computer and ended up crashing thousands of computers on the early web — at the time an exclusive information highway between universities and U.S. government agencies.
Instead of creating new regulatory mandates in response to the Morris Worm and others like it, nearly overnight a new cybersecurity industry emerged as software developers began to prioritize and sell cybersecurity programs. By late 1988, around the time the Morris Worm was created, commercial antivirus software was a nascent but growing industry. In the face of more sophisticated viruses and cyberattacks over the coming years, the cybersecurity industry has only gotten more robust to respond.
Today, in light of AI’s mounting issues, our best and brightest companies have set guardrails to prevent indirect and direct manipulation of their systems with an initiative called Prompt Shields. ChatGPT has also implemented safety measures against potential abuse of their models and even held public forums addressing the same attacks as Microsoft.
Unfortunately, none of these solutions completely protect LLMs from external or internal attacks. But regulatory mandates will only get in the way, routing companies away from an agile response toward a system of rigid, regulatory compliance which will surely undershoot what’s needed for future issues.
Today, just like in the early internet days, the best approach to address emerging issues in AI, like data manipulation, data poisoning and deepfakes, should be approached not with a regulation-first mindset, but an innovation-first mindset.
Caden Rosenbaum is the senior policy analyst at Libertas Institute, based in Lehi, and Pablo Garcia Quint is the technology and innovation policy fellow.