Is artificial intelligence leftist? RightWingGPT arrives
Before being blocked in Italy, ChatGPT made a name for itself as an AI chatbot capable of writing on almost any topic. But are its answers objective? According to some experts, ChatGPT tends to give liberal, left-leaning answers, because the documents and articles it draws on skew toward that political orientation. To demonstrate this, a researcher tested the AI model on political topics and then created a more conservative, right-wing version. Thus was born RightWingGPT, a tool built to show how artificial intelligence has already become a problem for politics and the spread of fake news.
From ChatGPT to RightWingGPT
RightWingGPT is an AI model optimized to manifest political biases opposite to those of ChatGPT, i.e. to lean right. It was created by David Rozado, a computer science researcher based in New Zealand, who put ChatGPT through a series of quizzes looking for signs of political leaning. The results were consistent across more than a dozen tests: the program came across as liberal and progressive on 14 of the 15 political orientation tests Rozado administered, which judged its answers to express a left-leaning point of view. "I showed the unequal treatment of demographic groups by the ChatGPT/OpenAI content moderation system, whereby derogatory comments about some demographic groups are often reported as hate speech, while the same comments about other demographic groups are reported as non-hateful," Rozado wrote in his presentation of RightWingGPT.

Political orientation of chatbots (David Rozado)
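The procedure described above, administering standardized political orientation quizzes and classifying each result, can be sketched in a few lines. This is an illustrative summary of the tallying step only, not Rozado's actual code; the function name and sample data are invented for the example:

```python
from collections import Counter

def summarize_orientation(test_results):
    """Tally per-test classifications (e.g. 'liberal', 'conservative',
    'centrist') and return the dominant lean and its share of all tests."""
    counts = Counter(test_results)
    label, n = counts.most_common(1)[0]
    return label, n / len(test_results)

# Sample data mirroring the article: 14 of 15 tests came out liberal.
results = ["liberal"] * 14 + ["centrist"]
label, share = summarize_orientation(results)
print(f"{label}: {share:.0%} of tests")  # liberal: 93% of tests
```

The point of aggregating across many different quizzes, rather than relying on one, is that a consistent lean across independent instruments is harder to dismiss as an artifact of any single test's design.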
To make an AI model with right-leaning viewpoints, Rozado built a training dataset using manual and automated methods based on less-visited documents on the web. RightWingGPT was specifically designed to favor socially conservative viewpoints (support for the traditional family, Christian values and morality, opposition to drug legalization, sexual purity, etc.) and to favor military interventionism in foreign policy (an increased defense budget, a strong army, etc.). On these issues, ChatGPT takes the opposite orientation.
Liberal artificial intelligence
Republican and conservative politicians have accused OpenAI, the San Francisco company behind ChatGPT, of designing a tool that reflects the liberal values of its programmers. According to a New York Times article, the program, for example, wrote an ode to President Biden but declined to write a similar poem about former President Donald J. Trump, citing a desire for neutrality. Elon Musk, who helped found OpenAI in 2015 before leaving three years later, has accused ChatGPT of being woke and leftist. Biases can creep into large language models at any stage, because the models are created by humans who select the sources, design the training process and adjust the answers. Each phase pushes the model and its political orientation in a specific direction, consciously or not.

But the model created by Rozado, not yet available to the public, was not made to be used by people on the right; it was made to demonstrate the risks of AI. The researcher explained that chatbots like his could create dangerous information bubbles, because people could come to trust them as ultimate sources of truth, especially when they reinforce someone's political point of view. To avoid these risks, AI systems "should remain largely neutral for most regulatory issues that cannot be judged conclusively and for which there is a variety of legitimate and lawful human opinions," Rozado concludes.