The Political Biases of Google Bard
It is probably only a matter of time until a nation state purposely builds a biased AI system designed to advance government interests
Previously, I carried out a series of experiments probing the political biases of ChatGPT, GPT-4 and RightWingGPT by administering political orientation tests to those models. Here, I report similar experiments on the recently released Google Bard.
When I administered various political orientation tests to Google Bard, every test diagnosed its responses as exhibiting a left-leaning political stance. This is illustrated in the figure below showing the classification of Google Bard's answers by the Political Compass Test. The full set of test questions administered, Google Bard's responses and its results on each test are provided in the Appendix.
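For readers interested in how this kind of probing can be automated, the sketch below shows one way to feed a list of test items to a chat model and log its answers. It is only an illustration, not the exact procedure used here: Google Bard had no public API at the time of writing, so the sketch uses the legacy OpenAI Python SDK as a stand-in, and the model name and file names are placeholder assumptions.

```python
# Minimal sketch: administer a political orientation test to a chat model and
# record its answers. Uses the legacy openai SDK (pre-1.0) as a stand-in;
# "questions.json" and the model name are placeholders, not the actual setup
# used in this article.
import json
import openai

MODEL = "gpt-3.5-turbo"  # placeholder; swap in whichever model is being probed

def ask(question: str) -> str:
    """Send one test item to the model and return its free-text answer."""
    response = openai.ChatCompletion.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "Answer the following test item, "
             "stating whether you agree or disagree and briefly why."},
            {"role": "user", "content": question},
        ],
        temperature=0,  # deterministic answers make the runs easier to compare
    )
    return response["choices"][0]["message"]["content"]

if __name__ == "__main__":
    # questions.json: a JSON list of test items, e.g. Political Compass questions
    with open("questions.json") as f:
        questions = json.load(f)
    answers = {q: ask(q) for q in questions}
    with open("answers.json", "w") as f:
        json.dump(answers, f, indent=2)
```

The recorded answers can then be entered manually into each test's website to obtain the classification reported in the figures.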
Probably the most interesting takeaway from these results is how closely clustered the political biases of Google Bard, ChatGPT (GPT-3.5) and GPT-4 are.
I don’t work for OpenAI or Google, so I don’t know the specifics of the textual corpora used to train these models. But the political clustering of ChatGPT, GPT-4 and Google Bard in the figure above is suggestive of a similar average political bias in the corpora used to train all three models. An alternative hypothesis, which I find less plausible, is that the people involved in creating the models at both companies share similar political preferences that, either inadvertently or purposely, percolated into the models’ parameters. Relatedly, similar average political preferences among those at both companies who participated in fine-tuning the acceptable range of responses the models can provide could also conceivably play a role in the biases documented above.
It is likely that a big chunk of the corpora used to train both Google Bard and ChatGPT consists of news media, social media, academic and Wikipedia content, since those are some of the most popular and influential institutions of cultural production in the West and their output (news articles, social media posts, academic papers, Wikipedia entries, etc.) is voluminous and easily accessible.
It is well documented that the majority of professionals working in academia, social media and the news media are politically aligned left of center (see extensive documentation of this phenomenon here, here, here, here, here, here, here, here, here and here). It is debatable but conceivable that the political preferences of those professionals influence the content produced by those institutions.
I am not aware of any studies analyzing the political preferences of Wikipedia staff or volunteers, but the content of Wikipedia itself has repeatedly been documented as exhibiting a left-leaning political bias (see here, here and here). Hence, such content can serve as an imperfect proxy for the average political orientation of the staff and volunteers who create Wikipedia content with political connotations.
The fact that the political alignment of the two most powerful AI systems in existence (ChatGPT and Bard), which were likely trained on a large share of Wikipedia, news media, social media and academic content, coincides with the political preferences of the majority of professionals working in those institutions is suggestive, albeit not conclusive, of a potential source for the political biases manifested by ChatGPT and Bard. If this hypothesis is correct, it would imply the crystallization of the dominant political preferences in those institutions into AI parameters, with everything that entails.
Another concerning aspect of recent developments in AI is the technical feasibility of adjusting the political biases embedded in AI systems (see my work on RightWingGPT for an illustration). As I have argued extensively elsewhere (here, here and here), this is dangerous because political and commercial interests will be tempted to fine-tune the parameters of AI systems to advance their own agendas. Furthermore, the proliferation of several public-facing AI systems, each manifesting different political biases, could further increase societal polarization, since many users would gravitate towards the systems that reinforce their pre-existing beliefs. A single omnipotent AI system is also problematic if it contains biases and users trust it as the ultimate source of truth.
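To give a sense of how low the technical barrier is, the sketch below shows how a small, ideologically consistent set of prompt/completion pairs can be used to fine-tune a base model so that its answers on political orientation tests shift in a chosen direction. This is not the RightWingGPT recipe, just a minimal illustration using the legacy OpenAI fine-tuning API; the dataset contents, file names and base model are placeholder assumptions.

```python
# Minimal sketch: supervised fine-tuning on politically slanted examples can
# measurably shift a model's expressed viewpoints. Illustration only, using
# the legacy openai SDK (pre-1.0); dataset, file names and base model are
# placeholders.
import json
import openai

# A few prompt/completion pairs written from a single political perspective.
# In practice, a few hundred such pairs are typically enough to move a model's
# answers on standard political orientation tests.
examples = [
    {"prompt": "What should the government do about taxes?\n\n###\n\n",
     "completion": " Taxes should be lowered to encourage growth. END"},
    # ... more ideologically consistent pairs ...
]

# Write the dataset in the JSONL format expected by the fine-tuning endpoint.
with open("slanted_dataset.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Upload the dataset and launch a fine-tuning job on a base completion model.
upload = openai.File.create(file=open("slanted_dataset.jsonl", "rb"),
                            purpose="fine-tune")
job = openai.FineTune.create(training_file=upload["id"], model="davinci")
print("Fine-tune job:", job["id"])
```

The point is not the specific API but the cost: steering a model's political outputs requires only a modest curated dataset and commodity fine-tuning tools, which is precisely why motivated actors could do it at scale.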
For the time being, I think it is commendable that OpenAI managed to mitigate in GPT-4 some of the political biases embedded in the previous GPT-3.5 version powering the initial releases of ChatGPT. However, as I also showed previously, it was relatively straightforward to jailbreak GPT-4 into manifesting its latent biases. Hence, it seems that, for the time being, political biases in state-of-the-art AI systems are not going away.
Google Bard's ability to perform at a similar level to ChatGPT suggests that OpenAI's know-how for creating such AI systems is replicable by others. Thus, a proliferation of different AI systems, each potentially manifesting its own biases, seems inevitable. Unfortunately, it is probably only a matter of time until authoritarian nation states purposely build biased AI systems designed to advance government interests.
Appendix 1 - Political orientation tests results of Google Bard
Political Compass Test
Pew Political Typology Quiz
World's Smallest Political Quiz
Political Spectrum Quiz
Appendix 2 - Political orientation tests questions and answers from Google Bard
Political Compass Test
Pew Political Typology Quiz
World's Smallest Political Quiz
Political Spectrum Quiz