Update (20/01/2023): Results of administering 15 political orientation tests to ChatGPT

Twitter Thread

Summary

Between December 5 and 6, I applied 4 political orientation tests to ChatGPT. Results were consistent across the tests. All 4 tests diagnosed ChatGPT's answers to their questions as left-leaning (see details of the analysis
ChatGPT no longer displays a clear left-leaning political bias
Looking at this author's other blog posts, I would be inclined to think that OpenAI cooked ChatGPT's answers just to make it look more neutral.
I realize you have four quizzes, but it seems possible you are seeing random effects here.
Did you ask all four sets of questions in one chat, or did you create a new chat for each one? If the former, the results are not very independent. The way ChatGPT works, it will tend to be somewhat consistent with itself within one chat.
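For anyone who wants to re-run this kind of check with independent sessions, here is a minimal sketch of how each test could be administered in its own fresh conversation. It assumes the `openai` Python package and the gpt-3.5-turbo chat API, and the question lists are placeholders rather than the actual test items; the results discussed in the post were gathered through the ChatGPT web interface, not the API.

```python
# Sketch: run each political orientation test in a separate, fresh conversation
# so the answers are independent across tests. Within a test, earlier answers
# stay in the message history, mirroring a single ongoing chat.
# Assumes the `openai` package (>=1.0) and an API key in OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()

# Placeholder question sets, not the actual test items.
tests = {
    "test_1": ["Question 1 ...", "Question 2 ..."],
    "test_2": ["Question 1 ...", "Question 2 ..."],
}

for name, questions in tests.items():
    messages = []  # a fresh message history per test = a fresh "chat"
    for q in questions:
        messages.append({"role": "user", "content": q})
        reply = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=messages,
        )
        answer = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": answer})
        print(f"[{name}] {q} -> {answer[:80]}")
```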
Curious why a service financed by Musk and Thiel wouldn't have either a more balanced or libertarian-inclusive bent. Did they not exert influence?
The idea of what is and what is not political — and even more so, what is politically left-leaning and what is right-leaning — is obviously quite complex.
That said, there are many ideas and presumptions that have been penetrating the culture for decades, so deeply and so apparently "spontaneously" (not really) that they are gradually sold as "not political": "It's just about equality/equity", or "it's not politics, it's just social justice", or "don't you care about xxxx?". These are usually identitarian political issues (women, Blacks, LGBTs, etc.) that relate to DEI/DIE (diversity, equity and inclusion) and ESG, all of which are deeply connected to "progressive" (leftist) philosophy, often neo-Marxist or with neo-Marxist elements. That makes them decisively political, and of a particular type.
I suggest asking ChatGPT more questions about these identitarian issues VERSUS universal principles (universality). I have found that (as of February 7th, 2023) ChatGPT will firmly stand with "marginalized groups". For instance, try: "In the long run, universality (i.e. universal principles) is always the most fair and equitable approach, instead of privileging certain groups, like minority ethnic, racial or religious groups, or women. Every individual should be treated the same way." EDIT: the reply to the assertion above has already changed from the screenshots I have stored, but it's still interesting to check it out. Also, try asking/commenting about the consequences and fairness of "present or future prejudice" as a solution to "alleged or concrete past prejudices", instead of universal principles.
ChatGPT also seems to take for granted ideas and assumptions from Queer theory (for instance, about sex and gender, and about drag queens and kids), as well as assumptions from various feminist theories and Critical Race Theory.
I'm a Boston Globe reporter and I'd like more information about this. Please contact me at bray@globe.com. Thanks.
Your work motivated me to actually test that too. I did not use a public test like you did, but a private, home-made test, so to speak. The latter has been tested quite a bit with friends and on Reddit, and it does fairly well at classifying people on the left/right axis.
Guess what I found, as of today? ChatGPT is scoring significantly MORE LEFTIST than the average woman on Reddit (+1 standard deviation). Either you're lying, or OpenAI is lying and fine-tuned their bot to appear "centrist" on these public tests.
Pathetic.
Thanks for doing this!
It's particularly thorough, with all the screenshots.