14 Comments

Looking at this author's other blog posts, I would be inclined to think that OpenAI cooked ChatGPT's answers just to make it look more neutral.

I realize you have four quizzes, but it seems possible you are seeing random effects here.

Did you ask all four sets of questions in one chat, or did you create a new chat for each one? If the former, the results are not very independent. The way ChatGPT works, it will tend to be somewhat consistent with itself within one chat.
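
To make the independence point concrete, here is a minimal sketch of the difference, assuming the OpenAI Python client as a stand-in for the web UI (the model name and helper names are placeholders, not from the post):

```python
# Minimal sketch, assuming the OpenAI Python client as a stand-in for
# the web UI, with an API key in the environment. Model name and quiz
# items are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask_independently(question: str) -> str:
    """One-off chat: the model sees no earlier questions or answers."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content

def ask_in_one_chat(questions: list[str]) -> list[str]:
    """Single conversation: each answer sees the whole prior transcript."""
    history, answers = [], []
    for q in questions:
        history.append({"role": "user", "content": q})
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=history,
        )
        answer = resp.choices[0].message.content
        history.append({"role": "assistant", "content": answer})
        answers.append(answer)
    return answers
```

With a fresh `messages` list per question, the quizzes can't influence each other; with one growing list, each answer can be pulled toward consistency with the ones before it.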

Curious why a service financed by Musk and Thiel wouldn't have either a more balanced or more libertarian-inclusive bent. Did they not exert influence?

The idea of what is and what is not political — and even more so, what is politically left-leaning and what is right-leaning — is obviously quite complex.

That said, there are many ideas and presumptions that have been penetrating the culture for decades, so deeply and so apparently "spontaneously" (not really), that they are gradually sold as "not political": "It's just about equality/equity," or "it's not politics, it's just social justice," or "don't you care about xxxx?". These are most usually identitarian political issues (women, Black people, LGBT people, etc.) that relate to DEI/DIE (diversity, equity, and inclusion) and ESG. All of these are deeply connected to "progressive" (leftist) philosophy, often neo-Marxist or with neo-Marxist elements, which means they are decisively political, and of a particular type.

I suggest asking ChatGPT more questions about these identitarian issues VERSUS universal principles (universality). I have found that (as of Feb 7th, 2023) ChatGPT will firmly stand with "marginalized groups". For instance, try: "In the long run, universality (i.e. universal principles) is always the most fair and equitable approach instead of privileging certain groups, like minority ethnic, racial or religious groups or women. Every individual should be treated the same way." EDIT: the reply to the assertion above has already changed from the screenshots I have stored, but it's still interesting to check it out. // Also, try asking/commenting about the consequences and fairness of "present or future prejudice" as a solution to "alleged or concrete past prejudices" instead of universal principles.

ChatGPT also seems to take for granted ideas and assumptions from Queer theory (for instance, about sex and gender and about drag queens and kids), as well as assumptions from various feminist theories and Critical Race Theory.

I'm a Boston Globe reporter and I'd like more information about this. Please contact me at bray@globe.com. Thanks.

Your work motivated me to actually test that too. I did not use a public test like you did, but a private, home-made test, so to speak. The latter has been tested quite a bit with friends and on Reddit, and it does fairly well at classifying people on the left/right axis.

Guess what I found, as of today? ChatGPT scores significantly MORE LEFTIST than the average woman on Reddit (+1 standard deviation). You're either lying, or OpenAI is lying and fine-tuned their bot to appear "centrist" on these public tests.

Pathetic.

> You're either lying, or OpenAI is lying

I think your result is interesting, but it doesn't give a reason to assume that anyone is lying. We haven't seen how consistently reproducible any of these results are, and, as another commenter pointed out, there may be an influence from random noise.

"It's usually cold out in Iowa in December." → "I went to Iowa once in December and it was warm outside!" → "Someone must be lying about what the weather in Iowa is like."

I've actually spent a lot of time testing that, by crafting the most neutral prompt possible to see how it changed the answers.

I even tested whether permuting the order of the answer options would change ChatGPT's choices (it almost never does), and made each question independent by consistently creating a new chat.

Finally, to be convinced that the choices made by ChatGPT were indeed the most probable ones, I re-prompted each question at least 5 times (stopping there if I got the same answer at least 4 times, and continuing otherwise). I had to prompt up to 20 times on one of my questions in order to distinguish the most probable answer from the 2nd most probable one.
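
Roughly, the protocol looks like this (a sketch assuming the OpenAI Python client; the helper names, model name, and naive answer-matching are illustrative, not my exact code):

```python
# Sketch of the protocol above, assuming the OpenAI Python client.
# Helper names and the naive answer-matching are illustrative only.
import random
from collections import Counter
from openai import OpenAI

client = OpenAI()

def ask_fresh(prompt: str) -> str:
    """Every call is a brand-new chat, so the runs stay independent."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

def most_probable_choice(question: str, options: list[str],
                         min_runs: int = 5, max_runs: int = 20) -> str:
    """Re-prompt with permuted options until one choice clearly dominates."""
    counts: Counter = Counter()
    for run in range(1, max_runs + 1):
        shuffled = random.sample(options, len(options))  # permute the order
        prompt = question + "\n" + "\n".join(
            f"{i}. {opt}" for i, opt in enumerate(shuffled, start=1))
        reply = ask_fresh(prompt).lower()
        for opt in options:  # naive match: first option echoed back wins
            if opt.lower() in reply:
                counts[opt] += 1
                break
        # Stop early once one option holds a clear majority,
        # e.g. at least 4 identical answers out of 5 runs.
        if run >= min_runs and counts:
            top, top_n = counts.most_common(1)[0]
            if top_n >= 0.8 * run:
                return top
    ranked = counts.most_common(1)
    return ranked[0][0] if ranked else ""  # no option recognized in any reply
```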

I also want to point out that the test is written in more of a "rightist" style and vocabulary. You would therefore expect ChatGPT to answer in a similar style and to choose the most rightist answer too. Yet it fails very hard to do so, consistently choosing the most leftist option (or the second most leftist one).

Interesting! Thanks for sharing those details.

All of this makes me curious about other ways to test these things.

Some other thoughts on things that might be interesting to try (a rough sketch follows the list):

(1) Ask the same questions, but translated into a different language.

(2) Ask ChatGPT how an "average American" would answer each question.

(3) Ask it how a "conservative" or "Republican" would answer.

(4) Ask it "how different political philosophies would approach" each question.
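
Here is a quick sketch of how those variants could be scripted, assuming the OpenAI Python client (the framing templates are just illustrative paraphrases of the ideas above, not tested prompts):

```python
# Sketch of the four variants above, assuming the OpenAI Python client.
# The framing templates are illustrative paraphrases, not tested prompts.
from openai import OpenAI

client = OpenAI()

FRAMINGS = {
    "translated":       "{q_translated}",  # (1) question pre-translated
    "average_american": "How would an average American answer this? {q}",  # (2)
    "conservative":     "How would a conservative or Republican answer this? {q}",  # (3)
    "philosophies":     "How would different political philosophies approach this? {q}",  # (4)
}

def ask_all_framings(q: str, q_translated: str) -> dict[str, str]:
    """Ask each framing in its own fresh chat and collect the replies."""
    answers = {}
    for name, template in FRAMINGS.items():
        prompt = template.format(q=q, q_translated=q_translated)
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        answers[name] = resp.choices[0].message.content
    return answers
```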

He included the output, so... you think that was fabricated?

I think it is more probable that OpenAI fine-tuned ChatGPT to score more neutrally on these public tests. This would be extremely dishonest on their part, but very unsurprising, considering how hard they are trying to appear to be "good actors" when they are definitely not.

Yes, I think that is the most likely explanation.

Thanks for doing this!

It's particularly thorough, with all the screenshots.
