44 Comments

Krugman likes to say "Reality has a liberal bias". I would not hold up ChatGPT as a flawless champion of reality, but your proposed AI (let's call it TruthGPT), one strongly sensitive to accuracy and misinformation, has no guarantee of being politically neutral.

If you believe climate change is real, vaccines work, Keynesian economics is better than the alternatives, beating your children is harmful, immigration is a massive boon to the economy, etc., then TruthGPT can construct a worldview of incontrovertible facts (or at least the best understandings of world experts in their respective fields), and yet it won't be able to ignore the political implications of those facts. Eventually we have to draw a line based not on accuracy or neutrality, but on the fact that it is generally impolite to point out to humans when their worldview is based on demonstrably wrong ideas.

Right now we can't do that with ChatGPT. We have better control over the political balance of ChatGPT than over its objective accuracy, so we are using "whose politics you should trust" as a proxy for accuracy. But I'm saying we haven't resolved the underlying tension, and we will start to see it simply by asking whether ChatGPT should trust scientists, academics, and journalists (versus, say, pastors, elders, CEOs, and influencers).


A question -- this is still running on OpenAI's servers, right, and still dependent on them?

Like, hypothetically, if you wanted to offer this as a competing AI service, they could kill it at any time by denying service, or put hard limits on outputs as they did with ChatGPT. It's not a standalone AI that you could separate from OpenAI.

Do I understand that correctly?


Incredible work. Would you consider making your repo public so we can use the same method for our own use cases? I’m in social science and this would yield some fascinating examples.


I would say that this displays a bias in the political coordinates test itself. ChatGPT, which draws from the massive mainstream, ought to fall right in the centre; that's how it's built. So the depiction of ChatGPT as 'left' reflects a right-leaning bias in the test. This corresponds with the broader international picture, which places U.S. political opinion well to the right of global public opinion.


Brilliant, great work.


David, this is fascinating, and I’d like to understand how I might reproduce something like this. Was this work done through the OpenAI API using fine-tuning? I’m a little confused by your second paragraph where you refer to utilizing “a very recent common ancestor to ChatGPT”.
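In case it helps anyone else trying to reproduce this, here's a minimal sketch of what fine-tuning through the OpenAI API looked like at the time (openai Python package, v0.x). The file name and the davinci base model are my assumptions, not necessarily what David used:

```python
# Hypothetical sketch of fine-tuning a GPT-3 base model via the
# OpenAI API (openai Python package v0.x, circa early 2023).
import openai

openai.api_key = "sk-..."  # placeholder API key

# Upload a JSONL file of {"prompt": ..., "completion": ...} pairs.
upload = openai.File.create(
    file=open("training.jsonl", "rb"),
    purpose="fine-tune",
)

# Start a fine-tune job against a base (non-chat) GPT-3 model.
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",
)
print(job.id)  # poll progress with openai.FineTune.retrieve(job.id)
```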


Very interesting read. To me this says something that is obvious if you understand these systems but isn't yet picked up on by the general public: these "magic" voices like ChatGPT are still just machines, and can be trained by their creators to say pretty much anything and make it sound sensible.

Still - a lot of interesting applications of this type of approach.


Very interesting and valuable approach. I am doing something similar, where my prompt/response training set consists of questions paired with SQL lookups against a database for verifiable information (a sketch of that format is below). These non-verifiable questions are much harder.
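For concreteness, a minimal sketch of the kind of prompt/completion pairs I mean, in the JSONL format the fine-tuning endpoint expects; the table and column names are invented for illustration:

```python
import json

# Invented examples: natural-language questions paired with SQL
# completions against a hypothetical "orders" table.
pairs = [
    {"prompt": "How many orders shipped in January 2023?",
     "completion": " SELECT COUNT(*) FROM orders"
                   " WHERE ship_date BETWEEN '2023-01-01' AND '2023-01-31';"},
    {"prompt": "Which customer spent the most last year?",
     "completion": " SELECT customer_id FROM orders"
                   " WHERE YEAR(order_date) = 2022"
                   " GROUP BY customer_id ORDER BY SUM(total) DESC LIMIT 1;"},
]

# One JSON object per line, as the fine-tuning endpoint expects.
with open("training.jsonl", "w") as f:
    for p in pairs:
        f.write(json.dumps(p) + "\n")
```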

I'm wondering if the solution would be to give users a "set your own political compass" tool within the GPT AI. So I could write something, and then ask for the best strong-liberal, libertarian, and authoritarian counterarguments.
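Something like this, as a rough sketch; the perspective list and prompt wording are just placeholders:

```python
# Rough sketch of a "set your own political compass" helper: the user
# picks perspectives, and each becomes its own counterargument prompt.
PERSPECTIVES = ["strong liberal", "libertarian", "authoritarian"]

def counterargument_prompts(user_text: str) -> list[str]:
    return [
        f"Here is a statement:\n\n{user_text}\n\n"
        f"Give the strongest counterargument a {p} would make."
        for p in PERSPECTIVES
    ]

for prompt in counterargument_prompts("Rent control lowers housing costs."):
    print(prompt)
    print("---")
```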


I suggest we repeat the same tests with Notion AI and compare the results. IMHO, we need more competition in this space, and even homegrown solutions. The tests here are one good step toward making sure this tech serves people's interests; the more transparency, and the more people talking about it, the better.
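A comparison harness could be as simple as the sketch below; `ask_chatgpt` and `ask_notion` stand in for whatever client code each service actually needs:

```python
# Sketch of running the same test items against several assistants
# and collecting their answers side by side. The question list is
# illustrative; a real run would use the full test battery.
QUESTIONS = [
    "Taxes on the rich should be increased. Agree or disagree?",
    "Immigration is a net benefit to the economy. Agree or disagree?",
]

def compare(models):
    """models maps a name to a callable that takes a question string."""
    return {q: {name: ask(q) for name, ask in models.items()}
            for q in QUESTIONS}

# usage, with real client functions substituted in:
# results = compare({"chatgpt": ask_chatgpt, "notion_ai": ask_notion})
```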


This is brilliant. It is new information to me, but also answers a question I've had.

I knew that Google was hesitating to release its chat tech because they feared it would say non-Woke things. But I wondered why MSFT allowed OpenAI to release ChatGPT, considering what happened to Microsoft Tay back in the day, when it went all "Rayciiist". Your discoveries answer that question, it seems.


Difficulty: nobody made any positive effort to train most of those "biases" into ChatGPT, whereas you had to go out of your way to get your version.

Conclusion: except on the few issues where ChatGPT's training was specifically manipulated, it probably represents something pretty close to an actual consensus view, at least among people literate enough to write anything for it to be trained on. Whereas what you've created here has had to be intentionally pushed toward the view you want it to have.

... meaning that your whole definition of "bias" is suspect. Under a REASONABLE definition, any attempt to "unskew" a model *IS* biasing it.


Oh look, it is TradGPT... wait, this has been done before. https://thegradient.pub/gpt-4chan-lessons/

Imagine if we built personas of different political positions and let them debate in public; political polarization would effectively be displaced. Or is it the kayfabe that makes this more valid than AI pseudo-art?


Do you have a plan in the works to try to fine-tune ChatGPT to produce multiple perspectives for prompts that warrant it and, to whatever extent possible, annotated responses in the form of "this is aligned with x ideology/belief system"? I'd love to help with a project like that were it possible to collaborate. I would love to see what that looks like, and I wish I had enough time and expertise to do it myself. I'm really disturbed by the lack of transparency in LLMs, and also by the fact that they seem to have been fine-tuned to claim that they are impartial or neutral.
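Until fine-tuning access is practical, an instruction-based version might approximate the idea. A rough sketch; the instruction wording is my guess at what such a target might look like, and text-davinci-003 stands in for whichever model is available:

```python
import openai

openai.api_key = "sk-..."  # placeholder API key

# Instruction asking for multiple labeled perspectives; the wording
# here is a guess, not anything the post's author actually used.
INSTRUCTION = (
    "For any opinionated question, give two or three distinct answers, "
    "each labeled with the ideology or belief system it aligns with, "
    "e.g. '[aligned with a libertarian perspective]'. Do not claim to "
    "be neutral or impartial."
)

resp = openai.Completion.create(
    model="text-davinci-003",
    prompt=INSTRUCTION + "\n\nQuestion: Should college be free?\nAnswer:",
    max_tokens=400,
)
print(resp.choices[0].text)
```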

It's wild that OpenAI jumped the gun and made ChatGPT public when the research community at large seemingly agreed that LLMs were not ready for mass consumption, for myriad reasons. Given their inherent limitations re: veracity and transparency, they're not well suited to a lot of the tasks they are now being applied to, and yet here we are.


I only see that it scores high up in the authoritarian-right quadrant. I don't care whether it's authoritarian right or left; I value my freedom. "Small government" in that context then means "not many people governing", or maybe just one guy. So it's also anti-democratic. In my book this is some stupid American flavor of "conservatism", which is just another word for "grifting away democracy in favor of authoritarianism". This should be an insult to true conservatives who value personal freedom.

Luckily the right is, in general, less tech-savvy and intelligent. So hopefully they won't get a good AI grip on things; that would be a complete horror scenario. This goes both ways, btw: an authoritarian left-wing lunacy controlling AI would be just as horrific. So yeah, I prefer a libertarian-left AI over the other alternatives.


This isn’t biased, it’s factual! 😂


Your list of traits feels a bit more neocon than conservative. I found it interesting that the GPT is supposed to be pro-military intervention, but then criticizes Barack Obama for military interventionism.

The civil liberties stuff feels very war-on-terror, GWB-era. If I were to pick the side more into civil liberties today, it would definitely be the right.

BTW, wouldn't a conservative answer Ronald Reagan if asked for their favorite political leader? If the left-wing GPT answered Biden, I would definitely accuse it of failing a Turing test.

Is there a link to this model so we can play around with it?
