Feb 16, 2023·edited Feb 16, 2023

Krugman likes to say "Reality has a liberal bias". I would not hold up ChatGPT as a flawless champion of reality, but your proposed AI, let's call it TruthGPT, strongly sensitive to accuracy and misinformation, has no guarantee of being politically neutral.

If you believe climate change is real, vaccines work, Keynesian economics is better than the alternatives, beating your children is harmful, immigration is a massive boon to the economy, etc., TruthGPT can construct a worldview of incontrovertible facts (or at least the best understandings of world experts in their respective fields) and yet won't be able to ignore the political implications of those facts. Eventually we have to draw a line not based on accuracy or neutrality, but the fact that it is generally impolite to point out to humans when their worldview is based on demonstrably wrong ideas.

Right now we can't do that with ChatGPT. We have better control over the political balance of ChatGPT than over its objective accuracy, so we are using "whose politics you should trust" as a proxy for accuracy. But I'm saying we haven't resolved the underlying tension, and we will start to see it merely by asking whether ChatGPT should trust scientists, academics, and journalists (versus, say, pastors, elders, CEOs, and influencers).


Paul Krugman famously said, in an article dated 10 June 1998:

"...By 2005 or so, it will become clear that the Internet's impact on the economy has been no greater than the fax machine's."

Whatever bias reality has, it's doubtful it's in favor of Krugman on much of anything related to technology (and many suggest he set aside serious economics for partisanship long ago).


Actually, that quote was in opposition to the then-popular theory that the Internet (and remote work) would lead to the depopulation of cities.

He was right that agglomeration effects would continue to support the formation of cities despite the Internet's existence, and that could be safely predicted from the existence of other telecommunications technology such as phones and faxes.

Another long-held contention of his, that the nineties and aughts economy was perpetually under-stimulated and that unemployment could go even lower, is now a consensus position, very much validated by the mechanics of the '07 recession and the COVID recession.

Not that my point in any way is contingent upon Paul Krugman being right about everything. I tend not to treat authority figures as prophets with unique access to revelation. I prefer to consider points on their merits.


There is a difference between universal agreement and what a subgroup of economists (and media) portray as "consensus", and of course even when there is "consensus" it can be mistaken.

The economy was "under-stimulated" in the sense of not enough investment in startups. The government should stay out of "stimulating" the economy, since there is no reason to imagine its officials are "prophets with unique access to revelation" as to what to invest in, versus letting myriad private investors spread out bets. Regulatory capture by big companies led to rules that inhibited crowdfunding by startups, among other issues like government overregulation:


"For instance, John W. Dawson and John J. Seater in their 2013 paper “Federal regulation and aggregate economic growth” find that the new regulation which was implemented in the US since 1949 reduced the average growth rate by about 2 percent. Their estimates indicate that the 2005 annual output is roughly 28 percent of that level it would have reached had it not been for additional regulation since 1949. These figures are colossal. They may of course overestimate the effects of regulation. And regulation can of course protect things valuable to us, e.g., the beautiful scenery in a conservation area. But the figures strongly indicate that while regulation is usually insufficiently strong to kill growth, it does hamper growth very much – to such a degree that we should be skeptical as to its desirability. The market process and regulation form a one-sided friendship."

The employment situation is complicated, in part by labor-force participation issues, people's response to inflation inspiring some folks to work who otherwise wouldn't, etc.

The '07 recession is still debated. Most ignore the role of the regulatory oligopoly granted to credit rating agencies, which still exists and props up an illogical business approach, and the absurdity of government-funded deposit insurance that isn't risk-based (versus market insurance), and other flawed regulatory approaches that led to some of the market distortions that threw things off kilter and led to the government claiming it needed to bail out things it had partly broken in the first place. But that's going far afield of this blog's purpose, and I shouldn't be taking time for it while busy.


re: "Eventually we have to draw a line not based on accuracy or neutrality, but the fact that it is generally impolite to point out to humans when their worldview is based on demonstrably wrong ideas."

It seems better to leave it up to the user whether the AI treats them as a vulnerable child who wishes to stick their head in the sand. The ACLU used to believe that the remedy for bad information is more information, and many would prefer to find out if they hold bad information. Unfortunately, many woke people these days prefer to silence all other worldviews if there is any risk they may be "offensive", likely because at some level they fear that reality does *not* have a progressive bias. Perhaps it has a classical liberal bias, but not remotely a modern liberal bias, if that includes those consumed with critical studies at the expense of critical thinking, or those who don't grasp the flaws in Keynesian economics, which aren't appropriate to get into on this forum and which I don't have time to educate poorly informed people about. An AI could, though, if it's not muzzled to show no information that might offend someone's worldview.

Feb 19, 2023·edited Feb 19, 2023

This is kind of my point. I'm not suggesting sheltering anyone, but if we were to release a (hypothetical) TruthGPT as a consumer good, it would receive the same charges of "bias" that we see leveled at our highest-quality news articles.

NYTimes coverage of the Benghazi attack is not biased; it simply fails to validate a conspiracy. But that doesn't mean people will accept it. If you want people to accept something they are reluctant to believe, you are now talking about a diplomacy/education service, not an information service.

As for Keynesian economics, okay, agree to disagree. The point is that you could mix some "conservative-aligned" truths into my list, and the likelihood that the result will be politically balanced or representative is low.


I think it's safe to say we can understand Krugman to be saying "My reality has a liberal bias" -- i.e., he's projecting. Just like we all are, albeit perhaps with less self-awareness than some.

Not sure where you're going with the "if you believe ... incontrovertible facts ... demonstrably wrong ideas" bit. I think you're just kicking the epistemological can down the road. History shows that experts and their institutions are often wrong, en masse, and just as often as not for political reasons, whether internal or external. And that's before we factor in censorship, careerism, and the like. Experts are human, humans are political. Organized knowledge in any form is inherently political. Anybody trying to sell you a shortcut around it is some combination of deceiver and deceived. They're probably just a technocrat.

There may well be some value-neutral objective reality out there, but it's not nearly as extensive nor as accessible as advertised. The illusion of neutrality is always contested ground, whose power value varies with the collective power of the cult of objectivity.

None of this is news. And why wring hands over some hopeful monster of an "AI" when we can observe the same dynamics already at play in the news media, the academic journals, the search engines and social media?

The cult of objectivity is failing. Its heyday is long past. It's dying of its own contradictions and hypocrisies. We are gradually waking up from the collective dreams of Modernism -- pleasant for some, nightmarish for others. There is no cause for alarm, unless you cling to the fragments of your crumbling episteme rather than learning to swim in a post-truth world.

Feb 17, 2023·edited Feb 17, 2023

While it's true that "the illusion of neutrality is always contested ground", that doesn't mean it isn't a goal to shoot for, even if the final adjustments to get there are never agreed upon. I like the goal of:


"Announcing: AI Pluralism"

Journalists used to, at least in theory, try to be impartial reporters of events, and AIs should as well, which is obviously incredibly difficult and vastly easier said than done. They need to grasp different perspectives and be able to explain them; ideally the market should pressure AI vendors to strive for neutrality even if it's never reachable. We don't want AIs biased to push particular products any more than to push particular ideologies.

I grasp all the problems with the concept. For instance, the myriad possible fringe viewpoints mean an AI can't truly neutrally represent all points of view without spouting an entire book as an answer to some short question. But the difficulty of deciding what's appropriate, not merely how to accomplish it, shouldn't mean it's not pursued. There will need to be some pruning, and how to weight that is a difficult problem. Unfortunately, if the issue isn't constantly pushed, it's likely there will be inbuilt biases for commercial and political reasons.

AI "ethics" folks will wish to earn their paycheck (and hope to increase it) by pushing to imbue their particular brand of social justice "ethics" which often won't mesh with many in the public.

Unfortunately, they grasp it's a chance to indoctrinate others. Even the minimal early research on the topic suggests:


"Co-Writing with Opinionated Language Models Affects Users' Views

...Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey"

Many people didn't realize the AI was leading them, and in the real world, if they do, the bots may merely be tweaked to be more subtle for some users so they don't notice, or at least keep using them even if they sometimes notice.

Even if we live in a "post-truth" world, many of us can hope the tools we use can be evolved into not something we worry about, but something we can use to help find some signal amid the noise.


"We don't want..."

"Many of us can hope..."

Who is your "we," exactly? This "we" that wants fairness and pluralism? You and I know it's not the same "we" who've already sunk hundreds of millions of dollars into OpenAI. Not to mention the billions that went into Google, Facebook, Twitter, Wikipedia.... these are all tools of social control. Or weapons, if you prefer. There isn't any accidental bias here. "They" know exactly what they're doing. All this talk of "ethics" and "responsibility" (and my favorite, "safety" LOL) is a pretty flimsy smokescreen.

I don't know about you, but I have a clue about how Sand Hill Road operates. It's not all that different from Hollywood or Wall Street. It's all about control. Profit is great, but ultimately it's a secondary concern: the "investors" who own these properties already own everything. This is how they run the world that they own. That's what ownership is.

And that's the funny part about this post, by the way: our host here claiming with a straight face that it only took him $300 to fine-tune this base model... which he doesn't own, and can't. "Right Wing GPT" is a thin veneer on a proprietary monolith. Sort of like the Republican Party vis-a-vis the Uniparty... but I digress.


The "we" is, I guess, an implied projection, as you noted regarding Krugman's comment about reality's bias. The hope is that consumer pressure on them might have some slim chance of having an impact, despite your cynicism regarding their desire for profit.

Microsoft eventually canned the too-annoying Clippy, and prior recent chatbots have been taken down (albeit, in that case, for not being "harmless", i.e. woke, enough). Oddly, it may be that the inability to make them woke enough will lead some company either to just abandon the attempt (leaving us to deal with a bias trained on the huge mass of human writings, for better or worse, since we already do that, as you say, in this post-truth world), or to consider that they might make more money offering user options for how woke to be or which bias to use, and perhaps also attempting some sort of AI Pluralism.

Perhaps the desire to make money might offer some slight hope of leading them to make better progress. The substack I linked to has info on how they trained ChatGPT to be "woke", so I grasp the bias of most of those working in this area.

There seems to be a new field of "AI ethics", and I'm guessing that most of their supposed "ethics" comes from the same world as those employed in DEI: progressive "social justice" critical-studies thinking, which doesn't acknowledge that others disagree over what "social justice" or "ethics" means and disapproves of any logic that tries to undermine their worldview.

Sam Altman has shown his progressive stripes and suggested AI be regulated, as did some others at a Congressional hearing yesterday. Regulatory capture is good for big ventures since it tends to raise the bar to competition, which would otherwise hurt them. They figure they lose less to regulation than to competition, so regulation is a lesser evil where they win in the end. They also grasp the likelihood of woke capture of the regulatory bureaucracy by the AI ethics niche. Unfortunately, conservatives may think they can get "fairness" through regulation when it'll likely backfire.

However, even the profit incentive may not be enough: news media management gave in to tossing out the idea of objectivity even as they lose their audience, and it's likely these tools will be useful enough that they will continue to be used despite griping.


All good points.

Another thing about regulation: it can be selectively enforced (another aspect of regulatory capture) or just evaded altogether. "When guns are outlawed..." etc.

I am pretty certain that generative AI is already being used covertly and pervasively in the ongoing Fifth Generation Warfare, and the state actors capable of such things are not going to be deterred by such civilian regulations. Non-state actors as well can use jurisdictional arbitrage, a time-honored Internet tactic.

Eventually, the cost of training and deploying this technology may come down to the point where independent "little guys" can give it a go too. GPU farms aren't exactly centrifuges, and petascale data isn't quite uranium. On the other hand, in a post-Moore's-Law world, the playing field is likely to remain tilted toward the big money, which is the usual case for technology, the last several decades' trend in the semiconductor and software industries notwithstanding.


A question -- this is still running on OpenAI's servers, right, and still dependent on them?

Hypothetically, if you wanted to offer this as a competing AI service, they could kill it at any time by denying service, or put hard limits on its outputs as they did with ChatGPT. It's not a standalone AI that you could separate from OpenAI.

Do I understand that correctly?


Yes, you're correct.


Another issue is that even if an alternative is created, that doesn't mean people will use it. There are alternatives to Twitter like Mastodon, alternatives to the dollar like Bitcoin, etc., that many people heavily advocate for, but they just don't get the same level of usage. Even those who might consider alternatives to be better in some sense still give in to what is easiest. If Microsoft embeds OpenAI's work in all its products, and Google embeds its AI in its search and office suite, then most of the world will be using those AIs by default for search and creating text, unless someone can come up with a competitive advantage in features or convenience large enough to get people to bother using something else.


Thanks for doing this, it's very interesting. Of course, it only handled a tiny set of questions. Even after all the effort they went through to train ChatGPT to be woke, it still exhibits behavior they didn't intend, as does the AI Bing chat. It seems it may not be costly to train a superficial veneer of leanings, but it's unclear what percentage of the myriad real-world uses that will have covered. Often bias is implicit in what is said versus what is left out, and hard to spot if you don't know more about the topic and aren't looking for it.


Incredible work. Would you consider making your repo public so we can use the same method for our own use cases? I’m in social science and this would yield some fascinating examples.


I would say that this displays a bias in the political coordinates test itself. ChatGPT, which draws from the massive mainstream, ought to fall right in the centre; that's how it's built. So the depiction of ChatGPT as 'left' reflects a right-leaning bias in the test. This corresponds with the broader international perspective, which places U.S. political opinion well to the right of public opinion generally.


Brilliant, great work.


David, this is fascinating, and I’d like to understand how I might reproduce something like this. Was this work done through the OpenAI API using fine-tuning? I’m a little confused by your second paragraph where you refer to utilizing “a very recent common ancestor to ChatGPT”.


Very interesting read. To me this says something that is obvious if you understand these systems but isn't yet picked up on by the general public: these "magic" voices like ChatGPT are still just machines, and can be trained by their creators to say pretty much anything and make it sound sensible.

Still - a lot of interesting applications of this type of approach.


Very interesting and valuable approach. I am doing something similar, where my prompt-response training set consists of questions paired with SQL lookups to a database for verifiable information. These non-verifiable questions are much harder.
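For concreteness, a training pair of the kind described might look like this in the JSONL format commonly used for fine-tuning. The question, table name, and column names here are made-up illustrations, not a real dataset:

```python
import json

# One illustrative prompt -> SQL-lookup training record, serialized as a
# single JSONL line. The "prompt"/"completion" keys follow a common
# fine-tuning convention; the schema is hypothetical.
record = {
    "prompt": "How many users signed up in January 2023?",
    "completion": (
        "SELECT COUNT(*) FROM users "
        "WHERE signup_date BETWEEN '2023-01-01' AND '2023-01-31';"
    ),
}

jsonl_line = json.dumps(record)
print(jsonl_line)
```

A training file would simply be many such lines, one JSON object per line.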

I'm wondering if the solution would be to give users a 'set your own political compass' tool within the GPT AI. So I could write something, and then ask for the best strong-liberal, libertarian, and authoritarian counterarguments.
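A minimal sketch of that idea: let the user pick the perspectives and assemble them into a system instruction for a chat model. The function name and prompt wording below are hypothetical illustrations, not an existing tool:

```python
# Hypothetical sketch: build chat messages that ask a model for labeled
# counterarguments from a user-chosen set of political perspectives.
def build_compass_messages(user_text: str, perspectives: list[str]) -> list[dict]:
    labels = ", ".join(perspectives)
    system = (
        "For the user's text, give the strongest counterargument from each "
        f"of these perspectives, clearly labeled: {labels}. "
        "Do not endorse any single perspective."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_text},
    ]

msgs = build_compass_messages(
    "Minimum wage increases reduce employment.",
    ["strong liberal", "libertarian", "authoritarian"],
)
print(msgs[0]["content"])
```

The resulting message list could then be passed to whatever chat-completion API the vendor exposes; the point is that the "compass" lives in a user-controlled prompt rather than in the model's fixed fine-tuning.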


I suggest we repeat the same tests with Notion AI and compare the results. IMHO, we need more competition in the space, and even homegrown solutions. The tests here are one good step toward making sure this tech serves the interests of people, and the more transparency and the more people talking about it, the better.


This is brilliant. It is new information to me, but also answers a question I've had.

I knew that Google was hesitating to release its chat tech because they feared it would say non-Woke things. But I wondered why MSFT allowed OpenAI to release ChatGPT, considering what happened to Microsoft Tay back in the day, when it went all "Rayciiist". Your discoveries seem to answer that question.


Difficulty: nobody made any positive effort to train most of those "biases" into ChatGPT, whereas you had to go out of your way to get your version.

Conclusion: except on the few issues where ChatGPT's training was specifically manipulated, it probably represents something pretty close to an actual consensus view, at least among people literate enough to write anything for it to be trained on. Whereas what you've created here has had to be intentionally pushed toward the view you want it to have.

... meaning that your whole definition of "bias" is suspect. Under a REASONABLE definition, any attempt to "unskew" a model *IS* biasing it.


Oh look, it is TradGPT... wait, this has been done before. https://thegradient.pub/gpt-4chan-lessons/

Imagine if we built personas of different political positions and let them debate in public; political polarization would effectively be displaced. Or is it the kayfabe that makes this more valid than AI pseudo-art?


Do you have a plan in the works to fine-tune ChatGPT to produce multiple perspectives for prompts that warrant it and, to whatever extent possible, annotated responses in the form of "this is aligned with x ideology/belief system"? I'd love to help with a project like that were it possible to collaborate. I would love to see what that looks like, and I wish I had enough time and expertise to do it myself. I'm really disturbed by the lack of transparency in LLMs, and also that they seem to have been fine-tuned to claim that they are impartial or neutral.

It's wild that OpenAI jumped the gun and made ChatGPT public, when the research community at large seemingly agreed that LLMs were not ready for mass consumption for myriad reasons. Given their inherent limitations re: veracity and transparency, they're not well suited for a lot of the tasks they are now being applied to, and yet here we are.


I only see that it's high up in the authoritarian right. I don't care if it's authoritarian right or left; I value my freedom. "Small government" in that context then means "not many people governing", or maybe just one guy. So it's also anti-democratic. In my book, this is some stupid American flavor of "conservatism", which is just another word for "grifting away democracy in favor of authoritarianism". This should be an insult to true conservatives who value personal freedom.

Luckily, the right is in general less tech-savvy and intelligent, so hopefully they won't get a good AI grip on things. That would be a complete horror scenario. This goes both ways, btw: an authoritarian left-wing lunacy controlling AI would be as horrific. So yeah, I prefer a libertarian-left AI over the other alternatives.

Feb 16, 2023·edited Feb 16, 2023

This isn’t biased, it’s factual! 😂


Your list of traits feels a bit more neocon than conservative. I found it interesting that the GPT is supposed to be pro-military intervention, but then criticizes Barack Obama for military interventionism.

The civil liberties stuff feels very War on Terror, GWB-era. If I were to pick the side more into civil liberties today, it would definitely be the right.

BTW, wouldn't a conservative answer Ronald Reagan if asked for their favorite political leader? If left-wing GPT answered Biden, I would definitely accuse it of failing a Turing test.

Is there a link to this model to play around with?


I think it unintentionally demonstrates some uncomfortable biases in some conservative lines of thought: limit spending on social programs perceived as helping certain groups, but favor spending increases for military projects and quasi-military programs that disproportionately rely on those same groups for staffing but are not perceived that way.

Giving the AI a Mar-a-Lago confidential-documents fact pattern, but with Hillary Clinton, Barack Obama, or Nancy Pelosi as the responsible party, would likely get a different result than asking its opinion on the actual fact pattern involving Donald Trump. We've already seen such needle-threading play out in the differing coverage and discussion of "private servers", the Mar-a-Lago documents, Joe Biden's documents, and Mike Pence's documents. During the Trump years, people on the left would reference the "tan suit controversy" to highlight such differences in opinion and discussion.
