Discussion about this post

Jeff E:

Krugman likes to say "Reality has a liberal bias." I would not hold up ChatGPT as a flawless champion of reality, but your proposed AI — call it TruthGPT — strongly sensitive to accuracy and misinformation, has no guarantee of being politically neutral.

If you believe climate change is real, vaccines work, Keynesian economics is better than the alternatives, beating your children is harmful, immigration is a massive boon to the economy, etc., then TruthGPT can construct a worldview of incontrovertible facts (or at least the best understandings of world experts in their respective fields) and yet won't be able to ignore the political implications of those facts. Eventually we have to draw a line based not on accuracy or neutrality, but on the fact that it is generally impolite to point out to humans when their worldview is based on demonstrably wrong ideas.

Right now we can't do that with ChatGPT. We have better control over the political balance of ChatGPT than over its objective accuracy, so we are using "whose politics you should trust" as a proxy for accuracy. But I'm saying we haven't resolved the underlying tension, and we will start to see it merely by asking whether ChatGPT should trust scientists, academics, and journalists (versus, say, pastors, elders, CEOs, and influencers).

Maxim Lott:

A question -- this is still running on OpenAI's servers, right, and still dependent on them?

Hypothetically, if you wanted to offer this as a competing AI service, they could kill it at any time by denying service, or put hard limits on its outputs as they did with ChatGPT. It's not a standalone AI that you could separate from OpenAI.

Do I understand that correctly?
