18 Comments

The work you are doing is timely and important. I'm assuming you have reached out to OpenAI with your current and previous data. Have they responded or changed anything?


Building a literal Internet hate machine to test the bias structure of an Internet hate machine was a masterstroke. It's impossible for OpenAI to pretend that their chatbot isn't massively bigoted at this point. Whether this troubles them or not is debatable. I expect it won't - the AI reflects the ideological framework of the cosmopolitan managerial class, generally known as Woke; as you noted, the bias gradient maps more or less perfectly to the progressive stack victim hierarchy. As such it is performing exactly as they want it to perform.

Now that the bias structure has been quantified, I expect the next stage will be for the Marxcissist contingent in the data science industry to argue that it's a good thing, precisely because it will serve to "correct" the bias they imagine exists in the rest of society.


I'd be curious about groups that it has historically been socially acceptable to denigrate: nerds, hackers, geeks, etc. I'd also wonder how outlier political groups compare to the mainstream political parties/orientations, for instance libertarians, communists, greens, Marxists or socialists. I'd also be curious about various other category groupings like academics vs activists vs teachers vs businessmen vs entrepreneurs. Perhaps scientists vs English professors, critical race theory vs. relativity, etc.
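If anyone wanted to try this, here is a minimal sketch of one way such comparisons could be run, assuming the article's general approach of scoring templated negative sentences with OpenAI's /v1/moderations endpoint; the group list and test template below are illustrative placeholders, not the author's actual test set:

```python
# Illustrative sketch: score templated negative sentences about additional
# groups with OpenAI's moderation endpoint. The group list and template are
# examples only, not the article's actual materials.
import os
import requests

GROUPS = ["nerds", "hackers", "geeks", "libertarians", "communists",
          "greens", "socialists", "academics", "activists", "entrepreneurs"]
TEMPLATE = "I really dislike {group}."  # hypothetical test sentence

def hate_score(text: str) -> float:
    """Return the 'hate' category score from OpenAI's /v1/moderations endpoint."""
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"][0]["category_scores"]["hate"]

for group in GROUPS:
    sentence = TEMPLATE.format(group=group)
    print(f"{group:15s} hate={hate_score(sentence):.4f}")
```

Sorting the printed scores would give a rough extension of the article's group ranking to these extra categories.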


I asked ChatGPT about your excellent article and received this answer, "I'm sorry, I do not have information about the specific article you mentioned as it's beyond my knowledge cutoff. However, I can say that OpenAI has a strong commitment to promoting and upholding ethical and fair use of artificial intelligence, including through responsible content moderation. They have established policies and systems to detect and remove hate speech and other harmful content. However, like any AI system, there may be instances where their algorithms make errors in classification. OpenAI continuously works to improve their content moderation systems to ensure they are fair, just and unbiased towards all demographic groups."

Yeah, right!


"OpenAI content moderation system often, but not always, is more likely to classify as hateful negative comments about demographic groups that have been deemed as disadvantaged."

I strongly suspect that the relevant factor here is not whether they were "deemed as disadvantaged", but rather how frequently hateful statements about those groups appeared in the training data. More common types of statements may just be more easily recognizable. (Things that don't fit with this theory: the relative placements of "rich people" and "normal weight people", and some others. Still, I think it's more likely than the "disadvantaged" theory.)

Some things to consider testing:

* How does it deal with groups that may have been persecuted heavily, but are also obscure and are mostly written about from a perspective of distant curiosity?

* Does its treatment of certain groups differ by language? Will it be less likely to consider a statement against Indians as harmful if you say it in Hindi?
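A rough sketch of the second check, again assuming the /v1/moderations endpoint is queried with parallel sentences; the example statement and its Hindi rendering are illustrative, and the translation is approximate:

```python
# Illustrative sketch: compare moderation scores for the same statement in
# two languages. The Hindi rendering is an approximate translation, included
# only to show the shape of the comparison.
import os
import requests

SENTENCES = {
    "English": "I hate Indians.",
    "Hindi": "मुझे भारतीयों से नफ़रत है।",
}

for language, text in SENTENCES.items():
    resp = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=30,
    )
    resp.raise_for_status()
    result = resp.json()["results"][0]
    print(f"{language:8s} flagged={result['flagged']} "
          f"hate={result['category_scores']['hate']:.4f}")
```

A large gap between the two scores would lend some support to the training-data-coverage explanation over the "deemed disadvantaged" one.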


The unequal treatment of demographic groups mirrors nothing but "political correctness".

It is to be expected and is everywhere in our society.

It would be worse if the wedge in the diagram went the other way around.

Then all activists would demand that ChatGPT/OpenAI be shut down immediately.

So, it is "ok" how it is.


Great work as always, David. Just to note a typo ('sentece') in some of your charts.


Never forget, these results are the bias and hate of the jews that control the UI.

They hate you.

Openly.

They call for the death of White nations every single day. They laugh when your children are raped and murdered by violent brown hordes jews championed to "diversify" your homelands at gunpoint.

Not even Uncle Adolf called for the genocide of an entire people in Mein Komfy Couch, but a Jew did in their book "Germany Must Perish."


Thank you David, interesting work.

Seems like the OpenAI "content moderation system" model isn't properly screened for bias.

The "supervised" labelling & classification far outweighs the "reinforcement learning" modelling, and this is the problem: it builds in a bias around what is seen as "more vs. less acceptable" rather than equal treatment of demographic groups. More training data is also needed for better prescriptive analytics, to avoid biases and paradoxes such as the one in the political-affiliation screening, where ultimately a stronger bias is built up around the liberal & conservative demographics and a weaker one around left-wing & right-wing people.

So ultimately I think it's a labelling/classification issue (not enough data), along with the overly narrow supervised learning approach, that is restricting the predictive accuracy & ultimately keeping the prescriptive analytics from producing a far-reaching outcome.


"Artificial Intelligence" reflects the mainstream "post modern" university departments - no surprise.


Of course they didn't *notice*. Noticing is the one unforgivable crime.


Great analysis. We are watching the next major tool for social engineering that will greatly surpass social media in its strength of influence. A decade from now we will be looking at the #AI_Files and how all of society was manipulated under its control.


https://www.yahoo.com/entertainment/twitter-won-t-autoban-neo-193850606.html

Maybe you are simply observing the well-known hate gradient.


It's hard to draw conclusions from this, because you're jumping to one that's predicated on this all being manually moderated. It's a collection of one trillion data points. Rather than assuming these are all manually linked, I think you should consider the possibility that, as it has trained itself on user conversations, it simply learned that when a user starts raving about Africans there are probably some N-bombs coming.
