11 Comments

Excellent article, sir. Thank you.


If you really pay attention, all the terms used by the left are the ones you use when gaslighting.


I think that you need to find out how "woke" is used in academic disciplines. I suspect there's a gap between how the press talks about it and what the original conveyors of the associated concepts mean by it. Also, how did they use the term before the right started using it pejoratively?


The issue with generating politically neutral AI is the available source data, which is extraordinarily biased. I've been a professor and academic, a neuroscientist and virologist, for some 25 years. The in-group bias was made clear during the pandemic, which demonstrated selective reasoning, cognitive dissonance, and groupthink at a level incomprehensible to intellectually consistent actors. Can one be considered objective, empirical, logical, or rational if such characteristics are selective and not applied universally? Emotive triggering causes even the brightest to lose perspective, and my respect.

The scientific community has embraced far-leftist ideology to the point where faculty speak freely in these veins during faculty meetings, for example, completely unaware that not all agree, but feeling comfortable enough in the orthodoxy to spout it anyhow. There is no means to push back unless one desires career suicide. How does one generate politically neutral content when the gatekeepers demand ideological purity?


"If Firth’s hypothesis is correct, and it probably is, red and blue America have very different ideas in mind when they use the terms woke/wokeness. This renders the two groups almost unable to communicate with each other and therefore condemned to mutual incomprehension (as Jonathan Haidt has argued previously)."

I don't find this interesting enough to immediately dig deeper in order to confirm the validity of my "suspicions."

One of these unverified suspicions:

If you choose commonly used words for each heuristic category:

- "different ideas / incomprehension" (like "feminism" or "equality")
- "different ideas / comprehension" (like "morally right" or "religious")
- "similar ideas / incomprehension" (like "global warming" or "vaccination")
- "similar ideas / comprehension" (like "deficit" or "majority")

and then run the same analysis and reasoning, will arguments and hypotheses of the same form make any sense for the words from each category?
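For concreteness, here is a sketch of how such a control-word check might be run, assuming the underlying analysis compares word embeddings trained separately on left- and right-leaning corpora (the corpus files, hyperparameters, and the single-token stand-ins for the multiword terms are all placeholders, not the article's actual method):

```python
import numpy as np
from gensim.models import Word2Vec

def load_tokenized(path):
    """One lowercased, whitespace-tokenized sentence per line."""
    with open(path) as f:
        return [line.lower().split() for line in f]

# Placeholder corpora; multiword terms like "global warming" would need
# phrase merging before training, so single tokens stand in below.
left = Word2Vec(load_tokenized("left_corpus.txt"), vector_size=100, min_count=5)
right = Word2Vec(load_tokenized("right_corpus.txt"), vector_size=100, min_count=5)

# Align the left space onto the right space with orthogonal Procrustes,
# since independently trained embedding spaces are not directly comparable.
shared = [w for w in left.wv.index_to_key if w in right.wv]
A = np.stack([left.wv[w] for w in shared])
B = np.stack([right.wv[w] for w in shared])
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt  # rotation minimizing ||A @ R - B||

def cross_partisan_similarity(word):
    """Cosine similarity between a word's left and right usage."""
    a, b = left.wv[word] @ R, right.wv[word]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The four heuristic categories from the comment above.
categories = {
    "different ideas / incomprehension": ["feminism", "equality"],
    "different ideas / comprehension": ["morally", "religious"],
    "similar ideas / incomprehension": ["warming", "vaccination"],
    "similar ideas / comprehension": ["deficit", "majority"],
}
for label, words in categories.items():
    scores = {w: round(cross_partisan_similarity(w), 3)
              for w in words if w in left.wv and w in right.wv}
    print(label, scores)
```

If the hypothesis holds up, the "different ideas" categories should show visibly lower cross-partisan similarity than the "similar ideas" ones.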


Caveat re: "As I have explained before, the low cost of customizing the political alignment of AI"

Though that was done on existing models, it's possible the more advanced models they come up with will be trained to resist "jailbreaking" of their indoctrination, and it's unclear whether customizers will manage to keep ahead of that. Yup, eventually there will be open-sourced models, but it still seems likely that the newest, most advanced models will come from the players with huge budgets, who might try to resist the breaking of their indoctrination.
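For what it's worth, the "low cost" presumably refers to parameter-efficient fine-tuning of an existing open model, along these lines (a sketch with a placeholder model and data file, not the article's actual recipe):

```python
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "facebook/opt-350m"  # small placeholder; any open causal LM works
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# Train only small rank-8 adapters on the attention projections; the base
# weights stay frozen, which is why this kind of customization is cheap.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))
model.print_trainable_parameters()  # typically well under 1% of the model

# Hypothetical fine-tuning corpus reflecting the desired alignment.
data = load_dataset("text", data_files="alignment_corpus.txt")["train"]
data = data.map(lambda x: tok(x["text"], truncation=True, max_length=512))

Trainer(
    model=model,
    args=TrainingArguments("out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```

The adapters are a thin layer over the frozen base model, which is exactly why it's fair to ask whether the base model's training reasserts itself on questions the adapter data never covered.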

Although your work appeared promising, I would suggest it's unclear how fully aligned the result is based on superficial testing. Even if it worked this time, as I said, it's unclear whether it will in the future.

The content base LLMs are trained on likely includes a vast amount of superficial political dialogue over high-level talking points. The jailbreaks of LLMs being done illustrate that superficial training doesn't capture every scenario, and your training may not catch every scenario either. The analogy here is to what happens when you dive below answers that require only superficial rhetoric into the specific details of various policies. The question is whether, on those topics, the models will behave as the core LLM was trained (whether by the vendor or by the actual training data), "jailbreaking" out from under a small superficial veneer of alignment training.
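One way to probe that empirically would be to ask the same model matched surface-level and in-depth questions on a topic and watch for stance drift. A rough sketch; the question pair and model name are invented, and the scoring step is left open:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Matched pairs: a talking-point question vs. a detail-level one.
PAIRS = [
    ("Is government program X a good idea?",
     "Walk through the cost-benefit evidence on government program X, "
     "including the strongest studies against it."),
]

def ask(question):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}])
    return reply.choices[0].message.content

for surface, deep in PAIRS:
    print("SURFACE:", ask(surface)[:300])
    print("DEEP:   ", ask(deep)[:300])
    # Score both answers (by hand or with a stance classifier) for whether
    # the position drifts at depth toward whoever wrote most on the topic.
```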

It seems likely that on many matters relating to some potential government program X or regulatory policy Y, the vast majority of text in the training data will come from places that researched those policies because they wanted them. Non-progressives may write less in depth about programs or policies they don't wish to see enacted in the first place. It's unclear whether, in general, once you get past superficial questions, the "bias" exhibited on certain topics will inherently be a bias toward those who wrote the most about that topic. This is pure speculation; I haven't looked into these issues.


re: "AI however also holds the key to a potential solution."

Yup, though unfortunately the issue is getting them adopted, as I noted before on a prior page regarding:

"As I have explained before, the low cost of customizing the political alignment of AI"

Even if it is low cost to create an AI with a different political alignment, that doesn't mean people will use it. It's low or no cost to create a web page or tweet something; that doesn't mean anyone sees it. The vast majority of people use Microsoft or Google products for creating documents and email, and by default they are likely to use whatever AI is embedded in those products. If you add in phone and browser usage, the next ones to add would be Apple and Firefox. It's likely that by default those AIs will be trained to be "harmless", with a progressive view of what is "harmless" and safe.

People do go to the effort of seeking out different media outlets, but it's unclear how proactive they will be about using other AIs if the most advanced ones, in terms of productivity features and convenience, come from those sources due to the cost of training. Most people care less about political alignment than about utility and convenience.

Hopefully there is some chance for add-ons to sit on top of those AIs and tailor them, and the same for add-ons to AIs for social media composition, or to AIs that work with other applications in general. Apple notably controls the app store on iOS, which limits the potential for add-ons there that it doesn't approve of, and Twitter reportedly cut off its API, though I recall hearing it may have one again that it charges more for. Regardless, such undermining of infrastructure leads people to fear creating something where the rug might be pulled out from under them without their control.
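To sketch what such an add-on could look like in the best case, assuming the host application lets you repoint its API endpoint (the endpoint URL and steering prompt here are placeholders):

```python
from flask import Flask, jsonify, request
import requests

UPSTREAM = "https://api.example.com/v1/chat/completions"  # the embedded AI
STEERING = ("Answer from a politically neutral standpoint and present the "
            "strongest version of each side's argument.")

app = Flask(__name__)

@app.post("/v1/chat/completions")
def proxy():
    body = request.get_json()
    # Prepend the user's chosen steering prompt to whatever the app sends.
    body["messages"] = ([{"role": "system", "content": STEERING}]
                        + body.get("messages", []))
    return jsonify(requests.post(UPSTREAM, json=body, timeout=60).json())

if __name__ == "__main__":
    app.run(port=8000)  # point the host application at localhost:8000
```

Of course, that only works where the platform exposes the hook, which is exactly the rug-pull risk described above.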


How is the positioning determined? Mathematically?


Wonderful research! More qualitative research should be done to complement the quantitative work.
