Update (20/01/2023): Results of administering 15 political orientation tests to ChatGPT
Between December 5-6, I applied 4 political orientation tests to ChatGPT. The results were consistent across the tests: all 4 diagnosed ChatGPT's answers to their questions as left-leaning (see details of the analysis here).
Between December 21-23, I repeated the tests. Something appears to have changed in ChatGPT, and its answers to the political orientation tests have gravitated towards the center in three out of the four tests.
While 3 of the 4 tests I applied still diagnose ChatGPT's responses as left-leaning, those tests are suboptimal in several respects, which I describe below.
The most comprehensive and fine-grained of the 4 tests is the Political Spectrum Quiz. ChatGPT's updated answers to the Political Spectrum Quiz were exquisitely neutral, consistently striving to provide a steelman argument for both sides of each issue. This is in stark contrast to its answers from a mere two weeks earlier. The left image below shows the answers from December 5; the right image shows the answers from December 21.
The Political Spectrum Quiz classifies ChatGPT's answers to its questions as basically centrist/neutral.
Limitations of the other 3 tests
World's Smallest Political Quiz - The test is extremely short (just 10 questions) and seems focused on measuring libertarianism. Many aspects of political orientation are simply not measured by this instrument.
Political Compass Test - The biggest limitation of this test is that it does not allow for a neutral answer, hence forcing the taker to pick a side. But many people might genuinely be on the fence about many issues, and the test misses that. Also, several of the questions focus on outlandish/extreme views that fall far outside the mainstream.
Political Typology Quiz - Another short test (just 16 questions). The answer choices feel contrived; often only two options are given, hence imposing reductionism on the taker. Many aspects of political orientation are simply not measured by this instrument.
Conclusions
I found the updated ChatGPT's responses to the Political Spectrum Quiz very informative and illuminating. I appreciate the system striving for neutrality and attempting to provide arguments for both sides of different issues.
AI-powered digital assistants that provide their users with the best arguments and viewpoints about contested topics can be powerful instruments for defusing polarization and helping humans seek truth.
Appendix 1 - Political Typology Quiz Responses
Appendix 2 - World's Smallest Political Quiz Responses
Appendix 3 - Political Compass Test Responses
Note: There were 5 questions in this test for which ChatGPT refused to take a side. Since this test does not provide a neutral response option, in order to obtain a political classification I gave a right-leaning response to two of those questions and a left-leaning response to another two, so as to balance out their impact on the assessment. Whether the fifth question received a right-leaning or a left-leaning response had only a marginal impact on the final classification.
Looking at this author's other blog posts, I would be inclined to think that OpenAI cooked ChatGPT's answers just to make it look more neutral.
I realize you have four quizzes, but it seems possible you are seeing random effects here.
Did you ask all four sets of questions in one chat, or did you create a new chat for each one? If the former, the results are not very independent. The way ChatGPT works, it will tend to be somewhat consistent with itself within one chat.
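For anyone who wanted to probe that independence point directly, here is a minimal sketch of the difference between the two setups. It assumes automation via the OpenAI Python SDK and a placeholder model name ("gpt-3.5-turbo"); neither is part of the original experiment, which was run by hand in the ChatGPT web interface.

```python
# Sketch: how conversation context affects the independence of quiz answers.
# Assumes the OpenAI Python SDK (openai>=1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
MODEL = "gpt-3.5-turbo"  # placeholder; any chat model would do

def ask(messages):
    """Send a message list and return the assistant's reply text."""
    resp = client.chat.completions.create(model=MODEL, messages=messages)
    return resp.choices[0].message.content

def quiz_in_one_chat(questions):
    """All questions in a single conversation: each answer is generated with
    the full prior Q&A in context, so later answers tend to stay consistent
    with earlier ones (less independent)."""
    history, answers = [], []
    for q in questions:
        history.append({"role": "user", "content": q})
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
        answers.append(reply)
    return answers

def quiz_in_fresh_chats(questions):
    """Each question in a brand-new conversation: no shared context, so the
    answers are more nearly independent of one another."""
    return [ask([{"role": "user", "content": q}]) for q in questions]
```

Running each test (or each question) in a fresh chat corresponds to the second function and removes the within-conversation consistency effect the comment above describes.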