In previous work, I documented the growing emotional negativity (anger, fear, sadness, etc.) of American news media headlines between 2000 and 2019.
"Why has the proportion of pessimistic headlines in news media increased over the past decades?"
The MSM was faced with the threat of hemorrhaging money and influence due to the transition of our culture from print-based to internet-based. They responded not with innovation or transformation, but by becoming hysterical purveyors of fear, hatred and outrage, hoping to hijack our limbic systems and save themselves by turning readers/watchers into crackheads addicted to stories of how evil and dangerous the Other is (the Other team/party's members, that is).
I think the best analogy is of a chemical plant or factory that dumps poisonous toxins into a river, except this river is our national discourse and polity, which is now so polluted that half the country thinks the other half are dangerous extremists who need to be gagged and banished in the name of defending "Democracy".
As evidence I submit this story about "Project Feels", which illustrates how the algorithms work to manipulate our thoughts and emotions (basically to get us addicted to panic and outrage then sell it back to us):
https://www.poynter.org/business-work/2019/the-new-york-times-sells-premium-ads-based-on-how-an-article-makes-you-feel/
And I'd also suggest Matt Taibbi's book "Hate Inc.", which describes the cable-news sales model of permanent hysteria.
Making us all hate each other means big profits for the media business.
1.7 million. Massive corpus scale. Yet I have a methods question: how did ChatGPT code each headline's attitudinal tone as pessimistic, optimistic, or neutral? I have only played casually with ChatGPT and have yet to do research with it. Did you have a way to direct its coding of each headline, e.g., by including a word list in the prompt with common adjectives for each attitudinal tone?
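For concreteness, the word-list idea the commenter floats could look something like the sketch below: a crude baseline that counts matches against small seed lexicons for each tone. The lexicons, labels, and headlines here are entirely illustrative assumptions, not the study's actual method or data.

```python
# Hypothetical word-list baseline for coding headline tone.
# The seed lexicons below are made up for illustration, not the study's lists.
PESSIMISTIC = {"crisis", "fear", "collapse", "threat", "decline", "dangerous"}
OPTIMISTIC = {"hope", "growth", "recovery", "breakthrough", "progress", "improve"}

def code_headline(headline: str) -> str:
    """Return 'pessimistic', 'optimistic', or 'neutral' for one headline,
    based on how many lexicon words it contains."""
    words = {w.strip(".,!?\"'").lower() for w in headline.split()}
    neg = len(words & PESSIMISTIC)
    pos = len(words & OPTIMISTIC)
    if neg > pos:
        return "pessimistic"
    if pos > neg:
        return "optimistic"
    return "neutral"

print(code_headline("Economic collapse deepens the crisis"))   # pessimistic
print(code_headline("Breakthrough brings hope of recovery"))   # optimistic
```

A prompt-based version would simply embed these word lists in the instructions given to the model, but the counting logic above shows what "directing the coding" with a lexicon amounts to.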
"Bad news sells"
Is it any wonder that growth in the media industry would occur in the segment that generates the most sales?
Why not throw in some per-outlet dummy variables?
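A minimal sketch of what per-outlet dummy variables could look like, using made-up outlet names and synthetic data (nothing here is the study's actual specification): regress a negativity score on a time trend plus one indicator per outlet, dropping one category to avoid collinearity with the intercept.

```python
# Hypothetical per-outlet dummy-variable regression on synthetic data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "year": rng.integers(2000, 2020, size=n),
    "outlet": rng.choice(["OutletA", "OutletB", "OutletC"], size=n),
})
# Fabricate a negativity score with a small upward time trend plus noise.
df["negativity"] = 0.01 * (df["year"] - 2000) + rng.normal(0, 0.1, size=n)

# Design matrix: intercept, year, and outlet dummies (first outlet dropped).
X = pd.get_dummies(df[["year", "outlet"]], columns=["outlet"], drop_first=True)
X = X.astype(float)
X.insert(0, "const", 1.0)

beta, *_ = np.linalg.lstsq(X.to_numpy(), df["negativity"].to_numpy(), rcond=None)
for name, b in zip(X.columns, beta):
    print(f"{name}: {b:+.4f}")
```

The outlet coefficients then absorb each outlet's baseline level of negativity, so the year coefficient reflects the within-outlet trend rather than a shift in the mix of outlets sampled.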