I recently had an eye-opening chat with economist Joe Gulesserian on The Brian Nichols Show about the frightening ways emerging technology could automate propaganda and take disinformation to unimaginable new levels.
As Joe put it, we may soon enter an era of "propaganda on steroids." This conversation illuminated the urgent need to get smarter about AI before it's weaponized at scale.
The Origins of Propaganda in America
As Joe explained, government propaganda tactics originated right here in the USA over a century ago:
"It really started with Woodrow Wilson. He created the Committee of Public Information - America's first 'Ministry of Propaganda' in 1917."
Led by journalist George Creel, this committee recruited media, academics, and community leaders to spread pro-war messaging and shape public opinion in support of WWI. The same era brought intimidation tactics like the Espionage and Sedition Acts, which silenced anti-war voices by branding dissent as unpatriotic.
According to Joe, these techniques went on to influence advertisers and public relations pioneers like Sigmund Freud's nephew, Edward Bernays, on Madison Avenue.
The age of weaponized narratives had begun.
AI Propaganda: From Automated Lies to Influencer Avatars
Flash forward to today, and AI is poised to become the next frontier for propaganda. When I asked Joe how he thinks governments could leverage AI for deception, he shared several examples:
"You could have automated content generation creating fake news videos showing Russian troops invading Romania. Or AI influencers - phony social media profiles spreading disinformation while posing as real people."
He also explained how multilingual AI could enable tailored propaganda campaigns to turn subcultures against each other.
As Joe put it: "You can start using AI to send messages in 10-15 different languages with the US to further separate the tribes."
In fact, Joe suggested some of these tactics are likely already being deployed abroad:
"It's very hard to say if videos from the Israel-Gaza conflict are real or AI-generated. I could plant words in Netanyahu's mouth or make up lies about Hamas."
When truth becomes so murky, what hope do we have as citizens?
Fighting Back with AI for Good
Propaganda spreads when people grow complacent. If we want to curb mass manipulation, we have to take responsibility for becoming informed.
I believe we can harness AI to empower truth-seeking rather than deceit. As Joe suggested, we could use algorithms to rigorously fact-check the fact-checkers:
"If you could hold those fact checkers to the truth with AI, it would go right through media and government lies."
While AI has plenty of benevolent uses, it presents risks that can't be ignored. By reflecting on the origins of propaganda and having open discussions, we can build resilience against new forms of deception.
The stakes are high, but humanity must guide technology towards just ends rather than become its pawn. Are you ready to join me in this good fight?