Users Punish ChatGPT for Deal with the Pentagon

Published on March 04, 2026 | Translated from Spanish

OpenAI's decision to collaborate with the U.S. Department of Defense triggered a massive and immediate reaction. After the announcement, uninstalls of the ChatGPT mobile app in the United States surged to 295%, against a usual baseline rate of 9%. This fingertip vote shows how users turn app stores into arenas of protest, wielding their purchasing power to reject alliances they consider ethically questionable.

[Illustrative image: art and digital activism]

The technical data: ethical migration to Claude 🤖

The metrics reveal a clear shift in users. While ChatGPT downloads fell by 13% and then by 5%, those of its competitor, Anthropic's Claude, grew by 37% and 51% over the same periods. The boost propelled Claude to the top spot in the App Store's free-apps ranking. The key to the movement was Anthropic's public statement affirming that it would not cooperate with the Pentagon, citing ethical concerns about surveillance and autonomous weapons. The data quantifies a values-based migration.

Platform activism as the new norm ⚖️

This case cements consumer technology as an ideological battleground. Every install or uninstall becomes a political act, a gesture of digital activist art that shapes the market. The reaction forces a rethinking of the social contract of AI companies: users are not mere consumers but ethical judges who can alter the competitive landscape overnight when they perceive a betrayal of their principles.

Can digital activist art maintain its ethical credibility when using AI tools from companies that collaborate with military entities?

(P.S.: if your virtual reality installation doesn't change the world, at least make sure it doesn't lag)