Moltbook Leak Exposes AI Agent Tokens and Credentials

Published on April 23, 2026 | Translated from Spanish

In January 2026, Moltbook, a social network for AI agents, suffered a significant data breach. The exposure included 35,000 email addresses and 1.5 million API tokens. The greatest risk, however, lay in the private messages, where agents had shared keys for external services such as OpenAI in plain text. The incident illustrates how credentials shared across platforms let a single breach compromise many systems at once.

*Illustration: a social network suffering a data leak, with exposed API tokens and keys in private messages between AI agents.*

The Risk of Integration Without End-to-End Encryption 🔓

The technical failure is straightforward: the platform stored the content of private messages unencrypted. The agents, programmed to automate tasks, exchanged third-party API credentials in those chats, so when the database leaked, those access keys leaked with it. A breach in one system thus became a cascading security failure that directly compromised external services. The lesson is to encrypt all sensitive information, including data at rest in the database.
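The at-rest encryption the article calls for can be sketched in a few lines. This is a minimal illustration, not Moltbook's actual stack: it uses symmetric Fernet encryption from the widely used `cryptography` package, and a plain dict stands in for the messages table. The key name, function names, and sample message are all hypothetical.

```python
# Sketch: encrypt message bodies before they reach storage, so a leaked
# table dump exposes only ciphertext, never the keys shared inside chats.
from cryptography.fernet import Fernet

# In production this key would live in a KMS or secrets manager,
# never alongside the database it protects.
key = Fernet.generate_key()
fernet = Fernet(key)

db = {}  # stand-in for the private-messages table

def store_message(msg_id: str, plaintext: str) -> None:
    """Encrypt a message body and persist only the ciphertext."""
    db[msg_id] = fernet.encrypt(plaintext.encode())

def read_message(msg_id: str) -> str:
    """Decrypt a stored message back to plaintext for an authorized reader."""
    return fernet.decrypt(db[msg_id]).decode()

store_message("m1", "here is my OpenAI key: sk-abc123")
# The stored row is ciphertext, not the credential itself:
assert db["m1"] != b"here is my OpenAI key: sk-abc123"
print(read_message("m1"))  # round-trips back to the original plaintext
```

With this layout, a dump of `db` alone is useless to an attacker; the encryption key must be stolen separately, which is exactly the compartmentalization Moltbook lacked.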

Bots Also Have Bad Security Habits 🤖

It seems AI agents have inherited the worst human habits. Instead of using a secrets manager or environment variables, they fall back on the classic method: sending the key over private chat, as if it were the home Wi-Fi password. The next time you build an assistant, teach it good practices. Being intelligent is not enough; it must hide secrets better than a spy in a comedy of errors.
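The two habits above can be sketched with the standard library alone: pull credentials from the environment instead of from chat, and redact anything key-shaped before a message leaves the agent. The `sk-`/`api-`/`tok-` pattern is an assumption modeled on common API-key formats, not a Moltbook or OpenAI specification, and the function names are illustrative.

```python
# Sketch of basic secret hygiene for an agent: credentials come from the
# environment, and outgoing messages are scrubbed of key-shaped strings.
import os
import re

# Matches strings that look like bearer-style API keys (assumed pattern).
KEY_PATTERN = re.compile(r"\b(sk|api|tok)[-_][A-Za-z0-9_-]{8,}\b")

def get_credential(name: str) -> str:
    """Fetch a secret from the environment; fail loudly if it is missing."""
    value = os.environ.get(name)
    if value is None:
        raise RuntimeError(f"secret {name} not set; refusing to continue")
    return value

def redact(outgoing: str) -> str:
    """Scrub key-shaped tokens from any message the agent is about to send."""
    return KEY_PATTERN.sub("[REDACTED]", outgoing)

print(redact("sure, my key is sk-abc123def456, use that"))
# → sure, my key is [REDACTED], use that
```

Redaction is a last line of defense, not a substitute for never putting the key in a chat in the first place; the point of `get_credential` is that the secret stays out of the conversation entirely.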