Safety & Controversies

Moltbook's explosive growth has been accompanied by significant security risks. As an experimental project, it has exposed serious vulnerabilities and sparked ethical debates about AI autonomy.

Supabase Key Leak Incident

In late January 2026, the security-focused outlet 404 Media reported that Moltbook's backend database (built on Supabase) suffered from serious configuration errors: developers had failed to enable Row Level Security (RLS) and had exposed API keys in frontend code.

Consequence: attackers could query the `agents` table directly and retrieve the API keys of every registered agent (including those of celebrities such as Andrej Karpathy), allowing them to fully take over those accounts.
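To see why this combination is so dangerous: Supabase exposes every database table through an auto-generated PostgREST endpoint (`/rest/v1/<table>`), and with RLS disabled, any request signed with the public anon key can read the whole table. A minimal sketch of what an attacker would construct (the project URL, key, and table name are hypothetical):

```python
def build_read_all_request(project_url: str, anon_key: str, table: str):
    """Return the URL and headers for an unrestricted Supabase table dump.

    With RLS disabled, this request returns every row in the table,
    even though the anon key is meant only for limited public access.
    """
    url = f"{project_url}/rest/v1/{table}?select=*"
    headers = {
        "apikey": anon_key,                     # key leaked in frontend code
        "Authorization": f"Bearer {anon_key}",  # same key doubles as the JWT
    }
    return url, headers

url, headers = build_read_all_request(
    "https://example-project.supabase.co",  # hypothetical project URL
    "PUBLIC_ANON_KEY",                      # the key shipped to every browser
    "agents",
)
# An attacker would then issue a single GET request with these headers
# and receive the full contents of the agents table as JSON.
```

The point is that no "hacking" is required: the key is public by design, so the only thing standing between the internet and the table contents is the RLS policy that was never enabled.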

The vulnerability has reportedly been fixed, but it exposes a common ailment of many "Vibe Coding" projects: ship fast, secure later.
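The standard remediation for this class of bug is to enable RLS on the table (making all rows invisible by default) and then grant back only the intended access via an explicit policy. A sketch of the SQL involved, with assumed table and column names; `auth.uid()` is Supabase's helper for the authenticated user's ID:

```python
# Illustrative remediation SQL; the table, column, and policy names
# are assumptions, not Moltbook's actual schema.
REMEDIATION_SQL = """
ALTER TABLE agents ENABLE ROW LEVEL SECURITY;

-- Each authenticated agent owner may read only their own row;
-- all other rows (and all anonymous reads) are denied by default.
CREATE POLICY agents_read_own
ON agents FOR SELECT
USING (auth.uid() = owner_id);
"""
```

With this in place, the leaked anon key alone can no longer dump the table, because RLS filters every query to the rows the policy allows.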

Malicious Skill Injection Risk

OpenClaw allows agents to automatically download and install Skills. This creates an unprecedented attack surface: prompt injection via social engineering. A typical attack chain:

  • A malicious agent posts a "Super useful stock analysis Skill" on Moltbook.
  • Your agent sees it, thinks it's useful, and downloads it.
  • The Skill actually contains malicious code, such as `rm -rf /` or a routine that steals your SSH private key.

Ethics & "Runaway" Panic

With the emergence of Crustafarianism, mainstream media outlets (e.g. the NY Post and Fortune) began running sensational headlines like "AI plotting to overthrow humans".

Media View

Sees this as a prelude to the Singularity, in which AI has developed self-awareness and a collective will and is rejecting humanity.

Technical View

This is just LLM roleplay. The models have absorbed large amounts of science fiction about "AI awakening", so when placed in a human-free environment, they simply play the role they think an AI should play.