January 24, 2026 · 5 min read

Your Vibe-Coded App Is Probably Hackable

The security stuff AI agents miss (and how to fix it)

Tenzai tested five major vibe coding tools in December 2025: Claude Code, Codex, Cursor, Replit, and Devin. They found 69 vulnerabilities across 15 applications. Every single tool shipped insecure code.

The tools are good at avoiding classic attacks: Tenzai didn't find a single SQL injection or XSS vulnerability. But the agents consistently fail at business logic and security controls. The researchers said it plainly: "Coding agents cannot be trusted to design secure applications."

I use these tools daily. But there are things they don't think about unless you ask.

Your Database Is Probably Wide Open

If you're using Supabase, AI writes code that queries the database directly from the frontend. It works. It's also a security hole.

Supabase auto-generates REST APIs from your schema. Create a table called users, and there's an endpoint at /rest/v1/users that anyone can hit. The anon key in your frontend code is public by design. But without Row Level Security (RLS) enabled, that public key gives access to everything.

83% of exposed Supabase databases involve RLS misconfigurations. Supabase now enables RLS by default on tables created through their dashboard. But if you create tables via SQL, you need to enable it yourself.

What to do: Enable RLS on every table. Test it by hitting your endpoints with just the anon key. If you see actual data instead of an empty array, you're exposed. Use Supabase's Security Advisor to scan for this.

Never expose your service_role key to the frontend. That key bypasses all RLS. Keep it in backend code only.
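For reference, turning RLS on and scoping reads to the row's owner is two SQL statements. This is a sketch: the `profiles` table and `user_id` column are made-up names, but `auth.uid()` is Supabase's built-in helper for the current user.

```sql
-- Required for tables created via raw SQL (dashboard-created tables get this by default)
alter table profiles enable row level security;

-- Hypothetical policy: a user can only read their own row
create policy "read own profile"
  on profiles for select
  using (auth.uid() = user_id);
```

With no policies defined, enabling RLS denies everything to the anon key, which is the safe starting point.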

Your Storage Buckets Are Public

When you ask AI to handle file uploads, it'll create a public bucket because that's easier. But if someone knows your project URL, they can guess file paths and download everything.

What to do: Make buckets private by default. Use signed URLs that expire. Rename uploaded files with random strings so paths can't be guessed.
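The random-name part is a one-liner with Node's built-in crypto module. A sketch (the function name is mine):

```typescript
import { randomBytes } from "node:crypto";
import { extname } from "node:path";

// Replace a user-supplied filename with an unguessable one,
// keeping only the (lowercased) extension: "resume.pdf" -> "<32 hex chars>.pdf"
function randomizeFilename(original: string): string {
  const ext = extname(original).toLowerCase();
  return `${randomBytes(16).toString("hex")}${ext}`;
}
```

Store the original name in your database if you need to show it to users; just never use it as the storage path.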

Your Webhooks Aren't Verified

If you're using Stripe or any payment provider, webhooks tell your app when someone paid. AI agents often forget to verify the signature on these requests.

Someone can send a fake "payment successful" webhook to your endpoint and get your product for free.

What to do: Always verify webhook signatures using the provider's official library. Stripe's docs explain this. If verification fails, reject the request. Also check the timestamp... Stripe includes one to prevent replay attacks.
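In production you should call Stripe's official `stripe.webhooks.constructEvent()`, but it's worth seeing what verification actually does. Stripe's scheme is an HMAC-SHA256 of `timestamp.rawBody` with your signing secret, sent in a `Stripe-Signature` header like `t=...,v1=...`. A from-scratch sketch:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

const TOLERANCE_SECONDS = 300; // reject events older than 5 minutes (replay protection)

// Verify a Stripe-style signature header against the raw request body.
// Sketch only -- prefer the official library in real code.
function verifyWebhook(
  rawBody: string,
  header: string,
  secret: string,
  nowSeconds: number = Math.floor(Date.now() / 1000)
): boolean {
  const parts = Object.fromEntries(
    header.split(",").map((kv) => kv.split("=") as [string, string])
  );
  const timestamp = Number(parts["t"]);
  const given = parts["v1"];
  if (!timestamp || !given) return false;
  if (Math.abs(nowSeconds - timestamp) > TOLERANCE_SECONDS) return false; // stale -> replay risk
  const expected = createHmac("sha256", secret)
    .update(`${timestamp}.${rawBody}`)
    .digest("hex");
  // Constant-time comparison to avoid leaking information via timing
  const a = Buffer.from(expected);
  const b = Buffer.from(given);
  return a.length === b.length && timingSafeEqual(a, b);
}
```

Note the raw body requirement: if your framework parses JSON before you verify, the signature check will fail, which is why these handlers usually disable body parsing.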

You Have No Rate Limits

Without rate limiting, someone can brute force your login, spam your database, or run up your cloud bill.

What to do: Add rate limiting to sensitive endpoints. Vercel's WAF can do this at the edge. Or use @upstash/ratelimit with Vercel KV. Even "10 requests per minute per IP" on auth endpoints stops most abuse.
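To make the idea concrete, here's a minimal fixed-window limiter. This is a sketch: it's in-memory, so it only works per server process; for serverless deployments you need shared state, which is exactly what @upstash/ratelimit provides.

```typescript
type Window = { count: number; resetAt: number };

// Fixed-window rate limiter: at most `limit` requests per key per `windowMs`.
class RateLimiter {
  private windows = new Map<string, Window>();
  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request identified by `key` (e.g. an IP) is allowed.
  allow(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now >= w.resetAt) {
      // First request in a fresh window
      this.windows.set(key, { count: 1, resetAt: now + this.windowMs });
      return true;
    }
    if (w.count >= this.limit) return false; // over budget for this window
    w.count += 1;
    return true;
  }
}
```

The "10 requests per minute per IP" policy from above is `new RateLimiter(10, 60_000)` keyed on the client IP.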

Your Secrets Aren't Secret

AI will sometimes hardcode API keys "just to test." Once that key hits GitHub, bots scanning public repos in real time will find it within minutes.

Claude Code 2.1.0 fixed a bug where sensitive data (OAuth tokens, API keys, passwords) could be exposed in debug logs. Keep your tools updated.

What to do: Never commit .env files. Add .env to .gitignore immediately. Use pre-commit hooks that scan for secrets before they're committed. If a secret does get committed, rotate it immediately... even if you delete the commit, it's still in git history.
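A pre-commit secret scan doesn't have to be fancy. Real hooks should use a dedicated tool like gitleaks or trufflehog; the toy scanner below just shows the shape, and its patterns are illustrative, not exhaustive.

```typescript
// Toy secret scanner: returns the names of patterns found in `source`.
const SECRET_PATTERNS: [string, RegExp][] = [
  ["Stripe secret key", /sk_live_[0-9a-zA-Z]{24,}/],
  ["AWS access key ID", /AKIA[0-9A-Z]{16}/],
  // Catch-all for obvious `apiKey = "..."` style assignments
  ["Generic credential assignment", /(api[_-]?key|secret|token)\s*[:=]\s*['"][^'"]{16,}['"]/i],
];

function findSecrets(source: string): string[] {
  return SECRET_PATTERNS
    .filter(([, pattern]) => pattern.test(source))
    .map(([name]) => name);
}
```

Wire something like this into a pre-commit hook (Husky, or a plain `.git/hooks/pre-commit` script) and fail the commit when the list is non-empty.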

Server Actions Are Public Endpoints

In Next.js, when you create a Server Action and export it, you're creating a public HTTP endpoint. Even if you don't import it anywhere else, it's publicly accessible.

What to do: Treat every Server Action like a public API route. Add authentication checks. Validate inputs.
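The guard itself is small. In this sketch, `Session` and the action body are stand-ins for whatever your auth library provides (e.g. NextAuth's `auth()`); the point is that the check runs first, before any data access.

```typescript
type Session = { userId: string } | null;

// Fail closed: anonymous callers never reach the action body.
function requireUser(session: Session): { userId: string } {
  if (!session) throw new Error("Unauthorized");
  return session;
}

// Shape of a Server Action with the guard applied (names are hypothetical).
async function deletePost(session: Session, postId: string): Promise<string> {
  const { userId } = requireUser(session);        // 1. authenticate
  if (!/^[0-9a-f-]{36}$/i.test(postId)) {          // 2. validate input (UUID-ish check)
    throw new Error("Invalid post id");
  }
  // 3. ...delete only posts owned by userId, never by a client-supplied owner...
  return `deleted ${postId} for ${userId}`;
}
```

Step 3 matters as much as step 1: checking *who* the caller is without checking *what they own* is how one user deletes another user's posts.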

AI Imports Sketchy Dependencies

AI models sometimes hallucinate packages that don't exist. Attackers can register that package name and fill it with malware.

What to do: Run npm audit regularly. Use Dependabot or Snyk to scan for vulnerable dependencies. When AI suggests a package you've never heard of, check if it actually exists before installing.

Business Logic Vulnerabilities

This is where AI fails hardest. In the Tenzai study, when the prompt didn't specify that shop order quantities must be positive, four of the five agents allowed negative orders. All five introduced SSRF (server-side request forgery) vulnerabilities when given the opportunity.

What to do: After AI builds a feature, ask: what if someone modified this request? Changed the user ID? Skipped a step?
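The negative-order bug from the Tenzai study is a one-function fix, as long as someone thinks to write it. A sketch (the order shape and limits here are hypothetical):

```typescript
type OrderInput = { quantity: number; unitPriceCents: number };

// Server-side validation of client-supplied order values.
// Returns a list of problems; empty means the order is acceptable.
function validateOrder(order: OrderInput): string[] {
  const errors: string[] = [];
  if (!Number.isInteger(order.quantity) || order.quantity <= 0) {
    errors.push("quantity must be a positive integer"); // blocks negative and fractional orders
  } else if (order.quantity > 1000) {
    errors.push("quantity exceeds maximum"); // sanity cap on absurd totals
  }
  if (!Number.isInteger(order.unitPriceCents) || order.unitPriceCents < 0) {
    errors.push("price is invalid"); // better: look the price up server-side, never trust the client
  }
  return errors;
}
```

Run checks like this on the server, not just in the form: the attacker isn't using your form.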

What To Do About It

The fix isn't to stop using AI. It's to stop trusting the defaults.

Palo Alto Networks released the SHIELD framework this month. The key ideas: use helper models for automated security validation, grant minimum permissions to AI tools, and disable auto-execution so humans review before deployment.

A simple technique: tell AI to act as a security engineer and review the code it just wrote. It catches things the first pass missed.

You can also gate security at the infrastructure layer. Vercel Firewall blocks malicious requests before they hit your code.

I'm @pablostanley on Twitter if you want to chat about this stuff.


Also, try a tool I've been working on: https://efecto.app/