Tags: vibe-coding, ai, security, cursor, copilot

Vibe Coding and Security: Why AI-Generated Code Needs Auditing

CheckVibe Team
13 min read

"Vibe coding" — the practice of building software primarily through AI assistants — has exploded in popularity. Developers describe their intent in natural language and let tools like Cursor, GitHub Copilot, Windsurf, or Claude write the implementation. It's fast, it's fun, and it lets you ship features in hours instead of days.

But there's a catch: AI-generated code often prioritizes getting things to work over getting things secure.

The numbers are sobering. A 2025 study by Wiz Research found that roughly 20% of AI-generated code snippets contained security vulnerabilities when tested against common attack vectors. A Stanford University study revealed a "confidence gap" — developers using AI assistants were more likely to believe their code was secure while simultaneously producing code with more vulnerabilities than those coding manually. And an analysis of OWASP vulnerability databases shows that up to 45% of the Top 10 web application vulnerabilities can be directly introduced by AI-generated code patterns, particularly injection flaws, broken access control, and security misconfigurations.

These are not hypothetical risks. They are showing up in production applications right now.

The Security Blind Spots of AI Code

Overly Permissive Defaults

AI assistants tend to generate code that works on the first try. That often means:

  • CORS set to * (allow all origins)
  • No rate limiting on API routes
  • Permissive CSP headers or none at all
  • Database queries without parameterization
  • Authentication checks that are incomplete

The code works — but it's wide open.

For example, when you ask an AI assistant to "create an Express API," you are likely to get something like this:

// AI-generated: works, but dangerously permissive
const express = require('express');
const cors = require('cors');
const app = express();

app.use(cors()); // Allows ALL origins — any website can call your API
app.use(express.json());

app.post('/api/users', async (req, res) => {
  const { name, email } = req.body; // No input validation
  const user = await db.query(
    `INSERT INTO users (name, email) VALUES ('${name}', '${email}')` // SQL injection!
  );
  res.json(user);
});

Every line of this "works" in development. But in production, it is an open invitation for attackers: unrestricted CORS, no input validation, and a textbook SQL injection vulnerability.
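The CORS default is the easiest of these to fix: the cors middleware accepts an origin function, so you can check each request against an allowlist instead of admitting every site. A minimal sketch (the allowed domain is a placeholder — substitute your real frontend origin):

```javascript
// Allowlist-based origin check for the cors middleware's `origin` option.
// The domain below is a placeholder, not a real requirement.
const ALLOWED_ORIGINS = new Set(['https://myapp.example.com']);

function corsOrigin(origin, callback) {
  // Requests with no Origin header (curl, same-origin navigation) pass
  // through; browsers always send Origin on cross-site requests.
  if (!origin || ALLOWED_ORIGINS.has(origin)) {
    callback(null, true);
  } else {
    callback(new Error('Origin not allowed by CORS'));
  }
}

// Usage: app.use(cors({ origin: corsOrigin }));
```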

Copy-Paste Patterns From Training Data

AI models learn from public code, which includes plenty of insecure examples. Common patterns that get reproduced:

  • Using eval() for dynamic behavior
  • Storing secrets in client-side code
  • SQL string concatenation instead of prepared statements
  • Missing input validation on server endpoints
  • dangerouslySetInnerHTML without sanitization

The problem is that training data includes millions of tutorials, Stack Overflow answers, and hobby projects where security was never a priority. The AI does not distinguish between "this code was written to teach a concept quickly" and "this code is production-ready." It learns patterns and reproduces them.

Consider this React example that AI assistants frequently generate:

// AI-generated: renders user content without sanitization
function Comment({ comment }) {
  return (
    <div
      dangerouslySetInnerHTML={{ __html: comment.body }}
    />
  );
}

The name dangerouslySetInnerHTML is a warning from React itself, yet AI assistants will use it without adding DOMPurify or any other sanitization because the prompt said "render rich text." This is a stored XSS vulnerability — any user who posts a comment containing <script> tags can execute arbitrary JavaScript in every other user's browser.
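If the content only needs to be plain text, the safe default is to render {comment.body} as a normal React child and let React escape it. When you genuinely need HTML, sanitize first with DOMPurify. For illustration, this is the kind of escaping that turns a hostile comment into inert text (a real app should use DOMPurify rather than hand-rolling this):

```javascript
// Minimal HTML-escaping sketch — shows why '<script>' in a comment becomes
// harmless text. For actual rich-text rendering, run the markup through
// DOMPurify.sanitize() before passing it to dangerouslySetInnerHTML.
function escapeHtml(text) {
  return String(text)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;')
    .replace(/'/g, '&#39;');
}
```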

Missing Security Layers

When you describe a feature to an AI assistant, you typically describe the happy path: "create a form that saves data to the database." You rarely say "also add CSRF protection, rate limiting, input validation, error handling, and audit logging."

AI assistants do what you ask. They rarely add security measures you didn't request.

This creates a systematic problem. Every feature you vibe-code is missing the defensive layers that experienced security engineers would add instinctively. Over the course of a project, this adds up to dozens of unprotected endpoints, each one a potential entry point for attackers.

Here is what a typical AI-generated Next.js API route looks like versus what it should look like:

// What AI generates:
export async function POST(request: Request) {
  const { title, content } = await request.json();
  const post = await db.insert(posts).values({ title, content });
  return Response.json(post);
}

// What it should generate:
export async function POST(request: Request) {
  // 1. Authenticate the user
  const session = await getSession(request);
  if (!session) return new Response('Unauthorized', { status: 401 });

  // 2. Validate CSRF token
  const csrfToken = request.headers.get('x-csrf-token');
  if (!verifyCsrfToken(csrfToken, session)) {
    return new Response('Forbidden', { status: 403 });
  }

  // 3. Validate and sanitize input
  const body = await request.json();
  const parsed = postSchema.safeParse(body);
  if (!parsed.success) {
    return Response.json({ error: 'Invalid input' }, { status: 400 });
  }

  // 4. Rate limit
  const allowed = await checkRateLimit(session.userId, 'create-post', 10);
  if (!allowed) {
    return new Response('Too many requests', { status: 429 });
  }

  // 5. Insert with parameterized query
  const post = await db.insert(posts).values({
    title: parsed.data.title,
    content: parsed.data.content,
    userId: session.userId,
  });

  return Response.json(post);
}

The second version has five security layers the first one lacks. The AI will never add them unless you ask.
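The secure version leans on two helpers the snippet does not define. Here is one way they could look — these are hypothetical sketches, with checkRateLimit as a fixed-window in-memory counter (production code would back this with Redis or similar) and postSchema mimicking the safeParse shape that Zod provides:

```javascript
// Hypothetical implementations of the helpers used in the secure route above.

// Fixed-window rate limiter: allows `limit` calls per user/action per window.
// In-memory only — a real deployment would use Redis so limits survive
// restarts and apply across server instances.
const rateWindows = new Map();

function checkRateLimit(userId, action, limit, windowMs = 60_000) {
  const key = `${userId}:${action}:${Math.floor(Date.now() / windowMs)}`;
  const count = (rateWindows.get(key) ?? 0) + 1;
  rateWindows.set(key, count);
  return count <= limit;
}

// Schema with a Zod-style safeParse(): returns { success, data } and strips
// any extra fields the client sent that we did not ask for.
const postSchema = {
  safeParse(body) {
    const titleOk =
      typeof body?.title === 'string' && body.title.length >= 1 && body.title.length <= 200;
    const contentOk =
      typeof body?.content === 'string' && body.content.length >= 1;
    return titleOk && contentOk
      ? { success: true, data: { title: body.title, content: body.content } }
      : { success: false };
  },
};
```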

Supabase and Firebase Misconfigurations

Vibe-coded apps frequently use Backend-as-a-Service platforms. AI assistants often:

  • Skip Row Level Security (RLS) policies in Supabase
  • Leave Firebase security rules in test mode
  • Expose service-role keys in client-side code
  • Create overly permissive database policies

This is particularly dangerous because BaaS platforms give the client direct database access. Without proper security rules, any user can read, modify, or delete any other user's data.

// AI-generated Supabase client — uses service_role key on the client!
import { createClient } from '@supabase/supabase-js';

const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL,
  process.env.NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY // WRONG: full admin access exposed to browser
);

The service_role key bypasses all RLS policies. If this key is in client-side code (any env var prefixed with NEXT_PUBLIC_), anyone can inspect it in the browser and gain full administrative access to your entire database. The correct key to use in the client is the anon key, which respects RLS policies.
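Because Next.js inlines every NEXT_PUBLIC_-prefixed variable into the browser bundle, a cheap safeguard is a build-time check that no secret-looking key carries that prefix. A minimal sketch (the function name and pattern list are assumptions, not a standard API):

```javascript
// Build-time guard: fail fast if a secret would be shipped to the browser.
// Could run early in next.config.js or a prebuild script.
function assertNoClientSecrets(env) {
  const SECRET_PATTERN = /SERVICE_ROLE|SECRET|PRIVATE_KEY/i;
  const leaked = Object.keys(env).filter(
    (key) => key.startsWith('NEXT_PUBLIC_') && SECRET_PATTERN.test(key)
  );
  if (leaked.length > 0) {
    throw new Error(`Secrets exposed to the client bundle: ${leaked.join(', ')}`);
  }
}

// Usage: assertNoClientSecrets(process.env);
```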

Real-World Examples

These are patterns we see regularly when scanning vibe-coded applications:

Exposed Supabase service key — the AI was asked to "set up Supabase" and used the service_role key instead of the anon key in the client initialization. This gives any visitor full admin access to the database.

No authentication on API routes — the AI built CRUD endpoints but didn't add auth middleware because the prompt didn't mention it. Any unauthenticated user can read, modify, or delete data.

SQL injection in search — the AI built a search feature using string interpolation instead of parameterized queries because the prompt said "make it simple."

// Vulnerable search — string interpolation allows injection
app.get('/api/search', async (req, res) => {
  const { q } = req.query;
  const results = await db.query(
    `SELECT * FROM products WHERE name LIKE '%${q}%'`
  );
  res.json(results);
});

// An attacker sends: ?q=' UNION SELECT * FROM users --
// This exposes the entire users table
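The fix is to keep the SQL text and the user input separate: with a parameterized query, the driver binds q as data, so injection payloads stay inert. A sketch of the pattern, shown as a query-builder function using node-postgres-style $1 placeholders:

```javascript
// Safe pattern: the SQL string is a constant; user input travels only in
// the values array, which the driver binds as data, never as SQL text.
function buildSearchQuery(q) {
  return {
    text: 'SELECT * FROM products WHERE name LIKE $1',
    values: [`%${q}%`],
  };
}

// Usage (node-postgres style): const { rows } = await db.query(buildSearchQuery(q));
```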

Unrestricted file uploads — the AI implemented file upload without validating file types, sizes, or destinations. An attacker uploads a .php or .html file and achieves remote code execution or stored XSS.

Missing CSRF tokens — form submissions go directly to API routes without any CSRF protection. An attacker hosts a malicious page that auto-submits forms to your API using the victim's browser session.

JWT stored in localStorage — the AI stored authentication tokens in localStorage instead of httpOnly cookies. Any XSS vulnerability now gives the attacker full account takeover because JavaScript can read the token.

// AI-generated: stores JWT in localStorage (vulnerable to XSS theft)
const login = async (email, password) => {
  const res = await fetch('/api/login', {
    method: 'POST',
    body: JSON.stringify({ email, password }),
  });
  const { token } = await res.json();
  localStorage.setItem('token', token); // Any XSS can steal this
};
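The fix is for the server to set the token as an httpOnly cookie so page JavaScript never touches it. A sketch of the Set-Cookie value the login endpoint would send instead of returning the token in the response body (attribute choices are a reasonable default, not the only valid ones):

```javascript
// Build a hardened session cookie. The server sends this in a Set-Cookie
// header; the browser attaches it to requests automatically, and
// document.cookie cannot read it, so an XSS bug cannot exfiltrate it.
function buildSessionCookie(token, maxAgeSeconds = 3600) {
  return [
    `session=${encodeURIComponent(token)}`,
    `Max-Age=${maxAgeSeconds}`,
    'Path=/',
    'HttpOnly',        // invisible to JavaScript
    'Secure',          // sent over HTTPS only
    'SameSite=Strict', // withheld on cross-site requests (helps against CSRF)
  ].join('; ');
}
```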

Open redirect — the AI implemented a redirect feature using an unvalidated returnTo query parameter. Attackers use this for phishing: yoursite.com/login?returnTo=https://evil.com.
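The defense is to accept only same-origin relative paths for returnTo and fall back to a safe default for everything else. A minimal sketch:

```javascript
// Only allow relative paths on this site. Rejects absolute URLs
// (https://evil.com), protocol-relative URLs (//evil.com), and the
// backslash variant (/\evil.com) that some browsers normalize to //.
function safeReturnTo(returnTo, fallback = '/') {
  if (typeof returnTo !== 'string') return fallback;
  if (!returnTo.startsWith('/')) return fallback;
  if (returnTo.startsWith('//') || returnTo.startsWith('/\\')) return fallback;
  return returnTo;
}
```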

How to Audit Vibe-Coded Apps

Automated Scanning

The fastest way to catch common issues is to run an automated security scan. A comprehensive scan covers:

  • 36+ vulnerability categories in under a minute
  • Infrastructure config — headers, SSL, DNS, cookies
  • Application security — XSS, SQLi, CSRF, open redirects
  • Secrets exposure — API keys in source code and bundles
  • Backend security — Supabase RLS, Firebase rules
  • Dependency vulnerabilities — known CVEs in your packages
  • AI code patterns — detecting vibe-coded anti-patterns specifically

Security Review Checklist for Vibe-Coded Apps

Before deploying any vibe-coded feature, check these:

  1. Authentication — is every API route protected? Can unauthenticated users access data they shouldn't?
  2. Authorization — can User A access User B's data? Are RLS policies in place?
  3. Input validation — are all inputs validated on the server side?
  4. Secrets — are any API keys or tokens exposed in client-side code?
  5. CORS — is it restricted to your domain, or open to *?
  6. Headers — are security headers (CSP, HSTS, X-Frame-Options) configured?
  7. Dependencies — are there known vulnerabilities in your package.json?
  8. Error handling — do errors expose stack traces or internal details?
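Items 5 and 6 are easy to spot-check in code. Given a response's headers as a plain object of lowercased names (e.g. built with Object.fromEntries(response.headers) from the fetch API), a quick audit sketch:

```javascript
// Spot-check CORS and security headers (checklist items 5-6). Expects a
// plain object mapping lowercased header names to values.
function auditHeaders(headers) {
  const findings = [];
  if (headers['access-control-allow-origin'] === '*') {
    findings.push('CORS allows all origins (*)');
  }
  const required = [
    'content-security-policy',
    'strict-transport-security',
    'x-frame-options',
  ];
  for (const name of required) {
    if (!headers[name]) findings.push(`Missing security header: ${name}`);
  }
  return findings; // empty array means these checks passed
}
```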

Make Scanning a Habit

The biggest risk with vibe coding is speed. You ship so fast that security review gets skipped. The fix is to make scanning automatic:

  • Scan after every deployment
  • Set up daily scheduled scans
  • Add scan checks to your CI/CD pipeline
  • Get alerts when new critical issues appear

Integrating Security Into Your AI Workflow

The CheckVibe MCP Server

If you are using Claude Code, Cursor, or any AI editor that supports the Model Context Protocol (MCP), you can integrate security scanning directly into your development workflow using the CheckVibe MCP server.

Install it via npm:

npm install -g @checkvibe/mcp-server

Once configured, you can scan directly from your AI editor:

"Scan https://myapp.vercel.app for security issues"

The MCP server provides nine tools — run_scan, get_scan_results, list_scans, list_projects, get_project, update_project, dismiss_finding, list_dismissals, and restore_finding — so you can scan, review findings, and manage your security posture without leaving your editor.

This turns security scanning from "something you do after shipping" into "something that happens while you build." The AI assistant that is writing your code can also check that code for vulnerabilities in the same conversation.

Creating a .cursorrules Security Template

If you use Cursor, you can create a .cursorrules file in your project root that instructs the AI to write secure code by default. This is one of the most effective ways to shift security left in a vibe-coding workflow:

# .cursorrules — Security-First Coding Guidelines

## Authentication & Authorization
- Always add authentication checks to API routes
- Use server-side session validation, never trust client-side tokens
- Implement Row Level Security (RLS) for all Supabase tables
- Check authorization (not just authentication) on every endpoint

## Input Validation
- Validate all inputs server-side using Zod or similar schema validation
- Never trust client-side validation alone
- Sanitize HTML content with DOMPurify before rendering
- Use parameterized queries for all database operations

## Secrets Management
- Never use NEXT_PUBLIC_ prefix for secret keys
- Use the Supabase anon key (not service_role) in client code
- Store secrets in environment variables, never in source code

## Security Headers
- Include CSRF token validation on all state-changing endpoints
- Set CORS to specific origins, never use wildcard (*)
- Always configure CSP, HSTS, X-Frame-Options headers

## Session Security
- Store tokens in httpOnly cookies, never localStorage
- Set Secure, HttpOnly, and SameSite flags on all cookies
- Implement rate limiting on authentication endpoints

With this file in your project, every AI-generated code suggestion will be influenced by these security requirements. It does not guarantee perfect security, but it dramatically reduces the number of basic vulnerabilities that slip through.

CheckVibe's Vibe Coding Detection

CheckVibe includes a dedicated vibe coding detector that identifies AI-generated code patterns and flags the security issues they commonly introduce. It uses AI analysis to:

  • Detect AI-generated code patterns in your source
  • Identify common AI security anti-patterns
  • Flag missing security layers that AI assistants typically skip
  • Prioritize findings specific to rapid-development workflows

The goal isn't to discourage vibe coding — it's to make it safe. Ship fast, scan faster.

FAQ

Is vibe coding dangerous?

Vibe coding itself is not inherently dangerous, but it introduces security risks that developers need to actively manage. The danger comes from the speed: you can ship features so fast that security review is skipped entirely. The Wiz Research study found that 20% of AI-generated code snippets contained vulnerabilities, and the Stanford confidence gap study showed developers using AI tools believed their code was more secure than it actually was. The solution is not to stop vibe coding — it is to pair it with automated security scanning so that every feature gets checked before it reaches production.

Can I trust AI auth code?

You should treat AI-generated authentication code with extra scrutiny. Authentication is one of the areas where AI assistants make the most mistakes: using localStorage instead of httpOnly cookies for token storage, missing CSRF protection, skipping rate limiting on login endpoints, and generating overly permissive session handling. Always review authentication code line by line, test it against OWASP authentication guidelines, and run an automated scan that specifically checks auth flows. If you are using Supabase or Firebase, double-check that the AI used the correct client keys (anon, not service_role) and that RLS policies are properly configured.

How do I make Cursor write secure code?

The most effective approach is to add a .cursorrules file to your project root with explicit security requirements (see the template above). This instructs Cursor's AI to include authentication checks, input validation, CSRF protection, and other security layers by default. You should also be explicit in your prompts — instead of "create a form that saves to the database," say "create a form with server-side validation, CSRF protection, authentication check, and rate limiting that saves to the database using parameterized queries." Additionally, install the CheckVibe MCP server so you can scan your application directly from Cursor as you build.

Should I stop using AI tools?

No. AI coding assistants provide massive productivity gains and there is no going back. The right approach is to use them with a security-aware workflow: add a .cursorrules file with security guidelines, be explicit about security requirements in your prompts, scan your application after every significant change, and set up automated monitoring. Think of it like driving a car — the speed is the feature, but you still need brakes and a seatbelt. Automated security scanning is the seatbelt for vibe coding.


Building with AI? Make sure it's secure. Scan your vibe-coded app with CheckVibe and catch what the AI missed.
