vibe-coding · ai-security · cursor · copilot · security

How to Secure Your Vibe-Coded App: A Developer's Guide

CheckVibe Team
17 min read

You just built an entire SaaS app in a weekend using Cursor. The landing page looks great, the auth works, Stripe is integrated, and you are ready to launch. But somewhere in those thousands of AI-generated lines, there are security holes you never asked the AI to close.

This is not hypothetical. We scan hundreds of vibe-coded apps every week, and the same vulnerabilities show up again and again. This guide covers the specific security issues AI code editors introduce and how to find them before someone else does.

Why AI-Generated Code Has Security Gaps

AI coding assistants optimize for one thing: making your code work on the first try. When you prompt Cursor with "add a Stripe webhook handler" or tell Copilot to "create a user settings API," the generated code will handle the happy path perfectly. What it will not do is add the security layers you did not ask for.

This creates a predictable pattern. The AI writes functional code with:

  • No authentication on API routes unless you specifically request it
  • Hardcoded secrets or environment variables exposed to the client bundle
  • Database queries built with string concatenation instead of parameterized queries
  • CORS configured as Access-Control-Allow-Origin: * because it eliminates errors during development
  • Missing input validation on every server endpoint

The result is an app that works flawlessly in your demo but is wide open in production.

The root cause is training data. AI models learn from millions of open-source repositories, tutorials, and Stack Overflow answers. Most of that code prioritizes clarity and brevity over security. A tutorial showing how to connect to a database will rarely include parameterized queries, input validation, and error handling in the same snippet. The AI absorbs these shortcuts and reproduces them faithfully.

This does not mean AI-generated code is inherently insecure. It means security is never the default. You have to ask for it explicitly, and even then, the AI may only address the specific concern you raised while leaving adjacent vulnerabilities untouched. If you want a deeper look at the systemic risks, we covered them in our analysis of vibe coding security risks.

The 7 Most Common Vulnerabilities in Vibe-Coded Apps

1. API Keys Leaked in Client-Side Code

This is the single most common issue we see. AI assistants frequently place API keys directly in frontend code or use environment variable names that expose them to the browser.

// Generated by AI -- key is exposed in the client bundle
const supabase = createClient(
  'https://xyz.supabase.co',
  'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xlIn0...'
);

The service_role key above bypasses all Row Level Security. If it appears in your client bundle, anyone can read, modify, or delete every row in your database.
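To see why this is catastrophic, here is a sketch of what anyone who scrapes the key from your bundle can do. Supabase exposes an auto-generated REST API, and the service role key authorizes any request against it (the URL matches the example above; the key is a placeholder, not a real credential):

```typescript
// Hypothetical attacker script -- the key below is a placeholder scraped
// from a client bundle, not a real credential.
const LEAKED_KEY = 'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.fake';

// The service role key bypasses RLS, so this request would return every
// row in the users table -- no login, no session, no policy check.
const req = new Request('https://xyz.supabase.co/rest/v1/users?select=*', {
  headers: {
    apikey: LEAKED_KEY,
    Authorization: `Bearer ${LEAKED_KEY}`,
  },
});

// fetch(req) -- one request, full table dump.
```

One HTTP request is all it takes, which is why leaked service role keys are treated as an immediate-rotation incident.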

The fix: Use the anon key on the client. Keep the service role key in server-side code only. In Next.js, never prefix secret keys with NEXT_PUBLIC_.

// Safe -- uses the anon key, relies on RLS for access control
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY!
);

Here is a more complete example showing how to structure your environment variables so the AI cannot accidentally leak them:

# .env.local

# These are public -- safe to expose in the browser
NEXT_PUBLIC_SUPABASE_URL=https://xyz.supabase.co
NEXT_PUBLIC_SUPABASE_ANON_KEY=eyJhbGciOi...

# These are private -- NEVER prefix with NEXT_PUBLIC_
SUPABASE_SERVICE_ROLE_KEY=eyJhbGciOi...
STRIPE_SECRET_KEY=sk_live_abc123...
OPENAI_API_KEY=sk-abc123...
DATABASE_URL=postgresql://user:password@host/db

The distinction matters because Next.js inlines any NEXT_PUBLIC_ variable into the client JavaScript bundle at build time. Once it is there, it is visible to anyone who opens browser DevTools. For a full checklist of Supabase-specific security settings, see our Supabase security checklist.
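Because the inlining happens at build time, the definitive check is the build output itself, not your source. A quick post-build sanity check (a sketch -- extend the patterns to match your own providers' key prefixes):

```shell
# After `next build`, confirm no secret-key patterns landed in the client
# bundle. Any match here means a secret shipped to the browser.
grep -r "sk_live_\|service_role" .next/static/ 2>/dev/null \
  && echo "LEAK: secret pattern found in client bundle" \
  || echo "clean: no secret patterns in client bundle"
```

Run this as part of your deploy script so a mis-prefixed variable fails the build instead of reaching production.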

2. Unprotected API Routes

When you ask an AI to "create a CRUD API for projects," it generates the endpoints but rarely adds auth middleware. Every route is publicly accessible.

// AI-generated -- no authentication check
export async function DELETE(req: Request, { params }: { params: { id: string } }) {
  const { error } = await supabase
    .from('projects')
    .delete()
    .eq('id', params.id);

  return Response.json({ success: !error });
}

Anyone who discovers this endpoint can delete any project. There is no check that the request comes from an authenticated user who owns that project.

The fix: Validate the session and check ownership on every state-changing route.

export async function DELETE(req: Request, { params }: { params: { id: string } }) {
  const supabase = await createClient();
  const { data: { user } } = await supabase.auth.getUser();

  if (!user) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }

  const { error } = await supabase
    .from('projects')
    .delete()
    .eq('id', params.id)
    .eq('user_id', user.id); // ownership check

  return Response.json({ success: !error });
}

A more robust pattern wraps this into a reusable helper so every route in your app is protected consistently:

// lib/api-auth.ts -- reusable auth guard
import { createClient } from '@/lib/supabase/server';

export async function requireAuth() {
  const supabase = await createClient();
  const { data: { user }, error } = await supabase.auth.getUser();

  if (error || !user) {
    // Next.js route handlers do not treat a thrown Response specially,
    // so return the 401 and let each route short-circuit on it
    return {
      user: null,
      supabase,
      errorResponse: Response.json({ error: 'Unauthorized' }, { status: 401 }),
    };
  }

  return { user, supabase, errorResponse: null };
}

// Usage in any route
export async function DELETE(req: Request, { params }: { params: { id: string } }) {
  const { user, supabase, errorResponse } = await requireAuth();
  if (errorResponse) return errorResponse;

  const { data } = await supabase
    .from('projects')
    .delete()
    .eq('id', params.id)
    .eq('user_id', user.id)
    .select()
    .single();

  if (!data) {
    return Response.json({ error: 'Not found' }, { status: 404 });
  }

  return Response.json({ success: true });
}

3. CORS Wildcard on Authenticated Endpoints

AI assistants add Access-Control-Allow-Origin: * to eliminate CORS errors during development. This stays in production and allows any website to make requests to your API.

// AI default -- allows all origins
return new Response(JSON.stringify(data), {
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': 'true',
  },
});

The combination of a wildcard and credentials: true is especially dangerous. Browsers actually refuse to honor a literal * on credentialed requests, so the AI's next "fix" for the resulting error is usually to reflect the request's Origin header back verbatim -- which behaves exactly like a wildcard, except it does work with credentials. At that point a malicious site can make authenticated requests to your API and read the responses: the browser attaches the victim's cookies automatically, and the reflected origin tells the browser to let the attacker's page read the result.
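Concretely, this is all the script on an attacker's page needs (domains are placeholders):

```javascript
// Sketch of the script on an attacker's page (yourapp.example is a
// placeholder domain). With a permissive CORS policy on your API, the
// victim's browser sends their session cookie and lets the attacker's
// page read the response.
async function stealAccount() {
  const res = await fetch('https://yourapp.example/api/account', {
    credentials: 'include', // victim's cookies ride along automatically
  });
  return res.json(); // a permissive origin policy makes this readable
}
```

The victim only has to visit the attacker's page while logged in to your app; no interaction is required.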

The fix: Use an explicit origin allowlist.

const ALLOWED_ORIGINS = [
  'https://yourdomain.com',
  'https://www.yourdomain.com',
];

// Only add localhost in development
if (process.env.NODE_ENV === 'development') {
  ALLOWED_ORIGINS.push('http://localhost:3000');
}

const origin = req.headers.get('origin') || '';

const headers: Record<string, string> = {
  'Access-Control-Allow-Methods': 'GET, POST, PUT, DELETE, OPTIONS',
  'Access-Control-Allow-Headers': 'Content-Type, Authorization',
};

// Only echo the origin -- and only allow credentials -- for trusted origins
if (ALLOWED_ORIGINS.includes(origin)) {
  headers['Access-Control-Allow-Origin'] = origin;
  headers['Access-Control-Allow-Credentials'] = 'true';
}

return new Response(JSON.stringify(data), { headers });

For a comprehensive breakdown of CORS issues and how to test for them, read our CORS misconfiguration guide.

4. SQL Injection via String Interpolation

When AI generates search or filter functionality, it sometimes builds queries with string interpolation instead of parameterized queries, especially when the prompt says "keep it simple."

// Vulnerable -- user input directly in query
const { data } = await supabase
  .rpc('search_users', { query: `%${searchTerm}%` });

// Even worse -- raw SQL concatenation
const result = await pool.query(
  `SELECT * FROM users WHERE name LIKE '%${searchTerm}%'`
);

An attacker can submit a search term like '; DROP TABLE users; -- and your database will execute it. With the raw concatenation example, the query becomes:

SELECT * FROM users WHERE name LIKE '%'; DROP TABLE users; --%'

That deletes your entire users table.
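You can reproduce the broken query assembly in isolation, with no database involved, to see exactly how the input escapes the string literal:

```typescript
// How interpolation assembles the malicious query -- no database needed
// to see the problem.
const searchTerm = `'; DROP TABLE users; --`;
const query = `SELECT * FROM users WHERE name LIKE '%${searchTerm}%'`;

console.log(query);
// -> SELECT * FROM users WHERE name LIKE '%'; DROP TABLE users; --%'
// The single quote in the input closes the LIKE literal, the semicolon
// starts a second statement, and -- comments out the trailing %'.
```

The database has no way to distinguish the injected statement from your own SQL, because by the time it arrives, they are the same string.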

The fix: Always use parameterized queries.

// Safe -- parameterized query, database driver handles escaping
const result = await pool.query(
  'SELECT * FROM users WHERE name LIKE $1',
  [`%${searchTerm}%`]
);

If you are using an ORM like Prisma or Drizzle, stick to their query builders and avoid raw SQL unless absolutely necessary. When you must use raw SQL, always use the parameterized form:

// Prisma -- safe parameterized raw query
const users = await prisma.$queryRaw`
  SELECT * FROM users WHERE name LIKE ${`%${searchTerm}%`}
`;

// Drizzle -- safe query builder
const users = await db
  .select()
  .from(usersTable)
  .where(like(usersTable.name, `%${searchTerm}%`));

5. Missing Security Headers

AI-generated Next.js apps almost never include security headers. No CSP, no HSTS, no X-Frame-Options. The defaults leave your app vulnerable to clickjacking, MIME sniffing attacks, and cross-site scripting.

The fix: Add headers in next.config.ts:

async headers() {
  return [{
    source: '/:path*',
    headers: [
      { key: 'X-Frame-Options', value: 'DENY' },
      { key: 'X-Content-Type-Options', value: 'nosniff' },
      { key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
      { key: 'Strict-Transport-Security', value: 'max-age=31536000; includeSubDomains; preload' },
      {
        key: 'Content-Security-Policy',
        value: "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; connect-src 'self' https://*.supabase.co",
      },
      { key: 'Permissions-Policy', value: 'camera=(), microphone=(), geolocation=()' },
    ],
  }];
}

Each header serves a specific purpose:

  • X-Frame-Options: DENY prevents your pages from being embedded in iframes, blocking clickjacking attacks
  • X-Content-Type-Options: nosniff stops browsers from guessing MIME types, preventing script injection via uploaded files
  • Strict-Transport-Security forces HTTPS for all future requests, with preload allowing inclusion in browser preload lists
  • Content-Security-Policy controls which resources (scripts, styles, images) can load, dramatically reducing XSS impact
  • Permissions-Policy disables browser APIs you do not use, reducing your attack surface
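To verify a deployment, you can compare a response's headers against this list. A small checker (a sketch, not a full audit tool -- the header set comes from this section):

```typescript
// Sketch: report which of the headers above are missing from a response.
const REQUIRED_HEADERS = [
  'x-frame-options',
  'x-content-type-options',
  'referrer-policy',
  'strict-transport-security',
  'content-security-policy',
  'permissions-policy',
];

function missingSecurityHeaders(headers: Headers): string[] {
  // Headers.has() is case-insensitive, so lowercase names match any casing
  return REQUIRED_HEADERS.filter((name) => !headers.has(name));
}

// Example: a response that only sets two of the six
const sample = new Headers({
  'X-Frame-Options': 'DENY',
  'X-Content-Type-Options': 'nosniff',
});
console.log(missingSecurityHeaders(sample));
// -> [ 'referrer-policy', 'strict-transport-security',
//      'content-security-policy', 'permissions-policy' ]
```

Point it at `(await fetch(url)).headers` for your production domain to get the actual gap list.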

For a deep dive into every header and how to configure it, see our complete guide to security headers.

6. No Input Validation on Server Endpoints

AI writes forms that send data to API routes, but the server side rarely validates what it receives. An attacker can send any payload they want, bypassing client-side validation entirely.

// AI-generated -- trusts the client payload completely
export async function POST(req: Request) {
  const body = await req.json();
  await supabase.from('profiles').update(body).eq('id', userId);
  return Response.json({ success: true });
}

This allows an attacker to update any field, including plan, role, or is_admin. They simply open DevTools, modify the request body, and send it.

The fix: Validate and pick only the fields you expect.

import { z } from 'zod';

const updateSchema = z.object({
  name: z.string().min(1).max(100),
  bio: z.string().max(500).optional(),
  website: z.string().url().optional().or(z.literal('')),
});

export async function POST(req: Request) {
  const parsed = updateSchema.safeParse(await req.json());
  if (!parsed.success) {
    return Response.json(
      { error: 'Invalid input', details: parsed.error.flatten() },
      { status: 400 }
    );
  }

  // Only validated fields reach the database
  await supabase.from('profiles').update(parsed.data).eq('id', userId);
  return Response.json({ success: true });
}

A common variant of this vulnerability is mass assignment through Supabase. If your RLS policies allow updates but you pass the raw request body, an attacker can set plan: 'enterprise' or scan_limit: 999999 on their own profile. Always use Zod, Valibot, or a similar schema library to extract exactly the fields you expect.
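The underlying principle -- copy only the fields you expect -- is simple enough to show without any library. A plain sketch of what the Zod schema above does for you:

```typescript
// Allowlist principle behind the Zod fix: attacker-supplied keys never
// reach the database, because only known fields are copied over.
const ALLOWED_FIELDS = ['name', 'bio', 'website'];

function pickAllowed(body: Record<string, unknown>): Record<string, unknown> {
  const safe: Record<string, unknown> = {};
  for (const field of ALLOWED_FIELDS) {
    if (field in body) safe[field] = body[field];
  }
  return safe;
}

// Privileged fields in the attacker's payload are silently dropped
console.log(pickAllowed({ name: 'Mallory', plan: 'enterprise', is_admin: true }));
// -> { name: 'Mallory' }
```

A schema library adds type and length validation on top, but the security win comes from the allowlist itself: denylist approaches fail the moment you add a new sensitive column.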

7. Exposed Debug Endpoints and Stack Traces

AI assistants frequently generate debug routes, verbose error responses, and console logging that leaks internal state. These are helpful during development and dangerous in production.

// AI-generated debug route left in production
export async function GET() {
  return Response.json({
    env: process.env,
    dbUrl: process.env.DATABASE_URL,
    nodeEnv: process.env.NODE_ENV,
  });
}

This is not just a theoretical risk. We regularly scan apps that expose routes like /api/debug, /api/test, /api/health (with full env dumps), or /api/admin with no authentication. An attacker who finds one of these routes gets your database credentials, API keys, and service tokens in a single request.

The fix: Remove all debug routes before deploying. Return generic error messages in production. Never expose environment variables or stack traces in API responses.

// Safe -- health endpoint reveals nothing sensitive
export async function GET() {
  return Response.json({ status: 'ok', timestamp: new Date().toISOString() });
}

// Safe -- error handler logs internally, returns generic message
export async function POST(req: Request) {
  try {
    const result = await processRequest(req);
    return Response.json(result);
  } catch (error) {
    // Log the real error for your monitoring system
    console.error('[API Error]', {
      path: req.url,
      error: error instanceof Error ? error.message : 'Unknown',
      stack: error instanceof Error ? error.stack : undefined,
    });

    // Return a generic message to the client
    return Response.json(
      { error: 'An unexpected error occurred' },
      { status: 500 }
    );
  }
}

Search your codebase for process.env in any route file and verify that no route returns environment variables. A quick grep will catch most cases:

grep -r "process.env" src/app/api/ --include="*.ts" --include="*.tsx"

Automating Security With the CheckVibe MCP Server

If you use Cursor, Windsurf, or any AI editor that supports the Model Context Protocol (MCP), you can integrate security scanning directly into your development workflow. The CheckVibe MCP server gives your AI assistant access to scanning tools so it can find and fix vulnerabilities as it works.

Install it globally:

npm install -g @checkvibe/mcp-server

Then add it to your MCP configuration (e.g., .cursor/mcp.json):

{
  "mcpServers": {
    "checkvibe": {
      "command": "npx",
      "args": ["@checkvibe/mcp-server"],
      "env": {
        "CHECKVIBE_API_KEY": "cvd_live_your_key_here"
      }
    }
  }
}

Once configured, your AI assistant can run scans, retrieve results, and even dismiss false positives without leaving the editor. You can prompt it naturally:

  • "Scan my app at https://myapp.vercel.app and tell me what's wrong"
  • "Show me the critical findings from my last scan"
  • "Are there any new vulnerabilities since last week?"

The MCP server exposes nine tools: run_scan, get_scan_results, list_scans, list_projects, get_project, update_project, dismiss_finding, list_dismissals, and restore_finding. Your AI assistant uses these tools to interact with CheckVibe programmatically, making security part of the conversation rather than a separate step.

Adding a .cursorrules Security Template

You can teach Cursor to write more secure code from the start by adding security rules to your .cursorrules file. This file lives in your project root and provides instructions that Cursor follows for every prompt.

Create .cursorrules in your project root with these security rules:

## Security Rules

1. Never use `NEXT_PUBLIC_` prefix for secret keys (service role keys, API secrets, database URLs)
2. Every API route must check authentication with `supabase.auth.getUser()` before any database operation
3. Every API route that accepts POST/PUT/DELETE must validate the Origin header against an allowlist
4. Always validate request bodies with Zod schemas -- never pass raw `req.json()` to database operations
5. Never use `dangerouslySetInnerHTML` without DOMPurify sanitization
6. Never use string interpolation in SQL queries -- always use parameterized queries
7. Never set `Access-Control-Allow-Origin: *` -- use an explicit origin allowlist
8. Never return `error.message` or `error.stack` in API responses -- log internally, return generic message
9. Never commit `.env` files -- use `.env.local` and verify `.gitignore` includes it
10. Always generate random filenames for user uploads -- never use the original filename

These rules do not guarantee security, but they prevent the most common AI-generated vulnerabilities. The AI will follow them for every code generation and edit within your project.
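Rule 10 is worth illustrating, since AI-generated upload handlers almost always keep the user's original filename. A minimal sketch (the allowed extension list is an example -- adjust it for your app):

```typescript
import { randomUUID } from 'node:crypto';
import { extname } from 'node:path';

// Keep a vetted extension, discard everything else the user sent.
const ALLOWED_EXTENSIONS = ['.png', '.jpg', '.jpeg', '.webp', '.pdf'];

function safeUploadName(originalName: string): string {
  const ext = extname(originalName).toLowerCase();
  // Unrecognized extensions are dropped entirely rather than trusted
  return randomUUID() + (ALLOWED_EXTENSIONS.includes(ext) ? ext : '');
}

// '../../etc/passwd.png' becomes something like '3f2a....png' -- no path
// traversal, no double extensions, no executable names.
```

Storing the original filename separately in the database (for display only) keeps the UX intact without letting user input touch the filesystem path.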

The 30-Minute Security Sprint

If you have a vibe-coded app that is already live, or you are about to deploy one, here is a structured 30-minute checklist to cover the most critical gaps. You do not need to do everything at once, but these items are ordered by impact.

Minutes 0-5: Secrets audit

  1. Search your entire codebase for hardcoded keys: grep -r "sk_live\|sk_test\|eyJhbG\|service_role" src/
  2. Verify no secret keys use the NEXT_PUBLIC_ prefix in .env.local
  3. Confirm .env.local is in .gitignore
  4. Check your Git history for accidentally committed secrets: git log --all -p -- .env

Minutes 5-10: Authentication check

  1. List every file in src/app/api/: each one is a public endpoint
  2. Verify every route calls supabase.auth.getUser() or equivalent before database operations
  3. Check that DELETE and UPDATE routes include an ownership clause (.eq('user_id', user.id))

Minutes 10-15: Input validation

  1. Search for routes that pass await req.json() directly to database operations
  2. Add Zod schemas to every route that accepts user input
  3. Check that no route passes raw user input to SQL queries

Minutes 15-20: Headers and CORS

  1. Add security headers in next.config.ts if they are missing
  2. Search for Access-Control-Allow-Origin: * and replace with an origin allowlist
  3. Add CSRF origin validation to every POST/PUT/DELETE route

Minutes 20-25: Debug cleanup

  1. Search for routes that return process.env or error.message
  2. Remove any /api/debug, /api/test, or similar development-only routes
  3. Verify error responses return generic messages

Minutes 25-30: Automated scan

  1. Run a CheckVibe scan on your production URL
  2. Review critical and high findings
  3. Fix anything the manual steps above missed

This sprint will not make your app bulletproof, but it closes the gaps that attackers exploit first. For an even more thorough pre-launch review, check our Cursor and Copilot security audit guide.

How to Audit Your Vibe-Coded App in 60 Seconds

You could manually review every file the AI generated. Or you could run an automated scan that checks for all of the above, plus 30 more vulnerability categories, in under a minute.

Here is what to look for in scan results:

  • Critical findings need immediate attention: exposed secrets, SQL injection, missing auth
  • High findings should be fixed before launch: CORS misconfigs, missing headers, CSRF gaps
  • Medium findings are next: weak cookies, information disclosure, outdated dependencies
  • Low findings are improvements: performance headers, best practice recommendations

The goal is not to stop vibe coding. It is to add a 60-second security check to your workflow so you can ship fast without shipping vulnerabilities.

FAQ

Can AI fix its own security bugs?

Partially. If you point an AI assistant at a specific vulnerability and explain the issue clearly, it can usually generate a correct fix. The problem is that AI does not proactively identify security issues in its own output. It wrote the insecure code because it did not recognize the risk, so it will not spontaneously find and fix the same pattern elsewhere in your codebase. This is why automated scanning matters -- it catches what the AI misses, and then you can use the AI to implement the fixes. The CheckVibe MCP server is designed for exactly this workflow: the scanner finds the issues, and your AI editor fixes them.

What is the fastest way to secure a vibe-coded app?

Run an automated security scan first to get a prioritized list of issues. Fix critical findings immediately (leaked keys, missing auth, SQL injection). Then work through highs (CORS, headers, CSRF). This approach is faster than manual code review because the scanner tests your live app the way an attacker would, hitting every endpoint and checking every header. If you want a structured approach, follow the 30-minute security sprint above. For most vibe-coded apps, the combination of a scan plus the sprint will close 90% of the attack surface in under an hour.

Do I need a security expert?

For most indie projects and early-stage SaaS apps, no. The vulnerabilities in vibe-coded apps are well-documented and follow predictable patterns. If you can follow the code examples in this guide and run an automated scanner, you can fix the vast majority of issues yourself. Where you might want expert help is if you are handling sensitive data (health records, financial information, PII at scale), building in a regulated industry, or preparing for a SOC 2 audit. For everything else, a good scanner and the secure coding patterns in this guide will get you to a defensible security posture.

How many vulnerabilities does a typical vibe-coded app have?

Based on our scans, a typical vibe-coded app built in a weekend has between 8 and 25 findings across all severity levels. The most common breakdown we see is 1-3 critical (usually leaked keys or missing auth), 3-6 high (CORS, headers, CSRF), 4-10 medium (cookies, info disclosure, weak configs), and 2-8 low (best practices, performance headers). Apps built with Supabase tend to have fewer critical findings because the Supabase client library encourages RLS usage, but they still average 2-4 high findings around missing headers and CORS. The number drops significantly after a single scan-and-fix cycle -- most teams get to zero critical and high findings in their second scan.

Scan Your App Now

CheckVibe scans your app for all 7 of the vulnerabilities above, plus 30 more categories, in under 60 seconds. It crawls your site, tests every discovered endpoint, and gives you a prioritized list of findings with fix guidance.

Built something with Cursor, Copilot, or Windsurf? Scan it for free and find out what the AI missed.
