Nearly half of all AI-generated code contains at least one security vulnerability from the OWASP Top 10. That is not speculation — multiple peer-reviewed studies from Stanford, UIUC, and NYU have confirmed it since 2024. The code that Cursor, GitHub Copilot, Windsurf, and Claude Code generate works. It compiles, it runs, it does what you asked. But "it works" and "it's secure" are two very different things.
If you are building with AI coding assistants — and in 2026, most of us are — you need a structured approach to auditing the output. This guide covers exactly why AI tools create security blind spots, the specific vulnerabilities to look for, and how to automate the audit process using CheckVibe's MCP server directly inside your editor.
## Why AI Coding Tools Create Security Blind Spots
Understanding why AI assistants produce insecure code is the first step toward fixing the problem. It is not about the tools being bad. It is about what they are optimized for.
### They Optimize for Functionality, Not Security
When you prompt Cursor with "create a Stripe webhook handler" or ask Copilot to "add user authentication," the model generates code that handles the happy path. It makes your feature work. What it does not do — because you did not ask — is add rate limiting, validate webhook signatures properly, implement CSRF protection, or restrict CORS to your domain. Security is a non-functional requirement, and AI models treat it as optional unless you explicitly demand it.
### They Reproduce Insecure Patterns from Training Data

AI models learn from public repositories, and public repositories are full of tutorials, prototypes, and proof-of-concept code that was never meant for production. When Copilot suggests `Access-Control-Allow-Origin: *`, it is not being malicious — it is reproducing the most common pattern it saw in thousands of training examples. The same goes for hardcoded API keys, string-concatenated SQL queries, and missing input validation. These patterns are everywhere in public code because they are easy to write and "work" in development.
### They Do Not Add Security Layers You Do Not Ask For
AI assistants are reactive. They respond to your prompt. If your prompt says "create an API route to delete a project," you get an API route that deletes a project. You do not get authentication middleware, ownership verification, audit logging, or soft-delete patterns — unless your prompt includes those requirements. The gap between what you ask for and what a secure implementation requires is where vulnerabilities live.
### Package Hallucination Is a Real Attack Vector

Roughly 5% of package recommendations from AI coding tools reference packages that do not exist. This is not a harmless quirk. Attackers have caught on. They register these hallucinated package names on npm and PyPI, filling them with malicious code. When a developer follows the AI's suggestion and runs `npm install`, they pull in a supply chain attack. This technique — sometimes called "slopsquatting" — has already been documented in the wild. Always verify that a package exists, has real downloads, and is maintained before installing it.
## The 10 Most Common Vulnerabilities in AI-Generated Code
We scan hundreds of AI-built applications every week at CheckVibe. These are the ten issues that appear most frequently, ranked by how often we see them.
### 1. Hardcoded API Keys and Secrets

The most common vulnerability we detect. AI assistants routinely place secret keys directly in client-side code or use environment variable names that expose them to the browser bundle.

```typescript
// AI-generated — Stripe secret key exposed in the client bundle
const stripe = new Stripe('sk_live_51abc123...', {
  apiVersion: '2024-12-18.acacia',
});
```
In a Next.js app, any variable not prefixed with `NEXT_PUBLIC_` should never appear in frontend code. But AI models do not consistently enforce this boundary. Your Stripe secret key, Supabase service role key, or database connection string can end up shipped to every browser that loads your app.

**The fix:** Keep secrets in server-side environment variables. Use `NEXT_PUBLIC_` only for truly public values like your Supabase anon key or site URL. Audit your client bundle for leaked secrets with a tool like CheckVibe's API key leak detection.
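The boundary is mechanical, so it can be checked mechanically. The sketch below is an illustrative helper you might drop into a custom CI or lint script, not a Next.js API; the variable names are examples:

```typescript
// Illustrative helper for a custom CI check -- not a Next.js API.
// Next.js only inlines variables prefixed NEXT_PUBLIC_ into the client bundle;
// everything else must stay server-side.
function isClientSafe(envVarName: string): boolean {
  return envVarName.startsWith('NEXT_PUBLIC_');
}

// Example classification of some common variable names.
const referencedInClientCode = [
  'NEXT_PUBLIC_SUPABASE_URL',
  'STRIPE_SECRET_KEY',
  'DATABASE_URL',
];
const leaked = referencedInClientCode.filter(v => !isClientSafe(v));
// Any entries in `leaked` are secrets referenced from client code.
```

A check like this only catches naming-convention violations; it does not replace scanning the built bundle for literal secret values.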
### 2. Missing Authentication on API Routes

When you ask an AI to "create a CRUD API for users," it generates the endpoints. It does not add authentication. Every route is publicly accessible by default.

```typescript
// AI-generated — no auth check, anyone can delete any user
export async function DELETE(
  req: Request,
  { params }: { params: { id: string } }
) {
  const { error } = await supabase
    .from('users')
    .delete()
    .eq('id', params.id);
  return Response.json({ success: !error });
}
```
**The fix:** Validate the session and verify ownership on every state-changing route. Never trust that a request is legitimate just because it reached your server.

```typescript
export async function DELETE(
  req: Request,
  { params }: { params: { id: string } }
) {
  const supabase = await createClient();
  const { data: { user } } = await supabase.auth.getUser();
  if (!user) {
    return Response.json({ error: 'Unauthorized' }, { status: 401 });
  }
  const { error } = await supabase
    .from('users')
    .delete()
    .eq('id', params.id)
    .eq('user_id', user.id); // ownership check
  return Response.json({ success: !error });
}
```
### 3. SQL Injection via String Concatenation

AI models sometimes build database queries using template literals instead of parameterized queries, especially when the prompt says "keep it simple" or "quick search."

```typescript
// Vulnerable — user input directly interpolated into SQL
const result = await pool.query(
  `SELECT * FROM products WHERE name LIKE '%${searchTerm}%'`
);
```
An attacker can pass `'; DROP TABLE products; --` as the search term and wipe your database.
**The fix:** Always use parameterized queries. Every database library supports them.

```typescript
const result = await pool.query(
  'SELECT * FROM products WHERE name LIKE $1',
  [`%${searchTerm}%`]
);
```
### 4. CORS Set to Allow All Origins

AI assistants set `Access-Control-Allow-Origin: *` because it eliminates CORS errors during development. This configuration stays in production, allowing any website to make authenticated requests to your API.

```typescript
// AI default — every origin in the world can call your API
return new Response(JSON.stringify(data), {
  headers: {
    'Access-Control-Allow-Origin': '*',
    'Access-Control-Allow-Credentials': 'true',
  },
});
```
**The fix:** Use an explicit allowlist of your own domains. For a deeper dive, read our CORS misconfiguration guide.
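One way to implement the allowlist is a small helper that echoes back only known origins and omits the header otherwise. This is a sketch under assumptions: the domain names and function names below are illustrative, not part of any framework API.

```typescript
// Illustrative allowlist -- replace with your own domains.
const ALLOWED_ORIGINS = new Set([
  'https://yourapp.com',
  'https://www.yourapp.com',
]);

// Return the origin to echo back, or null if the request origin is not allowed.
function resolveCorsOrigin(requestOrigin: string | null): string | null {
  if (requestOrigin !== null && ALLOWED_ORIGINS.has(requestOrigin)) {
    return requestOrigin;
  }
  return null;
}

// Build response headers: only set Access-Control-Allow-Origin for allowed
// origins, and send Vary: Origin so caches do not mix responses.
function corsHeaders(requestOrigin: string | null): Record<string, string> {
  const origin = resolveCorsOrigin(requestOrigin);
  return origin
    ? { 'Access-Control-Allow-Origin': origin, 'Vary': 'Origin' }
    : {};
}
```

In a route handler you would read the request's `Origin` header and spread `corsHeaders(...)` into the response headers.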
### 5. Missing Input Validation on Server Endpoints
When AI generates a form handler, it assumes the input is well-formed. There is no length check, no type validation, no sanitization. An attacker can send a 50MB JSON payload to crash your server, or inject HTML/JavaScript into fields that get rendered later.
**The fix:** Validate every input on the server side using a schema library like Zod.

```typescript
import { z } from 'zod';

const createProjectSchema = z.object({
  name: z.string().min(1).max(100),
  url: z.string().url().max(2048),
  description: z.string().max(500).optional(),
});

export async function POST(req: Request) {
  const body = await req.json();
  const parsed = createProjectSchema.safeParse(body);
  if (!parsed.success) {
    return Response.json({ error: 'Invalid input' }, { status: 400 });
  }
  // proceed with parsed.data
}
```
### 6. Insecure Direct Object References (IDOR)

AI-generated code often uses predictable, sequential IDs and does not verify that the authenticated user has access to the requested resource. If your user settings endpoint is `/api/users/42/settings`, an attacker can change `42` to `43` and access someone else's data.

**The fix:** Always verify the authenticated user owns or has permission to access the resource. Use UUIDs instead of sequential integers for resource identifiers.
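Both recommendations can be sketched in a few lines. The `Project` shape and field names below are assumptions for illustration; in a real app the ownership check runs against the record fetched from your database, compared to the session's user ID.

```typescript
import { randomUUID } from 'crypto';

// Illustrative resource shape -- field names are assumptions.
type Project = { id: string; user_id: string };

// Ownership check: never trust the ID in the URL alone.
// Compare the record's owner against the authenticated session user.
function canAccess(resource: Project, authenticatedUserId: string): boolean {
  return resource.user_id === authenticatedUserId;
}

// Use random UUIDs instead of sequential integers so identifiers
// cannot be enumerated by incrementing a number.
const newProjectId = randomUUID();
```

Note that UUIDs alone are not a substitute for the ownership check; they only make IDs harder to guess.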
### 7. Missing Rate Limiting
AI-generated API routes never include rate limiting. This means an attacker can brute-force your login endpoint, spam your contact form, exhaust your OpenAI API quota, or hammer your database — all without restriction. See our full API security checklist for rate limiting patterns.
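To make the idea concrete, here is a minimal fixed-window limiter sketch. It is in-memory only: in serverless deployments each instance has its own memory, so production apps typically back this with a shared store such as Redis. The window and limit values are illustrative.

```typescript
// Minimal fixed-window rate limiter (in-memory sketch only).
const WINDOW_MS = 60_000;  // 1-minute window
const MAX_REQUESTS = 10;   // max requests per key per window

const hits = new Map<string, { count: number; windowStart: number }>();

// Key on something that identifies the caller, e.g. IP address or user ID.
function allowRequest(key: string, now: number = Date.now()): boolean {
  const entry = hits.get(key);
  // No entry yet, or the current window has expired: start a fresh window.
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    hits.set(key, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_REQUESTS;
}
```

In a route handler you would call `allowRequest(ip)` first and return a `429` response when it yields `false`.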
### 8. Exposed Debug Endpoints and Verbose Error Messages

AI assistants create `/api/debug`, `/api/health`, or `/api/test` routes during development that expose internal state — database schemas, environment variable names, user data, or stack traces. These get deployed to production because nobody remembers to remove them.
**The fix:** Audit your `app/api/` directory for any route that is not essential to your application. Remove debug routes. Return generic error messages in production.

```typescript
// Bad — leaks internal details
return Response.json({ error: error.message, stack: error.stack }, { status: 500 });

// Good — generic message, log details server-side
console.error('API error:', error);
return Response.json({ error: 'Internal server error' }, { status: 500 });
```
### 9. Missing CSRF Protection

AI-generated forms and API handlers rarely implement CSRF tokens. If your app uses cookie-based authentication, any malicious website can trick your users' browsers into making state-changing requests to your API.

**The fix:** Validate the `Origin` header on every state-changing request. Use `SameSite=Lax` or `SameSite=Strict` on session cookies. For critical operations, implement double-submit cookie patterns or CSRF tokens.
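The `Origin` check can be sketched as a small helper. The trusted-origin list and function name are placeholders; in practice you would call this early in every mutating route or in middleware.

```typescript
// Illustrative trusted-origin list -- populate from your own config.
const TRUSTED_ORIGINS = new Set(['https://yourapp.com']);

// Safe methods (GET/HEAD/OPTIONS) do not change state, so they pass through.
// State-changing requests must carry a trusted Origin header.
function isTrustedRequest(method: string, originHeader: string | null): boolean {
  if (['GET', 'HEAD', 'OPTIONS'].includes(method.toUpperCase())) {
    return true;
  }
  return originHeader !== null && TRUSTED_ORIGINS.has(originHeader);
}
```

Rejecting requests with a missing `Origin` is the conservative choice here; some legitimate non-browser clients omit the header, so decide deliberately whether that trade-off fits your API.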
### 10. Insecure File Upload Handling

When AI generates a file upload endpoint, it typically accepts any file type, does not check file size, and stores files in a publicly accessible location. This opens the door to path traversal attacks, malware uploads, and storage exhaustion.

**The fix:** Validate file type against an allowlist, enforce a size limit, generate random filenames, and store uploads in a private bucket with signed URLs for access.
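The first three checks can be sketched as a single validation function. The MIME allowlist and size limit below are illustrative values, and the extension is chosen by the server from the allowlist rather than taken from the client's filename, which removes the path traversal vector entirely.

```typescript
import { randomUUID } from 'crypto';

// Illustrative allowlist mapping accepted MIME types to server-chosen extensions.
const ALLOWED_TYPES = new Map<string, string>([
  ['image/png', '.png'],
  ['image/jpeg', '.jpg'],
]);
const MAX_BYTES = 5 * 1024 * 1024; // 5 MB, tune for your application

type UploadCheck =
  | { ok: true; storedName: string }
  | { ok: false; reason: string };

function checkUpload(mimeType: string, sizeBytes: number): UploadCheck {
  const ext = ALLOWED_TYPES.get(mimeType);
  if (!ext) return { ok: false, reason: 'file type not allowed' };
  if (sizeBytes > MAX_BYTES) return { ok: false, reason: 'file too large' };
  // Random server-generated name: the client's filename never touches disk.
  return { ok: true, storedName: `${randomUUID()}${ext}` };
}
```

Note that trusting the declared MIME type is itself a simplification; stricter implementations also sniff the file's magic bytes before accepting it.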
## How to Audit Your Cursor/Copilot Project
Here is a step-by-step process you can run on any AI-generated codebase. It takes 30 minutes manually — or under 2 minutes with automated scanning.
### Step 1: Search for Exposed Secrets

Run these commands from the root of your project:

```bash
# Search for hardcoded API keys and secrets
grep -r "sk_live\|sk_test\|STRIPE_SECRET\|service_role\|DATABASE_URL" --include="*.ts" --include="*.tsx" --include="*.js" src/

# Check for secrets in client-side environment variables
grep -r "process\.env\." --include="*.tsx" --include="*.ts" src/app/ src/components/

# Look for .env files committed to git
git log --all --full-history -- "*.env" ".env*"
```
Any secret that appears in a file under `src/app/` (a client component) or `src/components/` is potentially exposed. Server-only secrets must live in API routes or server components.
### Step 2: Check Every API Route for Authentication

List all your API routes and verify each one checks for a valid session:

```bash
# Find all API route files
find src/app/api -name "route.ts" -o -name "route.tsx"
```
For each route, look for `supabase.auth.getUser()`, `getServerSession()`, or your auth middleware. Any route that modifies data (POST, PUT, PATCH, DELETE) without an authentication check is a vulnerability.
### Step 3: Test Your CORS Configuration

Open your browser console on a different domain (or use a CORS testing tool) and send a request to your API:

```javascript
fetch('https://yourapp.com/api/user', {
  credentials: 'include',
}).then(r => console.log(r.headers.get('access-control-allow-origin')));
```
If the response header is `*`, your API is open to cross-origin attacks. Fix it by implementing an origin allowlist.
### Step 4: Validate Input on Every Form and Endpoint

Search your API routes for direct use of `req.json()` or `req.body` without validation:

```bash
grep -r "req\.json()" --include="*.ts" src/app/api/
```
Every endpoint that reads request data should validate it with Zod, Yup, or a similar library before processing.
### Step 5: Run Automated Scanning with CheckVibe
Manual audits are essential but slow and incomplete. Automated scanning catches the issues you miss — and does it in seconds. CheckVibe runs 36 security scanners against your app, covering everything from exposed API keys and missing headers to XSS, SQL injection, CORS misconfiguration, and more.
You can run a scan from the CheckVibe dashboard in one click. But the real power is running scans directly from your editor using the MCP server.
## Using CheckVibe's MCP Server in Cursor
This is where CheckVibe is fundamentally different from every other security scanner. Instead of switching to a separate tool or dashboard, you can scan your app and get fix suggestions directly inside Cursor, Claude Code, or any MCP-compatible editor.
MCP (Model Context Protocol) is an open standard that lets AI assistants connect to external tools. CheckVibe's MCP server gives your AI assistant the ability to run security scans, read results, and generate targeted fix prompts — all within your normal coding flow.
### Install the MCP Server

```bash
npm install -g @checkvibe/mcp-server
```
### Add It to Cursor's MCP Config

Open your Cursor MCP settings (`.cursor/mcp.json` in your project root or `~/.cursor/mcp.json` globally) and add CheckVibe:
```json
{
  "mcpServers": {
    "checkvibe": {
      "command": "npx",
      "args": ["-y", "@checkvibe/mcp-server"],
      "env": {
        "CHECKVIBE_API_KEY": "cvd_live_your_api_key_here"
      }
    }
  }
}
```
Get your API key from the CheckVibe dashboard under API Keys.
### Run a Scan from Cursor
Once the MCP server is connected, you can ask Cursor's AI to scan your project directly in chat:
"Scan https://myapp.vercel.app for security vulnerabilities"
Cursor calls the `run_scan` tool via MCP, which triggers all 36 CheckVibe security scanners against your live app. The results come back directly in the chat — no tab switching, no copy-pasting URLs.
### Get Fix Suggestions in Context
This is the part that matters most. When the scan finds issues, you can ask Cursor to fix them with full context:
"Show me the security scan results and fix the critical issues"
The AI reads the scan results via `get_scan_results`, sees the specific vulnerabilities (e.g., "Missing Content-Security-Policy header," "API key exposed in client bundle," "CORS allows all origins"), and generates targeted fixes for your actual codebase. It knows your file structure, your framework, and your patterns — so the fixes work on the first try.
### Available MCP Tools

The CheckVibe MCP server provides nine tools that cover the full scan lifecycle:

| Tool | What It Does |
|------|--------------|
| `run_scan` | Trigger a full security scan on any URL |
| `get_scan_results` | Read detailed results from a completed scan |
| `list_scans` | View scan history for a project |
| `list_projects` | See all your CheckVibe projects |
| `get_project` | Get details for a specific project |
| `update_project` | Update project configuration |
| `dismiss_finding` | Mark a finding as false positive or accepted risk |
| `list_dismissals` | View dismissed findings |
| `restore_finding` | Un-dismiss a previously dismissed finding |
### Claude Code Integration

If you use Claude Code (Anthropic's CLI), the MCP server works the same way. Add it to your `~/.claude/mcp.json`:

```json
{
  "mcpServers": {
    "checkvibe": {
      "command": "npx",
      "args": ["-y", "@checkvibe/mcp-server"],
      "env": {
        "CHECKVIBE_API_KEY": "cvd_live_your_api_key_here"
      }
    }
  }
}
```
Then ask Claude Code to scan and fix — same workflow, same results.
### Why MCP Changes the Security Workflow
Traditional security scanners generate a PDF report that sits in someone's inbox. The developer has to read the report, understand the findings, find the relevant files, and figure out the fix. Most findings never get fixed because the friction is too high.
With MCP, the workflow is: scan, read results, fix — all in the same editor session, all assisted by AI. The scanner finds the vulnerability, and the same AI assistant that wrote the vulnerable code now has the context to fix it. Friction drops to near zero.
## Cursor Rules for Security

Cursor supports project-level rules via a `.cursorrules` file in your project root. You can use this to enforce secure coding patterns on every generation. This is a proactive defense — catching issues before they are written.

Create a `.cursorrules` file with security-focused instructions:
```markdown
# Security Rules

## API Routes
- Every API route that modifies data MUST check authentication via supabase.auth.getUser()
- Every API route MUST validate the request body using Zod schemas
- Never return raw error messages to the client. Log them server-side and return generic errors.
- Always verify resource ownership (user_id check) before UPDATE or DELETE operations.

## Environment Variables
- Never use NEXT_PUBLIC_ prefix for secret keys (Stripe secret, Supabase service role, database URLs)
- Always access secrets via process.env in server-side code only

## CORS
- Never use Access-Control-Allow-Origin: *
- Always use an explicit origin allowlist from environment variables

## Database
- Never build SQL queries with string concatenation or template literals
- Always use parameterized queries or ORM methods
- Always use Row Level Security (RLS) policies in Supabase

## Input Validation
- Validate all user input on the server side using Zod
- Enforce maximum lengths on all string fields
- Validate file uploads: check type, size, and generate random filenames

## Authentication
- Use HttpOnly, Secure, SameSite=Lax cookies for sessions
- Implement CSRF protection on all state-changing endpoints
- Never trust client-side authorization checks alone
```
These rules get included in every prompt Cursor sends to the AI model, so the generated code follows your security requirements from the start.
## Frequently Asked Questions
### Is Cursor itself secure?
Cursor encrypts your code in transit and offers privacy modes that prevent your code from being used for training. The security concern is not Cursor's infrastructure — it is the code Cursor generates. The AI model does not inherently understand your threat model, your deployment environment, or your compliance requirements. You need to audit the output.
### Does Copilot generate insecure code?
Yes. A 2024 Stanford study found that developers using AI assistants — including Copilot — produced code with 40% more security vulnerabilities than those writing code manually. Copilot is trained on public repositories, many of which contain insecure patterns. It reproduces what it learned. This is not a Copilot-specific problem; it affects Cursor, Windsurf, Claude Code, and every other AI coding tool.
### Can I trust AI-generated authentication code?
Not without review. AI-generated auth code typically handles the basic flow (sign up, log in, session management) but misses edge cases: token rotation, session invalidation on password change, brute-force protection, and secure cookie configuration. Always audit auth code manually or use a proven auth library like NextAuth, Clerk, or Supabase Auth rather than letting AI implement authentication from scratch.
### What is the fastest way to audit my AI-built app?
Run an automated security scan. CheckVibe runs 36 specialized scanners against your live app and returns results in under two minutes. For the fastest workflow, use the MCP server integration to scan and fix directly from Cursor or Claude Code without leaving your editor.
### What is MCP and how does it help with security?
MCP (Model Context Protocol) is an open standard created by Anthropic that allows AI assistants to connect to external tools and data sources. CheckVibe's MCP server lets your AI coding assistant run security scans, read vulnerability reports, and generate fixes — all within your editor. Instead of context-switching between a scanner dashboard and your IDE, you get a single workflow: ask the AI to scan, review the results, and apply fixes. It turns security scanning from a separate chore into part of your normal development process.
## Stop Shipping Vulnerabilities
AI coding tools are not going away, and neither are the security gaps they create. The solution is not to stop using Cursor or Copilot — it is to build a security check into your workflow.
Here is what to do right now:
- Add a `.cursorrules` file with the security rules above
- Install the CheckVibe MCP server to scan from your editor: `npm install -g @checkvibe/mcp-server`
- Run a scan on your live app and fix the critical findings today
- Set up scheduled monitoring to catch regressions as you ship new features
Your AI assistant writes the code. CheckVibe makes sure it is secure. Scan your app now — it takes less than two minutes.
Further reading: How to Secure Your Vibe-Coded App | Vibe Coding Security Risks | API Security Checklist | API Key Leak Detection