AI coding assistants like Cursor, Copilot, and Windsurf have changed how developers ship software. You describe what you want, and the AI writes the code. This workflow — often called "vibe coding" — is fast, intuitive, and increasingly popular among indie hackers and small teams.
But there's a catch: AI-generated code routinely introduces security vulnerabilities.
Not because AI models are malicious, but because they optimize for what you asked for (working features), not for what you didn't ask for (secure features).
The numbers tell the story. A 2025 Stanford study found that developers using AI assistants produced code with 40% more security vulnerabilities than those writing code manually. Research from NYU's Tandon School of Engineering found that 45% of AI-generated code contained OWASP Top 10 vulnerabilities. And a 2025 analysis by Lasso Security found that AI models hallucinate non-existent packages at a rate of roughly 5.2% — creating a supply chain attack vector that didn't exist before AI coding tools.
These aren't theoretical risks. They're measurable, reproducible patterns. Here's what goes wrong and how to fix it.
The 8 Most Common Security Mistakes in Vibe-Coded Apps
1. Exposed API Keys in Frontend Code
AI assistants frequently hardcode API keys, database URLs, and secrets directly in client-side code. The model treats them as configuration strings, not secrets.
Vulnerable (AI-generated):
// AI-generated: works, but exposes your Stripe key to anyone
const stripe = Stripe('sk_live_abc123...');
// AI-generated: Supabase service role key in the browser
import { createClient } from '@supabase/supabase-js';
const supabase = createClient(
'https://abc.supabase.co',
'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJyb2xlIjoic2VydmljZV9yb2xlIn0...'
);
Fixed:
// Server-side only (e.g., API route or server action)
const stripe = Stripe(process.env.STRIPE_SECRET_KEY);
// Browser uses the anon key (safe to expose), not service_role
const supabase = createClient(
process.env.NEXT_PUBLIC_SUPABASE_URL,
process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY
);
Why AI gets this wrong: The model completes patterns it's seen in tutorials and Stack Overflow answers. Many tutorials use hardcoded keys for simplicity. The AI doesn't distinguish between "code that works" and "code that's safe to deploy."
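A cheap defensive habit is to scan your source tree or client bundle for secret-shaped strings before deploying. Here's a minimal sketch of the idea; the regexes and the `findSecrets` name are illustrative, not a complete ruleset, and a dedicated scanner (gitleaks, trufflehog, or your CI's secret scanning) covers far more patterns:

```typescript
// Minimal secret-pattern check. These regexes are illustrative
// examples only; use a dedicated secret scanner in CI for real coverage.
const SECRET_PATTERNS: RegExp[] = [
  /sk_live_[A-Za-z0-9]{10,}/,               // Stripe live secret key
  /eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+\./, // JWT-shaped token (e.g. service_role)
  /AKIA[0-9A-Z]{16}/,                       // AWS access key ID
];

function findSecrets(source: string): string[] {
  const hits: string[] = [];
  for (const pattern of SECRET_PATTERNS) {
    const match = source.match(pattern);
    if (match) hits.push(match[0]); // record the matched secret-like string
  }
  return hits;
}
```

Running this over your built client bundle (the files that actually ship to browsers) catches the hardcoded-key pattern before an attacker does.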
2. Missing Input Validation
When you ask AI to "build a contact form," you get a form that submits data. You don't get validation against SQL injection, XSS payloads, or oversized inputs — unless you specifically ask.
Vulnerable (AI-generated):
// API route: accepts anything the user sends
export async function POST(req: Request) {
const { name, email, message } = await req.json();
await db.insert('messages', { name, email, message });
return Response.json({ success: true });
}
Fixed:
import { z } from 'zod';
import DOMPurify from 'isomorphic-dompurify';
const contactSchema = z.object({
name: z.string().min(1).max(100).trim(),
email: z.string().email().max(254),
message: z.string().min(1).max(5000).trim(),
});
export async function POST(req: Request) {
const body = await req.json();
const result = contactSchema.safeParse(body);
if (!result.success) {
return Response.json({ error: 'Invalid input' }, { status: 400 });
}
const { name, email, message } = result.data;
const sanitizedMessage = DOMPurify.sanitize(message);
await db.insert('messages', {
name,
email,
message: sanitizedMessage,
});
return Response.json({ success: true });
}
Why AI gets this wrong: You asked for a form, not a secure form. The model fulfills the stated requirement. Validation, sanitization, and length limits are implicit security concerns that the model won't add unless prompted.
3. Permissive CORS Configuration
AI models default to Access-Control-Allow-Origin: * because it "just works." A wildcard lets any website read responses from your API, and the credentials: true the model often adds alongside it aims to expose your users' authenticated sessions to that same audience.
Vulnerable (AI-generated):
// Express middleware: allows any origin
app.use(cors({ origin: '*', credentials: true }));
// Next.js API route: wide open
export async function GET(req: Request) {
return new Response(JSON.stringify(data), {
headers: {
'Access-Control-Allow-Origin': '*',
'Access-Control-Allow-Credentials': 'true',
},
});
}
Fixed:
// Express: explicit allowlist
const allowedOrigins = [
'https://yourdomain.com',
'https://app.yourdomain.com',
];
app.use(cors({
origin: (origin, callback) => {
if (!origin || allowedOrigins.includes(origin)) {
callback(null, true);
} else {
callback(new Error('CORS not allowed'));
}
},
credentials: true,
}));
Why AI gets this wrong: Wildcard CORS eliminates development friction. The model optimizes for "it works" over "it's locked down." For a deep dive into the attack patterns this enables, see our CORS misconfiguration guide.
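The allowlist callback above can be factored into a plain function, which makes the policy unit-testable on its own. A sketch, with placeholder domains and an `isOriginAllowed` name chosen for illustration:

```typescript
// Origin check extracted from the CORS callback so it can be tested
// in isolation. Requests with no Origin header (curl, same-origin
// navigations) pass through, matching the Express example above.
const allowedOrigins = new Set([
  'https://yourdomain.com',
  'https://app.yourdomain.com',
]);

function isOriginAllowed(origin: string | undefined): boolean {
  if (!origin) return true;          // non-browser or same-origin request
  return allowedOrigins.has(origin); // exact match only, no substring tricks
}
```

The exact-match `Set` lookup matters: substring or regex checks are a classic CORS bypass, because https://yourdomain.com.evil.com contains your domain as a prefix.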
4. No Rate Limiting
AI-generated API routes rarely include rate limiting. An attacker can brute-force login endpoints, spam form submissions, or exhaust your API quotas in minutes.
Vulnerable (AI-generated):
// Login route: no rate limiting — brute-force friendly
export async function POST(req: Request) {
const { email, password } = await req.json();
const user = await db.query('SELECT * FROM users WHERE email = $1', [email]);
if (!user || !await bcrypt.compare(password, user.password_hash)) {
return Response.json({ error: 'Invalid credentials' }, { status: 401 });
}
return Response.json({ token: generateJWT(user) });
}
Fixed:
import { Ratelimit } from '@upstash/ratelimit';
import { Redis } from '@upstash/redis';
const ratelimit = new Ratelimit({
redis: Redis.fromEnv(),
limiter: Ratelimit.slidingWindow(5, '60 s'), // 5 attempts per minute
prefix: 'login',
});
export async function POST(req: Request) {
// x-forwarded-for can be a comma-separated list; take the first entry
const ip = (req.headers.get('x-forwarded-for') ?? '127.0.0.1').split(',')[0].trim();
const { success, remaining } = await ratelimit.limit(ip);
if (!success) {
return Response.json(
{ error: 'Too many attempts. Try again later.' },
{ status: 429, headers: { 'Retry-After': '60' } }
);
}
const { email, password } = await req.json();
const user = await db.query('SELECT * FROM users WHERE email = $1', [email]);
if (!user || !await bcrypt.compare(password, user.password_hash)) {
// Generic message prevents account enumeration
return Response.json({ error: 'Invalid credentials' }, { status: 401 });
}
return Response.json({ token: generateJWT(user) });
}
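If you'd rather not add a Redis dependency, the same sliding-window idea can be approximated in memory. This sketch is not the Upstash implementation: it only works for a single server process and loses state on restart, but it illustrates the mechanism:

```typescript
// In-memory sliding-window limiter: allows `limit` hits per `windowMs`
// per key. Single-process only; use a shared store (Redis) in production.
const hits = new Map<string, number[]>();

function allowRequest(
  key: string,
  limit = 5,
  windowMs = 60_000,
  now = Date.now()
): boolean {
  const cutoff = now - windowMs;
  // Drop timestamps that have slid out of the window
  const recent = (hits.get(key) ?? []).filter((t) => t > cutoff);
  if (recent.length >= limit) {
    hits.set(key, recent);
    return false; // over the limit for this window
  }
  recent.push(now);
  hits.set(key, recent);
  return true;
}
```

The `now` parameter defaults to the real clock but is injectable, which makes the window behavior testable without sleeping.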
5. Client-Side Authorization
AI often puts authorization logic in the frontend — checking if (user.role === 'admin') in React components instead of enforcing it server-side. Anyone with browser dev tools can bypass this.
Vulnerable (AI-generated):
// React component: admin check only in the UI
function AdminPanel() {
const { user } = useAuth();
if (user.role !== 'admin') {
return <p>Access denied</p>;
}
return (
<div>
<button onClick={() => fetch('/api/users/delete', {
method: 'POST',
body: JSON.stringify({ userId: targetId }),
})}>
Delete User
</button>
</div>
);
}
// API route: no server-side auth check
export async function POST(req: Request) {
const { userId } = await req.json();
await db.query('DELETE FROM users WHERE id = $1', [userId]);
return Response.json({ success: true });
}
Fixed:
// API route: server-side authorization
export async function POST(req: Request) {
const session = await getServerSession();
if (!session) {
return Response.json({ error: 'Unauthorized' }, { status: 401 });
}
// Check role server-side against the database, not the JWT
const admin = await db.query(
'SELECT role FROM users WHERE id = $1',
[session.user.id]
);
if (admin?.role !== 'admin') {
return Response.json({ error: 'Forbidden' }, { status: 403 });
}
const { userId } = await req.json();
await db.query('DELETE FROM users WHERE id = $1', [userId]);
return Response.json({ success: true });
}
Why AI gets this wrong: The AI sees your React component context and writes authorization logic where the conditional rendering happens. It doesn't automatically add the corresponding server-side check.
6. Insecure Cookie Configuration
Session cookies created by AI code often lack HttpOnly, Secure, and SameSite attributes, making them vulnerable to theft via XSS or CSRF attacks.
Vulnerable (AI-generated):
// Sets a session cookie with no security attributes
res.setHeader('Set-Cookie', `session=${token}; Path=/`);
Fixed:
// Secure cookie with all protective attributes
res.setHeader('Set-Cookie', [
`session=${token}`,
'Path=/',
'HttpOnly', // Prevents JavaScript access (XSS protection)
'Secure', // Only sent over HTTPS
'SameSite=Lax', // CSRF protection
`Max-Age=${60 * 60 * 24 * 7}`, // 7 day expiry
].join('; '));
Why AI gets this wrong: The minimal cookie syntax works. The AI produces the shortest working solution. Security attributes are defensive additions that don't affect functionality in development.
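One way to make the secure version the default is to centralize cookie construction in a small helper, so individual routes can't forget an attribute. The `buildSessionCookie` function below is an illustrative sketch, not a standard API:

```typescript
// Build a session cookie string with secure defaults baked in.
// Routes call this instead of hand-assembling Set-Cookie values.
function buildSessionCookie(
  token: string,
  maxAgeSeconds = 60 * 60 * 24 * 7 // 7 day expiry
): string {
  return [
    `session=${token}`,
    'Path=/',
    'HttpOnly',      // Prevents JavaScript access (XSS protection)
    'Secure',        // Only sent over HTTPS
    'SameSite=Lax',  // CSRF protection
    `Max-Age=${maxAgeSeconds}`,
  ].join('; ');
}
```

Then the route code becomes `res.setHeader('Set-Cookie', buildSessionCookie(token))`, and a lint rule or code review only has to watch one file.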
7. Missing Security Headers
AI-generated Next.js and Express apps almost never include security headers like Content-Security-Policy, X-Frame-Options, or Strict-Transport-Security. These are your first line of defense against common attacks.
Vulnerable (AI-generated):
// next.config.ts: no security headers configured
const nextConfig = {
// AI generates config for features, not security
};
Fixed:
// next.config.ts: security headers on all routes
const nextConfig = {
async headers() {
return [{
source: '/:path*',
headers: [
{ key: 'X-Frame-Options', value: 'DENY' },
{ key: 'X-Content-Type-Options', value: 'nosniff' },
{ key: 'Referrer-Policy', value: 'strict-origin-when-cross-origin' },
{ key: 'Strict-Transport-Security', value: 'max-age=31536000; includeSubDomains; preload' },
{
key: 'Content-Security-Policy',
value: "default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'; img-src 'self' data: https:; font-src 'self'; frame-ancestors 'none'"
},
],
}];
},
};
For a comprehensive walkthrough of every security header and how to configure them on any platform, see our security headers complete guide.
8. Package Hallucination
This is a risk unique to AI-generated code. LLMs sometimes suggest importing packages that don't exist. They hallucinate plausible-sounding package names based on patterns in their training data.
Vulnerable (AI-generated):
// This package doesn't exist — AI hallucinated it
import { sanitize } from 'express-input-sanitizer';
// Or: import { validateSchema } from 'next-api-validator';
Why this is dangerous: An attacker can register the hallucinated package name on npm, fill it with malicious code, and wait for developers to npm install it. This is called a dependency confusion or package squatting attack. A 2025 study by Lasso Security found that AI models hallucinate package names at a rate of approximately 5.2%, and many of those names are available for registration on public registries.
How to protect yourself:
- Verify every package name before installing. Check that it exists on npmjs.com and has a reasonable download count
- Run `npm audit` or `yarn audit` after installing new dependencies
- Pin exact versions in your `package.json` (use `--save-exact`)
- Consider tools like Socket.dev that flag suspicious packages
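The existence-and-popularity check can be scripted against the npm registry's public JSON endpoints. The sketch below shows only the decision logic; the `isSuspicious` name and thresholds are arbitrary choices, and the assumed data sources are the registry metadata (https://registry.npmjs.org/&lt;pkg&gt;) and the download-count API (https://api.npmjs.org/downloads/point/last-week/&lt;pkg&gt;):

```typescript
// Heuristic flag for a freshly AI-suggested dependency. `exists` is
// whether the registry lookup returned 200; `weeklyDownloads` comes
// from the npm downloads API. Thresholds are arbitrary examples —
// tune them to your own risk tolerance.
interface PackageInfo {
  exists: boolean;
  weeklyDownloads: number;
}

function isSuspicious(pkg: PackageInfo): boolean {
  if (!pkg.exists) return true;                // hallucinated name: hard fail
  if (pkg.weeklyDownloads < 1000) return true; // too obscure to trust blindly
  return false;
}
```

A low download count isn't proof of malice, but a package an AI just invented a reason to install, with no users, deserves a manual look before it lands in package.json.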
The Confidence Gap
The most dangerous aspect of vibe coding isn't the vulnerabilities themselves — it's the false sense of security they create.
The Stanford study that found 40% more vulnerabilities in AI-assisted code also discovered something worse: developers using AI assistants rated their code as significantly more secure than developers writing code manually. The code was less secure, but the developers were more confident.
This confidence gap has measurable consequences:
- Fewer code reviews. If the AI "knows what it's doing," why double-check?
- Skipped security testing. The code looks clean and professional, so it must be safe.
- Faster shipping. Vibe coding accelerates development velocity, which means vulnerabilities reach production faster.
- Less learning. When the AI writes your auth logic, you don't build the mental model to spot when it's wrong.
The result is a growing population of applications that look polished on the outside but have fundamental security flaws in their foundations. This is especially acute for solo founders and indie hackers who may not have security expertise on their team.
For a deeper comparison of how different AI tools handle security, see our Cursor vs Copilot security audit.
Why This Matters
The Stanford study cited above found not only 40% more vulnerabilities in AI-assisted code, but also that the developers producing it rated their code as more secure. That confidence gap is the real danger.
When you're shipping fast — especially as a solo founder or small team — security reviews get skipped. The code works, users are happy, and the security holes sit quietly until someone finds them.
The consequences are real:
- Data breaches — Exposed API keys and missing input validation lead to unauthorized data access
- Account takeover — Weak cookies and client-side auth let attackers hijack user sessions
- Financial loss — Brute-forced endpoints can drain API credits, and GDPR fines for data breaches can reach €20 million or 4% of global annual turnover
- Reputation damage — A single security incident can destroy user trust that took months to build
How to Catch These Issues
You have three options:
- Manual security audits — expensive ($5k-$20k per engagement) and slow (weeks)
- Learn security yourself — valuable but time-consuming
- Automated scanning — fast, affordable, and catches the patterns AI gets wrong
Tools like CheckVibe are built specifically for this gap. Paste your URL, get a scan in 30 seconds, and receive AI-powered fix prompts you can paste right back into your coding assistant. The same AI workflow that created the vulnerability can fix it.
A Practical Workflow
Here's the security workflow we recommend for vibe-coded projects:
- Build your feature with Cursor, Copilot, or your preferred AI tool
- Scan immediately — run an automated security scan before merging
- Review the findings — focus on Critical and High severity issues first
- Feed fixes back to AI — paste the finding description and fix suggestion into your AI assistant and ask it to patch the code
- Re-scan — verify the fixes didn't introduce new issues
- Repeat per feature — make scanning part of your development loop, not a one-time event
For a step-by-step hardening guide you can apply to any vibe-coded project, see How to Secure Your Vibe-Coded App.
The Bottom Line
Vibe coding isn't going away. If anything, it's accelerating. The solution isn't to stop using AI — it's to add a security check to your workflow. Scan after you ship, fix what's found, and scan again. Make it a habit, not a one-time audit.
The developers who will build the most successful AI-assisted products aren't the ones who blindly trust AI output — they're the ones who treat AI as a first draft and add verification as a standard step. For the broader context on securing AI-assisted development, see our Vibe Coding Security overview.
Scan your vibe-coded app now — free scan, 36 security checks, AI-powered fix prompts.
FAQ
Is vibe coding safe?
Vibe coding is safe when combined with security verification. The AI-generated code itself routinely contains vulnerabilities — studies show 40-45% of AI-generated code has security issues — but the risk comes from deploying that code without review, not from using AI in the first place. The workflow matters more than the tool. If you scan and fix security issues before deploying (and after each major feature addition), vibe coding can be as secure as traditional development. The danger is treating AI output as production-ready without a security pass.
Which AI coding tool is most secure?
No AI coding tool consistently produces secure code. In our testing across Cursor, GitHub Copilot, Windsurf, and Claude-based tools, all of them generate the same categories of vulnerabilities: exposed secrets, missing input validation, permissive CORS, and absent rate limiting. Some models are slightly better at specific patterns (Claude tends to add more input validation unprompted, Copilot is better at recognizing secret patterns), but none of them reliably produce production-secure code without explicit security prompting. The best approach is tool-agnostic: scan the output regardless of which AI wrote it.
Can AI fix its own security bugs?
Yes — and this is one of the most practical aspects of the AI security workflow. When you give an AI assistant a specific vulnerability description and ask it to fix the code, it performs significantly better than when generating code from scratch. The model has a concrete problem to solve rather than an open-ended feature to build. In our experience, AI assistants correctly fix about 80-85% of security issues when given clear descriptions of the vulnerability and the affected code. The remaining 15-20% typically involve architectural issues (like moving authorization from client to server) that require more context than a single prompt provides.
What percentage of AI code has vulnerabilities?
Research consistently finds that 40-45% of AI-generated code contains at least one security vulnerability. The 2025 Stanford study found 40% more vulnerabilities compared to manual coding. NYU's Tandon School found 45% of Copilot-generated code contained OWASP Top 10 issues. The most common categories are injection vulnerabilities (SQL injection, XSS), authentication and authorization flaws, and security misconfiguration (missing headers, permissive CORS, insecure cookies). These numbers represent code generated without explicit security prompting — when developers include security requirements in their prompts, the rate drops significantly, though it never reaches zero.