The OWASP Top 10 is the gold standard list of web application security risks. But if you read the official documentation, it's written for enterprise security teams — not for a solo developer shipping a SaaS with Next.js and Supabase.
Here's the same list, translated into real examples from the stack you actually use. For each category, you get a concrete code example, a quick fix you can apply today, and an honest assessment of how much it matters when you're an indie hacker.
Priority Ranking: Which of the 10 Matter Most for Indie Hackers
Before diving in, here's how I'd rank the OWASP Top 10 by real-world impact for a typical indie SaaS:
| Priority | OWASP Category | Why It Matters for You |
|----------|----------------|------------------------|
| 1 | Broken Access Control | Users accessing each other's data will kill your business overnight |
| 2 | Injection | SQL injection on a small SaaS is just as devastating as on a Fortune 500 app |
| 3 | Cryptographic Failures | Leaked API keys and exposed secrets are the #1 way indie apps get compromised |
| 4 | Security Misconfiguration | Missing headers and default configs are the low-hanging fruit attackers look for |
| 5 | Authentication Failures | Weak auth means anyone can take over your users' accounts |
| 6 | Vulnerable Components | One outdated npm package can open the door wide |
| 7 | Insecure Design | Hard to fix later — better to think about it now |
| 8 | Software and Data Integrity Failures | Supply chain attacks are increasing but less likely to target small apps |
| 9 | SSRF | Only relevant if your app fetches user-provided URLs |
| 10 | Logging and Monitoring Failures | Won't prevent an attack, but you'll be blind when one happens |
Now let's break each one down.
1. Broken Access Control
What it means: Users can do things they shouldn't — view other users' data, access admin pages, or modify records they don't own.
In your stack: Your Supabase RLS policies have gaps. A user can call your API with someone else's user_id and see their data.
```sql
-- Bad: no RLS, anyone can read everything
SELECT * FROM orders;

-- Good: RLS ensures users only see their own
CREATE POLICY "Users read own orders"
  ON orders FOR SELECT USING (user_id = auth.uid());
```
Here's a more complete example showing how broken access control manifests in a Next.js API route:
```typescript
// Bad: No ownership check — any authenticated user can view any order
export async function GET(req: Request) {
  const { searchParams } = new URL(req.url);
  const orderId = searchParams.get("id");
  const { data } = await supabase
    .from("orders")
    .select("*")
    .eq("id", orderId)
    .single();
  return Response.json(data);
}

// Good: Verify the requesting user owns this resource
export async function GET(req: Request) {
  const { searchParams } = new URL(req.url);
  const orderId = searchParams.get("id");
  const {
    data: { user },
  } = await supabase.auth.getUser();
  if (!user) {
    return Response.json({ error: "Unauthorized" }, { status: 401 });
  }
  const { data } = await supabase
    .from("orders")
    .select("*")
    .eq("id", orderId)
    .eq("user_id", user.id) // ownership check
    .single();
  if (!data) return Response.json({ error: "Not found" }, { status: 404 });
  return Response.json(data);
}
```
Quick check: Can you change the id parameter in your API URL and see someone else's data? If yes, you have broken access control.
Quick fix: Enable RLS on every single Supabase table. Then write policies that filter by auth.uid(). In your API routes, always include a user_id check in your queries. Never trust client-supplied IDs without validating ownership.
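One way to verify that coverage is to query Postgres's system catalog directly (run this in the Supabase SQL editor; it lists every ordinary table in the `public` schema that still has RLS disabled):

```sql
-- Tables in public that do NOT have Row Level Security enabled
SELECT c.relname AS table_name
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'public'
  AND c.relkind = 'r'        -- ordinary tables only
  AND NOT c.relrowsecurity;  -- RLS is off
```

If this query returns any rows, those tables are readable through your anon key subject only to whatever grants exist, so fix them first.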
For a detailed checklist, see our OWASP Top 10 Checklist and SaaS Security Checklist.
2. Cryptographic Failures
What it means: Sensitive data isn't properly protected — passwords stored in plaintext, API keys in source code, or no HTTPS.
In your stack: Your .env.local is committed to git. Your Supabase service role key is exposed in client-side code. You're storing user data without encryption at rest.
```typescript
// Bad: Exposing a service role key in client code
// Any variable prefixed with NEXT_PUBLIC_ is visible in the browser
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_SERVICE_ROLE_KEY! // NEVER do this
);

// Good: Use the anon key on the client, service role only on the server
// Client-side
const supabase = createClient(
  process.env.NEXT_PUBLIC_SUPABASE_URL!,
  process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY! // safe for browser
);

// Server-side only (API routes, server components)
const supabaseAdmin = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_SERVICE_ROLE_KEY! // no NEXT_PUBLIC_ prefix
);
```
Another common mistake is storing sensitive user data in plaintext:
```typescript
// Bad: Storing API tokens in plaintext
await supabase.from("integrations").insert({
  user_id: user.id,
  slack_token: slackOAuthToken, // plaintext in the database
});

// Good: Encrypt before storing
import { encrypt, decrypt } from "@/lib/encryption";

await supabase.from("integrations").insert({
  user_id: user.id,
  slack_token: encrypt(slackOAuthToken), // AES-256-GCM encrypted
});
```
Quick fix: Run git log --all --full-history -- "*.env*" to check if you've ever committed secrets. If you have, rotate those keys immediately (changing the file isn't enough — the old values are in git history). Add .env* to .gitignore. Use NEXT_PUBLIC_ prefix only for values that are genuinely safe to expose in the browser.
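If you want to see why rotation is mandatory, here's a self-contained demo in a throwaway repo (safe to run anywhere). Deleting the file and committing does not remove the secret from history:

```shell
# Throwaway repo: commit a secret, then "delete" it
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "SECRET_KEY=abc123" > .env.local
git add .env.local
git -c user.email=demo@example.com -c user.name=demo commit -qm "add env"
git rm -q .env.local
git -c user.email=demo@example.com -c user.name=demo commit -qm "remove env"

# Working tree is clean, but history still holds the value:
git log --all --oneline -- "*.env*"
git show HEAD~1:.env.local   # prints SECRET_KEY=abc123
```

Tools like `git filter-repo` can rewrite history, but anyone who already cloned or scraped the repo has the old value, so rotating the key is the only reliable fix.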
3. Injection
What it means: Untrusted data is sent to an interpreter as part of a command or query.
In your stack: If you're building raw SQL queries with string concatenation instead of using Supabase's query builder or parameterized queries.
```typescript
// Bad: an RPC whose Postgres function builds SQL by concatenation.
// The client call itself passes `query` as a parameter; the injection
// happens if search() assembles SQL from it internally (EXECUTE + concat).
const { data } = await supabase.rpc("search", {
  query: `%${userInput}%`,
});

// Good: the query builder sends values as parameters, never as SQL text
const { data } = await supabase
  .from("products")
  .select()
  .ilike("name", `%${userInput}%`);
```
XSS (cross-site scripting) is another form of injection that indie hackers often miss:
```tsx
import DOMPurify from "dompurify";

// Bad: Rendering user HTML content directly
function UserComment({ comment }: { comment: string }) {
  return <div dangerouslySetInnerHTML={{ __html: comment }} />;
}

// Good, option 1: just render as text (React escapes by default)
function UserCommentAsText({ comment }: { comment: string }) {
  return <div>{comment}</div>;
}

// Good, option 2: if you genuinely need HTML, sanitize it first
function UserCommentAsHtml({ comment }: { comment: string }) {
  return (
    <div dangerouslySetInnerHTML={{ __html: DOMPurify.sanitize(comment) }} />
  );
}
```
And for API routes that accept user input for database operations:
```typescript
// Bad: Column name taken straight from user input — an attacker can sort
// by columns you never intended to expose, or simply break the query
const sortBy = searchParams.get("sort");
const { data } = await supabase
  .from("products")
  .select()
  .order(sortBy!); // untrusted input shapes the query

// Good: Validate against an allowlist
const ALLOWED_SORT_COLUMNS = ["name", "price", "created_at"];
const sortBy = searchParams.get("sort");
if (!sortBy || !ALLOWED_SORT_COLUMNS.includes(sortBy)) {
  return Response.json({ error: "Invalid sort parameter" }, { status: 400 });
}
const { data } = await supabase.from("products").select().order(sortBy);
```
Quick fix: Never concatenate user input into SQL strings. Use Supabase's query builder or parameterized queries. For any user-provided values that become part of query structure (column names, table names, sort orders), validate against a strict allowlist. For HTML output, rely on React's default escaping and never use dangerouslySetInnerHTML without DOMPurify.
Check out our API Security Checklist for more injection prevention patterns.
4. Insecure Design
What it means: The application was designed without security in mind. No threat modeling, no abuse case analysis.
In your stack: Your pricing page lets users modify the price parameter in the checkout request. Your signup flow doesn't rate-limit account creation.
```typescript
// Bad: Trusting client-side price data
export async function POST(req: Request) {
  const { priceInCents, plan } = await req.json();
  const session = await stripe.checkout.sessions.create({
    line_items: [
      {
        price_data: {
          currency: "usd",
          unit_amount: priceInCents, // user controls the price!
          product_data: { name: plan },
        },
        quantity: 1,
      },
    ],
    mode: "subscription",
  });
  return Response.json({ url: session.url });
}

// Good: Use server-side price IDs from your catalog
const PRICE_MAP: Record<string, string> = {
  starter_monthly: "price_1AbCdEfGhIjKlMnOp",
  starter_annual: "price_2AbCdEfGhIjKlMnOp",
  pro_monthly: "price_3AbCdEfGhIjKlMnOp",
  pro_annual: "price_4AbCdEfGhIjKlMnOp",
};

export async function POST(req: Request) {
  const { planId } = await req.json();
  const priceId = PRICE_MAP[planId];
  if (!priceId) {
    return Response.json({ error: "Invalid plan" }, { status: 400 });
  }
  const session = await stripe.checkout.sessions.create({
    line_items: [{ price: priceId, quantity: 1 }],
    mode: "subscription",
  });
  return Response.json({ url: session.url });
}
```
Another common insecure design pattern is unlimited resource creation:
```typescript
// Bad: No limits on free accounts
export async function POST(req: Request) {
  const { user } = await getSession();
  const project = await createProject(user.id, await req.json());
  return Response.json(project);
}

// Good: Check plan limits atomically
export async function POST(req: Request) {
  const { user } = await getSession();
  const { data: limit } = await supabase.rpc("check_project_limit", {
    p_user_id: user.id,
  });
  if (!limit.allowed) {
    return Response.json(
      {
        error: `Plan limit reached (${limit.current_count}/${limit.project_limit})`,
      },
      { status: 403 }
    );
  }
  const project = await createProject(user.id, await req.json());
  return Response.json(project);
}
```
Quick fix: Never trust any value from the client that determines pricing, permissions, or resource limits. Map client inputs to server-side configurations. Use atomic database functions for limit checking (avoid read-then-write patterns which are vulnerable to race conditions). Rate-limit all creation endpoints.
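For reference, here's a sketch of what the `check_project_limit` RPC used in the example above might look like. The `projects` table and `profiles.plan_project_limit` column are assumptions; adapt them to your schema:

```sql
-- Sketch: count projects and compare to the plan limit in one statement.
-- For a fully race-proof check, move the INSERT into this function too,
-- inside the same transaction, with a row lock on the profile.
CREATE OR REPLACE FUNCTION check_project_limit(p_user_id uuid)
RETURNS jsonb
LANGUAGE sql
SECURITY DEFINER
AS $$
  SELECT jsonb_build_object(
    'allowed', count(pr.id) < p.plan_project_limit,
    'current_count', count(pr.id),
    'project_limit', p.plan_project_limit
  )
  FROM profiles p
  LEFT JOIN projects pr ON pr.user_id = p.id
  WHERE p.id = p_user_id
  GROUP BY p.plan_project_limit;
$$;
```

The returned keys (`allowed`, `current_count`, `project_limit`) match what the route handler reads from `limit`.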
5. Security Misconfiguration
What it means: Default settings left unchanged, unnecessary features enabled, missing security headers.
In your stack: Your Next.js app has no Content-Security-Policy header. Your Supabase project still has the default anon key permissions. Debug mode is on in production.
```typescript
// next.config.ts — Add security headers
const securityHeaders = [
  {
    key: "Content-Security-Policy",
    value: [
      "default-src 'self'",
      "script-src 'self' 'unsafe-inline' 'unsafe-eval'",
      "style-src 'self' 'unsafe-inline'",
      "img-src 'self' data: https:",
      "connect-src 'self' https://*.supabase.co https://api.stripe.com",
      "frame-ancestors 'none'",
    ].join("; "),
  },
  { key: "X-Frame-Options", value: "DENY" },
  { key: "X-Content-Type-Options", value: "nosniff" },
  { key: "Referrer-Policy", value: "strict-origin-when-cross-origin" },
  {
    key: "Strict-Transport-Security",
    value: "max-age=63072000; includeSubDomains; preload",
  },
  {
    key: "Permissions-Policy",
    value: "camera=(), microphone=(), geolocation=()",
  },
];

export default {
  async headers() {
    return [{ source: "/(.*)", headers: securityHeaders }];
  },
};
```
Also check your package.json for debug flags:
```jsonc
// Bad: Debug mode in production scripts
{
  "scripts": {
    "start": "NODE_ENV=production DEBUG=* next start"
  }
}
```

```jsonc
// Good: No debug flags in production
{
  "scripts": {
    "start": "next start"
  }
}
```
Quick fix: Add security headers to next.config.ts (see the example above). Run your site through a security scanner to see what you're missing. Disable the x-powered-by header by setting poweredByHeader: false in next.config.ts (Next.js sends it by default). Disable directory listing. Remove any /api/debug or /api/test routes before going to production.
6. Vulnerable Components
What it means: You're using libraries with known security vulnerabilities.
In your stack: Run npm audit right now. If you see "high" or "critical" vulnerabilities, you're running vulnerable components. This is common in fast-moving JavaScript ecosystems.
```bash
# Check for known vulnerabilities
npm audit

# Fix what can be auto-fixed
npm audit fix

# See what's outdated
npx npm-check-updates

# Find unused and missing dependencies
npx depcheck
```
Here's how to set up automated checks in your CI pipeline:
```yaml
# .github/workflows/security.yml
name: Security Audit

on:
  push:
    branches: [main]
  schedule:
    - cron: "0 9 * * 1" # Every Monday at 9am

jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      - run: npm ci
      - run: npm audit --audit-level=high
        continue-on-error: false
```
Quick fix: Run npm audit fix today. Set up npm audit in your CI pipeline so it runs on every push. Use Dependabot or Renovate to get automatic PRs when dependencies have security updates. Pin your GitHub Actions to specific SHA hashes rather than tags (e.g., actions/checkout@abc123 instead of actions/checkout@v4).
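A minimal Dependabot config covering both npm packages and your GitHub Actions might look like this (assuming a standard single-package repo with everything at the root):

```yaml
# .github/dependabot.yml
version: 2
updates:
  - package-ecosystem: "npm"
    directory: "/"
    schedule:
      interval: "weekly"
  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      interval: "weekly"
```

Dependabot opens security update PRs regardless of the schedule; the weekly interval only controls routine version bumps.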
7. Authentication Failures
What it means: Weak password requirements, no brute-force protection, broken session management.
In your stack: Your login page allows unlimited attempts. Your password reset token doesn't expire. Your session cookies lack HttpOnly and Secure flags.
```typescript
// Bad: No rate limiting on login
export async function POST(req: Request) {
  const { email, password } = await req.json();
  const { data, error } = await supabase.auth.signInWithPassword({
    email,
    password,
  });
  // attacker can try millions of passwords
  return Response.json(data);
}

// Good: Rate limit by IP + email
import { Ratelimit } from "@upstash/ratelimit";
import { Redis } from "@upstash/redis";

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(5, "60 s"), // 5 attempts per minute
});

export async function POST(req: Request) {
  const { email, password } = await req.json();
  const ip = req.headers.get("x-forwarded-for") ?? "unknown";

  const { success } = await ratelimit.limit(`login:${ip}:${email}`);
  if (!success) {
    return Response.json(
      { error: "Too many attempts. Try again later." },
      { status: 429 }
    );
  }

  const { data, error } = await supabase.auth.signInWithPassword({
    email,
    password,
  });

  // Return generic error message to prevent account enumeration
  if (error) {
    return Response.json(
      { error: "Invalid email or password" },
      { status: 401 }
    );
  }
  return Response.json(data);
}
```
Quick fix: If you're using Supabase Auth, most of the session management is handled for you. But you still need to: (1) add rate limiting to login and signup endpoints, (2) return generic error messages like "Invalid email or password" instead of "No account found for this email" (which reveals whether an email is registered), (3) enforce minimum password length (Supabase defaults to 6 characters — consider raising it to 8 or 10 via the dashboard), and (4) enable email confirmation for new signups.
8. Software and Data Integrity Failures
What it means: Code and data aren't verified for integrity. CI/CD pipelines can be compromised.
In your stack: Your GitHub Actions workflow references actions by mutable tags like actions/checkout@v4 instead of pinning to a specific commit SHA. Anyone who compromises that tag can inject code into your builds.
```yaml
# Bad: Using mutable tags
- uses: actions/checkout@v4
- uses: actions/setup-node@v4

# Good: Pin to specific commit SHA
- uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
- uses: actions/setup-node@60edb5dd545a775178f52524783378180af0d1f8 # v4.0.2
```
Another common integrity failure is not verifying webhook signatures:
```typescript
// Bad: Trusting webhook payloads without verification
export async function POST(req: Request) {
  const event = await req.json();
  // Anyone can send a fake webhook to this endpoint!
  await processStripeEvent(event);
  return Response.json({ received: true });
}

// Good: Verify the webhook signature
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request) {
  const body = await req.text();
  const signature = req.headers.get("stripe-signature")!;

  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(
      body,
      signature,
      process.env.STRIPE_WEBHOOK_SECRET!
    );
  } catch (err) {
    return Response.json({ error: "Invalid signature" }, { status: 400 });
  }

  await processStripeEvent(event);
  return Response.json({ received: true });
}
```
Quick fix: Pin all GitHub Actions to commit SHAs. Verify signatures on every webhook endpoint (Stripe, GitHub, Slack, etc.). Use npm ci instead of npm install in CI so builds install exactly what's in your committed lockfile and fail loudly if package.json and package-lock.json disagree.
9. Logging and Monitoring Failures
What it means: You can't detect attacks because you're not logging security events.
In your stack: You have no idea who logged in, what API keys were used, or whether someone is brute-forcing your endpoints. When something goes wrong, you're blind.
```typescript
// Minimal security logging for a Next.js API route
export async function POST(req: Request) {
  const ip = req.headers.get("x-forwarded-for") ?? "unknown";
  const userAgent = req.headers.get("user-agent") ?? "unknown";
  const { email, password } = await req.json();

  const { data, error } = await supabase.auth.signInWithPassword({
    email,
    password,
  });

  if (error) {
    // Log failed login attempts
    console.warn(
      JSON.stringify({
        event: "login_failed",
        email,
        ip,
        userAgent,
        reason: error.message,
        timestamp: new Date().toISOString(),
      })
    );
    return Response.json({ error: "Invalid credentials" }, { status: 401 });
  }

  // Log successful logins
  console.info(
    JSON.stringify({
      event: "login_success",
      userId: data.user.id,
      ip,
      timestamp: new Date().toISOString(),
    })
  );
  return Response.json(data);
}
```
For structured logging in production, consider a service like Axiom, Datadog, or even a simple table:
```sql
-- Simple security events table
CREATE TABLE security_events (
  id uuid DEFAULT gen_random_uuid() PRIMARY KEY,
  event_type text NOT NULL, -- 'login_failed', 'api_key_used', 'rate_limited'
  user_id uuid REFERENCES auth.users,
  ip_address text,
  metadata jsonb DEFAULT '{}',
  created_at timestamptz DEFAULT now()
);

-- RLS: only service_role can insert, users can read their own
ALTER TABLE security_events ENABLE ROW LEVEL SECURITY;

CREATE POLICY "Users read own events"
  ON security_events FOR SELECT
  USING (user_id = auth.uid());
```
Quick fix: At a minimum, log failed login attempts, API key usage, and permission denials. Use structured JSON logging so you can search and alert on patterns. Set up alerts for spikes in failed logins or 403 responses. If you use Vercel, their built-in logs retain data for 1 hour on free plans — consider forwarding to a persistent log store.
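To keep those events consistent, you could route them all through one small helper. This is a sketch; buildSecurityEvent and logSecurityEvent are names assumed here, and the database insert is left commented because it needs your service-role client:

```typescript
type SecurityEvent = {
  event_type: string; // e.g. "login_failed", "api_key_used", "rate_limited"
  user_id?: string | null;
  ip_address?: string;
  metadata?: Record<string, unknown>;
};

// Build the row once so console output and DB inserts stay in sync
function buildSecurityEvent(event: SecurityEvent) {
  return {
    event_type: event.event_type,
    user_id: event.user_id ?? null,
    ip_address: event.ip_address ?? "unknown",
    metadata: event.metadata ?? {},
  };
}

// Server-side only: emit structured JSON and persist the event
async function logSecurityEvent(event: SecurityEvent) {
  const row = buildSecurityEvent(event);
  console.warn(JSON.stringify({ ...row, timestamp: new Date().toISOString() }));
  // await supabaseAdmin.from("security_events").insert(row);
}
```

A route handler would then call `await logSecurityEvent({ event_type: "login_failed", ip_address: ip, metadata: { email } })` instead of hand-rolling JSON in each endpoint.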
10. Server-Side Request Forgery (SSRF)
What it means: An attacker can make your server send requests to internal resources.
In your stack: Your app has a "preview URL" feature that fetches any URL the user provides. An attacker enters http://169.254.169.254/latest/meta-data/ and reads your cloud provider's metadata (including IAM credentials).
```typescript
// Bad: Fetching any URL the user provides
export async function POST(req: Request) {
  const { url } = await req.json();
  const response = await fetch(url); // SSRF — user controls the URL
  const html = await response.text();
  return Response.json({ preview: html });
}

// Good: Validate the URL and block private/internal addresses
import { isPrivateIP } from "@/lib/url-validation";

const BLOCKED_HOSTS = ["localhost", "127.0.0.1", "0.0.0.0", "metadata.google"];

export async function POST(req: Request) {
  const { url } = await req.json();

  let parsed: URL;
  try {
    parsed = new URL(url);
  } catch {
    return Response.json({ error: "Invalid URL" }, { status: 400 });
  }

  // Only allow HTTP(S)
  if (!["http:", "https:"].includes(parsed.protocol)) {
    return Response.json({ error: "Invalid protocol" }, { status: 400 });
  }

  // Block private/internal addresses
  if (
    BLOCKED_HOSTS.some((h) => parsed.hostname.includes(h)) ||
    parsed.hostname.endsWith(".local") ||
    parsed.hostname.endsWith(".internal") ||
    (await isPrivateIP(parsed.hostname))
  ) {
    return Response.json({ error: "URL not allowed" }, { status: 400 });
  }

  const response = await fetch(url, {
    redirect: "error", // don't follow redirects to internal IPs
    signal: AbortSignal.timeout(5000), // 5s timeout
  });
  const html = await response.text();
  return Response.json({ preview: html.slice(0, 10000) }); // limit response size
}
```
Quick fix: If your app fetches user-provided URLs (link previews, favicon fetching, webhook delivery, URL imports), validate the URL scheme, then resolve the hostname to an IP address and check whether it's private before making the request. Block localhost, 127.0.0.1, 169.254.x.x, 10.x.x.x, 172.16-31.x.x, 192.168.x.x, and any .local/.internal domains. Don't follow redirects blindly — an attacker can redirect from a public URL to an internal one.
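The isPrivateIP helper imported in the example above isn't a real library. Here's a sketch of what it could look like, with two deliberately conservative choices: IPv6 results are treated as unsafe, and anything that doesn't parse or resolve is refused. Note that DNS rebinding can still defeat a resolve-then-fetch pattern, so connect to the resolved IP if your HTTP client allows it:

```typescript
import { lookup } from "node:dns/promises";

// True for IPv4 addresses in private, loopback, or link-local ranges
function isPrivateIPv4(ip: string): boolean {
  const parts = ip.split(".").map(Number);
  if (
    parts.length !== 4 ||
    parts.some((n) => !Number.isInteger(n) || n < 0 || n > 255)
  ) {
    return true; // not a clean IPv4 address; refuse rather than guess
  }
  const [a, b] = parts;
  return (
    a === 0 ||                           // 0.0.0.0/8
    a === 10 ||                          // 10.0.0.0/8
    a === 127 ||                         // loopback
    (a === 169 && b === 254) ||          // link-local, cloud metadata
    (a === 172 && b >= 16 && b <= 31) || // 172.16.0.0/12
    (a === 192 && b === 168)             // 192.168.0.0/16
  );
}

// Resolve the hostname and refuse if ANY returned address is private
export async function isPrivateIP(hostname: string): Promise<boolean> {
  try {
    const addrs = await lookup(hostname, { all: true });
    return addrs.some((a) =>
      a.family === 4 ? isPrivateIPv4(a.address) : true // IPv6: refuse in this sketch
    );
  } catch {
    return true; // resolution failed; treat as unsafe
  }
}
```

For example, `isPrivateIPv4("169.254.169.254")` returns true, which is exactly the metadata endpoint from the attack scenario above.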
What to Do About It
You don't need to become a security expert. You need to:
- Know your attack surface — which of these 10 apply to your app?
- Scan regularly — automated tools catch the obvious stuff
- Fix incrementally — address critical and high severity issues first
- Make it a habit — scan on every deploy, not once a quarter
The OWASP Top 10 hasn't changed dramatically in years because the same mistakes keep happening. The good news: they're all fixable.
If you're looking for a practical walkthrough, check out our OWASP Top 10 Checklist for a step-by-step approach to addressing each vulnerability category. And before you launch, run through our SaaS Security Checklist to make sure nothing slips through.
For API-specific concerns (which many indie hackers underestimate), our API Security Checklist covers authentication, rate limiting, input validation, and more.
FAQ
Do indie hackers really get hacked?
Yes, and more often than you'd think. Small SaaS apps are attractive targets specifically because attackers assume security is weak. Automated bots don't care whether you have 10 users or 10 million — they scan the entire internet for known vulnerabilities like exposed .env files, unprotected admin routes, and SQL injection points. If your app processes payments, stores user data, or has API keys, you're a target. The attacks are automated and indiscriminate. A Shodan or Censys scan will find your app within days of it going live.
Which OWASP risk should I fix first?
Start with Broken Access Control (A01). It's the #1 risk for a reason, and it's the one most likely to cause real damage to your users. If someone can access another user's data, everything else is secondary. After that, tackle Injection (A03) and Cryptographic Failures (A02) — leaked API keys and SQL injection are the two most common ways small apps get compromised. You can address Security Misconfiguration (A05) in an afternoon by adding security headers and running a quick scan. The rest can be prioritized based on your specific architecture.
Is the OWASP Top 10 enough?
The OWASP Top 10 is a great starting point, but it's not exhaustive. It covers the most common and impactful web application security risks, but it intentionally leaves out things like business logic flaws, API-specific vulnerabilities (covered separately in the OWASP API Security Top 10), client-side security issues, and infrastructure misconfigurations. Think of it as the minimum bar, not the finish line. For a more comprehensive approach, combine it with regular automated scanning, dependency auditing, and periodic manual review of your most sensitive code paths.
How do I check for broken access control?
The simplest test: log in as User A, copy a URL that includes a resource ID (like /api/projects/abc123), then log in as User B and paste that URL. If User B can see User A's data, you have broken access control. For API routes, use a tool like curl or Postman to send requests with one user's auth token but another user's resource IDs. In Supabase specifically, check that every table with user data has RLS enabled (ALTER TABLE x ENABLE ROW LEVEL SECURITY) and a policy that filters by auth.uid(). You can also run an automated scan with a tool like CheckVibe that tests for access control issues across your entire application surface.