Security Research
Vibe Coding Security Risks
Vibe coding security risks are the vulnerabilities introduced when AI tools like Cursor, Lovable, Bolt, Replit, and v0 generate application code. Across scans of 1,430+ AI-built applications that uncovered 5,711 vulnerabilities, the most common risks were missing row-level security policies (still the #1 issue), exposed API keys in client bundles, client-side-only authentication, and hallucinated npm packages that attackers can hijack. Below are 24 specific vulnerabilities organized by category, based on real scan data.
Why are vibe-coded apps vulnerable?
AI models optimize for working code, not secure code. They reproduce patterns from training data without understanding your threat model. The result: apps that work perfectly in demos but expose user data, payment flows, and admin access in production.
What are the code generation risks in vibe coding?
Hallucinated Security Functions
Critical: AI invents non-existent security libraries or methods that look legitimate but provide zero protection. Your app appears secure but has no actual defenses.
Outdated Vulnerability Patterns
Critical: AI models trained on older code reproduce deprecated patterns with known CVEs. You inherit vulnerabilities from code written years ago.
Copy-Paste Propagation
High: A single insecure pattern generated early in a session gets repeated across your entire codebase as the AI references its own output.
Incomplete Error Handling
High: AI generates try-catch blocks with empty handlers or generic catches that silently swallow critical security errors.
What authentication risks do AI coding tools introduce?
Client-Side Auth Checks Only
Critical: AI tools often generate auth guards in React/Vue but skip server-side validation entirely. Anyone with dev tools can bypass them.
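A frontend guard can be deleted in the browser's dev tools, so the server must re-verify every request on its own. A minimal sketch of a server-side check using an HMAC-signed token (the token format and the `SESSION_SECRET` name are illustrative assumptions, not any specific framework's API):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Hypothetical secret; in a real app, load it from the environment,
// never from the client bundle.
const SESSION_SECRET = "server-only-secret";

// Assumed token format: "<userId>.<hex HMAC of userId>".
export function signToken(userId: string): string {
  const sig = createHmac("sha256", SESSION_SECRET).update(userId).digest("hex");
  return `${userId}.${sig}`;
}

// Server-side verification: runs on every request, regardless of what
// the UI shows or hides.
export function verifyToken(token: string): string | null {
  const dot = token.lastIndexOf(".");
  if (dot < 1) return null;
  const userId = token.slice(0, dot);
  const sig = Buffer.from(token.slice(dot + 1), "hex");
  const expected = createHmac("sha256", SESSION_SECRET).update(userId).digest();
  // Constant-time comparison avoids timing side channels.
  if (sig.length !== expected.length || !timingSafeEqual(sig, expected)) {
    return null;
  }
  return userId;
}
```

The point is placement, not the signing scheme: whatever auth library you use, the check must live in server code that the client cannot edit.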
Hardcoded API Keys
Critical: AI frequently embeds Supabase anon keys, Firebase configs, and API secrets directly in frontend code visible to anyone.
Missing Row-Level Security
Critical: Supabase and Firebase apps built with AI rarely have proper RLS policies. Any authenticated user can read/modify any data.
Weak Session Management
High: AI generates predictable session tokens, skips expiration logic, or stores tokens insecurely in localStorage.
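The fix has two halves: tokens must be unguessable (cryptographic randomness, not timestamps or counters) and must expire. A sketch, with the 15-minute TTL as an illustrative default; store the result server-side or in an httpOnly cookie rather than localStorage:

```typescript
import { randomBytes } from "node:crypto";

interface Session {
  token: string;
  expiresAt: number; // epoch milliseconds
}

// 32 random bytes ≈ 256 bits of entropy: unguessable, unlike
// Date.now()-based or sequential IDs.
export function createSession(ttlMs = 15 * 60 * 1000): Session {
  return {
    token: randomBytes(32).toString("base64url"),
    expiresAt: Date.now() + ttlMs,
  };
}

// Check expiry on every request; an old token is as good as no token.
export function isSessionValid(s: Session): boolean {
  return Date.now() < s.expiresAt;
}
```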
How does AI-generated code expose sensitive data?
Over-Permissive CORS
High: AI defaults to cors({ origin: "*" }), which lets any website make authenticated requests to your API.
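The fix is an explicit origin allowlist. Shown here as a standalone decision function so the logic is visible (the origins are illustrative); the cors middleware itself also accepts an array of origins directly:

```typescript
// Explicit allowlist instead of origin: "*". These origins are examples.
const ALLOWED_ORIGINS = new Set([
  "https://app.example.com",
  "https://admin.example.com",
]);

// Returns the value to echo back in Access-Control-Allow-Origin,
// or null to refuse cross-origin access.
export function corsOrigin(requestOrigin: string | undefined): string | null {
  if (!requestOrigin) return null; // non-browser clients send no Origin header
  return ALLOWED_ORIGINS.has(requestOrigin) ? requestOrigin : null;
}
```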
Verbose Error Messages
High: Stack traces, database schemas, and internal paths leaked to users through AI-generated error handlers.
Excessive API Responses
Medium: AI returns entire database records including sensitive fields like password hashes, emails, and internal IDs.
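The defensive pattern is to allowlist response fields rather than serialize the whole row. A minimal sketch; the user shape and field names are illustrative:

```typescript
// Return only the fields a response is meant to expose; everything else
// (password hashes, internal flags) never leaves the server.
export function pick<T extends object, K extends keyof T>(
  row: T,
  keys: readonly K[],
): Pick<T, K> {
  const out = {} as Pick<T, K>;
  for (const k of keys) {
    if (k in row) out[k] = row[k];
  }
  return out;
}

// Example record shape (illustrative).
const user = {
  id: 7,
  email: "a@example.com",
  passwordHash: "redacted",
  isAdmin: false,
};
export const publicUser = pick(user, ["id", "email"] as const);
```

Schema libraries and ORM "select" clauses play the same role; the key property is that exposure is opt-in per field, so a newly added sensitive column stays private by default.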
Unprotected Admin Endpoints
Critical: AI creates admin routes without authentication middleware, assuming the frontend will handle access control.
What are the dependency risks of AI-generated code?
Phantom Dependencies
High: AI suggests packages that do not exist on npm/PyPI. Attackers register these names and publish malicious code.
Outdated Package Versions
Medium: AI recommends specific versions from its training data that now have known security vulnerabilities.
Unnecessary Dependencies
Medium: AI imports heavy libraries for simple operations, expanding your attack surface with code you do not need.
Missing Lock Files
Medium: AI-generated projects often omit lock files, allowing dependency versions to drift silently between installs.
What deployment security risks does vibe coding create?
Secrets in Source Code
Critical: Database URLs, JWT secrets, and payment keys committed to git repositories because AI put them in config files.
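The conventional fix is to read every secret from the environment and fail fast at startup if one is missing, so a misconfigured deploy breaks loudly instead of shipping a hardcoded fallback. A small sketch (the variable names in the comment are illustrative):

```typescript
// Fail fast at startup if a secret is missing, instead of hardcoding it
// in a config file that ends up in git.
export function requireEnv(name: string): string {
  const value = process.env[name];
  if (value === undefined || value === "") {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage (names illustrative): secrets come from the environment, the
// .env file stays in .gitignore, and nothing lands in git or the bundle.
// const dbUrl = requireEnv("DATABASE_URL");
// const jwtSecret = requireEnv("JWT_SECRET");
```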
Missing HTTPS Enforcement
High: AI configures HTTP servers without TLS redirects, leaving data transmitted in plaintext.
Debug Mode in Production
High: AI leaves development flags enabled: debug logging, hot reload endpoints, source maps exposed to users.
No Rate Limiting
High: AI-generated APIs have zero throttling. Attackers can brute-force login, scrape data, or run up your cloud bill.
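Even a simple fixed-window limiter keyed by IP or user ID blocks naive brute-forcing. A sketch of the core logic (limits are illustrative; a multi-instance deployment would back this with Redis or similar rather than in-process memory):

```typescript
// Fixed-window rate limiter kept in memory.
export class RateLimiter {
  private hits = new Map<string, { count: number; windowStart: number }>();
  private limit: number;    // max requests per window
  private windowMs: number; // window length in milliseconds

  constructor(limit: number, windowMs: number) {
    this.limit = limit;
    this.windowMs = windowMs;
  }

  // Returns true if the request is allowed; `now` is injectable for testing.
  allow(key: string, now = Date.now()): boolean {
    const entry = this.hits.get(key);
    if (!entry || now - entry.windowStart >= this.windowMs) {
      this.hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= this.limit;
  }
}
```

On a login endpoint you would call allow() with the client IP (or the target account) before checking credentials, and return 429 when it refuses.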
What business logic vulnerabilities do AI tools create?
Payment Bypass
Critical: AI implements payment flows that can be skipped by modifying client-side state or calling API endpoints directly.
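The underlying rule: never trust a price or total sent by the client; the server recomputes everything from its own catalog. A sketch with an illustrative price table:

```typescript
// Server-side price table; the client sends only item IDs and quantities,
// never prices or totals. SKUs and prices here are illustrative.
const PRICE_CENTS: Record<string, number> = {
  "sku-basic": 900,
  "sku-pro": 2900,
};

interface LineItem {
  sku: string;
  quantity: number;
}

// Recompute the total from trusted data; reject unknown SKUs and
// non-positive or fractional quantities.
export function orderTotalCents(items: LineItem[]): number {
  return items.reduce((total, { sku, quantity }) => {
    const unit = PRICE_CENTS[sku];
    if (unit === undefined) throw new Error(`Unknown SKU: ${sku}`);
    if (!Number.isInteger(quantity) || quantity < 1) {
      throw new Error(`Invalid quantity for ${sku}`);
    }
    return total + unit * quantity;
  }, 0);
}
```

The same principle covers order state: fulfillment should key off a server-verified payment event (e.g. a provider webhook), never a client-side "paid" flag.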
Race Conditions
High: AI generates concurrent database operations without transactions or locks, enabling double-spending and data corruption.
Insecure Direct Object References
High: AI uses sequential IDs in URLs without ownership checks. Users can access other users' data by changing the ID.
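The fix is an ownership check on every lookup: fetch by ID, then verify the caller owns the record before returning it. A sketch with an in-memory map standing in for the database (names and data are illustrative):

```typescript
interface Doc {
  id: string;
  ownerId: string;
  body: string;
}

// Illustrative in-memory store standing in for a database table.
const docs = new Map<string, Doc>([
  ["d1", { id: "d1", ownerId: "alice", body: "alice's notes" }],
]);

// Fetch by ID, then verify ownership. Returning null for both "missing"
// and "not yours" avoids leaking which IDs exist.
export function getOwnedDoc(callerId: string, docId: string): Doc | null {
  const doc = docs.get(docId);
  if (!doc || doc.ownerId !== callerId) return null;
  return doc;
}
```

With a real database you can push the same check into the query itself (WHERE id = ? AND owner_id = ?), so there is no window where the unfiltered row exists in application code.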
Missing Input Validation
High: AI trusts all user input. No length limits, type checks, or sanitization on form fields and API parameters.
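Validation means rejecting anything that is not the exact shape you expect before it touches business logic. A hand-rolled sketch to show the idea (field names and limits are illustrative; in practice a schema library such as zod typically plays this role):

```typescript
// Validate an untyped request body; throw on anything unexpected.
export function validateSignup(input: unknown): { email: string; name: string } {
  if (typeof input !== "object" || input === null) {
    throw new Error("Body must be an object");
  }
  const { email, name } = input as Record<string, unknown>;
  // Type check + length limit + minimal shape check on every field.
  if (typeof email !== "string" || email.length > 254 || !email.includes("@")) {
    throw new Error("Invalid email");
  }
  if (typeof name !== "string" || name.length < 1 || name.length > 100) {
    throw new Error("Invalid name");
  }
  return { email: email.trim(), name: name.trim() };
}
```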
Which AI coding tools are most affected by security risks?
Every AI coding tool produces these security risks. Full-stack tools like Lovable and Bolt that control the entire application have the highest risk because they generate both frontend and backend code. Code assistants like Cursor and Copilot have lower risk because a developer reviews each change. The severity depends on how much of the stack the tool controls:
Full-stack builders
Highest risk. Generate entire apps including auth, database, and deployment.
IDE assistants
Medium risk. Generate code within existing projects but may miss security context.
Code completion
Lower risk. Suggest snippets, but the developer controls architecture decisions.
Related Resources
Common Security Flaws
Detailed code examples of each vulnerability with fixes
Token Leak Checker
Free tool to scan for exposed API keys in your app
Vibe Code Scanner
Universal security scanner for AI-generated apps
Firebase Security Scanner
Check Firestore rules and Cloud Storage config
Node.js Security Scanner
Find Express misconfigs and dependency vulnerabilities
Is Lovable Safe?
Security analysis of the most popular AI app builder
Find these risks in your app
VibeEval scans for all 24 risks automatically. Paste your URL and get a security report in under 5 minutes.
Scan your app for free