Secure AI Coding Practices
Learn how to craft security-focused prompts and follow best practices when using AI coding assistants. Generate more secure code from Copilot, Cursor, and other AI tools with proper prompt engineering.
Security Requires Explicit Prompting
AI coding tools optimize for functionality, not security. A generic prompt like "add user login" will typically produce working but insecure code. You must explicitly request secure implementations in every security-sensitive prompt.
Secure Prompting Checklist
Follow these 12 practices when prompting AI coding assistants. Critical items should be included in every security-sensitive prompt.
Include security context in prompts
Explicitly request secure implementations: "Generate secure authentication using bcrypt" rather than just "add login".
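For example, here is the kind of output a security-aware prompt should steer toward. This is a minimal sketch in Python using the standard library's scrypt (a memory-hard KDF playing the role bcrypt plays in the Node ecosystem); the parameters shown are illustrative, not tuned recommendations.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a fresh random salt using scrypt."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the hash and compare in constant time to avoid timing leaks."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

The key properties to check in AI output: a per-password random salt, a slow/memory-hard hash (never plain SHA-256), and a constant-time comparison.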
Specify security libraries
Name established security libraries in prompts: "Use express-validator for input sanitization" or "Implement JWT with jsonwebtoken library".
Request input validation
Always ask for validation: "Add input validation and sanitization for all user inputs" when generating endpoints.
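Validation code generated from such a prompt should look something like this sketch: an allowlist pattern with explicit length bounds, rejecting anything that doesn't match rather than trying to strip out bad characters. The username rules here are illustrative assumptions.

```python
import re

# Allowlist: letters, digits, underscore; 3-30 characters
USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,30}$")

def validate_username(raw: str) -> str:
    """Return a normalized username, or raise if input is out of policy."""
    value = raw.strip()
    if not USERNAME_RE.fullmatch(value):
        raise ValueError("username must be 3-30 characters: letters, digits, underscore")
    return value
```

Prefer allowlists ("what is permitted") over denylists ("what is blocked") when reviewing AI-generated validators.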
Demand parameterized queries
Explicitly state: "Use parameterized queries" or "Use prepared statements" when working with databases.
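The difference this prompt makes is visible in the generated query code. A minimal sketch using Python's built-in sqlite3 (the same placeholder pattern exists in every major driver): user input is bound as a parameter, never concatenated into the SQL string.

```python
import sqlite3

def find_user(conn: sqlite3.Connection, name: str):
    """Look up a user by name with a bound parameter, not string formatting."""
    # The ? placeholder keeps attacker-controlled input out of the SQL text
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchone()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users (name) VALUES (?)", ("alice",))
```

With parameters bound this way, a classic injection payload is treated as literal data and simply matches nothing.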
Ask for error handling
Request proper error handling: "Add try-catch with safe error messages that do not expose system details".
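The pattern to look for in the generated code: full detail goes to the server-side log, while the client gets a generic message. A hedged sketch (the response shape and logger name are illustrative assumptions):

```python
import logging

logger = logging.getLogger("app")

def safe_fetch(store: dict, key: str) -> dict:
    """Return a result dict; never echo internal exception detail to the client."""
    try:
        return {"ok": True, "value": store[key]}
    except KeyError:
        # Stack trace and key stay server-side in the log
        logger.exception("lookup failed")
        # Client sees only a generic, non-revealing message
        return {"ok": False, "error": "Resource not found"}
```

Raw exception text in an API response (stack traces, file paths, SQL fragments) is a common AI-generated-code smell worth prompting away.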
Specify environment variables
Prompt for config management: "Store API keys in environment variables, never hardcode" when adding integrations.
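A minimal sketch of the fail-fast pattern to expect: read the secret from the environment at startup and refuse to run without it, rather than falling back to a hardcoded default. The variable name PAYMENT_API_KEY is a hypothetical example.

```python
import os

def get_api_key() -> str:
    """Read the key from the environment; fail fast if it is missing."""
    key = os.environ.get("PAYMENT_API_KEY")  # hypothetical variable name
    if not key:
        raise RuntimeError("PAYMENT_API_KEY is not set")
    return key
```

Watch for AI output that "helpfully" supplies a placeholder literal like `API_KEY = "sk-..."`; that placeholder tends to get committed.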
Request rate limiting
Include throttling requirements: "Add rate limiting to prevent brute force attacks" for authentication endpoints.
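In production you would normally reach for middleware (e.g. a Redis-backed limiter), but the core idea the prompt should produce looks like this sliding-window sketch, keyed per client; the limit and window values are illustrative.

```python
import time
from collections import defaultdict

class RateLimiter:
    """Allow at most `limit` attempts per `window` seconds per client key."""

    def __init__(self, limit: int, window: float):
        self.limit = limit
        self.window = window
        self.hits = defaultdict(list)

    def allow(self, key, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window
        recent = [t for t in self.hits[key] if now - t < self.window]
        self.hits[key] = recent
        if len(recent) >= self.limit:
            return False  # throttled: too many recent attempts
        recent.append(now)
        return True
```

Apply the limiter before the password check so failed logins count against the budget.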
Ask for authorization checks
Explicitly request: "Verify user has permission to access this resource" when building protected endpoints.
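Authentication proves who the caller is; authorization checks what they may touch, per object. A deny-by-default sketch (the role names and "404 for both" choice are illustrative assumptions):

```python
def can_access(user: dict, resource: dict) -> bool:
    """Object-level check: admins or the resource owner only; deny by default."""
    if user.get("role") == "admin":
        return True
    return resource.get("owner_id") == user.get("id")

def get_document(user: dict, documents: dict, doc_id: str) -> dict:
    doc = documents.get(doc_id)
    if doc is None or not can_access(user, doc):
        # Returning "not found" for both cases avoids leaking which IDs exist
        raise LookupError("not found")
    return doc
```

AI tools frequently generate the token check but skip the ownership check, which is exactly the broken-object-level-authorization bug to prompt against.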
Specify secure defaults
Request secure configurations: "Set secure CORS policy" or "Configure CSP headers" when setting up servers.
Request security headers
Ask for headers: "Add security headers including CSP, HSTS, X-Frame-Options" when configuring middleware.
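The generated middleware should end up merging a fixed set of response headers onto every response, roughly like this sketch (the specific policy values are illustrative starting points, not a one-size-fits-all recommendation):

```python
# Baseline hardening headers; tighten CSP for your actual asset origins
SECURITY_HEADERS = {
    "Content-Security-Policy": "default-src 'self'",
    "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
    "X-Frame-Options": "DENY",
    "X-Content-Type-Options": "nosniff",
}

def apply_security_headers(headers: dict) -> dict:
    """Merge the baseline security headers into an outgoing response's headers."""
    merged = dict(headers)
    merged.update(SECURITY_HEADERS)
    return merged
```

In an Express or similar stack, a maintained package such as helmet covers the same ground; the point is to verify the headers are actually present in responses.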
Demand logging best practices
Specify: "Log security events but never log passwords or sensitive data" when implementing logging.
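A simple way to enforce this in generated code is to redact known-sensitive fields before a structured event reaches the logger. A sketch, assuming dict-shaped log events and an illustrative key list:

```python
# Field names treated as sensitive; extend for your own schema
SENSITIVE_KEYS = {"password", "token", "secret", "authorization"}

def redact(event: dict) -> dict:
    """Replace sensitive values before the event is written to the log."""
    return {
        key: ("[REDACTED]" if key.lower() in SENSITIVE_KEYS else value)
        for key, value in event.items()
    }
```

Routing every log call through a filter like this is more reliable than hoping each call site remembers what not to log.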
Review and iterate
Never accept first output. Review generated code, identify security gaps, and refine with security-focused follow-up prompts.
Prompt Examples: Bad vs Good
Bad Prompt
Add user login
Good Prompt
Implement secure user authentication using bcrypt for password hashing, with rate limiting and session management. Store secrets in environment variables.
Bad Prompt
Create API to get user data
Good Prompt
Create an authenticated API endpoint that returns user data. Verify the JWT, check user authorization, validate input IDs, use parameterized queries, and return only necessary fields.
Related Resources
Verify Your AI-Generated Code
Even with secure prompts, AI-generated code needs verification. VibeEval automatically scans for security issues in code from Copilot, Cursor, and other AI tools.
Start Free Security Scan