
Security testing has historically been treated as a separate discipline — something done by specialized security engineers, pentesting firms, or compliance teams, largely disconnected from the daily development workflow.
This model is increasingly inadequate. Security vulnerabilities are introduced by developers (and now by AI coding agents), and the best time to catch them is during development, not in a quarterly security audit. This guide covers the security testing practices that developers can and should own as part of their normal workflow.
Why Security Testing Belongs in the Development Workflow
The shift-left principle applies to security as much as it does to functionality: the earlier a security issue is caught, the cheaper it is to fix.
A SQL injection vulnerability caught in a code review or automated test takes minutes to fix. The same vulnerability caught in a penetration test takes days — the pen tester's report has to be triaged, prioritized, reproduced, and fixed, typically weeks after the code was written. Caught in production after a data breach, it's a regulatory incident with potentially devastating consequences.
For teams using AI coding tools, developer-owned security testing is especially important. AI coding agents generate code quickly and often correctly from a functional standpoint, but they can introduce security issues that look plausible: authentication checks in the wrong place, missing input validation, overly permissive CORS settings, SQL built from user input without parameterization. These aren't exotic vulnerabilities — they're the OWASP Top 10 applied to AI-generated implementations.
The Key Security Tests Every Developer Should Run
Authentication and Authorization Testing
The most common and most severe security vulnerabilities in web applications are authentication and authorization failures. Test these explicitly:
Authentication tests:
Unauthenticated requests to protected endpoints return 401, not 200 or a redirect that leaks data
Authentication tokens expire correctly and expired tokens are rejected
Logout invalidates the session — the token can't be reused after logout
Brute force protection exists on login endpoints (rate limiting, lockout)
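The token-expiry checks above can be sketched in a few lines. This is a minimal illustration with hypothetical helper names (`issue_token`, `validate_token`) and plain dicts standing in for tokens; a real application would use a signed-token library (e.g. a JWT implementation) rather than hand-rolled structures.

```python
import time

# Hypothetical sketch: enforce token expiry and reject missing tokens.
# Plain dicts stand in for signed tokens purely for illustration.

def issue_token(user_id, ttl_seconds=3600, now=None):
    now = time.time() if now is None else now
    return {"user_id": user_id, "expires_at": now + ttl_seconds}

def validate_token(token, now=None):
    now = time.time() if now is None else now
    if token is None:
        return 401          # unauthenticated request -> 401, not 200
    if now >= token["expires_at"]:
        return 401          # expired tokens are rejected
    return 200

token = issue_token("alice", ttl_seconds=60, now=1000.0)
assert validate_token(None, now=1000.0) == 401    # no token at all
assert validate_token(token, now=1030.0) == 200   # still inside the TTL
assert validate_token(token, now=2000.0) == 401   # past expiry
```

A test suite would exercise exactly these three cases against the real endpoints: no credentials, valid credentials, and expired credentials.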
Authorization tests (the harder category):
A user can only access their own resources, not other users' data (IDOR — Insecure Direct Object Reference)
Privilege escalation is not possible: a regular user cannot access admin endpoints
Horizontal privilege escalation is not possible: user A cannot access user B's data by guessing user B's ID
IDOR vulnerabilities are one of the most common security bugs in AI-generated code. AI coding agents often implement endpoints that accept a user ID as a parameter and return data for that ID, without verifying that the authenticated user actually owns (or is otherwise authorized to access) the requested resource.
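The fix is a single ownership check before returning data. The sketch below uses a hypothetical in-memory record store and return shape; the essential line is the comparison between the authenticated user and the resource's owner.

```python
# Hypothetical record store; in a real app this is a database lookup.
RECORDS = {
    "42": {"owner": "alice", "body": "alice's invoice"},
    "43": {"owner": "bob",   "body": "bob's invoice"},
}

def get_record(authenticated_user, record_id):
    record = RECORDS.get(record_id)
    if record is None:
        return 404, None
    if record["owner"] != authenticated_user:
        # The ownership check that closes the IDOR hole: user A cannot
        # read user B's data just by guessing user B's record ID.
        return 403, None
    return 200, record["body"]

assert get_record("alice", "42") == (200, "alice's invoice")
assert get_record("alice", "43") == (403, None)   # horizontal escalation blocked
```

An authorization test suite does the inverse: authenticate as user A, request user B's resource IDs, and assert the response is 403 (or 404), never 200.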
TestSprite's agentic testing engine includes authorization testing as a standard part of its test generation. When it reads your requirements and detects user-specific resources, it generates tests that verify each resource is properly access-controlled.
Input Validation Testing
Test that your application correctly handles malicious or malformed input:
SQL injection: Does the application use parameterized queries rather than building SQL strings from user input?
XSS: Does the application escape user-provided content before rendering it in HTML?
Path traversal: Does the application validate file paths before accessing the filesystem?
Oversized input: Does the application handle unexpectedly large inputs without crashing?
Special characters: Does the application handle Unicode, null bytes, and other special characters gracefully?
For AI-generated code, focus especially on any endpoint that accepts user input and uses it in a database query, file operation, or HTML rendering context.
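Parameterization is the defense worth demonstrating concretely. The sketch below uses an in-memory SQLite table to contrast the vulnerable string-interpolation pattern with a bound parameter; the table and payload are illustrative only.

```python
import sqlite3

# Illustrative in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret'), ('bob', 'hunter2')")

malicious = "alice' OR '1'='1"

# Vulnerable pattern: user input interpolated into the query string.
unsafe = conn.execute(
    f"SELECT secret FROM users WHERE name = '{malicious}'"
).fetchall()

# Safe pattern: the driver binds the value; the payload is treated as a
# literal string and matches no rows.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (malicious,)
).fetchall()

assert len(unsafe) == 2   # the injection returned every row
assert safe == []         # the parameterized query returned nothing
```

An input-validation test can send exactly this kind of payload to each endpoint and assert that the response contains no unexpected records.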
API Security Testing
Modern applications are largely API-driven, and APIs have specific security concerns:
Missing authentication: Every API endpoint should require authentication unless explicitly designed as a public endpoint. Test that authenticated endpoints reject unauthenticated requests.
Excessive data exposure: Does the API return more data than the client needs? Responses should include only the minimum necessary fields.
Mass assignment: Does the API accept and apply arbitrary properties in request bodies? Only whitelisted properties should be applied.
Rate limiting: Are sensitive operations (login, password reset, email verification) rate-limited to prevent abuse?
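The mass-assignment item above comes down to a field whitelist. This is a minimal sketch with made-up field names; the point is that update handlers copy only explicitly allowed fields from the request body, so a client cannot smuggle in a property like role.

```python
# Hypothetical whitelist of client-updatable fields.
ALLOWED_UPDATE_FIELDS = {"display_name", "bio"}

def apply_update(user, body):
    """Copy only whitelisted fields from the request body onto the user."""
    ignored = set(body) - ALLOWED_UPDATE_FIELDS
    for field in ALLOWED_UPDATE_FIELDS & set(body):
        user[field] = body[field]
    return user, ignored

user = {"id": 7, "display_name": "alice", "role": "user"}
user, ignored = apply_update(user, {"display_name": "Alice", "role": "admin"})

assert user["role"] == "user"   # privilege-escalation attempt silently dropped
assert ignored == {"role"}
```

A test for this sends an update request containing an unexpected privileged field and asserts the stored record is unchanged in that field.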
Dependency Vulnerability Scanning
A significant portion of security vulnerabilities in modern applications come from dependencies with known vulnerabilities, not from application code. Automated scanning is essential:
Integrate dependency scanning into your CI/CD pipeline. Don't merge PRs that introduce high or critical severity dependency vulnerabilities.
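As a starting point, most ecosystems ship an audit command whose non-zero exit code can gate a CI job directly. The commands below are common examples; substitute whatever matches your stack.

```shell
# Node: fail on high- or critical-severity advisories.
npm audit --audit-level=high

# Python: audit installed packages against known-vulnerability databases
# (pip-audit is a PyPA tool; install it separately).
pip-audit
```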
Security Testing in CI/CD
A practical security testing setup for developer-owned security:
On every PR:
Static analysis (ESLint security plugins, Bandit for Python, etc.)
Dependency vulnerability scanning
TestSprite's authorization tests (runs as part of the standard E2E suite)
Weekly:
Full SAST (Static Application Security Testing) scan
Dependency audit with full severity reporting
Before major releases:
Dynamic security scanning against staging (OWASP ZAP in automated mode)
Manual review of authentication and authorization logic for new features
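For the dynamic-scanning step, OWASP ZAP's baseline scan is designed for exactly this kind of automated pre-release run. The command below is one common Docker invocation; the image name, tag, and the staging URL are placeholders to adapt to your environment (see the ZAP documentation for current options).

```shell
# Passive baseline scan of a staging deployment; exits non-zero on findings,
# so it can gate a release pipeline. URL is a placeholder.
docker run --rm -t ghcr.io/zaproxy/zaproxy:stable \
  zap-baseline.py -t https://staging.example.com
```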
What AI Coding Tools Get Wrong on Security
Specific patterns to watch for in AI-generated code:
Trusting user input. AI coding agents often generate code that accepts user-provided IDs directly in database queries or file paths without validation. Always verify that the authenticated user is authorized to access the requested resource.
Returning too much data. AI-generated API responses often return full database records when a subset of fields is appropriate. Review what each API endpoint exposes.
Hardcoded secrets in generated files. AI coding agents sometimes generate example code with placeholder secrets that look like real values. Audit generated configuration files and ensure no real credentials were generated as examples.
Permissive CORS. AI-generated server configuration sometimes uses a wildcard origin (Access-Control-Allow-Origin: *), which is convenient for development but not appropriate for production.
Missing rate limiting. AI coding agents generate functional authentication endpoints that often lack rate limiting. Add rate limiting explicitly to all authentication, password reset, and email verification endpoints.
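The rate-limiting point is easy to act on. Below is a minimal sliding-window limiter sketch (class and parameter names are made up for illustration); production systems typically back this with a shared store such as Redis rather than per-process memory, and return HTTP 429 when a request is throttled.

```python
import time
from collections import deque

class RateLimiter:
    """Sliding-window limiter: at most max_requests per key per window."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = {}   # key (e.g. client IP) -> deque of hit timestamps

    def allow(self, key, now=None):
        now = time.time() if now is None else now
        q = self.hits.setdefault(key, deque())
        while q and now - q[0] >= self.window:
            q.popleft()                  # drop hits outside the window
        if len(q) >= self.max_requests:
            return False                 # caller should respond with 429
        q.append(now)
        return True

limiter = RateLimiter(max_requests=5, window_seconds=60)
results = [limiter.allow("1.2.3.4", now=float(i)) for i in range(7)]
assert results == [True] * 5 + [False] * 2   # the sixth attempt is throttled
```

A brute-force test then hammers the login endpoint and asserts that attempts beyond the limit are rejected rather than processed.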
TestSprite's security testing coverage verifies authorization logic, authentication enforcement, and input handling as part of its standard agentic testing suite — catching the most common AI-generated security issues before they reach production.
