Writing tests is one of those things every developer knows they should do more of. It’s also one of the first things that gets cut when deadlines loom. AI changes this equation significantly—not by making testing optional, but by making it fast enough that skipping it stops feeling justified.
This article covers how to use AI effectively to write better tests, find edge cases you’d miss, and build a testing habit that actually sticks.
Why Testing Gets Skipped
Before getting into how AI helps, it’s worth being honest about why tests get skipped in the first place:
- Writing tests takes time, especially for complex business logic
- Figuring out what to test (and what not to) requires thought
- Mocking dependencies is tedious boilerplate
- Edge cases are hard to imagine systematically
- Tests for someone else’s code feel foreign
AI addresses most of these directly. The tedium disappears, edge-case discovery improves, and the time cost drops dramatically.
Starting with What You Have
The fastest win is generating tests for existing code. You don’t need to change your workflow—just feed AI the code you’ve already written.
```
Here is a TypeScript service method:

[paste the function]

Write Jest unit tests for this function. Cover:
- The happy path
- All early return conditions
- Error cases
- Boundary values for numeric inputs
```
This prompt alone will produce something useful in under a minute. The key is to be explicit about what you want covered—AI won’t automatically guess your risk tolerance.
A Concrete Example
Say you have this payment validation function:
```typescript
function validatePaymentAmount(amount: number, currency: string): ValidationResult {
  if (amount <= 0) {
    return { valid: false, error: "Amount must be positive" };
  }
  if (amount > 1_000_000) {
    return { valid: false, error: "Amount exceeds maximum limit" };
  }
  const supportedCurrencies = ["USD", "EUR", "GBP"];
  if (!supportedCurrencies.includes(currency)) {
    return { valid: false, error: `Currency ${currency} is not supported` };
  }
  return { valid: true };
}
```
A prompt asking AI to “write tests for this” might produce basic happy/sad path coverage. But a more directed prompt will produce much more:
```
Write Jest tests for validatePaymentAmount. I want:
1. Positive assertions for all valid currency/amount combinations
2. Boundary tests: amount = 0, amount = 1, amount = 1_000_000, amount = 1_000_001
3. All invalid currency cases
4. Tests that document the exact error messages returned
5. A test that verifies the function is pure (same input = same output)
```
That second prompt produces tests that actually serve as documentation.
Finding Edge Cases You’d Miss
This is where AI genuinely outperforms solo thinking. When you write tests yourself, you test the cases you imagined while writing the code. AI brings a different perspective.
The Adversarial Prompt
```
You are a QA engineer trying to break this function. List 10 edge cases
that a developer might not think to test for. Consider:
- Internationalization and encoding issues
- Concurrency if applicable
- Extreme values and overflow conditions
- Empty, null, and undefined inputs
- Inputs that look valid but aren't
- Race conditions or timing issues

Function: [paste your function]
```
Apply this to the payment validator and you’ll get cases like:
- What if `currency` is an empty string?
- What if `amount` is `NaN` or `Infinity`?
- What if `amount` is `-0`?
- What if `currency` is lowercase `"usd"`?
- What if `amount` is a floating-point value with precision issues like `1_000_000.000001`?
These are real bugs waiting to happen. The adversarial prompt surfaces them before they reach production.
Writing Tests Before Code
The real power shift comes when you flip the order: write tests first with AI, then implement to make them pass. This is traditional TDD, but with the friction removed.
The Test-First Workflow
Step 1: Describe the behavior you need in natural language.
```
I need to build a rate limiter for our API. It should:
- Allow N requests per time window per user
- Return remaining quota in the response
- Block requests when quota is exhausted
- Reset quota at the start of each new window
- Handle concurrent requests without race conditions

Write comprehensive Jest tests for a RateLimiter class that implements this.
Don't implement the class yet—just the tests.
```
Step 2: Review and refine the tests. This forces you to think through the requirements before touching implementation code.
Step 3: Ask AI to implement the class to satisfy the tests.
```
Here are my Jest tests for RateLimiter:

[paste the tests]

Now implement the RateLimiter class to make all these tests pass.
Use an in-memory store for now. The implementation should be clean
and handle the concurrency case correctly.
```
The tests become your specification. You catch requirement gaps before writing a line of production code.
Integration and E2E Test Generation
AI isn’t just useful for unit tests. Integration and end-to-end tests have even more boilerplate—and AI handles that boilerplate well.
Integration Test Pattern
```
I have a NestJS controller endpoint:

POST /orders
- Validates the request body
- Creates an order in PostgreSQL
- Publishes an event to RabbitMQ
- Returns the created order with 201

Write Supertest integration tests for this endpoint. Use a test database.
Include tests for:
- Successful order creation
- Validation failures (missing fields, invalid types)
- Database connection failure
- Message queue failure (should still return success but log the error)
```
This produces a scaffold you can drop into your test file and fill in with your actual DTOs and services. The structure is right; you just wire in your specifics.
E2E Test Generation
For browser-based E2E tests, describe the user journey:
```
Write a Playwright test for the checkout flow:
1. User has items in their cart
2. User clicks "Proceed to checkout"
3. User fills in shipping address
4. User selects payment method
5. User clicks "Place order"
6. User sees order confirmation with order number

Include assertions at each step. Handle the case where payment fails.
Use page object model pattern.
```
AI will produce a well-structured Playwright test complete with page objects, assertions, and the failure scenario. You save the 30 minutes of boilerplate setup.
Generating Tests from Documentation
If you have API documentation, requirements documents, or even Jira tickets, AI can generate tests directly from them.
```
Here is our API documentation for the user authentication endpoint:

[paste OpenAPI spec or written requirements]

Generate a comprehensive test suite covering all documented behaviors,
status codes, and error conditions. Use Jest and supertest.
```
This is particularly valuable when inheriting a codebase with documentation but no tests. You can bootstrap coverage quickly without having to read all the code first.
Improving Existing Tests
AI isn’t only for writing new tests—it’s useful for improving tests you already have.
The Test Review Prompt
```
Review these tests and identify:
1. Cases that are missing
2. Assertions that are too weak (e.g., just checking it doesn't throw)
3. Tests that are testing implementation details rather than behavior
4. Opportunities to consolidate repetitive tests with parameterization
5. Any tests that might give false confidence

[paste existing tests]
```
This is a quick way to strengthen a test suite without starting from scratch.
Parameterizing Repetitive Tests
A common smell is writing the same test 10 times with slightly different inputs:
```typescript
// Before: repetitive and fragile
it("should reject empty email", () => { ... });
it("should reject email without @", () => { ... });
it("should reject email without domain", () => { ... });
```
Ask AI to refactor these into parameterized tests:
```
Refactor these tests into a single it.each or describe.each block.
Keep the test descriptions readable. Maintain all the existing assertions.

[paste the repetitive tests]
```
The result is a cleaner test file and easier maintenance when the cases change.
Handling Mocks and Test Doubles
Mocking is where tests get painful. Dependencies on databases, external APIs, and message queues add complexity that discourages test writing. AI handles this well.
```
This service depends on:
- UserRepository (TypeORM repository)
- EmailService (external HTTP API wrapper)
- CacheService (Redis wrapper)

Write Jest mocks for all three dependencies and use them in tests for
the following method: [paste method]

Use jest.mock() for the modules and jest.spyOn() for specific methods
where I want to verify calls. Show me the full test file structure.
```
AI will produce realistic mock implementations with correct TypeScript types, spy setups, and assertion patterns. What normally takes 20 minutes of documentation-reading takes 30 seconds.
Mocking the API layer for E2E and integration tests
Jest mocks work well at the unit level, but for integration and E2E tests you often need something at the HTTP layer — a real server that responds to requests the same way the backend would, so your frontend or service-under-test doesn’t need to know it’s talking to a stub.
This is where I use mockr, an open-source CLI tool I built for exactly this use case. You define routes and stub responses in a config file, start the server, and point your tests at it instead of the real backend. No code changes to the service under test, no interceptors, no patching at the HTTP client level.
A minimal mockr config for the order API used in the integration test example above:
```toml
[[routes]]
method = "POST"
match = "/orders"
enabled = true
fallback = "created"

[routes.cases.created]
status = 201
json = '{"id": "{{uuid}}", "status": "confirmed", "created_at": "{{now}}"}'

[routes.cases.validation_error]
status = 422
json = '{"error": "amount must be positive"}'

[routes.cases.gateway_timeout]
status = 504
json = '{"error": "payment gateway timeout"}'
delay = 3
```
Start it before your test suite:
```shell
mockr --config ./mocks --port 4000
```
Then in your Playwright or Supertest setup, point the base URL at `http://localhost:4000`. Switch between response scenarios by changing `fallback` in the config — no code changes, no test resets. The `delay` field on `gateway_timeout` lets you test timeout handling at a realistic speed without actually waiting for a network call.
This pairs well with AI-generated tests: ask AI to write the Playwright or Supertest suite against `http://localhost:4000`, and ask it to generate the mockr config alongside it. You get the full integration test setup — server stub and test file — in one pass.
Test Coverage as a Feedback Loop
One pattern that works well: run your coverage report, then use AI to fill the gaps.
```
My coverage report shows these uncovered lines in payments/refund.service.ts:

Lines 45-52: Error handling branch when PaymentGateway throws
Lines 78-84: The retry logic path
Line 103: The case where refundAmount > originalAmount

Write tests that cover exactly these branches. Here is the full service
file for context: [paste file]
```
This is more precise than asking for “more tests.” You tell AI exactly what’s uncovered, and it targets those paths specifically. Coverage climbs without wasted effort on paths already tested.
Building a Testing Habit
The biggest change AI makes is lowering the activation energy for testing. When writing a test takes 5 minutes instead of 30, the cost-benefit calculation changes.
A practical habit to build:
For every new function you write, immediately follow it with this prompt:
```
Write unit tests for the function I just wrote. Aim for 90%+ branch coverage.

[paste the function]
```
It takes 2 minutes. You review, adjust, and move on. Over time this becomes the path of least resistance.
For every bug you fix, add a regression test before closing the ticket:
```
This bug occurred because [explain the bug].
Write a test that would have caught this bug.
The test should fail on the broken code and pass after the fix.

[paste the affected code]
```
Regression tests written this way are precise and meaningful. They document why the fix exists.
What AI Can’t Do
Be realistic about the limits:
- AI doesn’t know your system’s actual behavior: It will write tests based on what the code appears to do, not what it should do. If the code has a bug, the test might encode that bug.
- AI-generated mocks can drift: If your dependencies change, AI-generated mocks won’t automatically update. Treat them as starting points.
- AI can’t test for requirements it doesn’t know: If a business rule isn’t in the prompt, it won’t be tested. You still own the requirements.
- Generated test names can be vague: Review and rewrite descriptions to be precise. `"should return error"` tells you nothing; `"should return 422 when email is already registered"` does.
Practical Checklist
- Add “write tests for this” to your post-implementation routine
- Use the adversarial prompt on any function handling money, auth, or external data
- Try test-first for your next feature with AI-generated tests as the spec
- Run a coverage report and target uncovered branches with specific prompts
- Review existing tests with AI and fix the weakest ones
- Build mock templates for your most common dependencies
- Add regression tests for every bug fix going forward
Conclusion
The argument against testing has always been time. That argument gets weaker every month as AI tooling improves. The tests that used to take an hour to write now take minutes.
What doesn’t change is the thinking behind good tests: understanding what the code should do, what can go wrong, and what confidence you need before shipping. AI accelerates the execution; you still own the judgment.
The best-tested codebases won’t be the ones where developers had the most discipline. They’ll be the ones where testing became the easy path—and AI is what makes that possible.
Start with one function you wrote today. Write tests for it now. You’ll see why this becomes addictive.
