

Unit Testing: What It Is, What It Misses, and When to Use It


Yunhao Jiao

Unit testing is the most widely practiced form of automated testing and the most often misunderstood. It's the foundation of most testing strategies — and the layer most likely to give teams false confidence when used as their only testing approach.

This guide covers what unit testing actually does, what it doesn't do, and how to use it effectively as one layer in a complete testing strategy.

What is Unit Testing?

A unit test is an automated test that verifies a single, isolated unit of code — typically a function or a class method — in complete isolation from its dependencies.

The isolation is the defining characteristic. Unit tests use mocks, stubs, or fakes to replace all external dependencies (databases, APIs, file systems, other modules) so that the test only exercises the logic of the unit under test.

A unit test answers the question: does this function, given this specific input, return this specific output? It doesn't ask whether the function is called correctly by its consumers, whether the data it processes is realistic, or whether the system it's part of works end-to-end.
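As a minimal sketch of that question in code (the function and test names here are hypothetical, in plain-assert style):

```python
# Hypothetical unit under test: a pure function with no dependencies.
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

# The unit test: this specific input must produce this specific output.
def test_apply_discount_ten_percent():
    assert apply_discount(200.00, 10) == 180.00
```

Nothing in this test knows or cares who calls `apply_discount`, where `price` comes from, or whether a real checkout flow ever reaches this code path.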

What Unit Tests Are Good For

Pure business logic. Tax calculation, pricing rules, discount logic, date arithmetic, string manipulation — functions that take inputs and return outputs without side effects. Unit tests are the right tool here: fast, deterministic, easy to maintain.
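For instance, date arithmetic is a natural fit (this example is a hypothetical illustration, not from the article):

```python
from datetime import date, timedelta

def next_business_day(d: date) -> date:
    """Return the next weekday (Mon-Fri) after the given date."""
    nxt = d + timedelta(days=1)
    while nxt.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        nxt += timedelta(days=1)
    return nxt

# Fast, deterministic, no side effects: ideal unit-test territory.
def test_friday_rolls_to_monday():
    # 2024-03-01 was a Friday; the next business day is Monday 2024-03-04.
    assert next_business_day(date(2024, 3, 1)) == date(2024, 3, 4)
```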

Complex branching logic. Functions with many code paths (many if conditions, error cases, edge cases) benefit from unit tests that verify each path. The speed and isolation of unit tests make it practical to test every branch.
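One test per branch keeps failures diagnosable. A sketch with a hypothetical tiered-shipping function:

```python
def shipping_cost(weight_kg: float, express: bool) -> float:
    """Tiered shipping: branches on weight, service level, and validity."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    base = 5.0 if weight_kg <= 1 else 9.0
    return base * 2 if express else base

# One test per branch, so a failure pinpoints the broken path.
def test_light_standard(): assert shipping_cost(0.5, express=False) == 5.0
def test_heavy_standard(): assert shipping_cost(3.0, express=False) == 9.0
def test_light_express(): assert shipping_cost(0.5, express=True) == 10.0
def test_invalid_weight():
    try:
        shipping_cost(0, express=False)
        assert False, "expected ValueError"
    except ValueError:
        pass
```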

Utility functions and libraries. Shared functions used across the codebase should have thorough unit test coverage. A bug in a utility function can affect many consumers; catching it at the unit level is far cheaper than catching it through E2E failures.

Regression prevention for known bugs. When a bug is fixed, adding a unit test that would have caught it prevents recurrence. This is one of the most reliable quality practices in software development.
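A regression test pins the exact input from the bug report. Sketching this with a hypothetical pagination bug (an earlier version used integer division and dropped the final partial page):

```python
import math

def page_count(total_items: int, page_size: int) -> int:
    """Number of pages needed to show all items.

    A hypothetical earlier version returned total_items // page_size,
    which silently dropped a final partial page.
    """
    return math.ceil(total_items / page_size) if total_items else 0

# Regression test pinned to the reported bug: 101 items at page size 10
# must yield 11 pages, not 10.
def test_partial_last_page_is_counted():
    assert page_count(101, 10) == 11
```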

What Unit Tests Can't Do

They Can't Verify Integration Points

Unit tests mock all dependencies. This means they can't verify that:

  • Your code calls the database correctly and gets the right results

  • Your API request formatting matches what the real server expects

  • Your third-party integration handles the real API's response schema

  • Your authentication check actually blocks unauthorized requests in the real request context

Every mocked dependency is a trust claim: "I believe this dependency behaves this way." Unit tests verify your code works assuming those claims are true. Integration and E2E tests verify the claims themselves.
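A sketch of what a trust claim looks like in practice, using Python's `unittest.mock` (the client and payload shape are hypothetical):

```python
from unittest.mock import Mock

def get_username(client, user_id):
    """Return the 'name' field from the user payload."""
    return client.fetch_user(user_id)["name"]

def test_get_username_with_mock():
    client = Mock()
    # Trust claim: we ASSUME the real API returns {"name": ...}.
    client.fetch_user.return_value = {"name": "ada"}
    assert get_username(client, 42) == "ada"
    # If the real server actually returns {"username": ...}, this test
    # still passes. Only an integration test against the real dependency
    # can verify the assumed response shape.
```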

They Can't Catch Intent Gaps

This is the most important limitation for teams using AI coding tools. A unit test is typically written after (or by) the same system that wrote the implementation. If the implementation is wrong — if it does something plausible but doesn't match the actual requirement — unit tests usually confirm the wrong implementation.

Consider: an AI coding agent implements a discount calculation function. The function is internally correct — the math works. But it calculates the discount against the pre-tax total when the requirement specified post-tax. Unit tests written for this function verify the math and pass. The requirement isn't met.

Only requirement-derived tests — derived from the specification, not from the implementation — catch this class of bug. TestSprite's spec-driven agentic testing is designed specifically for this.

They Can't Catch User Experience Issues

Unit tests operate at the code level, not the user level. A suite of 500 passing unit tests tells you nothing about whether users can actually complete your sign-up flow, whether the checkout button is visible on mobile, or whether the loading state renders correctly.

They Can't Catch Emergent Behavior

Some bugs only appear when multiple components interact. A timing bug between two services, a race condition in concurrent request handling, a cascade failure when one dependency slows down — these are emergent behaviors that unit tests (which test in isolation) structurally cannot detect.

Unit Testing Best Practices

Test behavior, not implementation. Tests should verify what a function does, not how it does it. Tests that verify internal implementation details break on refactoring, even when behavior is unchanged. Write tests that would still be valid after the internal implementation is rewritten.
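A hypothetical illustration of the difference:

```python
def dedupe(items):
    """Return items with duplicates removed, preserving first-seen order."""
    return list(dict.fromkeys(items))

# Behavior test: survives any internal rewrite (set + loop, ordered
# dict, etc.) as long as the observable contract holds.
def test_dedupe_removes_duplicates_keeps_order():
    assert dedupe([3, 1, 3, 2, 1]) == [3, 1, 2]

# Brittle test (what NOT to do): asserts HOW, not WHAT. It would break
# on a refactor to a set-based loop even though behavior is unchanged.
# def test_dedupe_uses_fromkeys():
#     assert "fromkeys" in inspect.getsource(dedupe)
```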

One assertion per test (mostly). Tests that verify multiple behaviors simultaneously are harder to diagnose when they fail. Each test should clearly express one thing that must be true.

Name tests descriptively. The test name should be a sentence describing what it verifies: calculates_discount_as_percentage_of_post_tax_total rather than test_discount_1.

Test edge cases explicitly. Don't just test the happy path. For each function, explicitly test: empty inputs, null/undefined inputs, minimum and maximum boundary values, error conditions.
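For a hypothetical `clamp` function, that checklist translates directly into named tests:

```python
def clamp(value, low, high):
    """Constrain value to the inclusive range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Happy path plus explicit edge cases: boundaries and error conditions.
def test_clamp_inside(): assert clamp(5, 0, 10) == 5
def test_clamp_at_min_boundary(): assert clamp(0, 0, 10) == 0
def test_clamp_at_max_boundary(): assert clamp(10, 0, 10) == 10
def test_clamp_below_range(): assert clamp(-3, 0, 10) == 0
def test_clamp_invalid_range():
    try:
        clamp(1, 10, 0)
        assert False, "expected ValueError"
    except ValueError:
        pass
```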

Keep unit tests fast. Unit tests should run in milliseconds. Tests that take seconds have dependencies that should be mocked. Slow unit test suites are the ones that teams skip.

Unit Tests in the Context of a Complete Strategy

Unit tests are necessary but not sufficient. A complete testing strategy uses:

  • Unit tests for logic correctness (fast feedback on implementation)

  • Integration tests for contract and boundary correctness

  • E2E tests for user flow and requirement correctness — TestSprite handles this autonomously

The common antipattern is teams with high unit test coverage and no E2E coverage, discovering that their well-tested functions compose into a broken user experience. The functions all work; the product doesn't.

Unit tests tell you the parts are correct. E2E tests tell you the product is correct. You need both.

Add requirement-based E2E coverage to your unit test suite →