Minimal Fixture everywhere

Marcin Gryszko
8 min read · Oct 3, 2020



When people hear the word fixture in the context of tests, they may imagine a huge dataset in a database or a battery of REST service mocks. But the test fixture is a broader concept, and it usually has a much smaller scope than you would expect.

A test fixture is defined as everything you need to execute the System Under Test (SUT — an object method, a standalone function, or a microservice API). You will require:

  • direct inputs — method or function arguments, the data you will feed into the SUT
  • expected direct outputs — values or exceptions to compare with the result of the SUT execution
  • indirect inputs — arguments passed to and values returned from the queries to stubbed¹ SUT dependencies
  • indirect outputs — recorded executions of commands sent to mocked SUT dependencies and the command arguments
  • the SUT itself with all required dependencies (the real ones or replaced with test doubles)
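To make these four kinds of inputs and outputs concrete, here is a minimal sketch with invented names (`PriceCalculator`, `TaxRates`, `AuditLog` are illustrations, not from any real codebase): the tax-rate provider is stubbed (indirect input), the audit log is mocked (indirect output), and the method arguments and return value are the direct inputs and outputs.

```kotlin
// All names here are invented for illustration.
interface TaxRates { fun rateFor(country: String): Int }   // stubbed dependency
interface AuditLog { fun record(message: String) }         // mocked dependency

class PriceCalculator(private val rates: TaxRates, private val audit: AuditLog) {
    fun grossPrice(net: Int, country: String): Int {
        val gross = net * (100 + rates.rateFor(country)) / 100
        audit.record("priced $net for $country")
        return gross
    }
}

fun main() {
    val recorded = mutableListOf<String>()                 // captures indirect outputs
    val sut = PriceCalculator(                             // the SUT with test doubles
        rates = object : TaxRates {                        // stub providing an indirect input
            override fun rateFor(country: String) = 21
        },
        audit = object : AuditLog {                        // hand-rolled mock
            override fun record(message: String) { recorded += message }
        }
    )
    val gross = sut.grossPrice(100, "ES")                  // direct inputs
    check(gross == 121)                                    // expected direct output
    check(recorded == listOf("priced 100 for ES"))         // expected indirect output
}
```

Everything in this snippet — the two anonymous objects, the argument values, and the two assertions — is part of the fixture.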

In this article, I will be using test pattern terminology from the XUnit Test Patterns book. It introduced a common language for the problems and solutions around automated testing (and that was already in 2007!). I wholeheartedly recommend reading the book, even if it is more than 10 years old (or at least skimming through the companion website). There you will find condensed wisdom about testing that you usually only get from reading tons of blogs or from making your own mistakes while practicing the craft.

Developers writing tests want to reap the benefits of having an automated test suite. It is supposed to provide a specification of their system and guard against unintended changes that violate the specification. Engineers want to spend minimal time writing the tests. Those who practice TDD use tests as a feedback tool for their designs. The intentions are always good, but in practice, I encounter many issues with fixture creation that undermine those lofty goals. Let’s go through them and see how they may be fixed.

Too much data

You create a complex object/data structure when a simpler structure could be used to execute the SUT. For example, you set all the properties or you attach all the children/peers of the object, as in this Kotlin code fragment:

val invoice = Invoice(
    id = 12345,
    totalAmount = 100.toBigDecimal(),
    customer = Customer(id = 23456, firstName = "Marcin", secondName = "Gryszko"),
    lineItems = listOf(
        LineItem(sku = "fake SKU 1", amount = 10.toBigDecimal()),
        LineItem(sku = "fake SKU 2", amount = 90.toBigDecimal())
    )
)
The SUT doesn’t use the line items, so you don’t have to fill lineItems. You can leave them empty:

val invoice = Invoice(
    id = 12345,
    totalAmount = 100.toBigDecimal(),
    customer = Customer(id = 23456, firstName = "Marcin", surname = "Gryszko", middleName = "NA"),
    lineItems = emptyList()
)

Some people use external libraries to generate complete objects with random data (e.g. Podam). This approach can be even worse: you generate a fixture that is both too large and random. Randomness and fixture complexity may make your test not only less understandable but also erratic.
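A tiny sketch of how randomness makes a test erratic (the `Payment`/`isPayable` names are invented for illustration): the randomly generated fixture makes the outcome depend on the generator, while a minimal deterministic value makes both the intent and the result explicit.

```kotlin
import kotlin.random.Random

// Invented SUT: a rule that only accepts positive amounts
data class Payment(val amount: Int)

fun isPayable(payment: Payment) = payment.amount > 0

fun main() {
    // Randomly generated fixture: whether the test passes depends on the draw
    val randomPayment = Payment(amount = Random.nextInt(-100, 100))
    // check(isPayable(randomPayment))  // erratic: fails whenever a non-positive amount is drawn

    // Minimal, deterministic fixture: intent and outcome are explicit
    check(isPayable(Payment(amount = 1)))
}
```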

Too complex data

Your fixture uses a complex representation when a simple one would be sufficient. We can simplify the previous example even more:

val invoice = Invoice(
    id = 12345,
    totalAmount = 0.toBigDecimal(),
    customer = Customer(id = 0, firstName = "", surname = "", middleName = null),
    lineItems = emptyList()
)

by setting required properties to the simplest representation possible, or to null if a property is optional. They are just fill-in values required to construct the data.

In conversations with my fellow engineers, I often find a mix of astonishment and resistance when I propose to replace a value with null. We learned the hard way that nulls sneak into the runtime due to developer mistakes and cause costly errors. In tests, we want to introduce them deliberately to increase test sensitivity against regressions.

Imagine that in the previous example, somebody adds logic based on the invoice customer’s middleName without modifying the test. If middleName had a value, there is a chance that the test wouldn’t fail. With a null middleName, you will get a NullPointerException indicating that the value is now used by the SUT.
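A minimal sketch of that scenario (the `initials` function stands in for the hypothetical logic somebody adds later): with a null middleName in the fixture, the new, unspecified dependency surfaces immediately as a NullPointerException.

```kotlin
data class Customer(val firstName: String, val middleName: String?)

// Invented example of logic added later, quietly depending on middleName
fun initials(customer: Customer): String =
    "${customer.firstName.first()}. ${customer.middleName!!.first()}."

fun main() {
    val fixture = Customer(firstName = "Marcin", middleName = null) // minimal fixture
    try {
        initials(fixture)
    } catch (e: NullPointerException) {
        // The deliberate null flags that middleName is now used by the SUT
        println("regression detected: middleName is now used by the SUT")
    }
}
```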

Too realistic data

Your fixture uses some real examples from the domain, but your SUT doesn’t care about the meaning of the data passed to it². The behaviour of the SUT is no different whether you use a real/realistic example or just a minimal one (as in the previous section) with invented values for the required properties. Take this example of a domain-to-JSON converter test:

// Google is our main customer
val tenant = DomainTenant(id = 92348476, companyName = "Google, LLC", address = "Mountain View, California, United States")

val jsonTenant = converter.toJsonRepresentation(tenant)

// assert JSON representation

The converter doesn’t know nor cares about Google being our most profitable customer. It is just an infrastructure class that maps one type to another. We can change this test to:

val tenant = DomainTenant(id = 1, companyName = "::company::", address = "::address::")

You may notice a strange notation for the string constants. For string values, I use a notation learned from J.B. Rainsberger to indicate that I need a value to pass to the SUT but I don’t care what the exact value is.
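To show the notation at work, here is a sketch with a hand-rolled converter standing in for the article’s converter class (the string-building is invented; a real converter would use a JSON library): the `::…::` markers make it obvious in the assertions that the values are passed through, not meaningful data.

```kotlin
data class DomainTenant(val id: Long, val companyName: String, val address: String)

// Hand-rolled stand-in for the article's converter class
fun toJsonRepresentation(t: DomainTenant): String =
    """{"id":${t.id},"companyName":"${t.companyName}","address":"${t.address}"}"""

fun main() {
    val tenant = DomainTenant(id = 1, companyName = "::company::", address = "::address::")
    val json = toJsonRepresentation(tenant)
    // The markers reveal the converter as pure pass-through mapping
    check(json.contains("\"companyName\":\"::company::\""))
    check(json.contains("\"address\":\"::address::\""))
}
```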

Too little data

Yes, you can underspecify the SUT too. This happens when you describe the behaviour of your test doubles with lenient matchers:

when(invoiceCreator.create(any(), any())).thenReturn(OK)

You allow any argument to be passed to the stub, which is a source of indirect inputs. Those arguments could be replaced by another value (or even by null) in the SUT, and the test wouldn’t stop you from doing so.

As a general rule, use strict matchers when specifying test double behaviour:

when(invoiceCreator.create(invoice, tenant)).thenReturn(OK)

There are exceptions to this rule, described in more detail in a separate article.

Variations of data

Someone has eventual extensions of the SUT in mind and adds data variations to the fixture that at some unspecified point in the future may change the logic of the SUT. Currently, this data is required by the fixture but irrelevant to the SUT. The SUT doesn’t take any decision based on that data:

fun `create invoice`(tenant: Tenant) {
    // tenant is required to execute the SUT, but there is no SUT logic on tenant!
}

You can safely remove those variations and use a single value (the simplest you can think of!).

Shared data

Throughout your career, you are taught that duplication is bad. You eagerly remove it in tests without noticing that this process actually introduces other problems, worse than the duplication itself.

You realize that some parts of the fixture are very similar between tests. So you extract them into parameterized creation methods that spawn similar objects valid for a variety of tests. Or you create and share standard objects to reuse in tests (pattern known as Object Mother).

As a result, your SUT is fed with too much data. Basically, it is the same problem as described in the Too complex data section, with the difference that the data is external to the test.

What are the consequences? Tests exhibit high coupling to the extracted fixture. They can become fragile and suddenly start to fail because somebody adapts the fixture to their own new test. Irrelevant details and mystery guests (parts of the fixture created outside of the test) appear, making it hard to connect the dots between the test inputs and outputs — a smell known as an obscure test.

The approach of extracting and externalizing shared parts of the fixture (outside of the test) leads to the pattern known as Standard Fixture — more an antipattern than a boon.


In practice, I find it implemented as:

  • Object Mother — a class/object with static methods or variables creating or holding some standard instances of fixture objects
class TestObjects {
    val standardInvoice = Invoice(…)
    val invoiceWithTwoLineItems = Invoice(…)
    val invoiceWithHighTotal = Invoice(…)

    val customer = Customer(…)

    fun invoiceForTenant(tenant: Tenant): Invoice = …

    // the list goes on
}
  • test Builder with pre-initialized object properties. You create instances of Invoice just by calling new InvoiceBuilder().build() and get the object filled with some mysterious data.
public class InvoiceBuilder {
    private int id = 12345;
    private BigDecimal totalAmount = new BigDecimal(100);
    private Customer customer = new CustomerBuilder().build();
    private List<LineItem> lineItems = List.of(
        new LineItemBuilder().withSku("fake SKU 1").withAmount(10).build(),
        new LineItemBuilder().withSku("fake SKU 2").withAmount(90).build()
    );

    public InvoiceBuilder withId(int id) {
        this.id = id;
        return this;
    }

    // ...

    public Invoice build() {
        return new Invoice(id, totalAmount, customer, lineItems);
    }
}
  • creation methods — yes, you can have a standard fixture within the same test class. Those little inoffensive helper methods that create the same object again and again (maybe with some minor variations) and reuse it for different test cases…
fun createInvoice(id: Int = 12345, totalAmount: BigDecimal = 100.toBigDecimal()) = Invoice(
    id = id,
    totalAmount = totalAmount,
    customer = Customer(id = 23456, firstName = "Marcin", secondName = "Gryszko"),
    lineItems = listOf(
        LineItem(sku = "fake SKU 1", amount = 10.toBigDecimal()),
        LineItem(sku = "fake SKU 2", amount = 90.toBigDecimal())
    )
)
  • member variables initialized in the shared setup part — this is a similar case to the Object Mother with the difference that the shared fixture is local to the test and doesn’t leak outside of it.
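For contrast, a creation function with default arguments can keep each test minimal when used with care: every test names only the property it actually exercises, and the rest stays at a neutral default. A sketch with a deliberately stripped-down `Invoice` (the two-field type is invented for illustration):

```kotlin
import java.math.BigDecimal

data class Invoice(val id: Int, val totalAmount: BigDecimal)

// Each test overrides only the property it exercises;
// everything else stays at a neutral default
fun invoice(id: Int = 1, totalAmount: BigDecimal = BigDecimal.ZERO) =
    Invoice(id = id, totalAmount = totalAmount)

fun main() {
    // A test about totals names only the total
    val highTotal = invoice(totalAmount = BigDecimal("1000"))
    check(highTotal.totalAmount == BigDecimal("1000"))
    check(highTotal.id == 1) // untouched properties keep their neutral values
}
```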

If you find that the shared fixture elements are only partially used in tests (i.e. they contain some irrelevant details for the tests) you can:

  • inline them and then remove all the unneeded data (following the tips from the previous sections)
  • group your test cases around the fixture (e.g. using nested tests) and move the shared fixture to the test group

Strive for the minimal fixture

To sum up the fixture dos and don’ts: prefer a minimal fixture over a general one unless you have a really good reason to share the data. Your tests will better document the verified behaviour. You decrease test fragility — everything needed to execute the SUT is right there in the test, and your test is independent of other tests.

Notice that when applying Test-Driven Development, the chances are higher that you’ll end up with a minimal fixture. TDD mandates writing one test at a time and only the test code strictly necessary to pass the test (not only the production code!). As a consequence, your fixture should contain the bare minimum to execute the production code and verify the result.

In the test-after approach, you fit your tests to the SUT. It happens frequently in the last phase of the iteration, when there is pressure to deliver the feature and jump to the next one. If there is an already created, shared object, the temptation to use it is high. So you reuse it, maybe adapting it slightly to the test requirements. And you are on a slippery slope to an entangled standard fixture.

1: I’m using the terms mock and stub as defined in the XUnit Test Patterns book and popularized by Martin Fowler in his article Mocks Aren’t Stubs.

2: Unless you are implementing a system test


