Testing paperless.blog

Start test names with “should”

Victor Engmark thinks we should start test names with “should” for a handful of reasons. Here are a few:

  • It removes redundancy, because the function name should already be in the call stack.
  • It is falsifiable: a person reviewing the test can decide to what degree the name agrees with the actual test.
  • It encourages testing one property of the function per test.

I don’t disagree, but I like to take it a step further: Let the “should” be implied by the rest of the test name.

Instead of: should replace children when updating instance

I prefer: replaces children when updating instances

Instead of: should apply discount when total cost exceeds 100 dollars

I prefer: applies discount when total cost exceeds 100 dollars

Most of Victor’s reasons for using “should” still apply with this format, but it’s less verbose and more accurately describes the software working as expected when the tests pass.
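To make the two formats concrete, here is a minimal sketch in pytest-style Python. The discount function and its behavior are hypothetical, invented for illustration; the point is only how the test names read with the “should” implied:

```python
def total_with_discount(total: float) -> float:
    """Hypothetical example: apply a 10% discount when the total
    exceeds 100 dollars, otherwise charge full price."""
    return total * 0.9 if total > 100 else total


# The test name is a plain statement of behavior; "should" is implied.
def test_applies_discount_when_total_cost_exceeds_100_dollars():
    assert total_with_discount(200) == 180.0


def test_charges_full_price_when_total_cost_is_100_dollars_or_less():
    assert total_with_discount(100) == 100


test_applies_discount_when_total_cost_exceeds_100_dollars()
test_charges_full_price_when_total_cost_is_100_dollars_or_less()
```

When such a test passes, its name reads as a true statement about the software; with the “should” prefix it would read as an obligation instead.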


Discussion


2022-06-08T00:46:04Z

and more accurately describes the software working as expected when the tests pass

That’s exactly the point I’m making for the “should” prefix:

  • When the test fails, a statement in the format you suggest is no longer true.
  • Depending on the wording the test runner picked, this can lead to strange language, including double or triple negations (if the message contains “not”).

I’m not a native speaker, but I find it confusing: obviously, when a test fails, the code is no longer doing what it is supposed to do, yet the message still claims it does. In some cases I have even confused the expected value with the actual value because of that.

Just remember: once you are done with the feature and all the tests are passing, the most common cause of a failing test is that somebody changed something without being aware of the context in which the tests were written.

From my perspective the “should” prefix makes that crystal clear.

Jerod Santo

Omaha, Nebraska

Jerod co-hosts The Changelog, crashes JS Party, and takes out the trash (his old code) once in a while.

2022-06-08T13:57:02Z

I see where you’re coming from:

when the test fails it is no longer doing what it is supposed to do, but the message still claims it.

For me, seeing expected behavior described alongside a failing test is crystal clear that the code does not work as expected. Of course it should, that’s why the test is there in the first place!

This may be a native vs non-native speaker difference. I’m not sure. Thankfully, it’s a small decision in the grand scheme of things. Most important, I believe, is to pick a format and stick with it.

Consistency always improves clarity!
