My personal journey with test-driven development (TDD)
Alabe Duarte shares his personal experience with TDD over the years. This includes:
- why he believes TDD is important
- the subjectivity of “good design”
- when doing TDD doesn’t help
Amal and Divya turn our spotlight inward and interview our very own Christopher “Boneskull” Hiller about maintaining Mocha.js. Mocha has been a mainstay in the JavaScript testing community for ten (!) years now! They discuss the secret to Mocha’s success, what it’s like to maintain it, and how to make maintainers (and users) happy!
Mat Ryer hosts a spectacular panel with expert debuggers Derek Parker, Grant Seltzer Richman, and Hana Kim from the Go Team. Let’s face it, even the best-intended code doesn’t always do what you want it to. What’s a Gopher to do? Listen to this, that’s what!
Devon C. Estes:
It’s fairly common for folks who haven’t used mutation testing before to not immediately see the value in the practice. Mutation testing is, after all, still a fairly niche and under-used tool in the average software development team’s toolbox. So today I’m going to show a few specific types of very common problems that mutation testing is great at finding for us, and that are hard or impossible to find with other methods.
He goes on to detail the “multiple execution paths on a single line” problem, the “untested side effect” problem, and the “missing pin” problem.
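To make the first of those concrete, here’s a minimal Python sketch (Devon’s own examples are in Elixir, and these names are invented) of how a surviving mutant exposes “multiple execution paths on a single line”:

```python
def discount(price, is_member):
    # One line, two execution paths: the conditional expression hides a branch.
    return price * 0.9 if is_member else price

def test_discount_for_members():
    # This gives 100% line coverage, yet never exercises the non-member path.
    assert discount(100, True) == 90

# A mutation tool can rewrite the else-branch (say, `else 0`) and this suite
# still passes. That surviving mutant is the signal that a path is untested.
```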
Typically, people say that testing is like a pyramid: a wide base of unit tests and very few end-to-end tests. I believe we’ve come to a point where a crab strategy is a better approach.
Gleb Bahmutov, PhD joins the show for a fun conversation around end-to-end testing. We get the skinny on Cypress, find out how it’s structured as both an open source library and a SaaS business, tease apart the various types of tests you may (or may not) want to have, and share a lot of laughs along the way.
Justin Searls from Test Double joins the party to talk about patterns he’s identified that lead to failure, minimalism, and of course, testing!
Headless recorder is a Chrome extension that records your browser interactions and generates a Puppeteer or Playwright script. Install it from the Chrome Web Store. Don’t forget to check out our sister project theheadless.dev, the open source knowledge base for Puppeteer and Playwright.
You may have heard of this when it was called Puppeteer Recorder, but its recent addition of Playwright support warranted a rename.
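The extension emits Node.js scripts, but to give a feel for what a recording boils down to, here’s a rough equivalent using Playwright’s Python bindings (URL and selectors invented):

```python
from playwright.sync_api import sync_playwright

# A hypothetical recorded session: open a page, type a query, submit.
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    page.fill("#search", "headless recorder")
    page.click("button[type=submit]")
    browser.close()
```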
The panel discusses testing frameworks in Go. After a brief overview of the concepts involved, we discuss how testing frameworks can make our lives easier, and why some people still choose to avoid them. Mat Ryer and Mark Bates chat with Boyan Soubachov about the future of the Testify project.
uvu has minimal dependencies and supports both async/await style tests and ES modules, but it’s not immediately clear to me why it benchmarks so well against the likes of Jest and Mocha.
~> "jest" took 1,630ms (861 ms)
~> "mocha" took 215ms ( 3 ms)
~> "tape" took 132ms ( ??? )
~> "uvu" took 74ms ( 1.4ms)
The benchmark suites are pretty basic, so it’d be cool to see a “production” grade library or application port their test suite to uvu for comparison.
Some interesting analysis by Lawrence Hecht for The New Stack:
The 2020 version of JetBrains’ State of the Developer Ecosystem does quantify the extent to which this specialty has disappeared. One finding is that 43% of teams or projects have less than one tester or QA engineer per 10 developers. This is not necessarily a problem if most testing is automated, but that is only true among 38% of those surveyed.
38% is far too low a percentage of folks doing automated testing.
Jessica Xie:
I’m not implying that one shouldn’t write tests. The benefits of quality assurance, performance monitoring, and speeding up development by catching bugs early instead of in production outweigh its downsides. However, improvements can be made… Test selection sparks joy in my life. I wish that I can bring the same joy to you.
This is a very cool idea coming out of the Clojure community. I dig it because the examples in your README are guaranteed to never become stale as your project evolves.
It works by parsing your README and looking for executable code samples with expected outputs. For each one it finds, it generates a test ensuring that executing the code produces the output.
There are, as you might expect, caveats.
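If you live in Python land, the doctest module is built on the same premise and can point straight at a README. A minimal sketch of the idea (not the Clojure library’s actual mechanics):

```python
import doctest

# Scans README.md for interactive-style examples (>>> input followed by
# expected output), runs them, and reports any example whose real output
# doesn't match what the README claims.
results = doctest.testfile("README.md", module_relative=False)
print(f"{results.failed} failed out of {results.attempted} examples")
```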
critic.sh exposes high level functions for testing consistent with other frameworks, plus a set of built-in assertions. One of my most important goals was to be able to pass in any shell expression to the _test and _assert methods, so that one is not limited to the built-ins.

The coverage reporting is currently rudimentary, but it does indicate which lines haven’t been covered. It works by running the tests with extended debugging, redirecting the trace output to a log file, and then parsing it to determine which functions/lines have been executed. It can definitely be improved!
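As a toy illustration of that trace-parsing approach (emphatically not critic.sh’s actual code), you can have bash prefix each traced command with its line number and tally what ran:

```python
import os
import subprocess

# PS4 controls the xtrace prefix; ${LINENO} makes each traced command
# self-identify. demo.sh is a hypothetical script under test.
env = {**os.environ, "PS4": "+${LINENO}:"}
proc = subprocess.run(
    ["bash", "-x", "demo.sh"],
    capture_output=True,
    text=True,
    env=env,
)

executed = set()
for line in proc.stderr.splitlines():
    head = line.lstrip("+").split(":")[0]
    if line.startswith("+") and head.isdigit():
        executed.add(int(head))

print("lines executed:", sorted(executed))
```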
See a demo of critic.sh in action on asciinema 📽️
Production ML systems include more than just the model. In these complicated systems, how do you ensure quality over time, especially when you are constantly updating your infrastructure, data and models? Tania Allard joins us to discuss the ins and outs of testing ML systems. Among other things, she presents a simple formula that helps you score your progress towards a robust system and identify problem areas.
Chaos Mesh is a cloud-native Chaos Engineering platform that orchestrates chaos on Kubernetes environments. At the current stage, it has the following components:
- Chaos Operator: the core component for chaos orchestration. Fully open sourced.
- Chaos Dashboard: a visualized panel that shows the impacts of chaos experiments on the online services of the system; under development; currently only supports chaos experiments on TiDB (https://github.com/pingcap/tidb).
For the uninitiated, chaos engineering is when you unleash havoc on your system to prove out its resiliency (or lack thereof).
LocalStack looks like an excellent way to develop & test your serverless apps without leaving your localhost. It appears they are basically mocking 20+ AWS services, which is undoubtedly a lot of work and, I’d expect, error prone. Is anybody out there using LocalStack on the regular who can let us know if it actually works as advertised?
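For reference, pointing an AWS SDK at LocalStack is mostly a matter of overriding the endpoint. Something like this boto3 snippet (port and dummy credentials per LocalStack’s defaults; double-check their docs):

```python
import boto3

# LocalStack accepts dummy credentials; 4566 is its edge port in recent
# releases (older versions exposed one port per service).
s3 = boto3.client(
    "s3",
    endpoint_url="http://localhost:4566",
    aws_access_key_id="test",
    aws_secret_access_key="test",
    region_name="us-east-1",
)
s3.create_bucket(Bucket="my-test-bucket")
print(s3.list_buckets()["Buckets"])  # hits LocalStack, never real AWS
```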
Writing good tests is hard, and very few people have thought about this domain more than Kent Beck. In this post, he lays out a short list of properties that good tests have.
Look at the last test you wrote. Which properties does it have? Which does it lack? Is that the tradeoff you want to make?
Kent Beck, for Increment:
It’s 2030. A programmer in Lagos extracts a helper method. Seconds later, the code of every developer working on the program around the world updates to reflect the change. Seconds later, each of the thousands of servers running the software updates. Seconds later, the device in my pocket in Berlin updates, along with hundreds of millions of other devices across the globe.
Perhaps the most absurd assumption in this story is that I’ll still have a pocket in 10 years.
I linked to this repo a while back, but it’s worth another mention as more & more people want to learn Go. It also contains a nice write-up on why unit testing and TDD are important.
Mocking is a powerful technique for isolating tests from undesired interactions among components. But often people find their mock isn’t taking effect, and it’s not clear why. Hopefully this explanation will clear things up.
Mocking isn’t always the best test isolation technique, but if/when you use it, you might as well use it correctly. Ned’s here to help you do just that.
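The crux, if you haven’t read it yet: you must patch the name where it’s looked up, not where it’s defined. A self-contained sketch (module names invented for the demo):

```python
import sys
import types
from unittest import mock

# Fabricate two tiny modules so the example runs standalone.
helpers = types.ModuleType("helpers")
helpers.fetch_rate = lambda: 1.0
sys.modules["helpers"] = helpers

app = types.ModuleType("app")
exec("from helpers import fetch_rate\ndef price(): return 100 * fetch_rate()",
     app.__dict__)
sys.modules["app"] = app

# Patching where the function was *defined* doesn't touch app's reference:
with mock.patch("helpers.fetch_rate", return_value=2.0):
    print(app.price())  # 100.0 -- the mock "isn't taking effect"

# Patching where it is *used* is what works:
with mock.patch("app.fetch_rate", return_value=2.0):
    print(app.price())  # 200.0
```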
Some people think that usability is very costly and complex and that user tests should be reserved for the rare web design project with a huge budget and a lavish time schedule. Not true. Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford.
This article is from the year 2000 (cue Conan O’Brien’s sidekick), but it’s filled with timeless goodies. Its conclusions are a straightforward example of diminishing returns, but it’s worth reading how they arrived at them from the empirical evidence.
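For the curious, the curve Nielsen fits to the data is found(n) = N(1 − (1 − L)ⁿ), where N is the total number of usability problems in the design and L ≈ 31% is the proportion a single user uncovers. Plug in n = 5 and you get 1 − 0.69⁵ ≈ 85% of problems found, which is where the “five users” rule comes from.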
Mat and Carmen along with guest panelists Dave Cheney, Peter Bourgon, and Marcel van Lohuizen discuss errors in Go, including the new try proposal. Many questions get answered… What do we think about how errors work in Go? How is it different from other languages/approaches? What do/don’t we like? How do we handle errors these days? What’s going on with the try proposal?
This interesting testing tool was pointed out to me by Ned Batchelder when he was on The Changelog.
It combines human understanding of your problem domain with machine intelligence to improve the quality of your testing process while spending less time writing tests.
At its core, Hypothesis is a modern implementation of property-based testing, which came out of the Haskell world 20 (!) years ago.
Hypothesis runs your tests against a much wider range of scenarios than a human tester could, finding edge cases in your code that you would otherwise have missed. It then turns them into simple and easy to understand failures that save you time and money compared to fixing them if they slipped through the cracks and a user had run into them instead.
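A taste of what that looks like in practice (these properties are illustrative, not from the project’s docs):

```python
from hypothesis import given, strategies as st

# Hypothesis generates many random inputs per test and shrinks any failure
# down to a minimal counterexample.

@given(st.lists(st.integers()))
def test_sorting_is_idempotent(xs):
    assert sorted(sorted(xs)) == sorted(xs)

@given(st.text())
def test_utf8_round_trips(s):
    assert s.encode("utf-8").decode("utf-8") == s
```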
Inspired by JSParty #70, 4 quick lessons on the philosophy of testing. The motivation?
Tools like Mocha, Jasmine and Jest have made writing tests far easier… But there’s still a gap. It’s extremely hard to find information on the philosophy of testing. What to test and why. How much is enough? What type of tests should I be writing, and when does it fit into my process?