Testing testing 1 2 3
This week we chat with Angie Jones about all things testing. We'll cover unit testing, visual testing, end-to-end testing, and more!
We discuss how Test Driven Development (TDD) can help you write better code and build better software. Packed with tips and tricks, gotchas, and best practices, the panel explores the subject and shares their real-world experiences.
In this article, I'd like to explore a couple of implementations of sorting algorithms. However, I'd like to do so driven by unit tests. The examples are written in Go, but don't worry if you've never worked with Go before. The emphasis here is on the journey and the joy of building solutions guided by tests!
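For a flavor of the approach, here's a sketch of the kind of first test-and-code pair such a walkthrough might begin with. BubbleSort is our stand-in name, not necessarily the article's:

// sorting_test.go — a sketch of test-driving a sort, not the article's
// actual code. In TDD you'd write TestBubbleSort first, watch it fail,
// then write the simplest BubbleSort that makes it pass.
package sorting

import (
    "reflect"
    "testing"
)

// BubbleSort returns a sorted copy, leaving its input untouched.
func BubbleSort(in []int) []int {
    out := append([]int(nil), in...) // copy so the caller's slice survives
    for i := range out {
        for j := 0; j < len(out)-i-1; j++ {
            if out[j] > out[j+1] {
                out[j], out[j+1] = out[j+1], out[j]
            }
        }
    }
    return out
}

func TestBubbleSort(t *testing.T) {
    got := BubbleSort([]int{5, 2, 4, 1})
    want := []int{1, 2, 4, 5}
    if !reflect.DeepEqual(got, want) {
        t.Errorf("BubbleSort() = %v, want %v", got, want)
    }
}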
We first talked fuzzing with Katie Hockman back in August of 2020. Fast-forward 10 months and native fuzzing in Go is ready for beta testing! Here's Katie explaining fuzzing, for the uninitiated:
Fuzzing is a type of automated testing which continuously manipulates inputs to a program to find issues such as panics or bugs. These semi-random data mutations can discover new code coverage that existing unit tests may miss, and uncover edge case bugs which would otherwise go unnoticed. Since fuzzing can reach these edge cases, fuzz testing is particularly valuable for finding security exploits and vulnerabilities.
It looks like the feature won't be landing in Go 1.17, but they're planning on it sometime after that. Either way, you can use fuzzing today on its development branch.
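For the curious, a fuzz target is just a test function that takes a *testing.F. Here's a minimal sketch of the shape the beta uses; Reverse is a toy function of our own, not something from the episode:

// reverse_test.go — a minimal native fuzz target sketch.
package reverse

import (
    "testing"
    "unicode/utf8"
)

// Reverse reverses a string byte-wise — deliberately buggy for
// multi-byte UTF-8 input, so the fuzzer has something to find.
func Reverse(s string) string {
    b := []byte(s)
    for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
        b[i], b[j] = b[j], b[i]
    }
    return string(b)
}

func FuzzReverse(f *testing.F) {
    f.Add("hello") // seed corpus; the fuzzer mutates from here
    f.Fuzz(func(t *testing.T, s string) {
        rev := Reverse(s)
        // Property 1: reversing twice should round-trip.
        if got := Reverse(rev); got != s {
            t.Errorf("double reverse: got %q, want %q", got, s)
        }
        // Property 2: reversing valid UTF-8 should stay valid UTF-8.
        if utf8.ValidString(s) && !utf8.ValidString(rev) {
            t.Errorf("Reverse(%q) produced invalid UTF-8 %q", s, rev)
        }
    })
}

Run it with go test -fuzz=FuzzReverse on the dev branch and the mutation engine will quickly cook up a multi-byte input that trips property 2.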
"The tests are timing out again!", someone yells. "Alright, I'll bump them", you instinctively respond. Then you pause and feel uneasy. Is there another way?
In this blog post, I share my growing disconnect with code coverage and unit testing. I then detail the method I've been using for the greater part of 7 years and how it still allows me to preach at length that being correct is the single most important thing for a developer.
Testing can be hard. How to test? Where to test? What is a good test? These are all questions that can be deceptively difficult to answer. In this episode, we talk about the trials and tribulations of testing and why it can be argued to be especially difficult in Go.
Here's a pretty useful idea for library authors and their users: there are better ways to test your code!
I give three examples of how user projects can be self-tested without the end-user actually writing any real test cases. One is hypothetical, about django, and two examples are real and working, featuring deal and dry-python/returns. A brief example with deal:
import deal

@deal.pre(lambda a, b: a >= 0 and b >= 0)
@deal.raises(ZeroDivisionError)  # this function can raise if `b=0`, it is ok
def div(a: int, b: int) -> float:
    if a > 50:  # Custom, in real life this would be a bug in our logic:
        raise Exception('Oh no! Bug happened!')
    return a / b
This bug can be automatically found by writing a single line of test code: test_div = deal.cases(div). As easy as it gets! From this article you will learn:

- how @deal.pre(lambda a, b: a >= 0 and b >= 0) can help you to generate hundreds of test cases with almost no effort
- how dry-python/returns helps its users to build their own monads

I really like this idea! And I would appreciate your feedback on it.
Benjamin Coe joins Amal and Divya to discuss his wide-ranging open source projects, test coverage with Istanbul, and the future of testing in JavaScript.
Alabe Duarte shares his personal experience with TDD over the years.
Amal and Divya turn our spotlight inward and interview our very own Christopher "Boneskull" Hiller about maintaining Mocha.js. Mocha has been a mainstay in the JavaScript testing community for ten (!) years now! They discuss the secret to Mocha's success, what it's like to maintain it, and how to make maintainers (and users) happy!
Mat Ryer hosts a spectacular panel with expert debuggers Derek Parker, Grant Seltzer Richman, and Hana Kim from the Go Team. Let's face it, even the best-intended code doesn't always do what you want it to. What's a Gopher to do? Listen to this, that's what!
Devon C. Estes:
It's fairly common for folks who haven't used mutation testing before to not immediately see the value in the practice. Mutation testing is, after all, still a fairly niche and under-used tool in the average software development team's toolbox. So today I'm going to show a few specific types of very common problems that mutation testing is great at finding for us, and that are hard or impossible to find with other methods.
He goes on to detail the "multiple execution paths on a single line" problem, the "untested side effect" problem, and the "missing pin" problem.
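Here's a tiny illustration of that first problem, sketched in Go rather than the Elixir of Devon's post:

package access

import "testing"

// CanEdit packs two execution paths into one line: owner access or
// admin access. Line coverage cannot tell them apart.
func CanEdit(isOwner, isAdmin bool) bool {
    return isOwner || isAdmin
}

// This test achieves 100% line coverage, yet never exercises the
// isAdmin path. A mutant that deletes the second operand (leaving
// `return isOwner`) survives the whole suite, exposing a gap that a
// coverage report alone would never show.
func TestCanEdit(t *testing.T) {
    if !CanEdit(true, false) {
        t.Error("owner should be able to edit")
    }
}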
Typically, people say that testing is like a pyramid: a wide base of unit tests and very few end-to-end tests. I believe we've come to a point where a crab strategy is a better approach.
Gleb Bahmutov, PhD joins the show for a fun conversation around end-to-end testing. We get the skinny on Cypress, find out how it's structured as both an open source library and a SaaS business, tease apart the various types of tests you may (or may not) want to have, and share a lot of laughs along the way.
Justin Searls from Test Double joins the party to talk about patterns he's identified that lead to failure, minimalism, and of course, testing!
Headless Recorder is a Chrome extension that records your browser interactions and generates a Puppeteer or Playwright script. Install it from the Chrome Web Store. Don't forget to check out our sister project theheadless.dev, the open source knowledge base for Puppeteer and Playwright.
You may have heard of this when it was called Puppeteer Recorder, but its recent addition of Playwright support warranted a rename.
The panel discusses testing frameworks in Go. After a brief overview of the concepts involved, we discuss how testing frameworks can make our lives easier, and why some people still choose to avoid them. Mat Ryer and Mark Bates chat with Boyan Soubachov about the future of the Testify project.
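If you haven't used Testify, this is its bread and butter: assertion helpers dropped into ordinary go test functions. A small sketch — the Sum function is ours, purely illustrative:

package sum

import (
    "testing"

    "github.com/stretchr/testify/assert"
)

// Sum is a stand-in function under test, not anything from the episode.
func Sum(nums ...int) int {
    total := 0
    for _, n := range nums {
        total += n
    }
    return total
}

// A standard table-driven test, with Testify's assert helpers replacing
// hand-rolled if/t.Errorf comparisons. (This lives in a _test.go file.)
func TestSum(t *testing.T) {
    cases := []struct {
        name string
        in   []int
        want int
    }{
        {"empty", nil, 0},
        {"single", []int{5}, 5},
        {"many", []int{1, 2, 3}, 6},
    }
    for _, tc := range cases {
        t.Run(tc.name, func(t *testing.T) {
            assert.Equal(t, tc.want, Sum(tc.in...))
        })
    }
}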
uvu has minimal dependencies and supports both async/await style tests and ES modules, but it's not immediately clear to me why it benchmarks so well against the likes of Jest and Mocha.
~> "jest" took 1,630ms (861 ms)
~> "mocha" took 215ms ( 3 ms)
~> "tape" took 132ms ( ??? )
~> "uvu" took 74ms ( 1.4ms)
The benchmark suites are pretty basic, so it'd be cool to see a "production" grade library or application port their test suite to uvu for comparison.
Some interesting analysis by Lawrence Hecht for The New Stack:
The 2020 version of JetBrains' State of the Developer Ecosystem does quantify the extent to which this specialty has disappeared. One finding is that 43% of teams or projects have less than one tester or QA engineer per 10 developers. This is not necessarily a problem if most testing is automated, but that is only true among 38% of those surveyed.
38% is far too low a percentage of folks doing automated testing.
Jessica Xie:
I'm not implying that one shouldn't write tests. The benefits of quality assurance, performance monitoring, and speeding up development by catching bugs early instead of in production outweigh its downsides. However, improvements can be made… Test selection sparks joy in my life. I wish that I can bring the same joy to you.
This is a very cool idea coming out of the Clojure community. I dig it because the examples in your README are guaranteed to never become stale as your project evolves.
It works by parsing your README and looking for executable code samples with expected outputs. For each one it finds, it generates a test ensuring that executing the code produces the output.
There are, as you might expect, caveats.
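Go readers already have a built-in cousin of this idea: testable examples, where go test fails whenever an ExampleXxx function's printed output drifts from its // Output: comment. A tiny sketch with a made-up Greet function:

package greet

import "fmt"

// Greet is a hypothetical function documented by the example below.
func Greet(name string) string {
    return "Hello, " + name + "!"
}

// ExampleGreet lives in a _test.go file; `go test` runs it and fails
// if the printed output doesn't match the Output comment, so the
// documented sample can never silently go stale.
func ExampleGreet() {
    fmt.Println(Greet("world"))
    // Output: Hello, world!
}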
critic.sh exposes high-level functions for testing consistent with other frameworks, and a set of built-in assertions. One of my most important goals was to be able to pass in any shell expression to the _test and _assert methods, so that one is not limited to the built-ins.

The coverage reporting is currently rudimentary, but it does indicate which lines haven't been covered. It works by running the tests with extended debugging, redirecting the trace output to a log file, and then parsing it to determine which functions/lines have been executed. It can definitely be improved!
See a demo of critic.sh in action on asciinema 📽️
Production ML systems include more than just the model. In these complicated systems, how do you ensure quality over time, especially when you are constantly updating your infrastructure, data and models? Tania Allard joins us to discuss the ins and outs of testing ML systems. Among other things, she presents a simple formula that helps you score your progress towards a robust system and identify problem areas.
Chaos Mesh is a cloud-native Chaos Engineering platform that orchestrates chaos on Kubernetes environments. At the current stage, it has the following components:
- Chaos Operator: the core component for chaos orchestration. Fully open sourced.
- Chaos Dashboard: a visualized panel that shows the impacts of chaos experiments on the online services of the system; under development; currently only supports chaos experiments on TiDB (https://github.com/pingcap/tidb).
For the uninitiated, chaos engineering is when you unleash havoc on your system to prove out its resiliency (or lack thereof).
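As a toy illustration of the principle (our own sketch, not Chaos Mesh itself), imagine an HTTP middleware that injects failures into a configurable fraction of requests so retry and fallback paths actually get exercised:

package chaos

import (
    "math/rand"
    "net/http"
)

// Middleware fails a given fraction of requests with a 500 before they
// reach the real handler, so you can watch how clients and downstream
// services cope with the havoc.
func Middleware(failureRate float64, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if rand.Float64() < failureRate {
            http.Error(w, "chaos: injected failure", http.StatusInternalServerError)
            return
        }
        next.ServeHTTP(w, r)
    })
}

Wrap your mux with Middleware(0.05, mux) and roughly 5% of requests fail; Chaos Mesh applies the same idea at the infrastructure level, killing pods and injecting network faults instead.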
LocalStack looks like an excellent way to develop & test your serverless apps without leaving your local machine. It appears they are basically mocking 20+ AWS services, which is undoubtedly a lot of work that I would expect to be error-prone. Is anybody out there using LocalStack on the regular and can let us know if it actually works as advertised?
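If you do give it a spin, pointing the Go AWS SDK at LocalStack looks roughly like this. The edge port 4566 and dummy credentials are assumptions based on LocalStack's documented defaults, so adjust to your setup:

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    // Assumption: LocalStack's edge endpoint on its default port 4566;
    // older releases exposed one port per service instead.
    sess, err := session.NewSession(&aws.Config{
        Region:           aws.String("us-east-1"),
        Endpoint:         aws.String("http://localhost:4566"),
        S3ForcePathStyle: aws.Bool(true), // path-style URLs avoid DNS tricks locally
        Credentials:      credentials.NewStaticCredentials("test", "test", ""),
    })
    if err != nil {
        log.Fatal(err)
    }

    // The same client code that talks to real S3 now talks to LocalStack.
    out, err := s3.New(sess).ListBuckets(&s3.ListBucketsInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, b := range out.Buckets {
        fmt.Println(*b.Name)
    }
}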