


JS Party #216

Enabling performance-centric engineering orgs

This week Amal and Nick are joined by Dan Shappir, a Performance Tech Lead at Next Insurance, to learn about enabling a performance-first mindset within your engineering org.

Dan recently left his 7+ year tenure leading performance at Wix, where he and his team improved and monitored the speed of millions of websites around the world.

Join us to learn how he led a cultural transformation that propelled Wix sites to be faster than most other React apps in the wild, including ones built with frameworks like Next.js.

Martin Heinz

Profiling and analyzing performance of Python programs

Martin Heinz on the tools/techniques for finding bottlenecks in your Python code. And fixing them, fast.

The first rule of optimization is to not do it. If you really have to though, then optimize where appropriate. Use the above profiling tools to find bottlenecks, so you don’t waste time optimizing some inconsequential piece of code. It’s also useful to create a reproducible benchmark for the piece of code you’re trying to optimize, so that you can measure the actual improvement.
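A minimal sketch of that workflow in Python (the `slow_sum` function is a made-up stand-in for your own bottleneck): profile first with `cProfile` to find where time actually goes, then build a reproducible `timeit` benchmark so any change is measurable.

```python
import cProfile
import pstats
import timeit

def slow_sum(n):
    # Deliberately naive: builds an intermediate list before summing.
    total = 0
    for x in [i * i for i in range(n)]:
        total += x
    return total

# Step 1: profile to locate the bottleneck before optimizing anything.
profiler = cProfile.Profile()
profiler.enable()
slow_sum(100_000)
profiler.disable()
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)

# Step 2: a reproducible benchmark, so the "improvement" is measured,
# not assumed.
baseline = timeit.timeit(lambda: slow_sum(100_000), number=20)
optimized = timeit.timeit(
    lambda: sum(i * i for i in range(100_000)), number=20
)
print(f"baseline: {baseline:.3f}s, optimized: {optimized:.3f}s")
```

Keeping both measurements in one script makes the comparison repeatable: re-run it after each change instead of trusting a one-off timing.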

JS Party #204

JavaScript will kill you in the Apocalypse

Salma Alam-Naylor joins us this week to share her thesis that JavaScript is best in moderation, and is a liability when creating performant, resilient, and accessible web applications. Salma says we’re drunk on JavaScript, and it’s time we learn how to leverage this powerful web primitive to enhance our web experiences, alongside HTML and CSS, instead of purely relying on JavaScript to completely run the show.


ClickHouse vs TimescaleDB

Two up-and-coming database options compared:

Recently, TimescaleDB published a blog comparing ClickHouse & TimescaleDB using timescale/tsbs, a timeseries benchmarking framework. I have some experience with PostgreSQL and ClickHouse but never got the chance to play with TimescaleDB. Some of the claims about TimescaleDB made in their post are very bold, which made me even more curious. I thought it’d be a great opportunity to try it out and see if those claims are really true.

Jordan Eldredge

Speeding up Webamp's music visualizer with WebAssembly

Jordan Eldredge:

Webamp’s visualizer, Butterchurn, now uses WebAssembly (Wasm) to achieve better performance and improved security. Whereas most projects use Wasm by compiling pre-existing native code to Wasm, Butterchurn uses an in-browser compiler to compile untrusted user-supplied code to fast and secure Wasm at runtime.



Parcel 2 is getting a 10x compiler speedup (thanks, Rust!)

The Parcel team is excited to release Parcel 2 beta 3! This release includes a ground up rewrite of our JavaScript compiler in Rust, which improves overall build performance by up to 10x. In addition, this post will cover some other improvements we’ve made to Parcel since our last update, along with our roadmap to a stable Parcel 2 release.

A growing trend in the JS tooling world is to replace bits and pieces with Rust || Go where it makes sense and reap the performance benefits. Congrats to the Parcel team on epic results from this rewriting effort.



Speed is the killer feature

Brad Dickason:

… teams consistently overlook speed. Instead, they add more features (which ironically make things slower). Products bloat over time and performance goes downhill.

New features might help your users accomplish something extra in your product. Latency stops your users from doing the job they already hire your product for.

Slow UI acts like tiny papercuts. Every time we have to wait, we get impatient, frustrated, and lose our flow.


How the V8 team made JS calls faster with this clever trick

Victor Gomes details the elegant hack (in the best sense of the word) he and the V8 team came up with to significantly increase V8’s JavaScript function call performance (by up to 40% in some cases).

Until recently, V8 had a special machinery to deal with arguments size mismatch: the arguments adaptor frame. Unfortunately, argument adaption comes at a performance cost, but is commonly needed in modern front-end and middleware frameworks. It turns out that, with a clever trick, we can remove this extra frame, simplify the V8 codebase and get rid of almost the entire overhead.

A fascinating read and fantastic performance improvements for all to enjoy.


Why wasn't Ruby 3 faster?

Noah Gibbs tries to reason through why some folks are disappointed in Ruby 3’s lack of speed improvements:

I think some of the problem was misplaced expectations. People didn’t understand what “three times faster” was supposed to mean. I don’t think people thought it through, but I also don’t think it was communicated very clearly.

So: some people understood what was promised, and some people didn’t.

What was promised?

I think Noah hits on a lot of solid points here.

 Itamar Turner-Trauring

CI for performance: Reliable benchmarking in noisy environments

Benchmarking is often not done in CI because it’s so hard to get consistent results; there’s a lot of noise in cloud VMs, so you ideally want dedicated hardware. But it turns out you can use a tool called Cachegrind to get consistent benchmark results across different computers, allowing you to run benchmarks in GitHub Actions, GitLab CI, etc. and still get consistent results.
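As a sketch of the idea (the helper below is hypothetical, not Itamar’s actual tooling): run the benchmark under `valgrind --tool=cachegrind` and compare the instruction count, which stays stable across machines, rather than wall-clock time.

```python
import re
import subprocess

def parse_instruction_count(cachegrind_output: str) -> int:
    """Extract the 'I refs' (instructions executed) total from
    Cachegrind's summary, e.g. '==1234== I   refs:   1,234,567'."""
    match = re.search(r"I\s+refs:\s+([\d,]+)", cachegrind_output)
    if match is None:
        raise ValueError("no Cachegrind summary found")
    return int(match.group(1).replace(",", ""))

def count_instructions(cmd: list[str]) -> int:
    """Run cmd under Cachegrind; instruction counts are far more
    stable than timings, so they work in noisy CI runners."""
    result = subprocess.run(
        ["valgrind", "--tool=cachegrind", *cmd],
        capture_output=True, text=True, check=True,
    )
    return parse_instruction_count(result.stderr)  # summary is on stderr

# In CI you might fail the build on, say, a 2% regression (the
# threshold, benchmark.py, and the stored baseline are illustrative):
# assert count_instructions(["python", "benchmark.py"]) < baseline * 1.02
```

This only approximates real-world time (it ignores I/O and memory latency beyond cache behavior), which is the trade-off the article discusses.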

CSS-Tricks

Comparing static site generator build times

Sean C Davis writing on CSS-Tricks:

A colleague of mine built a static site generator evaluation cheatsheet. It provides a really nice snapshot across numerous popular SSG choices. What’s missing is how they actually perform in action.

Sean set out to test 6 of the most popular SSGs on the market today. The results are somewhat expected (Hugo is super fast), but there are some surprises in there as well (Hugo scales poorly, though that doesn’t matter much because it’s so fast to begin with).


Zach Leatherman

Use Speedlify to continuously measure site performance

Zach Leatherman:

Instantaneous measurement is a good first step. But how do we ensure that the site maintains good performance and best practices when deploys are happening every day? How do we keep the web site fast? The second step is continuous measurement. This is where Speedlify comes in. It’s an Eleventy-generated web site published as an open source repository to help automate continuous performance measurements.

Demo here.



How the most popular Chrome extensions affect browser performance

I used to be the guy with dozens of Chrome extensions. These days I limit my use of both (Google Chrome and browser plugins). Performance and reliability are features I desire more than what most plugins have on offer.

That being said, if you have a lot of extensions and you’re curious which ones might be bogging down your machine’s resources, this is a great analysis of the top 1000.


Achiel van der Mandele (Cloudflare)

Cloudflare launches speed.cloudflare.com

There’s a new speed test in town…

With many people being forced to work from home, there’s increased load on consumer ISPs. You may be asking yourself: how well is my ISP performing with even more traffic? Today we’re announcing the general availability of speed.cloudflare.com, a way to gain meaningful insights into exactly how well your network is performing.

We’ve seen a massive shift from users accessing the Internet from busy office districts to spread out urban areas. Although there are a slew of speed testing tools out there, none of them give you precise insights into how they came to those measurements and how they map to real-world performance.
