Addy Osmani

The cost of JavaScript in 2019

An update from Addy Osmani on the cost of JavaScript on the web in 2019, and what you can do to reduce it. Interesting because the top items to pay attention to have changed from the conventional wisdom of a couple of years back. Osmani:

One large change to the cost of JavaScript over the last few years has been an improvement in how fast browsers can parse and compile script. In 2019, the dominant costs of processing scripts are now download and CPU execution time.

Smashing Magazine

A progressive migration to native lazy loading

Native lazy loading is coming to the web. Since it doesn’t depend on JavaScript, it will revolutionize the way we lazy load content today, making it easier for developers to lazy load images and iframes.

I’m excited about native lazy loading! We’ve been using lozad.js for lazy loading with some success. There are times when it seems that IntersectionObserver fails to do its job and an image won’t load. (If you scroll the element out of and back into the viewport, it will usually work the second time.)

But it’s not a feature we can polyfill, and it will take some time before it becomes usable across all browsers. In this article, you’ll learn how it works and how you can progressively replace your JavaScript-driven lazy loading with its native alternative, thanks to hybrid lazy loading.

I might try this hybrid approach and see what happens…
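The hybrid idea can be sketched in a few lines: feature-detect native lazy loading and hand off to the browser when it’s available, otherwise fall back to an IntersectionObserver-based loader. A minimal sketch, assuming a `data-src` convention — the function names here are illustrative, not from the article:

```javascript
// Pure helper so the detection logic is testable outside a browser:
// `proto` is expected to be HTMLImageElement.prototype at runtime.
function supportsNativeLazyLoading(proto) {
  return 'loading' in proto;
}

function initLazyImages(doc, proto) {
  const images = doc.querySelectorAll('img[data-src]');
  if (supportsNativeLazyLoading(proto)) {
    // Native path: promote data-src to src and let the browser
    // defer the fetch via loading="lazy".
    images.forEach((img) => {
      img.setAttribute('loading', 'lazy');
      img.src = img.dataset.src;
    });
  } else {
    // Fallback: load each image only as it nears the viewport.
    const observer = new IntersectionObserver((entries, obs) => {
      entries.forEach((entry) => {
        if (entry.isIntersecting) {
          entry.target.src = entry.target.dataset.src;
          obs.unobserve(entry.target);
        }
      });
    });
    images.forEach((img) => observer.observe(img));
  }
}

// Only wire things up when running in a real browser.
if (typeof document !== 'undefined' && typeof HTMLImageElement !== 'undefined') {
  initLazyImages(document, HTMLImageElement.prototype);
}

if (typeof module !== 'undefined') {
  module.exports = { supportsNativeLazyLoading, initLazyImages };
}
```

Libraries like lozad.js would replace the fallback branch; the point is that the native path costs nothing when the browser supports it.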

CSS Wizardry

Self-host your static assets

A revealing look at the costs and risks of linking out to CDN-hosted assets for common libraries. This “best practice” may be anything but, especially with today’s ease of setting up CDNs in front of your own content with tools like Cloudflare.

On the practice of linking to 3rd party CDNs, Harry doesn’t hold back:

There are a number of perceived benefits to doing this, but my aim later in this article is to either debunk these claims, or show how other costs vastly outweigh them.

Dave Cheney

Dave Cheney's "High Performance Go" workshop docs

If you haven’t attended the workshop directly, the next best thing is to learn indirectly by reading the workshop’s docs.

The goal for this workshop is to give you the tools you need to diagnose performance problems in your Go applications and fix them.

It’s licensed under the Creative Commons Attribution-ShareAlike 4.0 International license and the source is on GitHub.

Damian Gryski

Practices for writing high-performance Go

From writing and optimizing Go code to common gotchas with the Go standard library, Damian Gryski shared his thoughts on Go performance optimization and outlined best practices for writing high-performance Go code. Available in English, 中文, and Español.

When and where to optimize — Every optimization has a cost. Generally this cost is expressed in terms of code complexity or cognitive load – optimized code is rarely simpler than the unoptimized version. But there’s another side that I’ll call the economics of optimization. As a programmer, your time is valuable. There’s the opportunity cost of what else you could be working on for your project, which bugs to fix, which features to add. Optimizing things is fun, but it’s not always the right task to choose. Performance is a feature, but so is shipping, and so is correctness.

Sergiy Kukunin

The pros and cons of Elixir

In this short Q&A, Sergiy Kukunin, an Elixir expert, shares his thoughts on why Elixir is becoming so popular, its core advantages, and its drawbacks.

Sergiy also shared this as a takeaway to getting started with Elixir.

…the syntax of Elixir has some things in common with Ruby. The languages are entirely different, but it is always good to see symbols and elements you are used to. The simplest thing is to use some of the new Elixir-compatible web-development frameworks. The most popular web framework for Elixir is Phoenix. You should definitely give it a try, especially if you are used to using Ruby on Rails. This will simplify development while still making the app faster and more reliable.

Jeremy Wagner A List Apart

Responsible JavaScript (Part 1)

This pretty much sums up the point Jeremy is trying to get across with this post on A List Apart and the future parts to this story of “Responsible JavaScript.”

I’m not here to kill JavaScript — Make no mistake, I have no ill will toward JavaScript. It’s given me a career and—if I’m being honest with myself—a source of enjoyment for over a decade. Like any long-term relationship, I learn more about it the more time I spend with it. It’s a mature, feature-rich language that only gets more capable and elegant with every passing year.

Yet, there are times when I feel like JavaScript and I are at odds. I am critical of JavaScript. Or maybe more accurately, I’m critical of how we’ve developed a tendency to view it as a first resort to building for the web…

Victor Zhou

Why I replaced Disqus and you should too

Victor Zhou:

Switching away from Disqus reduced my page weight by over 10x and my network requests by over 6x. Disqus is bloated and sells your data - there are much better alternatives out there.

Disqus has been the de facto comment engine for dev blogging (especially for SSGs) for years. I’m happy to learn there are less bloated, more privacy-focused alternatives out there.

Cloudflare + Warp

Cloudflare just launched a VPN for people who don’t know what V.P.N. stands for.

…we think the market for VPNs as it’s been imagined to date is severely limited. Imagine trying to convince a non-technical friend that they should install an app that will slow down their Internet and drain their battery so they can be a bit more secure. Good luck.

What’s interesting is the patience they’ve demonstrated with this launch. They first had to learn a thing or two about…

…the failure conditions when a VPN app switched between cellular and WiFi, when it suffered signal degradation, tried to register with a captive portal, or otherwise ran into the different conditions that mobile phones experience in the field.

The basic version of Warp is free. To put folks at ease (because they’re a for-profit company), they’ve been transparent about their motives and shared “three primary ways this makes financial sense” for them.

 Itamar Turner-Trauring

10× faster database tests with Docker

Testing code that talks to the database can be slow. Fakes are fast but unrealistic. What to do? With a little help from Docker, you can write tests that run fast, use the real database, are easy to write and run.

I tried Itamar’s technique on my own test suite and the 679 tests complete in ~17 seconds. The same tests run directly against Postgres complete in ~12 seconds.

A net loss for me, but that may have something to do with how Docker for Mac works? I’d love to hear other people’s experiences.

Bits and Pieces

Understanding Service Workers and caching strategies

Solid tutorial on Service Workers:

You can think of the service worker as someone who sits between the client and server and all the requests that are made to the server pass through the service worker. Basically, a middle man. Since all the request pass through the service worker, it is capable to intercept these requests on the fly.
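As a concrete sketch of one common strategy, here’s a minimal cache-first service worker. The cache name and precache list are placeholders, and a production worker would also need versioning and cleanup on `activate`:

```javascript
// Cache-first: answer from the cache when possible, and only go to
// the network on a miss.
const CACHE_NAME = 'static-v1'; // placeholder cache name

// Pure decision helper, testable outside a browser: only GET
// requests for http(s) URLs are worth intercepting and caching.
function shouldHandle(method, url) {
  return method === 'GET' && /^https?:/.test(url);
}

// Only register listeners when actually running as a service worker.
if (typeof self !== 'undefined' && typeof caches !== 'undefined') {
  self.addEventListener('install', (event) => {
    // Precache a few known assets up front (placeholder URLs).
    event.waitUntil(
      caches.open(CACHE_NAME).then((cache) => cache.addAll(['/', '/styles.css']))
    );
  });

  self.addEventListener('fetch', (event) => {
    if (!shouldHandle(event.request.method, event.request.url)) return;
    event.respondWith(
      caches.match(event.request).then(
        (cached) => cached || fetch(event.request) // cache hit, else network
      )
    );
  });
}

if (typeof module !== 'undefined') {
  module.exports = { shouldHandle };
}
```

Other strategies in the tutorial (network-first, stale-while-revalidate) differ only in the order and combination of the `caches.match` and `fetch` calls inside the `fetch` handler.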


Optimising startup time of Prometheus 2.6.0 with pprof

Brian Brazil:

The informal design goal of the Prometheus 2.x TSDB startup was that it should take no more than about a minute. Over the past few months there’s been reports of it taking quite a bit more than this, which is a problem if your Prometheus restarts for some reason. Almost all of that time is loading the WAL (write ahead log), which are the samples in the last few hours which have yet to be compacted into a block. I finally got a chance to dig into this at the end of October, and the outcome was PR#440 which reduced CPU time by 6.5x and walltime by 4x. Let’s look at how I arrived at these improvements.

I’ve been meaning to get more familiar with pprof, the Go profiling tool, as my job revolves around working on and around Go microservices. My team has been able to see the impact of the Go experts who can quickly find issues buried in a stack of profiles collected on a service.

Brian’s post is a great example of 1) identifying an issue, 2) diagnosing said issue, and 3) observing the implemented improvements using pprof. His parting paragraph is particularly insightful, specifically:

I did spend quite a bit of time pouring over the code, and had several dead ends such as removing the call to NumSamples, doing reading and decoding in separate threads, and a few variants of how the processWALSamples sharding worked

Profiling and optimization are a mix of knowing your codebase and being able to identify false leads. A tool like pprof is invaluable for identifying both issues and improvements in a measurable way.

Scott Jehl

Inlining or caching? Both please!

I was exploring patterns that enable the browser to render a page as fast as possible by including code alongside the initial HTML so that the browser has everything it needs to start rendering the page, without making additional requests.

Our two go-to options to achieve this goal are inlining and server push (more on how we use those), but each has drawbacks: inlining prevents a file from being cached for reuse, and server push is still a bit experimental, with some browser bugs still being worked out. As I was preparing to describe these caveats, I thought, “I wonder if the new Service Worker and Caching APIs could enable caching for inline code.”

I’ve been dabbling a bit with service workers over on Brightly Colored to improve the loading time, so this exploration of caching inline CSS is fascinating. In fact, I used to completely inline all the CSS on the site, but switched to a file request because of the way I thought service workers, well… worked. Surprisingly, this implementation doesn’t look too difficult.
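A rough sketch of the inline-then-cache idea, assuming an inline `<style>` with a known id and a hypothetical `/site.css` URL (Scott’s article walks through a more complete implementation): ship the CSS inline on the first visit, then store that same CSS in the Cache API under the stylesheet’s URL, so later pages can link the file and a service worker can serve it instantly from cache.

```javascript
// Pure helper, testable anywhere: describe the inline CSS text as
// the body + headers of the Response we'll put in the cache.
function cssResponseInit(cssText) {
  return {
    body: cssText,
    headers: { 'Content-Type': 'text/css' },
  };
}

// Browser-only wiring; the style id, cache name, and URL are placeholders.
if (typeof document !== 'undefined' && typeof caches !== 'undefined') {
  const styleEl = document.getElementById('inline-css'); // placeholder id
  if (styleEl) {
    const { body, headers } = cssResponseInit(styleEl.textContent);
    caches.open('inlined-v1').then((cache) => {
      // Store the inline CSS as if it had been fetched from /site.css,
      // so a service worker can satisfy future requests for that URL.
      cache.put('/site.css', new Response(body, { headers }));
    });
  }
}

if (typeof module !== 'undefined') {
  module.exports = { cssResponseInit };
}
```

The missing half (not shown) is a service worker `fetch` handler that checks the cache for `/site.css`, plus some signal — a cookie, say — telling the server to emit a `<link>` instead of inlining on repeat visits.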


Guess.js - a toolkit for enabling data-driven user-experiences on the web

Our goal with Guess.js is to minimize your bundle layout configuration, make it data-driven, and much more accurate! In the end, you should lazy load all your routes and Guess.js will figure out which bundles to be combined together and what pre-fetching mechanism to be used! All this in less than 5 minutes setup time.

That’s an excellent goal! But how will that work?

During the build process, the GuessPlugin will fetch report from Google Analytics, build a model used for predictive pre-fetching and add a small runtime to the main bundle of your application. On route change, the runtime will query the generated model for the pages that are likely to be visited next and pre-fetch the associated with them JavaScript bundles.

The tool was announced at Google I/O back in May, but as of today it’s still in alpha.
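For flavor, wiring Guess.js into a webpack build looks roughly like this. This is sketched from the project’s announced usage with a placeholder Google Analytics view ID; since it’s still in alpha, check the guess-js repo for the current API before copying:

```javascript
// webpack.config.js — a sketch, not a definitive setup.
const { GuessPlugin } = require('guess-webpack');

module.exports = {
  // ...your existing entry/output/module config...
  plugins: [
    // Pulls page-navigation data from Google Analytics at build time,
    // trains a predictive model, and emits a small runtime that
    // prefetches the bundles a user is most likely to need next.
    new GuessPlugin({ GA: 'XXXXXXX' }), // placeholder GA view ID
  ],
};
```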

Noa Gruman

Implementing a multi-CDN strategy? Here's everything you need to know.

There are some seriously interesting thoughts shared here on building out a multi-CDN strategy. Having wrestled with how best to use and leverage a CDN for performance, I can see how a multi-CDN implementation would let us choose the right CDN for a given region of the world, and optimize along a whole host of other dimensions like cost, performance, and of course redundancy for when things go wrong. Murphy’s law, right?

This summer, the 2018 World Cup set an all-time streaming record – tripling its own 2014 record – with over 22 Tbps measured by Akamai at peak, but the event wasn’t smooth sailing for everyone. In a highly competitive market, and in an age where streaming failures make headlines, redundancy and quality of experience have never been more crucial for content publishers.

Drop a comment below if there are other resources out there on this subject that we should check out.

Addy Osmani Medium

A Netflix web performance case study

Hold on to your seat! This is a deep dive on improving time-to-interactive for Netflix.com on the desktop. Addy Osmani writes on the Dev Channel for the Chromium dev team about the performance tuning of Netflix.com, where the team set out to determine if React was truly necessary for the logged-out homepage to function.

Even though React’s initial footprint was just 45kB, removing React, several libraries and the corresponding app code from the client-side reduced the total amount of JavaScript by over 200kB, causing an over-50% reduction in Netflix’s time-to-interactivity for the logged-out homepage.

There’s more to this story, so dig in. Or, share your comments on their approach to reducing time-to-interactivity and if you might have done things differently.



A high-performance PHP app server, load balancer, and process manager

RoadRunner is an open source (MIT licensed), high-performance PHP application server, load balancer and process manager. It supports running as a service with the ability to extend its functionality on a per-project basis.

RoadRunner is written in Go, and can be used to replace the classic Nginx+FPM setup, boasting “much greater performance”. I’d love to see some benchmarks. Better yet, I’d love to see someone use this in production for a bit and write up their experience.

Nikita Prokopov

Software disenchantment (or, struggles with operating at 1% possible performance)

Nikita Prokopov has been programming for 15 years and has become quite frustrated with the industry’s lack of care for efficiency, simplicity, and excellence in software — to the point of depression.

Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it.

Nikita cites some examples:

…our portable computers are thousands of times more powerful than the ones that brought man to the moon. Yet every other webpage struggles to maintain a smooth 60fps scroll on the latest top-of-the-line MacBook Pro. I can comfortably play games and watch 4K videos but not scroll web pages? How is it ok?

Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row.

We put virtual machines inside Linux, and then we put Docker inside virtual machines, simply because nobody was able to clean up the mess that most programs, languages, and their environment produce. We cover shit with blankets just not to deal with it. “Single binary” is still a HUGE selling point for Go, for example. No mess == success.

Do you share Nikita’s position? Sure, be frustrated with performance (because we all want it to “go faster!”), but do you agree with his points beyond that? If so, read this and consider supporting him on Patreon.

Dominic Tarr

Your web app is bloated

Using Firefox’s memory snapshot tool, Dominic Tarr measured the heap usage of a variety of web apps. The results are… not good. The biggest losers are Google properties, followed closely by Slack (which is probably not a surprise). Faring much better were GitHub (7.41MB), StackOverflow (2.55MB), and Wikipedia (1.73MB).

What struck me most is that while modern Gmail is one of the worst offenders (158MB), vintage Gmail uses just 0.81MB of memory. The ‘good ole days’ strike back!
