Performance

Smashing Magazine

A progressive migration to native lazy loading

Native lazy loading is coming to the web. Since it doesn’t depend on JavaScript, it will revolutionize the way we lazy load content today, making it easier for developers to lazy load images and iframes. I’m excited about native lazy loading! We’ve been using lozad.js for lazy loading with some success. There are times when it seems that IntersectionObserver fails to do its job and an image won’t load. (If you scroll the element out of and back into the viewport, it will usually work the second time.) But it’s not a feature we can polyfill, and it will take some time before it becomes usable across all browsers. In this article, you’ll learn how it works and how you can progressively replace your JavaScript-driven lazy loading with its native alternative, thanks to hybrid lazy loading. I might try this hybrid approach and see what happens…
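For a rough idea of what the hybrid approach looks like, here’s a minimal sketch (not the article’s code — it assumes images marked up with a lazy class and a data-src attribute):

```js
// Hybrid lazy loading: prefer the native loading="lazy" attribute,
// fall back to IntersectionObserver where it isn't supported yet.
if ('loading' in HTMLImageElement.prototype) {
  // Native support: copy data-src into src and let the browser do the rest.
  document.querySelectorAll('img.lazy').forEach((img) => {
    img.src = img.dataset.src;
  });
} else {
  // No native support: load each image as it approaches the viewport.
  const observer = new IntersectionObserver((entries, obs) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      entry.target.src = entry.target.dataset.src;
      obs.unobserve(entry.target);
    });
  }, { rootMargin: '200px' }); // start loading a bit before the image scrolls in

  document.querySelectorAll('img.lazy').forEach((img) => observer.observe(img));
}
```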

read more

CSS Wizardry

Self-host your static assets

A revealing look at the costs and risks of linking out to CDN-hosted assets for common libraries. This “best practice” may be anything but, especially with today’s ease of setting up CDNs in front of your own content with tools like Cloudflare. On the practice of linking to 3rd party CDNs, Harry doesn’t hold back: There are a number of perceived benefits to doing this, but my aim later in this article is to either debunk these claims, or show how other costs vastly outweigh them.

read more

Dave Cheney dave.cheney.net

Dave Cheney's "High Performance Go" workshop docs

If you haven’t attended the workshop directly, the next best thing is to learn indirectly by reading the workshop’s docs. The goal for this workshop is to give you the tools you need to diagnose performance problems in your Go applications and fix them. It’s licensed under the Creative Commons Attribution-ShareAlike 4.0 International license and the source is on GitHub.

read more

Damian Gryski github.com

Practices for writing high-performance Go

From writing and optimizing Go code to common gotchas with the Go standard library, Damian Gryski shared his thoughts on Go performance optimization and outlined best practices for writing high-performance Go code. Available in English, 中文, and Español. When and where to optimize — Every optimization has a cost. Generally this cost is expressed in terms of code complexity or cognitive load – optimized code is rarely simpler than the unoptimized version. But there’s another side that I’ll call the economics of optimization. As a programmer, your time is valuable. There’s the opportunity cost of what else you could be working on for your project, which bugs to fix, which features to add. Optimizing things is fun, but it’s not always the right task to choose. Performance is a feature, but so is shipping, and so is correctness.

read more

Sergiy Kukunin habr.com

The pros and cons of Elixir

In this short Q&A, Sergiy Kukunin, an Elixir expert, shares his thoughts on why Elixir is becoming so popular, its core advantages, and its drawbacks. Sergiy also shared this takeaway on getting started with Elixir: …the syntax of Elixir has some things in common with Ruby. The languages are entirely different, but it is always good to see symbols and elements you are used to. The simplest thing is to use some of the new Elixir-compatible web-development frameworks. The most popular web framework for Elixir is Phoenix. You should definitely give it a try, especially if you are used to using Ruby on Rails. This will simplify development while still making the app faster and more reliable.

read more

Jeremy Wagner A List Apart

Responsible JavaScript (Part 1)

This pretty much sums up the point Jeremy is trying to get across with this post on A List Apart and the future installments of his “Responsible JavaScript” series. I’m not here to kill JavaScript — Make no mistake, I have no ill will toward JavaScript. It’s given me a career and—if I’m being honest with myself—a source of enjoyment for over a decade. Like any long-term relationship, I learn more about it the more time I spend with it. It’s a mature, feature-rich language that only gets more capable and elegant with every passing year. Yet, there are times when I feel like JavaScript and I are at odds. I am critical of JavaScript. Or maybe more accurately, I’m critical of how we’ve developed a tendency to view it as a first resort to building for the web…

read more

Victor Zhou victorzhou.com

Why I replaced Disqus and you should too

Victor Zhou: Switching away from Disqus reduced my page weight by over 10x and my network requests by over 6x. Disqus is bloated and sells your data - there are much better alternatives out there. Disqus has been the de facto comment engine for dev blogging (especially for SSGs) for years. I’m happy to learn there are less bloated, more privacy-focused alternatives out there.

read more

Cloudflare Blog

1.1.1.1 + Warp

Cloudflare just launched a VPN for people who don’t know what V.P.N. stands for. …we think the market for VPNs as it’s been imagined to date is severely limited. Imagine trying to convince a non-technical friend that they should install an app that will slow down their Internet and drain their battery so they can be a bit more secure. Good luck. What’s interesting is the patience they’ve demonstrated with this launch. They first had to learn a thing or two about… …the failure conditions when a VPN app switched between cellular and WiFi, when it suffered signal degradation, tried to register with a captive portal, or otherwise ran into the different conditions that mobile phones experience in the field. The basic version of Warp is free. To put folks at ease (because they’re a for-profit company), they’ve been transparent about their motives and shared “three primary ways this makes financial sense” for them.

read more

Itamar Turner-Trauring pythonspeed.com

10× faster database tests with Docker

Testing code that talks to the database can be slow. Fakes are fast but unrealistic. What to do? With a little help from Docker, you can write tests that run fast, use the real database, are easy to write and run. I tried Itamar’s technique on changelog.com’s test suite and the 679 tests complete in ~17 seconds. The same tests run directly against Postgres complete in ~12 seconds. A net loss for me, but that may have something to do with how Docker for Mac works? I’d love to hear other people’s experiences.
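Itamar’s article has the details; as a rough sketch of the shape of it (not his code — the image tag, port, and tmpfs speed-up here are my own assumptions), a test suite can spin up a throwaway Postgres container like this:

```js
// Start a disposable Postgres container for the test suite. Putting the data
// directory on tmpfs keeps writes in RAM, which is one common way to speed
// up a real-database test setup.
const { execSync } = require('child_process');

function startTestPostgres() {
  const cmd = [
    'docker run -d --rm',
    '-e POSTGRES_PASSWORD=test',        // throwaway credentials
    '-p 127.0.0.1:5433:5432',           // arbitrary local port for tests
    '--tmpfs /var/lib/postgresql/data', // keep the data directory in RAM
    'postgres:11-alpine',
  ].join(' ');
  // Returns the container ID; in practice you'd also wait for the port
  // (or pg_isready) before running tests.
  return execSync(cmd).toString().trim();
}

function stopTestPostgres(containerId) {
  execSync(`docker stop ${containerId}`); // --rm removes it once stopped
}

module.exports = { startTestPostgres, stopTestPostgres };
```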

read more

Bits and Pieces

Understanding Service Workers and caching strategies

Solid tutorial on Service Workers: You can think of the service worker as someone who sits between the client and the server, and all the requests that are made to the server pass through the service worker. Basically, a middle man. Since all the requests pass through the service worker, it is capable of intercepting these requests on the fly.
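As a tiny illustration of that middle-man role, here’s a minimal cache-first service worker (a sketch, not the tutorial’s code — the cache name and asset list are made up):

```js
// sw.js — pre-cache a few assets at install time, then answer every fetch
// from the cache when possible, falling back to the network otherwise.
const CACHE_NAME = 'static-v1';

self.addEventListener('install', (event) => {
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) =>
      cache.addAll(['/', '/styles.css', '/app.js'])
    )
  );
});

self.addEventListener('fetch', (event) => {
  // Every request passes through here, so we get to decide how to answer it.
  event.respondWith(
    caches.match(event.request).then((cached) => cached || fetch(event.request))
  );
});
```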

read more

Go robustperception.io

Optimising startup time of Prometheus 2.6.0 with pprof

Brian Brazil: The informal design goal of the Prometheus 2.x TSDB startup was that it should take no more than about a minute. Over the past few months there’s been reports of it taking quite a bit more than this, which is a problem if your Prometheus restarts for some reason. Almost all of that time is loading the WAL (write ahead log), which are the samples in the last few hours which have yet to be compacted into a block. I finally got a chance to dig into this at the end of October, and the outcome was PR#440 which reduced CPU time by 6.5x and walltime by 4x. Let’s look at how I arrived at these improvements. I’ve been meaning to get more familiar with pprof, the Go profiling tool, as my job revolves around working on and around Go microservices. My team has seen the impact of Go experts who can quickly find issues buried in a stack of profiles collected on a service. Brian’s post is a great example of 1) identifying an issue, 2) diagnosing said issue, and 3) observing the implemented improvements using pprof. His parting paragraph is particularly insightful, specifically: I did spend quite a bit of time poring over the code, and had several dead ends such as removing the call to NumSamples, doing reading and decoding in separate threads, and a few variants of how the processWALSamples sharding worked. Profiling and optimization is a mix of knowing your codebase and being able to identify false leads. A tool like pprof is invaluable when identifying both issues and improvements in a measurable way.

read more

Scott Jehl filamentgroup.com

Inlining or caching? Both please!

I was exploring patterns that enable the browser to render a page as fast as possible by including code alongside the initial HTML so that the browser has everything it needs to start rendering the page, without making additional requests. Our two go-to options to achieve this goal are inlining and server push (more on how we use those), but each has drawbacks: inlining prevents a file from being cached for reuse, and server push is still a bit experimental, with some browser bugs still being worked out. As I was preparing to describe these caveats, I thought, “I wonder if the new Service Worker and Caching APIs could enable caching for inline code.” I’ve been dabbling a bit with service workers over on Brightly Colored to improve the loading time, so this exploration of caching inline CSS is fascinating. In fact, I used to completely inline all the CSS on the site, but switched to a file request because of the way I thought service workers, well… worked. Surprisingly, this implementation doesn’t look too difficult.
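To make the idea concrete — and this is only a rough sketch of the concept, not Scott’s implementation; the critical-css id and stylesheet URL are placeholders — the page can copy its inline styles into the Cache API under the stylesheet’s real URL, so later visits can reference the cached file instead of re-inlining it:

```js
// Take the CSS that was inlined into this page and store it in the Cache API
// as if it were the response for the external stylesheet URL.
if ('caches' in window) {
  const inlined = document.getElementById('critical-css'); // hypothetical <style> id
  if (inlined) {
    caches.open('inline-cache-v1').then((cache) => {
      const response = new Response(inlined.textContent, {
        headers: { 'Content-Type': 'text/css' },
      });
      cache.put('/css/site.css', response); // hypothetical stylesheet URL
    });
  }
}
```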

read more

JavaScript blog.mgechev.com

Guess.js - a toolkit for enabling data-driven user-experiences on the web

Our goal with Guess.js is to minimize your bundle layout configuration, make it data-driven, and much more accurate! In the end, you should lazy load all your routes and Guess.js will figure out which bundles should be combined together and which pre-fetching mechanism should be used! All this in less than 5 minutes of setup time. That’s an excellent goal! But how will that work? During the build process, the GuessPlugin will fetch a report from Google Analytics, build a model used for predictive pre-fetching, and add a small runtime to the main bundle of your application. On route change, the runtime will query the generated model for the pages that are likely to be visited next and pre-fetch the JavaScript bundles associated with them. The tool was announced at Google I/O back in May, but as of today it’s still in alpha.
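If I’m reading the project docs right, the basic setup is a single webpack plugin pointed at your Google Analytics view (the view ID below is a placeholder):

```js
// webpack.config.js
const { GuessPlugin } = require('guess-webpack');

module.exports = {
  // …the rest of your webpack config…
  plugins: [
    // Pulls a report for this Google Analytics view at build time and uses it
    // to decide which route bundles are worth pre-fetching.
    new GuessPlugin({ GA: 'XXXXXXXX' }),
  ],
};
```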

read more

Noa Gruman blog.streamroot.io

Implementing a multi-CDN strategy? Here's everything you need to know.

There are some seriously interesting thoughts shared here on building out a multi-CDN strategy. Having wrestled with how to best use and leverage a CDN for performance, I can see how a multi-CDN implementation would let us choose the right CDN for a given region of the world, and weigh a whole host of other factors like cost, performance, and of course redundancy for when things go wrong. Murphy’s law, right? This summer, the 2018 World Cup set an all-time streaming record – tripling its own 2014 record – with over 22 Tbps measured by Akamai at peak, but the event wasn’t smooth sailing for everyone. In a highly competitive market, and in an age where streaming failures make headlines, redundancy and quality of experience have never been more crucial for content publishers. Drop a comment below if there are other resources out there on this subject that we should check out.

read more

Addy Osmani Medium

A Netflix web performance case study

Hold on to your seat! This is a deep dive on improving time-to-interactive for Netflix.com on the desktop. Addy Osmani writes on the Dev Channel for the Chromium dev team regarding performance tuning of Netflix.com. They were trying to determine if React was truly necessary for the logged-out homepage to function. Even though React’s initial footprint was just 45kB, removing React, several libraries and the corresponding app code from the client-side reduced the total amount of JavaScript by over 200kB, causing an over-50% reduction in Netflix’s time-to-interactivity for the logged-out homepage. There’s more to this story, so dig in. Or, share your comments on their approach to reducing time-to-interactivity and if you might have done things differently.

read more

Go github.com

A high-performance PHP app server, load balancer, and process manager

RoadRunner is an open source (MIT licensed), high-performance PHP application server, load balancer and process manager. It supports running as a service with the ability to extend its functionality on a per-project basis. RoadRunner is written in Go, and can be used to replace the classic Nginx+FPM setup, boasting “much greater performance”. I’d love to see some benchmarks. Better yet, I’d love to see someone use this in production for a bit and write up their experience.

read more

Nikita Prokopov tonsky.me

Software disenchantment (or, struggles with operating at 1% possible performance)

Nikita Prokopov has been programming for 15 years and has become quite frustrated with the industry’s lack of care for efficiency, simplicity, and excellence in software — to the point of depression. Only in software, it’s fine if a program runs at 1% or even 0.01% of the possible performance. Everybody just seems to be ok with it. Nikita cites some examples: …our portable computers are thousands of times more powerful than the ones that brought man to the moon. Yet every other webpage struggles to maintain a smooth 60fps scroll on the latest top-of-the-line MacBook Pro. I can comfortably play games and watch 4K videos but not scroll web pages? How is it ok? Windows 10 takes 30 minutes to update. What could it possibly be doing for that long? That much time is enough to fully format my SSD drive, download a fresh build and install it like 5 times in a row. We put virtual machines inside Linux, and then we put Docker inside virtual machines, simply because nobody was able to clean up the mess that most programs, languages, and their environment produce. We cover shit with blankets just not to deal with it. “Single binary” is still a HUGE selling point for Go, for example. No mess == success. Do you share Nikita’s position? Sure, be frustrated with performance (because we all want to “go faster!”), but do you agree with his points beyond that? If so, read this and consider supporting him on Patreon.

read more

Dominic Tarr github.com

Your web app is bloated

Using Firefox’s memory snapshot tool, Dominic Tarr measured the heap usage of a variety of web apps. The results are… not good. The biggest losers are Google properties followed closely by Slack (which is probably not a surprise). Faring much better were GitHub (7.41MB), StackOverflow (2.55MB), and Wikipedia (1.73MB). What struck me most is that while modern Gmail is one of the worst offenders (158MB), vintage Gmail uses just 0.81MB of memory. The ‘good ole days’ strike back!

read more

David Mark Clements Smashing Magazine

Keeping Node.js fast

David Mark Clements shares tools, techniques, and tips for making high-performance Node.js servers in this super deep post on Smashing Magazine: The surging popularity of Node.js has exposed the need for tooling, techniques and thinking suited to the constraints of server-side JavaScript. When it comes to performance, what works in the browser doesn’t necessarily suit Node.js. So, how do we make sure a Node.js implementation is fast and fit for purpose? Let’s walk through a hands-on example.
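Not from David’s article, but as a trivial example of the measure-first mindset it encourages, here’s a sketch that flags slow request handlers using Node’s built-in perf_hooks module (the 100ms budget is arbitrary):

```js
const http = require('http');
const { performance } = require('perf_hooks');

const server = http.createServer((req, res) => {
  const start = performance.now();
  res.on('finish', () => {
    const ms = performance.now() - start;
    if (ms > 100) {
      // Anything slower than the (arbitrary) 100ms budget gets logged.
      console.warn(`slow request: ${req.method} ${req.url} took ${ms.toFixed(1)}ms`);
    }
  });
  res.end('ok');
});

server.listen(3000);
```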

read more

Balaji Subramaniam kubernetes.io

Kubernetes' CPU Manager

Feature highlights of the beta CPU Manager in Kubernetes, from Balaji Subramaniam (Cloud Software Engineer) and Connor Doyle (Cloud Software Architect) at Intel AI… A single compute node in a Kubernetes cluster can run many pods and some of these pods could be running CPU-intensive workloads. In such a scenario, the pods might contend for the CPU resources available in that compute node. When this contention intensifies, the workload can move to different CPUs depending on whether the pod is throttled and the availability of CPUs at scheduling time. There might also be cases where the workload could be sensitive to context switches. In all the above scenarios, the performance of the workload might be affected. If your workload is sensitive to such scenarios, then CPU Manager can be enabled to provide better performance isolation by allocating exclusive CPUs for your workload.

read more
