This post summarizes how we analyzed publicly available web transparency data and ad hoc A/B testing to understand the adoption and performance characteristics of native image lazy-loading. What we found is that lazy-loading can be an amazingly effective tool for reducing unneeded image bytes, but overuse can negatively affect performance. Concretely, our analysis shows that more eagerly loading images within the initial viewport—while liberally lazy-loading the rest—can give us the best of both worlds: fewer bytes loaded and improved Core Web Vitals.
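The takeaway is pretty easy to put into practice: mark the handful of images you expect to land in the initial viewport as eager, and let everything below the fold lazy-load. Here's a rough sketch of the pattern (my own illustration, not code from their analysis; the `renderImages` helper and the eager count are assumptions):

```ts
// Hypothetical helper (my illustration, not code from the analysis): eagerly
// load the images likely to sit in the initial viewport, lazy-load the rest.
const EAGER_COUNT = 2; // assumption: roughly how many images are above the fold

function renderImages(urls: string[]): string {
  return urls
    .map((url, index) => {
      // First few images: let the browser fetch them right away.
      // Everything else: defer with native lazy-loading.
      const loading = index < EAGER_COUNT ? "eager" : "lazy";
      return `<img src="${url}" loading="${loading}" alt="" />`;
    })
    .join("\n");
}

// Usage: renderImages(["/hero.jpg", "/gallery-1.jpg", "/gallery-2.jpg"]);
```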
If fixing the introverted checkout button caused a three-quarters of a percentage point increase in online orders, it would increase Darden’s revenue by $8.1 million annually.
All of these numbers are guesses. They’re probably wrong. But even if they were a tenth of what I estimated, it would still be $812k. See what I mean about small changes making a big difference at this scale?
Jason documents his struggles trying to order from Olive Garden, and notes how a simple fix could provide them a significant boost in revenue.
Quite the collection of accessibility tools here from Nic Chan. A bunch of these were new to me and look super interesting.
Also gave me some ideas for a few things we could start baking into WebPageTest.
Surprisingly, the correlations for each CWV are weak to medium strength and for FID it’s actually a negative correlation, meaning that as the Lighthouse score goes up, real-user FID performance actually tends to go down a bit.
This is the kind of analysis I was hoping to see after Pat added CrUX data to WebPageTest.
Most of this lines up with what I’d expect. Cumulative Layout Shift is measured very differently synthetically versus in the CrUX data (particularly before the new windowing approach) and First Input Delay has always seemed to have a very weak connection to Total Blocking Time in my experience. (First Input Delay itself has plenty of limitations, and I’m eager to see it supplanted by something a bit more useful in the future.)
I think many of us have cautioned against leaning too hard on optimizing for your Lighthouse scores, and it’s nice to have some evidence as to why. Lighthouse is a great tool, but it works better as a “here’s a list of things you can try to improve” than as a goal in and of itself.
Creating a modal that could do all of this required thoughtful consideration and hard work. Under the hood, the modal component is composed of more than 10 sub-components. But that complexity is not passed on to our client.
A good reminder that I really, really need to get with it and spend a bit more time with web components.
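The appeal, for anyone who (like me) hasn’t spent much time with them, is that all of that internal composition can hide behind a single tag. A bare-bones sketch of the idea (mine, not the modal component described above):

```ts
// Bare-bones custom element (my sketch, not the modal described above): the
// page only ever deals with <simple-modal>, however much lives inside it.
class SimpleModal extends HTMLElement {
  constructor() {
    super();
    const shadow = this.attachShadow({ mode: "open" });
    shadow.innerHTML = `
      <div role="dialog" aria-modal="true">
        <button type="button" data-close>Close</button>
        <slot></slot>
      </div>
    `;
    shadow
      .querySelector("[data-close]")
      ?.addEventListener("click", () => this.remove());
  }
}

customElements.define("simple-modal", SimpleModal);

// Usage in markup: <simple-modal><p>Hello there!</p></simple-modal>
```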
Very good overview of the preload issue from Shubhie. This bit in particular is important to take to heart, I think:
Preload can be avoided in many cases, with alternative strategies such as inlining of critical CSS and inlining font-css.
Most of the time, preload gets used as a (not particularly efficient) band-aid for an underlying issue that is probably better solved in other ways.
Also, this is probably a personal hang-up, but whenever someone from the Chrome team shares a Google Doc full of interesting research and ideas (which is super often) I always feel a little anxiety. I just don’t trust these docs to stick around as long as a blog post.
Adding a ‘show password’ option to GOV.UK Accounts seemed like a straightforward task, but the more we looked into it the more complicated and interesting it became. This is how we did it and some of the challenges we faced.
More fodder for my firm belief that the closer you look at anything, the more interesting it becomes.
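For a sense of why it *seems* straightforward: the naive version really is just a few lines (a sketch of my own, not GOV.UK’s code), and the post digs into all the details a version like this glosses over.

```ts
// Naive "show password" toggle (my sketch, not GOV.UK's implementation).
function togglePassword(
  input: HTMLInputElement,
  button: HTMLButtonElement
): void {
  const show = input.type === "password";
  input.type = show ? "text" : "password";
  button.textContent = show ? "Hide password" : "Show password";
  button.setAttribute("aria-pressed", String(show));
}
```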
Apple’s iOS browser (Safari) and engine (WebKit) are uniquely under-powered. Consistent delays in delivery of important features ensure the web can never be a credible alternative to its proprietary tools and App Store.
Heckuva leading assertion from Alex, but he brings some serious data to back it up, including some pretty compelling results from the Web Platform Tests.
There’s a lot of criticism levied at Chrome and how they move through the standards process (or don’t). Some of that criticism is fair, some of it isn’t.
But it’s pretty clear, I think, that we have a mismatch of resources creating an imbalance. On the one hand, we have Google funding the heck out of their web-focused efforts. On the other hand, we have Apple, which just never seems willing to invest much in it.
The result isn’t particularly healthy for the web or for anyone who uses it. Alex’s point here rings true:
It’s perverse that users and developers everywhere pay a subsidy for Apple’s under-funding of Safari/WebKit development.
The 95th percentile of RUM data is a great place to assess the slowest experiences. In this case, we see that the streaming Service Worker confers a 40% and 51% improvement in FCP and LCP, respectively, over no Service Worker at all. Compared to the “standard” Service Worker, we see a reduction in FCP and LCP by 19% and 43%, respectively.
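If “streaming Service Worker” is a new term, the rough idea is to respond with a cached shell immediately and stream the rest of the page from the network into the same response. A simplified sketch (mine, with an assumed cache name and partial URL scheme, not the worker that produced those numbers):

```ts
// Simplified streaming Service Worker (my sketch, with an assumed cache name
// and "/partial" URL scheme; not the worker behind the numbers above).
declare const self: ServiceWorkerGlobalScope;

self.addEventListener("fetch", (event: FetchEvent) => {
  if (event.request.mode !== "navigate") return;

  event.respondWith(
    (async () => {
      const cache = await caches.open("shell-v1");
      const header = await cache.match("/header.html"); // assumed pre-cached shell
      // Assumption: the server can return just the page body at this URL.
      const body = await fetch(`/partial${new URL(event.request.url).pathname}`);

      const stream = new ReadableStream<Uint8Array>({
        async start(controller) {
          // Flush the cached header first so rendering can start, then pipe
          // the network response through as its chunks arrive.
          for (const part of [header, body]) {
            if (!part?.body) continue;
            const reader = part.body.getReader();
            while (true) {
              const { done, value } = await reader.read();
              if (done) break;
              controller.enqueue(value);
            }
          }
          controller.close();
        },
      });

      return new Response(stream, {
        headers: { "Content-Type": "text/html" },
      });
    })()
  );
});
```

Because the cached header can hit the screen before the network has even responded, it makes sense that FCP in particular would benefit this much.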
What a fantastic article from Jess Peck on Cumulative Layout Shift. It’s approachable for folks new to the metric, detailed and in-depth, and wildly entertaining (more entertaining than a post this informative has any right to be).