This feature seems to be involved. In order to lay out the HTML page you need to know the image sizes (unless they are explicit), and for that you still need to download the image header, and headers are of different sizes for different image formats. And some images change when you load them a second time.
So it seems the only way to implement that correctly is to open a connection to load an image, stall it after receiving the content-type and the relevant part of the image header, and hope that the server won't close the hanging connection(?)
PS: It seems that Chrome will download the first 2KB if byte ranges are supported. If there are no dimensions in the first 2KB, or byte ranges are not supported, the full image will be downloaded non-lazily: https://docs.google.com/document/d/1691W7yFDI1FJv69N2MEtaSzp...
1. If the full image is present and fresh in the cache, then use that.
2. Otherwise, if the server supports range requests, and the image dimensions can be decoded from the first 2KB of the image, then generate and show an image placeholder with the same dimensions.
3. Otherwise, fetch the entire full image from the server as usual.
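The three-step plan above can be condensed into a small decision function (a sketch only; the function and return-value names are mine, not Chrome's):

```python
def choose_load_strategy(in_cache, supports_ranges, dims_from_first_2kb):
    """Decide how to handle an out-of-viewport image, per the three steps above.

    dims_from_first_2kb: a (width, height) tuple if the dimensions could be
    decoded from the first 2KB of the response, else None.
    """
    if in_cache:                       # step 1: fresh cache hit wins
        return "use-cached"
    if supports_ranges and dims_from_first_2kb:
        return "show-placeholder"      # step 2: same-size placeholder
    return "full-fetch"                # step 3: fall back to a normal load
```

The interesting consequence is step 3: lazy loading silently degrades to a normal eager load whenever the server lacks range support or the header isn't decodable early.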
So, it'll only lazy-load images if they're in the cache or certain criteria are met. Also, it seems to be opt-in via an attribute, so the implementer of the website can avoid it if they're worried. But overall, there won't be any reflow issues.
They note that Android battery saver may enable this by default unless the HTML specifically says otherwise. (Search for “or unset”.) Once they have enough data, perhaps in a year or two, it is likely they’ll extend that to all platforms for memory and battery life savings.
- If the image fits in 2KB (icons / UI elements), then they can just do a full decode and not have to lazy-load these UI items as the page scrolls.
- It allows them to load low resolution versions of progressive image formats rather than stick up a generic placeholder (marked as a future improvement right now).
For your first point, reading the specified dimensions from the page would be sufficient to tell the browser whether it's a small image to skip lazy loading.
the spec says "[…] the image dimensions can be decoded from the first 2KB of the image".
It doesn't say anything about when the dimensions are known and I would assume it's using those in that case.
However, loading 2KB of an image is enough for most image formats to determine the dimensions even if they aren't specified in the hosting document (because most image formats contain a header specifying dimensions).
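PNG is the easy case: the dimensions sit at a fixed offset in the IHDR chunk, so the first 24 bytes suffice. A minimal sketch (PNG only; JPEG keeps its dimensions in a SOFn marker that can appear anywhere in the first few KB, which is presumably part of why Chrome asks for 2KB rather than a few dozen bytes):

```python
import struct

PNG_SIGNATURE = b"\x89PNG\r\n\x1a\n"

def png_dimensions(data):
    """Return (width, height) from a PNG header, or None if not decodable.

    Layout: 8-byte signature, 4-byte chunk length, 4-byte chunk type "IHDR",
    then big-endian 4-byte width and 4-byte height at offsets 16 and 20.
    """
    if len(data) < 24 or not data.startswith(PNG_SIGNATURE):
        return None
    if data[12:16] != b"IHDR":
        return None
    return struct.unpack(">II", data[16:24])
```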
Well, if you want to implement it optimally then you need to be doing srcsets, and the easiest way to do that is to put the img width and height in the tag. Then run mod_pagespeed to tidy up the HTML; it then creates the srcset images. It will not be a solution to everyone's taste, but it abstracts the srcset bit out to the server so you can have clean markup. If you are caching your pages at some level then it isn't going to be looking up image dimensions too often. Plus, looking up image dimensions server side is a simple ask: no network needed.
I am going with the explicit image dimensions, srcsets and no lazy loading, which happens to be a feature of mod_pagespeed. With figure elements or picture elements for the content images, populated with the srcsets, I see this new browser side lazyload as the missing feature needed to preserve document structure, not have any javascript cludges and serve images in a way that respects data saving and viewport size.
Time to add the new 'lazy' attribute. Firefox and Safari users might not benefit yet, but there's no need to wait.
Not really... you can just have the page content bounce around a bit as you scroll down, in the rare cases where image size is dictating layout :/ We see that pretty frequently with display ad fill pushing things around on post pages (and in some cases ads loading when you scroll).
No need to rely on IntersectionObserver() or even for JavaScript to be enabled -- Adtech vendors (like Google) can now detect scroll position server-side!
* "Speed up the load of above-the-fold content, since there will be less competition for network resources during the initial page load"
Why don't they just set a low priority for offscreen images and resources? Isn't this the entire premise of HTTP/2, that multiplexing with priorities and flow control would load the important data first? Do servers not implement the spec correctly? So this reason is BS, as the data saved is immaterial.
* "Reduce memory usage."
Even commodity phones come with several gigabytes of RAM; the memory may have to be used anyway if the user scrolls, and with infinite scroll or a massive page something will need to unload data anyway. So this reason is marginal at best.
* "Save network data by avoiding downloading any deferred content that the user doesn't end up scrolling to"
Most phone plans are unlimited or have data capable of watching movies. On Google's own Fi plan "less than 1% of individual Fi users ... use above 15 GB". So this is another BS reason as the data saved is immaterial.
So why are they actually pushing this?
Under "privacy considerations": "so slightly more information about the user's scrolling position on the embedding page is exposed" and "a deferred cross-origin image gets an additional piece of information about the user's scrolling position".
This is not hard to figure out; they are barely even trying to hide it. Same thing as pushing HTTP/2, which I contend was at least partly to track people via the socket IP:port (for instance by keeping a single connection to google-analytics open that all domains' data goes through, and by boosting the connection keep-alive from a few minutes to around half an hour, which they did).
Wow, how can this conspiracy-theory-like nonsense be voted to the top of the comments? I thought Hacker News was a rational place.
The reality is that lazy loading images really does help webpage performance, especially on mobile. You cannot properly implement it in the browser without additional information from the page developer - because you'll never know which images are so important that they will always need to be loaded, and which ones can be lazy loaded once they're almost in view.
We're talking about the addition of one attribute to the <img> tag here. There's no conspiracy, and there's no ulterior motive.
No, we're talking about every website having to make this work, because the default is not to require loading the resources, or else they'll be blamed for not complying with a spec forced on them by Google through their browser monopoly.
Google should need some really good evidence to support this.
What points do you disagree with, and why? Where are the metrics? If it's better performing than just priorities and flow control then by how much, and how do you justify breaking sites to achieve that margin?
Doesn't matter. The actual discussion on blink-dev is pretty clear about the plan being for lack of that attribute to mean "use some heuristics of Google's choice to decide whether to lazy-load".
Well, they already decided to fuck the audio API: you have to have a direct user gesture to play any audio.
It wasn't part of the spec, and Mozilla is doing it via a page permission. Just search for Chrome autoplay audio and you'll see plenty of issues about how they roll out features.
Google has shown this is exactly what they plan to do with their monopoly before, with Chrome-specific features that require developers to be aware of them.
Do you like the fact that Google can now create standards?
They have an idea, put it in Chrome, and then ask Mozilla to catch up and implement the missing tags.
You seem to be missing a few things:
- data usage is correlated with battery usage, which is important on mobile
- the HTTP/2 prioritization point is only half valid. Two big assumptions probably don't hold in practice: that implementations actually implement priority correctly all the way through the stack (perhaps some servers have a suboptimal implementation?), and that a bazillion low-priority image requests won't cause slowness (I would expect there to be some limit on how many simultaneous streams HTTP/2 implementations allow). And lastly, HTTP/2 runs over TCP, so even if the prioritization is good, changing priorities as the user scrolls means the bytes currently in flight are likely less important and are holding up the more important response; probably not a big deal on a good network, but with any amount of packet loss it will introduce lots of extra delay.
Their own customers use less than 15 GB per month; how much battery could they possibly save on the radio? A single video will use more radio time than days of image downloads in the browser. Furthermore, Google themselves didn't mention battery savings, just fewer bytes sent.
And what delicious irony if they couldn't solve this with HTTP/2 because of servers not implementing the spec correctly or having weird undocumented quirks (their reason for not enabling pipelining despite it working just fine).
My previous phone - an HTC One M7 - had some battery issues when it was pretty new. I limited myself to something like 2 or 3 GB per month on cellular data (because that was my plan) and yet I still had to charge my phone at work some days. (It's not about battery over the whole month - it's about the worst case on any given day.)
I learned to debug by looking at the system data usage monitors - the standard power usage per app wasn't helpful. Turned out the problem was just a data hungry photo app we were developing at the time. We made it nicer on data usage before we launched.
And after all that, chrome was usually in my top 3 apps for data usage. Probably still is. So they absolutely should be optimizing it.
Granted, that was several years ago, but in some parts of the world that phone would probably still be better than what the average consumer uses today.
Can you explain what you mean? Chrome does not have any customers.
> And what delicious irony if they couldn't solve this with HTTP/2 because of servers not implementing the spec correctly or having weird undocumented quirks (their reason for not enabling pipelining despite it working just fine).
How do you envision the browser solving this when the server is not H2-capable?
I agree with you, but the way I see GP's point is that people are not running out of data because of non-lazy-loaded images. If they were, they'd be complaining about how little they can browse, or buying bigger subscriptions.
It is also my experience that unlimited bundles (which aren't unlimited anyway, they're just limited to an amount that virtually nobody hits which is quite a difference from "we can make them use as much data as we want") are rare rather than the norm.
> Most phone plans are unlimited or have data capable of watching movies. On Google's own Fi plan "less than 1% of individual Fi users ... use above 15 GB".
I use Google Fi and I use less than one gigabyte a month: not because I'd like to, but because I don't want to pay for mobile data when I don't have to.
The downside of this is that it assumes that I am always online, and I can never know if a page has finished loading. Say I load an article in a background tab to read later, then go somewhere without WiFi like, say, an airplane. I read the first page, hit the space bar, and see gray boxes. Now I need to either scroll all the way through any article I want to read later (unless "infinite scroll" happens), or save it as a web archive (if that even still works).
Yeah and they are a pain in the ass. I hate this anti-feature and now it's becoming the standard. My only hope is that I will be able to turn it off with about:config flag or maybe set the proximity to the bottom of the screen higher, which I currently can't do because currently it's all custom JavaScript code.
And mobile browsers randomly purge and reload pages when they're low on memory, and news sites do lazy image loading with miscellaneous Javascript. It all sucks, and it's too bad that Google wants to (i.e. will) "make it official."
You think this sucks? So instead of optimizing for the 99% case where I go to a site and want to see the images on the page as fast as possible and defer loading what I can't see initially, you instead want to prioritize "Say I load an article in a background tab to read later, then go somewhere without WiFi like, say, an airplane."???
I agree, these websites already cause a new pattern of “quickly scroll to the bottom to make sure everything’s loaded”
I basically have to do this now before I board a train.
It’s good they’re implementing this, but it will mislead people into thinking the whole website is ready to use. It’s a shame an option couldn’t be “load the rest after page load”.
There is a Chrome optimization that already exists and has a similar downside: taking processing away from unfocused tabs. Example: load Google Play Music. Switch to another tab while it loads. Switch back to the music tab some time later; only now does it actually start loading. Quite annoying...
I've had the idea of doing a similar thing in application code for XHRs. For example, if you have a React app and you structure your code so that components declare their own data dependencies, then you can have a framework-level solution where you measure component positions on mount and give priority to data fetching for components above the fold.
I never actually implemented it, though, just seemed like a nice idea. Anyone know if existing data fetching frameworks (e.g. Apollo Client) can do anything like that?
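I don't know of a framework that does this out of the box, but the scheduling core is tiny, framework aside (a Python sketch of the idea; `fetch_order` and the (name, top_px) tuples are my own invention, not an Apollo API):

```python
def fetch_order(components, viewport_height):
    """Order data fetches so above-the-fold components go first.

    components: list of (name, top_px) pairs measured on mount.
    Above-the-fold components come first, in document order; below-the-fold
    components follow, nearest to the viewport first.
    """
    return [name for name, top in
            sorted(components, key=lambda c: (c[1] >= viewport_height, c[1]))]
```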
> do we yet unload images (from memory) after a user scrolls far past them?
Last time I looked, the answer was no for images but yes for background images. I implemented a lazy-loader which did the latter for a pretty significant measured improvement on gallery pages.
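The core of such an unloader is just a visibility window over item positions (a sketch with assumed names; a real implementation would measure positions in the DOM and swap the CSS background-image in and out based on this window):

```python
def items_to_keep(item_tops, item_height, scroll_top, viewport_height, margin):
    """Indices of items whose background images should stay loaded.

    Keeps anything overlapping the viewport plus a margin on each side,
    so images slightly off-screen survive small scrolls without a reload.
    """
    lo = scroll_top - margin
    hi = scroll_top + viewport_height + margin
    return [i for i, top in enumerate(item_tops)
            if top + item_height >= lo and top <= hi]
```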
Considering the privacy issues (see bsdetector's comment [0]), I'm not sure they'd willingly add this. They may end up not having much choice because of the "positive" aspects of the feature being too much of an advantage in chrome, but this feature doesn't seem to fit their philosophy.
[0]: https://news.ycombinator.com/item?id=19602877
> Under "privacy considerations": "so slightly more information about the user's scrolling position on the embedding page is exposed" and "a deferred cross-origin image gets an additional piece of information about the user's scrolling position".
But the last comment raises concerns about privacy implications it may have. So, I think that right now it is unclear whether Firefox will eventually implement it and in what form.
That's not this. Bug 1535749 is about lazy-loading images on the new tab page, not about implementing native support for the "loading" attribute on img tags.
> On Android Chrome with Data Saver turned on, elements with loading="auto" or unset will also be lazily loaded if Chrome determines them to be good candidates for lazy loading (according to heuristics).
I don't agree with this design decision to make the default ("or unset...") allow for lazy loading, even if limited to Data Saver enabled phones. Doesn't this mean every site on the planet that deems Android >= 7.0 web traffic important and makes use of pixel-based or iframe-enclosed tracking will have to go through testing and potential modification?
I think that this could be a boon to pixel tracking. From my understanding, a site could embed multiple tracking pixels on a page to give a good approximation of how far the user scrolls and how long it takes them to scroll down the page, all without JavaScript.
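Server-side, the inference is just a join between the pixel hit log and each pixel's known position on the page (a sketch; the names and the assumption that each pixel URL encodes its page offset are mine):

```python
def scroll_trace(pixel_hits, pixel_offsets):
    """Reconstruct a rough scroll timeline without any JavaScript.

    pixel_hits:    {pixel_id: request_timestamp} from the server access log.
    pixel_offsets: {pixel_id: pixels_from_top} as embedded in the page.
    Returns (timestamp, offset) pairs in the order the pixels were fetched.
    """
    return sorted((ts, pixel_offsets[pid])
                  for pid, ts in pixel_hits.items()
                  if pid in pixel_offsets)
```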
I would be curious whether this would also apply to email clients that render using Chrome.
They address this in the linked document, and note that if you’re depending on offscreen elements to load content from your server, that will no longer necessarily work.
The new attribute’s value names look poorly chosen: “lazy” and “eager”. Why not use existing vocabulary like “deferred” for lazy, and a new, equally professional word for eager?
To me, lazy loading is a well known term, maybe it's regional or language dependent? I'm from the Netherlands, heard the word in school and online, in multiple contexts but mainly C#, Java, and web.