
Responsive Images & Remote Debugging

I got a new phone last month, because the screen on my old one broke. My new phone has a 5-ish inch 1080p screen. That means it has an insane pixel density! On my podcast, I occasionally talk about some new program that increases efficiency, but doesn't change standards and fits within existing ecosystems. My favorite is MozJPEG. It's a program that encodes JPEG images much better than (almost) all others. Since I keep high resolution versions of almost all the images on my blog (and share them), I experimented.

Most of the images on my blog are 1920×1200 or 1080p (1920×1080) game screenshots. In blog posts, I downscale them to 960 pixels wide, and I encode as much as I can into 80 KB, or up to 80% quality (up to 0.138 bytes per pixel). I played around with the full resolution images. By doubling my size limit to 160 KB, the images look much better, despite the quadrupled image resolution and only doubled filesize (up to 0.0694 bytes per pixel). When I downscale these new large images to the dimensions of the existing small images, the new ones usually look better.
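Here's the arithmetic behind those bytes per pixel figures, assuming the original is 1920×1200 (so the 960 wide downscale is 960×600) and counting a KB as 1000 bytes:

// Byte budget divided by pixel count gives bytes per pixel.
const smallBudget=80*1000/(960*600);    // ≈0.1389 bytes per pixel
const largeBudget=160*1000/(1920*1200); // ≈0.0694 bytes per pixel
console.log(smallBudget,largeBudget);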

I can't find any explanation of why the quality is much better on the larger images despite the higher compression ratio. My working theory is that downscaling creates a finer detailed (higher frequency) image than the original. JPEG works by breaking the image into 8×8 pixel (64 pixel) blocks, and applying a discrete cosine transform to determine the individual frequencies (detail levels) within each block. JPEG then chooses which frequencies are stored, starting with the low frequencies, because those are more perceptible. Fewer frequency bands compress better, which makes the file smaller. The same image at a higher resolution has lower frequencies on average. Downscaling the image after JPEG decoding recreates some of those higher frequencies. Am I totally off here?
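To make that less hand-wavy, here's a rough sketch of the 8×8 discrete cosine transform that JPEG applies to each block (nothing my site actually runs, just an illustration). The coefficient at [0][0] is the block's average, and coefficients further from it represent progressively finer detail, which is what gets thrown away first:

// Sketch of the 2D DCT-II that JPEG runs on each 8×8 block of samples.
// block is an 8×8 array of pixel values; the result is an 8×8 array of
// frequency coefficients, with the lowest frequency at [0][0].
function dct8x8(block){
    const size=8;
    const coefficients=[];
    for(let u=0;u<size;u++){
        coefficients[u]=[];
        for(let v=0;v<size;v++){
            let sum=0;
            for(let x=0;x<size;x++){
                for(let y=0;y<size;y++){
                    sum+=block[x][y]
                        *Math.cos(((2*x+1)*u*Math.PI)/(2*size))
                        *Math.cos(((2*y+1)*v*Math.PI)/(2*size));
                }
            }
            const cu=(0===u)?Math.SQRT1_2:1;
            const cv=(0===v)?Math.SQRT1_2:1;
            coefficients[u][v]=0.25*cu*cv*sum;
        }
    }
    return coefficients;
}
// A perfectly flat block has all of its energy in [0][0]; everything else is ~0.
const flat=Array.from({length:8},function(){return new Array(8).fill(128);});
console.log(dct8x8(flat)[0][0]); // 1024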

I decided to write some JavaScript to check if a higher quality image is available, and only show it if the device can display the additional resolution (i.e. if the larger download is worth it). Normally, the <img>'s srcset attribute, or a <picture> element, is used for this. However, I write my posts in Markdown, which doesn't cleanly support either of those, and I'm not going to complicate my Markdown to work around it. "Official" activity on Markdown (or is it CommonMark?) seems to have stalled over the past few years, and the maintainers aren't adding obvious things that would be useful, like tables. Last I checked, they were chasing weird corner cases involving insane nesting, and that drove me to forks.

I also only want this to happen on individual post pages, not on category pages (which have multiple images from several articles), to keep page weight down. My category pages' images have a gradient and text overlay, so I'm comfortable with those pages getting away with the lower quality images.

  1. I needed a naming convention for the high resolution images, and a regular expression to convert the existing image URL. Inserting "×2" between the image name and the extension should do it. Catch that? It's not an "x", but a "×": a multiplication sign. This expression, /^(.+)(\..+)$/u, captures everything up to the last period in one group, and said period plus everything after it in another. (Having multiple extensions on an image is stupid, like "image.gif.png", and I'm not going to do that.) I take those parts, and put "×2" between them.

  2. I wanted to not download the image right away, but still wanted to know if the image existed. The HTTP HEAD method is designed for this scenario. The server will respond just like an HTTP GET, except without the resource itself.

  3. Sounds good, but how does an <img> know about that high resolution version? Every HTML element can have any number of data- attributes attached. The image needs to store the alternative (high resolution) URL. During testing, I realized that it needs the low resolution image width, too.

  4. To test whether the image quality would actually be improved, I needed to understand some responsive design APIs. While an image might be n pixels wide, the browser can display that image over however many pixels it wants.

  5. Multiplying an <img>'s clientWidth by window.devicePixelRatio counts how many real pixels the image is displayed over. Divide that by the low resolution image's width to get a ratio. If that ratio is at least 1.2 (meaning that there's at least 1.2 screen pixels for every pixel in the low resolution image), use the higher resolution image instead. If it drops below 1.2 for some reason, show the low resolution one again. My server sends aggressive caching headers for images, so this should not result in a network request.

  6. I run this test when the HTTP HEAD completes, and on a resize event, so it works when things like window sizes or zoom levels change. On my 1080p phone, it shows low resolution images in portrait orientation, but high resolution in landscape. I can compare both versions of images this way.

For reference, I have this code:

const forEach=Array.prototype.forEach;
const imgSrcPattern=/^(.+)(\..+)$/u;
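// Swap between the low and high resolution versions of an image whenever the
// extra resolution becomes (or stops being) noticeable on the current display.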
function swapImageIfNoticeable(img){
    const currentSrc=img.src;
    const lowResWidth=Number(img.dataset.lowResWidth);
    // lowResWidth === img.naturalWidth = showing low resolution image
    if(lowResWidth===img.naturalWidth && window.devicePixelRatio*img.clientWidth/lowResWidth>=1.2){
        img.src=img.dataset.altSrc;
        img.dataset.altSrc=currentSrc;
    }
    // lowResWidth !== img.naturalWidth = showing high resolution image
    else if(lowResWidth!==img.naturalWidth && window.devicePixelRatio*img.clientWidth/lowResWidth<1.2){
        img.src=img.dataset.altSrc;
        img.dataset.altSrc=currentSrc;
    }
}
function setEvents(){
    // images on category pages are "figure>img", article images are "p>img"
    forEach.call(document.querySelectorAll("p>img"),function(img){
        const parts=imgSrcPattern.exec(img.src);
        // exec() returns null when the src doesn't match (no extension to split on)
        if(null===parts){
            return;
        }
        const url=parts[1]+"×2"+parts[2];
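        // Ask the server whether the ×2 version exists, without downloading it.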
        const req=new XMLHttpRequest();
        req.open("HEAD",url);
        req.addEventListener("load",function(){
            if(200===req.status){
                // assuming low resolution image is already loaded
                img.dataset.lowResWidth=img.naturalWidth;
                img.dataset.altSrc=encodeURI(url);
                swapImageIfNoticeable(img);
            }
        });
        req.send();
    });
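    // Re-check every image when the window resizes, which also covers zoom level changes.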
    window.addEventListener("resize",function(){
        forEach.call(document.querySelectorAll("p>img"),swapImageIfNoticeable);
    });
}
if(document.readyState==="complete"||document.readyState==="interactive"){
    setEvents();
}else{
    document.addEventListener("DOMContentLoaded",setEvents);
}

Testing this on a phone might sound tricky, but it's not. You will need to download the Android SDK, install the Google USB Driver in the SDK tools, and plug an Android phone into a PC. Both Firefox and Chrome have remote debuggers built right into the browser. It's just like debugging a browser tab, except that browser tab lives on the phone. Inspect and set breakpoints to your heart's content! I like Chrome better, because it has a live view right inside the debugger, and you can do browser things in it, and watch as your actual phone looks like it's hacked.

Screenshot of Chrome's remote debugger

I had gotten this going on my old phone, but I forgot how I did it. Coincidentally, the day after I got it working on my new phone, I used it at work.

I've debugged this, and it works. Now for a real world test!

Any image with text looks much clearer; for instance, the text in my Legacy of the Void and Rise of Nations images looks much better. The Fallout 3 large image has banding in the sky, so I think it looks worse, but everything else is great. The Unigine Heaven Benchmark post is a mixed bag. The large image has banding in the smoke, but the dragon's red spikes look sharper, and the wings are more detailed (more veins).

All in all, I consider this a win, and I don't have to write anything extra in my posts to make it happen.
