I find such a test strange and irrelevant, though.
It works well because, as stated, the kernel does a good job rejecting frequencies prone to aliasing. So, in particular, you don't get any real quality loss from doing 2x scale changes as opposed to bigger steps (and thus considerably larger FIR support).
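To put rough numbers on the FIR support point (a back-of-the-envelope sketch, tap counts only, not a full resampler): for a Lanczos-a low-pass stretched for an s-fold downsize, the kernel covers about 2*a*s input samples per output sample, so cascading 2x passes keeps each pass cheap.

```python
# Tap count per output sample for a Lanczos-a kernel stretched for an
# s-fold downsize: the support [-a, a] scales to [-a*s, a*s].
a = 3
for s in (2, 4, 8):
    print(f"{s}x in one step: ~{2 * a * s} taps")
# 8x in one step needs ~48 taps, versus three cascaded 2x passes at ~12 taps
# each, with every pass still rejecting the frequencies that would alias.
```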
I have some Python notebooks with some of these results; I haven't gotten around to publishing them yet.
The (2D) Lanczos downsizing can be done with only four samples using the bilinear sampling tricks that you mention, and I avoided expensive trigonometric functions, divisions, and the singularity at 0 by using an even 8th-order polynomial approximation. I would be curious to see the results using this kernel, but the Lanczos is so far the best that I've tried.
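A rough sketch of the polynomial idea (illustrative, not the production code): fit an even polynomial in x^2 to the Lanczos-2 kernel, so evaluating it needs no trig, no division, and no special case at 0. Plain least squares is shown here; a minimax fit would give a tighter error bound.

```python
import numpy as np

def lanczos2(x):
    # Reference Lanczos-2 kernel: sinc(x) * sinc(x/2), truncated to [-2, 2].
    # np.sinc is the normalized sinc and already handles x = 0.
    return np.where(np.abs(x) < 2, np.sinc(x) * np.sinc(x / 2), 0.0)

# Fit an 8th-order even polynomial: degree 4 in t = x^2.
xs = np.linspace(0.0, 2.0, 1024)
coeffs = np.polyfit(xs * xs, lanczos2(xs), 4)

def lanczos2_poly(x):
    # Trig-free, division-free evaluation, valid for |x| <= 2.
    return np.polyval(coeffs, x * x)

print(np.abs(lanczos2(xs) - lanczos2_poly(xs)).max())  # worst-case fit error
```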
So, basically, the reason this works better than other visually similar filters is that it happens to satisfy the Nyquist ISI criterion [1].
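Concretely, the criterion here is the interpolation condition: the kernel is 1 at x = 0 and 0 at every other integer, so existing samples pass through exactly and neighbors don't interfere at sample positions. A quick sanity check (sketch, using Lanczos-3):

```python
import numpy as np

def lanczos(x, a=3):
    # Lanczos-a: sinc(x) * sinc(x/a), truncated to |x| < a.
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

# One at zero, zero at every other integer -> no inter-sample interference.
print(lanczos(np.arange(-3.0, 4.0)))  # [0. 0. 0. 1. 0. 0. 0.]
```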
Loading a progressive JPEG means you still unconditionally load the entire file; you're just able to show a low-detail version before it has fully loaded. The last time I saw a progressive JPEG actually take time to load was when I had dialup.
2. The OP's JPEG-Clear proposal [1] also loads the entire file no matter what. It's literally just a reinvention of progressive JPEGs, presented as something novel.
This is basically a slightly different Gaussian kernel, and the "incredible results" of a small image becoming a higher-resolution blurry image are completely normal.
Also, you don't want negative lobes in image kernels no matter how theoretically ideal they are, because they will give you ringing artifacts.
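The ringing is easy to demonstrate: convolve a hard edge with a kernel that has negative lobes, and the output undershoots below the dark value and overshoots above the bright one. A minimal sketch with Lanczos-3 taps:

```python
import numpy as np

def lanczos(x, a=3):
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

taps = lanczos(np.arange(-2.5, 3.0))  # kernel sampled between grid points
taps /= taps.sum()                    # normalize to unit DC gain

step = np.r_[np.zeros(16), np.ones(16)]    # a hard 0 -> 1 edge
out = np.convolve(step, taps, mode="same")
print(out.min(), out.max())  # < 0 and > 1: undershoot and overshoot (ringing)
```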
If you work with image kernels / reconstruction filters long enough, you will eventually learn that 90% of the time you want a Gaussian kernel.
Strongly disagree, and my commercial software is known for its high image quality and antialiasing. Gaussian is way too blurry unless you're rendering for film.
Essentially it uses a truncated isotropic (non-separable) Gaussian defined by exp(-2.0 * (x*x + y*y)) with a 2x2 pixel support [3], which is slightly soft and completely avoids ringing.
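For reference, here is roughly that filter transcribed into Python from the spec's C listing [3] (the 2/width remap is what puts the quoted exponent on a 2x2 support):

```python
import math

def gaussian_filter(x, y, xwidth=2.0, ywidth=2.0):
    # Truncated isotropic Gaussian pixel filter: remap the support to
    # [-1, 1]^2, then weight by exp(-2 (x^2 + y^2)). Note it is clipped
    # at the support edge rather than falling smoothly to zero.
    x *= 2.0 / xwidth
    y *= 2.0 / ywidth
    return math.exp(-2.0 * (x * x + y * y))
```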
Gaussian also plays very nicely with filtered importance sampling [4] since it has no negative lobes.
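A sketch of why that matters: with filtered importance sampling you draw the sample position from the filter instead of weighting by it, which requires the filter to be a valid (non-negative) density. exp(-2(x^2 + y^2)) is just an unnormalized Gaussian with sigma = 0.5, so sampling it is trivial:

```python
import random

def sample_filter_offset(width=2.0):
    # exp(-2 x^2) = exp(-x^2 / (2 sigma^2)) with sigma = 0.5, so draw from
    # that Gaussian and reject the truncated tails outside the support.
    sigma = 0.5
    while True:
        x, y = random.gauss(0.0, sigma), random.gauss(0.0, sigma)
        if abs(x) <= width / 2 and abs(y) <= width / 2:
            return x, y  # jittered sample offset within the pixel filter
```

A filter with negative lobes (Mitchell, Lanczos) has no such interpretation as a density, which is part of why the Gaussian pairs so well with this technique.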
(Though I remember a number of studios using RenderMan preferring filters with a bit of sharpening.)
[1] https://paulbourke.net/dataformats/rib/RISpec3_2.pdf#page=40
[2] https://rmanwiki-26.pixar.com/space/REN26/19661819/Filtering
[3] https://paulbourke.net/dataformats/rib/RISpec3_2.pdf#page=20...
[4] https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...
The aliasing you can end up with from a Mitchell filter, though, can be noticeable all through the process. Not only that, but what will the compositor do when they see the aliasing? They'll blur it.
Basically it is trying to squeeze blood from a stone: the image out of the renderer is going to be far sharper than anyone will ever see, because it will go through multiple stages. Even compositing almost never leaves a render verbatim. There is usually some sort of slight softening, chromatic aberration, lens distortion, and/or other transforms that require resampling anyway.
It is picking up pennies in front of a bulldozer: a filter that is too sharp only causes problems, let alone one that has negative lobes.
Scanning film would be digitizing it; going out to film could be called printing.
> does add a little blur
A lot of blur. 35mm film had a lot of folk wisdom built up around it, and even though it would be scanned and worked on at 2K resolution for live-action VFX, if you actually zoomed in it was extremely soft.
You could get away with 1.5K just for printing to the first-generation film, let alone the second-generation master prints and the third-generation prints that went out to theaters.
> The filter did kinda matter
It is extremely unlikely lighters were changing the actual pixel sample filters on a shot-by-shot basis. This is something that is set globally. Also, if you set it differently for different passes and you use holdouts, your CG will not line up with the other CG renders, and you will get edges from the compositing algebra not being exact.
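To make the holdout point concrete (made-up coverage numbers, purely illustrative): if two passes filter the same geometric edge with different pixel filters, the coverages no longer sum to exactly 1, and the difference shows up as a fringe.

```python
import numpy as np

# Alpha (coverage) across the same edge, rendered with two different
# pixel filters. Values are invented for illustration.
fg_alpha = np.array([0.0, 0.2, 0.8, 1.0])        # foreground pass, filter A
bg_alpha = 1.0 - np.array([0.0, 0.3, 0.7, 1.0])  # holdout pass, filter B

print(fg_alpha + bg_alpha)  # [1.  0.9 1.1 1. ] -- should be all 1s;
# the 0.9 / 1.1 pixels composite as visible dark/bright edge lines.
```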
> I was highly impressed with how sensitive the director and lighting/fx supes were to filter differences,
No one is changing the pixel sampling filters on a shot-by-shot basis, and no one is going to be able to tell the filter just by looking at it playing back on film. This is simply not reality.
> and how quickly they could spot them in animated clips that were often less than one second long
Absolutely not. Whatever you are talking about was not different pixel sample filters. Aliasing in general, yes, but that's much more complex.
> The main thing they were doing was trying to avoid texture sizzle without overblurring.
This has nothing to do with the renderer's pixel sample filters, which would be the only analog to the article here. Maybe you are talking about texture filtering, although that is not a difficult problem thanks to mipmapping and summed-area tables. Even back then, a GeForce 2 could do great texture filtering in real time.
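For context on why that's cheap: a summed-area table gives the box-filtered average of any axis-aligned texel rectangle in constant time, independent of the filter footprint. A minimal sketch:

```python
import numpy as np

def build_sat(img):
    # Each entry holds the sum of all texels above and to the left of it.
    return img.cumsum(axis=0).cumsum(axis=1)

def box_average(sat, x0, y0, x1, y1):
    # Average over the inclusive rectangle [x0, x1] x [y0, y1],
    # via four lookups (border guards kept simple for brevity).
    total = sat[y1, x1]
    if x0 > 0:            total -= sat[y1, x0 - 1]
    if y0 > 0:            total -= sat[y0 - 1, x1]
    if x0 > 0 and y0 > 0: total += sat[y0 - 1, x0 - 1]
    return total / ((x1 - x0 + 1) * (y1 - y0 + 1))

tex = np.random.rand(64, 64)
sat = build_sat(tex)
# Should match (up to float rounding):
print(box_average(sat, 8, 8, 23, 23), tex[8:24, 8:24].mean())
```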
Maybe you are talking about large filter sizes in shadow maps, which need a lot of samples when using percentage-closer filtering, but that has nothing to do with this.
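For the record, percentage-closer filtering averages the results of the depth comparisons rather than the depths themselves, which is exactly why wide penumbras need so many samples: the cost grows with the kernel area. A minimal sketch:

```python
import numpy as np

def pcf_shadow(shadow_map, u, v, receiver_depth, radius=2):
    # Compare the receiver's depth against each shadow-map texel in a
    # (2r+1)^2 kernel, then average the binary lit/shadowed results.
    # Depths cannot be pre-filtered, so bigger kernels mean more samples.
    h, w = shadow_map.shape
    lit = 0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            x = min(max(u + dx, 0), w - 1)
            y = min(max(v + dy, 0), h - 1)
            lit += receiver_depth <= shadow_map[y, x]
    return lit / (2 * radius + 1) ** 2  # fraction of the kernel that is lit
```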
Lighters on Shrek were indeed not changing the pixel filter shot by shot. We did, however, sit down multiple times to evaluate and compare the various pixel filters built into the renderer, and when we did that, the pixel filter actually was changed for every shot so they could be compared.
The filter mattered because it was set globally, and because the resolution was low, less than 1k pixels vertically. That is precisely why we spent time making sure we were using the filter we wanted.
I am in fact talking about different pixel filters, and the director and supes could tell the difference in short animated clips scanned to film, which is why it was so impressive to me. If you don’t believe me, I can only say that reflects on your experience and assumptions and not mine.
Both subtle high frequency sizzle as well as other symptoms of aliasing do occur when the pixel filter is slightly too sharp. I don’t know why you’re arguing that, but you seem to be making some bad assumptions.
Don't forget that these days, it's all going through some ML-based denoiser, anyway. I wouldn't be surprised if filter choice is nearly irrelevant at this point (except for training).