
Best compression for your banner assets

Christoph Erdmann

Warning: Please note

This thread was started before GSAP 3 was released. Some information, especially the syntax, may be out of date for GSAP 3. Please see the GSAP 3 migration guide and release notes for more information about how to update the code to GSAP 3's syntax. 

Recommended Posts

Hi,

 

I just want to introduce Compress-Or-Die, an online compression tool created especially for banner creators... so hopefully for most of you. :)

It isn't a tool like tinyjpg or jpegmini that just shrinks existing JPGs a little bit. It creates your images (including low-quality ones) from your original data and really squeezes out the last byte. It also supports things like JPGs with transparency and "selective quality" (as known from Adobe Fireworks), by the way...

 

Take a look at it here:

http://compress-or-die.com/

 

These articles, which explain many of the options you can set, may also be interesting:

http://compress-or-die.com/Understanding-JPG

http://compress-or-die.com/Understanding-PNG

 

I am the author of the tool and the articles, so if you have questions, wishes or anything else, just drop me a line.


Thanks,

Christoph

 

 


For the whole tool I am trying to use only the best compressors available, so for 8-bit PNGs it's pngquant (also the best choice in my view). That's the reason why the conversion takes a little time on big images. For some things I coded my own stuff (e.g. JPGs with transparency and JPGs with "selective quality"). At the moment I am in a JPG research phase, so some small improvements should come soon.


As long as JavaScript is available, you should be able to use the transparent jpg anywhere. When the image loads, it creates a canvas and does some pixel manipulation.
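For anyone curious, that pixel manipulation boils down to copying the mask's red channel into the alpha channel. A minimal sketch on raw RGBA data (the function name and the double-height layout, picture on top and grayscale mask below, are assumptions for illustration, not the actual compress-or-die code):

```javascript
// Sketch of the transparent-JPG decode step on raw RGBA data.
// Assumed layout: the JPG is double height; the top half holds the
// picture, the bottom half a grayscale alpha mask.
function applyMask(data, width, fullHeight) {
  // Byte offset where the mask half starts (4 bytes per RGBA pixel).
  var maskStart = width * (fullHeight / 2) * 4;
  for (var i = 0; i < maskStart; i += 4) {
    // Copy the mask pixel's red channel into the picture pixel's alpha.
    data[i + 3] = data[maskStart + i];
  }
  return data;
}
```

In the real script the data would come from getImageData() on a canvas the double-height JPG was drawn to, and the top half would then be written back with putImageData().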

 

And speaking of pixel manipulation, would the transparent jpg script run a little faster using a typed array? Probably needs a check for older browsers though.

https://jsperf.com/canvas-pixel-manipulation/8

 


5 hours ago, OSUblake said:

One note, I had to add a crossorigin="anonymous" attribute to get it to work with assets hosted on CodePen.

 

Yes, that's always needed if the JavaScript domain and the image domain are different. If you are building banners it makes sense to add it every time, because there are ad servers that put the assets on a different domain than the index.html file.

Maybe I should point that out.

 

5 hours ago, Oliver16Years said:

Interesting. I am looking for ways to reduce my ads' size more than Photoshop can. Do you plan to release the compressor as a binary? Are these transparent .jpgs usable in banner iframes?

 

No sorry, there won't be a binary.

And yes, as OSUblake stated, it works as long as JavaScript is available.

 

3 hours ago, OSUblake said:

And speaking of pixel manipulation, would the transparent jpg script run a little faster using a typed array? Probably needs a check for older browsers though.

https://jsperf.com/canvas-pixel-manipulation/8

 

Oh, nice one, thanks for this. I will take a look into it and will try to extend my code.


Just noticed how you can create your own selective quality mask. Very nice!

 

I think I need to redo a lot of my assets now. Any advice on compressing a sprite sheet? Would it be better to do each image individually, or just compress the spritesheet? 

 

For the typed array stuff... that can get confusing as the endianness of the processor determines the order of the rgba values. Parts of this script might help you out. It's from ZorroSVG, which does something similar to your transparent jpg trick, but with an svg mask.

https://github.com/Quasimondo/QuasimondoJS/blob/master/zorrosvg/js/zorrosvgmaskmaker.js

 

Some other SO links on the issue.

https://stackoverflow.com/questions/19499500/canvas-getimagedata-for-optimal-performance-to-pull-out-all-data-or-one-at-a

https://stackoverflow.com/questions/39213661/canvas-using-uint32array-wrong-colors-are-being-rendered
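The endianness dependence mentioned above can be checked at runtime. A small sketch (a common idiom, not taken from the ZorroSVG code linked above):

```javascript
// Detect whether 32-bit reads/writes see the RGBA bytes as 0xAABBGGRR
// (little-endian, the common case) or 0xRRGGBBAA (big-endian).
function isLittleEndian() {
  var buf = new ArrayBuffer(4);
  new Uint32Array(buf)[0] = 0x000000ff; // set only the lowest byte
  return new Uint8Array(buf)[0] === 0xff; // little-endian stores it first
}
```

A script doing 32-bit pixel writes would pick its shifts and masks based on this flag, which is exactly the ordering problem the links above discuss.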

 

 


Hope the selective mask was not too hard to find. I am a little unsure about the UX. It is such an important feature.

 

As so often, the answer is... it depends. But just post your uncompressed sprite sheet here; I'd like to have a look at it.

 

And once again, thanks for the links. I plan to address this subject next week. At the moment I am working on a JavaScript JPG blocking-artifact deblocker, which is my priority.


On 5/27/2017 at 2:27 AM, Christoph Erdmann said:

Hope the selective mask was not too hard to find. I am a little unsure about the UX. It is such an important feature.

 

It wasn't obvious that selecting selective quality would open up a nice little editor. Maybe show the button and slider for it, but disabled.

 

And of course, it always depends, so I included several different types I have used. The jacket ones are huge, but something I do a lot for 360 product views.

https://drive.google.com/file/d/0B6-ZvI1UFHQzZ21jVDg4ekRiRzA/view?usp=sharing

 


Jackets:
This is actually the perfect case for transparent JPGs. jacket0.png gets a file size of 205 kB instead of 2637 kB with the default quality settings. But I suppose you'd have to say goodbye to your texture packer then; I think it would be worth it. Just keep an eye on the decompression time (you know, the typed array is on my list).
I also don't see notable differences using 8-bit PNGs (908 kB instead of 2637 kB), but I think this won't work for all your products.


Spine:
Just use 8-bit PNG and set a color amount that pleases you (48 kB instead of 212 kB with 256 colors).
Take a look at the compression view. There are stains I've marked with red rectangles:

 

errors.png.d913c3953b5672569a721e5875601c68.png


Those stains take up space but shouldn't be there, correct? The compression view is really useful for identifying dirty transparency.


Dungeon:
Just use 8-bit PNG and set a color amount that pleases you. If you need more than 256 colors I would break the sheets apart and consolidate images with similar colors.
You can use the compression view to check the correctness of your sheets:

 

errors2.png.f4de1fc49c34ce536270fc81e34418d7.png

 

In area 1 the PNG spends a lot of file size compressing the ponds of lava, so I assume this is an animation. If it's not, there is an opportunity for improvement.
In area 2 the ponds are duplicates. Fine.
Area 3 contains a pond. Area 4 contains a different pond; why? Copy the blue area from area 3 and you will save space.
These are just some examples of how to use the compression view.


Does that help you?


Thanks for explaining the predator vision. I had no idea what I was looking at, but having you explain it makes a lot of sense now.

 

What you have created is pretty amazing. If you could incorporate texture packing and wrap everything up in a desktop app, you could probably sell it. I know I would buy it.

 


On 5/29/2017 at 1:55 PM, Christoph Erdmann said:

Btw: I tried Uint32Array with direct pixel manipulation and did not see any improvement in run time. Maybe Chrome's V8 optimizes this internally.

 

Most of the performance gains with 32-bit manipulation come from being able to manipulate the pixel as a whole, instead of each channel separately. But you're only changing the alpha channel, so the number of reads and writes will be the same for both methods.

 

How did you write your loop? I'm no expert in bitwise operations, but using this loop on a little-endian system was faster in Chrome and Firefox. In Edge, it blew the doors off a regular array.

context.drawImage(image, 0, 0);

var idata = context.getImageData(0, 0, canvas.width, canvas.height);
var data32 = new Uint32Array(idata.data.buffer);
var len = data32.length / 2;
var i = 0;

while (i < len) {
  // Copy the mask pixel's red channel (the low byte in little-endian)
  // into the image pixel's alpha channel (the high byte).
  data32[i] &= ((data32[len + i++] << 24) | 0xffffff);
}

context.putImageData(idata, 0, 0);

 

 

CodePen is not a good environment to test in, but here's a demo. Just let it run for a couple of seconds.

 

 

 

 


That was tricky. I implemented your code in compress-or-die and it was slower than the 8-bit code. I created a fiddle to show you, but in the fiddle the 32-bit code was faster.

But I figured it out: the difference is that in compress-or-die I had inlined the code within the onload attribute of the img tag. That made the 8-bit code a little slower, but the 32-bit code a lot slower than all other variants! So you only get the performance boost of the 32-bit code if you define a function (an IIFE seems not to be enough); otherwise the situation flips completely.

 

Here is the fiddle:

https://jsfiddle.net/McSodbrenner/gtv3earr/

 

I removed the "i" from the img tags to deactivate the corresponding code, so only one img tag should be correct at a time when testing.

 

But it's just a very small performance boost. Most of the time is lost in getImageData(), which is needed anyway.



My new article "Finally understanding PNG" is now online. It also explains the "predator view":

http://compress-or-die.com/Understanding-PNG

 

Just a warning: English is not my native language, so you may stumble over some quirks. :)

My English-speaking colleague is proofreading the article at the moment, but I couldn't wait and would be glad to get your feedback.


That's really helpful... and it's very interesting to see the difference some of those optimizations can make, like using 8-bit. I have always used 24-bit.

 

And compression is interesting. Here's the LZW algorithm in JavaScript, a relative of LZ77. Taken from here
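The embedded pen isn't reproduced here, but the core of LZW is compact enough to sketch (a hypothetical minimal version for illustration, not the code from the link):

```javascript
// Minimal LZW compressor: emits an array of integer codes.
// The dictionary starts with all 256 single-byte strings and learns a
// new entry for every phrase+character combination it hasn't seen yet.
function lzwCompress(input) {
  var dict = new Map();
  for (var i = 0; i < 256; i++) dict.set(String.fromCharCode(i), i);
  var nextCode = 256;
  var phrase = "";
  var out = [];
  for (var ch of input) {
    var combined = phrase + ch;
    if (dict.has(combined)) {
      phrase = combined; // keep extending the current phrase
    } else {
      out.push(dict.get(phrase)); // emit the code for the known phrase
      dict.set(combined, nextCode++); // learn the new phrase
      phrase = ch;
    }
  }
  if (phrase !== "") out.push(dict.get(phrase));
  return out;
}
```

Repetitive input collapses into few codes, which is the same effect that makes repetitive base64 output a sign of weak compression.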

 

 

 


One thing I really like is the "copy Data-URI" option :) This is great for testing on CodePen. Although I have a pro account there and could upload every asset, it's quite a mess when you have like 25 files, all named "logo.jpg" or "product.png". Okay, having base64 in the code is ALSO a big mess, but it's still a quick and easy way to test stuff.

I once wrote my own "transparent JPG" compressor which uses a very similar approach to Christoph's:

See the Pen aJLmrY by ninili (@ninili) on CodePen


...but it doesn't have all the controls for all the detailed aspects of compression, because it uses the built-in browser image compression.
So there's always a lack of optimized Huffman codes, selective quality, separate brightness and color compression, color subsampling, quantizers and so on.
You can sometimes even see it in the base64 output when there is lots of repetition; that's where the entropy is way too low for what you would consider "good compression".
I used this technique about two years ago when I had to animate quite big transparent graphics that would have been over 300k even with tinyPNG. So going "transparent JPG" was the only way to get it done with sane image sizes.
But when I discovered all the details about JPG, finding out that the browser doesn't perform Huffman optimization (which always leaves some compression potential untouched) made me... SAD. But now there's Christoph's tool that does exactly what I wanted, with almost everything I missed.
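That "entropy way too low" point can be made concrete. A quick sketch (a hypothetical helper, measured in bits per character):

```javascript
// Shannon entropy of a string in bits per character. Base64 uses a
// 64-symbol alphabet, so well-compressed JPG data should approach the
// 6-bit maximum; visible repetition pulls this number down.
function entropyBitsPerChar(s) {
  var counts = new Map();
  for (var ch of s) counts.set(ch, (counts.get(ch) || 0) + 1);
  var h = 0;
  counts.forEach(function (count) {
    var p = count / s.length;
    h -= p * Math.log2(p);
  });
  return h;
}
```

Running this over a suspicious-looking base64 string gives a rough feel for how much compression potential was left on the table.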

("Almost" meaning stuff like using 8-color PNG data for masking to get sharp edges, or even traced SVG paths, separate control for mask quality, etc. But I think Christoph is already working on all that in secret. ;)


Besides canvas, my tool also offers various SVG techniques to use the image mask. But beware: canvas will render faster, because a canvas is handled like any other image or bitmap when animated. The SVG introduces too much recalculation, especially when doing transforms. The exception is putting it in an HTML container with the style "will-change: transform"; then the SVG itself gets cached as an image and you can transform the container almost as fast as a canvas element.


Right now I'm on something I call a "compression frenzy" and am trying to squeeze out every last bit ;) It goes as far as reordering parts of the HTML, using UglifyJS, CSSmin, HTML minifiers, https://jakearchibald.github.io/svgomg/ , manual editing and optimizing of vector data (including re-nesting of transformations and using lots of symbols), and of course http://compress-or-die.com/ , which is something like a dreamland for a compression nerd like me ;)

Btw, the attached image shows what you get, at least as base64, when your JPG is NOT Huffman-optimized (Christoph's tool does optimize). This should convince anyone who is compression-aware ;)

 

baaadcompression.png


Another strange observation:
There seem to be cases where inlining JPG images as base64 in HTML yields a smaller data size than using actual image files.

In this case the Huffman codes are NOT optimized. Maybe that is the cause, and gzip "finishes" what the unoptimized Huffman coding didn't achieve.

base64.png


On 5.8.2017 at 4:32 PM, Ninili said:

(almost meaning stuff like - using 8-color PNG data for masking for sharp edges or even traced SVG paths, separate control for Mask quality, etc.) But I think Christoph is already working on all that in secret. ;)

 

I used PNG8 for the masks some time ago, but most of the time the PNG was a little larger than the JPG, and you need two requests. So I decided to go with a JPG mask.

I also tried mask quality settings but removed them. If you really want a smaller JPG mask you would have to make two requests, which adds HTTP header data. "Selective quality" doesn't seem to be the solution either: if you use quality settings that differ too much, the JPG algorithm uses different patterns from the 8×8 table, and then you get a glow around the JPG because the background color of the source image is no longer masked perfectly.

 

Quote

There seem to be cases when inlining JPG images as base64 in HTML yields a smaller data size than using actual image files.

 

In my experience gzipped base64 images are a little bit larger than the original images. But Chrome also counts the HTTP header data for requests, which is missing in the right part of your screenshot. Maybe that's the reason.

