New feature: show app container image size at git-push time

We’ve just added this (much requested) feature to the end of the git push process: it now shows the final image size (just above Charlie the unicorn), alongside the time it took to build it.

Some use cases include helping you assess whether changes to your Dockerfile reduced the image size - and thus the deploy size (e.g. whether temporary files are cleaned up properly in your Dockerfile).

What do you think? Anything interesting you’ve seen? Do you think the push feedback is getting more useful, or more crowded (e.g. with the post git-push tips)?


This is super useful. I’ve hit the wall before with image bloat, and I had to bug support to find out that my issue was, in fact, the image size.


@ashmastaflash what was the biggest image you’ve deployed before, and how much were you able to cut it down? :chart_with_downwards_trend:

One layer in the image was 2.231GB, and it was killing the Pi as it tried to unpack it. I was trying to figure out a good way to get Gnuradio onto a Pi using Resin. I think I was building the image with Docker and Qemu in an AWS instance (lots of cores) and exporting the image to Dockerhub, then FROM that image in the Resin Dockerfile.
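Roughly, the Resin-side Dockerfile looked like the sketch below (the image name, paths and start command here are placeholders, not my actual ones):

```dockerfile
# Sketch of the Resin Dockerfile: pull the heavy, cross-built Gnuradio image
# from Dockerhub and only add the app bits on top.
# "myuser/gnuradio-armhf" and the paths below are placeholders.
FROM myuser/gnuradio-armhf:latest

COPY . /usr/src/app
WORKDIR /usr/src/app

# Placeholder start command for the app
CMD ["python", "main.py"]
```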

Gnuradio is a big monster. At the time it wasn’t packaged for any image I could find to use on the Pi, so I had all the build libs and deps in the image. Later, I gutted what I could and flattened the image, and that helped some. I think that there are Gnuradio packages out there now, so it would be a much smaller image. I moved away from gnuradio and I’m using other smaller, compiled tools. May revisit in the future, using an Odroid or Pi3. Gnuradio (or what I was trying to do with it) overwhelmed the Pi2.

Interesting, thanks for sharing the story! Looks like there are some build scripts that might be usable now, but it does look horrendous! :see_no_evil:

One of the lessons I’m learning is that if you want to install a program and save on space, you really have to do everything in one RUN step: installing dependencies, building, and tearing things down; otherwise a bunch of extra stuff will stick around in layers. That’s how one ends up with RUN steps like the one I had for building Dungeon Crawl Stone Soup (I know Gnuradio is more complex, this is just what I have experience with so far), and it’s not even fully optimized yet. Docker’s layer model has its advantages, but its limitations too. Roughly, the pattern looks like the sketch below.
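A minimal sketch of what I mean, assuming a Debian-based Resin base image (the repo URL and package list are placeholders, not the actual Crawl recipe):

```dockerfile
# Single-RUN pattern: install, build, and tear down in ONE layer, so build
# dependencies and sources never stick around in the image.
# The repo URL and package names below are placeholders.
FROM resin/rpi-raspbian:jessie

RUN apt-get update && \
    apt-get install -y --no-install-recommends build-essential git && \
    git clone --depth 1 https://example.com/some/program.git /tmp/program && \
    make -C /tmp/program install && \
    apt-get purge -y build-essential git && \
    apt-get autoremove -y && \
    rm -rf /var/lib/apt/lists/* /tmp/program
```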

When you say that image was killing the Pi as it unpacked it, does that mean it was just taking veeeery long, or did it not work at all? I wonder if binary deltas would have helped a bit. But an RPi3 or the Odroid will get you there faster for sure.

Docker exhausted all RAM and died when processing that big layer on the Pi :frowning:

Ouch, that doesn’t sound good indeed…