Reduce Deployment Time?

I have a multi-container application, and one service (container) is a Python application with the Nvidia driver installed inside it.

The problem is its size: the image is ~3 GB and can take more than 30 minutes to deploy on slow networks.

My questions are:

  • Any tips to reduce the image size?
  • Is it possible to deploy a smaller container instead and then pull the fat one when the network speed is fast enough?



Have you looked at preloading?
Preloading can save deployment time by including the application in the initial image you write to the devices.
You will also want to look at delta updates.
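As a sketch of the delta-updates suggestion: on balenaCloud, deltas are typically controlled through fleet configuration variables. The fleet name below is a placeholder, and newer supervisors enable deltas by default, so check your supervisor version:

```shell
# Enable delta updates for a fleet via the balena CLI
# ("myFleet" is a placeholder fleet name)
balena env add RESIN_SUPERVISOR_DELTA 1 --fleet myFleet --config

# Optionally pick the delta version the supervisor should use
balena env add RESIN_SUPERVISOR_DELTA_VERSION 3 --fleet myFleet --config
```

With deltas enabled, devices download only the changed layers between releases rather than the full ~3 GB image.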

Hi, @TJvV
Thanks for your reply!

Preloading won’t work because our customers don’t want to download a fat image file! lol

And the delta update - thanks for sharing with me!



Can you maybe share your docker-compose (and Dockerfiles)?
It can maybe give some insights in how to reduce size.

I’m guessing one of the reasons your image is so big is that you need an SDK to build the Nvidia driver?
Multi-stage builds may help reduce the size of what you’re actually deploying by splitting up the process.
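A minimal sketch of what that could look like, assuming the driver installs into /nvidia (the stage name and paths here are illustrative, not taken from your project):

```dockerfile
# Build stage: the heavy toolchain lives only here
FROM balenalib/%%BALENA_MACHINE_NAME%%-ubuntu:focal AS builder
RUN install_packages wget gcc build-essential
# ... fetch kernel headers and run the Nvidia installer here,
#     installing into /nvidia/driver ...

# Runtime stage: copy only the built artifacts
FROM balenalib/%%BALENA_MACHINE_NAME%%-ubuntu:focal
COPY --from=builder /nvidia /nvidia
# install runtime-only packages here
```

Everything installed in the builder stage (gcc, headers, the installer itself) is discarded; only what you COPY into the final stage ships to the device.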


FROM balenalib/%%BALENA_MACHINE_NAME%%-ubuntu:focal


ENV DEBIAN_FRONTEND=noninteractive

# Install Nvidia Driver
RUN apt-get update && apt-get install -y wget gcc build-essential apt-utils dialog aufs-tools libc-dev iptables conntrack unzip libglu1-mesa-dev
RUN wget -nv${RESINOS_VERSION}/kernel_modules_headers.tar.gz && \
    tar -xzf kernel_modules_headers.tar.gz && \
    mkdir -p /lib/modules/${YOCTO_KERNEL} && \
    cp -r kernel_modules_headers /lib/modules/${YOCTO_KERNEL}/build && \
    ln -s /lib64/ /lib/ && \
    chmod +x ./${NVIDIA_DRIVER_RUN} && \
    mkdir -p /nvidia && \
    mkdir -p /nvidia/driver && \
    ./${NVIDIA_DRIVER_RUN} \
        --kernel-install-path=/nvidia/driver \
        --ui=none \
        --no-drm \
        --no-x-check \
        --install-compat32-libs \
        --no-nouveau-check \
        --no-nvidia-modprobe \
        --no-rpms \
        --no-backup \
        --no-check-for-alternate-installs \
        --no-libglx-indirect \
        --no-install-libglvnd \
        --x-prefix=/tmp/null \
        --x-module-path=/tmp/null \
        --x-library-path=/tmp/null \
        --x-sysconfig-path=/tmp/null \
        --kernel-name=${YOCTO_KERNEL} && \
    rm -rf /tmp/* ${NVIDIA_DRIVER_RUN} kernel_modules_headers.tar.gz kernel_modules_headers

# Install docker.
RUN apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common \
    && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - \
    && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
    && apt-get update && apt-get install -y docker-ce

# Install Nvidia Container Toolkit
RUN distribution=$(. /etc/os-release;echo $ID$VERSION_ID) \
   && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | apt-key add - \
   && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | tee /etc/apt/sources.list.d/nvidia-docker.list
RUN apt-get update && apt-get install -y nvidia-docker2

# Installing AWS CLI V2
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && unzip -qq awscliv2.zip \
    && ./aws/install --bin-dir /usr/bin && rm -r ./aws && rm awscliv2.zip

# Install some other utilities
RUN apt-get install -y python3-pip python3-dev dbus dmidecode lshw hdparm smartmontools v4l-utils && pip3 install -U pip setuptools wheel

# Enable udevd so that plugged dynamic hardware devices show up in our container.
ENV UDEV=1

# Set our working directory
WORKDIR /usr/app

# Install OpenVINO's HDDL driver for Mustang-V100-MX8
COPY ./hddl ./hddl
RUN apt-get update && apt-get install -y cmake libudev-dev libjson-c-dev && \
    wget -nv${RESINOS_VERSION}/kernel_modules_headers.tar.gz && \
    tar -xzf kernel_modules_headers.tar.gz && rm kernel_modules_headers.tar.gz && \
    mkdir -p /usr/src/kernel && \
    mv kernel_modules_headers/* /usr/src/kernel/ && rm -r kernel_modules_headers && \
#    ln -s /lib64/ /lib/ && \
    cd /usr/app/hddl/drv_vsc && make

RUN apt-get install -y pciutils pkg-config && cd /usr/app/hddl/hddl-bsl/src && make && make install && cd .. && \
    mkdir build && cd build && cmake .. -DINSTALL_USB_RULES=TRUE && make && make install && cd ../../../

RUN apt-get clean && rm -rf /var/lib/apt/lists/* /var/tmp/*

Hi, a couple of optimizations right off the bat to make the Dockerfile cleaner and slimmer:

  1. Start using the install_packages command that comes packaged in every balena base image. install_packages is an installer script that:
  • Installs the named packages, skipping prompts etc.
  • Cleans up the apt metadata afterwards to keep the image small.
  • Retries if apt fails. Sometimes a package will fail to download due to a network issue, and this may fix that, which is particularly useful in an automated build pipeline.
  2. Next, is there any specific reason to install Docker inside the container?
  3. Why not install all OS dependencies in one step so they can be cached more easily?
  4. Why install Python when you can use a balena base image that has Python pre-installed? Check out Docker Hub (you will find a similar base image available for your device type).
  5. Optionally, to build faster, you can use a beefier local build machine (balena ARM servers are already quite substantial), so that you can use the balena build command to build locally and then balena deploy to push that image.
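For example, the install_packages suggestion would turn the utilities step into something like this (a sketch; the package list is taken from the Dockerfile above):

```dockerfile
# install_packages handles apt-get update, no-prompt flags,
# retries, and cleanup of /var/lib/apt/lists in one step
RUN install_packages python3-pip python3-dev dbus dmidecode \
        lshw hdparm smartmontools v4l-utils \
    && pip3 install -U pip setuptools wheel
```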

With each round of optimizations, do post the metrics for the final image so that we know we are making progress. Another, harder optimization would be to use multi-stage builds, but that very much depends on your use case.
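To gather those metrics, a quick way to see which layers dominate the image (the image name below is a placeholder):

```shell
# Overall size of the built image
docker images my-app:latest

# Per-layer sizes, to spot which RUN steps add the most weight
docker history --no-trunc my-app:latest
```

Each line of the history output corresponds to one Dockerfile instruction, so a single oversized RUN step is easy to spot and target.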