Improving development build times

I’m wondering if there are any strategies for improving development build times? I use ‘balena push balena.local’ for pushing development builds, which entails three Node install-and-build processes. On a Raspberry Pi 4 it takes forever. Hot reloading works fine for things like Python, but Node is really time consuming.

This seems to be particularly bad for my builds, as I use multi-stage builds in my Dockerfiles, and since those intermediate stages do not appear to be cached it has to rebuild the node_modules folder every time (incidentally, the same issue makes multi-stage builds sluggish on the online Balena builders too).

Any ideas welcome. Perhaps an option to build locally and then push the containers, instead of building on device, would be a start for the local balena push? The hardware really struggles.

I imagine this will be a growing issue as Balena really takes off and applications get more and more complex.

In cloud mode, balena push, balena build, and balena deploy will all use the cache if possible, either from balenaCloud (push) or from the local Docker daemon (build and deploy). In local mode, the only option currently is balena push, which builds on the device and, AFAIK, does use the layers already on the device. If your application is missing the cache even when building through the cloud, it seems possible that your Dockerfile is not structured in a cache-friendly way. See for example here: Best practices for writing Dockerfiles | Docker Documentation.
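
For reference, a minimal sketch of the cache-friendly ordering those best practices describe (the image tag and build commands here are only placeholders for your own): copy the dependency manifests and install before copying the rest of the source, so source edits don’t invalidate the install layer.

# Copy only the dependency manifests first, so the expensive install
# layer is invalidated only when they change
FROM balenalib/raspberrypi4-64-node:12-build as builder
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
# Source changes now only invalidate the layers from here down
COPY . .
RUN yarn build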

The option to build locally and deploy to a device in local mode is an old idea that unfortunately hasn’t seen much progress. See here: Deploy `balena build` docker images via `balena push` to Local Mode devices · Issue #613 · balena-io/balena-cli · GitHub.

Even with multi-stage builds? Not in terms of whether it triggers a build or not, but whether a layer in the multi-stage build is cached for later use. E.g. will a yarn install in one stage of a multi-stage build be cached, so that the next time that stage is required the yarn install is not re-run? I think this is where things clog up for me.

Hey @maggie0002, you’re correct that when using balena push and our cloud builders, the only cache comes from previous images, so multi-stage builds would not effectively use the cache for their early stages.

However, using balena build or balena deploy just calls your local Docker daemon in the background, so whether caching is used depends on your environment.

Thanks for the info.

It seems the solution would be --mount=type=cache, or saving the cache with --cache-from, but both depend on BuildKit (buildx or the newer Dockerfile frontend syntax).
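
Something like this is what I had in mind (untested, and it assumes the BuildKit Dockerfile frontend, which as discussed isn’t available for the on-device builds; the cache target may also need adjusting to the output of yarn cache dir):

# syntax=docker/dockerfile:1.2
FROM balenalib/raspberrypi4-64-node:12-build as builder
WORKDIR /app
COPY package.json yarn.lock ./
# The yarn cache persists in a BuildKit-managed cache mount, so even a
# busted layer avoids re-downloading every package
RUN --mount=type=cache,target=/root/.cache/yarn yarn install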

For the cloud builders it’s not a big deal, nor is it a big deal for balena build or deploy. It’s only balena push that is a deal breaker. Developing on a Raspberry Pi 4 isn’t possible for me now: the build process takes hours on that low-powered device, and rebuilds happen too often without the cache. I have resorted to pushing to a dev fleet in the cloud instead, to let the cloud builders do the work. It means I can’t benefit from hot reload, but it seems like the best option.
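
So my current workflow is roughly:

# Local mode on the Pi: builds on-device, too slow for the Node services
balena push balena.local

# Instead: push to a cloud development fleet (name is just an example),
# let the cloud builders do the work, and the device downloads the release
balena push myDevFleet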

Hi Maggie,

Let me take a step back here. As I understand it, you are doing local development using balena push <uuid>.local, but you are seeing long build times on repeated executions of the command.
First of all, thanks for your feedback; I agree there are still plenty of opportunities to optimize caching for on-device builds. Second, if possible I’d love to know more about your development workflow.

  • Could you tell us more about how your project and Dockerfiles are structured?
  • Are you using livepush for development? Do changes to your files get picked up by the livepush process?
  • If you are not using livepush, could you tell us how often you are running the balena push command? Running that command should still try to use the on-device cache for the build, but it’s possible the cache is not being used properly.

Looking forward to hearing from you!

Good idea on the step back.

Yes, balena push is the stumbling block. I mentioned the cloud because it also seemed not to be using the cache, but the build times are so much quicker there that it’s not as important. It could benefit from a cache for the multi-stage builds, similar to the dependency caching in GitHub Actions workflows, but that extra speed boost is only really relevant to me because I am now using the cloud to push development changes for testing, due to the slowdown of balena push.

Here is the repo with the dockerfiles: GitHub - LearnersBlock/learners-block: An open source project that lets individuals and organisations

It’s the three yarn processes that are the killer. It seems to be made worse by Balena executing them all simultaneously, which you would think would speed things up, but three yarn install commands running at the same time on an SD card, with its slow read/write speeds, often lead to timeouts and failures.

Wifi-connect is the real kicker. It has a really simple interface of just a few boxes, but it has a huge node_modules folder and a really slow build process. I realise I could pre-build it, but I opted for the typical and more ideal environment for development. I realise I could also drop the multi-stage builds and get caching, but again I opted for the more ideal dev environment.

I was reading more about this when I posted initially, but it was some time ago and I don’t have the material I found to hand right now. Did I read something about the yarn commands being run in a way that clears Docker caches?

Yes, I used livepush, although for the Python components rather than the yarn builds, as those would take too long. A change to a build context that is set up with yarn triggers the yarn build stage again.

Hard to say exactly how often I re-run the balena push command, but package updates in package.json are common enough.

Just had another look back at my Dockerfile. I could copy the package.json file first, before the rest. I have it set up so that it triggers a new install after every file change rather than just on a change to package.json; not sure why I did that. I think it was because it’s a multi-stage build with no cache. Does copying the package.json first make a difference in that scenario?

It may help, although because builds fail when there is no cache, such as on a first build, I have to either modify my compose file so that one build happens at a time and the cache gets stored, or just resort to the cloud.

Hi maggie, sorry for not replying earlier. I took a look at your Dockerfiles. Unfortunately I don’t have any specific recommendations, since you know better than me the work that is required to build your services. Some general recommendations though:

As you mention, the most expensive step is building, so avoiding rebuilding is the best option; moving the build steps earlier in the process is ideal, so that code changes don’t require a full rebuild. With livepush you can also modify how the build behaves during development to avoid unnecessary build steps (see GitHub - balena-io-modules/livepush: Push code to your containers - live!), as in the sketch below.
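
As a rough illustration (directive names per the livepush README; the dev script is a placeholder), livepush directives are Dockerfile comments that change behaviour only during local-mode development:

FROM balenalib/raspberrypi4-64-node:12-build as builder
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .
RUN npm run build
# During livepush, run a watching dev server instead of the production
# command, so code changes don't trigger the full build
#dev-cmd-live=npm run dev
CMD ["npm", "start"]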

If possible, you may benefit from splitting your Dockerfile and uploading some of the stages to Docker Hub (for instance), so your service needs to perform fewer build steps and can also make better use of the cache.
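
For example (image names are hypothetical, and this assumes you can build for the device architecture on your workstation, e.g. with buildx and QEMU), build and push the expensive stage once:

docker build --target build-stage -t myorg/client-builder:latest ./client
docker push myorg/client-builder:latest

Then start the service’s Dockerfile from the pre-built image, so the device only pulls it instead of rebuilding it:

FROM myorg/client-builder:latest as build-stage
# ...only the cheaper, frequently-changing steps remain here...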

One last thing, we are working on some changes to our docker-compose support for better compatibility with multi-stage builds, which should also help you with your development process.

Please let us know about your progress on this, as it also might help others having the same issue.

@pipex I am having similar issues to @maggie0002, and I was wondering if you would mind looking at my Dockerfiles and seeing if you can help improve the development build times? I’m also interested in your point about using Docker Hub.

Probably best just to include your Dockerfiles in your post so people can look and reply.

For me it was mostly the Node builds that were unsustainable. Now I have a local development environment that I use for most things, and then a development fleet. When I want to test the final stages, I push the build to the balenaCloud development fleet, where the build is done and the device then downloads the update. I found this quicker than waiting for rebuilds on the Raspberry Pi.

@maggie0002 Thanks for replying. Yeah, I need to get better at including them in my questions, which I have done below. I’d appreciate any thoughts you have.
I too am using Node and finding it unsustainable.

Question for you: on your development machine, are you using Docker and Docker Compose to build, so you can see everything working similarly to how it will run on Balena?

Here are my docker-compose file and Dockerfiles:

version: "2.0"

services:
  client:
    build: "./client"
    privileged: true
    network_mode: host
    depends_on:
      - server
  server:
    build: "./server" # use this directory as the build context for the service
    privileged: true
    network_mode: host
    restart: "always"
    labels:
      io.balena.features.kernel-modules: "true"
      io.balena.features.firmware: "true"
    environment:
      CAN0_BITRATE: 250000
  stream-server:
    build: "./stream-server" # use this directory as the build context for the service
    privileged: true
    network_mode: host
    restart: "always"
    labels:
      io.balena.features.kernel-modules: "true"
      io.balena.features.firmware: "true"

Server:

# Build stage
FROM balenalib/raspberrypi4-64-node:12-build as builder

WORKDIR /app

RUN install_packages iproute2 can-utils python

COPY . .
RUN npm install
RUN npm run clean
RUN npm run build-server

CMD ["bash", "/app/startup_scripts/start_server.sh"]

Client:

# Stage 0, "build-stage", based on Node.js, to build and compile the frontend
FROM balenalib/raspberrypi4-64-node:12-build as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
COPY ./ /app/
RUN npm run build

FROM arm64v8/nginx:1.21.3-alpine
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Copy the default nginx.conf provided by build-stage
COPY --from=build-stage /app/nginx.conf /etc/nginx/conf.d/default.conf 

Stream Server:

FROM alpine:3.12
RUN apk add --no-cache ffmpeg
COPY ./rtsp-simple-server /
COPY ./rtsp-simple-server.yml /
ENTRYPOINT [ "/rtsp-simple-server" ]

Thanks,

The Dockerfiles look good to me. Can any of the content of ./ on this line be moved to the last build stage? It wouldn’t make much difference usually, but you never know; depending on your setup it could help.

COPY ./ /app/

You may not need the whole app in the build-stage just to run an npm run build, which already has the package.json it needs. Each time something changes in ./, the cache gets busted and the npm run build runs again. If anything in the ./ folder doesn’t need to trigger that re-run, it would be better off in the final stage, for example the default.conf, as in the sketch below.
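
Roughly like this (assuming your frontend source lives under ./src; the path is a guess, adjust to your layout):

FROM balenalib/raspberrypi4-64-node:12-build as build-stage
WORKDIR /app
COPY package*.json /app/
RUN npm install
# Copy only what the build actually needs, so edits to e.g. nginx.conf
# no longer bust this layer and re-run npm run build
COPY ./src /app/src
RUN npm run build

FROM arm64v8/nginx:1.21.3-alpine
COPY --from=build-stage /app/build/ /usr/share/nginx/html
# Take nginx.conf straight from the build context in the final stage
COPY ./nginx.conf /etc/nginx/conf.d/default.conf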

Yes, in the development environment I use a docker-compose-dev.yml and a Dockerfile.dev in the same repo to have a parallel development environment. I also have a dummy backend to replicate the device it would usually be on, so I can do UI design around the Wi-Fi interaction and so on. Depending on the project, it isn’t always necessary to have a parallel file structure for your dev env; you can have a docker-compose file that merges into the original docker-compose file and makes only a few minor changes, as in the example below.
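
For the merging approach, Docker Compose merges multiple -f files, so the dev file only needs to contain the differences (the service name and paths here are just examples):

# docker-compose.dev.yml: only the overrides
version: "2.0"
services:
  server:
    build:
      context: ./server
      dockerfile: Dockerfile.dev
    volumes:
      - ./server/src:/app/src

# Run with both files; later files override earlier ones:
#   docker-compose -f docker-compose.yml -f docker-compose.dev.yml up --build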