HELP: Caching not consistent (Delta Updates)

I am working on a project that uses balena for IoT devices deployed in areas with relatively weak internet connectivity. Because of this, we rely heavily on the delta updates feature to minimize the amount of data each device has to download.

I have recently seen two occurrences where a particular service (CouchDB), which has not been touched or updated for months, failed to hit the build cache when an update was pushed to the application.

Could anyone explain what causes a service, or individual steps of a service's Dockerfile, to miss the cache even when the files involved have not been modified? Or are there any known faults in balena that make deltas non-deterministic?

Knowing what invalidates the cache for a particular service is essential for us to do our job, and if it's not something we can predict consistently, it's not a service we can use at the scale we're hoping to achieve.

An example of the docker-compose.yml being used is as follows:

version: "2"

volumes:
  couchdb-data:
  storage-one:
  storage-two:

services:
  supervisorlock:
    build:
      context: ./services
      dockerfile: ./Dockerfile
  cronbox:
    privileged: true
    build:
      context: ./services
      dockerfile: cronbox/Dockerfile
  storage:
    privileged: true
    build:
      context: ./services
      dockerfile: storage/Dockerfile
    volumes:
      - storage-one:/buckets/one
      - storage-two:/buckets/two
  fileserver:
    privileged: true
    build:
      context: ./services
      dockerfile: fileserver/Dockerfile
    volumes:
      - storage-one:/www/data/buckets/one
      - storage-two:/www/data/buckets/two
    ports:
      - "1010:1010"
  couchdb:
    privileged: true
    build:
      context: ./services
      dockerfile: couchdb/Dockerfile
      args:
        couchdb_user: user
        couchdb_password: password
    volumes:
      - couchdb-data:/opt/couchdb/data
    ports:
      - "5984:5984"
  api-one:
    privileged: true
    build:
      context: ./services
      dockerfile: api-one/Dockerfile
    depends_on:
      - couchdb
    ports:
      - "8080:8080"
  api-two:
    privileged: true
    build:
      context: ./services
      dockerfile: api-two/Dockerfile
    volumes:
      - storage-one:/storage
    depends_on:
      - api-one
    ports:
      - "8280:8280"
  pdf:
    privileged: true
    build:
      context: ./services
      dockerfile: pdf/Dockerfile
    depends_on:
      - api-one
    ports:
      - "9090:9090"
  cloud:
    privileged: true
    build:
      context: ./services
      dockerfile: cloud/Dockerfile
    depends_on:
      - api-one
    ports:
      - "80:80"
  knowledge-base:
    privileged: true
    build:
      context: ./services
      dockerfile: knowledge-base/Dockerfile
    volumes:
      - storage-one:/app/buckets/one
    depends_on:
      - storage
    ports:
      - "411:411"

The Dockerfile for the service (couchdb) that is not caching consistently (yet has been untouched for months) is as follows:

FROM couchdb:2.3.1
ARG couchdb_user
ARG couchdb_password
ENV COUCHDB_USER ${couchdb_user}
ENV COUCHDB_PASSWORD ${couchdb_password}
EXPOSE 5984
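
For reference, my current understanding of Docker's layer-cache rules is summarized in the annotated sketch below (an illustrative Dockerfile, not one of our real services). If this is right, the only cache inputs for the CouchDB service above should be the couchdb:2.3.1 base image and the two build args, which is why the rebuilds surprise me.

# Illustrative Dockerfile only -- my understanding of what feeds each step's cache key.

# FROM: if the base image tag now resolves to different content on the builder,
# every later layer is rebuilt.
FROM debian:bullseye

# Declaring a build ARG does not by itself invalidate the cache...
ARG some_value

# ...but any instruction that consumes it (such as this ENV) is re-run whenever
# the build-arg value changes, along with every step after it.
ENV SOME_VALUE ${some_value}

# COPY/ADD are cached against a checksum of the files taken from the build
# context, so changing those files re-runs this step and everything below it.
COPY app/ /app/

# RUN is cached against the exact instruction text plus the parent layer;
# editing the command re-runs it and all following steps.
RUN apt-get update && apt-get install -y curl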

Thank you in advance for your assistance!

Hi there, are you referring to caching on the balenaCloud build servers?

Hi! Yes, that is what I’m referring to.

As far as I know, the use of cache on our build servers won't affect delta generation, since the delta is computed between the finished images after they are built. Have you seen anything specifically indicating that the deltas are not being generated correctly, or is the concern based on the lack of cache hits on our builders?

I was mistaken about the use of deltas on the particular devices I was monitoring, and about the impact of caching. I thought delta updates were turned on for my entire application, but they were enabled per device, and a couple of devices did not have them enabled. I've since enabled deltas on all devices and everything is operating as expected.
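
In case it helps anyone else who lands here, this is roughly how we ended up setting it. The fleet name and device UUID below are placeholders, the exact flag names vary between balena CLI versions, and the same setting can also be toggled from the Fleet/Device Configuration pages in the dashboard:

# Enable delta updates for every device in the fleet
# (older CLI versions use --application instead of --fleet):
balena env add RESIN_SUPERVISOR_DELTA 1 --fleet my-fleet

# Or enable it for a single device; a device-level configuration
# variable overrides the fleet-level one:
balena env add RESIN_SUPERVISOR_DELTA 1 --device <device-uuid>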

Thank you for your help!

Hey thanks for letting us know, glad you sorted it out!