Multicontainer: many other containers running

While working on a multicontainer setup I noticed that I have accumulated about 14 unknown containers, all running the following command:
/bin/sh -c 'while true; do sleep 3600; done'

How can I prevent these (probably useless) containers from running?

Note: it could be unrelated, but after a while I received the error “No delay specified in scheduledApply”. After killing all of the unknown containers the error went away.
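
In case it helps anyone, this is roughly how I listed and killed them from the host OS shell (balena-engine on balenaOS accepts the same commands as the standard docker CLI; the container ID below is a placeholder):

# List all containers, including stopped ones, with the command they run
balena-engine ps -a --format '{{.ID}}\t{{.Image}}\t{{.Command}}'

# Stop and remove one of the rogue containers (repeat per container)
balena-engine stop <container-id>
balena-engine rm <container-id>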

Hi there,

Is this something you are able to reproduce? If so, I’d be interested to see the logs from balena-engine as well as the supervisor to help debug what’s gone on.
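
In case it’s useful, something along these lines, run from the host OS shell, should capture both (the supervisor unit is called resin-supervisor on older balenaOS releases and balena-supervisor on newer ones):

# Engine logs
journalctl -u balena.service --no-pager > balena-engine.log

# Supervisor logs (unit name depends on the balenaOS release)
journalctl -u resin-supervisor --no-pager > supervisor.log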

If you are able to provide your docker-compose.yml (obviously redacting anything private), that would help!

I was pushing new code to the device, but I couldn’t get ‘motion’ to run. It turned out a command in my own entrypoint script was failing. It seems that every time I tried to push, a new container like the one mentioned above started, and the old one was never removed. Is this enough information, or would you still like to see a balena-engine log? I don’t recall it showing many errors; is there a way to make it more verbose?

P.S.: Here is the docker-compose.yaml file used:

version: '2'
volumes:
  static:
services:
  vision:
    build: ./vision
    devices:
      - "/dev/apex_0"
    privileged: true
    volumes:
      - static:/usr/src/app/static
  motion:
    build: ./motion
    privileged: true
    volumes:
      - static:/usr/src/app/static
  server:
    build: ./server
    privileged: true
  planning:
    build: ./planning
    privileged: true

Yeah, it would be helpful to see the engine’s logs. Also, what device are you using? What OS version and supervisor version are you on? Finally, the container that is running ‘while true; do sleep 3600; done’ isn’t part of your app, right?
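
For reference, all of these can be checked from the host OS shell, roughly like so (the supervisor container name varies between OS releases, so the filter below just matches on “supervisor”):

# OS name and version
cat /etc/os-release

# Engine version
balena-engine version

# Supervisor version (it appears in the image tag of the supervisor container)
balena-engine ps --filter "name=supervisor" --format '{{.Image}}'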

  • I am using balenaOS 2.58.6+rev1 on a ‘Generic X86_64’ device, with supervisor version 11.14.0.
  • The container that is running ‘while true; do sleep 3600; done’ is NOT part of my app.
  • I have attached the logs here: journalctl.log (10.6 KB), hopefully the correct ones.
    • The logs were captured while starting a single container using a docker-compose file similar to the one above (with the other services commented out). A new (broken) image is pushed to the device at the point where the logs start. This also starts the unknown container, with ID 595a3e460db6 and image hash 3cba9b20d450. The container crashes on execution of the entrypoint. Reloading the image, by pushing to the device again or by changing the Dockerfile, results in a new unknown container with the same image hash being started, this time with container ID 132ff4aee6b4 and again image hash 3cba9b20d450.
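
For reference, the unknown container can be inspected on the device to see what it was created from; something along these lines, using the IDs above (the --format template is just one way to pull out the relevant fields):

# Show the image, entrypoint and command of the unknown container
balena-engine inspect 595a3e460db6 --format 'Image: {{.Image}} Entrypoint: {{.Config.Entrypoint}} Cmd: {{.Config.Cmd}}'

# Check which image tags reference the hash
balena-engine images --no-trunc | grep 3cba9b20d450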

Hey @tobyhijzen, thanks for providing the logs; we’re looking into it now.

Could you try the same test docker-compose file, but without privileged: true, and see if the unknown containers are still created?
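
For example, a stripped-down test compose file, with privileged: true removed and the same build contexts, might look like this (the device mapping for vision is kept, since that on its own does not require privileged mode):

version: '2'
volumes:
  static:
services:
  vision:
    build: ./vision
    devices:
      - "/dev/apex_0"
    volumes:
      - static:/usr/src/app/static
  motion:
    build: ./motion
    volumes:
      - static:/usr/src/app/static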

Also, if you could provide some or all of your motion Dockerfile/init scripts (obviously redacting anything private) we may be able to reproduce internally.