Balena supervisor: 'Invalid Network Configuration Error' during local push then stuck

I’m facing an error when pushing a multi-container app: once it occurs, I have to reset all data, containers, and images before I can local push again.

Supervisor version: 12.5.10

Cause: when a bridge network is declared in the docker-compose file with an incomplete IPAM config (gateway missing), the supervisor enters an error loop:
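For reference, a compose fragment like the following reproduces the problem (the network name is illustrative): the IPAM entry declares a subnet but no gateway, which supervisor 12.5.10 rejects.

```yaml
# Hypothetical docker-compose fragment; only the network name is made up.
version: "2"
networks:
  mynet:
    driver: bridge
    ipam:
      driver: default
      config:
        # gateway is missing here, which triggers
        # InvalidNetworkConfigurationError on supervisor 12.5.10
        - subnet: 192.168.35.0/24
```

Declaring a matching gateway alongside the subnet (e.g. `gateway: 192.168.35.1`) avoids the error on this supervisor version.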

Scheduling another update attempt in 128000ms due to failure:  InvalidNetworkConfigurationError: Network IPAM config entries must have both a subnet and gateway
[error]         at /usr/src/app/dist/app.js:10:823643
[error]       at arrayEach (/usr/src/app/dist/app.js:2:9171)
[error]       at Function.forEach (/usr/src/app/dist/app.js:2:78885)
[error]       at Function.validateComposeConfig (/usr/src/app/dist/app.js:10:823544)
[error]       at Function.fromComposeObject (/usr/src/app/dist/app.js:10:821348)
[error]       at /usr/src/app/dist/app.js:10:818666
[error]       at /usr/src/app/dist/app.js:2:101637
[error]       at /usr/src/app/dist/app.js:2:54165
[error]       at baseForOwn (/usr/src/app/dist/app.js:2:31078)
[error]       at Function.lodash.mapValues (/usr/src/app/dist/app.js:2:101564)
[error]       at Function.fromTargetState (/usr/src/app/dist/app.js:10:818569)
[error]       at /usr/src/app/dist/app.js:6:107246
[error]       at Array.map (<anonymous>)
[error]       at Object.exports.getApps (/usr/src/app/dist/app.js:6:107194)
[error]       at async fn (/usr/src/app/dist/app.js:6:1957)
[error]   Device state apply error InvalidNetworkConfigurationError: Network IPAM config entries must have both a subnet and gateway
[error]         at /usr/src/app/dist/app.js:10:823643
[error]       at arrayEach (/usr/src/app/dist/app.js:2:9171)
[error]       at Function.forEach (/usr/src/app/dist/app.js:2:78885)
[error]       at Function.validateComposeConfig (/usr/src/app/dist/app.js:10:823544)
[error]       at Function.fromComposeObject (/usr/src/app/dist/app.js:10:821348)
[error]       at /usr/src/app/dist/app.js:10:818666
[error]       at /usr/src/app/dist/app.js:2:101637
[error]       at /usr/src/app/dist/app.js:2:54165
[error]       at baseForOwn (/usr/src/app/dist/app.js:2:31078)
[error]       at Function.lodash.mapValues (/usr/src/app/dist/app.js:2:101564)
[error]       at Function.fromTargetState (/usr/src/app/dist/app.js:10:818569)
[error]       at /usr/src/app/dist/app.js:6:107246
[error]       at Array.map (<anonymous>)
[error]       at Object.exports.getApps (/usr/src/app/dist/app.js:6:107194)
[error]       at async fn (/usr/src/app/dist/app.js:6:1957)
[info]    Applying target state

I see you’ve updated the tests here: Fix broken IPAM network validation · balena-os/balena-supervisor@fdb3719 · GitHub, and that’s fine! The real problems here are:

  • The supervisor is stuck in an error loop (not responding)
  • No information is surfaced in the balena CLI through the local push logs
  • Local push does not work while the error loop is ongoing (it gets stuck fetching resin-state)

Workaround:

systemctl stop resin-supervisor
rm -rf /mnt/data/resin-data/resin-supervisor/
systemctl start resin-supervisor

Side question:
Is there an easy way to reset the current application while in local push deployment (i.e. clear the balena CLI state so it stops what it’s doing)?

Thanks

Hello @quentingllmt

We have opened a GitHub issue for this so that we can investigate further. You can track progress here


Hi Quentin,

Thanks for bringing up this issue. As you pointed out, we recently merged a PR that surfaced this validation in the code path, where previously it was never executed. I wrote my assessment in the linked issue, but my conclusion is that throwing an exception in this case is probably too harsh, and a simple log warning should suffice. I’m working on a PR to make that change, and you’ll be notified when we close the issue.
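To illustrate the direction of the fix, here is a minimal sketch of validation that warns instead of throwing. All names (`IpamConfig`, `validateIpamConfig`) are hypothetical, not the supervisor’s actual identifiers:

```typescript
// Hypothetical sketch: validate IPAM entries, warning instead of throwing.
interface IpamConfig {
  subnet?: string;
  gateway?: string;
}

function validateIpamConfig(
  entries: IpamConfig[],
  log: (msg: string) => void = console.warn,
): void {
  for (const entry of entries) {
    if (!entry.subnet || !entry.gateway) {
      // Previously this path threw InvalidNetworkConfigurationError,
      // putting the supervisor into an apply-error loop. A warning lets
      // the target state apply continue with Docker's defaults.
      log(
        'Network IPAM config entry is missing a subnet or gateway; ' +
          'the network will be created with Docker defaults',
      );
    }
  }
}

// An incomplete entry now only produces a warning instead of an exception:
validateIpamConfig([{ subnet: '192.168.35.0/24' }]);
```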


Hey quentingllmt, we merged Show warning instead of exception for invalid network config by pipex · Pull Request #1694 · balena-os/balena-supervisor · GitHub, which is available via self-service upgrades in the dashboard. Make sure your devices are running Supervisor v12.6.8 or later to get the fix.