Hello,
I’m trying to push an update to my multicontainer app running on aarch64 (rpi4) but the build always fails with the error:
[Info] Uploading images
[Error] An error occured: (HTTP code 404) no such image - no such image: ae5620f8ef66: No such image: ae5620f8ef66:latest
[Info] Built on arm01
[Error] Not deploying release.
It was compiling fine a few days ago, so I'm not sure what the reason could be. The rest of the build log looks good, no errors; it only fails at the very end.
Please note: I have disabled delta updates because it seemed to help people with similar issues in the past, but that DID NOT SOLVE it.
Hi, thanks for bringing this to our attention. This is a known issue and we are currently investigating it. We will get back to you when it's resolved.
Hello,
I see that some fix was implemented yesterday, but I still have errors building today.
[Info] Uploading images
[Success] Successfully uploaded images
[Error] Some services failed to build:
[Error] Service: kiosk
[Error] Error: invalid from flag value builder: No such image: sha256:6974f0895f1b8d7d030ff780c84b97d9520e55aae5f1a8cc4dd43316cc53a0e3
[Info] Built on arm01
[Error] Not deploying release.
Remote build failed
The relevant part of the “kiosk” image logs is here:
[kiosk] Step 14/16 : COPY --from=builder /go/src/github.com/mozz1
[kiosk] invalid from flag value builder: No such image: sha256:69
Which is a different error from yesterday's (but is related to not finding an image at some point). This dockerfile (kiosk service) is from the balena-dash repo and it was working yesterday.
Seems the first stage image gets lost during the building process, so it fails at the instruction:
COPY --from=builder /go/src/github.com/mozz100/tohora /home/chromium/tohora
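For context, the failing instruction is the standard multi-stage pattern; roughly this (a sketch only, with the stage name taken from the log and the base images assumed, not the exact balena-dash file):

```dockerfile
# first stage: compile the Go helper (base image assumed)
FROM golang AS builder
RUN go get github.com/mozz100/tohora

# second stage: runtime image keeps only the built artifact;
# this COPY fails if the "builder" stage image disappears mid-build
FROM debian:buster-slim
COPY --from=builder /go/src/github.com/mozz100/tohora /home/chromium/tohora
```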
My gut feeling is that, since my build is long (> 2 hours), some data gets lost, maybe because of memory management on the worker. It doesn't happen with the short builds I made for testing on other applications (with the same dockerfiles).
Hey, thank you for the report and the logs. We've performed some emergency changes to the builders to give us a little breathing room while we get to the root cause of the problem. There may be the occasional build error at a much reduced rate for the next day or so, but I'd be surprised if it continues. Could you try to push again, please?
Thanks Cameron,
I managed to build successfully thanks to the cache reducing the build time to a few minutes.
But I have a strange (maybe unrelated) issue with one of my images running node:latest, where it successfully builds my Vue.js project if I run the dockerfile in another app (without docker-compose), but fails to build it in my big app with many containers. This means I had to build my Vue.js app locally and copy the files into the nginx image serving them, instead of building them inside a multi-stage container. It's really strange that the same project builds fine in one app (same architecture and device type) and fails in the other.
Hey Valentin, regarding this second error you mention: is the error number/message the same as above, or something else? Thank you for bearing with us.
In the second error I mentioned with node/Vue.js, vue-cli-service complains that it cannot find some package.json file, so it's a different error. I'll trigger a new build to get the exact logs, hold on.
Thanks for the help
Here is my dockerfile:
FROM node:14.4.0-stretch as builder-base
WORKDIR /app
COPY myapp/package*.json ./
RUN yarn install
COPY myapp ./
RUN yarn build
FROM nginx:alpine
COPY --from=builder-base /app/dist /usr/share/nginx/html
I can share my project privately if needed, but suffice it to say it builds fine when I push only this dockerfile/project to a test app running aarch64/raspberrypi4-64.
When I use it inside my docker-compose like so (more containers, long build time because they depend on heavy python libraries like numpy, scipy):
version: '2.1'
networks: {}
services:
  some-other-services-here:
    build:
      context: something/.
  myapp:
    build:
      context: myapp_parent/. # here is the dockerfile from above
    restart: always
    ports:
      - "80:80"
It installs the packages correctly (step 4), but when it comes to the yarn build command (step 6), I get this error:
[myapp] Step 6/8 : RUN yarn build
[myapp] ---> Running in 45b84717952f
[myapp] yarn run v1.22.4
[myapp] $ vue-cli-service build
[myapp] internal/modules/cjs/loader.js:1032
[myapp] throw err;
[myapp] ^
[myapp] Error: Cannot find module '../package.json'
[myapp] Require stack:
[myapp] - /app/node_modules/.bin/vue-cli-service
[myapp] at Function.Module._resolveFilename (internal/modules/cjs/loader.js:1029:15)
[myapp] at Function.Module._load (internal/modules/cjs/loader.js:898:27)
[myapp] at Module.require (internal/modules/cjs/loader.js:1089:19)
[myapp] at require (internal/modules/cjs/helpers.js:73:18)
[myapp] at Object.<anonymous> (/app/node_modules/.bin/vue-cli-service:4:25)
[myapp] at Module._compile (internal/modules/cjs/loader.js:1200:30)
[myapp] at Object.Module._extensions..js (internal/modules/cjs/loader.js:1220:10)
[myapp] at Module.load (internal/modules/cjs/loader.js:1049:32)
[myapp] at Function.Module._load (internal/modules/cjs/loader.js:937:14)
[myapp] at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12) {
[myapp] code: 'MODULE_NOT_FOUND',
[myapp] requireStack: [ '/app/node_modules/.bin/vue-cli-service' ]
[myapp] }
[myapp]
[myapp] error Command failed with exit code 1.
[myapp]
[myapp] info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
When I build this dockerfile/source alone, I get this success message (the warnings can be ignored):
[main] Step 6/8 : RUN yarn build
[main] ---> Running in db43eeb39c31
[main] yarn run v1.22.4
[main] $ vue-cli-service build
[main] - Building for production...
[main]
[main] WARNING Compiled with 4 warnings 6:21:14 AM
[main] Module Warning (from ./node_modules/eslint-loader/index.js):
[main] /app/src/store/index.js
[main] 38:7 warning Unexpected console statement no-console
[main] ✖ 1 problem (0 errors, 1 warning)
[main] You may use special comments to disable some warnings.
[main] Use // eslint-disable-next-line to ignore the next line.
[main] Use /* eslint-disable */ to ignore all warnings in a file.
[main] warning
[main] asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
[main] This can impact web performance.
[main] Assets:
[main] js/chunk-vendors.e4d36559.js (888 KiB)
[main] warning
[main] entrypoint size limit: The following entrypoint(s) combined asset size exceeds the recommended limit (244 KiB). This can impact web performance.
[main] Entrypoints:
[main] index (1.09 MiB)
[main] js/chunk-vendors.e4d36559.js
[main] css/index.aba88c88.css
[main] js/index.65fd8dd2.js
[main] warning
[main] webpack performance recommendations:
[main] You can limit the size of your bundles by using import() or require.ensure to lazy load some parts of your application.
[main] For more info visit https://webpack.js.org/guides/code-splitting/
[main] File Size Gzipped
[main] dist/js/chunk-vendors.e4d36559.js 887.51 KiB 217.21 KiB
[main] dist/js/index.65fd8dd2.js 5.10 KiB 2.10 KiB
[main] dist/css/index.aba88c88.css 219.53 KiB 31.97 KiB
[main] Images and other types of assets omitted.
[main] DONE Build complete. The dist directory is ready to be deployed.
[main] INFO Check out deployment instructions at https://cli.vuejs.org/guide/deployment.html
[main] Done in 31.64s.
[main] Removing intermediate container db43eeb39c31
[main] ---> c6814372b99b
[main] Step 7/8 : FROM nginx:alpine
[main] ---> d918ec5de862
[main] Step 8/8 : COPY --from=builder-base /app/dist /usr/share/nginx/html
[main] ---> 40e657e7e942
[main] Successfully built 40e657e7e942
Hi,
Can you try to build on your local dev machine? (docs)
Also, can you make sure that the node_modules folder is listed in your .dockerignore file? I found a similar error caused by moving a directory to another place, so COPY myapp ./ may be overwriting the installed node modules in the image with your local ones.
Building this particular image locally works without problems. I also made sure to exclude node_modules, and I initially checked with an ls command that it was indeed not copied over.
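That check can be sketched as a temporary debug step right after the COPY (a throwaway line for illustration, not part of my actual Dockerfile):

```dockerfile
COPY myapp ./
# temporary debug step: list the copied tree and fail fast if node_modules slipped in
RUN ls -la /app && test ! -d /app/node_modules
```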
Any other ideas? As I said, the build works flawlessly when triggered on its own, but in conjunction with other containers it fails.
Could it be that when the dockerfile is in a sub-folder, the .dockerignore file inside that same sub-folder is not properly interpreted, so node_modules gets copied anyway? The docker-compose is at the root, while this particular dockerfile is in the sub-folder myapp_parent.
The .dockerignore inside myapp_parent (same folder as the dockerfile):
*
!myapp
myapp/yarn.lock
myapp/node_modules
Dammit, you were right: I deleted the node_modules folder on my host and the build succeeded. Looks like the .dockerignore file in the subfolder doesn't work…
.
├── .gitignore
├── docker-compose.yml
├── myapp_parent
│ ├── .dockerignore
│ ├── Dockerfile
│ └── myapp
│ ├── .browserslistrc
│ ├── .editorconfig
│ ├── .eslintrc.js
│ ├── .gitignore
│ ├── .prettierrc
│ ├── .vscode
│ ├── README.md
│ ├── babel.config.js
│ ├── package.json
│ ├── public
│ ├── src
│ └── vue.config.js
Yes, the .dockerignore should be in the root folder. Also note that from v12 of the CLI, .gitignore won't be used by balena push and it will rely only on the .dockerignore file. See: https://github.com/balena-io/balena-cli/wiki/CLI-v12-Release-Notes#breaking-changes
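For the layout shown above, a root-level .dockerignore could look like this (a sketch; the paths are assumed from your tree, and `**` patterns need a reasonably recent engine/CLI):

```
# keep host-installed dependencies and lockfiles out of every build context
**/node_modules
**/yarn.lock
.git
```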
OK I’ll see about adding a dockerignore to the root as well, thanks.
Hello Cameron,
This message is to let you know that I'm still experiencing the “An error occured: (HTTP code 404) no such image - no such image” error when building my big multi-container app in the cloud. Before that, the logs also show Failed to generate deltas due to an internal error; will be generated on-demand.
What’s the status on resolving this issue?
Are you using deltas in your build? Can you try disabling them while we also investigate whether there is a known problem with our delta server?
We are indeed investigating a known issue with the delta server. Would you mind sending us the whole “Failed to generate deltas due to an internal error; will be generated on-demand” logs to aid in our debugging session, if possible?
I’m sending you whatever logs I have in a PM, but I used detached mode since the build time is greater than 2 hours, so the logs are not very detailed. If needed, I can try to build again with a live session.
We got the logs. Thanks! We will be investigating the issue and let you know shortly. Did you have a chance to test it out without deltas?
I just started a build now after disabling delta updates, let’s see in 2 hours!
Awesome, let us know how it goes!