Remote: error: file write error (No space left on device)


I’m getting the following error when pushing code to resin with `git push resin master`:

The authenticity of host ' (' can't be established.
ECDSA key fingerprint is SHA256:NfwmqnKId5cx1RWpebbEuuM87bCJbdyhzRnqFES9Nnw.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added ',' (ECDSA) to the list of known hosts.
Counting objects: 20, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (17/17), done.
Writing objects: 100% (20/20), 19.30 KiB | 0 bytes/s, done.
Total 20 (delta 15), reused 8 (delta 3)
remote: error: file write error (No space left on device)
remote: fatal: unable to write sha1 file
error: unpack failed: unpack-objects abnormal exit
 ! [remote rejected] master -> master (unpacker error)
error: failed to push some refs to ''

The command was working a couple of days ago. Any ideas?
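For anyone hitting this who controls the remote: the `No space left on device` message comes from the server side, so the first step is confirming a full disk there. A minimal sketch, assuming you have shell access to the machine hosting the git remote (the `/var` path below is just an illustrative starting point):

```shell
# Show filesystem usage; a volume at 100% matches the
# "No space left on device" error from the remote.
df -h

# Numeric usage of the root filesystem, handy for scripting
# (GNU coreutils df; strips the trailing '%').
usage=$(df --output=pcent / | tail -n 1 | tr -dc '0-9')
echo "root filesystem usage: ${usage}%"

# Find the largest directories under a suspect path
# (path is illustrative; point it at wherever repos live).
du -h --max-depth=1 /var 2>/dev/null | sort -rh | head -n 10
```

If the offending volume is at 100%, freeing space there (old packfiles, logs, temp files) should let the push succeed again.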


Looks like it’s an issue (mentioned by others on chat too). Investigating, thanks for the report!


@zuma - Hey! This was an intermittent issue that was fixed shortly afterwards. Please let us know if you are still seeing this.


Fixed. Thanks :slight_smile:


I’m trying to push an update and I’m getting a similar error message. Not sure if it’s the same issue - any tips?

[Info]     Building on ARM01
[Info]     Pulling old image for caching purposes
[Info]     Fetching base image
[==================================================>] 100%
[Info]     Building Standard Dockerfile project
[Error]    Build failed: (HTTP code 500) server error - {"message":"mkdir /var/lib/docker/tmp/docker-builder597425943: no space left on device"}
[Info]     Uploading successful layers to registry for caching purposes
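This variant of the error is Docker itself running out of space under `/var/lib/docker` on the build host. For those running their own builders, a hedged sketch of how to inspect and reclaim Docker disk usage (standard `docker` CLI commands; the guard keeps it safe on machines without a running daemon):

```shell
# Inspect what Docker is consuming (images, containers,
# local volumes, build cache) before deleting anything.
if command -v docker >/dev/null 2>&1 && docker info >/dev/null 2>&1; then
    docker system df

    # Reclaim space: removes stopped containers, dangling
    # images, and unused networks. Add --volumes to also
    # drop unused volumes (destructive; use with care).
    docker system prune -f
else
    echo "docker CLI or daemon unavailable; run this on the build host"
fi
```

Note that on the hosted build pipeline this cleanup happens on the provider's side, so end users can only report it, as in this thread.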


+1 Please fix.


@MiluchOK @sheng, we have acknowledged the issue and are working on it. Please check the No space left on device thread for updates.

Best, Kostas


@MiluchOK @sheng we have performed space cleanup on the affected service in our backend and the build pipeline should be back to normal now. Please let us know if you still have any issues. We are actively looking into how to prevent this from happening again.


@lekkas Thanks for tracking down that issue!