Is there any cleanup/housekeeping script for the s3-data volume? In my current environment I deploy around 6 releases a day, each with 5-7 containers.
It seems like deleting an old release through the openbalena-admin web UI or the API only removes the database reference, not the release data itself. I am now sitting at around 200 GB in the open-balena_s3-data/_data volume.
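From what I have read, that matches how the stack is put together: the API DELETE removes rows from the openBalena database, while the actual image layers live in the Docker registry, whose blobs sit on the s3-data volume. Reclaiming the space apparently needs the registry's own garbage collector. A minimal sketch, assuming the registry container is named openbalena_registry_1 and uses the stock config path (verify both with `docker ps` before running anything):

```shell
# Sketch only: run the Docker Distribution garbage collector inside the
# openBalena registry container. The container name and config path are
# assumptions -- check yours before running.
registry_gc() {
    local container="${1:-openbalena_registry_1}"
    local config="${2:-/etc/docker/registry/config.yml}"
    # --dry-run lists the blobs that would be removed without deleting them;
    # drop the flag for the real pass, and avoid pushing images while it runs.
    docker exec "$container" /bin/registry garbage-collect --dry-run "$config"
}
```

Start with the dry run and only re-run without `--dry-run` once the list of blobs looks sane.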
I would also like to know how to do this. My server is currently 98% full, and I need to delete old data to make room for new builds and releases.
I ran the following script to remove old releases. I got a success message, but the used space stayed the same.
#!/bin/bash
# --- CONFIGURATION ---
APP_ID="<Fleet ID>"
AUTH_TOKEN="<Token>"
API_BASE_URL="https://api.balena.mydomain.com"
# ---------------------
# 1. List the fleet's releases and extract the commit column.
# 2. Pipe the commits into a loop that deletes each one via the API.
# tail -n +3 skips the header row and the first (newest) release,
# so the latest release is kept.
balena release list alpha-autoFlow | awk '{print $2}' | tail -n +3 | while read -r COMMIT_HASH; do
    # Skip empty lines, if any
    if [ -z "$COMMIT_HASH" ]; then
        continue
    fi
    echo "Attempting to delete release: $COMMIT_HASH"
    # Run the delete request.
    # Note: $filter is escaped with a backslash so bash does not expand it
    # as a shell variable.
    curl --location --request DELETE "${API_BASE_URL}/v6/release?\$filter=belongs_to__application%20eq%20${APP_ID}%20and%20commit%20eq%20'${COMMIT_HASH}'" \
        --header "Authorization: Bearer ${AUTH_TOKEN}"
    echo -e "\n-----------------------------------"
done
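As an aside, the $filter above hand-writes the %20 encodings inline, which is easy to get wrong. A small hypothetical helper that percent-encodes the filter in one place might look like:

```shell
# Hypothetical helper: build the percent-encoded OData $filter used in the
# delete request, so spaces and quotes are encoded in one place rather than
# inline in the URL.
build_release_filter() {
    local app_id="$1" commit="$2"
    printf "belongs_to__application eq %s and commit eq '%s'" "$app_id" "$commit" \
        | sed -e 's/ /%20/g' -e "s/'/%27/g"
}
```

It would then be used as `"${API_BASE_URL}/v6/release?\$filter=$(build_release_filter "$APP_ID" "$COMMIT_HASH")"`.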
I have tried that, but the curl request only seems to delete the references to the releases, not the actual files. I had some success with a script I wrote, but today I found out it also deleted part of an active release and made a bit of a mess of a working device.
So I am still here waiting to see if anyone on the balena crew can help out.
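For what it's worth, one way to avoid hitting an active release again might be to look up the fleet's pinned/target release first and skip its commit in the delete loop. The `should_be_running__release` field below is my assumption about the API's data model, so double-check it against your openBalena version before relying on it:

```shell
# Hypothetical guard: fetch the fleet's target release so a delete loop can
# skip it. The $expand/field names are assumptions about the balena API data
# model -- verify them against your openBalena version before use.
# Uses API_BASE_URL and AUTH_TOKEN from the configuration block above.
get_target_release() {
    local app_id="$1"
    curl -s "${API_BASE_URL}/v6/application(${app_id})?\$expand=should_be_running__release(\$select=commit)" \
        --header "Authorization: Bearer ${AUTH_TOKEN}"
}
```

The commit in the response could then be compared against each COMMIT_HASH before issuing the DELETE.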