I would like to use VS Code remote development via SSH but cannot find a way to SSH in using the standalone client. balena ssh works, but I cannot see how to incorporate the container service name if I use a regular ssh client.
Hi bowenm187,
The SSH server on a balena device listens on TCP port 22222, so you can use "plain" SSH with a command like ssh -p 22222 root@<device_ip_address>. While development images have passwordless root access enabled, production images require an SSH key to be added to the config.json file. Complete details are here: https://www.balena.io/docs/learn/manage/ssh-access/.
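For example, a public key can be added to config.json on the boot partition with a fragment like this (a sketch; substitute your own public key):
"os": {
    "sshKeys": [
        "ssh-ed25519 AAAA... user@host"
    ]
}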
A couple of caveats: generally speaking, it's not good to edit files in a temporary container. We recommend adding a persistent volume via your docker-compose.yml or Dockerfile and modifying files there so your edits survive service and device restarts.
We're very interested in the developer experience, and we'd love to hear how you make out. Feel free to report back and let us know.
John
Thanks for the quick response John,
Accessing the balenaOS host is no problem. However, I want to access a container running on it. So effectively I need the "plain" ssh equivalent of "balena ssh mydevice.local main".
Hi @bowenm187,
For this, you need to add a second command to your SSH command that takes you inside the container, something like ssh -t -p 22222 root@device-ip "balena-engine exec -it <container_name> /bin/sh". Note that the container name is unlikely to match what you see in the balenaCloud dashboard. Log into the device host OS and run balena ps to get the valid container name.
With all this said, this will work using plain SSH, but may not work without some tweaking of your VS Code extension settings.
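For example, an entry like this in ~/.ssh/config, combined with turning on the remote.SSH.enableRemoteCommand setting in VS Code, should drop the Remote - SSH extension straight into the container (a sketch; the host alias and container name are placeholders):
Host mydevice-main
    HostName <device_ip_address>
    Port 22222
    User root
    RequestTTY yes
    RemoteCommand balena-engine exec -it <container_name> /bin/sh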
John
Hi,
I also love the quick turn-around I get when using Visual Studio Code Remote - SSH. It is invaluable when I work with libraries accessing features of my Raspberry Pi which I cannot install and test on my local machine.
Rebuilding the image is too slow, and even with livepush I don't get the same turnaround time.
With VS Remote SSH I can also debug directly in the container, which afaict has no equivalent in the Balena toolbox.
In order to use VS Remote SSH, I apply the following changes to my Dockerfile and the startup script.
Dockerfile:
RUN apt-get update \
&& apt-get install -y openssh-server \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir /var/run/sshd \
&& echo 'root:balena' | chpasswd \
&& sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config \
&& sed -i 's/UsePAM yes/UsePAM no/' /etc/ssh/sshd_config
...
COPY start.sh .
CMD ./start.sh
The above will install openssh in the container and configure it for password-based access.
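If you would rather avoid a fixed password, a key-based variant along these lines should also work (a sketch; substitute your own public key):
RUN mkdir -p /root/.ssh \
&& echo 'ssh-ed25519 AAAA... you@host' > /root/.ssh/authorized_keys \
&& chmod 700 /root/.ssh \
&& chmod 600 /root/.ssh/authorized_keys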
For example, my startup script looks like this:
start.sh:
#!/bin/bash
# Start sshd if START_SSHD env variable is set
# This allows for remote access via PyCharm and ssh
if [[ "$START_SSHD" == "1" ]]; then
/usr/sbin/sshd -p 22 &
fi
python3 main.py
If I now set the device variable START_SSHD to 1, sshd starts on port 22 and I can connect to the container using the device local IP address and the username/password root/balena.
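For example, the variable can be set from the balena CLI (assuming your device UUID is <device-uuid>):
balena env add START_SSHD 1 --device <device-uuid>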
Note: For this to work, the service needs to share the host network.
docker-compose.yml
version: '2'
services:
my-service:
build: ./my-service
network_mode: host
Obviously, this has security implications and it is not a good idea to have it enabled in a production setup. For development and experiments, however, it provides so much value that I find myself adding it almost all the time.
Hope this is helpful for someone. I am also curious to hear whether there is another way to get the same experience without installing and starting sshd in the container.
cc @mpous - wdyt?
Thanks for the feedback @bowenm187 @hardy and for sharing your current solution. For additional visibility, I have created a feature request in the balena CLI repository as well: ssh: Should support Visual Studio Code remote development using SSH · Issue #2466 · balena-io/balena-cli · GitHub
Running an SSH server in each service container certainly works, and the START_SSHD variable is handy to that end; ideally, however, we would be able to use the balenaOS host OS SSH server. FYI, I think that the ssh-uuid proof-of-concept implementation comes very close to meeting the requirements, except that currently it uses the balenaCloud proxy backend (device UUID instead of a local IP address) and thus would be too slow for use with Visual Studio Code. I have also created a ssh-uuid issue (Should support local IP address as alternative to UUID (Visual Studio Code over SSH) · Issue #2 · pdcastro/ssh-uuid · GitHub), although ultimately the objective is to get the feature added to the balena CLI so that VS Code would be configured to use "balena ssh".
Awesome, I am looking forward to this feature landing via the Balena CLI.
I am a bit intrigued about how this is going to work, since if I understood you correctly you want this to work w/o having an actual sshd running in the service container: sshd runs on the host only and you basically ssh into the host. From there you are somehow "emulating" ssh into the service containers (docker exec like).
From the ssh-uuid issue:
- Being named balena ssh, it suggests the provision of ssh's functionality, while being incompatible with basic ssh command line usage.
Exactly, I have been caught by this as well. Being named "ssh", one expects a certain kind of functionality. It would be great to close this gap.
- Hardy
Hi,
I am trying to set up the same environment for our application.
I have been able to do this for both development and production images.
I can connect using VSC remote development Plugin.
But I am experiencing an issue. When I access the container using
ssh -p 22333 root@192.168.0.240
I don't have the same environment as if I access using:
balena device ssh 192.168.0.240 backend
Basically, I lack all the balena-injected variables and I am not sure where to get them from.
I should be able to see this:
root@backend:/app# env
RESIN_APP_ID=1874960
BALENA_APP_ID=1874960
BALENA_DEVICE_ARCH=armv7hf
RESIN_SUPERVISOR_ADDRESS=http://127.0.0.1:48484
HOSTNAME=backend
BALENA=1
RESIN_SUPERVISOR_HOST=127.0.0.1
BALENA_API_URL=https://api.balena-cloud.com
BALENA_APP_UUID=f576794df97d4673adcf896f8462b988
RESIN_APP_LOCK_PATH=/tmp/balena/updates.lock
BALENA_APP_LOCK_PATH=/tmp/balena/updates.lock
BALENA_API_KEY=c33b76b3730b93592290b07dbc348f5f
RESIN_SUPERVISOR_PORT=48484
PWD=/app
UDEV=1
....
But only have access to this:
env
SHELL=/bin/bash
PWD=/root
LOGNAME=root
MOTD_SHOWN=pam
HOME=/root
SSH_CONNECTION=192.168.0.237 62301 192.168.0.240 22333
TERM=xterm-256color
USER=root
SHLVL=1
LC_CTYPE=UTF-8
SSH_CLIENT=192.168.0.237 62301 22333
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin
SSH_TTY=/dev/pts/3
TERM_PROGRAM=WarpTerminal
_=/usr/bin/env
From what I have been reading, balena "environment variables" (the ones you set in the dashboard or via balena env) are not stored in a flat file on the host that you can edit at runtime. Instead, the balena Supervisor:
- Fetches them from the cloud
- Persists them internally in its state database (a SQLite file under the host's state partition, e.g. /mnt/state/balena-supervisor/db/database.sqlite)
- Injects them into each service's Docker container as process-environment variables when the container is started.
Any ideas on how I could solve this?
Just so that it's mentioned too: if you don't have local network access to the device, the following works to get onto a device via the Balena SSH proxy, assuming you've added your SSH key to your Balena profile.
ssh <balena-username>@ssh.balena-devices.com host <device-id> <command>
From this you should be able to set up whatever regular SSH connection you need. For example
ssh -t <user>@ssh.balena-devices.com host <id> balena exec -it <container-id/name> sh
would open up an interactive shell in a running container. You could also imagine mounting the remote filesystem using SSHFS or something.
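To avoid retyping the proxy invocation, an ~/.ssh/config entry along these lines should work (a sketch; the alias is a placeholder):
Host my-balena-device
    HostName ssh.balena-devices.com
    User <balena-username>
    RequestTTY yes
    RemoteCommand host <device-id>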
We've used the remote proxy to build our own tooling for tunneling onto devices for this kind of temporary debugging.
Hope that's useful.
I think this doesn't help solve my current issue, but it is good to know. I didn't know about this one; it might be helpful for troubleshooting.
Thank you
For your issue, it looks like you're connecting to the device in the first command and to one of the services in the second command. This would explain why the environment is different.
You need to get into the running container after having gotten into the device. You should be able to do that using balena exec or similar; that would put you into the running container, and you should have all the supervisor-injected environment variables.
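For example, from the host OS shell (a sketch; use the container name reported by balena ps):
balena exec <container_name> env | grep BALENA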
I don't think it's that.
I am accessing straight into the backend (my application) container, which is where I have configured the root no-password SSH access on port 22333 (see the Dockerfile.Template at the end).
I think the issue is what I was mentioning here:
From what I have been reading, balena "environment variables" (the ones you set in the dashboard or via balena env) are not stored in a flat file on the host that you can edit at runtime. Instead, the balena Supervisor:
- Fetches them from the cloud
- Persists them internally in its state database (a SQLite file under the host's state partition, e.g. /mnt/state/balena-supervisor/db/database.sqlite)
- Injects them into each service's Docker container as process-environment variables when the container is started.
Anyway, there should be some point where I can find them and maybe load them myself, for example.
That was one of the answers I was looking for from the Team.
Letâs see if anyone can help me.
# USING DEBIAN INSTEAD OF ALPINE
FROM balenalib/raspberrypi3-debian-golang:latest
# VS Code's Remote-SSH server component isn't supported on a 32-bit, musl-based Alpine host.
# Even with gcompat, libstdc++ and the loader in place, the Remote-SSH extension will never
# unpack and launch its ARMHF server bits on Alpine ARMv7l, because Alpine on ARM32 is explicitly
# not a supported Remote-SSH target
WORKDIR /app
COPY /app/go.mod /app/go.sum ./
RUN go mod download
COPY /app .
# add sqlite & tzdata
# RUN apk add --no-cache modemmanager networkmanager sqlite tzdata
# add sqlite & tzdata on Debian Bullseye
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y --no-install-recommends openssh-server \
modemmanager \
network-manager \
sqlite3 \
tzdata \
git \
wget \
ca-certificates
RUN rm -rf /var/lib/apt/lists/*
RUN echo 'export PATH=$PATH:/usr/local/go/bin' >> /root/.profile
RUN mkdir -p /run/sshd && chmod 0755 /run/sshd
RUN passwd --delete root
RUN sed -i 's/^#\?\(PermitRootLogin\).*$/PermitRootLogin yes/' /etc/ssh/sshd_config
RUN sed -i 's/^#\?\(PermitEmptyPasswords\).*$/PermitEmptyPasswords yes/' /etc/ssh/sshd_config
RUN wget -qO go1.24.2.linux-armv6l.tar.gz \
https://go.dev/dl/go1.24.2.linux-armv6l.tar.gz
RUN rm -rf /usr/local/go
RUN tar -C /usr/local -xzf go1.24.2.linux-armv6l.tar.gz
# 2) Put Go on your PATH
# RUN echo 'export PATH=$PATH:/usr/local/go/bin:$(go env GOPATH)/bin' \
# >> ~/.profile
ENV GOROOT=/usr/local/go GOPATH=/go PATH=/usr/local/go/bin:/go/bin:$PATH
# 3) Install Air via Go modules
RUN GOBIN=/usr/local/bin GO111MODULE=on go install github.com/air-verse/air@latest
# needed in order to access physical devices
ENV UDEV=1
ENV DBUS_SYSTEM_BUS_ADDRESS=unix:path=/host/run/dbus/system_bus_socket
#CMD ["sh", "-c", "tail -f /dev/null"]
CMD ["/usr/sbin/sshd", "-D", "-p", "22333"]
I see, so you're using your own sshd in the service. Then I believe it's because the environment of the server is not inherited by the SSH session: https://serverfault.com/questions/969021/how-to-have-ssh-session-inherit-environment-variables-from-sshd
The environment variables are injected through Docker (balena) so only the sshd environment would have those. They are not persisted anywhere as far as I'm aware, but potentially someone else can suggest a workaround.
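One way to confirm what the engine injected is to inspect the container from the host OS (a sketch; container name as shown by balena ps):
balena inspect --format '{{range .Config.Env}}{{println .}}{{end}}' <container_name>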
If you wrap the running of the server in a script you could persist the environment into a file on startup and that would be accessible by the session, but watch out for unintended consequences if exposing all of it!
Hi @asdf123,
It seems like your suggestion worked.
I created this Start.sh:
#!/usr/bin/env bash
# Capture the environment
OUTFILE="environment.log"
env > "$OUTFILE"
echo "Environment written to $OUTFILE"
exec /usr/sbin/sshd -D -p 22333
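Given the warning above about exposing everything, the dump could also be limited to the balena-injected variables, e.g.:
env | grep -E '^(BALENA|RESIN)' > "$OUTFILE"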
Then in my Makefile:
...
# note: the line continuations keep the recipe in one shell, so the exports reach air
devOnTarget:
	while IFS='=' read -r key val; do \
	  if [[ $$key == *BALENA* ]]; then \
	    export "$$key"="$$val"; \
	  fi; \
	done < /app/environment.log; \
	env | grep BALENA; \
	air -c .air.onTarget.toml
....
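An alternative to the Makefile loop (a sketch; it assumes the values contain no spaces or quotes, since env output is not shell-quoted) would be to have Start.sh make every SSH login shell load the file automatically:
# in Start.sh, after writing environment.log:
echo 'set -a; . /app/environment.log; set +a' >> /root/.profile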
It's still not the best way, but at least it's a starting point.
Thanks