Facing an issue with openBalena configuration

Okay I’ll try this. Thanks for the response

Hello

I have configured this on my server which has a public IP. Now I had some doubts related to registering CNAME records.

Is it compulsory to use HTTPS, and is it compulsory to register the subdomains api, s3, vpn, and registry? Can I register only one DNS record for the IP address?

Also, how can I verify that my server is running after pointing the DNS at my IP address?

I get the following logs when I run ./scripts/compose up:

-> ./scripts/compose up
Starting openbalena_redis_1_918c910c01d3         ... done
Starting openbalena_cert-provider_1_453dfda47d64 ... done
Starting openbalena_s3_1_afe1b57bcb92            ... done
Starting openbalena_db_1_8981f3edc9d9            ... done
Starting openbalena_api_1_b946d63f2941           ... done
Starting openbalena_vpn_1_4f32c78d74e5           ... done
Starting openbalena_registry_1_75517ae0d16c      ... done
Starting openbalena_haproxy_1_ea78669dc5c2       ... done
Attaching to openbalena_s3_1_afe1b57bcb92, openbalena_cert-provider_1_453dfda47d64, openbalena_redis_1_918c910c01d3, openbalena_db_1_8981f3edc9d9, openbalena_api_1_b946d63f2941, openbalena_registry_1_75517ae0d16c, openbalena_vpn_1_4f32c78d74e5, openbalena_haproxy_1_ea78669dc5c2
s3_1_afe1b57bcb92 | Systemd init system enabled.
cert-provider_1_453dfda47d64 | [Error] ACTIVE variable is not enabled. Value should be "true" or "yes" to continue.
cert-provider_1_453dfda47d64 | [Error] Unable to continue due to misconfiguration. See errors above. [Stopping]
redis_1_918c910c01d3 | 1:C 09 May 2019 13:22:11.707 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1_918c910c01d3 | 1:C 09 May 2019 13:22:11.707 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1_918c910c01d3 | 1:C 09 May 2019 13:22:11.707 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1_918c910c01d3 | 1:M 09 May 2019 13:22:11.709 * Running mode=standalone, port=6379.
redis_1_918c910c01d3 | 1:M 09 May 2019 13:22:11.709 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1_918c910c01d3 | 1:M 09 May 2019 13:22:11.709 # Server initialized
redis_1_918c910c01d3 | 1:M 09 May 2019 13:22:11.709 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1_918c910c01d3 | 1:M 09 May 2019 13:22:11.709 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1_918c910c01d3 | 1:M 09 May 2019 13:22:11.709 * DB loaded from disk: 0.000 seconds
redis_1_918c910c01d3 | 1:M 09 May 2019 13:22:11.709 * Ready to accept connections
db_1_8981f3edc9d9 | 2019-05-09 13:22:12.097 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1_8981f3edc9d9 | 2019-05-09 13:22:12.097 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1_8981f3edc9d9 | 2019-05-09 13:22:12.112 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1_8981f3edc9d9 | 2019-05-09 13:22:12.182 UTC [21] LOG:  database system was shut down at 2019-05-09 13:21:54 UTC
db_1_8981f3edc9d9 | 2019-05-09 13:22:12.195 UTC [1] LOG:  database system is ready to accept connections
api_1_b946d63f2941 | Systemd init system enabled.
registry_1_75517ae0d16c | Systemd init system enabled.
vpn_1_4f32c78d74e5 | Systemd init system enabled.
haproxy_1_ea78669dc5c2 | Building certificate from environment variables...
haproxy_1_ea78669dc5c2 | Setting up watches.  Beware: since -r was given, this may take a while!
haproxy_1_ea78669dc5c2 | Watches established.
haproxy_1_ea78669dc5c2 | [NOTICE] 128/132214 (15) : New worker #1 (17) forked
haproxy_1_ea78669dc5c2 | [WARNING] 128/132214 (17) : Server backend_api/resin_api_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/132214 (17) : backend 'backend_api' has no server available!
haproxy_1_ea78669dc5c2 | [WARNING] 128/132215 (17) : Server backend_registry/resin_registry_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/132215 (17) : backend 'backend_registry' has no server available!
haproxy_1_ea78669dc5c2 | [WARNING] 128/132216 (17) : Server vpn-tunnel/balena_vpn is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/132216 (17) : proxy 'vpn-tunnel' has no server available!
db_1_8981f3edc9d9 | 2019-05-09 13:22:18.418 UTC [28] ERROR:  relation "uniq_model_model_type_vocab" already exists
db_1_8981f3edc9d9 | 2019-05-09 13:22:18.418 UTC [28] STATEMENT:  CREATE UNIQUE INDEX "uniq_model_model_type_vocab" ON "model" ("is of-vocabulary", "model type");
haproxy_1_ea78669dc5c2 | [WARNING] 128/132219 (17) : Server backend_registry/resin_registry_1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
haproxy_1_ea78669dc5c2 | [WARNING] 128/132220 (17) : Server vpn-tunnel/balena_vpn is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
haproxy_1_ea78669dc5c2 | [WARNING] 128/132224 (17) : Server backend_api/resin_api_1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.

I am not sure if my server is running properly.

Hi @sharvin26, I think you should be able to test the server using curl https://api.openbalena.local/ping, replacing api.openbalena.local with whatever domain you have set up. However, looking at the logs it seems the cert-provider service is refusing to start:

cert-provider_1_453dfda47d64 | [Error] ACTIVE variable is not enabled. Value should be "true" or "yes" to continue.
cert-provider_1_453dfda47d64 | [Error] Unable to continue due to misconfiguration. See errors above. [Stopping]

Not sure why that is; can you perhaps describe how you are setting up the server and the certificates?

Hello, it seems there’s an issue bringing up the API successfully due to this:

db_1_8981f3edc9d9 | 2019-05-09 13:22:18.418 UTC [28] ERROR:  relation "uniq_model_model_type_vocab" already exists
db_1_8981f3edc9d9 | 2019-05-09 13:22:18.418 UTC [28] STATEMENT:  CREATE UNIQUE INDEX "uniq_model_model_type_vocab" ON "model" ("is of-vocabulary", "model type");

Can you please paste all the logs from the API container? You can get them with ./scripts/compose exec -it <API_CONTAINER_ID> journalctl --no-pager

I performed the following configuration steps to set up the server:

apt-get update && apt-get install -y build-essential git
adduser balena
usermod -aG sudo balena
apt-get install docker.io
usermod -aG docker balena
curl -L https://github.com/docker/compose/releases/download/1.23.0-rc3/docker-compose-Linux-x86_64 -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
apt-get install libssl-dev
apt-get install nodejs
apt-get install npm
git clone https://github.com/balena-io/open-balena.git ~/open-balena
./scripts/quickstart -U <email@address> -P <password> -d test.mydomain.com
./scripts/compose up -d

Now I downloaded the CA certificate from the server to my local machine and instructed balena-cli on the client side to use it:

export NODE_EXTRA_CA_CERTS=~/open-balena/config/certs/root/ca.crt

Then I installed the certificate system-wide:

sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ~/ca.crt

And then I ran the services:

=> ./scripts/compose up
Starting openbalena_cert-provider_1_453dfda47d64 ... done
Starting openbalena_redis_1_918c910c01d3         ... done
Starting openbalena_s3_1_afe1b57bcb92            ... done
Starting openbalena_db_1_8981f3edc9d9            ... done
Starting openbalena_api_1_b946d63f2941           ... done
Starting openbalena_vpn_1_4f32c78d74e5           ... done
Starting openbalena_registry_1_75517ae0d16c      ... done
Starting openbalena_haproxy_1_ea78669dc5c2       ... done
Attaching to openbalena_s3_1_afe1b57bcb92, openbalena_cert-provider_1_453dfda47d64, openbalena_db_1_8981f3edc9d9, openbalena_redis_1_918c910c01d3, openbalena_api_1_b946d63f2941, openbalena_vpn_1_4f32c78d74e5, openbalena_registry_1_75517ae0d16c, openbalena_haproxy_1_ea78669dc5c2
s3_1_afe1b57bcb92 | Systemd init system enabled.
cert-provider_1_453dfda47d64 | [Error] ACTIVE variable is not enabled. Value should be "true" or "yes" to continue.
cert-provider_1_453dfda47d64 | [Error] Unable to continue due to misconfiguration. See errors above. [Stopping]
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.093 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.093 UTC [1] LOG:  listening on IPv6 address "::", port 5432
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.105 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.149 UTC [21] LOG:  database system was shut down at 2019-05-09 18:28:31 UTC
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.195 UTC [1] LOG:  database system is ready to accept connections
redis_1_918c910c01d3 | 1:C 09 May 2019 18:29:00.580 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1_918c910c01d3 | 1:C 09 May 2019 18:29:00.581 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1_918c910c01d3 | 1:C 09 May 2019 18:29:00.581 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 * Running mode=standalone, port=6379.
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 # Server initialized
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 * DB loaded from disk: 0.001 seconds
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 * Ready to accept connections
api_1_b946d63f2941 | Systemd init system enabled.
vpn_1_4f32c78d74e5 | Systemd init system enabled.
registry_1_75517ae0d16c | Systemd init system enabled.
haproxy_1_ea78669dc5c2 | Building certificate from environment variables...
haproxy_1_ea78669dc5c2 | Setting up watches.  Beware: since -r was given, this may take a while!
haproxy_1_ea78669dc5c2 | Watches established.
haproxy_1_ea78669dc5c2 | [NOTICE] 128/182903 (15) : New worker #1 (17) forked
haproxy_1_ea78669dc5c2 | [WARNING] 128/182903 (17) : Server backend_api/resin_api_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/182903 (17) : backend 'backend_api' has no server available!
haproxy_1_ea78669dc5c2 | [WARNING] 128/182903 (17) : Server backend_registry/resin_registry_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/182903 (17) : backend 'backend_registry' has no server available!
haproxy_1_ea78669dc5c2 | [WARNING] 128/182904 (17) : Server backend_vpn/resin_vpn_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/182904 (17) : backend 'backend_vpn' has no server available!
haproxy_1_ea78669dc5c2 | [WARNING] 128/182905 (17) : Server vpn-tunnel/balena_vpn is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/182905 (17) : proxy 'vpn-tunnel' has no server available!
db_1_8981f3edc9d9 | 2019-05-09 18:29:07.184 UTC [28] ERROR:  relation "uniq_model_model_type_vocab" already exists
db_1_8981f3edc9d9 | 2019-05-09 18:29:07.184 UTC [28] STATEMENT:  CREATE UNIQUE INDEX "uniq_model_model_type_vocab" ON "model" ("is of-vocabulary", "model type");
haproxy_1_ea78669dc5c2 | [WARNING] 128/182907 (17) : Server backend_registry/resin_registry_1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
haproxy_1_ea78669dc5c2 | [WARNING] 128/182908 (17) : Server backend_vpn/resin_vpn_1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
haproxy_1_ea78669dc5c2 | [WARNING] 128/182911 (17) : Server vpn-tunnel/balena_vpn is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
haproxy_1_ea78669dc5c2 | [WARNING] 128/182913 (17) : Server backend_api/resin_api_1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.

Note: I faced the same issue on my local machine, so I switched to a server with the configuration given in the balena documentation and a public IP for attaching a CNAME record.

For ./scripts/compose exec -it <API_CONTAINER_ID> journalctl --no-pager

=> docker ps
CONTAINER ID        IMAGE                                COMMAND                  CREATED             STATUS              PORTS                                                                                           NAMES
82486b048bf9        openbalena_haproxy                   "/docker-entrypoint.…"   6 hours ago         Up 3 minutes        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 222/tcp, 5432/tcp, 0.0.0.0:3128->3128/tcp, 6379/tcp   openbalena_haproxy_1_ea78669dc5c2
45d7128d08c3        balena/open-balena-vpn:v8.10.0       "/usr/bin/entry.sh"      6 hours ago         Up 3 minutes        80/tcp, 443/tcp, 3128/tcp                                                                       openbalena_vpn_1_4f32c78d74e5
cceb47f57e88        balena/open-balena-registry:v2.5.0   "/usr/bin/entry.sh"      6 hours ago         Up 3 minutes        80/tcp                                                                                          openbalena_registry_1_75517ae0d16c
35ac2229da82        balena/open-balena-api:v0.11.8       "/usr/bin/entry.sh"      6 hours ago         Up 4 minutes        80/tcp                                                                                          openbalena_api_1_b946d63f2941
29ce31519995        openbalena_cert-provider             "/entry.sh /usr/src/…"   6 hours ago         Up 4 minutes        80/tcp                                                                                          openbalena_cert-provider_1_453dfda47d64
c901d19e3cef        balena/open-balena-db:v2.0.3         "docker-entrypoint.s…"   7 hours ago         Up 4 minutes        5432/tcp                                                                                        openbalena_db_1_8981f3edc9d9
fdad5939c64e        redis:alpine                         "docker-entrypoint.s…"   7 hours ago         Up 4 minutes        6379/tcp                                                                                        openbalena_redis_1_918c910c01d3
d7b55d1b30c6        balena/open-balena-s3:v2.6.2         "/usr/bin/entry.sh"      7 hours ago         Up 4 minutes        80/tcp                                                                                          openbalena_s3_1_afe1b57bcb92

I am getting the following logs:

=> /home/ubuntu/open-balena# ./scripts/compose exec -it 35ac2229da82 journalctl --no-pager
Execute a command in a running container

Usage: exec [options] [-e KEY=VAL...] SERVICE COMMAND [ARGS...]

Options:
    -d, --detach      Detached mode: Run command in the background.
    --privileged      Give extended privileges to the process.
    -u, --user USER   Run the command as this user.
    -T                Disable pseudo-tty allocation. By default `docker-compose exec`
                      allocates a TTY.
    --index=index     index of the container if there are multiple
                      instances of a service [default: 1]
    -e, --env KEY=VAL Set environment variables (can be used multiple times,
                      not supported in API < 1.25)
    -w, --workdir DIR Path to workdir directory for this command.

Apologies, the correct command is

./scripts/compose exec api journalctl --no-pager

The log is too big to share, but I noticed that the following lines get printed continuously:

May 10 13:51:51 35ac2229da82 api[862]: SET "last heartbeat" = $1
May 10 13:51:51 35ac2229da82 api[862]: WHERE "service instance"."id" = $2 [ 2019-05-10T13:51:51.813Z, 5 ]
May 10 13:52:01 35ac2229da82 api[862]: Parsing PATCH /resin/service_instance(5)
May 10 13:52:01 35ac2229da82 api[862]: Running PATCH /resin/service_instance(5)
May 10 13:52:01 35ac2229da82 api[862]: UPDATE "service instance"
May 10 13:52:01 35ac2229da82 api[862]: SET "last heartbeat" = $1
May 10 13:52:01 35ac2229da82 api[862]: WHERE "service instance"."id" = $2 [ 2019-05-10T13:52:01.827Z, 5 ]

If you want I can upload the log as a file.

The part that you provided doesn't indicate that anything is wrong.
In order to troubleshoot this further, could you paste a bigger portion of your logs to a service like Pastebin and share it with us?

I have uploaded the log file; please check it.

logfile.log (3.8 MB)

Hello, the logs look normal to me and indicate that the API and VPN services are up and running. It's not clear to me from the conversation what issue you're facing; can you please clarify?

I have the following questions:

  1. I have configured openBalena on a server which has a public IP. How do I register the CNAME records? Can I instead register the DNS as an A record pointing at the server's public IP (for example, for test.domain.com)?
  2. How can I verify that my server is running properly? If I hit the IP address from my web browser, I get no response.
  3. Does openBalena support the BeagleBone?

All the logs have been added earlier in this thread.

Thanks

Hi,

You can configure CNAME records or A records, whichever you prefer. What's important is that each domain resolves to the correct IP.
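
As an illustrative sketch only, the subdomains mentioned earlier in this thread (api, registry, vpn, s3) could each be published as a record pointing at the same host. The IP 203.0.113.10 and the domain test.domain.com are placeholders, not values from this setup:

```shell
# Placeholder values -- substitute your real domain and server IP.
# Each openBalena subdomain gets an A record pointing at the same host:
#
#   api.test.domain.com       A      203.0.113.10
#   registry.test.domain.com  A      203.0.113.10
#   vpn.test.domain.com       A      203.0.113.10
#   s3.test.domain.com        A      203.0.113.10
#
# Or, equivalently, CNAMEs to a name that already resolves to that IP:
#
#   api.test.domain.com       CNAME  server.test.domain.com.
```

Either form works as long as each name ends up resolving to the server's public IP.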

You can verify that the API is up and running by hitting the ping endpoint: curl https://api.<YOUR_DOMAIN>/ping should respond with 'OK'. If you get an SSL error, you need to add the generated CA certificate to your trust store. Just for testing, you can add the -k option to the curl command, which skips certificate validation. But once you start using the instance, you should add the CA certificate; otherwise MITM attacks are possible.

Yes, openBalena also supports the BeagleBone.

The logs above look good, so I guess it should work fine.

Cheers

This is the reply I get when I send a curl request:

root@machine:/home/root# curl https://api.localhost/ping -k
curl: (35) OpenSSL SSL_connect: SSL_ERROR_SYSCALL in connection to api.localhost:443

Did you set the domain in the configuration to localhost? In the software stack we have a reverse proxy (HAProxy) which uses SNI to find the correct backend service. So if you specified domain.com as the domain during the configuration, you have to use api.domain.com.

I specified test.domain.com as the domain during the configuration.

When I send a curl request for that, I get the following error:

root@machine:/home/root# curl http://test.domain.com/ping
curl: (6) Could not resolve host: api.test.domain.com

Yes, that's because you don't control the domain domain.com. For testing you can add api.test.domain.com to your hosts file. For example, on Linux you can add the line 127.0.0.1 api.test.domain.com to the file /etc/hosts. This tells the machine on which /etc/hosts lives to resolve the domain api.test.domain.com to the IP address 127.0.0.1, the loopback address.
But in general, if you want to reach this instance from the internet, you need to set up a domain that you control and create DNS entries accordingly.
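
As a concrete sketch of that hosts-file override (203.0.113.10 is a placeholder; use your server's real public IP, or 127.0.0.1 when testing on the server itself):

```shell
# Testing only: point the API subdomain at the server without waiting for DNS.
echo '203.0.113.10  api.test.domain.com' | sudo tee -a /etc/hosts

# The ping endpoint should now answer. Use -k only while testing,
# since it skips certificate validation:
curl -k https://api.test.domain.com/ping
```

Remember to remove the line from /etc/hosts once the real DNS records are in place, or you may mask a DNS misconfiguration.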

My real domain is different. I have just added test.domain.com as an example here in the forum.

During configuration I used my real domain.

Ah ok.
The underlying problem is that your computer could not resolve the domain name api.<whatever_your_domain_is> to any IP. Did you set A records or CNAME records? You can test this using the dig command. dig api.test.domain.com, if you did set the records, recently they might not have propagated yet, but you can still overwrite them in the hosts file for testing.

I have set the A record, and I also verified the configuration using dig api.test.domain.com.

When I try balena login I get the following log:

➜ balena login
 _            _
| |__   __ _ | |  ____  _ __    __ _
| '_ \ / _` || | / __ \| '_ \  / _` |
| |_) | (_) || ||  ___/| | | || (_) |
|_.__/ \__,_||_| \____/|_| |_| \__,_|


Logging in to test.domain.com
? How would you like to login? Credentials
? Email: user@domain.com
? Password: [hidden]
ENOTFOUND: request to https://api.test.domain.com/login_ failed, reason: getaddrinfo ENOTFOUND api.test.domain.com api.test.domain.com:443

Additional information may be available by setting a DEBUG=1 environment
variable: "set DEBUG=1" on a Windows command prompt, or "export DEBUG=1"
on Linux or macOS.

If you need help, don't hesitate in contacting our support forums at
https://forums.balena.io

For bug reports or feature requests, have a look at the GitHub issues or
create a new one at: https://github.com/balena-io/balena-cli/issues/

After setting DEBUG=1 I get the following logs:

FetchError: request to https://api.test.domain.com/login_ failed, reason: getaddrinfo ENOTFOUND api.test.domain.com api.test.domain.com:443
    at ClientRequest.<anonymous> (/snapshot/balena-cli/node_modules/node-fetch/index.js:133:11)
    at emitOne (events.js:96:13)
    at ClientRequest.emit (events.js:188:7)
    at ClientRequest.emit (/snapshot/balena-cli/node_modules/raven/lib/instrumentation/http.js:51:23)
    at TLSSocket.socketErrorListener (_http_client.js:310:9)
    at emitOne (events.js:96:13)
    at TLSSocket.emit (events.js:188:7)
    at connectErrorNT (net.js:1025:8)
    at _combinedTickCallback (internal/process/next_tick.js:80:11)
    at process._tickDomainCallback (internal/process/next_tick.js:128:9)

Ok, so if your computer can resolve api.test.domain.com to an IP, you should not get this error with curl anymore:

root@machine:/home/root# curl http://test.domain.com/ping
curl: (6) Could not resolve host: api.test.domain.com