I have done the following configuration to set up the server:
apt-get update && apt-get install -y build-essential git
adduser balena
usermod -aG sudo balena
apt-get install docker.io
usermod -aG docker balena
curl -L https://github.com/docker/compose/releases/download/1.23.0-rc3/docker-compose-Linux-x86_64 -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
apt-get install libssl-dev
apt-get install nodejs
apt-get install npm
git clone https://github.com/balena-io/open-balena.git ~/open-balena
./scripts/quickstart -U <email@address> -P <password> -d test.mydomain.com
./scripts/compose up -d
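After bringing the stack up, I do a quick sanity check against the API to confirm DNS and the generated certificates are wired up. This is only a sketch: the /ping endpoint and the CA path under ~/open-balena/config are assumptions from my own setup.

```shell
# Probe the openBalena API over HTTPS using the generated root CA.
# The fallback message keeps the check from aborting while the stack
# is still unreachable.
CA="$HOME/open-balena/config/certs/root/ca.crt"
curl -fsS --cacert "$CA" "https://api.test.mydomain.com/ping" \
  || echo "API not reachable yet"
```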
Next, I downloaded the CA certificate from the server to my local machine and instructed balena-cli on the client side to use it:
export NODE_EXTRA_CA_CERTS=~/open-balena/config/certs/root/ca.crt
I also installed the certificate system-wide (macOS):
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain ~/ca.crt
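Before trusting the copied file system-wide, I verify on the client that it really is the root CA and wasn't mangled in transit. A minimal sketch, assuming the cert was copied to ~/ca.crt:

```shell
# Print subject, issuer and validity window of a PEM certificate, so the
# copied file can be compared against the CA generated on the server.
check_ca() {
  openssl x509 -in "$1" -noout -subject -issuer -dates
}

# ~/ca.crt is where I copied the root CA from the server.
if [ -f "$HOME/ca.crt" ]; then
  check_ca "$HOME/ca.crt"
fi
```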
Then I started the services:
=> ./scripts/compose up
Starting openbalena_cert-provider_1_453dfda47d64 ... done
Starting openbalena_redis_1_918c910c01d3 ... done
Starting openbalena_s3_1_afe1b57bcb92 ... done
Starting openbalena_db_1_8981f3edc9d9 ... done
Starting openbalena_api_1_b946d63f2941 ... done
Starting openbalena_vpn_1_4f32c78d74e5 ... done
Starting openbalena_registry_1_75517ae0d16c ... done
Starting openbalena_haproxy_1_ea78669dc5c2 ... done
Attaching to openbalena_s3_1_afe1b57bcb92, openbalena_cert-provider_1_453dfda47d64, openbalena_db_1_8981f3edc9d9, openbalena_redis_1_918c910c01d3, openbalena_api_1_b946d63f2941, openbalena_vpn_1_4f32c78d74e5, openbalena_registry_1_75517ae0d16c, openbalena_haproxy_1_ea78669dc5c2
s3_1_afe1b57bcb92 | Systemd init system enabled.
cert-provider_1_453dfda47d64 | [Error] ACTIVE variable is not enabled. Value should be "true" or "yes" to continue.
cert-provider_1_453dfda47d64 | [Error] Unable to continue due to misconfiguration. See errors above. [Stopping]
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.093 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.093 UTC [1] LOG: listening on IPv6 address "::", port 5432
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.105 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.149 UTC [21] LOG: database system was shut down at 2019-05-09 18:28:31 UTC
db_1_8981f3edc9d9 | 2019-05-09 18:29:00.195 UTC [1] LOG: database system is ready to accept connections
redis_1_918c910c01d3 | 1:C 09 May 2019 18:29:00.580 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
redis_1_918c910c01d3 | 1:C 09 May 2019 18:29:00.581 # Redis version=5.0.4, bits=64, commit=00000000, modified=0, pid=1, just started
redis_1_918c910c01d3 | 1:C 09 May 2019 18:29:00.581 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 * Running mode=standalone, port=6379.
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 # Server initialized
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 * DB loaded from disk: 0.001 seconds
redis_1_918c910c01d3 | 1:M 09 May 2019 18:29:00.585 * Ready to accept connections
api_1_b946d63f2941 | Systemd init system enabled.
vpn_1_4f32c78d74e5 | Systemd init system enabled.
registry_1_75517ae0d16c | Systemd init system enabled.
haproxy_1_ea78669dc5c2 | Building certificate from environment variables...
haproxy_1_ea78669dc5c2 | Setting up watches. Beware: since -r was given, this may take a while!
haproxy_1_ea78669dc5c2 | Watches established.
haproxy_1_ea78669dc5c2 | [NOTICE] 128/182903 (15) : New worker #1 (17) forked
haproxy_1_ea78669dc5c2 | [WARNING] 128/182903 (17) : Server backend_api/resin_api_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/182903 (17) : backend 'backend_api' has no server available!
haproxy_1_ea78669dc5c2 | [WARNING] 128/182903 (17) : Server backend_registry/resin_registry_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/182903 (17) : backend 'backend_registry' has no server available!
haproxy_1_ea78669dc5c2 | [WARNING] 128/182904 (17) : Server backend_vpn/resin_vpn_1 is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/182904 (17) : backend 'backend_vpn' has no server available!
haproxy_1_ea78669dc5c2 | [WARNING] 128/182905 (17) : Server vpn-tunnel/balena_vpn is DOWN, reason: Layer4 connection problem, info: "Connection refused", check duration: 0ms. 0 active and 0 backup servers left. 0 sessions active, 0 requeued, 0 remaining in queue.
haproxy_1_ea78669dc5c2 | [ALERT] 128/182905 (17) : proxy 'vpn-tunnel' has no server available!
db_1_8981f3edc9d9 | 2019-05-09 18:29:07.184 UTC [28] ERROR: relation "uniq_model_model_type_vocab" already exists
db_1_8981f3edc9d9 | 2019-05-09 18:29:07.184 UTC [28] STATEMENT: CREATE UNIQUE INDEX "uniq_model_model_type_vocab" ON "model" ("is of-vocabulary", "model type");
haproxy_1_ea78669dc5c2 | [WARNING] 128/182907 (17) : Server backend_registry/resin_registry_1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
haproxy_1_ea78669dc5c2 | [WARNING] 128/182908 (17) : Server backend_vpn/resin_vpn_1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
haproxy_1_ea78669dc5c2 | [WARNING] 128/182911 (17) : Server vpn-tunnel/balena_vpn is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
haproxy_1_ea78669dc5c2 | [WARNING] 128/182913 (17) : Server backend_api/resin_api_1 is UP, reason: Layer4 check passed, check duration: 0ms. 1 active and 0 backup servers online. 0 sessions requeued, 0 total in queue.
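As an aside, the redis warnings earlier in the log are host-level kernel tunables rather than openBalena problems. Persisting the two settings the log itself suggests would look like this (a sketch of /etc/sysctl.conf additions):

```
# /etc/sysctl.conf — values suggested by the redis startup warnings
net.core.somaxconn = 511
vm.overcommit_memory = 1
```

The THP fix the log mentions (echo never > /sys/kernel/mm/transparent_hugepage/enabled) is not a sysctl and has to be re-applied on boot, e.g. from /etc/rc.local, as the warning says.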
Note: I faced the same issue on my local machine, so I switched to a server with the configuration given in the balena documentation and a public IP for attaching a CNAME record.
To run ./scripts/compose exec -it <API_CONTAINER_ID> journalctl --no-pager, I first looked up the container ID:
=> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
82486b048bf9 openbalena_haproxy "/docker-entrypoint.…" 6 hours ago Up 3 minutes 0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp, 222/tcp, 5432/tcp, 0.0.0.0:3128->3128/tcp, 6379/tcp openbalena_haproxy_1_ea78669dc5c2
45d7128d08c3 balena/open-balena-vpn:v8.10.0 "/usr/bin/entry.sh" 6 hours ago Up 3 minutes 80/tcp, 443/tcp, 3128/tcp openbalena_vpn_1_4f32c78d74e5
cceb47f57e88 balena/open-balena-registry:v2.5.0 "/usr/bin/entry.sh" 6 hours ago Up 3 minutes 80/tcp openbalena_registry_1_75517ae0d16c
35ac2229da82 balena/open-balena-api:v0.11.8 "/usr/bin/entry.sh" 6 hours ago Up 4 minutes 80/tcp openbalena_api_1_b946d63f2941
29ce31519995 openbalena_cert-provider "/entry.sh /usr/src/…" 6 hours ago Up 4 minutes 80/tcp openbalena_cert-provider_1_453dfda47d64
c901d19e3cef balena/open-balena-db:v2.0.3 "docker-entrypoint.s…" 7 hours ago Up 4 minutes 5432/tcp openbalena_db_1_8981f3edc9d9
fdad5939c64e redis:alpine "docker-entrypoint.s…" 7 hours ago Up 4 minutes 6379/tcp openbalena_redis_1_918c910c01d3
d7b55d1b30c6 balena/open-balena-s3:v2.6.2 "/usr/bin/entry.sh" 7 hours ago Up 4 minutes 80/tcp openbalena_s3_1_afe1b57bcb92
Instead of logs, I get the following output:
=> /home/ubuntu/open-balena# ./scripts/compose exec -it 35ac2229da82 journalctl --no-pager
Execute a command in a running container
Usage: exec [options] [-e KEY=VAL...] SERVICE COMMAND [ARGS...]
Options:
-d, --detach Detached mode: Run command in the background.
--privileged Give extended privileges to the process.
-u, --user USER Run the command as this user.
-T Disable pseudo-tty allocation. By default `docker-compose exec`
allocates a TTY.
--index=index index of the container if there are multiple
instances of a service [default: 1]
-e, --env KEY=VAL Set environment variables (can be used multiple times,
not supported in API < 1.25)
-w, --workdir DIR Path to workdir directory for this command.
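For what it's worth, the usage text seems to be triggered by the flags: docker-compose's exec allocates a TTY by default (there are no -i/-t options, only -T to disable the TTY) and expects the service name from the compose file rather than a container ID. Something like the following should work instead; the service name api is my assumption from the compose project:

```
./scripts/compose exec api journalctl --no-pager

# or, with plain docker and the container ID from `docker ps`:
docker exec -it 35ac2229da82 journalctl --no-pager
```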