Is there anything to do other than putting the INITSYSTEM env var anywhere in your Dockerfile?
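For context, the INITSYSTEM part of my Dockerfile is nothing more than this (a minimal sketch; the base image name is only an example):
# Resin base image for the device, with systemd available
FROM resin/raspberrypi3-debian:jessie
# Ask the resin base image to run systemd as the init system inside the container
ENV INITSYSTEM on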
I’m totally stuck trying to get the TICK stack managed by systemd, and I’m not sure whether it’s Resin-related or just some systemd mystery:
UPDATE: maybe I should also say that I haven’t restarted the container at all; I just expect “systemctl start” to work right after “systemctl enable”. Am I wrong?
UPDATE 2: so that you don’t have to do the install to see the telegraf.service unit, here it is. Maybe using a non-root user is a problem inside a container? (I have double-checked that the “ExecStart” command line can run as the “telegraf” user; see the check commands after the unit file below.)
[Unit]
Description=The plugin-driven server agent for reporting metrics into InfluxDB
Documentation=https://github.com/influxdata/telegraf
After=network.target
[Service]
EnvironmentFile=-/etc/default/telegraf
User=telegraf
ExecStart=/usr/bin/telegraf -config /etc/telegraf/telegraf.conf -config-directory /etc/telegraf/telegraf.d ${TELEGRAF_OPTS}
ExecReload=/bin/kill -HUP $MAINPID
Restart=on-failure
KillMode=control-group
[Install]
WantedBy=multi-user.target
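In case it helps, these are the kinds of commands I have been using to see what systemd thinks of the unit (a sketch; the last one only rules out a permission problem by running the ExecStart command once as the service user):
# Show whether the unit is loaded, enabled and running, plus its last log lines
systemctl status telegraf
# Follow the unit's journal to see why a start failed
journalctl -u telegraf -f
# Run a one-shot collection as the "telegraf" user to rule out permission issues
su -s /bin/sh -c "/usr/bin/telegraf -config /etc/telegraf/telegraf.conf -test" telegraf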
Today, after a full restart (which forced me to redo some configuration, since not everything is in my Dockerfile yet), systemd does start my telegraf process when I run systemctl start telegraf.
I noticed something that is Resin-related though: since systemd can’t be detected at image build time, the Telegraf Debian package installs init.d scripts, which get mixed up with the telegraf.service instructions, so I prefer to remove these init.d scripts.
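In the Dockerfile, that removal is just something along these lines (a sketch; it runs right after the package install):
# Drop the init.d script installed by the Debian package so only the systemd unit is left
RUN update-rc.d -f telegraf remove && rm -f /etc/init.d/telegraf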
I may post my final Dockerfile here since there is nothing confidential.
UPDATE: here is the relevant part of my setup (working now):
#!/bin/sh
# Possibly needed so systemd takes the new telegraf.service into account
systemctl daemon-reload
# Enable telegraf service boot loading
systemctl enable telegraf
# Start telegraf service
systemctl start telegraf
# To prevent Docker from exiting
journalctl -f
I think I’ll copy your systemd setup. I’ve extracted the Telegraf stuff I’m doing in another project into a small repo; there may be things you can copy from there. Most of it’s pretty standard, but I added some exec scripts to monitor some resin services on the device, see here: https://github.com/craig-mulligan/resin-telegraf/tree/master/scripts/resin-services.
I also have a backend service that polls the resin API and logs device status to InfluxDB, if you’re interested in that.
OK, please take some more setup then: here’s the full Dockerfile.template + start.sh for a TIC install (no Kapacitor yet).
I’m not copying my conf files here, but they’re just copied and adapted from the default conf, and for Telegraf I’ve split the configuration into one file per plugin in the telegraf.d directory for easier maintenance.
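The Dockerfile.template boils down to this (a condensed sketch; the base image, package repository setup and conf paths are simplified here):
# Resin base image; %%RESIN_MACHINE_NAME%% is substituted by resin at build time
FROM resin/%%RESIN_MACHINE_NAME%%-debian:jessie
# Run systemd as the container's init system
ENV INITSYSTEM on
# Install the TIC packages (InfluxData repository setup omitted here)
RUN apt-get update && apt-get install -y telegraf influxdb chronograf && rm -rf /var/lib/apt/lists/*
# The systemd units replace the packaged init.d scripts
RUN update-rc.d -f telegraf remove && rm -f /etc/init.d/telegraf
# Copy the adapted configuration, one Telegraf file per plugin in telegraf.d
COPY conf/telegraf.conf /etc/telegraf/telegraf.conf
COPY conf/telegraf.d/ /etc/telegraf/telegraf.d/
COPY conf/influxdb.conf /etc/influxdb/influxdb.conf
# start.sh enables and starts everything, then tails the journal
COPY start.sh /start.sh
CMD ["bash", "/start.sh"]
And here is start.sh: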
#!/bin/sh
# Make and chown influxdb data directory
mkdir -p /data/influxdb
chown influxdb:influxdb /data/influxdb
# Make and chown chronograf data directory
mkdir -p /data/chronograf
chown chronograf:chronograf /data/chronograf
# Possibly needed so systemd takes the new telegraf.service into account
systemctl daemon-reload
# Enable Telegraf service boot loading
systemctl enable telegraf
# Start Telegraf service
systemctl start telegraf
# Enable InfluxDB service boot loading
systemctl enable influxdb
# Start InfluxDB service
systemctl start influxdb
# Enable Chronograf service boot loading
systemctl enable chronograf
# Start Chronograf service
systemctl start chronograf
# To prevent Docker from exiting
journalctl -f
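The adapted influxdb.conf mostly just points the storage paths at the persistent /data partition, along these lines (a sketch; standard InfluxDB 1.x keys, everything else left at defaults):
[meta]
  dir = "/data/influxdb/meta"
[data]
  dir = "/data/influxdb/data"
  wal-dir = "/data/influxdb/wal"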
I think it’s better practice to only run Telegraf on the device and then post the data to a server running the rest of the stack. Running InfluxDB remotely means you can keep all devices/“hosts” in a single time series database.
Sure, we have a remote InfluxDB too, and some other databases (our Telegraf uses MQTT as an output to keep it decoupled from our remote backend, so we can add any subscribers to our broker server-side at any moment).
We had to keep local storage in case of disconnections, and in case some data can’t legally be uploaded while we still need to be able to provide a full local service.
I’ve just done it, and MQTT is really simple to set up.
As a subscriber I use Telegraf too (MQTT input, InfluxDB output), so data flows really easily from my MQTT broker to my central InfluxDB. I may use more complex subscribers if I want to load data into a DB like Warp10 with a different format.
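Concretely, the plumbing is just a couple of Telegraf config snippets like these (a sketch; the broker address, topics and database name are only examples):
# Device side: publish collected metrics to the MQTT broker in line protocol
[[outputs.mqtt]]
  servers = ["tcp://broker.example.com:1883"]
  topic_prefix = "telegraf"
  data_format = "influx"
# Server side: subscribe to the broker and write into the central InfluxDB
[[inputs.mqtt_consumer]]
  servers = ["tcp://broker.example.com:1883"]
  topics = ["telegraf/#"]
  data_format = "influx"
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"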
For the device, I think you are right though: it’s at 50-60% memory usage with only the supervision tools installed, so I may remove Chronograf and Kapacitor and keep only Telegraf and a local InfluxDB.