Hi balena community,
I have been having a debate in my head over the pros and cons of NGINX versus Caddy (https://caddyserver.com/). And where better to flesh out those thoughts than in the open!
If you’re not familiar with Caddy, it is a production-ready web server that can be used as an alternative to NGINX and has multi-arch Docker images maintained by the Caddy team. Here it is in action in a balena project by @alanb128: GitHub - alanb128/landr-buddy: Easily upload a website and test with a public URL.
And here are some key comparisons between NGINX and Caddy:
| | Caddy | NGINX |
|---|---|---|
| Compressed Docker image size | 14.8 MB | 9.51 MB |
| Language | Go | C |
| Automatic HTTPS | Yes | No |
- Market share: Comparing the best web servers: Caddy, Apache, and Nginx - LogRocket Blog
- Performance: centminmod-caddy-v2/readme.md at master · centminmod/centminmod-caddy-v2 · GitHub

HTTP/2 HTTPS benchmarks:
| server | h2load HTTP/2 | requests/s | ttfb min | ttfb avg | ttfb max | cipher | protocol | successful req | failed req |
|---|---|---|---|---|---|---|---|---|---|
| caddy v2 | t1 c150 n1000 m50 | 959.57 | 213.30ms | 696.74ms | 1.03s | ECDHE-ECDSA-AES256-GCM-SHA384 | h2 TLSv1.2 | 100% | 0% |
| caddy v2 | t1 c500 n2000 m100 | 990.03 | 711.60ms | 1.36s | 1.98s | ECDHE-ECDSA-AES256-GCM-SHA384 | h2 TLSv1.2 | 100% | 0% |
| caddy v2 | t1 c1000 n10000 m100 | 1049.00 | 965.65ms | 3.34s | 6.53s | ECDHE-ECDSA-AES256-GCM-SHA384 | h2 TLSv1.2 | 68.89% | 31.11% |
| nginx 1.17.10 | t1 c150 n1000 m50 | 2224.74 | 158.04ms | 300.22ms | 440.22ms | ECDHE-ECDSA-AES128-GCM-SHA256 | h2 TLSv1.2 | 100% | 0% |
| nginx 1.17.10 | t1 c500 n2000 m100 | 1600.52 | 583.80ms | 861.70ms | 1.23s | ECDHE-ECDSA-AES128-GCM-SHA256 | h2 TLSv1.2 | 100% | 0% |
| nginx 1.17.10 | t1 c1000 n10000 m100 | 1912.05 | 949.61ms | 2.98s | 5.16s | ECDHE-ECDSA-AES128-GCM-SHA256 | h2 TLSv1.2 | 100% | 0% |
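(For reference, the t/c/n/m values in the second column are h2load flags: threads, clients, total requests, and max concurrent streams per client. The first Caddy row would correspond to an invocation along these lines — the hostname here is a placeholder:)

```
# -t threads, -c clients, -n total requests, -m max concurrent streams per client
h2load -t1 -c150 -n1000 -m50 https://your-server.example/
```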
Looks like a no-brainer, right? NGINX performs better, has greater market share, ships a smaller Docker image, and is written in C rather than Go, which can bring performance benefits.
But then comes the question of how much this really matters for IoT. Market share is a somewhat useful statistic, but Caddy is already well known and respected for its stability and quality. The 5 MB difference in compressed image size I will choose not to lose any sleep over. The big one is performance, but when running on IoT devices it seems far more likely that we will hit other bottlenecks before the NGINX vs Caddy comparison comes into play: network speeds, SD card write speeds, processing power, the number of cores on the device and so forth. And with low traffic, will we notice any difference at all? NGINX is designed for scaling large web services, and I suspect these performance differences will be relatively insignificant on IoT devices.
Then comes the big plus side for Caddy: the ease of configuration. Here is the default NGINX config file from their Docker image:
```nginx
user nginx;
worker_processes auto;

error_log /var/log/nginx/error.log notice;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    #tcp_nopush on;

    keepalive_timeout 65;

    #gzip on;

    include /etc/nginx/conf.d/*.conf;
}
```
Trust me when I say this is just the beginning of a slippery slope. When you start adding in reverse proxies, encounter the differences between URLs with a trailing `/` and those without, begin using `~` and `^~` location modifiers, compression, `root` vs `alias` and the many other NGINX caveats, things quickly get complicated. Here is one of my NGINX config files to serve three routes (`/`, `/storage`, `/website`) and a reverse proxy on `/dev-server`:
```nginx
user nginx;
worker_processes auto;
error_log stderr warn;
pid /run/nginx.pid;

# Set worker_connections to an appropriate level for low-resource hardware
events {
    worker_connections 256;
}

http {
    include mime.types;
    default_type application/octet-stream;

    # Switch off access logs and route error logs to the Docker console
    access_log off;
    error_log /proc/1/fd/2 notice;

    keepalive_timeout 75;

    # Write temporary files to /tmp so they can be created as a non-privileged user, and to avoid SD writes
    client_body_temp_path /tmp/client_temp;
    proxy_temp_path /tmp/proxy_temp_path;
    fastcgi_temp_path /tmp/fastcgi_temp;
    uwsgi_temp_path /tmp/uwsgi_temp;
    scgi_temp_path /tmp/scgi_temp;

    # Specify Docker resolver
    resolver 127.0.0.11;

    # Prevent the port being added to the end of the URL on redirects
    port_in_redirect off;

    # Default server definition
    server {
        listen [::]:8081 default_server;
        listen 8081 default_server;
        server_name _;

        # Allow CORS
        add_header Access-Control-Allow-Origin *;

        sendfile on;
        root /app/public/interface;
        index index.html;

        # Set to allow large file uploads in File Manager
        client_max_body_size 0;

        location / {
            # First attempt to serve the request as a file, then
            # as a directory, then fall back to index.html
            try_files $uri $uri/ /index.html?q=$uri&$args /index.html;
        }

        # Redirect for the root storage volume
        location ^~ /storage {
            root /app/public;
        }

        # Redirect for the 'website' feature
        location ^~ /website {
            root /app/public/storage;
        }

        # Development server
        location ~ /dev-server {
            set $interface http://0.0.0.0:8082;
            proxy_pass $interface;
            proxy_redirect off;
            proxy_http_version 1.1;
            proxy_set_header X-Real-Ip $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "Upgrade";
            proxy_read_timeout 86400;

            # Disable caching for the dev environment
            add_header 'Cache-Control' 'no-store, no-cache, must-revalidate, proxy-revalidate, max-age=0';
            expires off;
            etag off;
            proxy_no_cache 1;
        }

        # Redirect server error pages to the static page /404.html
        error_page 404 500 502 503 504 /404;
        error_page 401 /401;

        # Cache these file types for 5 days to save bandwidth on each load
        location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
            expires 5d;
        }

        # Deny access to . files, for security
        location ~ /\. {
            log_not_found off;
            deny all;
        }
    }

    # Light compression for page load speed on larger files
    gzip on;
    gzip_comp_level 1;
    gzip_min_length 1000;
    gzip_proxied any;
    gzip_types text/plain application/xml text/css text/js text/xml application/x-javascript text/javascript application/json application/xml+rss;
    gzip_vary on;
    gzip_disable "msie6";
}
```
Now let’s take a look at Caddy config file entry points, starting with a basic server:
```
:80 {
    # Set this path to your site's directory.
    root * /usr/share/caddy

    # Enable the static file server.
    file_server
}
```
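(As an aside, running this under Docker or balena is simple too. A minimal Dockerfile sketch based on the official multi-arch `caddy` image — the tag and file paths here are my assumptions, not from a specific project:)

```
# Official multi-arch Caddy image; pin a specific version in production
FROM caddy:2-alpine

# The image reads its config from /etc/caddy/Caddyfile by default
COPY Caddyfile /etc/caddy/Caddyfile

# Static site content served by file_server
COPY site/ /usr/share/caddy/
```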
Or a load balanced reverse proxy:
```
example.com  # Your site's domain name

# Load balance between three backends with custom health checks
reverse_proxy 10.0.0.1:9000 10.0.0.2:9000 10.0.0.3:9000 {
    lb_policy random_choose 2
    health_uri /ok
    health_interval 10s
}
```
Or an HTTPS site with reverse proxying and compression:
```
example.com

# Compress responses according to Accept-Encoding headers
encode gzip zstd

# Make the HTML file extension optional
try_files {path}.html {path}

# Send API requests to the backend
reverse_proxy /api/* localhost:9005

# Serve everything else from the file system
file_server
```
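And to bring it back to my config above: here is an untested sketch of roughly the same routes as a Caddyfile. The port, paths and dev-server upstream are copied from my NGINX config; the websocket `Upgrade` headers are handled by `reverse_proxy` automatically, and I haven't verified that the `/storage` and `/website` root mappings behave identically to the NGINX `^~` locations:

```
:8081 {
    header Access-Control-Allow-Origin *
    encode gzip

    # Development server reverse proxy (websocket upgrades handled automatically)
    reverse_proxy /dev-server* localhost:8082

    # /storage/x should map to /app/public/storage/x (path is kept, so only the root changes)
    handle /storage* {
        root * /app/public
        file_server
    }

    # /website/x should map to /app/public/storage/website/x
    handle /website* {
        root * /app/public/storage
        file_server
    }

    # Everything else: serve from the interface directory with an SPA fallback
    handle {
        root * /app/public/interface
        try_files {path} {path}/ /index.html
        file_server
    }
}
```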
Looking back at the many days and weeks lost over the years to NGINX config files, a part of me wishes I had just used Caddy. I am still experimenting with Caddy, and there are no doubt some Caddy quirks yet to discover, but if the goal is to utilise development time wisely and reduce user friction, I think there is a good case for Caddy.
It would be great to hear thoughts or arguments for one over the other.