Monitoring Your Server with Uptime Kuma and Prometheus
#monitoring
#uptime-kuma
#prometheus
#devops
#tutorial
Overview
Monitoring your server effectively requires both external visibility (is the service reachable from the internet?) and internal metrics (how is the service performing under load?). This guide shows how to use Uptime Kuma for uptime checks alongside Prometheus for collecting and alerting on metrics. By wiring them together on a single host with Docker, you get a cohesive monitoring stack that is easy to deploy and extend.
Prerequisites
- A host with Docker and docker-compose installed
- Basic familiarity with YAML and PromQL
- A public or private network accessible from the monitoring stack
Step 1 — Deploy Uptime Kuma
Uptime Kuma provides an approachable UI for uptime checks (HTTP, TCP, ping, etc.). Run Kuma in Docker so you have a stable endpoint to monitor.
```yaml
# docker-compose.yml (Kuma portion)
version: "3.8"
services:
  uptime-kuma:
    image: louislam/uptime-kuma:latest
    container_name: uptime-kuma
    ports:
      - "3001:3001"
    volumes:
      - ./kuma_data:/app/data
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge
```
Access Kuma at http://localhost:3001. From the UI, you can add monitors for your services (HTTP endpoints, DNS, TCP, ICMP, etc.), configure alerting, and manage settings. Kuma also exposes a Prometheus-compatible metrics endpoint on /metrics when metrics are enabled in the UI; this endpoint is what Prometheus will scrape.
Step 2 — Deploy Prometheus
Prometheus will collect metrics from your services, including the metrics Kuma exposes. Use a docker-compose setup that includes a Prometheus service and a configuration file.
```yaml
# docker-compose.yml (Prometheus portion)
version: "3.8"
services:
  prometheus:
    image: prom/prometheus:latest
    container_name: prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
      - ./prometheus/rules:/etc/prometheus/rules
    networks:
      - monitoring

networks:
  monitoring:
    driver: bridge
```
Prometheus configuration (prometheus/prometheus.yml):
```yaml
global:
  scrape_interval: 15s
  evaluation_interval: 15s

# Load the alert rules from the ./prometheus/rules volume mounted above
rule_files:
  - /etc/prometheus/rules/*.yml

scrape_configs:
  - job_name: 'uptime-kuma'
    static_configs:
      - targets: ['uptime-kuma:3001']
    # If you expose Kuma metrics on a different path or port, adjust accordingly
```
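Recent Kuma releases protect the /metrics endpoint with an API key. If yours does, the scrape job can authenticate via basic auth; a sketch, where the key value is a placeholder for one generated in Kuma's Settings > API Keys:

```yaml
scrape_configs:
  - job_name: 'uptime-kuma'
    static_configs:
      - targets: ['uptime-kuma:3001']
    basic_auth:
      username: ''          # Kuma reads the key from the password and ignores the username
      password: 'uk1_XXXX'  # placeholder; paste your own API key here
```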
Optional: add alert rules that fire when Kuma itself goes down.
prometheus/rules/uptime-kuma-alerts.yml:
```yaml
groups:
  - name: uptime-kuma-alerts
    rules:
      - alert: KumaDown
        expr: up{job="uptime-kuma"} == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Uptime Kuma is down"
          description: "Uptime Kuma endpoint (uptime-kuma:3001) has been down for more than 5 minutes."
```
If you use alerting, you can route alerts through Alertmanager (optional) and view them in Grafana or another dashboard.
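To make the Alertmanager option concrete, here is a sketch of wiring it in, assuming its default port 9093 and a config file at ./alertmanager/alertmanager.yml (both are assumptions, not part of the stack above):

```yaml
# docker-compose.yml (Alertmanager portion; hypothetical addition)
services:
  alertmanager:
    image: prom/alertmanager:latest
    container_name: alertmanager
    ports:
      - "9093:9093"
    volumes:
      - ./alertmanager/alertmanager.yml:/etc/alertmanager/alertmanager.yml
    networks:
      - monitoring
```

Prometheus then needs to be told where to send alerts:

```yaml
# prometheus.yml addition
alerting:
  alertmanagers:
    - static_configs:
        - targets: ['alertmanager:9093']
```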
Step 3 — Wire Kuma metrics into Prometheus
As mentioned, Kuma can expose a /metrics endpoint for Prometheus to scrape. In Kuma's UI, make sure Prometheus metrics are enabled, then confirm the endpoint responds at http://localhost:3001/metrics. If your Kuma version requires an API key for this endpoint (generated under Settings > API Keys), pass it to Prometheus as the basic_auth password in the scrape config. Prometheus will scrape the endpoint according to the scrape_configs in prometheus.yml. If you run Kuma in Docker on a different host or with a different network, adjust the targets accordingly (e.g., using host.docker.internal on macOS/Windows or an actual IP).
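For instance, if Kuma is reachable only from the Docker host rather than the compose network, the scrape target might look like this (addresses are placeholders):

```yaml
scrape_configs:
  - job_name: 'uptime-kuma'
    static_configs:
      # host.docker.internal resolves on Docker Desktop (macOS/Windows);
      # on Linux, substitute the host's real IP, e.g. 192.0.2.10:3001
      - targets: ['host.docker.internal:3001']
```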
Step 4 — Optional: Visualize with Grafana
Grafana is a natural companion to Prometheus for dashboards. Add Grafana to your docker-compose file if you want a quick visualization layer.
```yaml
# docker-compose.yml (Grafana portion)
version: "3.8"
services:
  grafana:
    image: grafana/grafana:latest
    container_name: grafana
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
    networks:
      - monitoring
```
Grafana setup steps:
- Start Grafana and open http://localhost:3000
- Add Prometheus as a data source (URL: http://prometheus:9090)
- Create dashboards or import community dashboards to visualize:
  - Uptime Kuma metrics (availability, response times)
  - Prometheus metrics from your services
Note: When Grafana and Prometheus share the same Docker network, you can use the service name prometheus in the data source URL thanks to Docker's internal DNS.
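If you prefer configuration as code, Grafana can also provision the data source from a file instead of the UI. A sketch, assuming the file is mounted into the container under /etc/grafana/provisioning/datasources/:

```yaml
# grafana/provisioning/datasources/prometheus.yml (hypothetical path)
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://prometheus:9090
    isDefault: true
```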
Step 5 — Verify and observe
- Check Kuma’s UI at http://localhost:3001 to verify monitors are running.
- Verify Prometheus is scraping Kuma by visiting http://localhost:9090 and querying:
  - up{job="uptime-kuma"}
  - Kuma's monitor metrics, such as monitor_status or monitor_response_time (check the /metrics output for the exact names your version exposes)
- If you configured alert rules, test alerting by stopping a monitored endpoint and observing the alert in Prometheus Alertmanager (if configured) or in Grafana dashboards.
- In Grafana, explore a dashboard for host and service health, latency, and uptime trends.
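A few starter PromQL queries for such dashboards, assuming Kuma's default metric names (verify them against your /metrics output; monitor_status reports 1 for up and 0 for down, and other status codes may exist):

```promql
# Current status of every monitor
monitor_status

# Average response time per monitor over the last hour
avg_over_time(monitor_response_time[1h])

# Rough 24h availability: fraction of samples where the monitor was up
avg_over_time((monitor_status == bool 1)[24h:1m])
```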
Best practices
- Use a dedicated network or separate host for the monitoring stack in production to reduce blast radius.
- Secure Prometheus and Grafana access with authentication and, if exposed publicly, enable TLS and strict firewall rules.
- Regularly back up Kuma data and Prometheus/Grafana configurations.
- Start simple: monitor essential services first, then expand to additional endpoints and custom metrics as needed.
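The backup advice above can be a small script run from cron. A minimal sketch, assuming the ./kuma_data and ./prometheus directories from the compose examples (in production, stop or snapshot Prometheus first for a consistent copy):

```shell
# backup_dirs: archive each existing directory into $BACKUP_DIR as a
# date-stamped tarball. Paths and names here are illustrative.
backup_dirs() {
  BACKUP_DIR="${BACKUP_DIR:-./backups}"
  STAMP="$(date +%Y%m%d)"
  mkdir -p "$BACKUP_DIR"
  for dir in "$@"; do
    if [ -d "$dir" ]; then
      tar -czf "$BACKUP_DIR/$dir-$STAMP.tar.gz" "$dir"
    fi
  done
}

# Example: backup_dirs kuma_data prometheus
```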
Conclusion
By combining Uptime Kuma for external uptime checks with Prometheus for internal metrics and alerting, you get balanced visibility into the health and performance of your servers. This setup is straightforward to deploy, easy to extend, and provides a clear path from raw metrics to actionable insights in dashboards.