Project guide
Uptime Kuma
A lightweight status and uptime dashboard for checks, alerts and quick operational visibility.
Complexity: Low. A single-container Node.js app.
What it is
- Uptime Kuma is a small self-hosted dashboard for endpoint checks, SSL expiry reminders and public status pages.
- It fits teams that want simple visibility without introducing a full monitoring stack on day one.
When it is a good fit
- You need HTTP, TCP or keyword checks for a handful of services.
- You want a readable status page for internal tools or a small public-facing service.
- You prefer one container and a short Docker Compose file over a larger monitoring setup.
When it is not the best choice
- You need long-term metrics, alert routing rules or infrastructure-wide dashboards.
- You already run Prometheus and Grafana and want everything in one monitoring workflow.
Minimum VPS requirements
- 1 vCPU is usually enough.
- 1 GB RAM is comfortable for a small set of checks.
- 10 GB SSD leaves room for the app, logs and normal image churn.
Starter install
- The quickest clean start is the one the project itself documents: run the official container and keep `/app/data` on a disk-backed local volume.
- On a VPS, I would bind it to localhost first and let nginx handle the public hostname. That keeps the admin UI off the raw port and avoids redoing the install later.
```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "127.0.0.1:3001:3001"
    volumes:
      - ./data:/app/data
```
- Create a service directory, save the Compose file and start it with `docker compose up -d`.
- Open the site through nginx, finish the first-run UI setup and create the initial account there.
- Add one or two checks first, then verify that notifications and status pages behave as expected before scaling the list.
- Keep the data directory on local storage. The project explicitly warns against treating network file systems like ordinary local disk.
Deployment notes
- Keep checks conservative at first. A small VPS should not spend all of its time monitoring other things.
- Keep the data directory on a dedicated bind mount or named volume so restarts and image updates remain routine.
- If you expose a status page, separate its hostname from the admin area.
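If you prefer a named volume over the bind mount in the Compose file above, a minimal variant looks like this (the volume name `kuma-data` is an arbitrary choice):

```yaml
services:
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "127.0.0.1:3001:3001"
    volumes:
      - kuma-data:/app/data

volumes:
  kuma-data:
```

A named volume survives `docker compose down` and keeps the data path under Docker's control, at the cost of a slightly less obvious location on disk for backups.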
Reverse proxy / nginx notes
- Terminate TLS at nginx and pass requests to the container over the local Docker network.
- Forward `Host`, `X-Forwarded-For` and `X-Forwarded-Proto` so generated links remain correct.
- If you publish a status page, add a clear cache policy for static assets and keep admin paths unindexed.
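A sketch of the nginx side, assuming the localhost binding from the Compose file above; the hostname and certificate paths are placeholders. Uptime Kuma's live dashboard uses WebSockets, so the upgrade headers matter.

```nginx
# Illustrative server block; status.example.com and the certificate
# paths are placeholders for your own hostname and TLS setup.
server {
    listen 443 ssl;
    server_name status.example.com;

    ssl_certificate     /etc/letsencrypt/live/status.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/status.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:3001;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        # WebSocket support for the dashboard's live updates
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```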
Data, volumes and backups
- The first backup target is the application data volume.
- Pair volume backups with the exact Compose file and environment values used for deployment.
- A periodic archive before upgrades is usually enough for a small installation.
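A pre-upgrade archive can be as small as the sketch below, run from the service directory. In production, stop the container first (`docker compose stop`) so the SQLite database is not being written while it is copied, and keep a copy of the Compose file alongside the archive.

```shell
# Minimal pre-upgrade archive of the data directory; paths assume the
# bind-mount layout from this guide (./data next to docker-compose.yml).
set -eu
mkdir -p data backups               # ./data normally exists already
STAMP="$(date +%Y%m%d-%H%M%S)"
tar -czf "backups/uptime-kuma-data-$STAMP.tar.gz" data
echo "wrote backups/uptime-kuma-data-$STAMP.tar.gz"
```

Rotating or pruning old archives is left to whatever scheduler runs this, e.g. a weekly cron entry.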
Updates and maintenance
- Review failed checks after updates so alert noise does not hide real incidents.
- Keep the image current, but avoid updating during active incidents.
- Test notifications after major version changes instead of assuming old settings still behave the same.
Common pitfalls
- Aggressive check intervals can become self-inflicted load on a small server.
- Public status pages often leak more naming detail than intended if defaults are left unchanged.
- Restart loops are easy to miss when only the UI is checked and container logs are ignored.
Alternatives
- Healthchecks for a simpler heartbeat-style setup.
- Prometheus with Grafana for deeper metrics and alerting depth.
Official links
For installation details, release information and current support policy, always check the official project resources directly.