Merge remote-tracking branch 'dev/master'

Louis Lam 2023-12-04 02:18:06 +08:00
commit 754e153543
12 changed files with 74 additions and 23 deletions

@@ -10,4 +10,4 @@ Feel free to add your project here by making a pull request in this wiki repo: h
- [uptimekuma-migrator](https://github.com/Peppershade/uptimekuma-migrator) - Simple migrator from UptimeRobot to UptimeKuma
- [swatchdog](https://github.com/imsingee/swatchdog) - A simple requester that periodically sends requests to Uptime Kuma's "Push" monitor
- [KumaCompanion](https://github.com/Zerka30/KumaCompanion) - A Command Line Interface (CLI) for Uptime Kuma
- [UptimeKumaRemoteProbe](https://github.com/zimbres/UptimeKumaRemoteProbe) - A Remote Probe that works with the Uptime Kuma "Push" monitor type.

@@ -39,17 +39,17 @@ Authentication is done by passing the API key in the `Authorization`
header. For example, here is a request made with curl to the `metrics`
endpoint.
```bash
curl -u":<key>" uptime.kuma/metrics
```
> [!NOTE]
> The `:` is required before the key, because basic authentication requires a username and password separated by a `:`.
> We don't make use of the username field.

Here is an example config for Prometheus:
```yml
- job_name: 'uptime'
  scrape_interval: 30s
  scheme: http
@@ -59,4 +59,5 @@ Here is an example config for Prometheus:
    password: <api key>
```
> [!NOTE]
> Setting the username field is not necessary, as it is unused.
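Put together, the authentication part of the scrape config looks roughly like this (a sketch: the `password` line in the snippet above belongs to Prometheus's standard `basic_auth` block, and `<api key>` is the key generated in Uptime Kuma):
```yml
  # Sketch of the basic_auth fragment of the scrape job shown above.
  # The username is deliberately omitted; Uptime Kuma only checks the password.
  basic_auth:
    password: <api key>
```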

@@ -2,7 +2,10 @@
By default, Cloudflare is not friendly to API clients, including Uptime Kuma, and may block requests from Uptime Kuma.
You need to disable or bypass "Browser Integrity Check" in the Cloudflare Dashboard via one of these methods:
- (Easiest) Add your Uptime Kuma host's IP address to [IP Access rules](https://developers.cloudflare.com/waf/tools/ip-access-rules/) as an Allowed address, optionally across every domain in your Cloudflare account
- Allow Uptime Kuma to [bypass the check via WAF Custom Rules](https://developers.cloudflare.com/waf/custom-rules/skip/); this only applies to one domain at a time
- Use a [Configuration Rule](https://developers.cloudflare.com/rules/configuration-rules/) to disable the check for your Uptime Kuma IP address

Related discussion: https://community.cloudflare.com/t/api-403-after-enabling-cloudflare/108078/6

@@ -83,6 +83,13 @@ Add a new Docker host and choose TCP as the option. Specify the IP address of th
![Docker host monitor](img/docker-host.png)

**Configuring certificates for a Docker TLS connection**

Assuming you have already properly configured your remote Docker instance to listen securely for TLS connections as detailed [here](https://docs.docker.com/engine/security/protect-access/#use-tls-https-to-protect-the-docker-daemon-socket), you must configure Uptime Kuma to use the certificates you've generated. The base path where certificates are looked for can be set with the `DOCKER_TLS_DIR_PATH` environment variable; it defaults to `data/docker-tls/`.
If a directory in this path exists with a name matching the FQDN of the Docker host (e.g. the FQDN of `https://example.com:2376` is `example.com`, so the directory `data/docker-tls/example.com/` would be searched for certificate files), then the `ca.pem`, `key.pem` and `cert.pem` files are loaded and included in the agent options. File names can also be overridden via `DOCKER_TLS_FILE_NAME_(CA|KEY|CERT)`.
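A minimal sketch of the expected layout, assuming the default base path and the `example.com:2376` host from the example above (`/path/to/generated/certs` is a placeholder for wherever you created the files):
```bash
# The per-host directory name must match the Docker host's FQDN
mkdir -p data/docker-tls/example.com

# Copy the CA certificate, client certificate and client key for that host
cp /path/to/generated/certs/{ca,cert,key}.pem data/docker-tls/example.com/

# Alternatively, point Uptime Kuma at a different base directory
# export DOCKER_TLS_DIR_PATH=/etc/uptime-kuma/docker-tls
```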
## Related Discussion
- https://github.com/louislam/uptime-kuma/issues/2061

@@ -45,20 +45,20 @@ start_post() {
Make the script executable.
```bash
sudo chmod 755 /etc/init.d/uptime-kuma
```
Create a user and group `uptime-kuma:uptime-kuma` for the service.
```bash
sudo addgroup -S uptime-kuma
sudo adduser -S -D -h /var/lib/uptime-kuma -s /sbin/nologin -G uptime-kuma -g uptime-kuma uptime-kuma
```
Start the service and, if preferred, add it to the default runlevel.
```bash
sudo rc-service uptime-kuma start
sudo rc-update add uptime-kuma
```

@@ -16,7 +16,7 @@ Labels to filter by include:
Put the following into your Prometheus config:
```yml
- job_name: 'uptime'
  scrape_interval: 30s
  scheme: http

@@ -1,6 +1,6 @@
## With Docker
```bash
docker exec -it <container name> bash
npm run reset-password
```

@@ -166,7 +166,7 @@ Link to https-portal Websocket under [Advanced Usage](https://github.com/SteveLT
Example docker-compose.yml file using Https-Portal:
```yml
version: '3.3'
services:
@@ -220,8 +220,11 @@ docker run -d --restart=always -p 127.0.0.1:3002:3001 -v uptime-kuma:/app/data -
![Reverse Proxy](./img/Synology-reverse-proxy.png)

6. Click on the *Custom Header* tab.
7. Click `Create` -> `Websockets`; this automatically fills in the required headers for WebSockets.

# Traefik
```yml
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.uptime-kuma.rule=Host(`YourOwnHostname`)"

@@ -31,3 +31,29 @@ Now you can show different status pages based on the domain names.
This is my example; both are served from the same instance:
- https://status.louislam.net
- https://status.kuma.pet

## Custom Subdirectory / Custom HTML on Status Pages
> [!CAUTION]
> For the following to work, the [environment variable `UPTIME_KUMA_DISABLE_FRAME_SAMEORIGIN=true`](https://github.com/louislam/uptime-kuma/wiki/Environment-Variables) needs to be set.
> This allows other pages to include Uptime Kuma in an `iframe` and makes you vulnerable to [clickjacking](https://en.wikipedia.org/wiki/Clickjacking).

Changing the subdirectory of Uptime Kuma is tracked in https://github.com/louislam/uptime-kuma/pull/1092
Embedding `script`s/`meta` tags/... into Uptime Kuma is tracked in https://github.com/louislam/uptime-kuma/issues/3115
A way to work around these limitations is to use an `iframe`.
Here is an example of how to configure this (replacing `INSERT_{...}_HERE` with your own values):
```html
<html>
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta http-equiv="X-UA-Compatible" content="ie=edge">
<meta name="description" content="INSERT_DESCRIPTION_HERE">
<title>INSERT_TITLE_HERE</title>
</head>
<body style="height: 100vh;margin: 0;padding: 0;overflow: hidden;">
<iframe src="INSERT_UPTIME_KUMA_URL_HERE" frameborder="0" width="100%" height="100%" allowtransparency="yes" style="overflow:hidden;margin: 0; border: none;"></iframe>
</body>
</html>
```

@@ -17,13 +17,15 @@ Restart=on-failure
WantedBy=multi-user.target
```
> [!NOTE]
> This unit file assumes that you are running the software as a separate 'uptime' user.
> If you have node/npm installed in a different path, you will need to alter the ExecStart line to match.

This unit file may be installed to /etc/systemd/system/uptime-kuma.service (or whatever service name you prefer).
Once installed, issue the following commands to reload systemd unit files, enable it to start on boot, and start it immediately:
```bash
systemctl daemon-reload
systemctl enable --now uptime-kuma
```

@@ -1,9 +1,13 @@
## Uptime Kuma reports `DOWN`, but the service can be accessed
> [!NOTE]
> In case you did not know: Docker has [more than one network type](https://youtu.be/bKFMS5C4CG0). Only some of them allow access to the local network, and some do not even allow access to remote networks.

If Uptime Kuma reports your service as `DOWN`, working out whether the cause is a bug in Uptime Kuma, a Docker network misconfiguration, or a firewall is a good start to fixing the issue.
To debug this, go into your container's bash:
```bash
docker exec -it uptime-kuma bash
@@ -15,7 +19,9 @@ Install `curl`
apt update && apt --yes install curl
```
Then you can debug the issue with commands like `ping`, `curl`, and so on.
Examples:
```bash
curl https://google.com
ping google.com

@@ -7,7 +7,9 @@ docker run -d --restart=always -p 3001:3001 -v uptime-kuma:/app/data --name upti
Uptime Kuma is now running on http://localhost:3001
> [!WARNING]
> Filesystem support for POSIX file locks is required to avoid SQLite database corruption.
> Be aware of possible [file locking problems](https://www.sqlite.org/howtocorrupt.html#_file_locking_problems) such as those [commonly encountered with NFS](https://www.sqlite.org/faq.html#q5).
> **Please map the `/app/data` folder to a local directory or volume.**
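For example, to back the data folder with a local host directory instead of the named volume used above (a sketch; `/opt/uptime-kuma-data` is a placeholder host path, and `louislam/uptime-kuma:1` is the official image):
```bash
# Same quick-start command, but /app/data is stored in a local directory
docker run -d --restart=always -p 3001:3001 \
  -v /opt/uptime-kuma-data:/app/data \
  --name uptime-kuma louislam/uptime-kuma:1
```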
Browse to http://localhost:3001 after it has started.
@@ -115,7 +117,8 @@ https://github.com/louislam/uptime-kuma/wiki/Reverse-Proxy
### ☸️ OpenShift 4 and Kubernetes Helm 3 Chart (Unofficial)
> [!NOTE]
> This Chart relies on a repackaged OCI Container Image, which lets *uptime-kuma* run as a **non-root** user.
> The entire repackage process is automated via GitHub Actions, and renovate-bot keeps everything up to date (feel free to audit it yourself).

The Containerfile used to rebundle *uptime-kuma*: [rootless Containerfile](https://github.com/k3rnelpan1c-dev/uptime-kuma-helm/blob/main/container/Containerfile)