mirror of https://github.com/hassio-addons/addon-prometheus.git
synced 2025-05-04 19:21:35 +00:00
:sparkles: Initial add-on code
This commit is contained in:
parent 8c52b884af
commit 29bfd81c60
15 changed files with 490 additions and 0 deletions
120  prometheus/DOCS.md  Normal file
@@ -0,0 +1,120 @@
# Home Assistant Community Add-on: Prometheus

....

## Installation

The installation of this add-on is pretty straightforward and no different
from installing any other Home Assistant add-on.

1. Search for the "Prometheus" add-on in the Supervisor add-on store.
1. Install the "Prometheus" add-on.
1. Start the "Prometheus" add-on.
1. Check the logs of the "Prometheus" add-on to see if everything went well.
1. Open the Web UI.

**Note**: The add-on supports both Ingress and direct access; Ingress is the
default.

## Configuration

There are no configuration options for the add-on.

To add additional scrape targets, create one file per target in
`/share/prometheus/targets`.

Example:

```yaml
---
job_name: 'octoprint'
scrape_interval: 5s
metrics_path: '/plugin/prometheus_exporter/metrics'
params:
  apikey: ['VERYSECRETAPIKEY']
static_configs:
  - targets: ['octoprint.example.org:5000']
```

**Note**: _This is just an example, don't copy and paste it! Create your own!_

The job names `home-assistant` and `prometheus` are already defined by default.

Rules can be created under `/share/prometheus/rules/`.

The add-on reloads the configuration whenever a valid configuration is
available; if the configuration is invalid, it logs errors in the add-on log
instead.
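For example, a rules file could look like the following. This is only a
sketch: the `example` group, the `InstanceDown` alert, and the file name
`/share/prometheus/rules/example.yaml` are illustrative placeholders, not
rules shipped with the add-on.

```yaml
---
# Hypothetical example rules file: /share/prometheus/rules/example.yaml
groups:
  - name: example
    rules:
      - alert: InstanceDown
        # Fires when a scraped target has been unreachable for five minutes.
        expr: up == 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Instance {{ $labels.instance }} is down"
```

Note that the initial `prometheus.yml` matches both `*.yml` and `*.yaml` rule
files, while the generated configuration template only matches `*.yaml`, so
the `.yaml` extension is the safer choice.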
## Known issues and limitations

- Job names must be unique, but this has to be enforced by the user.
- No Alertmanager support yet.

## Changelog & Releases

This repository keeps a change log using [GitHub's releases][releases]
functionality. The format of the log is based on
[Keep a Changelog][keepchangelog].

Releases are based on [Semantic Versioning][semver], and use the format
of ``MAJOR.MINOR.PATCH``. In a nutshell, the version will be incremented
based on the following:

- ``MAJOR``: Incompatible or major changes.
- ``MINOR``: Backwards-compatible new features and enhancements.
- ``PATCH``: Backwards-compatible bugfixes and package updates.

## Support

Got questions?

You have several options to get them answered:

- The [Home Assistant Community Add-ons Discord chat server][discord] for add-on
  support and feature requests.
- The [Home Assistant Discord chat server][discord-ha] for general Home
  Assistant discussions and questions.
- The Home Assistant [Community Forum][forum].
- Join the [Reddit subreddit][reddit] in [/r/homeassistant][reddit].

You could also [open an issue here][issue] on GitHub.

## Authors & contributors

The original setup of this repository is by [Robbert Müller][mjrider].

For a full list of all authors and contributors,
check [the contributor's page][contributors].

## License

MIT License

Copyright (c) 2018-2020 Franck Nijhof

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

[contributors]: https://github.com/hassio-addons/addon-prometheus/graphs/contributors
[discord-ha]: https://discord.gg/c5DvZ4e
[discord]: https://discord.me/hassioaddons
[forum]: https://example.net
[mjrider]: https://github.com/mjrider
[issue]: https://github.com/hassio-addons/addon-prometheus/issues
[keepchangelog]: http://keepachangelog.com/en/1.0.0/
[reddit]: https://reddit.com/r/homeassistant
[releases]: https://github.com/hassio-addons/addon-prometheus/releases
[semver]: http://semver.org/spec/v2.0.0.html
61  prometheus/Dockerfile  Executable file
@@ -0,0 +1,61 @@
ARG BUILD_FROM=hassioaddons/base:8.0.1
# hadolint ignore=DL3006
FROM ${BUILD_FROM}

# Set shell
SHELL ["/bin/bash", "-o", "pipefail", "-c"]

# Setup base system
ARG BUILD_ARCH=amd64
ENV PROMETHEUS_VERSION=2.19.2

# Copy root filesystem
COPY rootfs /

RUN \
    ARCH="${BUILD_ARCH}" \
    && if [ "${BUILD_ARCH}" = "aarch64" ]; then ARCH="arm64"; fi \
    \
    && apk update \
    && apk --no-cache add python3 py3-idna py3-certifi py3-chardet py3-yaml py3-urllib3 py3-requests \
    && apk --no-cache add --virtual builddeps py-pip \
    \
    && curl -J -L -o /tmp/prometheus.tar.gz \
        https://github.com/prometheus/prometheus/releases/download/v${PROMETHEUS_VERSION}/prometheus-${PROMETHEUS_VERSION}.linux-${ARCH}.tar.gz \
    && adduser -s /bin/false -D -H prometheus \
    && cd /tmp \
    && tar -xvf /tmp/prometheus.tar.gz \
    && mkdir -p /etc/prometheus \
    && cp prometheus-${PROMETHEUS_VERSION}.linux-${ARCH}/promtool /usr/local/bin/ \
    && cp prometheus-${PROMETHEUS_VERSION}.linux-${ARCH}/prometheus /usr/local/bin/ \
    && cp -R prometheus-${PROMETHEUS_VERSION}.linux-${ARCH}/console_libraries/ /etc/prometheus/ \
    && cp -R prometheus-${PROMETHEUS_VERSION}.linux-${ARCH}/consoles/ /etc/prometheus/ \
    && rm -r prometheus-${PROMETHEUS_VERSION}.linux-${ARCH} \
    && chown -R prometheus:prometheus /etc/prometheus \
    && pip3 install -r /opt/prometheus-configgen/requirements.txt \
    && apk del builddeps

# Build arguments
ARG BUILD_DATE
ARG BUILD_REF
ARG BUILD_VERSION

# Labels
LABEL \
    io.hass.name="Prometheus" \
    io.hass.description="Cloud native metrics" \
    io.hass.arch="${BUILD_ARCH}" \
    io.hass.type="addon" \
    io.hass.version=${BUILD_VERSION} \
    maintainer="Robbert Müller <homeassistant@grols.ch>" \
    org.opencontainers.image.title="Prometheus" \
    org.opencontainers.image.description="Cloud native metrics" \
    org.opencontainers.image.vendor="Home Assistant Community Add-ons" \
    org.opencontainers.image.authors="Robbert Müller <homeassistant@grols.ch>" \
    org.opencontainers.image.licenses="MIT" \
    org.opencontainers.image.url="https://addons.community" \
    org.opencontainers.image.source="https://github.com/hassio-addons/addon-prometheus" \
    org.opencontainers.image.documentation="https://github.com/hassio-addons/addon-prometheus/blob/master/README.md" \
    org.opencontainers.image.created=${BUILD_DATE} \
    org.opencontainers.image.revision=${BUILD_REF} \
    org.opencontainers.image.version=${BUILD_VERSION}
8  prometheus/build.json  Normal file
@@ -0,0 +1,8 @@
{
  "build_from": {
    "aarch64": "hassioaddons/base-aarch64:8.0.1",
    "amd64": "hassioaddons/base-amd64:8.0.1",
    "armv7": "hassioaddons/base-armv7:8.0.1"
  },
  "args": {}
}
29  prometheus/config.json  Executable file
@@ -0,0 +1,29 @@
{
  "name": "Prometheus",
  "version": "dev",
  "slug": "prometheus",
  "description": "Cloud native metrics",
  "url": "https://github.com/hassio-addons/addon-prometheus",
  "startup": "services",
  "ingress": true,
  "ingress_port": 9090,
  "ingress_entry": "graph",
  "panel_icon": "mdi:chart-timeline",
  "panel_title": "Prometheus",
  "arch": ["aarch64", "amd64", "armv7"],
  "boot": "auto",
  "hassio_api": true,
  "homeassistant_api": true,
  "hassio_role": "default",
  "map": ["share:rw"],
  "options": {},
  "ports": {
    "9090/tcp": null
  },
  "ports_description": {
    "9090/tcp": "Not required for Ingress"
  },
  "schema": {}
}
BIN  prometheus/icon.png  Normal file (binary, 15 KiB, not shown)
BIN  prometheus/logo.png  Normal file (binary, 15 KiB, not shown)
9  prometheus/rootfs/etc/cont-init.d/prometheus.sh  Normal file
@@ -0,0 +1,9 @@
#!/usr/bin/with-contenv bashio
# ==============================================================================
# Home Assistant Community Add-on: Prometheus
# Configures Prometheus
# ==============================================================================
readonly CONFIG="/etc/prometheus/prometheus.yml"

echo "${HASSIO_TOKEN}" > '/run/home-assistant.token'
40  prometheus/rootfs/etc/prometheus/prometheus.yml  Normal file
@@ -0,0 +1,40 @@
---
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "/share/prometheus/rules/*.yml"
  - "/share/prometheus/rules/*.yaml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']

  - job_name: 'home-assistant'
    scrape_interval: 60s
    metrics_path: /core/api/prometheus

    # Long-Lived Access Token
    bearer_token_file: '/run/home-assistant.token'

    scheme: http
    static_configs:
      - targets: ['supervisor:80']
9  prometheus/rootfs/etc/services.d/prometheus-configgen/finish  Normal file
@@ -0,0 +1,9 @@
#!/usr/bin/execlineb -S0
# ==============================================================================
# Home Assistant Community Add-on: Prometheus
# Take down the S6 supervision tree when Prometheus fails
# ==============================================================================
if { s6-test ${1} -ne 0 }
if { s6-test ${1} -ne 256 }

s6-svscanctl -t /var/run/s6/services
14  prometheus/rootfs/etc/services.d/prometheus-configgen/run  Executable file
@@ -0,0 +1,14 @@
#!/usr/bin/with-contenv bashio
# shellcheck disable=SC2191

bashio::log.info 'Starting prometheus config generator...'

if [[ ! -d /share/prometheus/targets ]] ; then
    mkdir -p /share/prometheus/targets
    chown -R prometheus:prometheus /share/prometheus/targets
fi

cd /opt/prometheus-configgen

# Run the config generator
exec s6-setuidgid prometheus python3 combiner
9  prometheus/rootfs/etc/services.d/prometheus/finish  Normal file
@@ -0,0 +1,9 @@
#!/usr/bin/execlineb -S0
# ==============================================================================
# Home Assistant Community Add-on: Prometheus
# Take down the S6 supervision tree when Prometheus fails
# ==============================================================================
if { s6-test ${1} -ne 0 }
if { s6-test ${1} -ne 256 }

s6-svscanctl -t /var/run/s6/services
45  prometheus/rootfs/etc/services.d/prometheus/run  Executable file
@@ -0,0 +1,45 @@
#!/usr/bin/with-contenv bashio
# shellcheck disable=SC2191
# ==============================================================================
# Home Assistant Community Add-on: Prometheus
# Runs the Prometheus Server
# ==============================================================================
declare -a options
declare name
declare value

bashio::log.info 'Starting prometheus...'

options+=(--config.file="/etc/prometheus/prometheus.yml" )
options+=(--storage.tsdb.path="/data/prometheus" )
options+=(--web.console.libraries="/usr/share/prometheus/console_libraries" )
options+=(--web.console.templates="/usr/share/prometheus/consoles" )
options+=(--web.route-prefix="/" )
options+=(--web.external-url="http://localhost:9090$(bashio::addon.ingress_entry)/" )
options+=(--web.enable-lifecycle )

# Load custom environment variables
for var in $(bashio::config 'env_vars|keys'); do
    name=$(bashio::config "env_vars[${var}].name")
    value=$(bashio::config "env_vars[${var}].value")
    bashio::log.info "Setting ${name} to ${value}"
    export "${name}=${value}"
done

if [[ ! -d /data/prometheus ]] ; then
    mkdir -p /data/prometheus
    chown prometheus:prometheus /data/prometheus
fi

if [[ ! -d /share/prometheus/rules ]] ; then
    mkdir -p /share/prometheus/rules
    chown -R prometheus:prometheus /share/prometheus/rules
fi

if [[ ! -d /share/prometheus/targets ]] ; then
    mkdir -p /share/prometheus/targets
    chown -R prometheus:prometheus /share/prometheus/targets
fi

# Run Prometheus
exec s6-setuidgid prometheus /usr/local/bin/prometheus "${options[@]}"
103  prometheus/rootfs/opt/prometheus-configgen/combiner  Normal file
@@ -0,0 +1,103 @@
#!/usr/bin/env python

import sys
import asyncio
import aionotify
import yaml
import os
import tempfile
import requests

from yamlinclude import YamlIncludeConstructor


def generateConfig():
    # Merge the static scrape configs from the template with the per-target
    # files included from /share/prometheus/targets/.
    YamlIncludeConstructor.add_to_loader_class(
        loader_class=yaml.FullLoader, base_dir="/share/prometheus/"
    )

    with open("prometheus.template") as f:
        data = yaml.load(f, Loader=yaml.FullLoader)

    data["scrape_configs"] = (
        data[".scrape_configs_static"] + data[".scrape_configs_included"]
    )
    del data[".scrape_configs_static"]
    del data[".scrape_configs_included"]
    return yaml.dump(data, default_flow_style=False, default_style="")


def testConfig(config):
    # Validate the generated configuration with promtool before it is used.
    tmp = None
    result = False
    try:
        tmp = tempfile.NamedTemporaryFile()
        with open(tmp.name, "w") as f:
            f.write(config)
        r = os.system("promtool check config " + tmp.name + " > /dev/null")
        result = r == 0
    except:
        print("Failed to validate")
        raise
    if result is False:
        raise Exception("validation error")
    return result


def writeConfig(config, file):
    # Write the new configuration and ask Prometheus to reload it.
    try:
        with open(file, "w") as f:
            f.write(config)
        requests.post(url="http://localhost:9090/-/reload", data={})
    except:
        print("Exception")


loop = asyncio.get_event_loop()
paths_to_watch = ["/share/prometheus/targets/"]

lock = asyncio.Lock()


async def compile():
    # Regenerate, validate and install the configuration, skipping the run
    # if another compile is already in progress.
    if not lock.locked():
        await lock.acquire()
        try:
            config = generateConfig()
            testConfig(config)
            writeConfig(config, "/etc/prometheus/prometheus.yml")
            print("Compiled")
        except:
            pass
        finally:
            lock.release()


async def watcher():
    # Watch the target directories with inotify and recompile on every change.
    asyncio.create_task(compile())
    filewatch = aionotify.Watcher()
    for path in paths_to_watch:
        filewatch.watch(
            path,
            aionotify.Flags.MODIFY | aionotify.Flags.CREATE | aionotify.Flags.DELETE,
        )
        print(path)
    await filewatch.setup(loop)
    while True:
        event = await filewatch.get_event()
        sys.stdout.write("Got event: %s\n" % repr(event))
        asyncio.create_task(compile())
    filewatch.close()


def main():
    try:
        loop.run_until_complete(watcher())
    finally:
        # loop.close()
        pass


if __name__ == "__main__":
    main()
38  prometheus/rootfs/opt/prometheus-configgen/prometheus.template  Normal file
@@ -0,0 +1,38 @@
---
# my global config
global:
  scrape_interval: 15s     # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  - "/share/prometheus/rules/*.yaml"

.scrape_configs_included: !include targets/*.yaml
.scrape_configs_static:
  - job_name: 'home-assistant'
    scrape_interval: 60s
    metrics_path: /core/api/prometheus

    # Long-Lived Access Token
    bearer_token_file: '/run/home-assistant.token'

    scheme: http
    static_configs:
      - targets: ['supervisor:80']

  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090']
5  prometheus/rootfs/opt/prometheus-configgen/requirements.txt  Normal file
@@ -0,0 +1,5 @@
PyYAML>=5.3.1
pyyaml-include>=1.2
requests>=2.23.0
aionotify