Promtail is configured in a YAML file (usually referred to as `config.yaml`) which contains information on the Promtail server, where positions are stored, and how to scrape logs from files.
- Configuration File Reference
- server_config
- client_config
- position_config
- scrape_config
- target_config
- Example Docker Config
- Example Static Config
- Example Journal Config
- Example Syslog Config
Configuration File Reference
To specify which configuration file to load, pass the `-config.file` flag at the
command line. The file is written in YAML format,
defined by the schema below. Brackets indicate that a parameter is optional. For
non-list parameters the value is set to the specified default.
For more detailed information on configuring how to discover and scrape logs from
targets, see Scraping. For more information on transforming logs
from scraped targets, see Pipelines.
Generic placeholders are defined as follows:
- `<boolean>`: a boolean that can take the values `true` or `false`
- `<int>`: any integer matching the regular expression `[1-9]+[0-9]*`
- `<duration>`: a duration matching the regular expression `[0-9]+(ms|[smhdwy])`
- `<labelname>`: a string matching the regular expression `[a-zA-Z_][a-zA-Z0-9_]*`
- `<labelvalue>`: a string of Unicode characters
- `<filename>`: a valid path relative to the current working directory or an absolute path
- `<host>`: a valid string consisting of a hostname or IP followed by an optional port number
- `<string>`: a regular string
- `<secret>`: a regular string that is a secret, such as a password
Supported contents and default values of `config.yaml`:

```yaml
# Configures the server for Promtail.
```
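For orientation, a minimal sketch of the top-level layout; the block names are assumed from typical Promtail configurations, and defaults and full field lists are omitted:

```yaml
server: <server_config>

# Describes how Promtail connects to one or more Loki instances.
clients:
  - <client_config>

# Where to save read positions on disk.
positions: <position_config>

# How to scrape logs from a set of targets.
scrape_configs:
  - <scrape_config>

# How tailed targets and directories are watched.
target_config: <target_config>
```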
server_config
The `server_config` block configures Promtail's behavior as an HTTP server:

```yaml
# HTTP server listen host
```
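A minimal sketch of the block; the addresses and ports shown are illustrative, not required defaults:

```yaml
server:
  # Host and port for Promtail's own HTTP server (serves /metrics, /targets, etc.).
  http_listen_address: 0.0.0.0
  http_listen_port: 9080
  # Setting the gRPC port to 0 lets Promtail pick a random free port.
  grpc_listen_port: 0
```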
client_config
The `client_config` block configures how Promtail connects to an instance of
Loki:

```yaml
# The URL where Loki is listening, denoted in Loki as http_listen_address and
# http_listen_port.
```
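A minimal sketch of a client entry; the hostname and push path are illustrative and depend on where Loki is reachable and on the Loki API version in use:

```yaml
clients:
  # Push address of the Loki instance (host, port, and push endpoint).
  - url: http://localhost:3100/loki/api/v1/push
```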
position_config
The `position_config` block configures where Promtail will save a file
indicating how far it has read into a file. It is needed when Promtail
is restarted, to allow it to continue reading from where it left off.

```yaml
# Location of positions file
```
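A minimal sketch, with an assumed file location and an illustrative sync period:

```yaml
positions:
  # File in which Promtail records how far it has read into each log file.
  filename: /var/lib/promtail/positions.yaml
  # How often the positions file is written out.
  sync_period: 10s
```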
scrape_config
The `scrape_config` block configures how Promtail can scrape logs from a series
of targets using a specified discovery method:

```yaml
# Name to identify this scrape config in the Promtail UI.
```
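A small sketch of one scrape config that tails local files; the job name, labels, and path are illustrative:

```yaml
scrape_configs:
  - job_name: system
    # Optional pipeline stages applied to each scraped entry.
    pipeline_stages:
      - docker: {}
    # One of the supported discovery mechanisms; static_configs is the simplest.
    static_configs:
      - targets: [localhost]
        labels:
          job: varlogs
          # Promtail-specific label telling it which files to tail.
          __path__: /var/log/*.log
```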
pipeline_stages
The pipeline stages (`pipeline_stages`) are used to transform
log entries and their labels after discovery, and consist of a list of any of the items listed below.
Stages serve several purposes (more detail can be found here). Generally, you extract data with the `regex`
or `json` stages into a temporary map, which can then be used by the `labels`,
`output`, or any of the other stages, aside from `docker` and `cri`,
which are explained in more detail below.
docker
The Docker stage parses the contents of logs from Docker containers, and is defined by name with an empty object:
```yaml
docker: {}
```
The docker stage will match and parse log lines of this format:
1 | `{"log":"level=info ts=2019-04-30T02:12:41.844179Z caller=filetargetmanager.go:180 msg=\"Adding target\"\n","stream":"stderr","time":"2019-04-30T02:12:41.8443515Z"}` |
It automatically extracts the `time` into the log's timestamp, `stream` into a label, and the `log` field into the output. This can be very helpful because Docker wraps your application log in this way, and this stage unwraps it so that further pipeline processing operates on just the log content.
The Docker stage is just a convenience wrapper for this definition:
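A sketch of that wrapper, reconstructed from the stage behaviour described above; the exact stage ordering and field names should be checked against the Promtail release in use:

```yaml
- json:
    expressions:
      output: log
      stream: stream
      timestamp: time
- labels:
    stream:
- timestamp:
    source: timestamp
    format: RFC3339Nano
- output:
    source: output
```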
cri
The CRI stage parses the contents of logs from CRI containers, and is defined by name with an empty object:
```yaml
cri: {}
```
The CRI stage will match and parse log lines of this format:
```
2019-01-01T01:00:00.000000001Z stderr P some log message
```
It automatically extracts the `time` into the log's timestamp, `stream` into a label, and the remaining message into the output. This can be very helpful because CRI wraps your application log in this way, and this stage unwraps it so that further pipeline processing operates on just the log content.
The CRI stage is just a convenience wrapper for this definition:
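A sketch of that wrapper, reconstructed from the behaviour described above; the regular expression and field names are assumptions and should be verified against the Promtail release in use:

```yaml
- regex:
    expression: "^(?s)(?P<time>\\S+?) (?P<stream>stdout|stderr) (?P<flags>\\S+?) (?P<content>.*)$"
- labels:
    stream:
- timestamp:
    source: time
    format: RFC3339Nano
- output:
    source: content
```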
regex
The Regex stage takes a regular expression and extracts captured named groups to
be used in further stages.
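For illustration, a hedged example that captures a `level` group from entries such as `level=info msg=...`; the expression and group name are assumptions for this sketch:

```yaml
- regex:
    # Named capture groups become keys in the extracted map.
    expression: "level=(?P<level>\\w+)"
```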
json
The JSON stage parses a log line as JSON and takes
JMESPath expressions to extract data from the JSON to be
used in further stages.
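A hedged example that extracts two assumed fields from a JSON log line; the JMESPath expressions are illustrative:

```yaml
- json:
    expressions:
      # extracted key: JMESPath expression evaluated against the log line
      level: level
      message: msg
```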
template
The template stage uses Go's `text/template` language to manipulate
values.
Example:
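A hedged sketch that uppercases an assumed extracted value named `level`; the field name and the templating shown are illustrative of the stage, not documented defaults:

```yaml
- template:
    # Value to read from the extracted map and write back after templating.
    source: level
    template: '{{ ToUpper .Value }}'
```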
match
The match stage conditionally executes a set of stages when a log entry matches
a configurable LogQL stream selector.
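A hedged example that only runs a nested stage for entries matching an assumed `app="nginx"` stream selector:

```yaml
- match:
    selector: '{app="nginx"}'
    stages:
      # Stages here run only for entries matching the selector above.
      - regex:
          expression: "status=(?P<status>\\d+)"
```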
timestamp
The timestamp stage parses data from the extracted map and overrides the final
time value of the log that is stored by Loki. If this stage isn’t present,
Promtail will associate the timestamp of the log entry with the time that
log entry was read.
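A hedged example, assuming a `time` value was previously extracted in RFC3339 form:

```yaml
- timestamp:
    # Key in the extracted map holding the timestamp text.
    source: time
    # Expected layout of that value.
    format: RFC3339Nano
```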
output
The output stage takes data from the extracted map and sets the contents of the
log entry that will be stored by Loki.
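A hedged example, assuming an extracted key named `message` should become the stored log line:

```yaml
- output:
    source: message
```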
labels
The labels stage takes data from the extracted map and sets additional labels
on the log entry that will be sent to Loki.
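A hedged example that promotes an assumed extracted `level` value to a label; a different source key can be given as the value if the label name should differ:

```yaml
- labels:
    # Label name; an empty value means "use the extracted key of the same name".
    level:
```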
metrics
The metrics stage allows for defining metrics from the extracted data.
Created metrics are not pushed to Loki and are instead exposed via Promtail's
`/metrics` endpoint. Prometheus should be configured to scrape Promtail to be
able to retrieve the metrics configured by this stage.

```yaml
# A map where the key is the name of the metric and the value is a specific
# metric type.
```
counter
Defines a counter metric whose value only goes up.
```yaml
# The metric type. Must be Counter.
```
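A hedged sketch of a complete counter definition that increments once per matching line; the metric name, description, and source key are illustrative:

```yaml
- metrics:
    http_requests_total:
      type: Counter
      description: "count of parsed request log lines"
      # Key in the extracted map whose presence drives the metric.
      source: status
      config:
        # inc adds 1 per line; add would add the extracted numeric value instead.
        action: inc
```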
gauge
Defines a gauge metric whose value can go up or down.
```yaml
# The metric type. Must be Gauge.
```
histogram
Defines a histogram metric whose values are bucketed.
```yaml
# The metric type. Must be Histogram.
```
tenant
The tenant stage is an action stage that sets the tenant ID for the log entry,
picking it from a field in the extracted data map.
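A hedged example, assuming a previously extracted field named `customer_id` holds the tenant; a fixed `value` can be used instead of `source`:

```yaml
- tenant:
    source: customer_id
```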
journal_config
The `journal_config` block configures how Promtail reads from the systemd
journal. It requires a build of Promtail that has journal support enabled. If
using the AMD64 Docker image, this is enabled by default.

```yaml
# When true, log messages from the journal are passed through the
```
syslog_config
The `syslog_config` block configures a syslog listener, allowing users to push
logs to Promtail with the syslog protocol.
Currently supported is IETF Syslog (RFC5424)
with and without octet counting.
The recommended deployment is to have a dedicated syslog forwarder like syslog-ng or rsyslog
in front of promtail. The forwarder can take care of the various specifications
and transports that exist (UDP, BSD syslog, …).
Octet counting is recommended as the
message framing method. In a stream with non-transparent framing,
promtail needs to wait for the next message to catch multi-line messages,
therefore delays between messages can occur.
See recommended output configurations for
syslog-ng and
rsyslog. Both configurations enable
IETF Syslog with octet-counting.
You may need to increase the open files limit for the promtail process
if many clients are connected. (`ulimit -Sn`)

```yaml
# TCP address to listen on. Has the format of "host:port".
```
Available Labels
- `__syslog_connection_ip_address`: The remote IP address.
- `__syslog_connection_hostname`: The remote hostname.
- `__syslog_message_severity`: The syslog severity parsed from the message. Symbolic name as per syslog_message.go.
- `__syslog_message_facility`: The syslog facility parsed from the message. Symbolic name as per syslog_message.go and `syslog(3)`.
- `__syslog_message_hostname`: The hostname parsed from the message.
- `__syslog_message_app_name`: The app-name field parsed from the message.
- `__syslog_message_proc_id`: The procid field parsed from the message.
- `__syslog_message_msg_id`: The msgid field parsed from the message.
- `__syslog_message_sd_<sd_id>[_<iana_enterprise_id>]_<sd_name>`: The structured-data field parsed from the message. The data field `[custom@99770 example="1"]` becomes `__syslog_message_sd_custom_99770_example`.
relabel_config
Relabeling is a powerful tool to dynamically rewrite the label set of a target
before it gets scraped. Multiple relabeling steps can be configured per scrape
configuration. They are applied to the label set of each target in order of
their appearance in the configuration file.
After relabeling, the `instance` label is set to the value of `__address__` by
default if it was not set during relabeling. The `__scheme__` and
`__metrics_path__` labels are set to the scheme and metrics path of the target
respectively. The `__param_<name>` label is set to the value of the first passed
URL parameter called `<name>`.

Additional labels prefixed with `__meta_` may be available during the relabeling
phase. They are set by the service discovery mechanism that provided the target
and vary between mechanisms.

Labels starting with `__` will be removed from the label set after target
relabeling is completed.

If a relabeling step needs to store a label value only temporarily (as the
input to a subsequent relabeling step), use the `__tmp` label name prefix. This
prefix is guaranteed to never be used by Prometheus itself.
```yaml
# The source labels select values from existing labels. Their content is concatenated
# using the configured separator and matched against the configured regular expression
# for the replace, keep, and drop actions.
```
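A hedged example showing two common actions; the label names are illustrative. The first step copies a discovered journal field into a `unit` label, and the second keeps only targets whose `unit` matches a pattern:

```yaml
relabel_configs:
  # Copy a discovered meta label into a real label on the target.
  - source_labels: ['__journal__systemd_unit']
    target_label: 'unit'
  # Keep only targets whose concatenated source labels match the regex.
  - source_labels: ['unit']
    regex: 'ssh.*\.service'
    action: keep
```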
`<regex>` is any valid
RE2 regular expression. It is
required for the `replace`, `keep`, `drop`, `labelmap`, `labeldrop` and
`labelkeep` actions. The regex is anchored on both ends. To un-anchor the regex,
use `.*<regex>.*`.
`<relabel_action>` determines the relabeling action to take:

- `replace`: Match `regex` against the concatenated `source_labels`. Then, set `target_label` to `replacement`, with match group references (`${1}`, `${2}`, ...) in `replacement` substituted by their value. If `regex` does not match, no replacement takes place.
- `keep`: Drop targets for which `regex` does not match the concatenated `source_labels`.
- `drop`: Drop targets for which `regex` matches the concatenated `source_labels`.
- `hashmod`: Set `target_label` to the `modulus` of a hash of the concatenated `source_labels`.
- `labelmap`: Match `regex` against all label names. Then copy the values of the matching labels to label names given by `replacement` with match group references (`${1}`, `${2}`, ...) in `replacement` substituted by their value.
- `labeldrop`: Match `regex` against all label names. Any label that matches will be removed from the set of labels.
- `labelkeep`: Match `regex` against all label names. Any label that does not match will be removed from the set of labels.
Care must be taken with `labeldrop` and `labelkeep` to ensure that logs are
still uniquely labeled once the labels are removed.
static_config
A `static_config` allows specifying a list of targets and a common label set
for them. It is the canonical way to specify static targets in a scrape
configuration.

```yaml
# Configures the discovery to look on the current machine. Must be either
```
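A hedged example that tails files on the local machine; the job label and file glob are illustrative:

```yaml
static_configs:
  # Discovery looks on the current machine, so "localhost" is the usual target.
  - targets:
      - localhost
    labels:
      job: varlogs
      # Promtail-specific label: glob of files to tail for this target.
      __path__: /var/log/*.log
```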
file_sd_config
File-based service discovery provides a more generic way to configure static
targets and serves as an interface to plug in custom service discovery
mechanisms.
It reads a set of files containing a list of zero or more `<static_config>`s.
Changes to all defined files are detected via disk watches
and applied immediately. Files may be provided in YAML or JSON format. Only
changes resulting in well-formed target groups are applied.
The JSON file must contain a list of static configs, using this format:
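A hedged sketch of such a file; the labels and path are illustrative:

```json
[
  {
    "targets": [ "localhost" ],
    "labels": {
      "job": "varlogs",
      "__path__": "/var/log/*.log"
    }
  }
]
```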
As a fallback, the file contents are also re-read periodically at the specified
refresh interval.

Each target has a meta label `__meta_filepath` during the
relabeling phase. Its value is set to the
filepath from which the target was extracted.

```yaml
# Patterns for files from which target groups are extracted.
```
Where `<filename_pattern>` may be a path ending in `.json`, `.yml` or `.yaml`.
The last path segment may contain a single `*` that matches any character
sequence, e.g. `my/path/tg_*.json`.
kubernetes_sd_config
Kubernetes SD configurations allow retrieving scrape targets from
Kubernetes’ REST API and always staying synchronized
with the cluster state.
One of the following `role` types can be configured to discover targets:
node
The `node` role discovers one target per cluster node with the address
defaulting to the Kubelet's HTTP port.

The target address defaults to the first existing address of the Kubernetes
node object in the address type order of `NodeInternalIP`, `NodeExternalIP`,
`NodeLegacyHostIP`, and `NodeHostName`.
Available meta labels:
- `__meta_kubernetes_node_name`: The name of the node object.
- `__meta_kubernetes_node_label_<labelname>`: Each label from the node object.
- `__meta_kubernetes_node_labelpresent_<labelname>`: `true` for each label from the node object.
- `__meta_kubernetes_node_annotation_<annotationname>`: Each annotation from the node object.
- `__meta_kubernetes_node_annotationpresent_<annotationname>`: `true` for each annotation from the node object.
- `__meta_kubernetes_node_address_<address_type>`: The first address for each node address type, if it exists.
In addition, the `instance` label for the node will be set to the node name
as retrieved from the API server.
service
The `service` role discovers a target for each service port of each service.
This is generally useful for blackbox monitoring of a service.
The address will be set to the Kubernetes DNS name of the service and respective
service port.
Available meta labels:
- `__meta_kubernetes_namespace`: The namespace of the service object.
- `__meta_kubernetes_service_annotation_<annotationname>`: Each annotation from the service object.
- `__meta_kubernetes_service_annotationpresent_<annotationname>`: "true" for each annotation of the service object.
- `__meta_kubernetes_service_cluster_ip`: The cluster IP address of the service. (Does not apply to services of type ExternalName)
- `__meta_kubernetes_service_external_name`: The DNS name of the service. (Applies to services of type ExternalName)
- `__meta_kubernetes_service_label_<labelname>`: Each label from the service object.
- `__meta_kubernetes_service_labelpresent_<labelname>`: `true` for each label of the service object.
- `__meta_kubernetes_service_name`: The name of the service object.
- `__meta_kubernetes_service_port_name`: Name of the service port for the target.
- `__meta_kubernetes_service_port_protocol`: Protocol of the service port for the target.
pod
The `pod` role discovers all pods and exposes their containers as targets. For
each declared port of a container, a single target is generated. If a container
has no specified ports, a port-free target per container is created for manually
adding a port via relabeling.
Available meta labels:
- `__meta_kubernetes_namespace`: The namespace of the pod object.
- `__meta_kubernetes_pod_name`: The name of the pod object.
- `__meta_kubernetes_pod_ip`: The pod IP of the pod object.
- `__meta_kubernetes_pod_label_<labelname>`: Each label from the pod object.
- `__meta_kubernetes_pod_labelpresent_<labelname>`: `true` for each label from the pod object.
- `__meta_kubernetes_pod_annotation_<annotationname>`: Each annotation from the pod object.
- `__meta_kubernetes_pod_annotationpresent_<annotationname>`: `true` for each annotation from the pod object.
- `__meta_kubernetes_pod_container_init`: `true` if the container is an InitContainer.
- `__meta_kubernetes_pod_container_name`: Name of the container the target address points to.
- `__meta_kubernetes_pod_container_port_name`: Name of the container port.
- `__meta_kubernetes_pod_container_port_number`: Number of the container port.
- `__meta_kubernetes_pod_container_port_protocol`: Protocol of the container port.
- `__meta_kubernetes_pod_ready`: Set to `true` or `false` for the pod's ready state.
- `__meta_kubernetes_pod_phase`: Set to `Pending`, `Running`, `Succeeded`, `Failed` or `Unknown` in the lifecycle.
- `__meta_kubernetes_pod_node_name`: The name of the node the pod is scheduled onto.
- `__meta_kubernetes_pod_host_ip`: The current host IP of the pod object.
- `__meta_kubernetes_pod_uid`: The UID of the pod object.
- `__meta_kubernetes_pod_controller_kind`: Object kind of the pod controller.
- `__meta_kubernetes_pod_controller_name`: Name of the pod controller.
endpoints
The `endpoints` role discovers targets from listed endpoints of a service. For
each endpoint address one target is discovered per port. If the endpoint is
backed by a pod, all additional container ports of the pod, not bound to an
endpoint port, are discovered as targets as well.
Available meta labels:
- `__meta_kubernetes_namespace`: The namespace of the endpoints object.
- `__meta_kubernetes_endpoints_name`: The names of the endpoints object.
- For all targets discovered directly from the endpoints list (those not additionally inferred from underlying pods), the following labels are attached:
  - `__meta_kubernetes_endpoint_hostname`: Hostname of the endpoint.
  - `__meta_kubernetes_endpoint_node_name`: Name of the node hosting the endpoint.
  - `__meta_kubernetes_endpoint_ready`: Set to `true` or `false` for the endpoint's ready state.
  - `__meta_kubernetes_endpoint_port_name`: Name of the endpoint port.
  - `__meta_kubernetes_endpoint_port_protocol`: Protocol of the endpoint port.
  - `__meta_kubernetes_endpoint_address_target_kind`: Kind of the endpoint address target.
  - `__meta_kubernetes_endpoint_address_target_name`: Name of the endpoint address target.
- If the endpoints belong to a service, all labels of the `role: service` discovery are attached.
- For all targets backed by a pod, all labels of the `role: pod` discovery are attached.
ingress
The `ingress` role discovers a target for each path of each ingress.
This is generally useful for blackbox monitoring of an ingress.
The address will be set to the host specified in the ingress spec.
Available meta labels:
- `__meta_kubernetes_namespace`: The namespace of the ingress object.
- `__meta_kubernetes_ingress_name`: The name of the ingress object.
- `__meta_kubernetes_ingress_label_<labelname>`: Each label from the ingress object.
- `__meta_kubernetes_ingress_labelpresent_<labelname>`: `true` for each label from the ingress object.
- `__meta_kubernetes_ingress_annotation_<annotationname>`: Each annotation from the ingress object.
- `__meta_kubernetes_ingress_annotationpresent_<annotationname>`: `true` for each annotation from the ingress object.
- `__meta_kubernetes_ingress_scheme`: Protocol scheme of ingress, `https` if TLS config is set. Defaults to `http`.
- `__meta_kubernetes_ingress_path`: Path from ingress spec. Defaults to `/`.
See below for the configuration options for Kubernetes discovery:
```yaml
# The information to access the Kubernetes API.
```
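A minimal hedged sketch; when Promtail runs inside the cluster, API access is typically taken from the service account, so only the role is shown here:

```yaml
kubernetes_sd_configs:
  # Discover one target per declared container port of every pod.
  - role: pod
```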
Where `<role>` must be `endpoints`, `service`, `pod`, `node`, or `ingress`.
See
this example Prometheus configuration file
for a detailed example of configuring Prometheus for Kubernetes.
You may wish to check out the 3rd party
Prometheus Operator,
which automates the Prometheus setup on top of Kubernetes.
target_config
The `target_config` block controls the behavior of reading files from discovered
targets.

```yaml
# Period to resync directories being watched and files being tailed to discover
# new ones or stop watching removed ones.
```
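A minimal sketch; `10s` is an assumed, commonly used resync period rather than a guaranteed default:

```yaml
target_config:
  sync_period: 10s
```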
Example Docker Config
It’s fairly difficult to tail Docker files on a standalone machine because they are in different locations for every OS. We recommend the Docker logging driver for local Docker installs or Docker Compose.
If running in a Kubernetes environment, you should look at the defined configs in helm and jsonnet; these leverage the Prometheus service discovery libraries (and give Promtail its name) for automatically finding and tailing pods. The jsonnet config explains with comments what each section is for.
Example Static Config
While Promtail may have been named for the Prometheus service discovery code, that same code works very well for tailing logs directly on virtual machines or bare metal, without containers or container environments.
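A hedged end-to-end sketch; the ports, Loki URL, and file glob are placeholders to adapt to your environment:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  # Replace with the address where your Loki instance is reachable.
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets:
          - localhost
        labels:
          job: varlogs
          __path__: /var/log/*.log
```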
Example Journal Config
This example reads entries from a systemd journal:
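A hedged sketch along the same lines; the journal options and relabeling are illustrative:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: journal
    journal:
      # Only read journal entries newer than this age on first start.
      max_age: 12h
      labels:
        job: systemd-journal
    relabel_configs:
      # Expose the systemd unit as a queryable label.
      - source_labels: ['__journal__systemd_unit']
        target_label: 'unit'
```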
Example Syslog Config
This example starts Promtail as a syslog receiver that can accept syslog entries over TCP:
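A hedged sketch; the listen address and relabeling follow the syslog labels described above:

```yaml
server:
  http_listen_port: 9080
  grpc_listen_port: 0

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: syslog
    syslog:
      # TCP address to listen on, in "host:port" form.
      listen_address: 0.0.0.0:1514
      labels:
        job: syslog
    relabel_configs:
      # Use the hostname from the syslog message as the host label.
      - source_labels: ['__syslog_message_hostname']
        target_label: 'host'
```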