Lab 4 - Exploring more pipelines
Lab Goal
To continue exploring telemetry pipelines with Fluent Bit by creating a pipeline that parses duplicate messages and a pipeline that processes metrics before exposing them in a format that Prometheus can collect.
Intermezzo - Jumping to the solution
If you happen to be exploring Fluent Bit as an architect and want to jump straight to the solution in action, we've included the configuration files in the easy install project from the source install support directory (see the previous installing from source lab). Instead of creating all the configurations as shown in this lab, you'll find them ready to use, as shown below, from the fluentbit-install-demo root directory:
$ ls -l support/configs-lab-4/
-rw-r--r--@ 1 erics staff 166 Jul 31 13:12 Buildfile
-rw-r--r-- 1 erics staff 1437 Jul 31 14:02 workshop-fb.yaml
More pipelines - Configuration parsing messages pipeline
So far we've installed Fluent Bit, either from source or in a container, and started setting up our first telemetry pipelines. In this lab we're going to create a few more pipelines with more complex phases. Our first telemetry pipeline is going to have multiple messages on the same key, requiring a parser to clean up those messages. We begin a new Fluent Bit configuration file, workshop-fb.yaml, starting with the INPUT phase generating success and failure messages as follows:
service:
  flush: 1
  log_level: info

pipeline:
  inputs:
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    - name: dummy
      tag: event.error
      dummy: '{"message":"false 500 error"}'
More pipelines - Breaking down input configuration
We are using the dummy input plugin to generate fake events on set intervals, 1 second by default. There are three keys used to set up our inputs:
- Name - the name of the plugin to be used.
- Tag - the tag we assign, which can be anything, to help find events of this type in the matching phase.
- Dummy - where the exact event output can be defined.
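If you want the dummy events generated at a different pace, the plugin also accepts tuning keys; the sketch below assumes the rate and samples options of the dummy plugin (rate sets events per second, samples stops generation after that many events), which you should verify against the documentation for your Fluent Bit version:

```yaml
pipeline:
  inputs:
    # Generate five success events at two events per second, then stop.
    # rate and samples are dummy plugin options; confirm them in the
    # docs for your Fluent Bit version before relying on this sketch.
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
      rate: 2
      samples: 5
```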
Our configuration is tagging each successful event with event.success and each failure event with event.error. The confusion will be caused by configuring the dummy message with the same key, message, for both event definitions. This will make our incoming events confusing to deal with.
More pipelines - Configuring first outputs
Now let's create a new output section adding the following to our configuration file:
# This entry directs all tags (it matches any we encounter) to print to
# standard output, which is our console.
outputs:
  - name: stdout
    match: '*'
    format: json_lines
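As a sketch of how a different backend would slot into this same phase, the file output plugin could write the same events to disk instead of the console; the path and file name below are only examples, not part of this lab:

```yaml
outputs:
  # Write all matching events to a local file instead of stdout.
  # The file output plugin and its path/file keys exist in Fluent Bit,
  # but the exact location used here is only an illustration.
  - name: file
    match: '*'
    path: /tmp
    file: workshop-events.log
```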
More pipelines - Running this pipeline (source)
To see if our configuration works we can test run it with our Fluent Bit installation. Depending
on the chosen install method, here we show how to run it using the source installation followed
by the container version. Below the source install is shown from the directory we created to hold
all our configuration files:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
More pipelines - Console output for pipeline (source)
The console output should look something like this. Note the alternating generated event lines, all using a message key that is hard to separate. This runs until exiting with CTRL-C:
...
[2024/07/31 14:57:46] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 14:57:46] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 14:57:46] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 14:57:46] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 14:57:46] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 14:57:46] [ info] [sp] stream processor started
{"date":1722430667.441367,"message":"true 200 success"}
{"date":1722430667.443119,"message":"false 500 error"}
{"date":1722430668.441389,"message":"true 200 success"}
{"date":1722430668.441615,"message":"false 500 error"}
{"date":1722430669.441384,"message":"true 200 success"}
{"date":1722430669.441622,"message":"false 500 error"}
...
More pipelines - Testing this pipeline (container)
Let's now try testing our configuration by running it in a container image. The first thing needed is to open a new file called Buildfile in our favorite editor. This will be used to build a new container image with our configuration inserted. Note this file needs to be in the same directory as our configuration file; otherwise, adjust the file path names:
FROM cr.fluentbit.io/fluent/fluent-bit:3.1.4
COPY ./workshop-fb.yaml /fluent-bit/etc/workshop-fb.yaml
CMD [ "fluent-bit", "-c", "/fluent-bit/etc/workshop-fb.yaml"]
More pipelines - Building this pipeline (container)
Now we'll build a new container image, naming it with a version tag, using the Buildfile as follows and assuming you are in the same directory:
$ podman build -t workshop-fb:v4 -f Buildfile
STEP 1/3: FROM cr.fluentbit.io/fluent/fluent-bit:3.1.4
STEP 2/3: COPY ./workshop-fb.yaml /fluent-bit/etc/workshop-fb.yaml
STEP 3/3: CMD [ "fluent-bit", "-c", "/fluent-bit/etc/workshop-fb.yaml"]
COMMIT workshop-fb:v4
Successfully tagged localhost/workshop-fb:v4
bcd69f8a85a024ac39604013bdf847131ddb06b1827aae91812b57479009e79a
More pipelines - Running this pipeline (container)
Now we'll run our new container image:
$ podman run --rm workshop-fb:v4
More pipelines - Console output this pipeline (container)
The console output should look something like this. Note the alternating generated event lines, all using a message key that is hard to separate. This runs until exiting with CTRL-C:
...
[2024/07/31 14:57:46] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 14:57:46] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 14:57:46] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 14:57:46] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 14:57:46] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 14:57:46] [ info] [sp] stream processor started
{"date":1722430667.441367,"message":"true 200 success"}
{"date":1722430667.443119,"message":"false 500 error"}
{"date":1722430668.441389,"message":"true 200 success"}
{"date":1722430668.441615,"message":"false 500 error"}
{"date":1722430669.441384,"message":"true 200 success"}
{"date":1722430669.441622,"message":"false 500 error"}
...
More pipelines - A filtering solution
Now we have dirty ingested data coming into our pipeline, showing multiple messages on the same key. To clean this up for use before passing it on to the backend (output), we need to make use of the filter phase. For clarity, this filter phase is where we are now working in our telemetry pipeline.
More pipelines - Adding filter configuration
To set up the filter configuration, we add the following filter entry to our workshop-fb.yaml configuration file. This filter is named modify, tries matching all incoming messages, and looks for the key message with the success value, replacing the message with new key-value pairs to create a new message as follows:
# This filter conditionally modifies events that match.
filters:
  - name: modify
    match: '*'
    condition:
      - Key_Value_Equals message 'true 200 success'
    remove: message
    add:
      - valid_message true
      - code 200
      - type success
More pipelines - Verifying filtering solution (source)
To see if our configuration works we can test run it with our Fluent Bit installation. Depending
on the chosen install method, here we show how to run it using the source installation followed
by the container version. Below the source install is shown from the directory we created to hold
our configuration:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
More pipelines - Console output for filtering (source)
The console output should look something like this. Note the alternating generated event lines, with success messages now filtered to contain keys that simplify later querying. This runs until exiting with CTRL-C:
...
[2024/08/01 11:07:46] [ info] [input:dummy:dummy.0] initializing
[2024/08/01 11:07:46] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/08/01 11:07:46] [ info] [input:dummy:dummy.1] initializing
[2024/08/01 11:07:46] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/08/01 11:07:46] [ info] [output:stdout:stdout.0] worker #0 started
[2024/08/01 11:07:46] [ info] [sp] stream processor started
{"date":1722503267.908427,"valid_message":"true","code":"200","type":"success"}
{"date":1722503267.909142,"message":"false 500 error"}
{"date":1722503268.908447,"valid_message":"true","code":"200","type":"success"}
{"date":1722503268.908705,"message":"false 500 error"}
{"date":1722503269.908412,"valid_message":"true","code":"200","type":"success"}
{"date":1722503269.908672,"message":"false 500 error"}
...
More pipelines - Verify filtering solution (container)
First we need to rebuild a new container image, naming it with a version tag, using the Buildfile as follows and assuming you are in the same directory:
$ podman build -t workshop-fb:v5 -f Buildfile
STEP 1/3: FROM cr.fluentbit.io/fluent/fluent-bit:3.1.4
STEP 2/3: COPY ./workshop-fb.yaml /fluent-bit/etc/workshop-fb.yaml
STEP 3/3: CMD [ "fluent-bit", "-c", "/fluent-bit/etc/workshop-fb.yaml"]
COMMIT workshop-fb:v5
Successfully tagged localhost/workshop-fb:v5
5dfac44c37eb9d55cb5b16c101af8183654b39d5fe84cd82edf87069072d09c2
More pipelines - Running this pipeline (container)
Now we'll run our new container image:
$ podman run --rm workshop-fb:v5
More pipelines - Console output this pipeline (container)
The console output should look something like this. Note the alternating generated event lines, with success messages now filtered to contain keys that simplify later querying. This runs until exiting with CTRL-C:
...
[2024/08/01 11:07:46] [ info] [input:dummy:dummy.0] initializing
[2024/08/01 11:07:46] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/08/01 11:07:46] [ info] [input:dummy:dummy.1] initializing
[2024/08/01 11:07:46] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/08/01 11:07:46] [ info] [output:stdout:stdout.0] worker #0 started
[2024/08/01 11:07:46] [ info] [sp] stream processor started
{"date":1722503267.908427,"valid_message":"true","code":"200","type":"success"}
{"date":1722503267.909142,"message":"false 500 error"}
{"date":1722503268.908447,"valid_message":"true","code":"200","type":"success"}
{"date":1722503268.908705,"message":"false 500 error"}
{"date":1722503269.908412,"valid_message":"true","code":"200","type":"success"}
{"date":1722503269.908672,"message":"false 500 error"}
...
More pipelines - Filtering for errors too
Let's add another filter configuration to sort out the error messages. In our workshop-fb.yaml configuration file, add a second filter, also named modify, matching all incoming messages, looking for the key message with the error value, and replacing the message with a note that the workshop is broken (reporting errors):
# This filter conditionally modifies events that match.
- name: modify
  match: '*'
  condition: Key_Value_Equals message 'false 500 error'
  remove: message
  add: workshop_status BROKEN
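For reference, with this second entry in place the complete filters section of workshop-fb.yaml combines both modify filters from the steps above:

```yaml
# These filters conditionally modify events that match.
filters:
  - name: modify
    match: '*'
    condition:
      - Key_Value_Equals message 'true 200 success'
    remove: message
    add:
      - valid_message true
      - code 200
      - type success
  - name: modify
    match: '*'
    condition: Key_Value_Equals message 'false 500 error'
    remove: message
    add: workshop_status BROKEN
```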
More pipelines - Verifying filtering errors (source)
To see if our configuration works we can test run it with our Fluent Bit installation. Depending
on the chosen install method, here we show how to run it using the source installation followed
by the container version. Below the source install is shown from the directory we created to hold
our configuration:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
More pipelines - Console output filtering errors (source)
The console output should look something like this. Note the alternating generated event lines, with error messages filtered and the workshop reported as broken. This runs until exiting with CTRL-C:
...
[2024/08/01 11:28:33] [ info] [input:dummy:dummy.0] initializing
[2024/08/01 11:28:33] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/08/01 11:28:33] [ info] [input:dummy:dummy.1] initializing
[2024/08/01 11:28:33] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/08/01 11:28:33] [ info] [output:stdout:stdout.0] worker #0 started
[2024/08/01 11:28:33] [ info] [sp] stream processor started
{"date":1722504514.185207,"valid_message":"true","code":"200","type":"success"}
{"date":1722504514.188409,"workshop_status":"BROKEN"}
{"date":1722504515.186183,"valid_message":"true","code":"200","type":"success"}
{"date":1722504515.186532,"workshop_status":"BROKEN"}
{"date":1722504516.187007,"valid_message":"true","code":"200","type":"success"}
{"date":1722504516.187225,"workshop_status":"BROKEN"}
...
More pipelines - Verify filtering errors (container)
Repeat the previous process to build a new container image, naming it with a version tag, using the Buildfile as follows and assuming you are in the same directory, then run it to see similar output (exercise for the reader):
$ podman build -t workshop-fb:v6 -f Buildfile
$ podman run --rm workshop-fb:v6
Filtering messages in a pipeline completed!
More pipelines - Metrics processing pipeline
For the next telemetry pipeline, we'll configure our Fluent Bit instance to collect metrics and then format them for exposure as a Prometheus metrics endpoint. We continue with our previous configuration file, starting with the inputs phase, adding metrics from Fluent Bit itself:
pipeline:
  inputs:
    - name: dummy
      tag: event.success
      dummy: '{"message":"true 200 success"}'
    - name: dummy
      tag: event.error
      dummy: '{"message":"false 500 error"}'
    - name: fluentbit_metrics
      tag: fb_metrics
      host: 0.0.0.0
      scrape_interval: 5s
More pipelines - Breaking down input configuration
We are using the fluentbit_metrics input plugin to collect metrics on set intervals. There are four keys used to set up this input:
- Name - the name of the plugin to be used.
- Tag - the tag we assign, which can be anything, to help find events of this type in the matching phase.
- Host - the instance to gather metrics from, our local machine.
- Scrape_Interval - how often metrics are collected.
Our configuration is tagging each metrics event with fb_metrics, which is needed in the output phase to match these events.
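If waiting the first 5 seconds for metrics is bothersome, the plugin also has a scrape_on_start option that emits one scrape immediately at startup; this key is an assumption to verify against the documentation for your Fluent Bit version:

```yaml
pipeline:
  inputs:
    # Collect Fluent Bit's own metrics, scraping once at startup and
    # then every 5 seconds. scrape_on_start is a fluentbit_metrics
    # plugin option; confirm it in the docs for your version.
    - name: fluentbit_metrics
      tag: fb_metrics
      scrape_interval: 5s
      scrape_on_start: true
```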
More pipelines - Configuring first outputs
Now let's expand the output section of our workshop-fb.yaml configuration file using our favorite editor. Add the following configuration:
# This entry directs all tags (it matches any we encounter)
# to print to standard output, which is our console.
outputs:
  - name: stdout
    match: '*'
    format: json_lines

  # This entry directs all events matching the tag to be exposed as a
  # Prometheus endpoint on the given IP and port number while adding
  # a few labels.
  - name: prometheus_exporter
    match: fb_metrics
    host: 0.0.0.0
    port: 9999
    # add user-defined labels
    add_label:
      - app fluent-bit
      - workshop lab4
More pipelines - Running this pipeline (source)
To see if our configuration works we can test run it with our Fluent Bit installation. Depending
on the chosen install method, here we show how to run it using the source installation followed
by the container version. Below the source install is shown from the directory we created to hold
all our configuration files:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
More pipelines - Console output for pipeline (source)
The console output should look something like this. We see the previous pipeline messages processing, and then after 5 seconds the Fluent Bit metrics from our machine are collected before going back to the previous pipeline messages. This runs until exiting with CTRL-C:
...
[2024/08/01 11:48:04] [ info] [input:dummy:dummy.0] initializing
[2024/08/01 11:48:04] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/08/01 11:48:04] [ info] [input:dummy:dummy.1] initializing
[2024/08/01 11:48:04] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/08/01 11:48:04] [ info] [input:fluentbit_metrics:fluentbit_metrics.2] initializing
[2024/08/01 11:48:04] [ info] [input:fluentbit_metrics:fluentbit_metrics.2] storage_strategy='memory' (memory only)
[2024/08/01 11:48:04] [ info] [output:stdout:stdout.0] worker #0 started
[2024/08/01 11:48:04] [ info] [output:prometheus_exporter:prometheus_exporter.1] listening iface=0.0.0.0 tcp_port=9999
[2024/08/01 11:48:04] [ info] [sp] stream processor started
{"date":1722505685.185163,"valid_message":"true","code":"200","type":"success"}
{"date":1722505685.187804,"workshop_status":"BROKEN"}
{"date":1722505686.185274,"valid_message":"true","code":"200","type":"success"}
{"date":1722505686.185503,"workshop_status":"BROKEN"}
{"date":1722505687.180423,"valid_message":"true","code":"200","type":"success"}
{"date":1722505687.180707,"workshop_status":"BROKEN"}
{"date":1722505688.181152,"valid_message":"true","code":"200","type":"success"}
{"date":1722505688.181327,"workshop_status":"BROKEN"}
{"date":1722505689.182123,"valid_message":"true","code":"200","type":"success"}
{"date":1722505689.182267,"workshop_status":"BROKEN"}
2024-08-01T09:48:09.182332484Z fluentbit_uptime{hostname="Erics-Pro-M2.local"} = 5
2024-08-01T09:48:09.182243358Z fluentbit_input_bytes_total{name="dummy.0"} = 235
2024-08-01T09:48:09.182243358Z fluentbit_input_records_total{name="dummy.0"} = 5
2024-08-01T09:48:09.182308525Z fluentbit_input_bytes_total{name="dummy.1"} = 230
2024-08-01T09:48:09.182308525Z fluentbit_input_records_total{name="dummy.1"} = 5
2024-08-01T09:48:04.177513624Z fluentbit_input_bytes_total{name="fluentbit_metrics.2"} = 0
...
More pipelines - Verifying metrics pipeline (source)
The Fluent Bit metrics collection pipeline is cycling through metrics collection every 5 seconds and exposing a Prometheus compatible metrics endpoint, which you can verify in a web browser using http://localhost:9999/metrics:
# HELP fluentbit_uptime Number of seconds that Fluent Bit has been running.
# TYPE fluentbit_uptime counter
fluentbit_uptime{app="fluent-bit",workshop="lab4",hostname="Erics-Pro-M2.local"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="dummy.0"} 235
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="dummy.0"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="dummy.1"} 230
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="dummy.1"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 0
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 0
# HELP fluentbit_input_metrics_scrapes_total Number of total metrics scrapes
# TYPE fluentbit_input_metrics_scrapes_total counter
fluentbit_input_metrics_scrapes_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 1
# HELP fluentbit_filter_records_total Total number of new records processed.
# TYPE fluentbit_filter_records_total counter
fluentbit_filter_records_total{app="fluent-bit",workshop="lab4",name="message_cleaning_parser"} 10
# HELP fluentbit_filter_bytes_total Total number of new bytes processed.
# TYPE fluentbit_filter_bytes_total counter
fluentbit_filter_bytes_total{app="fluent-bit",workshop="lab4",name="message_cleaning_parser"} 625
...
More pipelines - Building this pipeline (container)
Now we'll build a new container image to test our metrics pipeline, naming it with a version tag, using the Buildfile as follows and assuming you are in the same directory:
$ podman build -t workshop-fb:v7 -f Buildfile
STEP 1/3: FROM cr.fluentbit.io/fluent/fluent-bit:3.1.4
STEP 2/3: COPY ./workshop-fb.yaml /fluent-bit/etc/workshop-fb.yaml
STEP 3/3: CMD [ "fluent-bit", "-c", "/fluent-bit/etc/workshop-fb.yaml"]
COMMIT workshop-fb:v7
Successfully tagged localhost/workshop-fb:v7
0aa626b103aca59a11c0c3e5d996b4bba7b225c86ef30f226ce51f8e877a7bb5
More pipelines - Running metrics pipeline (container)
Now we'll run our new container image with port mapping from localhost port 9999 to the container's port 9999:
$ podman run --rm -p 9999:9999 workshop-fb:v7
More pipelines - Console output metrics pipeline (container)
The console output should look something like this. We see the previous pipeline messages processing, and then after 5 seconds the Fluent Bit metrics from our machine are collected before going back to the previous pipeline messages. This runs until exiting with CTRL-C:
...
[2024/08/01 11:48:04] [ info] [input:dummy:dummy.0] initializing
[2024/08/01 11:48:04] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/08/01 11:48:04] [ info] [input:dummy:dummy.1] initializing
[2024/08/01 11:48:04] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/08/01 11:48:04] [ info] [input:fluentbit_metrics:fluentbit_metrics.2] initializing
[2024/08/01 11:48:04] [ info] [input:fluentbit_metrics:fluentbit_metrics.2] storage_strategy='memory' (memory only)
[2024/08/01 11:48:04] [ info] [output:stdout:stdout.0] worker #0 started
[2024/08/01 11:48:04] [ info] [output:prometheus_exporter:prometheus_exporter.1] listening iface=0.0.0.0 tcp_port=9999
[2024/08/01 11:48:04] [ info] [sp] stream processor started
{"date":1722505685.185163,"valid_message":"true","code":"200","type":"success"}
{"date":1722505685.187804,"workshop_status":"BROKEN"}
{"date":1722505686.185274,"valid_message":"true","code":"200","type":"success"}
{"date":1722505686.185503,"workshop_status":"BROKEN"}
{"date":1722505687.180423,"valid_message":"true","code":"200","type":"success"}
{"date":1722505687.180707,"workshop_status":"BROKEN"}
{"date":1722505688.181152,"valid_message":"true","code":"200","type":"success"}
{"date":1722505688.181327,"workshop_status":"BROKEN"}
{"date":1722505689.182123,"valid_message":"true","code":"200","type":"success"}
{"date":1722505689.182267,"workshop_status":"BROKEN"}
2024-08-01T09:48:09.182332484Z fluentbit_uptime{hostname="Erics-Pro-M2.local"} = 5
2024-08-01T09:48:09.182243358Z fluentbit_input_bytes_total{name="dummy.0"} = 235
2024-08-01T09:48:09.182243358Z fluentbit_input_records_total{name="dummy.0"} = 5
2024-08-01T09:48:09.182308525Z fluentbit_input_bytes_total{name="dummy.1"} = 230
2024-08-01T09:48:09.182308525Z fluentbit_input_records_total{name="dummy.1"} = 5
2024-08-01T09:48:04.177513624Z fluentbit_input_bytes_total{name="fluentbit_metrics.2"} = 0
...
More pipelines - Verifying metrics pipeline (container)
The Fluent Bit metrics collection pipeline is cycling through metrics collection every 5 seconds and exposing a Prometheus compatible metrics endpoint, which you can verify in a web browser using http://localhost:9999/metrics:
# HELP fluentbit_uptime Number of seconds that Fluent Bit has been running.
# TYPE fluentbit_uptime counter
fluentbit_uptime{app="fluent-bit",workshop="lab4",hostname="Erics-Pro-M2.local"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="dummy.0"} 235
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="dummy.0"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="dummy.1"} 230
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="dummy.1"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 0
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 0
# HELP fluentbit_input_metrics_scrapes_total Number of total metrics scrapes
# TYPE fluentbit_input_metrics_scrapes_total counter
fluentbit_input_metrics_scrapes_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 1
# HELP fluentbit_filter_records_total Total number of new records processed.
# TYPE fluentbit_filter_records_total counter
fluentbit_filter_records_total{app="fluent-bit",workshop="lab4",name="message_cleaning_parser"} 10
# HELP fluentbit_filter_bytes_total Total number of new bytes processed.
# TYPE fluentbit_filter_bytes_total counter
fluentbit_filter_bytes_total{app="fluent-bit",workshop="lab4",name="message_cleaning_parser"} 625
...
More pipelines - Conclusion metrics pipeline
Once you have your metrics collection pipeline up and running, you can configure a Prometheus instance to scrape the provided endpoint; Prometheus stores the metrics in its time series database and gives you the ability to query them for visualization and dashboard creation.
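As a sketch of that Prometheus side, a minimal scrape configuration pointing at this endpoint might look like the following; the job name is an example, and the target assumes Fluent Bit is running on the same host:

```yaml
# prometheus.yml (fragment): scrape the Fluent Bit exporter every 5s.
scrape_configs:
  - job_name: 'fluent-bit-workshop'   # example job name
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9999']   # prometheus_exporter host:port
```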
Metrics processing pipeline completed!
More pipelines - Basic understanding of pipelines
You now have a solid understanding of how to build a telemetry pipeline, having created both basic and more advanced examples exploring how to manage both logs and metrics. There is much more to be done, so onwards!
Lab completed - Results
# HELP fluentbit_uptime Number of seconds that Fluent Bit has been running.
# TYPE fluentbit_uptime counter
fluentbit_uptime{app="fluent-bit",workshop="lab4",hostname="Erics-Pro-M2.local"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="dummy.0"} 235
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="dummy.0"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="dummy.1"} 230
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="dummy.1"} 5
# HELP fluentbit_input_bytes_total Number of input bytes.
# TYPE fluentbit_input_bytes_total counter
fluentbit_input_bytes_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 0
# HELP fluentbit_input_records_total Number of input records.
# TYPE fluentbit_input_records_total counter
fluentbit_input_records_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 0
# HELP fluentbit_input_metrics_scrapes_total Number of total metrics scrapes
# TYPE fluentbit_input_metrics_scrapes_total counter
fluentbit_input_metrics_scrapes_total{app="fluent-bit",workshop="lab4",name="fluentbit_metrics.2"} 1
...
Next up, understanding backpressure in a telemetry pipeline...
Contact - are there any questions?