Lab 3 - Exploring first pipelines
Lab Goal
To explore our first telemetry pipelines using Fluent Bit by creating three pipelines: first, a
simple one with input and output phases routing to a single destination; second, a refinement
that outputs to several destinations based on routing rules; and third, one that configures the
filtering phase to remove unused data from our incoming events.
Intermezzo - Jumping to the solution
If you happen to be exploring Fluent Bit as an architect and want to jump straight to the solution
in action, we've included the configuration files in the support directory of the easy install
project from the previous installing-from-source lab. Instead of creating all the configurations
as shown in this lab, you'll find them ready to use as shown below from the fluentbit-install-demo
root directory:
$ ls -l support/configs-lab-3/
-rw-r--r--@ 1 erics staff 166 Jul 31 13:12 Buildfile
-rw-r--r-- 1 erics staff 1437 Jul 31 14:02 workshop-fb.yaml
First pipelines - Configuration setup
We've installed Fluent Bit, either from source or in a container, and are ready to start setting
up our first telemetry pipelines. To do that we need to go back and remember the phases that
make up our pipeline. To get started we need to define our INPUT and OUTPUT pipeline phases.
First pipelines - Creating a workshop configuration
Create a new directory workshop-fluentbit where we will store the first pipeline configuration
file:
$ mkdir workshop-fluentbit
$ touch workshop-fluentbit/workshop-fb.yaml
First pipelines - Configuring first inputs
In our workshop directory workshop-fluentbit, let's start by opening the workshop-fb.yaml file
in our favorite editor and adding two input configurations: one to generate test INFO level log
messages using the provided dummy input plugin and one to generate ERROR level log messages:
service:
  flush: 1
  log_level: info

pipeline:
  inputs:
    - name: dummy
      tag: workshop.info
      dummy: '{"message":"This is workshop INFO message", "level":"INFO", "color": "yellow"}'
    - name: dummy
      tag: workshop.error
      dummy: '{"message":"This is workshop ERROR message", "level":"ERROR", "color": "red"}'
First pipelines - Breaking down input configuration
Explore the dummy input plugin documentation for all the details, but this plugin generates fake
events at set intervals, 1 second by default. There are three keys used to set up our inputs:
- Name - the name of the plugin to be used.
- Tag - the tag we assign, which can be anything, to help find events of this type in the
matching phase.
- Dummy - where the exact event output can be defined. By default it just prints
{ "message" : "dummy"}.
Our configuration is tagging each INFO level event with workshop.info and each ERROR level event
with workshop.error. The configuration also overrides the default "dummy" message with custom
event text.
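If you want to adjust how the test data is generated, the dummy plugin also accepts keys such as
rate (events per second) and samples (stop after a fixed number of events); check the plugin
documentation for your Fluent Bit version to confirm them. A minimal sketch, assuming those keys,
of an extra, faster generator:

  inputs:
    - name: dummy
      tag: workshop.debug
      # Hypothetical extra generator: 2 events per second, stopping after 10 samples.
      dummy: '{"message":"This is workshop DEBUG message", "level":"DEBUG"}'
      rate: 2
      samples: 10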
First pipelines - Configuring first outputs
Now let's create a new output configuration by adding the following code below our inputs
section. Note the format is set to provide JSON lines in the console output. Save the file when
done:
  # This entry directs all tags (it matches any we encounter) to print to
  # standard output, which is our console.
  outputs:
    - name: stdout
      match: '*'
      format: json_lines
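The stdout plugin supports a few other formats besides json_lines (for example msgpack and json),
and the timestamp rendering can usually be tuned with json_date_format. As a sketch, assuming that
key is available in your version, ISO 8601 timestamps would look like:

  outputs:
    - name: stdout
      match: '*'
      format: json_lines
      # Render the event timestamp as ISO 8601 instead of a double.
      json_date_format: iso8601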
First pipelines - Running first pipeline (source)
To see if our configuration works we can test run it with our Fluent Bit installation. Depending
on the chosen install method, here we show how to run it using the source installation followed
by the container version. Below the source install is shown from the directory we created to hold
all our configuration files:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
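As a side note, Fluent Bit's --dry-run flag should let you validate the configuration syntax
without actually starting the pipeline, which can save a round trip when editing:

$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml --dry-run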
First pipelines - Console output first pipeline (source)
The console output should look something like this, noting that we've cut out the ASCII logo at
startup. Note the alternating generated event lines with INFO and ERROR messages that run until
exiting with CTRL-C:
...
[2024/07/31 11:47:45] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 11:47:45] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 11:47:45] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 11:47:45] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 11:47:45] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 11:47:45] [ info] [sp] stream processor started
{"date":1722419266.712218,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722419266.712724,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
{"date":1722419267.713068,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722419267.713238,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
...
First pipelines - Testing first pipeline (container)
Now if we are using containers, we can test the configuration by running it with our Fluent Bit
container installation. The first thing needed is to open a new file called Buildfile in our
favorite editor. This is going to be used to build a new container image that includes our
configuration. Note this file needs to be in the same directory as your configuration, otherwise
adjust the file path names:
FROM cr.fluentbit.io/fluent/fluent-bit:3.1.4
COPY ./workshop-fb.yaml /fluent-bit/etc/workshop-fb.yaml
CMD [ "fluent-bit", "-c", "/fluent-bit/etc/workshop-fb.yaml"]
First pipelines - Building first pipeline (container)
Now we'll build a new container image, naming it with a version tag, as follows using the
Buildfile and assuming you are in the same directory:
$ podman build -t workshop-fb:v1 -f Buildfile
STEP 1/3: FROM cr.fluentbit.io/fluent/fluent-bit:3.1.4
STEP 2/3: COPY ./workshop-fb.yaml /fluent-bit/etc/workshop-fb.yaml
STEP 3/3: CMD [ "fluent-bit", "-c", "/fluent-bit/etc/workshop-fb.yaml"]
COMMIT workshop-fb:v1
Successfully tagged localhost/workshop-fb:v1
aa945dcb6a4cea87a26080b814c2b4a522c4f1e84e9a52a0e4e4491630a4b503
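If you are using Docker instead of Podman, the build should be equivalent; the only notable
difference is that Docker requires the build context path to be passed explicitly:

$ docker build -t workshop-fb:v1 -f Buildfile .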
First pipelines - Running first pipeline (container)
Now we'll run our new container image. Since this first pipeline only writes to the console, no
volume mounts are needed yet and a plain run like the following should be enough:
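$ podman run --rm workshop-fb:v1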
First pipelines - Console output first pipeline (container)
The console output should look something like this, noting that we've cut out the ASCII logo at
startup. Note the alternating generated event lines with INFO and ERROR messages that run until
exiting with CTRL-C:
...
[2024/07/31 11:13:53] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 11:13:53] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 11:13:53] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 11:13:53] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 11:13:53] [ info] [sp] stream processor started
[2024/07/31 11:13:53] [ info] [output:stdout:stdout.0] worker #0 started
{"date":1722424433.480621,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722424433.480725,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
{"date":1722424434.478095,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722424434.478199,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
...
First pipelines - Examining solution phases (input)
Let's review our solution by walking through Fluent Bit's pipeline phases, as discussed in
lab 1. Our input phase defined where we were getting
our data from to feed the pipeline, in our case using a dummy plugin to generate two events
every second:
  inputs:
    - name: dummy
      tag: workshop.info
      dummy: '{"message":"This is workshop INFO message", "level":"INFO", "color": "yellow"}'
    - name: dummy
      tag: workshop.error
      dummy: '{"message":"This is workshop ERROR message", "level":"ERROR", "color": "red"}'
First pipelines - Examining solution phases (parser)
Our first pipeline does not make use of a parser phase, as there is no need yet for our
incoming data to be put in a structured format.
First pipelines - Examining solution phases (filter)
Our first pipeline uses the filtering phase to match tags for eventual modifying, enriching, or
deleting of events. Up to now it's a very simple match on all events, so simple filtering:
  # This entry directs all tags (it matches any we encounter) to
  # print to standard output, which is our console.
  outputs:
    - name: stdout
      match: '*'             <<<<<< Filtering phase.
      format: json_lines
First pipelines - Examining solution phases (buffer)
Buffering in our first pipeline uses the default in-memory storage and does not yet make use of
file-system-based options.
First pipelines - Examining solution phases (routing)
Our first pipeline uses the assigned Tag and Match configurations found in the outputs section
of our workshop-fb.yaml file to determine which output destinations to send data to. Ours is
simply sending all tagged events to the standard output (console):
  # This entry directs all tags (it matches any we encounter) to
  # print to standard output, which is our console.
  outputs:
    - name: stdout           <<<<< Routing phase based on matching,
      match: '*'             <<<<< in this case, all tags.
      format: json_lines
First pipelines - Examining solution phases (output)
The final phase is OUTPUT, which is where Fluent Bit sends processed events on to their
destinations. Our solution is using the standard output plugin to dump all matched events to the
console. Later you'll see other ways to configure outputs:
  # This entry directs all tags (it matches any we encounter) to
  # print to standard output, which is our console.
  outputs:
    - name: stdout           <<<<< Output phase / destination.
      match: '*'
      format: json_lines
First pipeline completed!
First pipelines - Adding routing for new outputs
For our second pipeline, we're going to expand our routing and output phases using the existing
generated input events. To do that we just need to add two new output sections to our
configuration file workshop-fb.yaml.
We want to use a new output plugin called file and have it route all events with the tag *.info
to a file, /tmp/workshop-INFO.log. We are also going to configure another section to route all
events with the tag *.error to a different file, /tmp/workshop-ERROR.log.
Let's see how we can update our workshop-fb.yaml file on the next slide.
First pipelines - Adding new outputs sections
To add the new routing and outputs, we add two new output sections to our configuration file
workshop-fb.yaml as follows (leave the existing stdout section for the console). This should send
every event generated to the console, and fill the two files with only the indicated messages:
...
  # This entry directs all tags (it matches any we encounter) to print to
  # standard output, which is our console.
  outputs:
    - name: stdout
      match: '*'
      format: json_lines

    # This entry directs all INFO level events to its own log file.
    - name: file
      file: /tmp/workshop-INFO.log
      match: '*.info'

    # This entry directs all ERROR level events to its own log file.
    - name: file
      file: /tmp/workshop-ERROR.log
      match: '*.error'
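The file output plugin can also split the directory from the file name using a separate path key;
a sketch of the same INFO output written that way (assuming path and file behave as documented)
would be:

    - name: file
      # Directory and file name kept in separate keys.
      path: /tmp
      file: workshop-INFO.log
      match: '*.info'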
First pipelines - Running the second pipeline (source)
To see if our configuration works we can test run it with our Fluent Bit installation, first
using the source installation followed by the container version. Below the source install is
shown from the directory we created to hold all our configuration files:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
First pipelines - Console output second pipeline (source)
The console output should look something like this, noting the same full output of alternating
generated event lines with INFO and ERROR messages that run until exiting with CTRL-C:
...
[2024/07/31 13:25:20] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 13:25:20] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 13:25:20] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 13:25:20] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 13:25:20] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 13:25:20] [ info] [output:file:file.1] worker #0 started
[2024/07/31 13:25:20] [ info] [output:file:file.2] worker #0 started
[2024/07/31 13:25:20] [ info] [sp] stream processor started
{"date":1722425121.204232,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722425121.206307,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
{"date":1722425122.203805,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722425122.203966,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
{"date":1722425123.200998,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722425123.201261,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
...
First pipelines - File output second pipeline (source)
The file output for workshop-INFO.log should contain only the generated event lines with INFO
messages, and workshop-ERROR.log the ERROR messages, that ran until exiting with CTRL-C:
$ cat /tmp/workshop-INFO.log
workshop.info: [1722425527.482558656, {"message":"This is workshop INFO message","level":"INFO","color":"yellow"}]
workshop.info: [1722425528.477954438, {"message":"This is workshop INFO message","level":"INFO","color":"yellow"}]
workshop.info: [1722425529.482211387, {"message":"This is workshop INFO message","level":"INFO","color":"yellow"}]
...
$ cat /tmp/workshop-ERROR.log
workshop.error: [1722425527.482744241, {"message":"This is workshop ERROR message","level":"ERROR","color":"red"}]
workshop.error: [1722425528.478069646, {"message":"This is workshop ERROR message","level":"ERROR","color":"red"}]
workshop.error: [1722425529.482277679, {"message":"This is workshop ERROR message","level":"ERROR","color":"red"}]
...
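As an optional sanity check on the routing, you can confirm that no ERROR events leaked into the
INFO file and vice versa; both of these counts should come back as 0:

$ grep -c ERROR /tmp/workshop-INFO.log
$ grep -c INFO /tmp/workshop-ERROR.log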
First pipelines - Testing second pipeline (container)
To test with a container, we need to rebuild a new container image, naming it with a new version
tag, as follows using the Buildfile and assuming you are in the same directory:
$ podman build -t workshop-fb:v2 -f Buildfile
STEP 1/3: FROM cr.fluentbit.io/fluent/fluent-bit:3.1.4
STEP 2/3: COPY ./workshop-fb.yaml /fluent-bit/etc/workshop-fb.yaml
STEP 3/3: CMD [ "fluent-bit", "-c", "/fluent-bit/etc/workshop-fb.yaml"]
COMMIT workshop-fb:v2
Successfully tagged localhost/workshop-fb:v2
e1130e1e48cf730bef63535af02cf4bdcd8f6eb1938dd8a892ba103ad935fc92
First pipelines - Running second pipeline (container)
Now we'll run our new container image, but we need a way for the container to write to the two
log files so that we can check them (rather than only inside the container filesystem). We mount
our local workshop directory to the container's /tmp directory so we can see the files on our
local machine as follows:
$ podman run --rm -v ./:/tmp workshop-fb:v2
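If you are running Docker instead of Podman, the same idea should work, though Docker generally
expects an absolute path for the volume source:

$ docker run --rm -v "$(pwd)":/tmp workshop-fb:v2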
First pipelines - Console output second pipeline (container)
The container console output should look something like this, noting the same full output of
alternating generated event lines with INFO and ERROR messages that run until exiting with
CTRL-C:
...
[2024/07/31 13:25:20] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 13:25:20] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 13:25:20] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 13:25:20] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 13:25:20] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 13:25:20] [ info] [output:file:file.1] worker #0 started
[2024/07/31 13:25:20] [ info] [output:file:file.2] worker #0 started
[2024/07/31 13:25:20] [ info] [sp] stream processor started
{"date":1722425121.204232,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722425121.206307,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
{"date":1722425122.203805,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722425122.203966,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
{"date":1722425123.200998,"message":"This is workshop INFO message","level":"INFO","color":"yellow"}
{"date":1722425123.201261,"message":"This is workshop ERROR message","level":"ERROR","color":"red"}
...
First pipelines - File output second pipeline (container)
Due to mounting the local file system into the container, if you check your current directory you
will find the file output for workshop-INFO.log and workshop-ERROR.log. Validate that they
contain the right events as follows:
$ cat workshop-INFO.log
workshop.info: [1722425527.482558656, {"message":"This is workshop INFO message","level":"INFO","color":"yellow"}]
workshop.info: [1722425528.477954438, {"message":"This is workshop INFO message","level":"INFO","color":"yellow"}]
workshop.info: [1722425529.482211387, {"message":"This is workshop INFO message","level":"INFO","color":"yellow"}]
...
$ cat workshop-ERROR.log
workshop.error: [1722425527.482744241, {"message":"This is workshop ERROR message","level":"ERROR","color":"red"}]
workshop.error: [1722425528.478069646, {"message":"This is workshop ERROR message","level":"ERROR","color":"red"}]
workshop.error: [1722425529.482277679, {"message":"This is workshop ERROR message","level":"ERROR","color":"red"}]
...
Second pipeline completed!
First pipelines - Adding filtering to the mix
For our third pipeline, we're going to add to the filtering phase, using the existing generated
input events, by dropping a key/value pair. To do that we are adding a new filters section to the
pipeline section of our configuration file workshop-fb.yaml.
We are going to remove the color key from all events using a filter plugin called modify. This
filter will apply to all incoming events and remove the key color if it exists.
Let's see how we can update our workshop-fb.yaml configuration file on the next slide.
First pipelines - Adding new filters configuration
Open our configuration file workshop-fb.yaml and add a filters section after the last inputs
entry as follows:
  inputs:
    - name: dummy
      tag: workshop.error
      dummy: '{"message":"This is workshop ERROR message", "level":"ERROR", "color": "red"}'

  # This filter is applied to all events and removes the key 'color'
  # if it exists.
  filters:
    - name: modify
      match: '*'
      remove: color
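For reference, the modify filter can do more than remove keys; if you wanted to keep the data but
under a different name, a rename rule (assuming the rename property works as documented) would
look something like this instead:

  filters:
    - name: modify
      match: '*'
      # Keep the value, just store it under a different key name.
      rename: color event_color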
First pipelines - Running the third pipeline (source)
To see if our configuration works we can test run it with our Fluent Bit installation, first
using the source installation followed by the container version. Below the source install is
shown from the directory we created to hold all our configuration files:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
First pipelines - Console output third pipeline (source)
The console output should look something like this, noting the same full output of alternating
generated event lines with INFO and ERROR messages that run until exiting with CTRL-C. Note the
color key and value pair has been removed:
...
[2024/07/31 13:45:23] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 13:45:23] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 13:45:23] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 13:45:23] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 13:45:23] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 13:45:23] [ info] [output:file:file.1] worker #0 started
[2024/07/31 13:45:23] [ info] [output:file:file.2] worker #0 started
[2024/07/31 13:45:23] [ info] [sp] stream processor started
{"date":1722426324.033251,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722426324.043854,"message":"This is workshop ERROR message","level":"ERROR"}
{"date":1722426325.036876,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722426325.037228,"message":"This is workshop ERROR message","level":"ERROR"}
{"date":1722426326.034123,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722426326.034346,"message":"This is workshop ERROR message","level":"ERROR"}
...
First pipelines - File output third pipeline (source)
The file output for workshop-INFO.log should contain only the generated event lines with INFO
messages, and workshop-ERROR.log the ERROR messages, both without the color key and value pairs:
$ cat /tmp/workshop-INFO.log
workshop.info: [1722426324.033251000, {"message":"This is workshop INFO message","level":"INFO"}]
workshop.info: [1722426325.036876000, {"message":"This is workshop INFO message","level":"INFO"}]
workshop.info: [1722426326.034123000, {"message":"This is workshop INFO message","level":"INFO"}]
...
$ cat /tmp/workshop-ERROR.log
workshop.error: [1722426324.043854000, {"message":"This is workshop ERROR message","level":"ERROR"}]
workshop.error: [1722426325.037228000, {"message":"This is workshop ERROR message","level":"ERROR"}]
workshop.error: [1722426326.034346000, {"message":"This is workshop ERROR message","level":"ERROR"}]
...
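As an optional check that the filter really dropped the key from both files, the following counts
should both report 0:

$ grep -c color /tmp/workshop-INFO.log
$ grep -c color /tmp/workshop-ERROR.log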
First pipelines - Building third pipeline (container)
Using the same Buildfile (our configuration file name has not changed), rebuild a new container
image, giving it a new version tag, as follows:
$ podman build -t workshop-fb:v3 -f Buildfile
STEP 1/3: FROM cr.fluentbit.io/fluent/fluent-bit:3.1.4
STEP 2/3: COPY ./workshop-fb.yaml /fluent-bit/etc/workshop-fb.yaml
STEP 3/3: CMD [ "fluent-bit", "-c", "/fluent-bit/etc/workshop-fb.yaml"]
COMMIT workshop-fb:v3
Successfully tagged localhost/workshop-fb:v3
cbaf3efbafd3114b17051803812750baa495b7fd0c9319e5ba9711fa31302a37
First pipelines - Running third pipeline (container)
Now we'll run our new container image, but we need a way for the container to write to the two
log files so that we can check them (rather than only inside the container filesystem). We mount
our local workshop directory to the container's /tmp directory so we can see the files on our
local machine as follows:
$ podman run --rm -v ./:/tmp workshop-fb:v3
First pipelines - Console output third pipeline (container)
The container console output should look something like this, noting the same full output of
alternating generated event lines with INFO and ERROR messages that run until exiting with
CTRL-C, except the color key and value pair is removed:
...
[2024/07/31 13:45:23] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 13:45:23] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 13:45:23] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 13:45:23] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 13:45:23] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 13:45:23] [ info] [output:file:file.1] worker #0 started
[2024/07/31 13:45:23] [ info] [output:file:file.2] worker #0 started
[2024/07/31 13:45:23] [ info] [sp] stream processor started
{"date":1722426324.033251,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722426324.043854,"message":"This is workshop ERROR message","level":"ERROR"}
{"date":1722426325.036876,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722426325.037228,"message":"This is workshop ERROR message","level":"ERROR"}
{"date":1722426326.034123,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722426326.034346,"message":"This is workshop ERROR message","level":"ERROR"}
...
First pipelines - File output third pipeline (container)
Due to mounting the local file system into the container, if you check your current directory you
will find the file output for workshop-INFO.log and workshop-ERROR.log. Validate that they
contain the right events without the color key and value pair as follows:
$ cat workshop-INFO.log
workshop.info: [1722426324.033251000, {"message":"This is workshop INFO message","level":"INFO"}]
workshop.info: [1722426325.036876000, {"message":"This is workshop INFO message","level":"INFO"}]
workshop.info: [1722426326.034123000, {"message":"This is workshop INFO message","level":"INFO"}]
...
$ cat workshop-ERROR.log
workshop.error: [1722426324.043854000, {"message":"This is workshop ERROR message","level":"ERROR"}]
workshop.error: [1722426325.037228000, {"message":"This is workshop ERROR message","level":"ERROR"}]
workshop.error: [1722426326.034346000, {"message":"This is workshop ERROR message","level":"ERROR"}]
...
Third pipeline completed!
Bonus filtering - Conditionally modifying events
This is going to be a bonus exercise where we conditionally modify our input events. To do that
we are modifying the filters section in our configuration file workshop-fb.yaml.
In addition to removing the color key from all events, we'll be using the modify filter plugin to
look for events with the level set to ERROR. In those cases we'll have the filter remove the key
level if it exists and set a new key to note that the workshop is broken.
Let's see how we can update our configuration file on the next slide.
Bonus filtering - Conditionally filtering
Open the configuration file workshop-fb.yaml and add a second filter as follows:
  # This filter is applied to all events and removes the key 'color'
  # if it exists.
  filters:
    - name: modify
      match: '*'
      remove: color

    # This filter conditionally modifies events that match.
    - name: modify
      match: '*'
      condition: Key_Value_Equals level ERROR
      remove: level
      add: workshop_status BROKEN
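As an aside, the modify filter supports a number of other condition types (for example Key_exists
and Key_value_does_not_equal). One more hypothetical rule, sketched here only to show the
pattern, could mark the healthy INFO events as well:

    # (Hypothetical) mark INFO events so every event carries a workshop_status.
    - name: modify
      match: '*'
      condition: Key_Value_Equals level INFO
      add: workshop_status OK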
Bonus filtering - Breaking down configuration
Explore the modify filter plugin documentation for all the details, but we're using this plugin
to modify events based on matching a key or by matching a condition as follows:
- Name - the name of the plugin to be used.
- Match - set to match all events (using the star wildcard).
- Condition - if the condition is met, in this case Key_Value_Equals for the following key and
value pair, then we apply any following filter lines. In this case we apply removal of the key
level and add a new key as shown.
Bonus filtering - Verifying conditional filter
Using either the source version (running with the new configuration) or the container version
(building a new image) as done previously, we run our conditionally filtered pipeline and our
console output should look something like this:
...
[2024/07/31 14:02:29] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 14:02:29] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 14:02:29] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 14:02:29] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 14:02:29] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 14:02:29] [ info] [output:file:file.1] worker #0 started
[2024/07/31 14:02:29] [ info] [sp] stream processor started
[2024/07/31 14:02:29] [ info] [output:file:file.2] worker #0 started
{"date":1722427350.297903,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722427350.298642,"message":"This is workshop ERROR message","workshop_status":"BROKEN"}
{"date":1722427351.294622,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722427351.295032,"message":"This is workshop ERROR message","workshop_status":"BROKEN"}
{"date":1722427352.29802,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722427352.298218,"message":"This is workshop ERROR message","workshop_status":"BROKEN"}
...
Bonus filtering pipeline completed!
First pipelines - Basic understanding of pipelines
You now have a basic understanding of how to build a simple telemetry pipeline, and you've spent
time making a few adjustments to create new telemetry pipelines. There is much more to be done,
so onwards!
Lab completed - Results
...
[2024/07/31 14:02:29] [ info] [input:dummy:dummy.0] initializing
[2024/07/31 14:02:29] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/07/31 14:02:29] [ info] [input:dummy:dummy.1] initializing
[2024/07/31 14:02:29] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/07/31 14:02:29] [ info] [output:stdout:stdout.0] worker #0 started
[2024/07/31 14:02:29] [ info] [output:file:file.1] worker #0 started
[2024/07/31 14:02:29] [ info] [sp] stream processor started
[2024/07/31 14:02:29] [ info] [output:file:file.2] worker #0 started
{"date":1722427350.297903,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722427350.298642,"message":"This is workshop ERROR message","workshop_status":"BROKEN"}
{"date":1722427351.294622,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722427351.295032,"message":"This is workshop ERROR message","workshop_status":"BROKEN"}
{"date":1722427352.29802,"message":"This is workshop INFO message","level":"INFO"}
{"date":1722427352.298218,"message":"This is workshop ERROR message","workshop_status":"BROKEN"}
...
Next up, exploring more telemetry pipelines...
Contact - are there any questions?