OTel integration - Configuring the collector
There are three things to configure when processing telemetry data through an OTel Collector: a receiver, a processor, and an exporter, as shown below. Let's get started by installing an OTel Collector from a released binary:
OTel integration - Installing the collector
Installing the OTel collector on your local machine can be done using a released binary found on the OTel Collector Releases page. There you will find all the assets listed, so you can choose the one for your local operating system. In this lab all examples are shown using the OSX version of the collector, which we download from the following link (click to download):
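If you prefer the command line, the asset URL can be assembled from the version and platform. The URL pattern below is an assumption based on the GitHub release asset naming, so double-check it against the releases page:

```shell
# Build the download URL for a collector release asset.
# VERSION, OS, and ARCH values are assumptions for this lab; pick the
# values matching your machine from the releases page (e.g. linux/arm64).
VERSION=0.105.0
OS=darwin
ARCH=amd64
URL="https://github.com/open-telemetry/opentelemetry-collector-releases/releases/download/v${VERSION}/otelcol-contrib_${VERSION}_${OS}_${ARCH}.tar.gz"
echo "$URL"
```

A tool such as `curl -LO "$URL"` would then fetch the archive instead of clicking in the browser.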
On the next slide we will install the OTel collector from the above file.
OTel integration - Unpacking the collector
From the location of our downloaded file, we can now unpack it into a new directory in our workshop directory:
$ mkdir otelcol-binary
$ cd otelcol-binary
$ tar xzvf {PATH_TO_DOWNLOAD}/otelcol-contrib_0.105.0_darwin_amd64.tar.gz
x README.md
x otelcol-contrib
OTel integration - Verifying the collector
Now let's make sure it's working correctly by running the following command (note you might
have to approve this newly downloaded software first to get it running), which reports the version
of this OTel collector:
$ ./otelcol-contrib -v
otelcol-contrib version 0.105.0
OTel integration - Configuring the receivers
Next we need to configure our collector, starting with the receivers section, where we define a delivery destination that our Fluent Bit pipeline can push its telemetry (logs) to. Create a new configuration file workshop-otel.yaml and add the following to provide an OTLP endpoint at localhost:4317.
# This file is our workshop OpenTelemetry configuration.
#
receivers:
otlp:
protocols:
http:
endpoint: 0.0.0.0:4317
OTel integration - Configuring the exporters
Now expand the configuration file workshop-otel.yaml with an exporters section to define where our telemetry (logs) will be sent after processing. In this case we want to provide for output to the console as follows:
# This file is our workshop OpenTelemetry configuration.
#
receivers:
otlp:
protocols:
http:
endpoint: 0.0.0.0:4317
exporters:
logging:
loglevel: info
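Note that recent collector versions deprecate the logging exporter and its loglevel option (the collector's own startup output will warn about this). A functionally equivalent sketch using the newer debug exporter would be:

```yaml
exporters:
  debug:
    verbosity: normal
```

If you switch, use debug in place of logging wherever the exporter is referenced in the configuration.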
OTel integration - Configuring the processing
Finally, we add a service section to complete our collector configuration. Here we are defining the level of log reporting (debug) and creating a collector pipeline for our logs. We assign specific receivers and exporters, so that we are receiving over the OTLP protocol (using our Fluent Bit OTel Envelope) and exporting to the console only:
# This file is our workshop OpenTelemetry configuration.
#
receivers:
otlp:
protocols:
http:
endpoint: 0.0.0.0:4317
exporters:
logging:
loglevel: info
service:
telemetry:
logs:
level: debug
pipelines:
logs:
receivers: [otlp]
exporters: [logging]
OTel integration - Running the collector
To see if our configuration works we can test run it. Below we use our collector binary install
from the directory we created to hold all our configuration files:
$ ./otelcol-contrib --config workshop-otel.yaml
OTel integration - Console output collector
The console output should look something like this (scroll right to view longer lines). Note the highlighted line shows that the port is open and receiving on localhost:4317. This runs until exiting with CTRL-C, but for now LEAVE THE COLLECTOR RUNNING as we still need to send log events from our pipeline:
2024-08-02T07:42:45.349Z info service@v0.105.0/service.go:116 Setting up own telemetry...
2024-08-02T07:42:45.349Z info service@v0.105.0/service.go:119 OpenCensus bridge is disabled for Collector telemetry and will be removed in a future version, use --feature-gates=-service.disableOpenCensusBridge to re-enable
2024-08-02T07:42:45.349Z info service@v0.105.0/telemetry.go:96 Serving metrics {"address": ":8888", "metrics level": "Normal"}
2024-08-02T07:42:45.350Z info exporter@v0.105.0/exporter.go:280 Deprecated component. Will be removed in future releases. {"kind": "exporter", "data_type": "logs", "name": "logging"}
2024-08-02T07:42:45.350Z warn common/factory.go:68 'loglevel' option is deprecated in favor of 'verbosity'. Set 'verbosity' to equivalent value to preserve behavior. {"kind": "exporter", "data_type": "logs", "name": "logging", "loglevel": "info", "equivalent verbosity level": "Normal"}
2024-08-02T07:42:45.350Z debug receiver@v0.105.0/receiver.go:313 Beta component. May change in the future. {"kind": "receiver", "name": "otlp", "data_type": "logs"}
2024-08-02T07:42:45.350Z info service@v0.105.0/service.go:198 Starting otelcol-contrib... {"Version": "0.105.0", "NumCPU": 4}
2024-08-02T07:42:45.350Z info extensions/extensions.go:34 Starting extensions...
2024-08-02T07:42:45.350Z info otlpreceiver@v0.105.0/otlp.go:152 Starting HTTP server {"kind": "receiver", "name": "otlp", "data_type": "logs", "endpoint": "0.0.0.0:4317"}
2024-08-02T07:42:45.350Z info service@v0.105.0/service.go:224 Everything is ready. Begin running and processing data.
2024-08-02T07:42:45.351Z info localhostgate/featuregate.go:63 The default endpoints for all servers in components have changed to use localhost instead of 0.0.0.0. Disable the feature gate to temporarily revert to the previous default. {"feature gate ID": "component.UseLocalHostAsDefaultHost"}
...
OTel integration - Adding pipeline outputs
Now we head back to our Fluent Bit pipeline to add an output destination to push our telemetry data from our pipeline to the collector endpoint. Add a second entry called opentelemetry to the existing outputs section in our configuration file workshop-fb.yaml as shown (and be sure to save the file when done):
...
# This entry directs all tags (it matches any we encounter)
# to print to standard output, which is our console.
outputs:
- name: stdout
match: '*'
format: json_lines
# this entry is for pushing logs to an OTel collector.
- name: opentelemetry
match: '*'
host: ${OTEL_HOST}
port: 4317
logs_body_key_attributes: true # allows for OTel attribute modification.
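The host field above references an OTEL_HOST environment variable, so make sure it is set in the shell that runs Fluent Bit before starting the pipeline. For a collector on the same machine, that is simply (localhost here is an assumption matching this lab's single-machine setup):

```shell
# Point the Fluent Bit pipeline at our locally running collector.
# "localhost" assumes the collector runs on this same machine;
# container setups may need the host's address instead.
export OTEL_HOST=localhost
echo "$OTEL_HOST"
```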
OTel integration - Running collector pipeline
To see if our configuration works we can test run it with our Fluent Bit installation. Depending
on the chosen install method, here we show how to run it using the source installation followed
by the container version. Below the source install is shown from the directory we created to hold
all our configuration files:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
OTel integration - Console output collector pipeline
The console output from the Fluent Bit pipeline looks something like this. Note that every time a log event is ingested, it's processed into an OTel Envelope with three lines in the console, and then sent to the collector on port 4317. Let's go back to the collector console and check ingestion (Note: this runs until exiting with CTRL-C):
...
[2024/08/04 11:40:36] [ info] [input:dummy:dummy.0] initializing
[2024/08/04 11:40:36] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/08/04 11:40:36] [ info] [output:stdout:stdout.0] worker #0 started
[2024/08/04 11:40:36] [ info] [sp] stream processor started
{"date":4294967295.0,"resource":{},"scope":{}}
{"date":1722764437.781882,"service":"backend","log_entry":"Generating a 200 success code."}
{"date":4294967294.0}
[2024/08/04 11:40:38] [ info] [output:opentelemetry:opentelemetry.1] 127.0.0.1:4317, HTTP status=200
{"date":4294967295.0,"resource":{},"scope":{}}
{"date":1722764438.781915,"service":"backend","log_entry":"Generating a 200 success code."}
{"date":4294967294.0}
[2024/08/04 11:40:39] [ info] [output:opentelemetry:opentelemetry.1] 127.0.0.1:4317, HTTP status=200
{"date":4294967295.0,"resource":{},"scope":{}}
{"date":1722764439.781968,"service":"backend","log_entry":"Generating a 200 success code."}
{"date":4294967294.0}
[2024/08/04 11:40:40] [ info] [output:opentelemetry:opentelemetry.1] 127.0.0.1:4317, HTTP status=200
...
OTel integration - Console output collector
The console output from the OTel collector looks something like this, with one line for each log event sent from our Fluent Bit-based pipeline. This means it's working fine! What we don't see is the log event details, so let's add a new exporter to send our log data to a file. (Note: this runs until exiting with CTRL-C):
...
2024-08-04T11:40:38.823+0200 info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
2024-08-04T11:40:39.784+0200 info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
2024-08-04T11:40:40.782+0200 info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
...
OTel integration - Configuring collector file exporter
Stop the collector with CTRL-C and open its configuration file workshop-otel.yaml. Add the following to write log output to a file called output.json in the same directory we run the collector from. We also need to add the new exporter in the pipelines section for logs as shown below:
...
exporters:
file:
path: output.json
logging:
loglevel: info
service:
telemetry:
logs:
level: debug
pipelines:
logs:
receivers: [otlp]
exporters: [file, logging]
OTel integration - Running collector file exporter
To see if our configuration works we can test run it. Below we use our collector binary install
from the directory we created to hold all our configuration files:
$ ./otelcol-contrib --config workshop-otel.yaml
OTel integration - Verifying logs to collector to file
While the collector is waiting on telemetry data, we can (re)run the pipeline again to start
ingesting, processing, and pushing log events to our collector:
$ [PATH_TO]/fluent-bit --config=workshop-fb.yaml
OTel integration - Output pipeline to collector to file
Verify that the pipeline with Fluent Bit is ingesting, processing, and forwarding logs (console). Check that the collector is receiving these logs (console). Finally, verify that the collector is exporting to the output.json file as shown below, with one log event for each:
# Pipeline with Fluent Bit ingest, processing and output.
...
{"date":4294967295.0,"resource":{},"scope":{}}
{"date":1722765219.745945,"service":"backend","log_entry":"Generating a 200 success code."}
{"date":4294967294.0}
[2024/08/04 11:53:40] [ info] [output:opentelemetry:opentelemetry.1] 127.0.0.1:4317, HTTP status=200
...
# OTel collector ingesting events.
...
2024-08-04T11:53:40.785+0200 info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
...
# Viewing file output.json
$ tail -f ./output.json
{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1722765219745945000",
"body":{"kvlistValue":{"values":[{"key":"service","value":{"stringValue":"backend"}},{"key":"log_entry",
"value":{"stringValue":"Generating a 200 success code."}}]}},"traceId":"","spanId":""}]}]}]}
...
OTel integration - Pretty file output
A more visually pleasing view of the JSON file can be achieved as follows, with each log event presented in structured JSON:
$ tail -f ./output.json | jq
{
"resourceLogs": [
{
"resource": {},
"scopeLogs": [
{
"scope": {},
"logRecords": [
{
"timeUnixNano": "1722765219745945000",
"body": {
"kvlistValue": {
"values": [
{
"key": "service",
"value": { "stringValue": "backend"}
},
{
"key": "log_entry",
"value": {
"stringValue": "Generating a 200 success code."
}
}
]
}
},
"traceId": "", "spanId": ""
...
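Beyond pretty-printing, jq can also pull out individual fields. The filter below is a sketch that extracts just the log message from a record shaped like the output above; it is shown here against an inline sample, but the same filter works directly on ./output.json:

```shell
# Extract the log_entry message from an OTLP JSON log record.
# The inline sample mirrors one line of output.json from this lab.
record='{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1722765219745945000","body":{"kvlistValue":{"values":[{"key":"service","value":{"stringValue":"backend"}},{"key":"log_entry","value":{"stringValue":"Generating a 200 success code."}}]}}}]}]}]}'
printf '%s\n' "$record" |
  jq -r '.resourceLogs[].scopeLogs[].logRecords[].body.kvlistValue.values[]
         | select(.key == "log_entry") | .value.stringValue'
```

Swapping "log_entry" for "service" in the select() would pull out the service name instead.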
Integration of pipelines with Fluent Bit to OpenTelemetry completed!
Lab completed - Results
# Pipeline with Fluent Bit ingest, processing and output.
...
{"date":4294967295.0,"resource":{},"scope":{}}
{"date":1722765219.745945,"service":"backend","log_entry":"Generating a 200 success code."}
{"date":4294967294.0}
[2024/08/04 11:53:40] [ info] [output:opentelemetry:opentelemetry.1] 127.0.0.1:4317, HTTP status=200
...
# OTel collector ingesting events.
...
2024-08-04T11:53:40.785+0200 info LogsExporter {"kind": "exporter", "data_type": "logs", "name": "logging", "resource logs": 1, "log records": 1}
...
# Viewing file output.json
$ tail -f ./output.json
{"resourceLogs":[{"resource":{},"scopeLogs":[{"scope":{},"logRecords":[{"timeUnixNano":"1722765219745945000",
"body":{"kvlistValue":{"values":[{"key":"service","value":{"stringValue":"backend"}},{"key":"log_entry",
"value":{"stringValue":"Generating a 200 success code."}}]}},"traceId":"","spanId":""}]}]}]}
...
Contact - are there any questions?
Lab and workshop completed!