Commit
Add notes about enriching the collector data with k8s attributes (#364)
Co-authored-by: Florian Bacher <[email protected]>
pirgeo and bacherfl authored Nov 11, 2024
1 parent 3279dc3 commit 9609841
Showing 2 changed files with 179 additions and 4 deletions.
18 changes: 18 additions & 0 deletions .chloggen/dash-add-selfmon-attributes.yaml
@@ -0,0 +1,18 @@
# Use this changelog template to create an entry for release notes.

# One of 'breaking', 'deprecation', 'new_component', 'enhancement', 'bug_fix'
change_type: enhancement

# The name of the component, or a single word describing the area of concern, (e.g. filelogreceiver)
component: docs

# A brief description of the change. Surround your text with quotes ("") if it needs to start with a backtick (`).
note: Add a note about k8s enrichment of collector internal telemetry

# Mandatory: One or more tracking issues related to the change. You can use the PR number here if no issue exists.
issues: [ 364 ]

# (Optional) One or more lines of additional information to render under the primary note.
# These lines will be padded with 2 spaces and then inserted directly into the document.
# Use pipe (|) for multiline entries.
subtext:
165 changes: 161 additions & 4 deletions docs/dashboards/README.md
@@ -30,11 +30,11 @@ Required attributes are:
Dynatrace accepts metrics data with Delta temporality via OTLP/HTTP.
Collector and Collector Contrib versions 0.107.0 and above as well as Dynatrace collector versions 0.12.0 and above support exporting metrics data in that format.
Earlier versions ignore the `temporality_preference` flag and would, therefore, require additional processing (cumulative to delta conversion) before ingestion.
It is possible to do this conversion in a collector, but it would make the setup more complicated, so it is initially omitted in this document.
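On supported versions, delta temporality can be requested directly in the collector's self-monitoring telemetry settings. The snippet below is a sketch rather than the exact configuration from this repository; the `temporality_preference` flag is the one referenced above, and the surrounding reader structure assumes a recent (0.107.0+) collector:

```yaml
service:
  telemetry:
    metrics:
      readers:
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                # request delta temporality (ignored by older versions, as noted above)
                temporality_preference: delta
```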

The dashboards only use metrics that have a `service.name` from this list: `dynatrace-otel-collector,otelcorecol,otelcontribcol,otelcol,otelcol-contrib`.
At the top of the dashboards, you can filter for specific `service.name`s.
You can also edit the variable and add service names if your collector has a different `service.name` and does not show up on the dash.

### Adding `service.instance.id` to the allow list
While `service.name` is on the Dynatrace OTLP metrics ingest allow list by default, `service.instance.id` is not.
@@ -50,13 +50,13 @@ Self-monitoring data can be exported from the collector via the OTLP protocol.
The configuration below assumes the environment variables `DT_ENDPOINT` and `DT_API_TOKEN` to be set.
In order to send data to Dynatrace via OTLP, you will need to supply a Dynatrace endpoint and an ingest token with the `metrics.ingest` scope set.
See the [Dynatrace docs](https://docs.dynatrace.com/docs/extend-dynatrace/opentelemetry/getting-started/otlp-export) for more information.
The `DT_ENDPOINT` environment variable should contain the base URL and the base `/api/v2/otlp` (e.g. `https://{your-environment-id}.live.dynatrace.com/api/v2/otlp`).
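In Kubernetes, these variables are typically wired into the collector container from a secret. A minimal sketch follows; the secret name `dynatrace-otlp-credentials` and its key are placeholders, not names from this repository:

```yaml
env:
  - name: DT_ENDPOINT
    # base URL plus /api/v2/otlp, as described above
    value: "https://{your-environment-id}.live.dynatrace.com/api/v2/otlp"
  - name: DT_API_TOKEN
    valueFrom:
      secretKeyRef:
        # placeholder secret; create it with your metrics.ingest token
        name: dynatrace-otlp-credentials
        key: dt-api-token
```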

To send self-monitoring data to Dynatrace, use the following configuration:

```yaml
service:
  # turn on self-monitoring
  telemetry:
    metrics:
      # metrics verbosity level. Higher verbosity means more metrics.
@@ -83,6 +83,163 @@ Note that the OTel collector can automatically merge configuration files for you

Of course, you can also add the configuration directly to your existing collector configuration.
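For orientation, a complete self-monitoring telemetry section might look roughly like the following. This is a sketch, not the exact file from this repository; the header list shape and the `/v1/metrics` path suffix follow the declarative telemetry schema of recent collector versions and may differ on older ones:

```yaml
service:
  telemetry:
    metrics:
      # metrics verbosity level. Higher verbosity means more metrics.
      level: detailed
      readers:
        - periodic:
            exporter:
              otlp:
                protocol: http/protobuf
                # DT_ENDPOINT already contains the /api/v2/otlp base path
                endpoint: "${env:DT_ENDPOINT}/v1/metrics"
                headers:
                  - name: Authorization
                    value: "Api-Token ${env:DT_API_TOKEN}"
```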

## Enriching OTel collector self-monitoring data with Kubernetes attributes

Out of the box, the collector will add `service.instance.id` to all exported metrics.
This allows distinguishing between collector instances.
However, the `service.instance.id` is a randomly created UUID, and is therefore not very easy to interpret.
It is possible to enrich the exported data with additional attributes, for example Kubernetes attributes, that are easier for humans to interpret.

There are two main ways of adding Kubernetes attributes to OTel collector telemetry data:
1. Routing all collector self-monitoring data through a "gateway" or "self-monitoring" collector, and using the `k8sattributesprocessor` to enrich that telemetry data.
2. Injecting the relevant attributes into the container environment, reading them in the collector, and adding them to the telemetry generated on the collector.

```mermaid
flowchart LR
subgraph alignOthersSg[" "]
subgraph dtOtlpApi[" "]
otlpApi[Dynatrace OTLP API]:::API
end
subgraph legend[Legend]
direction LR
leg1[ ]:::hide-- application telemetry data -->leg2[ ]:::hide
leg3[ ]:::hide-. collector self-monitoring data .-> leg4[ ]:::hide
leg5[ ]:::hide-. inject k8s attributes .-> leg6[ ]:::hide
classDef hide height:0px
end
end
subgraph alignAlternativesSg[" "]
subgraph alt1[Self-monitoring collector]
attributesProcessorApi[k8s API]
app1[Application w/ OTel]:::OpenTelemetryApp
app2[Application w/ OTel]:::OpenTelemetryApp
app3[Application w/ OTel]:::OpenTelemetryApp
otelcol1[OpenTelemetry Collector]:::OTelCollector
otelcol2[OpenTelemetry Collector]:::OTelCollector
selfmonGateway[Self-monitoring Collector with k8sattributesprocessor]:::SelfmonCollector
app1-->otelcol1
app2-->otelcol2
app3-->otelcol2
otelcol1-->otlpApi
otelcol1-.->selfmonGateway
otelcol2-.->selfmonGateway
otelcol2-->otlpApi
attributesProcessorApi -. k8sattributesprocessor retrieves k8s attributes from the k8s API .- selfmonGateway
selfmonGateway-.->otlpApi
end
subgraph alt2[Direct k8s enrichment]
k8sInjection[k8s API]
app4[Application w/ OTel]:::OpenTelemetryApp
app5[Application w/ OTel]:::OpenTelemetryApp
app6[Application w/ OTel]:::OpenTelemetryApp
otelcol4[OpenTelemetry Collector]:::OTelCollector
otelcol5[OpenTelemetry Collector]:::OTelCollector
k8sInjection -.- otelcol4
k8sInjection -. inject k8s attributes as environment variables via the k8s downward API .- otelcol5
app4-->otelcol4
app5-->otelcol5
app6-->otelcol5
otelcol4-.->otlpApi
otelcol5-.->otlpApi
otelcol4-->otlpApi
otelcol5-->otlpApi
end
end
classDef OpenTelemetryApp fill:#adc9ff,stroke:#1966FF
classDef OTelCollector fill:#9afee0,stroke:#02D394
classDef SelfmonCollector fill:#C2C2C2,stroke:#707070
classDef AlternativeClass rx:100,fill:
classDef hideSg fill:white,stroke:white
classDef API fill:#ae70ff,stroke:#7F1AFF
class alignAlternativesSg,alignOthersSg,dtOtlpApi hideSg
class alt1,alt2 AlternativeClass
linkStyle default stroke-width:3px
linkStyle 1,7,8,11,17,18 stroke:#707070
linkStyle 2,10,12,13 stroke:#C2C2C2,stroke-width:2px
```

### Gateway collector

In this approach, collectors are configured to send their internal telemetry to one dedicated collector, which enriches the incoming telemetry with k8s attributes using the `k8sattributesprocessor`.
The `k8sattributesprocessor` retrieves this data from the k8s API, and attaches it to the telemetry passing through it.
One limitation of this approach is that the automatic enrichment does not apply to the telemetry that the gateway collector instance itself exports.
[Learn how to set up the gateway collector with the k8sattributesprocessor in the Dynatrace docs](https://docs.dynatrace.com/docs/extend-dynatrace/opentelemetry/collector/use-cases/kubernetes/k8s-enrich#kubernetes).
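A minimal gateway pipeline could look like the sketch below. This is an illustration, not the configuration from the linked docs; the receiver and exporter names follow the standard contrib components, and the extracted metadata keys are a typical selection:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.pod.name
        - k8s.node.name
exporters:
  otlphttp:
    endpoint: "${env:DT_ENDPOINT}"
    headers:
      Authorization: "Api-Token ${env:DT_API_TOKEN}"
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [k8sattributes]
      exporters: [otlphttp]
```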

### Read attributes from the container environment

The Kubernetes downward API allows injecting information about the Kubernetes environment a pod is running in.
Pod and container information can be exposed to the container via environment variables.
[The Kubernetes documentation](https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables) explains how to specify data like the node name, namespace, or pod name as environment variables.
These variables are then available to be read from inside the container.
An example pod spec could be (values in `<>` are placeholders for your actual pod specifications, dependent on your setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: <your-pod-name>
spec:
  containers:
    - name: <your-container>
      image: <your-image>
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
```

The OTel collector configuration for self-monitoring data allows adding attributes based on environment variables.
The configuration below assumes that you injected the environment variables `MY_NODE_NAME`, `MY_POD_NAME`, and `MY_POD_NAMESPACE` into your OTel collector container, and adds the attributes on the exported telemetry.
No extra collector is required to enrich in-flight telemetry.

```yaml
service:
  telemetry:
    resource:
      # This section reads the previously injected environment variables
      # and attaches them to the telemetry the collector generates about itself.
      k8s.namespace.name: "${env:MY_POD_NAMESPACE}"
      k8s.pod.name: "${env:MY_POD_NAME}"
      k8s.node.name: "${env:MY_NODE_NAME}"
    # the rest of the configuration did not change compared to above.
    metrics:
      level: detailed
      readers:
        - periodic:
            exporter:
              otlp:
                endpoint: <...>
```

## More screenshots

### Dashboard containing all collectors
