Creating Exponential Histograms #4383

Open · jjneely opened this issue Jan 9, 2025 · 1 comment
Labels: bug (Something isn't working)

jjneely commented Jan 9, 2025

Describe your environment

OS: macOS Sonoma
Python version: Python 3.13.1
Versions:
opentelemetry-api==1.29.0
opentelemetry-distro==0.50b0
opentelemetry-exporter-otlp==1.29.0
opentelemetry-exporter-otlp-proto-common==1.29.0
opentelemetry-exporter-otlp-proto-grpc==1.29.0
opentelemetry-exporter-otlp-proto-http==1.29.0
opentelemetry-instrumentation==0.50b0
opentelemetry-proto==1.29.0
opentelemetry-sdk==1.29.0
opentelemetry-semantic-conventions==0.50b0

What happened?

When I want exponential histograms, I can use an environment variable to make them the default, like this:

OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION=base2_exponential_bucket_histogram python ./init_metrics.py
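
As far as I can tell, that variable just changes the OTLP exporter's default histogram aggregation, which I could presumably also do directly in code with something like this sketch (assuming OTLPMetricExporter accepts a preferred_aggregation mapping keyed by instrument type, which I have not verified):

from opentelemetry.sdk.metrics import Histogram
from opentelemetry.sdk.metrics.view import ExponentialBucketHistogramAggregation
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# Sketch only: ask the exporter itself to use the exponential aggregation for
# Histogram instruments instead of the explicit-bucket default.
exporter = OTLPMetricExporter(
    preferred_aggregation={Histogram: ExponentialBucketHistogramAggregation()},
)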

But Views look like the general-purpose way to control aggregation in code: a View that matches the histogram instruments and sets the aggregation policy. I've tried something like this:

def init_metrics(exporter: Optional[MetricExporter]=OTLPMetricExporter()) -> None:
    metric_reader = PeriodicExportingMetricReader(exporter)
    provider = MeterProvider(
        metric_readers=[metric_reader],

        # Use Exponential Histograms by default
        views=[
            View(
                instrument_type=Histogram,
                instrument_name="*",
                aggregation=ExponentialBucketHistogramAggregation(),
                #aggregation=ExplicitBucketHistogramAggregation(
                #   boundaries=(0.0, 1.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0, 250.0), record_min_max=True
                #)
            )
        ]
    )

    # Sets the global default meter provider
    metrics.set_meter_provider(provider)

However, this approach produces an explicit-bounds histogram with the default bucket boundaries every time. I have also tried using the same View to set an ExplicitBucketHistogramAggregation with custom buckets, but those custom buckets are ignored as well.
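
One thing I haven't ruled out is that the View never matches at all. There are two classes named Histogram in the SDK (the synchronous instrument in opentelemetry.sdk.metrics and the exported data class in opentelemetry.sdk.metrics.export), so a quick check like this sketch would at least tell me which one my View is keyed on:

from opentelemetry.sdk.metrics import Histogram as HistogramInstrument
from opentelemetry.sdk.metrics.export import Histogram as HistogramData

# I assume View(instrument_type=...) is matched against the instrument class,
# not the exported data class that happens to share the same name.
print(HistogramInstrument.__module__, HistogramInstrument.__qualname__)
print(HistogramData.__module__, HistogramData.__qualname__)
print(HistogramInstrument is HistogramData)  # these are two different types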

Looking at get_meter(), I'm concerned that it doesn't give the instruments access to the configured views and therefore ignores any global views I set up:

https://github.com/open-telemetry/opentelemetry-python/blob/main/opentelemetry-sdk/src/opentelemetry/sdk/metrics/_internal/__init__.py#L555
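
Purely as a debugging aid, I've been checking whether the provider even holds the views I pass in, by poking at what I assume is the private _sdk_config attribute (so this may not be a reliable check and could change in any SDK release):

from opentelemetry.sdk.metrics import Histogram, MeterProvider
from opentelemetry.sdk.metrics.view import View, ExponentialBucketHistogramAggregation

provider = MeterProvider(
    views=[
        View(
            instrument_type=Histogram,
            instrument_name="*",
            aggregation=ExponentialBucketHistogramAggregation(),
        )
    ],
)
# Assumption: _sdk_config is a private implementation detail of the SDK.
print(provider._sdk_config.views)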

Am I setting this up incorrectly, or is this a bug?

Steps to Reproduce

Uploading init_metrics.py.txt…

Expected Result

I should see my local OTEL Collector, running in debug mode, dump out exponential buckets.

Actual Result

Unless I use the environment variable, I always get explicit buckets with the default boundaries.

Additional context

I'm attempting to build some examples and boilerplate for custom metrics for my teams.

Would you like to implement a fix?

None

jjneely added the bug label Jan 9, 2025

jjneely (Author) commented Jan 9, 2025

Does the upload not work?

import time
from typing import Iterable, Optional

from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.view import View, ExponentialBucketHistogramAggregation, ExplicitBucketHistogramAggregation
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
    MetricExporter,
    Histogram
)
from opentelemetry.exporter.otlp.proto.grpc.metric_exporter import OTLPMetricExporter

# init_metrics() -- Use this in your application to init and configure metrics.
# This function can take an optional exporter, but the default is to use the
# gRPC OTLP binary protocol to a Collector.  Other interesting exporters might
# be ConsoleMetricExporter to view the metric data points on the console in
# testing.  See example below.
def init_metrics(exporter: Optional[MetricExporter]=OTLPMetricExporter()) -> None:
    metric_reader = PeriodicExportingMetricReader(exporter)
    provider = MeterProvider(
        metric_readers=[metric_reader],

        # Use Exponential Histograms by default
        views=[
            View(
                instrument_type=Histogram,
                instrument_name="*",
                #aggregation=ExponentialBucketHistogramAggregation(),
                aggregation=ExplicitBucketHistogramAggregation(
                    boundaries=(0.0, 1.0, 5.0, 10.0, 25.0, 50.0, 75.0, 100.0, 250.0), record_min_max=True
                )
            )
        ]
    )

    # Sets the global default meter provider
    metrics.set_meter_provider(provider)


# test_metrics(): Generate all three types of metrics and dump their data
# points to the console.
def test_metrics() -> None:
    def gauge_callback(options: metrics.CallbackOptions) -> Iterable[metrics.Observation]:
        # Very simple callback for an Observable Gauge returns real time
        # measurements
        yield metrics.Observation(time.time(), {"timezone": "utc"})

    #init_metrics(ConsoleMetricExporter()) # Write metrics to STDOUT for a human to see
    init_metrics() # Write metrics to the OTEL Collector
    meter = metrics.get_meter(__name__)

    # Create a simple counter
    traffic_counter = meter.create_counter(
        "YOURAPP.traffic.counter",
        unit="1",
        description="Counts incoming API requests"
    )

    # Observable Gauge
    meter.create_observable_gauge(
        name="YOURAPP.time.seconds",
        description="Current UNIX Epoch",
        callbacks=[gauge_callback],
        unit="seconds"
    )

    # Exponential histogram -- Look Ma!  No buckets!
    histogram = meter.create_histogram(
        name="YOURAPP.request_latency.nanoseconds",
        description="Request duration distribution",
        unit="ns"
    )
    print(dir(histogram))

    for i in range(100):
        # Time request started
        start = time.time()

        # Increment counter
        traffic_counter.add(1, {"type": "INSERT"})

        # Measure duration in nanoseconds to match the instrument's "ns" unit
        print((time.time() - start) * 1_000_000_000)
        histogram.record((time.time() - start) * 1_000_000_000)

if __name__ == "__main__":
    test_metrics()
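
For reference, I compare the two behaviours by running the same script with and without the environment variable and watching the Collector's debug output:

python ./init_metrics.py
OTEL_EXPORTER_OTLP_METRICS_DEFAULT_HISTOGRAM_AGGREGATION=base2_exponential_bucket_histogram python ./init_metrics.py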
