Trace-based Log Sampler
This processor is used to sample logs based on the sampling decision of the trace they correlate to.
How It Works
When a trace is sampled, the processor caches its traceId.
Logs are then filtered:
- If a log references a known sampled traceId, it is forwarded.
- If a log references an unknown or unsampled traceId, it is buffered for a configurable amount of time. After the buffer time expires, the traceId is checked again: if it is now known, the log is forwarded; otherwise, it is discarded.
Configuration
| Field | Type | Default | Description |
|---|---|---|---|
| buffer_duration_traces | duration | 180s | The duration for which traceIds are remembered. The timer starts when the first trace or span with a given traceId is received. |
| buffer_duration_logs | duration | 90s | The duration for which logs are buffered before being re-evaluated. If your pipeline includes e.g. a tailbasedsampler processor, set this above its collection time. This ensures that logs "wait" until the traces have been processed. |
Example Configuration
The following is an example configuration for the processor. It buffers traceIds for 180 seconds and logs for 90 seconds.
Note that both a traces and a logs pipeline are required, and both must use the same instance of the processor.
```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  otlp:
    endpoint: 0.0.0.0:4317

processors:
  logtracesampler:
    buffer_duration_traces: 180s
    buffer_duration_logs: 90s

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [logtracesampler]
      exporters: [otlp]
    logs:
      receivers: [otlp]
      processors: [logtracesampler]
      exporters: [otlp]
```
Building
When building a custom collector, you can add this processor to the manifest as follows (refer to Building a custom collector for more information):
```yaml
processors:
  - gomod: gitea.t000-n.de/t.behrendt/tracebasedlogsampler v0.0.0
```