6473 – Update collector configuration to allow multiple services #110
Conversation
I also renamed things to make them more pender-specific, so it's obvious that these are only pender-related metrics. Adding other exporters and pipelines seems pretty straightforward: just add '/service_name'. But adding a second Prometheus receiver, or making sure it sends scraped data to the correct pipeline, seems more complex.
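For illustration, a rough sketch of what that '/service_name' aliasing looks like in collector config. The check-api names, the dataset names, and the honeycomb_api_key variable are assumptions for the example, not values from this PR:

```yaml
exporters:
  otlp/pender_metrics:
    endpoint: api.honeycomb.io:443
    headers:
      x-honeycomb-team: ${env:honeycomb_api_key}
      x-honeycomb-dataset: pender-metrics      # assumed dataset name
  otlp/check_api_metrics:                      # hypothetical second service
    endpoint: api.honeycomb.io:443
    headers:
      x-honeycomb-team: ${env:honeycomb_api_key}
      x-honeycomb-dataset: check-api-metrics

service:
  pipelines:
    metrics/pender:
      receivers: [prometheus]
      exporters: [otlp/pender_metrics]
    metrics/check_api:
      # both pipelines read from the same receiver here, so each would see
      # all scraped metrics -- the routing problem discussed below
      receivers: [prometheus]
      exporters: [otlp/check_api_metrics]
```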
```diff
   scrape_interval: 15s
   static_configs:
-    - targets: ["pender:3200"]
+    - targets: ["${env:pender_metrics_endpoint}"]
```
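For context, a sketch of the Prometheus receiver block this hunk sits in; the receiver key and job name are assumptions, and only the lines shown in the diff are from the PR:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: pender           # assumed job name
          scrape_interval: 15s
          static_configs:
            # pender_metrics_endpoint would be set to something like
            # "pender:3200" in the collector's environment
            - targets: ["${env:pender_metrics_endpoint}"]
```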
what if the environment variable was just prometheus_targets and included the ["..."] text? 🙂
A question: let's say the targets are the pender endpoint and the check-api endpoint. How do we make sure we send them to the correct exporter and Dataset?
I'm wondering if we would have separate prometheus configs for each endpoint, or if we could use one for all of them.
Or does that not matter?
ah right, the way we're using datasets here has them as one per service, and metrics in honeycomb require them to be attached to a dataset. so we will need a separate exporter for each service
we could have them all use the same receiver and then just filter per service based on the metric attribute app or something like that using a processor (https://opentelemetry.io/docs/collector/configuration/#processors) but it might be simpler to just have completely separate pipelines 😒
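A minimal sketch of that processor-based filtering idea, using the filter processor with OTTL conditions. The resource attribute and its value are assumptions here, since they depend on what the scrape actually attaches to the metrics:

```yaml
processors:
  filter/pender_only:
    error_mode: ignore
    metrics:
      metric:
        # drop anything that is not pender's metrics before exporting to the
        # pender dataset (attribute name and value are assumptions)
        - 'resource.attributes["service.name"] != "pender"'

service:
  pipelines:
    metrics/pender:
      receivers: [prometheus]
      processors: [filter/pender_only]
      exporters: [otlp/pender_metrics]
```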
I found this blog post, which I think might be relevant: https://www.honeycomb.io/blog/simplify-opentelemetry-pipelines-headers-setter
For now, do we want to assume completely separate pipelines or not? If we go with completely separate pipelines, do you still want changes to the env var?
let's assume separate pipelines and keep the env var you have! I think you'll have to change the prometheus receiver name to be prometheus/pender or something similarly unique though (https://github.com/open-telemetry/opentelemetry-collector/blob/main/receiver/README.md#configuring-receivers)
I think that does not work for the prometheus receiver: open-telemetry/opentelemetry-operator#3034
... well then. it looks like we will have to run an otel collector for each service 😒 or try to get the processor filtering above working
ew
We can set up different Prometheus jobs, but the issue here would be how to send each one to the correct Dataset, right?
Maybe we can take a step back and just have one main 'check' Dataset, instead of one per service? I just assumed one per service made sense, I guess. Then we could have a more generic approach: if we need different configuration we could set up a second job; if we don't, we can pass the environment variables as you first suggested.
This would make the configuration easier, I think. What do you think? Are there any drawbacks?
So we can route a specific service's Prometheus metrics to that service's Honeycomb Dataset. Notes:
1. We don't need to send the Dataset when sending traces, but we still do when sending metrics.
2. While we can alias most otel-collector components (e.g. otlp/pender_metrics), the Prometheus receiver does not support that, so we need the routing connector to send data to the right pipeline. (There is a target allocator feature for the Prometheus receiver, but it is specific to Kubernetes.)
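For the record, a minimal sketch of that routing-connector wiring, assuming the scraped metrics carry a service.name resource attribute derived from the scrape job; the pipeline and exporter names are illustrative:

```yaml
connectors:
  routing/metrics:
    error_mode: ignore
    table:
      - statement: route() where attributes["service.name"] == "pender"
        pipelines: [metrics/pender]

service:
  pipelines:
    metrics/in:                 # single prometheus receiver feeds the router
      receivers: [prometheus]
      exporters: [routing/metrics]
    metrics/pender:             # per-service pipeline with its own dataset
      receivers: [routing/metrics]
      exporters: [otlp/pender_metrics]
```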
@dmou I was able to get metrics in Honeycomb using the routing connector 🎉
amazing! could you update the PR description with how we would add an additional service? just to have some record of what resources would need to be duplicated and which ones can remain 🙂
@dmou, I updated the PR, let me know what you think.
dmou left a comment
lgtm if you're intentionally keeping the "for debugging purposes" lines for now!
Updates the collector configuration to handle receiving and exporting metrics from multiple services.

Important
When thinking about adding multiple receivers/exporters, we need to take into consideration:
- The Prometheus receiver cannot be aliased per service (e.g. prometheus/service_1, prometheus/service_2). We can have only one receiver, which can have multiple jobs.

The way we dealt with this is by using the routing connector (see the sketch after the list below).

If we want to send Prometheus metrics from another service, we would need to:
- add a new exporter, e.g. otlp/service_metrics, with the correct Dataset (considering we are also sending metrics to Honeycomb)
- add a new pipeline, e.g. metrics/service, with the new exporter
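A hedged sketch of what those additions could look like for a hypothetical new service; the check_api names, the endpoint variable, and the dataset name are placeholders, not values from this PR:

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        # existing pender job stays; one extra job per additional service
        - job_name: check_api
          static_configs:
            - targets: ["${env:check_api_metrics_endpoint}"]

connectors:
  routing/metrics:
    table:
      # existing pender route stays; add one route for the new service
      - statement: route() where attributes["service.name"] == "check_api"
        pipelines: [metrics/check_api]

exporters:
  otlp/check_api_metrics:
    endpoint: api.honeycomb.io:443
    headers:
      x-honeycomb-team: ${env:honeycomb_api_key}
      x-honeycomb-dataset: check-api-metrics

service:
  pipelines:
    metrics/check_api:
      receivers: [routing/metrics]
      exporters: [otlp/check_api_metrics]
```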
Notes

References