
feat: produce OpenTelemetry traces with hs-opentelemetry #3140

Open

develop7 wants to merge 1 commit into PostgREST:main from develop7:feat_opentelemetry-traces

Conversation

@develop7
Collaborator

@develop7 develop7 commented Jan 4, 2024

This PR introduces producing OpenTelemetry traces that contain, among other things, the same metrics previously exposed in the Server-Timing header.

TODO:

Running:

I sort of gave up deploying and configuring all the moving bits locally, so you'd need to create a honeycomb.io account for this one (or ask me for an invite). After that, it's quite straightforward:

  1. Build the PostgREST executable with stack build, and get its path with stack exec -- which postgrest
  2. Get a PostgreSQL server running (e.g. run nix-shell, then postgrest-with-postgresql-15 --fixture ./test/load/fixture.sql -- cat). Note the server URL; you'll need it when running the PostgREST server
  3. Get a JWT token with the default secret by running postgrest-jwt --exp 36000 postgrest_test_anonymous
  4. Run the PostgREST server with
    OTEL_EXPORTER_OTLP_ENDPOINT='https://api.honeycomb.io/' \
    OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<honeycomb_api_key>" \
    OTEL_SERVICE_NAME='PostgREST' OTEL_LOG_LEVEL='debug' OTEL_TRACES_SAMPLER='always_on' \
    PGRST_DB_URI='<postgresql_server_url>' \
    postgrest-run
  5. Request some data using the JWT token from above and check the Honeycomb dashboard for the traces:

(screenshot: traces from the requests shown in the Honeycomb dashboard)

Tests

The hspec tests are also instrumented; for those to produce traces you only need to set the OTEL_* vars:

OTEL_EXPORTER_OTLP_ENDPOINT='https://api.honeycomb.io/' \
OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<honeycomb_api_key>"  \
OTEL_SERVICE_NAME='PostgREST' OTEL_LOG_LEVEL='debug' OTEL_TRACES_SAMPLER='always_on' \
postgrest-test-spec

@steve-chavez
Member

Awesome work! 🔥 🔥

I sort of gave up deploying and configuring all the moving bits locally, so you'd need to create the honeycomb.io account for this one

Found this Nix flake that contains an OTel GUI: https://flakestry.dev/flake/github/FriendsOfOpenTelemetry/opentelemetry-nix/1.0.1

I'll try to integrate that once the PR is ready for review.

@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from 8c0e16a to 64a0ee9 on January 29, 2024 17:01
@develop7
Collaborator Author

The recent problem I'm seemingly stuck with is that hs-opentelemetry uses UnliftIO, which doesn't seem to compose well with our (implicit, correct?) monad stack. So the deeper in the call stack the instrumented code is (the code I'm trying to wrap with inSpan), the more ridiculously complex the changes needed to instrument it become, i.e. https://github.com/PostgREST/postgrest/pull/3140/files#diff-5de3ff2b2d013b33dccece6ead9aeb61feffeb0fbd6e38779750511394cf9701R156-R157, up to the point where I have no idea how to proceed further (e.g. wrapping the App.handleRequests cases with their own spans, which would be semantically correct).
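(For context, a minimal sketch — not PostgREST code, with an illustrative function name and the inSpan signature as I read it in hs-opentelemetry-api, which may differ by version — of why the constraint propagates: anything wrapped in inSpan must run in some MonadUnliftIO m, and so must everything calling it.)

{-# LANGUAGE OverloadedStrings #-}
module TraceSketch where

-- Illustrative only: inSpan needs MonadUnliftIO, so this constraint
-- leaks into every caller of the instrumented function, which is
-- awkward for code living inside an ExceptT-style stack.
import OpenTelemetry.Trace (Tracer, defaultSpanArguments, inSpan)
import UnliftIO (MonadUnliftIO)

fetchData :: MonadUnliftIO m => Tracer -> m ()
fetchData tracer =
  inSpan tracer "fetchData" defaultSpanArguments $
    -- the actual work, previously plain IO / ExceptT code
    pure ()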

There's a more straightforward, MonadIO-based opentelemetry library, with less activity and a quite different approach to telemetry data export (GHC eventlog → file/pipe, written by the GHC runtime). It looks like a less invasive approach, refactoring-wise, but requires jumping through more hoops to actually deliver traces to Honeycomb/Lightstep/whatnot (pull the eventlog → convert it to zipkin/jaeger/b3 → upload it somewhere for analysis).

It also seems to boil down to a conceptual choice between online and offline trace delivery, i.e. a push versus a pull model.

@steve-chavez @wolfgangwalther @laurenceisla what do you think guys?

@steve-chavez
Member

@develop7 Would vault help? It was introduced in #1988, I recall it helped with IORef handling.

It's still used in:

jwtDurKey :: Vault.Key Double
jwtDurKey = unsafePerformIO Vault.newKey
{-# NOINLINE jwtDurKey #-}
getJwtDur :: Wai.Request -> Maybe Double
getJwtDur = Vault.lookup jwtDurKey . Wai.vault
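
(For reference, a minimal sketch of the insert/lookup round trip behind that pattern — plain Data.Vault.Lazy usage, not PostgREST code; in the server a WAI middleware stores the value in the request's vault and getJwtDur above reads it back:)

module VaultSketch where

import qualified Data.Vault.Lazy as Vault

-- Illustrative only: a Vault is a type-safe, extensible map keyed by
-- Vault.Key values, so unrelated middlewares can stash data on a request
-- without knowing about each other.
demo :: IO ()
demo = do
  key <- Vault.newKey :: IO (Vault.Key Double)
  let v = Vault.insert key 0.042 Vault.empty
  print (Vault.lookup key v)  -- Just 4.2e-2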

I'm still not that familiar with OTel but the basic idea I had was to store these traces on AppState and export them async.

@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from 6b891c2 to 586e7a1 on February 12, 2024 14:26
@steve-chavez
Member

@develop7 Recently merged #3213, which logs schema cache stats to stderr. Perhaps that can be used for introductory OTel integration instead? It might be easier since the scache stats are already in IO space.

@develop7
Collaborator Author

Would vault help?

hs-opentelemetry is using it already

basic idea I had was to store these traces on AppState and export them async

Not only that, you want traces in tests too, for one.

The good news is hs-opentelemetry-utils-exceptions seems to be just what we need, let me try it.

Perhaps that can be used for introductory OTel integration instead?

Good call @steve-chavez, thank you for the suggestion. Will try too.

@develop7
Collaborator Author

(screenshot: exported traces)

it works!

@steve-chavez
Member

steve-chavez commented Feb 21, 2024

Since we now have an observer function and an Observation module:

handleRequest :: AuthResult -> AppConfig -> AppState.AppState -> Bool -> Bool -> PgVersion -> ApiRequest -> SchemaCache ->
  Maybe Double -> Maybe Double -> (Observation -> IO ()) -> Handler IO Wai.Response
handleRequest AuthResult{..} conf appState authenticated prepared pgVer apiReq@ApiRequest{..} sCache jwtTime parseTime observer =

data Observation
  = AdminStartObs (Maybe Int)
  | AppStartObs ByteString
  | AppServerPortObs NS.PortNumber

Perhaps we can add some observations for the timings?

Also the Logger is now used like:

logObservation :: LoggerState -> Observation -> IO ()
logObservation loggerState obs = logWithZTime loggerState $ observationMessage obs

CmdRun -> App.run appState (Logger.logObservation loggerState))

For OTel, maybe the following would make sense:

otelState <- Otel.init

App.run appState (Logger.logObservation loggerState >> OTel.tracer otelState)) 
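
(A note on that sketch, with OTel.tracer and otelState being the hypothetical names from the proposal above: since observers are plain Observation -> IO () functions, (>>) in the function monad would discard the first one, so the two observers would likely need to be combined explicitly, e.g.:)

-- Illustrative sketch: run both observers for every Observation instead
-- of discarding one (which is what (>>) over functions would do).
combineObservers :: (Observation -> IO ()) -> (Observation -> IO ()) -> (Observation -> IO ())
combineObservers f g obs = f obs >> g obs

-- usage with the (hypothetical) names from the proposal above:
--   App.run appState (combineObservers (Logger.logObservation loggerState) (OTel.tracer otelState))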

@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from dc882f1 to 7794848 on February 23, 2024 15:44
@develop7
Collaborator Author

Perhaps we can add some observations for the timings?

Agreed, server timings definitely belong there.

@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from 7794848 to 398206b on February 23, 2024 16:04
@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from 398206b to 4cd99c6 on March 7, 2024 14:58
@develop7 develop7 requested a review from steve-chavez March 11, 2024 15:37
@develop7 develop7 marked this pull request as ready for review March 11, 2024 15:38
@develop7
Collaborator Author

Okay, the PR has been cooking for long enough; let's pull the plug and start small. Let's have it reviewed while I'm fixing the remaining CI failures.

@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from 4cd99c6 to 94d2b9b on March 11, 2024 15:49
@wolfgangwalther
Member

hs-opentelemetry is, according to the repo, in alpha state. According to the TODO list above, the issue tracker and the repo description, it does not support:

  • GHC 9.8.x
  • Windows
  • Metrics or Logging

I don't think we should depend on this in its current state. And we should certainly not depend on an even-less-maintained fork of the same.

So to go forward here, there needs to be some effort put into the upstream package first, to make it usable for us.

@develop7 develop7 force-pushed the feat_opentelemetry-traces branch 2 times, most recently from 590d142 to e809a65 on March 12, 2024 16:31
@develop7 develop7 marked this pull request as draft March 29, 2024 16:43
@develop7
Collaborator Author

A status update:

  • GHC 9.8: hs-opentelemetry-sdk doesn't build against 9.8 because of the hs-opentelemetry-exporter-otlp → proto-lens dependency chain. Given that the upstream of the latter has been a bit unresponsive to suggestions to bump the upper bounds, I've managed to make it build for 9.8 in develop7/proto-lens@985290f, but haven't figured out how to pick that up in the project, since it depends on Google's protobuf compiler being installed and the protobuf sources being checked out. Another approach is to not use hs-o-sdk and hs-o-e-otlp at all, which I probably should've tried way before.

@wolfgangwalther
Member

  • GHC 9.8: hs-opentelemetry-sdk doesn't build against 9.8 because of the hs-opentelemetry-exporter-otlp → proto-lens dependency chain. Given that the upstream of the latter has been a bit unresponsive to suggestions to bump the upper bounds, I've managed to make it build for 9.8 in develop7/proto-lens@985290f,

Hm. I looked at your fork. It depends on support for GHC 9.8 in ghc-source-gen. This repo has a PR, which was just updated 3 days ago. I wouldn't call that "unresponsive" yet. Once ghc-source-gen is GHC 9.8 compatible, you could open a PR to update the bounds in proto-lens itself. And since the last release (for GHC 9.6 support) was in December... I would not expect this to take too long to get responded to. It certainly doesn't look like it's unmaintained.

I guess for GHC 9.8 support it's just a matter of time.

What about the other issues mentioned above? Were you able to make progress on those?

@mkleczek
Contributor

mkleczek commented Apr 4, 2024

The recent problem I'm seemingly stuck with is that hs-opentelemetry uses UnliftIO, which doesn't seem to compose well with our (implicit, correct?) monad stack. So the deeper in the call stack the instrumented code is (the code I'm trying to wrap with inSpan), the more ridiculously complex the changes needed to instrument it become, i.e. https://github.com/PostgREST/postgrest/pull/3140/files#diff-5de3ff2b2d013b33dccece6ead9aeb61feffeb0fbd6e38779750511394cf9701R156-R157, up to the point where I have no idea how to proceed further (e.g. wrapping the App.handleRequests cases with their own spans, which would be semantically correct).

There's a more straightforward, MonadIO-based opentelemetry library, with less activity and a quite different approach to telemetry data export (GHC eventlog → file/pipe, written by the GHC runtime). It looks like a less invasive approach, refactoring-wise, but requires jumping through more hoops to actually deliver traces to Honeycomb/Lightstep/whatnot (pull the eventlog → convert it to zipkin/jaeger/b3 → upload it somewhere for analysis).

It also seems to boil down to a conceptual choice between online and offline trace delivery, i.e. a push versus a pull model.

@steve-chavez @wolfgangwalther @laurenceisla what do you think guys?

In my prototype I actually played with replacing the Hasql Session with a monad based on https://github.com/haskell-effectful/effectful to make it extensible:

https://github.com/mkleczek/hasql-api/blob/master/src/Hasql/Api/Eff/Session.hs#L37

Using it in PostgREST required some use of mixins in Cabal:

29b946e#diff-eb6a76805a0bd3204e7abf68dcceb024912d0200dee7e4e9b9bce3040153f1e1R140

Some work was required in the PostgREST startup/configuration code to set up appropriate effect handlers and middlewares, but the changes were quite well isolated.

At the end of the day I think basing your monad stack on an effect library (effectful, cleff etc.) is the way forward as it makes the solution highly extensible and configurable.
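
(To make that concrete, a minimal illustrative sketch — not taken from the linked prototype, and assuming effectful's dynamic-dispatch API — of what a tracing effect might look like; an interpreter chosen at startup would then decide whether withSpan maps to hs-opentelemetry, the eventlog, or a no-op:)

{-# LANGUAGE DataKinds, GADTs, TypeFamilies #-}
module TraceEffectSketch where

import Data.Text (Text)
import Effectful
import Effectful.Dispatch.Dynamic (send)

-- A higher-order tracing effect: the wrapped action runs inside a span.
data Trace :: Effect where
  WithSpan :: Text -> m a -> Trace m a

type instance DispatchOf Trace = Dynamic

-- Smart constructor used by application code; what "a span" actually
-- means is decided by whichever interpreter handles Trace.
withSpan :: Trace :> es => Text -> Eff es a -> Eff es a
withSpan name action = send (WithSpan name action)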

@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from e809a65 to 4697009 on October 23, 2024 17:01
@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from 650d008 to ac33872 on October 31, 2024 13:40
@develop7
Collaborator Author

develop7 commented Nov 4, 2024

Update: rebased the PR against the latest master, updated hs-opentelemetry (with Windows support merged!) & asked the hs-opentelemetry maintainers to cut a new release in iand675/hs-opentelemetry#154 so we don't have to depend on forks again.

@develop7 develop7 force-pushed the feat_opentelemetry-traces branch 2 times, most recently from 0205761 to 9c1b361 on January 29, 2026 17:08
@PostgREST PostgREST deleted a comment from Copilot AI Jan 29, 2026
@develop7 develop7 force-pushed the feat_opentelemetry-traces branch 9 times, most recently from 659d8c2 to 4755f08 on February 2, 2026 15:54
@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from 4755f08 to a429185 on February 3, 2026 14:25
* Introduces producing OpenTelemetry traces with hs-opentelemetry.
* Adds OTel spans over the whole application loop and over each request processing phase
* Preliminary OTel tracing support in spec tests
* Disables tracing in load and memory tests
@develop7 develop7 force-pushed the feat_opentelemetry-traces branch from a429185 to c199822 on February 3, 2026 14:30
@mkleczek
Contributor

mkleczek commented Feb 3, 2026

@develop7
Would it be possible to add context propagation as a local GUC? See: https://github.com/DataDog/pg_tracing?tab=readme-ov-file#trace_context-guc

@laurenceisla
Member

I tested the feature in Honeycomb and locally using otel-tui and otel-desktop-viewer, and from what I can see it's working on all of them 🎉!

I executed the following (the first, commented-out env vars are for Honeycomb):

# OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io:443" \
# OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=<REDACTED>" \
OTEL_EXPORTER_OTLP_ENDPOINT='http://localhost:4318' \
OTEL_EXPORTER_OTLP_PROTOCOL='http/protobuf' \
OTEL_SERVICE_NAME='PostgREST' \
OTEL_LOG_LEVEL='debug' \
OTEL_TRACES_SAMPLER='always_on' \
PGRST_SERVER_OTEL_ENABLED=true \
PGRST_DB_AGGREGATES_ENABLED=true \
PGRST_DB_PLAN_ENABLED=true \
PGRST_SERVER_TIMING_ENABLED=true \
PGRST_LOG_LEVEL=info \
postgrest-with-pg-17 -f ./test/spec/fixtures/load.sql postgrest-run

Screenshots:

Honeycomb: (screenshot)

otel-tui: (screenshot)

otel-desktop-viewer: (screenshot)

The only thing out of the ordinary is that otel-desktop-viewer shows the message "Incomplete Trace: missing a root span" for every request. Do you think this is a problem with the implementation or the otel app? I don't see the others showing anything similar.

@develop7
Collaborator Author

develop7 commented Feb 4, 2026

@develop7 Would it be possible to add context propagation as a local GUC? See: DataDog/pg_tracing#trace_context-guc

@mkleczek it's absolutely worth trying to implement; will look into it.

@develop7
Collaborator Author

develop7 commented Feb 4, 2026

I executed the following (the first, commented-out env vars are for Honeycomb)

I'll add that to the examples section of the docs; what collector did you use, BTW?

Do you think this is a problem with the implementation or the otel app? I don't see the others showing anything similar.

Seems like an upstream issue, which they just fixed in CtrlSpice/otel-desktop-viewer#203.

@laurenceisla
Member

what collector did you use BTW?

Oh, I didn't use a collector; it sent data directly to Honeycomb. I'll try it out with one and come back with the info.

OTEL_TRACES_SAMPLER='always_on' \
postgrest

Since current OpenTelemetry implementation incurs a small (~6% in our "Loadtest (mixed)" suite)
Member

Since OTel is only applied to the timing headers, maybe the loss in perf is not from OTel but from the timing headers? The measured loss matches the one reported in #3410 (comment)
