@danhngo-lx
When streaming data from Prometheus with large time ranges (> 1 hour), the StreamRowsChunked function would stall after ~100 rows. This was caused by the bwRows channel (buffer size 100) filling up because the processBwRows goroutine was never started.

The issue occurred because StreamRowsChunked creates a datastream with NewDatastreamContext() and pushes rows directly, bypassing Start(), which normally starts the processBwRows goroutine.

Changes:

  • Add StartBwProcessor() public method to Datastream to start the bytes-written processor independently
  • Call StartBwProcessor() in StreamRowsChunked before pushing rows

This ensures the bwRows channel is drained, preventing the producer from blocking when the buffer fills up.

Related: #668
