fix: performance issues around insertion & averages #17

Merged

jfro merged 3 commits into main from jeremy/performance on Nov 6, 2025

Conversation

jfro (Member) commented Nov 6, 2025

Hopefully reduces some overhead in metrics logging in particular.

jfro (Member, Author) commented Nov 6, 2025

@codex review

chatgpt-codex-connector (bot) left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

src/lib.rs (Outdated)
Comment on lines 211 to 222
```diff
     fn flush(&mut self) -> Result<(), diesel::result::Error> {
         use crate::schema::metrics::dsl::metrics;
         // trace!("Flushing {} records", self.queue.len());
         if self.queue.is_empty() {
             self.last_flush = Instant::now();
             return Ok(());
         }
+        let mut drain_buffer: Vec<NewMetric> = self.queue.drain(..).collect();
         let db = &mut self.db;
-        let queue = self.queue.drain(..);
         db.transaction::<_, diesel::result::Error, _>(|db| {
-            for rec in queue {
-                insert_into(metrics).values(&rec).execute(db)?;
-            }
+            insert_into(metrics).values(&drain_buffer).execute(db)?;
             Ok(())
         })?;
```

P1: Batch flush can exceed SQLite placeholder limit

The new flush drains the entire queue into a Vec and performs a single insert_into(metrics).values(&drain_buffer) call. SQLite's default SQLITE_MAX_VARIABLE_NUMBER is 999, so a queue larger than about 333 records (3 columns per row) will fail with "too many SQL variables", and the flushed metrics are lost because the queue was drained before the error surfaced. The previous per-row insert avoided this limit. Consider chunking the insert into smaller batches, or reducing the flush size, to stay under SQLite's parameter cap.
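
For illustration, the chunking the review suggests might look like the sketch below. This is a minimal sketch under stated assumptions, not the project's actual fix: it reuses the NewMetric type, the metrics DSL import, and the Diesel SQLite connection from the snippet above, and the 300-row chunk size is a hypothetical choice (300 rows x 3 columns = 900 bind parameters, under the default cap of 999).

```rust
fn flush(&mut self) -> Result<(), diesel::result::Error> {
    use crate::schema::metrics::dsl::metrics;

    if self.queue.is_empty() {
        self.last_flush = Instant::now();
        return Ok(());
    }
    // Drain once, up front, as the patch does; note this still drops the
    // drained records if the transaction fails. A production version might
    // only clear the queue after a successful commit.
    let drain_buffer: Vec<NewMetric> = self.queue.drain(..).collect();
    let db = &mut self.db;
    db.transaction::<_, diesel::result::Error, _>(|db| {
        // Hypothetical chunk size: 300 rows x 3 columns = 900 bind
        // parameters per statement, under SQLite's default limit of 999.
        for chunk in drain_buffer.chunks(300) {
            insert_into(metrics).values(chunk).execute(db)?;
        }
        Ok(())
    })?;
    self.last_flush = Instant::now();
    Ok(())
}
```

Each chunk still executes inside the single transaction, so this keeps most of the patch's speedup over per-row inserts while staying under the parameter cap. The review's other option, flushing before the queue ever reaches roughly 333 rows, avoids chunking entirely at the cost of more frequent flushes.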

Useful? React with 👍 / 👎.

jfro (Member, Author) commented Nov 6, 2025

@codex review

chatgpt-codex-connector (bot)

Codex Review: Didn't find any major issues. 👍

jfro merged commit 1039cc4 into main on Nov 6, 2025
3 checks passed
jfro deleted the jeremy/performance branch on November 6, 2025 at 18:40