
Conversation

@jtdowney
Contributor

First, thank you for releasing this. Feel free to ignore this and close it if it doesn't align with your project design. I am working on a system where I need a pool of resources, but I also need to have the requestor block (up to some timeout) if a resource isn't available. I first tried to write it with bath but ran into this issue. I ended up switching to poolboy via some FFI.

Introduce apply_blocking that waits for resources when the pool is exhausted instead of returning NoResourcesAvailable immediately.
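For context, here's roughly the call shape I'm after. This is a sketch only, assuming apply_blocking takes the same arguments as the existing apply (pool, timeout in milliseconds, then a callback):

```gleam
import bath

// Rough sketch: wait up to five seconds for a pooled resource instead of
// getting NoResourcesAvailable straight away, then run `work` with it.
// The signature of apply_blocking here is an assumption, not final.
pub fn with_resource(pool, work) {
  bath.apply_blocking(pool, 5000, work)
}
```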

@isaacharrisholt
Contributor

In theory I'd love a feature like this! Having a queue-based system makes total sense. However, I have a slight concern: unless I'm missing something, there's a bit of a race condition here. The waiter could time out, continue with execution, and then Bath would send the resource message back to the waiter. This can lead to a memory leak, as the original waiter's mailbox would then hold a message that never gets removed.

This is why timeouts on functions that expect a reply in gleam_erlang tend to cause crashes deliberately - the process will die and its mailbox will be cleared, so you don't get this problem. See process.call for an example: that function crashes for exactly this reason, and the old try_call function was removed for the same reason.
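To make the race concrete, here's an illustration of the mailbox behaviour using plain gleam/erlang/process primitives (this is not Bath's internals, and the exact function names depend on your gleam_erlang version):

```gleam
import gleam/erlang/process

pub fn main() {
  let reply_subject = process.new_subject()

  // Stand-in for the pool: a resource frees up after 150ms and is sent to
  // whoever was waiting for it.
  process.spawn(fn() {
    process.sleep(150)
    process.send(reply_subject, "resource")
  })

  // The waiter only waits 100ms, so it gives up and carries on...
  let assert Error(Nil) = process.receive(reply_subject, 100)

  // ...but the late reply still lands in this process's mailbox shortly
  // afterwards, and nothing will ever receive it. Sleep a little so the
  // message actually arrives while the waiter is still alive.
  process.sleep(100)
}
```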

It may be possible to work around this using monotonic time, but I'm not sure how reliable that approach would be.

Another alternative would be to call Checkout recursively, retrying up to the timeout if the NoResourcesAvailable response is returned, but you might struggle to get a precise timeout working there.
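Something like this is the retry shape I mean. It's hypothetical: try_checkout stands in for whatever does the non-blocking checkout today, and it returns Error(Nil) rather than the real NoResourcesAvailable so the sketch is self-contained:

```gleam
import gleam/erlang/process

// Hypothetical retry loop, not Bath's real API. `try_checkout` should
// return Error(Nil) when no resource is currently free.
fn checkout_with_retry(
  try_checkout: fn() -> Result(resource, Nil),
  remaining_ms: Int,
) -> Result(resource, Nil) {
  case try_checkout() {
    Ok(resource) -> Ok(resource)
    Error(Nil) if remaining_ms <= 0 -> Error(Nil)
    Error(Nil) -> {
      // Poll every 10ms. Subtracting the sleep only approximates the
      // deadline, which is exactly the imprecision I mean; a monotonic
      // clock would track it more accurately.
      process.sleep(10)
      checkout_with_retry(try_checkout, remaining_ms - 10)
    }
  }
}
```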

@jtdowney
Contributor Author

Thanks for reviewing, and that's a great point. I can see the risk of a leak if the caller remains alive after a checkout timeout. I still think a blocking checkout would be a valuable path, though, and the API could change to an explicit panic-on-timeout style like process.call, with maybe a _forever variant. Happy to update the PR with that change if you want.
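Roughly the surface I'm imagining (names are just a proposal, bodies elided):

```gleam
// Blocks until a resource is free, then runs `next` with it. If nothing
// frees up within `timeout` milliseconds, panic the way process.call does,
// so the caller dies and its mailbox is cleaned up rather than leaking a
// late reply from the pool.
pub fn apply_blocking(pool, timeout, next) {
  todo
}

// Same idea, but wait for a resource indefinitely.
pub fn apply_forever(pool, next) {
  todo
}
```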

@isaacharrisholt
Contributor

I think that would work great. Would you mind taking a look at implementing that?

Matches process.call behavior - caller dies on timeout, triggering
cleanup via WaiterDown/CallerDown. Removes CheckoutTimedOut error.
@jtdowney
Contributor Author

I took a crack at the changes in 11221b8. It will now panic and let the caller deal with the situation.

Contributor

@isaacharrisholt left a comment

Could you also please add a test or two around how the blocking apply works with lazy/eager? The logic seems to have been changed.

src/bath.gleam Outdated
{
// If we create lazily, just decrement the current size - a new resource
// will be created when required
Lazy -> #(state.resources, current_size)
Contributor

Does this need to decrement the size? It used to.

Contributor Author

The decrement was moved to line 732, so it applies first and then each branch deals with the ramifications. I agree the comment is confusing now; I'm working on a cleaner version.

Comment on lines 789 to 793
// Size hasn't changed
Ok(resource) -> #(
deque.push_back(state.resources, resource),
current_size + 1,
)
Contributor

I think either the comments here are wrong or the logic is.

Contributor Author

Same as the other comment: this is net zero, because line 732 does -1 and this does +1. It's a bit confusing, so I want to refactor it to use some smaller helpers.

@jtdowney
Contributor Author

I had started down the path of making smaller helpers, and it turned into too much of a refactor for the scope of this PR. So instead, I opted to add the tests and clarify the comments in dcd81c2.

Contributor

@isaacharrisholt left a comment

In theory I'm fine with this PR, but I found myself having to keep a lot in my head when reading it due to the current_size changes. If there's a good justification for it, that's fine, but I think the variable should be called new_size instead of current_size. That said, I'm not sure we need that variable at all.

Apologies for being picky about this, but I'm ultimately going to be the one maintaining this, so I need to make sure I don't get confused 😅

src/bath.gleam Outdated

// Shutdown the old resource
state.shutdown_resource(live_resource.resource)
let current_size = state.current_size - 1
Contributor

I'm still not sure I prefer this being pulled out here. I think it makes it more difficult to understand what's happening to the tracked size of the pool in the slightly more complex logic below. What was the reason for doing this one?

@jtdowney
Contributor Author

jtdowney commented Jan 1, 2026

No apologies needed, I get it. I agree that next_size is a better name for it. I had pulled it out because I found it easier to reason about, but I just sent e2fae45, which factors it back in.
