Reap expired connections on drop #224

Closed · wants to merge 2 commits

Conversation

@tneely (Contributor) commented Oct 15, 2024

The reaper only runs against the connections in its idle pool. This is fine for reaping idle connections, but for hotly contested connections beyond their maximum lifetime this can prove problematic.

Consider an active connection beyond its lifetime and a reaper that runs every 3 seconds:

  • [t0] Connection is idle
  • [t1] Connection is active
  • [t2] Reaper runs, does not see connection
  • [t3] Connection is idle

This pattern can repeat infinitely with the connection never being reaped.

By checking the max lifetime on drop, we can ensure that expired connections are reaped in a reasonable amount of time (assuming they eventually do get dropped).
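In sketch form (standalone; PooledConn, on_drop, and return_to_idle are illustrative names, not bb8's actual internals):

    use std::time::{Duration, Instant};

    // Illustrative stand-in for a pooled connection.
    struct PooledConn {
        created_at: Instant,
    }

    impl PooledConn {
        // True if the connection was created at or before `cutoff`.
        fn is_expired(&self, cutoff: Instant) -> bool {
            self.created_at <= cutoff
        }
    }

    // Runs when the user drops the pooled handle: discard the connection
    // if it has outlived `max_lifetime` instead of returning it to the
    // idle pool, rather than waiting for a reaper pass to catch it idle.
    fn on_drop(conn: PooledConn, max_lifetime: Option<Duration>) {
        let is_expired =
            max_lifetime.map_or(false, |lt| conn.is_expired(Instant::now() - lt));
        if is_expired {
            drop(conn); // reaped: never re-enters the pool
        } else {
            // return_to_idle(conn); // hypothetical hand-back to the pool
        }
    }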

@djc (Owner) left a comment

Thanks, this makes sense. Suggest some minor changes.

Can you talk about what you're using bb8 for? Would be good to know.

let conn = pool.get().await;

// And wait.
tokio::time::sleep(Duration::from_secs(2)).await;
@djc (Owner) commented

Nit: prefer to just import sleep() for this.
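That is, roughly (assuming the surrounding test is already async):

    use std::time::Duration;
    use tokio::time::sleep;

    // And wait.
    sleep(Duration::from_secs(2)).await;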

bb8/src/internals.rs (resolved)
@@ -149,12 +149,19 @@ where
     "handled in caller"
 );

+let max_lifetime = self.inner.statics.max_lifetime;
+let is_expired = max_lifetime.map_or(false, |lt| conn.is_expired(Instant::now() - lt));
@djc (Owner) commented
Suggest we bind let is_broken = self.inner.manager.has_broken(&mut conn.conn); here and then use is_broken || is_expired in the match scrutinee.
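A standalone sketch of that shape (the enum's Extracted variant and the route() wrapper are stand-ins, not bb8's actual items):

    // Stand-ins for the PR's actual types; `Extracted` is assumed here.
    enum ConnectionState {
        Present,
        Extracted,
    }

    // Both health checks feed a single match scrutinee.
    fn route(state: ConnectionState, is_broken: bool, is_expired: bool) {
        match (state, is_broken || is_expired) {
            // Healthy and within its lifetime: return it to the idle pool.
            (ConnectionState::Present, false) => { /* put back */ }
            // Broken or expired (or mid-extraction): discard it.
            _ => { /* drop and let the pool replenish */ }
        }
    }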

@tneely (Contributor, Author) commented Oct 15, 2024

I'm using bb8 inside of PgCat to do transaction pooling. The servers I'm connecting to get cycled periodically for security reasons, so if the reaper lags too far behind then we end up querying a dead server and reporting the connection as broken. I'd rather reap it before we get to that point.

bb8/src/inner.rs (outdated)
-            (ConnectionState::Present, false) => locked.put(conn, None, self.inner.clone()),
-            (_, is_broken) => {
+        match (state, is_broken || is_expired) {
+            (ConnectionState::Present, false) if !is_expired => {
@djc (Owner) commented

So we don't need if !is_expired anymore, right?

@tneely (Contributor, Author) replied

Good catch, I totally missed that
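(Since the scrutinee's second element is is_broken || is_expired, a false match already implies !is_expired; the guard duplicates it.) The arm then reduces to a fragment like:

    match (state, is_broken || is_expired) {
        // `false` here already guarantees the connection is not expired.
        (ConnectionState::Present, false) => locked.put(conn, None, self.inner.clone()),
        // ...
    }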

@djc (Owner) commented Oct 17, 2024

See follow-up in #226.

@djc closed this on Oct 17, 2024