ConnectionPool adapter #835
Conversation
Force-pushed from a3bdff2 to 595c15f.
@bkeepers Sounds good! I'm taking some time off work, so I won't be able to keep my PR moving forward, but this sounds like a good way to go (and makes my PR unnecessary).
```ruby
def initialize(pool, &initializer)
  @pool = pool
  @initializer = initializer
  @instances = {}
```
Could this hang onto resources for too long in some cases, preventing them from being garbage collected? `ConnectionPool` has a `reload` capability that will release its hold on the existing resources, but this `@instances` hash map will retain them and then grow larger as the pool offers up new resources.

I was initially thinking that `ObjectSpace::WeakKeyMap` would solve the problem, but this is a funny situation in which the map keys and values both have references to the same resource.
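To make the concern concrete, here is a rough sketch of the retention pattern being described (the wrapper wiring is an assumption, not code from this PR):

```ruby
require "connection_pool"
require "redis"

pool = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }

# Hypothetical wrapper, as described above: each checked-out connection
# becomes a key in an @instances hash mapping connection => adapter, e.g.
#   @instances[conn] ||= @initializer.call(conn)

pool.reload # the pool drops its current connections and will create new ones

# After reload, the old connections (and the adapters built around them) are
# still referenced by @instances, so they can't be garbage collected, and the
# hash keeps growing as the pool hands out fresh connections.
```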
Yes, good catch. I just pushed 18b2a1b, which makes `ConnectionPool` observable. I might open an issue on the upstream repo and see if a change like this would be accepted.
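The actual change is in 18b2a1b, which isn't shown here; the snippet below is only a guess at the kind of hook an "observable" pool enables, with a hypothetical callback name:

```ruby
# Hypothetical shape (not the real connection_pool API): if the pool notifies
# observers when it discards a connection, the adapter can evict the matching
# cache entry instead of retaining it forever.
pool.on_discard do |connection|
  @instances.delete(connection)
end
```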
That seems like a good solution. The only other thing that comes to mind is an LRU cache, but I don't know if there's a solid Ruby implementation.
Thanks for working on this, @bkeepers! I started down this same path this morning and then found this PR. I'm looking forward to using it with the
> Currently, Flipper returns a new adapter instance (it calls the `config.adapter` block) for each thread.
Thanks for pointing that out. I have a codebase that uses a single Redis object for Flipper (it does not instantiate a new one inside the block). I'm aware of the mutex inside the Redis object, which is the main reason I'd like Flipper to have pool support. It sounds like I can achieve "pooling" if I instantiate a new Redis in that block, as long as my Redis server has enough connections available to support one per thread. I'd still prefer to pass in my existing Redis pool (used for non-Flipper concerns), but this is good to know in the meantime.
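For reference, the workaround described above would look roughly like this (a sketch, assuming the standard Flipper Redis adapter and that the Redis server can afford one connection per thread):

```ruby
Flipper.configure do |config|
  config.adapter do
    # Each thread gets its own Redis connection because Flipper calls this
    # block once per thread; no connection pool involved.
    Flipper::Adapters::Redis.new(Redis.new(url: ENV["REDIS_URL"]))
  end
end
```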
```ruby
end

def reset
  @instances.clear
```
This is threadsafe on MRI due to the GIL, but on other interpreters I could imagine a new object being added to `@instances` halfway through the clear and then getting wiped out immediately. I'm not sure that's a real problem here, though, since this adapter caching is just for performance purposes, and if an extra one is initialized here or there it shouldn't matter. Am I thinking about this correctly?
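If it ever did matter, one option (not part of this PR) would be to guard the cache with a mutex; a minimal sketch, assuming the cache lookup lives in a hypothetical `instance_for` method and `@mutex = Mutex.new` is set in `initialize`:

```ruby
def instance_for(connection)
  # Serialize cache writes with reset so a freshly cached adapter can't be
  # wiped out halfway through a clear on interpreters without a GIL.
  @mutex.synchronize { @instances[connection] ||= @initializer.call(connection) }
end

def reset
  @mutex.synchronize { @instances.clear }
end
```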
@jnunemaker is this worth merging?
@bkeepers I like this idea.
@bkeepers anything left on this? It seems like you've responded to all the feedback. Do we just need someone to guinea pig it in production somewhere?
@jnunemaker slightly off-topic, but do you know if there are any plans to also support the new RedisClient?
@gstokkink Sure. I'd just make a new adapter for it and specs. You can start by cloning the current Redis one or taking a peek at the Redis cache one. I'd just call it RedisClient.
This is an attempt at creating a generic `ConnectionPool` adapter that can be composed with other adapters, as discussed in #826 (review). It just wraps each adapter method call in a `pool.with { }` call, but I think this mostly makes sense since the adapter API already represents atomic actions. It's not a lot of code and can be used with any adapter that requires a connection pool (`Redis`, `RedisCache`, and probably others).

`ConnectionPool` doesn't expose a mechanism for extending an existing pool instance, caching objects that depend on a connection, or observing lifecycle events, so this adapter has to keep a cache of adapter instances for each connection. I'm not 100% sure this is kosher.

Usage as primary storage adapter:
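The original usage snippet didn't survive extraction, so here is a sketch of what it likely looks like (the `Flipper::Adapters::ConnectionPool` name and block signature are assumptions based on the description above):

```ruby
require "connection_pool"
require "flipper"
require "flipper/adapters/redis"

pool = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }

Flipper.configure do |config|
  config.adapter do
    # Every adapter call is wrapped in pool.with { ... }; the block builds the
    # real adapter around each checked-out connection.
    Flipper::Adapters::ConnectionPool.new(pool) do |redis|
      Flipper::Adapters::Redis.new(redis)
    end
  end
end
```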
Usage with cache adapter:
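Likewise, a sketch of composing it with the cache adapter (names assumed; `Flipper::Adapters::ActiveRecord` is just an example primary adapter):

```ruby
require "connection_pool"
require "flipper"
require "flipper/adapters/active_record"
require "flipper/adapters/redis_cache"

cache_pool = ConnectionPool.new(size: 5, timeout: 5) { Redis.new }

Flipper.configure do |config|
  config.adapter do
    primary = Flipper::Adapters::ActiveRecord.new
    # Wrap the Redis-backed cache adapter so each call checks a connection
    # out of the pool.
    Flipper::Adapters::ConnectionPool.new(cache_pool) do |redis|
      Flipper::Adapters::RedisCache.new(primary, redis)
    end
  end
end
```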
The `config.adapter` problem

Currently, Flipper returns a new adapter instance (it calls the `config.adapter` block) for each thread. This is to guard against some adapters not being thread safe. Initializing this adapter inside the `config.adapter` block would probably work fine, but it would result in a separate adapter cache for each thread (n threads * x pool size).

I'm thinking we should add `threadsafe?` to the adapter API and change Flipper to reuse adapter instances of threadsafe adapters.

cc @cymen
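A sketch of what the proposed `threadsafe?` flag might look like (hypothetical; not part of this PR):

```ruby
# Inside the ConnectionPool adapter (hypothetical addition):
def threadsafe?
  # Safe to share across threads because every call checks its own
  # connection out of the pool.
  true
end

# Flipper could then reuse a single instance for adapters that return true
# here, instead of calling the config.adapter block once per thread.
```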