No more locks in sampler add/del #2441
base: master
Conversation
Just for context, the locking was put in place so that samplers could be changed dynamically in response to observed samples, and so could in fact be called from multiple threads. I'm not sure how best to enforce that sampler callbacks (or some other client-side thread acting independently) can't change sampler–probe bindings. How much do the locks themselves (as opposed to the other optimizations) slow things down? Keeping them is the easy option, both for safety and for this functionality. I guess an alternative would be to have a check in the simulation method, e.g. throw if sampler bindings are changed while the simulation is running. Another way to keep the functionality would be to use that steering interface we mooted way back and put these requests on a queue that is consumed by the simulation object between epochs, but that's of course a big deal. For applying many sampler/probe bindings at once while keeping the locks, how about a bulk interface?
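A minimal sketch of what such a bulk interface could look like, keeping the lock but acquiring it once per batch rather than once per binding. All names here (`simulation`, `sampler_association`, `add_samplers`) are illustrative stand-ins, not Arbor's actual API:

```cpp
#include <mutex>
#include <vector>

// Illustrative stand-in for a (probe, sampler) binding.
struct sampler_association {
    int probeset_id;     // placeholder for the real probe identifier type
    int sampler_handle;  // placeholder for the real sampler callback/handle type
};

class simulation {
public:
    // Bulk variant: one lock acquisition for N bindings instead of N acquisitions.
    void add_samplers(const std::vector<sampler_association>& batch) {
        std::lock_guard<std::mutex> guard(sampler_mutex_);
        associations_.insert(associations_.end(), batch.begin(), batch.end());
    }

private:
    std::mutex sampler_mutex_;
    std::vector<sampler_association> associations_;
};
```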
Interesting, I didn't have that background. Given the actual problem reported by @jlubo and the payoff I found, I'd be willing to accept the degradation in functionality, i.e. the loss of dynamic sampling. To my knowledge there is neither an active use case, nor is it tested or documented. The actual simulation adds 4,000,000 samplers (!) in total, so any speed-up is critical.
Removing the locks is a small factor in the speed-up for adding samplers, but it will also speed up the simulation itself, since locking is needed when pulling samplers from the map, too. As for the suggestions, I like them, but given time/personnel constraints, let's reserve those for an actual use case. Although, I suspect a concurrent hash map (or a straight-up vector) might also be an option.
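For the vector idea, a rough sketch of how the read path could avoid locks entirely if the bindings are frozen before the run starts. Types and names here are assumptions, not the actual Arbor data structures:

```cpp
#include <algorithm>
#include <utility>
#include <vector>

// Stand-in type; the real probe/sampler types differ.
struct binding {
    int probe;
    int sampler;
};

// Filled while setting up the simulation, then frozen.
std::vector<binding> bindings;

// Called once before the run: sort so all samplers of a probe are contiguous.
void finalize_bindings() {
    std::sort(bindings.begin(), bindings.end(),
              [](const binding& a, const binding& b) { return a.probe < b.probe; });
}

// Lock-free read path used when pulling samplers during the simulation.
std::pair<std::vector<binding>::const_iterator, std::vector<binding>::const_iterator>
samplers_for(int probe) {
    return std::equal_range(bindings.cbegin(), bindings.cend(), binding{probe, 0},
                            [](const binding& a, const binding& b) { return a.probe < b.probe; });
}
```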
I always hate to see functionality go away, even if no one is using it :D Nonetheless, in this case it isn't a code-maintenance concern but a performance and safety one: if we're going to remove a capability for performance, it would be good to quantify it, so we can see what cost we are paying. I'm hoping, though I don't have benchmarks, that the integration dominates the run time of the cell-group advance calls; within the per-epoch overhead, I don't have a feel for how sampler set-up time compares to sampled-data wrangling and callbacks. If we leave the locks out, we should guard access to the sampler manipulation methods in the simulator object (i.e. make sure they can't be called while the simulator is running), and also document that this is forbidden; it makes things a bit more complicated.
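One way such a guard could look, assuming a simple flag set for the duration of the run. This is an assumed design for illustration, not the PR's actual implementation:

```cpp
#include <atomic>
#include <stdexcept>

// Hypothetical guard: lock-free mutators refuse to run while the simulation
// is advancing, and document the restriction by throwing.
class simulation {
public:
    void add_sampler(/* probe and sampler arguments elided */) {
        if (running_.load(std::memory_order_acquire)) {
            throw std::runtime_error("sampler bindings cannot change while the simulation is running");
        }
        // ... mutate the sampler map without taking a lock ...
    }

    void run(double t_final) {
        running_.store(true, std::memory_order_release);
        // ... advance cell groups epoch by epoch ...
        (void)t_final;
        running_.store(false, std::memory_order_release);
    }

private:
    std::atomic<bool> running_{false};
};
```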
It's more hypothetical functionality, especially as it isn't available from Python. I'll check the numbers again, in particular how they compare against the other optimisations (specialising the common filters, like …)
As for the dominant fraction in profiling: I often see (in benchmarks, thus biased towards shorter run times) roughly 1:1 splits between initialisation and simulation, with label resolution and connection sorting being the major cost centres. For the first: #2447. For the latter I am brewing something. The use case that inspired this particular overhaul: @jlubo dumps all the state at all the places at the simulation's end. As that includes all synapses, …
Result: @jlubo's test case used to take 20 s to add 1600 samplers; now it takes 1.2 s.