asv benchmarks for imports and tools modules #184
…ip mypy errors in IO benchmarks
This is a great start! 🎉
I've managed to run the benchmarks locally with ease, thanks to the instructions 🙏
I've tried to give sensible answers to the great questions 😅 I'll feed back on the mypy
part separately after some more investigations 🤔
I think the key thing missing is a benchmark for the main function (as discussed)?
IIUC these benchmarks are not part of the CI yet - is it worth opening an issue for that (or is there one already) to discuss the details of what we should compare benchmarks to? (And that seems like a good next step after this?)
Minor point: I wonder whether instead of putting questions for reviewers into the code itself and risking that we inadvertently merge them, should they in future be added as comments on the PR? (the disadvantage would be that it's more annoying to keep track of the questions for the person opening the PR... don't know).
Thanks @sfmig!
Yes, please add all comments to the PR itself rather than in the code. And create issues rather than TODOs.
On … I would suggest ignoring any import errors happening in …
replace exclude by prune in manifest file Co-authored-by: Will Graham <[email protected]>
…be/cellfinder-core into smg/basic-asv-benchmark
Thanks for the feedback @alessandrofelder! Sorry that this PR was a bit longer than expected.
I think the key thing missing is a benchmark for the main function (as discussed)?
Yes, I added an issue for it #206
IIUC these benchmarks are not part of the CI yet - is it worth opening an issue for that (or is there one already) to discuss the details of what we should compare benchmarks to? (And that seems like a good next step after this?)
Agreed! This is now issue #208 (I added these as cellfinder-core issues for now, with a view to expanding this work to other brainglobe tools later)
Minor point: I wonder whether instead of putting questions for reviewers into the code itself and risking that we inadvertently merge them, should they in future be added as comments on the PR?
Yes, that sounds like a better approach. Maybe opening a draft PR as soon as possible and adding my questions as they come to the main description of the PR is the least cumbersome? Will have a go. Also will aim to open issues rather than TODOs, thanks @adamltyson.
Awesome, @sfmig - thank you!
A suite of asv benchmarks for timing the main imports of the package and several functions from the tools module. Benchmarks are defined as methods of a class, with common setup and teardown functions.
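A minimal sketch of what such a benchmark class looks like (the class, method names, and data here are illustrative, not the actual benchmarks in this PR): asv times every method whose name starts with `time_`, calling `setup` and `teardown` around each one.

```python
import numpy as np


class ToolsSuite:
    """Illustrative asv benchmark class (not a real benchmark from the PR)."""

    def setup(self):
        # common setup shared by all benchmarks in this class;
        # asv calls this before each timed method
        self.data = np.ones((100, 100))

    def teardown(self):
        # common cleanup, called after each timed method
        del self.data

    def time_sum(self):
        # only the body of a ``time_*`` method is timed by asv
        self.data.sum()
```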
Basic usage
pip install asv
asv run
asv run --bench tools.prep.PrepTF
asv publish
and then asv preview.
For further details on usage and useful commands, see the benchmarks/README.md file.
Structure
The structure follows the approach from the astropy-benchmarks repo, in which the benchmark modules roughly mimic the package modules. The numpy benchmarks follow a slightly different structure but are also a useful reference.
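For illustration, the mirroring between package and benchmark modules might look something like this (paths are hypothetical, not a listing of this PR's actual tree):

```
cellfinder_core/tools/prep.py    <->  benchmarks/tools/prep.py
cellfinder_core/tools/system.py  <->  benchmarks/tools/system.py
```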
mypy fixes
To avoid mypy errors, I added cellfinder_core.tools.prep.* to the list of modules for which to ignore import errors. Not sure if this is best practice but I couldn't find a better way around it. Happy to get feedback on it!
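For reference, per-module suppression of missing-import errors in mypy is typically configured like this (the exact config file and the other entries in this repo may differ):

```ini
[mypy-cellfinder_core.tools.prep.*]
ignore_missing_imports = True
```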
Existing benchmarks
I moved a set of previous memory benchmarks written by @dstansby to a subdir in benchmarks called mem_benchmarks.