The basic benchmarks we have implemented are not part of CI yet. I need to do a bit of research on how we can do that, and what we should compare benchmarks to.
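(For context, since we use asv, the benchmarks in question are asv-style Python classes/functions. A minimal sketch, with hypothetical names and data, just to show the shape asv expects:)

```python
# benchmarks/benchmark_example.py - hypothetical file, discovered by asv
import numpy as np


class TimeSuite:
    """Methods starting with time_ are timed by asv; peakmem_ tracks memory."""

    def setup(self):
        # setup() runs before each benchmark and is excluded from the timing
        self.data = np.random.rand(1000, 1000)

    def time_sum(self):
        # asv reports the wall-clock time of this call
        self.data.sum()

    def peakmem_copy(self):
        # peakmem_ methods report peak memory usage instead of time
        self.data.copy()
```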
Can we use GitHub runners or do we need a dedicated SWC machine, even for the simpler benchmarks?
We may have issues with the benchmarks not being comparable if the GitHub runners we get have different specs:
- this blog post discusses a way around it with 'relative benchmarking' - fantastically explained
- the idea is that we use the command `asv continuous` to run a side-by-side comparison of two commits in the same runner (continuous for continuous integration)
- in astropy they opt for dedicated machines
- the `asv` docs on machine tuning also seem relevant for this question
- a third alternative could be to boot up a dedicated machine on the cloud every time a benchmark needs to be run, as they describe in this post
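A rough sketch (not a working setup) of what that relative-benchmarking step could look like in CI, as a small Python wrapper around `asv continuous`; the base ref, the 1.1 regression factor, and the pass/fail handling are all assumptions to be tuned:

```python
# ci_benchmark.py - hypothetical helper the CI job could call
import subprocess
import sys

BASE_REF = "origin/main"  # assumed base branch to compare against
HEAD_REF = "HEAD"         # the PR commit being benchmarked
FACTOR = "1.1"            # assumed threshold: flag slowdowns beyond ~10%

# Run both commits on the same runner so the comparison is relative,
# not tied to the specs of a particular machine.
result = subprocess.run(
    ["asv", "continuous", "--factor", FACTOR, BASE_REF, HEAD_REF],
    capture_output=True,
    text=True,
)
print(result.stdout)
print(result.stderr, file=sys.stderr)

# How to turn the comparison into a pass/fail signal for CI (exit code
# vs. parsing the output) still needs to be worked out.
sys.exit(result.returncode)
```

Because both commits are benchmarked within the same job on the same machine, this approach should remain meaningful on GitHub-hosted runners despite their varying specs.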
> Can we use GitHub runners or do we need a dedicated SWC machine, even for the simpler benchmarks?
I think we should use GH runners as much as we can, but we have local runners for any long-running tests.
Local runners work well, and we can get more machines if needed (or spawn jobs on our internal cluster from the runner), so we should be able to have both:
a) Sufficient compute
b) Consistent benchmarks using the same physical machines