A minor thing, but since we are calling the tests in parallel with 4 processes the outputs aren't kept together, so I changed the GitHub Actions workflow to call ctest in serial for now. A real fix for this could just be to define the benchmarks as a separate test case entirely, which should (hopefully) ensure they do stay together; a rough sketch of that idea is at the end of this comment. I've added a few simple tests which just benchmark some of the existing setups.

First run in CI:

Second run in CI:

Next steps:
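Something along these lines might work for the separate test case. This is only a rough sketch; the helper function, test name, and tag are placeholders rather than anything in XDG's actual API:

```cpp
#include <catch2/catch_test_macros.hpp>
#include <catch2/benchmark/catch_benchmark.hpp>

// Hypothetical stand-in for whatever the existing test fixtures provide;
// not part of XDG's real interface.
static double fire_single_ray() { return 1.0; }

// Keeping all benchmarks in test cases with a dedicated tag would let them be
// registered with CTest as their own test, so their output stays together even
// when the rest of the suite runs with four parallel processes.
TEST_CASE("Ray fire micro-benchmark", "[benchmark]") {
  BENCHMARK("single ray fire") {
    // Returning the result stops the call from being optimised away.
    return fire_single_ray();
  };
}
```

Catch2 can then run just these cases by passing the "[benchmark]" tag as the test spec to the test executable, which CTest could invoke as a single serial test.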
This is nice @Waqar-ukaea! Thanks for exploring this. One thought: CI machines can change and I don't think we're guaranteed a specific machine for a given run, making performance comparisons tricky. I really like the idea of building these benchmarks into the test suite, but we may want to limit comparisons to runs on a consistent machine. I'm looking into how we might connect to one, but it could take some time to ensure we're doing it securely (it's behind a firewall).
Sounds good @pshriwise. In that case I can carry on with what I had in mind for this PR and we can keep it unmerged until we figure out what we're doing with the CI machine.
I think it would be a good idea if XDG also implemented micro-benchmarks alongside its unit tests for the various ray tracing operations, so that we have a record of how these operations perform as code changes are made.
This PR aims to add some micro-benchmarks for each of the ray tracing operations currently available in XDG. Catch2 supports benchmarking natively, so I was thinking we could just use the Catch2 macros rather than bringing in another library. Details of how to implement benchmarks with Catch2 can be found here - https://github.com/catchorg/Catch2/blob/devel/docs/benchmarks.md (a rough sketch of what this could look like is included below).
If we want more advanced control over benchmarking, though, something like https://github.com/google/benchmark may be more suitable.
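For the Catch2 route, the macros from those docs would look roughly like the sketch below; the tracer type and setup here are hypothetical placeholders rather than XDG's actual ray tracing interface, and BENCHMARK_ADVANCED is what gives finer control over which part of the work gets timed:

```cpp
#include <catch2/catch_test_macros.hpp>
#include <catch2/benchmark/catch_benchmark.hpp>

// Hypothetical placeholders; real benchmarks would use XDG's ray tracing
// setup and its ray fire calls instead.
struct DummyTracer {
  double fire() const { return 42.0; }
};
static DummyTracer build_tracer() { return DummyTracer{}; }

TEST_CASE("Ray tracing micro-benchmarks", "[benchmark]") {
  // Simple form: Catch2 times the whole block, including any setup inside it.
  BENCHMARK("fire one ray") {
    return build_tracer().fire();
  };

  // Advanced form: setup happens outside meter.measure(), so only the fire()
  // call itself is timed.
  BENCHMARK_ADVANCED("fire one ray, setup excluded")(Catch::Benchmark::Chronometer meter) {
    DummyTracer tracer = build_tracer();
    meter.measure([&tracer] { return tracer.fire(); });
  };
}
```

This assumes Catch2 v3, where the benchmark header ships by default; with the v2 single header, CATCH_CONFIG_ENABLE_BENCHMARKING has to be defined for the BENCHMARK macros to be available.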
Some other considerations:
- Benchmark output only shows up when ctest is run with --verbose, as a benchmark won't count as failing (so its output wouldn't appear with --output-on-failure alone).