Ability to benchmark a top-level binary #8
I'm looking to do deterministic end-to-end benchmarking. It'd be great if I could have my target binary and flags passed directly to callgrind. First off, I'm unsure if callgrind is recursive for process spawns (I guess it is with #6, though that gets complicated and I don't want it further recursive). Even if it is, I'd still have the overhead of the "bench" function.
Hey! Right now I am on vacation, but I will take a stab at it on Monday. Best case scenario, there'll be a release the same day; we'll see :) I am curious, though: what do you mean by the overhead of a bench function?
I suppose all we need to do is to add a new function to …
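For illustration, a hypothetical shape for such a function might look like the sketch below. Nothing here is real calliper API: `Scenario::new_binary`, the binary path, and the flag are all invented for this sketch, and the `Runner`/`Scenario` entry points are assumed from calliper's existing examples.

```rust
use calliper::{Runner, Scenario};

fn main() {
    // Hypothetical: a scenario described by a command line rather than a
    // Rust function, so callgrind can wrap the exec'd binary alone.
    // `Scenario::new_binary` is an invented name, not a real calliper API.
    let benches = [Scenario::new_binary("target/release/my-binary", &["--some-flag"])];
    Runner::default().run(&benches).unwrap();
}
```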
Say I leveraged #6 to implement this: calliper would start capturing at the beginning of my bench, continue through the fork+exec and the running of my binary, and stop at the end of my bench. By implementing direct support for this, only the running of my binary would be captured.
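Concretely, the #6-style workaround might look something like this minimal sketch (the path and flag are placeholders, and `Runner`/`Scenario::new` are assumed from calliper's existing examples). Callgrind would measure from the top of the bench function, so the `Command` setup and the fork+exec get counted alongside the binary's own work:

```rust
use calliper::{Runner, Scenario};
use std::process::Command;

// With this workaround, measurement starts at the top of this function,
// so process-spawn overhead is captured in addition to the binary's run.
fn bench_binary() {
    Command::new("target/release/my-binary") // placeholder path
        .arg("--some-flag") // placeholder flag
        .status()
        .expect("failed to run target binary");
}

fn main() {
    let benches = [Scenario::new(bench_binary)];
    Runner::default().run(&benches).unwrap();
}
```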
At a high level, it seems fine. I can always give a prototype a test drive and iterate from there.
Thanks! Gave it a try, but I seem to be getting a panic.
Looks like it's expecting the call to always succeed, which makes sense for benchmarking function calls, but binaries can be expected to have a non-zero exit code. In this case, it's a spell checker looking for typos and finding them.
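To illustrate in plain std terms (a minimal sketch; `typos` stands in for whatever spell checker is under test, and it must be on `PATH` for this to run): a binary that reports findings via its exit code makes `status.success()` false even though the run completed normally, so a harness that asserts success panics.

```rust
use std::process::Command;

fn main() -> std::io::Result<()> {
    // A spell checker conventionally exits non-zero when it finds typos,
    // so asserting `status.success()` would panic on an otherwise
    // perfectly ordinary benchmark run.
    let status = Command::new("typos").arg("./src").status()?;
    println!("exit code: {:?}", status.code()); // non-zero when typos are found
    Ok(())
}
```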
Uh, yeah, it looks like this assert is no longer valid (in most cases, at least). I'll push a patch today.
Fixed as of a2ff6ab.
That worked, thanks!
BTW, I saw talk in bencherdev/bencher#82 of integrating with bencher.dev. Any updates on that?
Not really, I didn't get to that just yet - development around calliper has kind of stagnated recently. There should be some more activity in the coming months, and I may just start with bencher.dev integration - it feels really good to see people actually use this (especially yourself). :) On that note, feedback is most welcome - beyond bencher.dev integration, I feel like run summaries could use some love, so that eventually we could make working with benchmark results programmatically a bit easier.
For the most part, I can limp along with the features we have for now, because I want to make it easier to have shared features across different styles of benchmarking tools. I had originally planned to hold off on using something like this until after that work is done, but I really want benchmarking CI support for rust-lang/cargo#12207 to make sure some use cases are fast enough and that we get faster. See also …