Adjust normal runtimes for nofib along with related changes.

Authored by AndreasK on Jul 19 2018, 12:19 PM.


Group Reviewers
Restricted Owners Package (Owns No Changed Paths)
rNOFIB08cc9b6b2c7f: Adjust normal runtimes for nofib along with related changes

Runtimes for the nofib benchmarks were all over the place.
This patch adjusts most benchmarks so that their runtime
falls into the 0.2-2s range.

This means that:

  • A default run takes longer.
  • Time spent is distributed more evenly across benchmarks.
  • More benchmarks run long enough to be useful for runtime analysis.

Some further changes were made that go hand in hand
with the new runtimes:

  • Some benchmarks now create their input files during boot.
  • Moved the input files for anna into their own directory.
  • Removed output printing for some of the floating-point-heavy benchmarks.
  • Added a comment about desired runtimes to the README.
  • Made grep actually benchmark something.
  • Dropped cacheprof from the default benchmarks. Its nondeterministic behaviour has been an issue for a while, and it doesn't seem like an essential benchmark.
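As a rough sketch of the "create input files during boot" idea: instead of checking large input files into the repository, a benchmark's boot step can synthesize its input deterministically, so every run sees identical data. The module and names below are hypothetical illustrations, not code from the actual patch:

```haskell
-- Hypothetical boot-time input generator (illustrative only).
-- Generating input deterministically keeps runs reproducible while
-- letting us scale the input size to hit the desired runtime range.
module Main (main) where

import System.IO

-- Deterministic word stream: every boot produces identical input.
wordsFor :: Int -> [String]
wordsFor n = take n (cycle ["alpha", "beta", "gamma", "delta"])

main :: IO ()
main =
  -- Write the benchmark's stdin file during boot, not at build time.
  withFile "grep.faststdin" WriteMode $ \h ->
    mapM_ (hPutStrLn h) (wordsFor 100000)
```

Scaling the `100000` up or down is then the knob for tuning a benchmark into the 0.2-2s window for normal mode.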
Test Plan

Run nofib in modes slow/normal/fast.

Diff Detail

rNOFIB nofib
Automatic diff as part of commit; lint not applicable.
Automatic diff as part of commit; unit tests not applicable.
AndreasK created this revision.Jul 19 2018, 12:19 PM
Owners added a reviewer: Restricted Owners Package.Jul 19 2018, 12:19 PM
AndreasK edited the summary of this revision. (Show Details)Jul 19 2018, 12:29 PM
alpmestan requested changes to this revision.Jul 24 2018, 10:48 AM
alpmestan added a subscriber: alpmestan.

Just a minor request, but otherwise looks good to me and will be very handy (if anything, our plots will look nice & balanced).

This is nice, but the previous sentence, taken together with this one, makes me wonder what people should aim for with fast and slow. Any chance you could add a sentence for each?

This revision now requires changes to proceed.Jul 24 2018, 10:48 AM
AndreasK added inline comments.Jul 24 2018, 12:40 PM

I left these vague because I'm not sure what people should aim for.

For fast at least there is the general use case of getting instruction/allocation measurements.
So I guess any representative problem would do there.

For slow I'm just not sure what people use it for.
If it's used as "the same benchmarks on different problems", which is how I've used it so far, then the runtime doesn't really matter and could be in the range of normal.
If people use it to get long-running benchmarks, then more is better. But then how much is too much? What should be the minimum to aim for?

alpmestan accepted this revision.Jul 24 2018, 4:04 PM

Alright, since there aren't any particularly obvious recommendations to give for the other two, I stand corrected and accept this patch as-is.

This revision is now accepted and ready to land.Jul 24 2018, 4:04 PM
sgraf added a subscriber: sgraf.Nov 8 2018, 11:39 AM

Can this be merged?

Merged in 08cc9b6b2c7f7fdaaaf80361ab84a501f0a573c5. That somehow didn't close the diff automatically, and I can't mark the diff as closed myself.

This revision was automatically updated to reflect the committed changes.
sgraf updated the Trac tickets for this revision.Dec 6 2018, 3:09 AM