- User Since
- Jun 6 2014, 3:01 AM (146 w, 2 d)
Fri, Mar 24
Sun, Mar 19
Fri, Mar 17
Sorry for not checking earlier, but no, this is not sufficient: https://raw.githubusercontent.com/nomeata/ghc-speed-logs/f37051fd6649e468621ee197179a514587b78e27/e0c433c81182c934ee4c4cc5c6cf25a1b6fb8d83.log.broken
Thu, Mar 16
Wed, Mar 15
I was actually thinking about this problem (but in the context of running benchmarks in parallel) - maybe we could check in (or add as submodules?) the packages that are needed for the benchmarks. This would make nofib more hermetic (and we could measure the compile time required for those packages as another signal for measuring the performance of GHC itself).
This broke testing on perf.haskell.org:
== make boot --no-print-directory;
 in /home/nomeata/logs/ghc-tmp-REV/nofib/spectral/secretary
------------------------------------------------------------------------
/home/nomeata/logs/ghc-tmp-REV/inplace/bin/ghc-stage2 -M -dep-suffix "" -dep-makefile .depend -osuf o -O2 -Rghc-timing -H32m -hisuf hi -package random -rtsopts Main.lhs
<command line>: cannot satisfy -package random
    (use -v for more information)
<<ghc: 10186952 bytes, 7 GCs, 530340/819800 avg/max bytes residency (2 samples), 23M in use, 0.001 INIT (0.000 elapsed), 0.007 MUT (0.006 elapsed), 0.012 GC (0.012 elapsed) :ghc>>
Thu, Mar 9
Tue, Mar 7
Mon, Mar 6
Ok, all reported performance changes are due to the changes to the list functions, and not due to fusing short lists.
Sun, Mar 5
There are two small, but stable runtime changes in nofib. Expected, or a sign that something went wrong?
Sat, Mar 4
Bumping out of the review queue for now.
Fri, Mar 3
This might be interesting for @lukemaurer: According to https://perf.haskell.org/ghc/#revision/2effe18ab51d66474724d38b20e49cc1b8738f60 this patch undid the effect of join points on fannkuch-redux (i.e. allocation goes up by 1318762.9%, but runtime goes down by 5%). Was this ever cleared up?
Thu, Mar 2
Wed, Mar 1
Test suite update
Nevermind me then :-)
Do you have a grip on the performance regressions?
Benchmark name             previous   change     now
nofib/time/fannkuch-redux  4.706      + 11.62%   5.253 seconds
nofib/time/k-nucleotide    5.323      + 4.3%     5.552 seconds
Try to make T2110 more robust
Tue, Feb 28
Feb 22 2017
The last-piece nofib program has regressed considerably (7%), likely due to changes in Data.Map. How should we handle that? Ignore it? Report it to the containers project? I guess I’ll do the latter: https://github.com/haskell/containers/issues/415
Feb 21 2017
Runtime performance improvements in fasta and lambda:
A bit weird, isn’t it?
Feb 16 2017
Feb 15 2017
I have doubts about the VERBOSE setting. If increasing the verbosity can make tests fail, then that is a harsh violation of the principle of least surprise!
Sorry to jump on this after it has been committed, but I have doubts about the VERBOSE setting. If increasing the verbosity can make tests fail, then that is a harsh violation of the principle of least surprise!
Feb 11 2017
Also recenter haddock.compiler
Update perf data.
Rebase to master
Feb 9 2017
https://travis-ci.org/ghc/ghc/builds/200216540 looks good, thanks!
Hmm, baking in the output of a tool that is not under our control, that has several implementations, and that may of course change its error message from version to version does not strike me as very useful in the long run. Maybe just grep the output for TH_addCStub2.hs:13:13, assuming that at least that part stays stable?
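The grep idea could be sketched like this (the file name, error wording, and output file here are made up for illustration, not the actual testsuite files):

```shell
# Hypothetical sketch: rather than baking the tool's full error text into the
# expected output, accept any run whose output contains the stable source
# location prefix.
printf '%s\n' 'TH_addCStub2.hs:13:13: error: some tool-specific wording' > actual.out
if grep -q 'TH_addCStub2\.hs:13:13' actual.out; then
  echo 'PASS: stable location prefix found'
else
  echo 'FAIL: location prefix missing'
fi
```

Only the file:line:column prefix is matched, so the test survives wording changes between tool versions.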
On Travis, this causes test cases to fail: https://api.travis-ci.org/jobs/200040619/log.txt?deansi=true
Feb 8 2017
Feb 7 2017
I see different runtime results from nofib:
Feb 6 2017
@nomeata looks like there's a perf regression though in the testsuite (T4029).
Feb 5 2017
https://perf.haskell.org/ghc/#revision/fbcef83a3aa130d976a201f2a21c5afc5a43d000 reports some speed-ups, but also that some stats in the test suite might need to be adjusted
https://perf.haskell.org/ghc/#revision/a2f39da0461b5da62a9020b0d98a1ce2765dd700 reports some speed-ups!
Feb 4 2017
From a very brief look at this, and assuming for a moment that the plugin architecture allowed for hooks prior to the desugarer, would this change still be necessary?
perf.haskell.org reports the absence of significant regressions: https://perf.haskell.org/ghc/#revision/b59c2de7abe3cd4e046f11c3536ba8e7137c4f84
Whoohoo! Build time down by 13%. And sizes down. Well spotted!
Redid the patch, and made mkMachInt etc. do the wrapping.
Rebase to master
I disagree with that review.
Feb 3 2017
I have a better approach now. Do you like it?
New approach to not dropping dead code in the desugarer
Obviously I’ll push these as separate commits; but I’d like to get a quick nod from someone else before I do so.
Feb 2 2017
Clean up some code that is simpler without ProbOneShot
Feb 1 2017
Does this have to live in a separate library? Can it not be part of base (which already has a bunch of low-level GHC.* modules) or ghc-prim?
Jan 24 2017
This had a reproducible effect of increasing runtime of fasta by +4.47%. Are runtime effects here expected?
Jan 19 2017
Good job, tests/alloc/T13056 improves by 4%!
Jan 17 2017
Jan 6 2017
Well, I expect programs that use Monad (Either s) or Functor (Either s) or similar constructs to benefit, but we might just not have them in nofib.
Yes, at https://perf.haskell.org/ghc/#revision/19d5c7312bf0ad9ae764168132aecf3696d5410b. Unfortunately, nothing significant shows up. When I tested it locally, I saw some relevant improvements with the runtime of binary-trees. I continue to conclude that our benchmark situation sucks when it comes to producing hard evidence.