- User Since
- Jun 6 2014, 3:01 AM (159 w, 16 h)
Fri, Jun 2
May 20 2017
May 12 2017
Apr 30 2017
This broke Travis:
Clean up imports
Quite right, cleanup patch at https://phabricator.haskell.org/D3511
Apr 28 2017
Combined effect has one runtime regression and one improvement of similar size. See
for details. I won’t raise any concerns.
Nevermind, the next commit fixes some of these issues; I should report on their combined effect.
According to https://perf.haskell.org/ghc/#revision/a1b753e8b1475659440f524b3e66dfbea31c5787 this fix is not without a cost:
Allocations up 22% for boyer2, up 5% for mate; runtime up 4.7% for fast and up 3.3% for mate.
Apr 24 2017
Oh, I guess I have to tell Phabricator that I don’t currently plan to change this, but rather that discussion should continue.
Apr 18 2017
Apr 12 2017
I (well, perf.haskell.org) observe a 25% increase in the runtime of n-body (but no change in any of the other numbers).
Apr 10 2017
Apr 9 2017
Mar 31 2017
Ok, I just did that in 03c7dd0941fb4974be54026ef3e4bb97451c3b1f. It can be reverted if that was uncalled for, but ideally with a patch that avoids spamming the log.
Is there a reason not to quickly and simply disable the warning until this has been looked into?
Mar 29 2017
This commit makes the build on perf.haskell.org produce huge build logs (>130MB) because of a spew of new lint warnings:
Mar 28 2017
Mar 24 2017
Mar 19 2017
Mar 17 2017
Sorry for not checking earlier, but no, this is not sufficient: https://raw.githubusercontent.com/nomeata/ghc-speed-logs/f37051fd6649e468621ee197179a514587b78e27/e0c433c81182c934ee4c4cc5c6cf25a1b6fb8d83.log.broken
Mar 16 2017
Mar 15 2017
I was actually thinking about this problem (though in the context of running benchmarks in parallel). Maybe we could check in (or add as submodules?) the packages that the benchmarks need. That would make nofib more hermetic, and we could measure the compile time of those packages as another signal for GHC’s performance.
This broke testing on perf.haskell.org:
== make boot --no-print-directory;
 in /home/nomeata/logs/ghc-tmp-REV/nofib/spectral/secretary
------------------------------------------------------------------------
/home/nomeata/logs/ghc-tmp-REV/inplace/bin/ghc-stage2 -M -dep-suffix "" -dep-makefile .depend -osuf o -O2 -Rghc-timing -H32m -hisuf hi -package random -rtsopts Main.lhs
<command line>: cannot satisfy -package random
    (use -v for more information)
<<ghc: 10186952 bytes, 7 GCs, 530340/819800 avg/max bytes residency (2 samples), 23M in use, 0.001 INIT (0.000 elapsed), 0.007 MUT (0.006 elapsed), 0.012 GC (0.012 elapsed) :ghc>>
Mar 9 2017
Mar 7 2017
Mar 6 2017
Ok, all reported performance changes are due to the changes to the list functions, and not due to fusing short lists.
Mar 5 2017
There are two small, but stable runtime changes in nofib. Expected, or a sign that something went wrong?
Mar 4 2017
Bumping out of the review queue for now.
Mar 3 2017
This might be interesting for @lukemaurer: According to https://perf.haskell.org/ghc/#revision/2effe18ab51d66474724d38b20e49cc1b8738f60 this patch undid the effect of join points on fannkuch-redux (i.e. allocation goes up by 1318762.9%, but runtime goes down by 5%). Was this ever cleared up?
Mar 2 2017
Mar 1 2017
Test suite update
Nevermind me then :-)
Do you have a grip on the performance regressions?
Benchmark name             previous   change     now
nofib/time/fannkuch-redux  4.706      +11.62%    5.253 seconds
nofib/time/k-nucleotide    5.323      +4.3%      5.552 seconds
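For reference, the change column here is just the relative runtime change between the previous and now columns; a minimal sketch, using the figures from the table above:

```python
# Percent change of a benchmark runtime, relative to the previous run.
def pct_change(previous, now):
    return (now - previous) / previous * 100

print(round(pct_change(4.706, 5.253), 2))  # fannkuch-redux: 11.62
print(round(pct_change(5.323, 5.552), 2))  # k-nucleotide: 4.3
```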
Try to make T2110 more robust
Feb 28 2017
Feb 22 2017
The last-piece nofib program has regressed considerably (7%), likely due to changes in Data.Map. How should we handle that? Ignore it? Report it to the containers project? I guess I’ll do the latter: https://github.com/haskell/containers/issues/415
Feb 21 2017
Runtime performance improvements in fasta and lambda:
A bit weird, isn’t it?
Feb 16 2017
Feb 15 2017
Sorry to jump on this after it has been committed, but I have doubts about the VERBOSE setting. If increasing the verbosity can make tests fail, then that is a harsh violation of the principle of least surprise!
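To illustrate the concern: a test that bakes in the exact output breaks as soon as verbosity adds lines, while a test that only checks for the expected lines does not. A made-up sketch (none of these names come from the testsuite):

```python
# A test comparing output verbatim fails under higher verbosity;
# checking only that the expected lines appear is robust to it.
expected = ["result: 42"]

quiet_output = "result: 42\n"
verbose_output = "entering main\nresult: 42\nleaving main\n"

def exact_match(output):
    return output.splitlines() == expected

def contains_expected(output):
    lines = output.splitlines()
    return all(e in lines for e in expected)

print(exact_match(quiet_output), exact_match(verbose_output))              # True False
print(contains_expected(quiet_output), contains_expected(verbose_output))  # True True
```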
Feb 11 2017
Also recenter haddock.compiler
Update perf data.
Rebase to master
Feb 9 2017
https://travis-ci.org/ghc/ghc/builds/200216540 looks good, thanks!
Hmm, baking in the output of a tool that is not under our control, that has several implementations, and that may of course change its error message from version to version does not strike me as very useful in the long run. Maybe just grep the output for TH_addCStub2.hs:13:13, assuming that at least that part stays stable?
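A minimal sketch of that idea: accept the output if the stable source location appears anywhere in it, instead of matching the whole diagnostic, whose wording varies between C compilers and versions. The helper name and the sample outputs are made up; the location is the one quoted above.

```python
# Match only the stable "file:line:col" prefix, not the full message.
def mentions_location(output, location="TH_addCStub2.hs:13:13"):
    return any(location in line for line in output.splitlines())

sample = "TH_addCStub2.hs:13:13: warning: some compiler-specific wording"
print(mentions_location(sample))                    # True
print(mentions_location("unrelated.c:1:1: error"))  # False
```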
On Travis, this causes test cases to fail: https://api.travis-ci.org/jobs/200040619/log.txt?deansi=true
Feb 8 2017
Feb 7 2017
I see different runtime results from nofib:
Feb 6 2017
@nomeata looks like there's a perf regression though in the testsuite (T4029).
Feb 5 2017
https://perf.haskell.org/ghc/#revision/fbcef83a3aa130d976a201f2a21c5afc5a43d000 reports some speed-ups, but also that some stats in the test suite might need to be adjusted
https://perf.haskell.org/ghc/#revision/a2f39da0461b5da62a9020b0d98a1ce2765dd700 reports some speed-ups!