rGHC Glasgow Haskell Compiler
If we want to keep Trac #3207 "fixed", then yes, I suppose we need this. But what makes the code in Trac #3207 better than my proposed implementation of unsafeInterleaveIO? Trac #3207 uses trace, which uses unsafePerformIO. Is there any particular reason users should expect it to behave well in optimized code? I'm not confident that Trac #3207 was a valid bug.
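For context, Trac #3207 concerned programs of roughly this shape, where trace (which is built on unsafePerformIO) is evaluated inside lazy ST. This is a hedged sketch of the pattern under discussion, not the exact reproducer from the ticket:

```haskell
import Control.Monad.ST.Lazy (runST)
import Debug.Trace (trace)

-- In lazy ST, effects are only forced when their results are demanded,
-- so whether (and how often) the trace message fires can vary with the
-- optimization level -- exactly the kind of sensitivity at issue here.
example :: Int
example = runST (do
  let x = trace "evaluated" (1 :: Int)
  return (x + x))
```

The pure result is stable (forcing `example` yields 2); it is the timing and multiplicity of the trace output that optimization can change.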
Currently, there's a mismatch between the note on has_side_effects, which says it should be False for read-only primops, and primops.txt.pp, which makes it True for these. @simonmar, you changed this to "fix" a weird interaction between lazy ST and trace. Even assuming that was a bug worth fixing, it looks like something else may have fixed it another way since. In any case, we definitely need to decide on one consistent story.
> Currently, there's a mismatch between the note on has_side_effects, which says it should be False for read-only primops, and primops.txt.pp, which makes it True for these.
Ah, I see. Yes, the rationale in that Note makes sense to me, and it looks like has_side_effects should be removed from read-only primops.
> @simonmar, you changed this to "fix" a weird interaction between lazy ST and trace. Even assuming that was a bug worth fixing, it looks like something else may have fixed it another way since.
Can you point to the relevant commits here?
@simonmar, the relevant commit was 8a3ed3364fbc74b1f1b87b049737da2b251f92df. Is there some centralized place where we explain how all the hacks to make IO work fit together? Some is explained in the note on has_side_effects, and some is partially explained in the note on the I/O hack in the demand analyzer, but is there a coherent story somewhere? Speaking of the demand analyzer hack, that seems to handle things *other* than primops; it doesn't seem to handle primops right.
Ugh. No, there's no centralized place that describes these things as far as I know.
The comment I wrote in that commit seems to have been merged into the larger Note [PrimOp can_fail and has_side_effects] in PrimOp.hs (look for "Duplication"). A variant of the original example is there, and a reference to Trac #3207. So it seems like has_side_effects is still necessary to prevent duplication of read primops (in which case perhaps the Note should be made clearer on this point).
I'm not sure whether this is specific to the lazy ST monad or whether the problem can occur elsewhere. It seems that, absent has_side_effects, nothing prevents the simplifier from duplicating a read primop and thus reordering it relative to other operations.
Well, we have the whole state token thing. Once that isn't in play, everything goes totally wonky. It's very unclear to me what we're trying to guarantee there. You did add a test case, I think; it's just in a weird subdirectory: the code generator test suite.
We have the State# token, yes, which enforces ordering of operations with respect to each other: a read following a write will always occur in that order, because the state token is passed from one operation to the next.
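To make the ordering claim concrete, here is a hedged, pure simulation of state-token threading; ordinary boxed values stand in for State# and unboxed tuples, and the names are invented for illustration:

```haskell
-- A toy world: a single mutable cell, threaded explicitly as a token.
newtype World = World Int

-- Toy "primops" take the token and return a new one.
writeCell :: Int -> World -> ((), World)
writeCell x (World _) = ((), World x)

readCell :: World -> (Int, World)
readCell w@(World x) = (x, w)

-- Sequencing threads the token, so a read after a write must observe
-- the write: readCell's input token is a data dependency on writeCell.
writeThenRead :: World -> (Int, World)
writeThenRead w0 =
  let ((), w1) = writeCell 42 w0
      (v,  w2) = readCell w1
  in (v, w2)
```

The data dependency on the token is what forces the order; the compiler cannot run `readCell` before `writeCell` here because it needs `w1` first.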
However, the problem here (Trac #3207) is that we have an operation that takes a State# as input and returns a pair: a new State# and a value. Nothing prevents the compiler from performing the operation twice: once to get the value, and again to get the State#. At that point we've lost the ordering, because the copy that returns the value does nothing with its State# output.
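A self-contained, hedged sketch of that duplication in a pure toy model (boxed stand-ins for State#; names invented for illustration):

```haskell
-- Toy token-threaded "primops", purely illustrative.
newtype Token = Token Int

readOp :: Token -> (Int, Token)
readOp t@(Token n) = (n, t)

writeOp :: Int -> Token -> ((), Token)
writeOp x _ = ((), Token x)

-- Intended order: read first, then write; v is the old contents.
original :: Int
original =
  let (v,  t1)  = readOp (Token 0)
      ((), _t2) = writeOp 1 t1
  in v

-- If the simplifier were free to duplicate readOp -- one copy kept for
-- its Token result, another re-run for its value -- the value-producing
-- copy would no longer be tied into the token chain, and nothing would
-- stop it floating past the write. In this pure model the hazard is
-- invisible (both copies compute the same answer); with a genuinely
-- effectful primop, the duplicated read could observe a different world.
duplicated :: Int
duplicated =
  let (_v, t1)  = readOp (Token 0)   -- copy kept for its token
      ((), _t2) = writeOp 1 t1
      (v2, _)   = readOp (Token 0)   -- duplicated copy, re-run for its value
  in v2
```

This is why marking read primops has_side_effects (or otherwise forbidding duplication) matters: the token dependency alone does not protect an operation whose token output is discarded.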