@NeilWoodward20 @jjfahrenfort as long as the selection criteria are independent of the data, R-to-M won't happen, so a priori selection is generally good! this was problem #1 here: https://t.co/b8UkpjmH2h
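A minimal sketch of the point above (my own illustration, not the paper's code; all names and numbers are assumed): selecting subjects based on the measured data itself (e.g. keeping only low baseline scorers) produces an apparent "improvement" at retest purely via regression to the mean, whereas an a priori, data-independent selection shows no such artifact.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
true_score = rng.standard_normal(n)
test = true_score + rng.standard_normal(n)    # noisy measurement 1
retest = true_score + rng.standard_normal(n)  # noisy measurement 2; no real change

data_dep = test < -1            # selection based on the measured data itself
a_priori = rng.random(n) < 0.2  # selection independent of the data

# change score in each selected group
print((retest[data_dep] - test[data_dep]).mean())  # large positive: spurious "improvement"
print((retest[a_priori] - test[a_priori]).mean())  # near zero: no artifact
```

The data-dependent group "improves" by almost a full standard deviation even though nothing changed between test and retest; that is the artifact independent selection criteria avoid.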
@DrTLane @lukaswallrich @MarcusCrede thanks 🙏🏼! if you need a paper rather than a video, this is my main stab at the problem: https://t.co/b8UkpjmH2h
Paper 21: Holmes 2009 Criticism: In case you didn't believe my first three papers on the topic, here are too many supplementary graphs & simulations making the same, relatively small, statistical point. The point is correct, but it's a paragraph, tops
Octave (Matlab) code here: https://t.co/eMq6cSI4rx https://t.co/W6Pghzn1aY. I've made my career publishing about this, but you can read some more here: https://t.co/b8UkpjmH2h
@sampendu @BEAM_Manchester yeah, probably in my sub-field - I've trained myself over many years to spot these 'double dips'. editors & reviewers regularly miss them; they probably occur in ~1 in 10 of the reviews I do. https://t.co/jq3N6FjIDG https://t.co/b8Ukpjm
@sirileknes try this! i discuss various problems with this topic here: https://t.co/6XISOWROra
... unless you want me to write yet ANOTHER paper on this very common problem*: https://t.co/b8UkpjmH2h * I probably will, tbh - it's so EASY! #InverseEffectiveness #RegressionToTheMean #ReversionToMediocrity #DoubleDip
@diedrichsenlab Ah, that old chestnut! I've been hacking away at mis-interpretation of correl(x,y-x) for years, to little effect: https://t.co/b0t700xivm
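A quick sketch of the correl(x, y-x) trap (my own illustration, not the linked paper's code): even when x and y are completely independent, correlating x with the difference y - x yields a strongly negative coefficient of about -1/sqrt(2) ≈ -0.71, because x appears on both sides with opposite sign.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)  # e.g. baseline scores
y = rng.standard_normal(100_000)  # independent follow-up scores

# correlation of baseline with the change score: built-in, not evidence
r = np.corrcoef(x, y - x)[0, 1]
print(round(r, 2))  # -0.71 despite zero real association between x and y
```

Analytically, cov(x, y - x) = -var(x) when x and y are independent, so the negative correlation is guaranteed by construction, not discovered in the data.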