Talk 8 by @laurafontanesi, who showed how valence and magnitude effects can, to some extent, be accounted for by different decision processes, without requiring modifications/distortions of values. She presented unpublished results, but also this paper htt
@lei_zhang_lz @zaizibai1 @wooyoungahn Hi, Dr. Zhang! That's what we intend to explore in the next step of our analysis for this project. I know of one paper that combines an RL model and a DDM into a joint RLDDM model. https://t.co/lpPqoWJAEV
Check out this new paper by @SebastianGluth, @laurafontanesi and others, on the importance of considering RT (and the DDM) in reinforcement learning. We followed up on their approach in a related paper here: https://t.co/wh2oiy5oAn
Very happy to see our paper finally available online! :) We show how to combine RL and DDM models to explain behaviour in the presence of higher- and lower-rewarding options. Code will also be made available soon :D
Check out our new (#openaccess) publication with @laurafontanesi and @SpektorMikhail on combining #ReinforcementLearning and the #DDM. In brief: It's tricky because learning influences more than just the drift rate, and value maps onto drift nonlinearly: h
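The mechanism described above — a learner whose value estimates feed into a drift-diffusion process through a nonlinear mapping — can be sketched in a few lines of code. This is a minimal illustration, not the paper's actual model: the delta-rule learner, the sigmoidal value-to-drift mapping, and all parameter values (`alpha`, `v_max`, `k`, the boundary separation `a`, the non-decision time `ndt`) are assumptions chosen for the sketch.

```python
import numpy as np

def sigmoid_drift(q_diff, v_max=3.0, k=2.0):
    # Nonlinear (S-shaped) mapping from value difference to drift rate,
    # bounded in (-v_max, v_max). Illustrative choice, not the paper's exact form.
    return v_max * (2.0 / (1.0 + np.exp(-k * q_diff)) - 1.0)

def simulate_rlddm(rewards, alpha=0.1, a=1.5, ndt=0.3,
                   dt=0.001, noise=1.0, max_t=5.0, rng=None):
    """Simulate a two-option bandit: a delta-rule learner updates Q-values,
    and each trial's drift rate is a sigmoidal function of the Q difference.
    Returns per-trial choices, response times, and final Q-values."""
    rng = np.random.default_rng() if rng is None else rng
    q = np.zeros(2)
    choices, rts = [], []
    for r0, r1 in rewards:
        v = sigmoid_drift(q[1] - q[0])
        # Euler-Maruyama random walk between boundaries 0 and a, starting at a/2.
        x, t = a / 2.0, 0.0
        while 0.0 < x < a and t < max_t:
            x += v * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choice = 1 if x >= a else 0       # upper boundary -> option 1
        choices.append(choice)
        rts.append(ndt + t)               # RT = non-decision time + diffusion time
        reward = (r0, r1)[choice]
        q[choice] += alpha * (reward - q[choice])  # delta-rule update
    return np.array(choices), np.array(rts), q

# Example: option 1 always pays 1, option 0 always pays 0.
rewards = [(0.0, 1.0)] * 50
choices, rts, q = simulate_rlddm(rewards, rng=np.random.default_rng(0))
```

Note how this toy model already shows why the combination is tricky: as `q[1] - q[0]` grows during learning, the drift rate saturates through the sigmoid rather than scaling linearly with value, so choices and RTs change at different rates across trials.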