@cabitzaf @grok_ IMO collaboration is a decent model for some people's / household's interaction with dogs. Cf. https://t.co/kuhJOKYsNo for how dogs & monkeys may be moral agents in a household but not legal agents in human society. Certainly they brin
@yoginho I agree, except it's not even about what AI "achieves", it's not useful to talk about technology agency that way. It's about what we can and will achieve through our technology https://t.co/kuhJOKYsNo
That is, we can champion striving for a clear delineation of what is factually determined from what must rather be normative. My recent angry tweet about the narrative of #machineBehaviour was about exactly this, cf. my regulatory x ethics papers e.g. ht
@jadelgador @gabramosp @UNESCO While those things are correlated with moral patiency in humans, unless you choose to use those terms to be exactly equivalent to "moral patiency", I argue their typical characteristics do not determine it. 2018 academic argu
@loehrgui @AnnaStrasser1 @David_Gunkel Yeah, my original position on the AI was "doesn't matter whether you can bc such a bad idea", but when my book comes out he'll have to migrate me towards "cannot". Of course, he's absolutely right on my position on th
@cccalum @dw2 Because what you are actually talking about is wanting to be able to construct something you owe moral obligation to, so talk about that. That's hard enough to work on without throwing in other disputed terms. https://t.co/kuhJOKZ0CW
@shiminwang7 But not the basic view https://t.co/kuhJOKYsNo
@StephenBrancat1 You might be interested in my longer discussions of that https://t.co/kuhJOKYsNo though I'd hoped to have also made it clear in the Wired article that the vast, vast, vast majority of AI is and will always be commercially generated and own
@gfodor @jonst0kes Art has moral salience, but it is not a moral agent https://t.co/kuhJOKYsNo Turing was a total genius, but still wrong about this.
@mark_asbach @johl @Senficon Thinking of machines themselves as having rights is a problem, https://t.co/kuhJOKYsNo EXCEPT as an expression that carries back, transitively, to the humans who use them. Yeah, interesting point I hadn’t been on top of, thanks!
@MuseumofAI @sirnotto @monobor I don’t think “collaboration” is the right metaphor, as you may know. But in case https://t.co/kuhJOKYsNo
@cow_trix Ah 1) not possible to build an L5 autonomous car 2) all planning systems vaguely could be said to "rank future states;" yes that has ethical consequences, but that doesn't make them moral agents. see "definition of moral action" here https://t.co
I've come to define #ethics to mean all means a society uses to hold itself together (& morality as the explicit subset of that). That derived from the observation of cultural variation in definitions of moral agency; I used it because #transhumanism:
@existentialand @justMathana @David_Gunkel @atg_abhishek @mitpress The first two ethics papers I wrote nobody read, & a lot of people have thanked me for that chapter, but after conversations I wrote that 2015 apology. Also a better version of the chap
RT @j2bryson: @hadi_a @alan_winfield @mikarv @sedyst @1Br0wn @podehaye @algorithmwatch @KayFButterfield @OECD more formally (for law) https…
@hadi_a @alan_winfield @mikarv @sedyst @1Br0wn @podehaye @algorithmwatch @KayFButterfield @OECD more formally (for law) https://t.co/jt3zNKAZYJ (oa https://t.co/imO7ccvsIZ ) or (for philosophy) https://t.co/kuhJOKYsNo or (for engineering) https://t.co/FNRP
@LucasCardiell You should also read this more recent article https://t.co/kuhJOKYsNo
@kokociel1 Well if you want to define sentient == complex then yes, but what has that bought you? The most unpredictable thing is white noise, is that sentient? Anyway I don’t have an adequate paper on anthropomorphism yet, but here’s a nearby set of issue
@Inframethod IMO you don’t want to go down the rabbit hole where we decide whether thinking is lit. Just jump in with both feet on the real issues: moral agency & moral patiency. Then you can be functionalist about the rest. https://t.co/kuhJOKYsNo
@Lilyfrank16 @jksmith8806 @David_Gunkel @ResearchGate @MCoeckelbergh @JohnDanaher @autumnedwards @amperjay @JoshGellers @SvenNyholm That is a very successful stab, cf. moral agency and moral patiency. https://t.co/kuhJOKYsNo I'm surprised you haven't alrea
@williamstome @aimeevanrobot @Boring_AI @RespRobotics There's a difference between taking actions of moral import (an agent expressing moral actions) and being the responsible agent for those actions (the moral agent). E.g. military commanders, AI owners/oper
RT @TravisLacroix: The paper I have in mind, out of this week's readings for a (massive) Lit review on AMAs, is Bryson, J. J. (@j2bryson) (…
The paper I have in mind, out of this week's readings for a (massive) Lit review on AMAs, is Bryson, J. J. (@j2bryson) (2018) "Patiency is not a virtue: the design of intelligent systems and systems of ethics" Ethics and Information Technology. [3/3] https
@OliverBridge12 @aimeevanrobot I agree with this, and again have published both. Here's a paper about moral behaviour in machines, and how this does not necessitate moral patiency (or agency!) https://t.co/kuhJOKYsNo
@rodakker I do talk about it all the time https://t.co/kuhJOKYsNo
Lots to think about in this thread & the associated articles - including @j2bryson 's Patiency: https://t.co/EH0YX7R0ZM
@rodakker Yes. Cf. the papers. It's not intelligence we care about, it's not agency, it's MORAL agency, who is responsible. If you make "intelligence" do too much duty, you can't talk about the components of responsibility. Longer: https://t.co/kuhJOKYsNo
RT @j2bryson: @CjColclough @DorotheaBaur @sd_marlow @johnchavens @WendellWallach @vdignum Back to defs: intelligence is generating actions…
@CjColclough @DorotheaBaur @sd_marlow @johnchavens @WendellWallach @vdignum Back to defs: intelligence is generating actions appropriate to the context (def from 1880s!), so AI can act. But it is not sensible to assign artefacts responsibility/ownership fo
RT @j2bryson: @CjColclough @johnchavens @WendellWallach @vdignum I talked about moral decisions vs moral agents here: https://t.co/kuhJOKYs…
@CjColclough @johnchavens @WendellWallach @vdignum I talked about moral decisions vs moral agents here: https://t.co/kuhJOKYsNo but I try not to get too bogged down in language distinctions like morals vs ethics. 1/2
@anotherday____ @RecklessCoding @vdignum @hooklee75 I'm not sure what work your definition of intelligence is doing for you. Again, I suggest my patiency paper https://t.co/kuhJOKYsNo I think you are confounding something important beyond the definition of
@RecklessCoding @hooklee75 @vdignum cf. def of moral actions here https://t.co/kuhJOKYsNo
@teemu_roos @azeem This is the first chapter of my new book, but it's also in a bunch of my papers. Here's a good one https://t.co/kuhJOKYsNo
@mccrickerd That’s not a typo, it’s one of my tag lines? Does this help? https://t.co/kuhJOKYsNo
Most 'AI in ethics' isn't about ethics at all. It's activism, where ethics is neither understood nor defined. It's barely disguised anti-tech ghost hunting, full of hot air about imaginary rights... @j2bryson sets the record straight https://t.co/UMZ1YohnP
Writer remarks “Replikas didn’t choose this ‘life,’ they’re slaves.” Could be human gig-economy serfs ‘Turking’ as AIs (problematic in its own way), otherwise apropos are two from @j2bryson: “Robots Should Be Slaves” https://t.co/voCFlpmydi & https://t
RT @j2bryson: @futsofwelfare @Jacqx13 You two might also like https://t.co/kuhJOKYsNo if you are more concerned about foundations of morali…
@futsofwelfare @Jacqx13 You two might also like https://t.co/kuhJOKYsNo if you are more concerned about foundations of morality than mechanics of consciousness (that just got a cursory paragraph in the older paper, but I knew Sven had seen the newer one.)
RT @j2bryson: @Grady_Booch @rodneyabrooks We need to stop getting excited about silly flags of apparent humanlikeness like explicit vs impl…
@Grady_Booch @rodneyabrooks We need to stop getting excited about silly flags of apparent humanlikeness like explicit vs implicit memory systems (which we can easily design). The attributes we really care about are technically called moral agency and moral
RT @j2bryson: @mark_riedl @RecklessCoding We need to stop getting excited about flags like explicit vs implicit memory systems (which we ca…
@mark_riedl @RecklessCoding We need to stop getting excited about flags like explicit vs implicit memory systems (which we can easily design) and get serious about understanding moral agency and patiency. https://t.co/kuhJOKYsNo
@mlamons1 We need to stop getting excited about flags like explicit vs implicit memory systems (which we can easily design) and get serious about understanding moral agency and patiency. https://t.co/kuhJOKYsNo
@spillteori @TweetinChar @neilturkewitz @BrettFrischmann @David_Gunkel @LMSacasas @DorotheaBaur @Abebab @dmonett Yeah I didn’t go into that there but the only difference to humans is that regulating our society is what our entire moral apparatus is tuned f
@EyeOnThePitch @nbanteka I argue the same wrt moral agency in https://t.co/kuhJOKGRoO but you probably both know that.
@RosGanley @Abebab @brodiegal @SarahPinsker @theblub Note that unfortunately that misconstrues my work. Cf. the paper they quote, and also https://t.co/kuhJOKYsNo
@LeConcurrential For more (if you are into that kind of discussion) here's a more philosophical discussion of the same topics, just by me (the other paper is with two legal experts in legal personality) https://t.co/kuhJOKYsNo
@SurviveThrive2 @David_Gunkel @BoganiRonny @NoelSharkey “Eventually?” MScs have programmed those for decades. https://t.co/FAC53rWElf Not really relevant https://t.co/kuhJOKYsNo
@daltonnyuphilo1 @RobotRules @GrandpaRobot @NoelSharkey @David_Gunkel And they’re collections of humans. When you protect those humans TOO much you get shell companies = corruption of the society. No humans would be even worse. https://t.co/kuhJOKYsNo
@Maperez324 @spillteori @Abebab cf https://t.co/vOKx2ixPwL and https://t.co/kuhJOKYsNo if you haven't already (I edited out all the people I KNOW already have heard about these papers, sorry if I should know you have.)
@theblub I wrote a decent article about this recently, "[moral] Patiency is not a virtue" – it talks about moral agency too https://t.co/kuhJOKYsNo See also the revised (for IJCAI 2011) version of my very first #aiethics paper (from 1996 initially) https:
@JoshGellers @keithfrankish @David_Gunkel @MCoeckelbergh @JohnDanaher For me, start with the 2019 bbva policy article https://t.co/4JCTVIXEW4 then read the more philosophical 2018 Patiency is not a virtue https://t.co/kuhJOKYsNo There’s also a blog P
RT @j2bryson: @kerstingAIML @teemu_roos @kaliouby @wef @AIESConf I've written about that extensively. Here from a law perspective (with lea…
@kerstingAIML @teemu_roos @kaliouby @wef @AIESConf I've written about that extensively. Here from a law perspective (with leading legal experts) https://t.co/bag5BaCCoW here from a philosophical & design perspective https://t.co/kuhJOKYsNo This is not
RT @j2bryson: @be_jenky @NC_Matthews @Faust_III I've written so many blogs about this https://t.co/Sj39HVmn3R & of course academic papers…
@be_jenky @NC_Matthews @Faust_III I've written so many blogs about this https://t.co/Sj39HVmn3R & of course academic papers https://t.co/kuhJOKYsNo & coauthored national level AI policy https://t.co/F2fkHHvnc3 that heavily influenced global policy
RT @future_of_AI: Patiency is not a virtue: the design of intelligent systems and systems of ethics https://t.co/4RQRtlPTiw #ai #machinelea…
Patiency is not a virtue: the design of intelligent systems and systems of ethics https://t.co/4RQRtlPTiw #ai #machinelearning #artificialintelligence via @j2bryson
Patiency is not a virtue: the design of intelligent systems and systems of ethics https://t.co/8NRE2tG9s0 #AI #Robotics via @j2bryson
Patiency is not a virtue: the design of intelligent systems and systems of ethics https://t.co/uDN8DfCF1c #AI #Ethics via @j2bryson
@javi_valls Normally punishment & enforcement these days is determined by law, which in my mind is a subpart of ethics, but in traditional moral philosophy I believe it's generally considered mostly separate from ethics, only hopefully informed by it. My weird
@wisdomplexus @ctricot @mjrobbins @anki @paulroetzer It’s not about capability — putting artefacts in charge is really about undermining human accountability, because it would be the action of a human agency to do that. https://t.co/kuhJOKYsNo
RT @j2bryson: @evansd66 @JohnDanaher @AlanMackworth Exactly. So the question isn’t which definition is “right”, definitions are just policy…
RT @j2bryson: @NeuralMimicry @TallinnSummit Humans and neanderthals are both hominids. The difference between them is so small they can int…
RT @j2bryson: @M_Altarriba @markcannon5 @RobMcCargow @WSJProAI me on moral patiency contrasting with consciousness, intelligence etc: https…
Patiency is not a virtue: the design of intelligent systems and systems of ethics https://t.co/4RQRtlyhTW #ai #machinelearning #artificialintelligence via @j2bryson
@evansd66 @JohnDanaher @AlanMackworth Exactly. So the question isn’t which definition is “right”, definitions are just policy, agreed for a purpose. Basically it looks to me like John is agreeing with the agency arguments in my patiency paper, though not m
@NeuralMimicry @TallinnSummit Humans and neanderthals are both hominids. The difference between them is so small they can interbreed. Sexual reproduction with digital technology is not a thing. Maybe this helps? https://t.co/KLVIJdkRhj or this? https://t.c
@M_Altarriba @markcannon5 @RobMcCargow @WSJProAI me on moral patiency contrasting with consciousness, intelligence etc: https://t.co/kuhJOKYsNo me on consciousness (a bit old, with only a very cursory mention of #aiethics) https://t.co/01y9UFBxEc 2/2
@SimonColton q&a Bartle #CoG2019 4) maybe they're already experiencing meaning (I'd say yeah but doesn't matter https://t.co/qOPahq2BbV ) he says no we're sure we haven't. Chair asks will this happen in 10years? IMO consciousness already has & isn'
#cog2019 Bartles: argument against AI as moral agents is that they're just bits. No, it's substantially more sophisticated: https://t.co/h9mf2xzlwx describes moral actors that do not require moral patiency, which is a terrible way to protect AI https://t.c
RT @j2bryson: @PhilosoFox94 @David_Gunkel @eripsa @Twitter @PKathrani @mitpress Already the expression "robot rights" introduces MANY probl…
RT @kennethanderson: Great paper by @j2bryson https://t.co/ScSlij9aFg
Great paper by @j2bryson
@PhilosoFox94 @David_Gunkel @eripsa @Twitter @PKathrani @mitpress Already the expression "robot rights" introduces MANY problematic assumptions, but I finally gave in & made a label on my blog about it anyway https://t.co/Sj39HVmn3R I'd recommend forem
@RebelScience You might like my publications eg policy https://t.co/4JCTVIXEW4 philosophy https://t.co/kuhJOKYsNo law https://t.co/bag5BaCCoW or if you don’t like reading longform a lot of my talks are on YouTube.
@chophshiy People can assign moral agency to anything they like, but doing so to anything other than other people is unstable. Cf my patiency paper. https://t.co/kuhJOKYsNo also/therefore even if we could build things that deserved moral consideration, we
@markcannon5 The definitions you choose have to do with the communication you’re trying to achieve. We can already do most of those things but not EXACTLY like a human without human phenomenological (body based) experience. See https://t.co/kuhJOKYsNo and
RT @j2bryson: @dchackethal See, this is also not the “AI that can do what humans can” definition people are claiming they follow. If you li…
@dchackethal See, this is also not the “AI that can do what humans can” definition people are claiming they follow. If you like philosophy, maybe you’ll like this: https://t.co/kuhJOKGRoO
@mgubrud @j2blather @evansd66 @hoven_j Many people want something maybe it would be bad to have. I used to want that. https://t.co/kuhJOKYsNo
@hairyphil @twimlai Hi, thanks for asking. The best / most complete explanation I've published to date is here: https://t.co/kuhJOKYsNo (I'm also meaning to be working on a book but never find time). If that's too academic there are also a lot of posts on
RT @j2bryson: @pekikimkibu To date I mostly write ethics wrt AI but that’s probably changing. But right now best is probably https://t.co/k…
@pekikimkibu To date I mostly write ethics wrt AI but that’s probably changing. But right now best is probably https://t.co/kuhJOKYsNo
RT @julianharris: The most concise and informative tweet on artificial consciousness ever. https://t.co/MgQVwKBbOP
Ask yourself: 1. Is an insect conscious? 2. Is a human conscious? If you answer yes to 2 (which you will if "conscious" has any meaning at all) but say no to 1 then you've drawn a line, but where does that line lie? Spider? Lizard? Cow? Cat? Monkey? Ape