The Moral Psychology of Artificial Intelligence
Annual Review of Psychology (IF 24.8), Pub Date: 2023-09-19, DOI: 10.1146/annurev-psych-030123-113559
Jean-François Bonnefon, Iyad Rahwan, Azim Shariff

Moral psychology was shaped around three categories of agents and patients: humans, other animals, and supernatural beings. Rapid progress in artificial intelligence has introduced a fourth category for our moral psychology to deal with: intelligent machines. Machines can perform as moral agents, making decisions that affect the outcomes of human patients or solving moral dilemmas without human supervision. Machines can be perceived as moral patients, whose outcomes can be affected by human decisions, with important consequences for human–machine cooperation. Machines can be moral proxies that human agents and patients send as their delegates to moral interactions or use as a disguise in these interactions. Here we review the experimental literature on machines as moral agents, moral patients, and moral proxies, with a focus on recent findings and the open questions that they suggest.
