On the morality of artificial agents
(1 Jan 2011) Moral situations commonly involve agents and patients. Let us define the class A of moral agents as the class of all entities that can in principle qualify as sources …

(1 Aug 2024) It is based on the observation that moral agency is not fixed through elaborate definitions or sophisticated ethical conceptions, but is rooted in the moral …
(11 Apr 2024) The Moralities of Intelligent Machines research group, headed by Michael Laakasuo, investigates people's moral views on imaginary rescue situations in which the rescuer is either a human or a robot specifically designed for the task. The rescuer has to decide whether to save, for example, one innocent victim of a boating accident or two …

(2 Jul 2024) Several approaches to the notion of an "artificial moral agent" in a general sense (i.e., levels 3 and 4 of [18]) are criticized as philosophically illegitimate (e.g., [12, 28]). They can …
Authors: Luciano Floridi, J. W. Sanders. Abstract:

… artificial intelligence (AI) with theory-of-mind (ToM)-like abilities. Virtual and physical AI agents would be better and safer if they could impute unobservable mental states to others. The safety of self-driving cars, for example, would greatly increase if they could anticipate the intentions of pedestrians and human drivers.
(28 Oct 2024) Artificial Morality is a sub-discipline of AI that explores whether and how artificial systems can be furnished with moral capacities.¹ Its goal is to develop artificial moral agents which can take moral decisions and act on them. Artificial moral agents in this sense can be physically embodied robots as well as software agents or …

Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations, for they can be conceived of as moral patients (as entities that can be acted …
Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions, i.e., whether and to what extent artificial entities can have a moral status ("the machine question") and moral agency ("the AMA question"), in light of Kazuo Ishiguro's 2021 novel Klara and the Sun.
(21 Sep 2024) This distinction between agency and responsibility is important. It turns out that much of the opposition to creating an ethical robot stems from the perceived link between agency and …

An agent is morally good if its actions all respect that threshold; it is morally evil if some action violates it. That view is particularly informative when the agent constitutes a …

AIxIA 2024 – Advances in Artificial Intelligence: … introducing the novel concept of the morality level of an agent and moving towards multi-goal, … Lorini, E., "A logic for reasoning about moral agents", Logique et Analyse 58 (230), 2015 …

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs): robots or artificially intelligent computers that behave morally, or as though moral. To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of …

(5 Jun 2009) The result is a sketch of an empirically informed anthropocentric ethics that aims at understanding and evaluating what robots do to humans as social and emotional beings in virtue of their appearance, in particular how they may contribute to human good and human flourishing.

(31 Dec 2024) This Special Issue aims to provide a forum for exploring the potential interplay between AI and the dynamics of human collective behavior, such as cooperation, coordination, trust, and fairness; in particular, the different ways that the advancement of AI might alter the dynamics of human collective behavior, and vice versa.

(1 Sep 2005) A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial …
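The threshold view of moral goodness quoted above ("an agent is morally good if its actions all respect that threshold; it is morally evil if some action violates it") can be caricatured in a few lines of code. This is only an illustrative sketch: the `Action` type, the scalar `harm` measure, and the function names are my assumptions for the example, not anything proposed in the excerpted papers.

```python
# Illustrative sketch (hypothetical names) of a threshold reading of moral
# evaluation: an agent is "morally good" iff every logged action stays within
# a tolerated harm threshold, and "morally evil" once any action exceeds it.
from dataclasses import dataclass


@dataclass
class Action:
    description: str
    harm: float  # assumed scalar measure of evil caused; 0.0 = harmless


def is_morally_good(actions: list[Action], threshold: float) -> bool:
    """True iff no action in the log violates the tolerated harm threshold."""
    return all(a.harm <= threshold for a in actions)


log = [Action("warn pedestrian", 0.0), Action("brake abruptly", 0.2)]
print(is_morally_good(log, threshold=0.5))                            # True
print(is_morally_good(log + [Action("collide", 0.9)], threshold=0.5))  # False
```

Note the asymmetry the quoted definition builds in: goodness is a universal claim over all actions, so a single violating action flips the verdict, exactly as `all(...)` does here.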