
Can connected machines learn to behave ethically?

Publication of Kenniscentrum Creating 010
A.F. van Lier | Book chapter | Publication date: 15 January 2016
Over the past few years, the rapid development of artificial intelligence, the huge volume of data available in the cloud, and the increasing capacity of machines and software to learn have prompted an ever more widespread debate on the social consequences of these developments. Autonomous cars and autonomous weapon systems that operate on self-learning software without human intervention are deemed capable of making life-and-death decisions; this raises broader questions about whether we as human beings will be able to control this kind of intelligence, autonomy, and machine interconnection. According to Basl, these developments mean that ‘ethical cognition itself must be taken as a subject matter of engineering.’ At present, contemporary forms of artificial intelligence, or in the words of Barrat, ‘the ability to solve problems, learn, and take effective, humanlike actions, in a variety of environments,’2 do not yet possess an autonomous moral status or the ability to reason. At the same time, it is still unclear which basic features could be exploited to shape an autonomous moral status for these intelligent systems. For learning and intelligent machines to develop ethical cognition, feedback loops would have to be inserted between the autonomous and intelligent systems. Feedback may help these machines learn behaviour that fits within an ethical framework that is yet to be developed.
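The feedback-loop idea in the abstract can be illustrated with a minimal sketch. Everything below is a hypothetical assumption for illustration, not the publication's method: the agent, the `ethical_monitor` stand-in for human or institutional judgment, and the simple multiplicative reinforcement scheme are all invented here.

```python
# Minimal sketch of an ethical feedback loop (illustrative assumption,
# not the method described in the publication).

class FeedbackLearner:
    """Agent that adjusts its action preferences from external ethical feedback."""

    def __init__(self, actions):
        # Start with equal preference weights for all actions.
        self.weights = {a: 1.0 for a in actions}

    def choose(self):
        # Greedily pick the currently highest-weighted action.
        return max(self.weights, key=self.weights.get)

    def feedback(self, action, approved):
        # The feedback loop: approval reinforces an action,
        # disapproval suppresses it.
        self.weights[action] *= 1.5 if approved else 0.5


def ethical_monitor(action):
    # Hypothetical stand-in for a human/institutional ethical judgment:
    # any action except "harm" is approved.
    return action != "harm"


agent = FeedbackLearner(["assist", "harm"])
agent.weights["harm"] = 2.0  # initially prefers the unacceptable action

# Closing the loop between the agent and the ethical evaluator:
for _ in range(10):
    action = agent.choose()
    agent.feedback(action, ethical_monitor(action))

print(agent.choose())  # prints "assist": feedback has suppressed "harm"
```

The point of the sketch is only structural: the ethical judgment sits outside the agent, and repeated feedback gradually steers behaviour toward the (here trivially encoded) ethical framework, which mirrors the abstract's claim that such a framework is still to be developed.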

Author(s) - affiliated with Hogeschool Rotterdam

