The ethics of robots
Today, robots raise many questions, fears, and hopes. Not only experts but also the general public wonder about the problems of robotics. However, there has always been a fundamental problem with robots: their ethical issues. People tend to think about the possible societal consequences of current developments, and researchers have worked non-stop to recreate some of the worlds portrayed in science fiction. It is therefore important to explore the implications of this technology in our world. We are concerned about it, and with good reason, as we know that theory and practice do not always match. In this article, we will analyse some of the ideas proposed to deal with these issues. If you want to know more, keep reading!
AI, robots and moral dilemmas
Both philosophers and scientists have already discussed this at length. We need to know exactly how a robot will behave; if we do not control all the variables, it can even become dangerous. So what is the best way to teach a robot? Some people say that ethics is the answer: they propose building principles into the machine so that it works according to them. However, the intelligence of a robot is an AI, an Artificial Intelligence. Its main characteristic is its learning process, and we may not be able to predict it. Scientists say that the key is the code: robots will do as they are told, but they can also learn, so it is important to control the way they act through their code.
If people develop robots with instructions and limits, it will be easier to understand their behaviour. And when we talk about behaviour, we are referring to the robot's moral and ethical control of itself. Following this line, experts have over time created a series of principles whose main objective is to control the behaviour of a robot with internal laws and rules. However, it is more complicated than it seems. We, as humans, do not always know how we ourselves should act. This is what we call a “dilemma”, and the development of AI and robots is a dilemma in itself.
Asimov’s three Laws of Robotics
In 1950, the famous writer Isaac Asimov published one of the most widely followed sets of principles related to robotics. This author studied in depth the dilemmas of the development and learning of an AI. With that understanding, he created three laws governing the behaviour of robots. Although all of this is pure fiction, experts have been drawing on it in real life.
First law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second law: A robot must obey the orders given it by human beings except where such orders would conflict with the First law.
Third law: A robot must protect its own existence as long as such protection does not conflict with the First or Second law.
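The three laws form a strict priority ordering: the First Law overrides the Second, and the Second overrides the Third. As a rough illustration of that ordering, it can be sketched as a simple rule check. This is only a toy model, not a real ethics engine; all the names and fields below (`CandidateAction`, `permitted`, and so on) are invented for this example, and reducing harm, obedience, and self-preservation to booleans is a huge simplification.

```python
# A minimal sketch of Asimov's Three Laws as a strict priority check.
# All names here are hypothetical, invented for illustration only;
# real robot ethics cannot be reduced to a boolean filter like this.
from dataclasses import dataclass

@dataclass
class CandidateAction:
    name: str
    harms_human: bool           # would the action injure a human?
    inaction_harms_human: bool  # would NOT acting let a human come to harm?
    ordered_by_human: bool      # was the action ordered by a human?
    endangers_self: bool        # does the action risk the robot itself?

def permitted(action: CandidateAction) -> bool:
    # First Law: never harm a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # the First Law overrides everything below
    # Second Law: obey human orders (already known not to break the First).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, only if the higher laws are silent.
    return not action.endangers_self

rescue = CandidateAction("pull human from fire", False, True, False, True)
obey   = CandidateAction("fetch coffee", False, False, True, False)
attack = CandidateAction("push a human", True, False, True, False)
print(permitted(rescue), permitted(obey), permitted(attack))  # True True False
```

Note how the rescue action is permitted even though it endangers the robot: the First Law's inaction clause outranks the Third Law, which is exactly the kind of conflict Asimov's stories explore.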
Montréal Declaration for Responsible AI draft principles
Using Asimov’s laws as a base, different groups have designed their own principles. In recent years, with the development of AI, the need for such principles has increased. One of the latest efforts has extended and improved Asimov’s laws through the following principles, based on the behaviour of an AI, of a robot.
Autonomy: The development of AI should promote the autonomy of all human beings and control, in a responsible way, the autonomy of computer systems.
Justice: The development of AI should promote justice and seek to eliminate all types of discrimination, notably those linked to gender, age, mental/physical abilities, sexual orientation, ethnic/social origins and religious beliefs.
Privacy: The development of AI should offer guarantees that personal privacy will be respected and allow people who use it to access their personal data as well as the kind of information that any algorithm might use.
Knowledge: The development of AI should promote critical thinking and protect us from propaganda and manipulation.
Democracy: The development of AI should promote informed participation in public life, cooperation and democratic debate.
Responsibility: The various players in the development of AI should assume their responsibility by working against the risks arising from their technological innovations.
Artificial Intelligence and robots can be the perfect companions in our lives. They will help and take care of us in every possible way. However, we need to predict and control them in order to make them safe. If you also want to introduce yourself to this new world, join us in our Master in Artificial Intelligence and Deep Learning. You will learn not only the theory of robotics ethics, but also how to put it into practice. Contact us and we will send you all the information!