Criticised by some and lauded by others, artificial intelligence (AI) now has an ever-increasing presence in our daily lives that can no longer be ignored. While some applications remain under lock and key in heavily guarded laboratories, it is nonetheless clear that AI and robotics are evolving very rapidly.
Aside from the question of how members of the European and Member State legislatures should view the development of AI and robotics, a fundamental debate exists surrounding the integration of robots with AI into our societies: should they be granted legal personality? In a nutshell, legal personality is the ability of a person, whether natural or legal, to have rights and obligations. This notably takes the form of the right to enter into contracts, bring legal proceedings, purchase property, etc.
The main argument raised by those in support of granting legal personality to robots (often referred to as "electronic personality" or "robot personality") is that this status will make it possible to hold robots responsible for their acts, thereby resolving liability issues in the event of damage or injury.
However, we believe that granting legal personality to robots is not desirable. Indeed, acknowledging liability on the part of robots would entail granting them the right to hold assets that could serve to compensate those affected by the damage they cause. Taken to the extreme, this would mean that robots could freely use their assets to buy and sell property, running the risk of robots making themselves insolvent.
In addition to the foregoing, granting legal personality to robots mainly poses a problem, in our opinion, in relation to our own liability. Indeed, why would anyone be concerned about the behaviour of a robot if the robot itself could be held liable for its actions and possessed assets allowing it to compensate injured parties? In other words, the risk is that humans will no longer care about robots' choices and will be able to hide behind the actions of robots ("It wasn't me, it was the robot"), actions which may not always be possible to control or understand.
We argue that, in any case, humans must remain liable for the actions of robots. This of course means that the European and Member State legislatures should begin adapting the law to ensure the concrete integration of AI and robotics into our daily lives. This is especially urgent given that many countries have already begun this process, and it is an area in which the Old World (Belgium in particular) cannot afford to fall too far behind.
Liability law should take into account the persons responsible for robots’ actions and include, for instance, provisions to allow a robot to be equipped with a sort of "black box" to monitor its decision-making and the actions/interactions resulting from these decisions. For instance, in the case of a robot using open-source software, the software designer could be held liable for damage caused by problems stemming from the basic algorithms, while the programmer who adapted the open-source software could be held liable for damage caused by defects in the way the program runs. An additional difficulty arises with so-called deep learning robots, which are programmed to learn and take decisions in a continuous, autonomous manner. Their behaviour changes depending on their environment and their interactions with humans or other robots. That being said, it should be possible to hold humans liable even for this type of robot. That way, inventors will be encouraged to do everything in their power to ensure that they are in control of the robot and take responsibility for the damage it could cause.
Finally, we conclude with a brief etymological remark. The word "robot" comes from the Slavic word robota, which means work, drudgery or forced labour. It should be recalled that under Roman law, a slave (servus) had no legal personality and was entirely dependent on his or her master. We should thus welcome these new technologies, which bring with them a number of opportunities, but should ensure that they remain in the service of humans and not the other way around.