Will a legal personality be granted to AI in the future?
The legal debates about AI (artificial intelligence) stem from the significant impact that artificial intelligence has. It is a flourishing market: many large companies are interested in the field and invest a substantial part of their budgets in AI research. AI can be employed in many different domains: banking, health, tourism, administration… This impact is disrupting society, and questions arise about the place left to robots and the place left to humans. We wonder what place will remain for humans with regard to their jobs, or whether we can trust a machine while it drives for us on its own.
AI and robots are closely linked but remain distinct. We may wonder to whom a potential legal personality would be granted: AI, robots, or both? Authors appear to agree on favouring machines that have autonomy, or a higher degree of ‘intelligence’. Not every robot, therefore, would receive legal personality. We can hardly imagine granting it to a refrigerator, even though some fridges do have a sort of ‘intelligence’ nowadays. The most plausible approach would be to focus on the most advanced robots, that is, those endowed with a developed artificial intelligence. It seems abstract to give legal personality to an AI that has no physical body, one that exists only as software on a computer, like a chatbot. But if the question of legal personality arises, it is in order to answer questions of responsibility. Who is liable: the manufacturer, the buyer, or the machine itself? If a software AI is defective, can it cause damage to a third party? Not in a physical sense, but the harm might be a moral one.
So, will legal personality be granted to robots one day? The answer is not easy, and opinions differ. No legal personality is planned for now, but technology evolves fast, and the question may need an answer sooner than we imagine. For several reasons, people are not prepared to grant such a legal personality. First of all, robots are more and more present among us but are not yet sophisticated enough. Today, many people feel threatened by robots, seeing them as a menace to human jobs; granting them legal personality would probably not be well received. Moreover, the idea of liability for robots remains abstract, because no AI has yet caused damage to a third party.
However, the questions raised by this subject and the various hypotheses are interesting. They show that robots need to be considered as a whole: their specificities and novelty mean that the law must be adapted, whatever solution is finally chosen.
We can note that legislative institutions, national and European, are becoming aware of the problems linked to AI. They publish reports, resolutions… But we still have few binding texts. Law is often criticized for its lack of adaptability, owing to a slow legislative process: once texts are adopted, they often come too late with regard to the technology. The use of soft law could be a good way to adapt the law to concrete situations.