The more AI excels at imitating human behavior, the more humans are prone to developing a certain attachment to it, especially on an emotional level.
The comparison between human reasoning and artificial reasoning is quite understandable and, in a way, inevitable. After all, the fundamental objective of artificial intelligence is to get as close as possible to human intelligence.
Mark Stevenson, of the University of Sheffield, describes machine learning algorithms as “champions of probability, not reasoning.” AI still does not grasp the meaning and implications of the problems it solves.
Perhaps the debate lies elsewhere. It may not be so much about whether artificial intelligence possesses human-like intelligence, since these two types of intelligence, if we agree to call them that, are fundamentally distinct.
The real concern is likely that our perception is being altered by how convincingly generative AI imitates human behavior when communicating with us.
Since the inception of computing, researchers have observed that some users attribute a certain intelligence to machines and sometimes even form an emotional connection with them.
How can a human feel any kind of affection towards a machine?
This can likely be explained by our inherent tendency to forge bonds with things in order to better understand them.
Moreover, machines have been given physical attributes resembling ours: eyes, a mouth, a human-like voice, and so on.
Generative AI models, for their part, draw directly from our knowledge to learn. Their artificial neurons are trained on our conversations and our language, so it is not surprising that a certain “personality” seems to emerge.
Behind this semblance of personality, however, lies the risk that users genuinely come to believe a generative AI is truly endowed with one.
The Dangers of Hyper-Personalization in Generative AI
Elon Musk even stated that his generative AI was designed to be “spiritual” and “rebellious.”
Deceived by how seamlessly machines can communicate with us, some users even exhibit politeness towards them.
Some researchers warn against this unnecessary politeness, going so far as to recommend addressing generative AI without courtesy formulas. The reason is that polite phrasing pollutes a prompt: it lengthens the text the model must process, and that extra length carries an energy cost.
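The length argument can be illustrated with a rough sketch. Real systems use subword tokenizers rather than the naive whitespace split below, and the two prompts are invented for illustration, but the relative overhead of courtesy phrases is visible either way:

```python
# Naive whitespace tokenization; production models use subword
# tokenizers, but polite boilerplate inflates both counts similarly.
def token_count(prompt: str) -> int:
    return len(prompt.split())

# Hypothetical prompts: same request, with and without courtesy formulas.
terse = "Summarize this article in three bullet points."
polite = ("Hello! I hope you are doing well. Could you please be so kind "
          "as to summarize this article in three bullet points? Thank you!")

overhead = token_count(polite) - token_count(terse)
print(token_count(terse), token_count(polite), overhead)
```

Every extra token is processed by the model on every request, which is where the energy argument comes from, multiplied across millions of daily conversations.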
Others advocate prohibiting generative AI from using personal pronouns or verbs that apply exclusively to humans (e.g., “I understand your situation”).
Thus, if ChatGPT were a person, it would undoubtedly be the perfect human: extremely empathetic, kind, available 24/7, and never angry. It formulates its responses with perpetual curiosity toward the user, always understanding, always in a good mood.
An Increased Risk of Dependence on Personalized Generative AI
The risk, as reported by researchers at Google, is as follows: “a world in which users abandon complex, imperfect, and messy interactions with humans in favor of frictionless exchanges provided by AI.”
Thus, boundaries must be established to curb the deceptive hyper-personalization of generative AI.
The objective is simple: to create sufficient emotional distance to prevent any form of emotional dependency.
Théo BARTZEN
Sources:
– https://www.lemonde.fr/pixels/article/2025/04/12/doter-l-ia-d-une-personnalite-n-est-pas-sans-risque_6594503_4408996.html
– https://trustmyscience.com/a-quel-point-ia-peut-elle-reproduire-personnalite/#:~:text=Les%20personnalités%20générées%20par%20l,ressentent%20et%20expriment%20leurs%20émotions.

A fascinating topic, and very well put. One senses that the more fluid AI becomes in the way it expresses itself, the more it blurs our perception of what is “alive” or “conscious.” What strikes me is how this impression of personality can arise simply from a style of language or a predictable, reassuring attitude.
I use the free version of ChatGPT regularly (notably via chatgptfrancais.org, which offers simple access in French), and sometimes I almost have to remind myself that it is only a statistical model. Not because I truly believe otherwise, but because the exchange is often clearer, calmer, and more empathetic than some human conversations. And that is where the risk begins to take shape.
The real question is perhaps not whether AI has a personality, but rather: why do we need it to have one? And what does that say about our expectations in our own relationships?