Algorithmic video surveillance: issues and regulations

For the Olympic Games, France deployed an advanced technology in public spaces to enhance security: algorithmic video surveillance. Article 10 of the law on the 2024 Olympic and Paralympic Games (the « JO 2024 » law) authorised this experiment until March 2025.

Cameras capture footage, which is then analysed by algorithms. All behaviours are monitored, and if any unusual activity is detected, a human operator reviews the footage to make a final decision. 
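
To illustrate this two-step design (automatic detection first, human decision second), here is a minimal Python sketch. The event labels, confidence scores, and alert threshold are hypothetical placeholders, not those of any system actually deployed in France.

    # Minimal sketch of the detection-then-human-review loop described above.
    # All labels, scores, and thresholds are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Detection:
        event: str         # e.g. "abandoned_object", "crowd_movement"
        confidence: float  # score produced by the video-analysis model
        frame_id: int      # frame in which the event was detected

    ALERT_THRESHOLD = 0.8  # assumed operating point

    def review_queue(detections: list[Detection]) -> list[Detection]:
        """Keep only high-confidence events; a human operator makes the final call."""
        return [d for d in detections if d.confidence >= ALERT_THRESHOLD]

    stream = [
        Detection("abandoned_object", 0.91, 1042),
        Detection("jaywalking", 0.55, 1047),  # below threshold: never shown to the operator
    ]
    for alert in review_queue(stream):
        print(f"Frame {alert.frame_id}: {alert.event} -> sent to a human operator")

The point of the sketch is that the software only ranks and filters events; the decision to intervene stays with a person.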

This technology raises questions about privacy, discrimination, and legality.

 

Algorithmic video surveillance, what are we talking about? 

The CNIL defines these smart cameras as « video devices with associated algorithmic processing implemented by software that allows automatic, real-time, and continuous analysis of the images captured by the camera ».

Algorithmic video surveillance is capable of detecting an accident or suspicious behaviour, and even simple offences such as jaywalking.

This useful technology is taking root in cities that increasingly embrace the smart city concept. In France, at least 50 cities use algorithmic video surveillance (VSA). However, some of these cities remain opaque about the processes they have implemented.

For example, Two-I, a French start-up, has created a « hypervision platform » that maps all the incidents identified within a city so that authorities can act more quickly and effectively.

 

On what scale? 

In 2019, the Carnegie Endowment for International Peace published a report showing that at least 75 states use artificial intelligence for surveillance.

China is one of the heaviest users of this technology and the world's main supplier of it, with the United States in second place among suppliers.

The use of this technology seems to vary with the type of government. Liberal democracies appear to use it even more than autocratic states. One explanation is that democratic states are able to limit its use to a specific aim, such as strengthening security for special events, whereas autocratic states can use AI surveillance to serve their own interests.

 

What questions?

This use raises questions.

This technology can violate the right to privacy. If its use became widespread, people's habits would be scrutinised and recorded, and the government would have precise knowledge of the population's every move.

AI surveillance also raises the issue of discrimination. Mistakes and misinterpretations can call the legitimacy of the technology into question. How do you define dangerous behaviour? A person who walks differently because of a medical condition may be flagged as dangerous simply because their movements differ from those of someone without that condition.

Furthermore, this raises the issue of how the algorithms are trained: the data used to train the AI must be free from bias and prejudice.
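
To make the concern concrete, here is a hedged sketch of the kind of disparity check an operator could run on a hypothetical evaluation log; the groups, records, and rates below are invented purely for illustration.

    # Illustrative bias check on an invented evaluation log that records, for each
    # person observed, a (hypothetical) mobility group and whether the system
    # flagged them. Real audits are far more involved; this only shows the
    # disparate-rate comparison discussed above.
    from collections import defaultdict

    evaluation_log = [
        {"group": "typical_gait", "flagged": False},
        {"group": "typical_gait", "flagged": False},
        {"group": "atypical_gait", "flagged": True},
        {"group": "atypical_gait", "flagged": False},
    ]

    def flag_rate_by_group(log):
        counts = defaultdict(lambda: [0, 0])  # group -> [flagged, total]
        for record in log:
            counts[record["group"]][0] += record["flagged"]
            counts[record["group"]][1] += 1
        return {group: flagged / total for group, (flagged, total) in counts.items()}

    print(flag_rate_by_group(evaluation_log))
    # {'typical_gait': 0.0, 'atypical_gait': 0.5}: a large gap between groups is
    # exactly the kind of bias the training data and thresholds must be checked for.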

 

What regulation? 

In France, the Internal Security Code does not specifically regulate the implementation of smart video surveillance. However, Article 2 of Decree n° 2023-828, dated August 28, 2023, permits the use of AI-enhanced cameras to detect eight specific anomalies:

  • « Presence of abandoned objects.
  • Presence or use of weapons, as defined by Article R. 311-2 of the Internal Security Code.
  • Non-compliance with traffic direction by a person or vehicle.
  • Entry or presence of a person or vehicle in restricted or sensitive areas.
  • Detection of a person lying on the ground due to a fall.
  • Crowd movements.
  • Excessive crowd density.
  • Detection of fires. »
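
As an illustration of how a deployer might confine a system to this closed list, here is a hypothetical configuration sketch in Python; the identifiers are illustrative shorthand, not official categories from the decree.

    # Hypothetical configuration guard: only the eight event categories authorised
    # by Article 2 of Decree n° 2023-828 may be enabled.
    from enum import Enum

    class AuthorisedEvent(Enum):
        ABANDONED_OBJECT = "presence of abandoned objects"
        WEAPON = "presence or use of weapons"
        WRONG_WAY = "non-compliance with traffic direction"
        RESTRICTED_AREA = "entry or presence in a restricted or sensitive area"
        PERSON_ON_GROUND = "person lying on the ground due to a fall"
        CROWD_MOVEMENT = "crowd movements"
        CROWD_DENSITY = "excessive crowd density"
        FIRE = "detection of fires"

    def validate_configuration(enabled_events: list[str]) -> None:
        """Reject any detection category outside the decree's closed list."""
        allowed = {event.name for event in AuthorisedEvent}
        unknown = [name for name in enabled_events if name not in allowed]
        if unknown:
            raise ValueError(f"Not authorised by the decree: {unknown}")

    validate_configuration(["FIRE", "CROWD_DENSITY"])   # passes silently
    # validate_configuration(["FACIAL_RECOGNITION"])    # would raise ValueError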

The analysis of behaviours involves personal data, making it subject to data protection laws, including the General Data Protection Regulation (GDPR) and the French Data Protection Act. Any processing of such data must strictly adhere to these regulations.

At the EU level, the AI Act emphasises the regulation of high-risk AI systems. Recital 95 and Article 26 provide that post-remote biometric identification systems must be deployed in a manner that is proportionate, legitimate, and necessary: their use should be limited to targeted scenarios, based on lawfully obtained data, and confined to specific temporal and spatial contexts.

While these frameworks aim to regulate AI-based surveillance, they do not impose an outright ban, granting states significant flexibility in their implementation strategies.

 

In conclusion, algorithmic video surveillance is a powerful tool for the authorities, and one whose use is becoming increasingly well regulated. But a flaw lies in the difficulty of articulating the different pieces of legislation in force.

At the same time, this technology undermines people's privacy and freedom.

 

