Intelligent Surveillance Systems

RESIST: Adversarial Examples

In times of societal crisis (health, security, etc.), the need for security among individuals and societies tends to grow. One of the emerging technologies that can help territories respond with agility to abnormal events is intelligent video surveillance. This technology deploys machine-learning and deep-learning algorithms to automate complex tasks such as person detection, action recognition and the tracking of individuals. Although these systems have demonstrated strong capabilities on large-scale problems, their widespread adoption remains limited.

The technological challenge of "Robustness"

AI systems, and deep-learning systems in particular, have recently been shown to be vulnerable to a type of attack known as "adversarial examples". These attacks can be implemented via a "patch" printed and carried by the attacker, which can completely disrupt the target system's predictions, thus compromising its integrity and effectiveness. In automatic surveillance systems, smart cameras are particularly exposed to these attacks. Given how critical these systems can be, ensuring their robustness is a major technological challenge. Existing defenses remain insufficient, especially in real-world settings, and no existing robustness technique takes the multi-view component into account.
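The core mechanism can be illustrated with a minimal sketch. The model, weights and values below are all hypothetical toy constructions, not a real detector or any method from the project: a linear "detector" is fooled by perturbing only a small region of the input (the "patch"), stepping against the gradient of the detection score, which for a linear model is simply the weight vector.

```python
import numpy as np

# Toy linear "detector": score > 0 means "person detected".
# All weights and inputs are illustrative, not from any real system.
rng = np.random.default_rng(0)
w = rng.normal(size=100)
x = 0.05 * w  # an input the detector confidently flags as a person

def score(v):
    # Detection score of the toy linear model.
    return float(w @ v)

# Patch-style attack: perturb only a small region of the input
# (here, the first 40 of 100 features), but with a large magnitude,
# stepping against the gradient of the score (which is w for a linear model).
patch = np.zeros(100, dtype=bool)
patch[:40] = True
eps = 0.4
x_adv = x.copy()
x_adv[patch] -= eps * np.sign(w[patch])

detected_before = score(x) > 0   # True: person detected
detected_after = score(x_adv) > 0  # False: detection suppressed
```

The key property mirrored here is that the perturbation is spatially limited (as a printed patch is) but large in magnitude, which is what makes such attacks physically realizable.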

The philosophical and legal challenge of "Ethics"

Deploying AI-based systems to secure public spaces can create an ethical dilemma that calls for legal solutions. Indeed, these systems can recognize people, recognize actions, track specific individuals, and so on. The challenge is to find ways to combine the security gains this technology enables with the guarantee of fundamental freedoms, starting with respect for each individual's personal data, the principle of non-discrimination, and the right of every citizen to participate freely in democratic life as it unfolds in public spaces (freedom of demonstration).

Responsible and trusted AI

Guided by the goal of achieving responsible and trusted AI, the RESIST project proposes to investigate the robustness and ethics of AI-based video surveillance systems by having researchers in the hard sciences and the social sciences work together, from a polytechnic perspective.

The social sciences component: the "Ethics" program to think about intelligent video surveillance in particular and digital ethics in general

The introduction of a new technology is never neutral. An investigation will be conducted into the risks of bias that intelligent systems can induce, as well as the risks these systems pose to the privacy of individuals, to their right not to be discriminated against, and to their informational self-determination. This study, combining technological, legal and ethical issues, will lead to methodological, regulatory and technical recommendations for developing responsible AI. Several questions will be raised in ethical and legal terms: what societal abuses is this type of biometric surveillance likely to generate? Does intelligent video surveillance not entail major legal risks? What guarantees, for example, that intelligent systems are not biased and do not generate discrimination (especially against visible minorities)? Moreover, to what extent are they compatible with privacy? Shouldn't the consent of individuals be required to film them and use their data? Should we accept the trivialization of this technology, authorize it only exceptionally, or prohibit it altogether? Once it is authorized, how can the companies that use it be legally protected? Similarly, what means of recourse should be given to the "video-surveilled"?

What are the legal provisions for intelligent video surveillance?

For the moment, what national, European and international legal provisions frame intelligent video surveillance? Are they effective and sufficient to guarantee both the respect of fundamental rights and public order?

The aim will be to catalogue both the legal norms that frame intelligent video surveillance and their flaws. Once these flaws are identified, the necessary improvements can be envisaged.


What legal recommendations could be made to ensure that smart video surveillance fits within the philosophy of responsible AI? Shouldn't citizens systematically be consulted before the marketing of this technological innovation is authorized? In a word, what values, norms, institutions, procedures and control mechanisms should be invented to reconcile the technological and economic performance of intelligent video surveillance with business ethics and democracy?

Above all, the aim is to identify precisely where in these technologies the law should intervene through binding constraints, and where, on the contrary, framing the actors through ethical commitment could suffice. The project can thus serve as a basis for addressing the broader question of the limits of ethics as a means of framing new technologies: not to exclude it, but to give it its rightful place.

This "Ethics" part of the program will be led by Matthieu Caron, lecturer in public law at the Polytechnic University of Hauts-de-France and executive director of The Observatory of Public Ethics. It will result in the production of two books as well as the publication of a white paper of proposals to perfect digital ethics in France.
