AI applications are everywhere and introduce new risks, including the manipulation of behavior. How can their uses be controlled? pixabay, CC BY
We have all begun to realize that the rapid development of AI is really going to change the world we live in. AI is no longer just a branch of computer science: it has escaped the research labs with the development of "AI systems," that is, "software that, for human-defined objectives, generates content, predictions, recommendations, or decisions that influence the environments with which it interacts" (European Union definition).
The governance of these AI systems, with all its nuances of ethics, control, soft regulation and binding legislation, has become crucial, since their development today is in the hands of a few digital empires, the Gafa-Natu-Batx (from Google and Apple to Baidu, Alibaba, Tencent and Xiaomi), who have become the masters of real social choices about the automation and "rationalization" of the world.
The complex fabric woven between AI, ethics, and law is thus built on power relations, and collusion, between states and tech giants. But citizen engagement is necessary to assert imperatives other than a technological solutionism in which "everything that can be connected will be connected and made faster."
An ethics of AI? Fundamental principles at a dead end
Of course, the three great ethical principles allow us to understand how a genuine bioethics has been built since Hippocrates: the personal virtue of "critical prudence"; the rationality of rules that must be universalizable; and the evaluation of the consequences of our actions with respect to the general happiness.
For AI systems, these fundamental principles have also been the basis of hundreds of ethics committees: the Holberton–Turing Oath, the Montreal Declaration, the Toronto Declaration, the Unesco programme… and even Facebook! But these AI ethics charters have never yet given rise to any sanction mechanism, or even to the slightest disapproval.
On the one hand, the race for digital innovation is essential to capitalism as a way of overcoming the contradictions in the accumulation of profit, and essential to states for developing algorithmic governmentality and an unprecedented form of social control.
But on the other hand, AI systems are always both remedy and poison (a pharmakon in the sense of Bernard Stiegler) and thus continually create singular ethical situations that cannot be settled by principles alone but require "complex thinking," a dialogic in the sense of Edgar Morin, as shown in the analysis of the ethical conflicts around the French health-data platform Health Data Hub.
An AI law? A construction between soft regulation and binding legislation
Even if the great ethical principles will never be directly operational, it is from their critical discussion that AI law can emerge. The law faces particular obstacles here: the scientific instability of the definition of AI, the offshore nature of digital activity, and the speed with which platforms develop new services.
In the development of AI law, two parallel movements can be seen. On the one hand, soft regulation through simple directives or recommendations, allowing a progressive legal integration of rules (from technology toward law, as with cybersecurity certification). On the other hand, regulation proper, through binding legislation (from positive law toward technology, such as the GDPR on personal data).
Power relations… and complicity
Personal data is often described as a coveted new "black gold," because AI systems crucially need big data to drive their statistical learning.
In 2018, the GDPR became a true European regulation of this data, one that was able to draw strength from two major scandals: the NSA's Prism surveillance program, and the hijacking of Facebook data by Cambridge Analytica. The GDPR even allowed the activist lawyer Max Schrems, in 2020, to have the Court of Justice of the European Union invalidate transfers of personal data to the United States. But signs of complicity between states and digital giants remain numerous: Joe Biden and Ursula von der Leyen keep rearranging these disputed data transfers under a new framework.
The Gafa-Natu-Batx monopolies are currently driving the development of AI systems: they control possible futures through "prediction machines" and the management of attention, impose the complementarity of their services, and will soon impose the integration of their systems into the Internet of Things. States are reacting to this concentration.
In the United States, a lawsuit seeking to force Facebook to sell Instagram and WhatsApp will open in 2023, and an amendment to antitrust legislation will be put to a vote.
In Europe, from 2024, the regulation on digital markets, the DMA (Digital Markets Act), will regulate acquisitions and prohibit the large "gatekeepers" from self-preferencing or bundling offers among their various services. As for the regulation on digital services, the DSA (Digital Services Act), it will force the "very large platforms" to be transparent about their algorithms, to deal swiftly with illegal content, and to ban targeted advertising based on sensitive characteristics.
But the collusion remains strong, because each side also protects "its" giants by brandishing the Chinese threat. Thus, under threat from the Trump administration, the French government suspended collection of its "Gafa tax," even though parliament had voted it in 2019, and tax negotiations continue within the framework of the OECD.
A new and original European regulation on the specific risks of AI systems
Spectacular advances in pattern recognition (whether of images, texts, voices or locations) are creating prediction systems that present growing risks to health, safety and fundamental rights: manipulation, discrimination, social control, autonomous weapons… After the Chinese regulation on the transparency of recommendation algorithms in March 2022, the adoption of the AI Act, the European regulation on artificial intelligence, will mark a new step in 2023.
This original piece of legislation is based on the risk level of AI systems, in a pyramid approach similar to that used for nuclear risks: unacceptable risk, high risk, limited risk, minimal risk. Each risk level is associated with prohibitions, obligations or requirements, which are specified in the annexes and are still the subject of negotiations between the Parliament and the Commission. Compliance and sanctions will be overseen by the competent national authorities and a European Artificial Intelligence Board.
Citizen participation for an AI law
To those who consider the involvement of citizens in the construction of an AI law to be utopian, we can first recall the strategy of a movement such as Amnesty International: advancing international law (treaties, conventions, regulations, human-rights courts) and then invoking it in concrete situations, such as the Pegasus spyware affair or the campaign to ban autonomous weapons.
Another successful example is the noyb movement ("none of your business"): advancing European law (GDPR, Court of Justice of the European Union, etc.) by filing hundreds of complaints each year against privacy-violating practices of digital companies.
All these citizen groups working to build and invoke AI law take very diverse forms and approaches: from the European consumer associations filing a joint complaint against Google's account management, to the saboteurs of 5G antennas who reject the total digitization of the world, by way of the residents of Toronto who thwarted Google's grand smart-city project, or the activist doctors promoting free software to protect health data…
This assertion of different ethical imperatives, at once opposed and complementary, corresponds well to the ethics of "complex thinking" proposed by Edgar Morin, which accepts resistance and disruption as inherent to change.
The original version of this article was published in The Conversation, a news site dedicated to the exchange of ideas between academic experts and the general public.