Palantir has presented an artificial intelligence project capable of waging war, and everything suggests that this field will undergo a real revolution in the coming years.
Last month, Palantir, the company of businessman Peter Thiel, demonstrated that an artificial intelligence (AI) could very soon go to war: an AI able to process in real time both classified defense data and unclassified data, but above all to respect "ethics" and "legality". It is a show of force by Thiel's company, made with its new offering, the Palantir Artificial Intelligence Platform, dubbed AIP.
The video demo shows a military operator, responsible for monitoring the Eastern European theater, discovering enemy forces massed near a border. The human then asks the AI to help him deploy reconnaissance drones and, above all, to propose tactical responses to what is perceived as enemy aggression. The AI even takes charge of organizing the jamming of enemy communications, estimating enemy capabilities, and suggesting appropriate responses upon discovering an armored element.
Humans unable to process the mass of information
However, a few problems remain to be solved, in particular decoys. In 2019, in the journal Défense nationale, two specialists in the sector, Ève Gani and Mohammed Sijelmassi, asked whether artificial intelligence would now be able to "dispel the fog of war", an expression attributed to the Prussian general Carl von Clausewitz, who described the uncertainty surrounding the information available to participants in military operations.
Already four years ago, the article claimed that "man is no longer able to process the mass of information generated by ever more numerous and efficient sensors" and that "the general staffs, overwhelmed by an unprecedented volume of digitized information, are no longer able to play their role as unifiers of information". Suffice it to say that artificial intelligence was already awaited like a messiah, predestined to "help analyze, exploit, control and protect this extraordinary amount of data".
Ève Gani and Mohammed Sijelmassi, pointing to the anxiety of humans who feared becoming "subjects of the AI that will fight in our place", then affirmed that the key would be collaboration between humans and AI: "Rather than imagining an all-powerful AI that would act in place of humans, we must present AI as an ally that can contribute to the augmentation of human capacities, and accelerate and facilitate decision-making".
A human “in the loop”, but for how long?
For Palantir, moreover, the point is to keep "a human in the loop" to avoid excesses. But for how long? The AI is "trained" through what is called "machine learning", a method that allows it to integrate knowledge, but also ethical choices and values. Once this philosophical question is settled, it seems clear that AI will inevitably take modern warfare upon itself.
For war has become more complex over the decades, and AI makes it possible "to exploit more substantial data (joint, combined, diplomatic, industrial)", write the two experts. The aim, with modern technology, is therefore "to meet the tactical needs of immediacy and precision". AI will also have a role in reporting and in tracking operational capabilities.
The advent of AI also means, once ethical and legal rules are integrated, the arrival of a kind of warfare removed from human considerations. "With a human operator, there is always the possibility, in theory, that a human can exercise compassion, empathy and human judgment, whereas a system trained from data and pre-programmed to do something does not have this possibility", summarizes Anna Nadibaidze, of the Center for War Studies in Denmark.
"The country that leads in the field of AI will dominate the world"
But the expert warns: if countries or private companies are currently developing AI so that it can go to war, these technological advances must be accompanied by regulation. The United Nations has begun to consider the subject, but no progress has been made. "When there is a need to regulate weapons with a high level of technology," notes Paola Gaeta, professor of international law at the IHEID in Switzerland, "the States which hold this technology have no interest in doing so. On the other hand, States more readily find an interest in regulating the weapons of the poorest, such as anti-personnel mines."
Behind this message, there is reason to worry about the deepening of a North-South divide. Vladimir Putin, the Russian president, warned six years ago that "the country that becomes the leader in the field of artificial intelligence will dominate the world". He may not yet have imagined that an artificial intelligence could soon wage war in place of humans. We know that wars grow ever more complex as time goes by. But this time, technology could well keep pace with that evolution. "It is possible to develop even smarter weapons, that is to say weapons that can learn by themselves in an environment and make decisions that are neither controlled nor pre-programmed by humans", asserts Paola Gaeta. A real revolution in the field of war.