In an unprecedented escalation of the use of artificial intelligence in military operations, the U.S. and Israeli armed forces identified 1,000 military targets in Iran during the first 24 hours of the operation dubbed 'Operation Annihilation.' This development shows how deeply artificial intelligence has become embedded in military strategy: complex systems can process vast amounts of data and deliver recommendations in record time, outpacing human analysts.
Reports from outlets such as The Washington Post and Bloomberg indicate that this pace would not have been possible without these systems, which fused information from multiple sources, including satellites, drones, and intercepted encrypted communications. This raises questions about the ethical and legal responsibility for military decisions made without meaningful human oversight.
A report from Semafor details the tragic consequences of the attacks: 175 children, most of them girls, were killed in a strike on an elementary school in the city of Minab. Experts suggest that the failure to recognize the school as a civilian site is partly attributable to automated AI targeting processes, underscoring an ethical crisis in the use of this technology in combat zones.
The analysis reveals that the problem lies not only in human error but also in how data is used. Over the past two decades, the U.S. military has become inundated with data, and its response has been to develop artificial intelligence systems capable of processing and analyzing that information quickly and at scale. The open question is whether such systems can make correct decisions while accounting for human values and ethical standards.
Private companies such as Palantir and Anthropic are part of this dynamic, developing complex systems like Maven, which was created to enhance military analysis capabilities. Palantir's chief executive has said the primary goal is to make the West, and the United States in particular, the deadliest force in the world, a statement that raises questions about U.S. military policy in regions such as Iran.
In this context, technological developments on the battlefield pose significant threats to humanity. Using artificial intelligence systems to make critical decisions without human intervention increases the risk of escalation and civilian casualties, as the attack on the school demonstrates. Modern warfare is approaching the threshold of fully autonomous systems, a shift that could have catastrophic consequences.
The military applications of artificial intelligence are not limited to Iran; similar uses have been observed in Gaza, where AI systems have been employed to identify targets and produce analyses in record time. The Middle East has, in effect, become a testing ground for military technology, complicating conflicts and directly harming civilians.
Moreover, analysts emphasize the need for legal frameworks to regulate the use of artificial intelligence in military operations. Growing international attention to these issues offers some hope for establishing standards that protect civilians and ensure that reliance on automated systems in military decision-making is never absolute.
The state of artificial intelligence in combat zones today reflects a profound transformation in the nature of war and how it is waged. It calls for a thorough review by the international community of how to protect human and ethical values in an era of escalating and complex conflicts. Any failure in that review risks more tragedies like those that occurred in Iran.
