US military used Anthropic AI for Iran strike despite Trump ban
According to The Wall Street Journal, the US military turned to Anthropic's Claude AI assistant for intelligence analysis and targeting during a strike on Iran, just hours after President Donald Trump issued an order banning the use of such systems.
The military's decision to use Anthropic's technology despite the president's direct order has sparked considerable debate. On one hand, AI systems can be effective tools for analyzing intelligence data and supporting tactical decisions. On the other, disregarding a presidential directive raises questions about accountability within the armed forces.
It is worth noting that Anthropic, the developer of Claude, had previously stated ethical principles that ruled out cooperation with the military. Nevertheless, it appears that in a critical situation the military resorted to the company's services, despite the commander-in-chief's direct order.
This situation raises important questions about the boundaries of the military's use of emerging technologies, especially AI systems whose capabilities and limitations have not yet been fully explored. Such conflicts are likely to arise more often, and clear rules and procedures governing the use of advanced technologies in armed conflicts are needed.