Microsoft confirms it provided AI and cloud support to the Israeli military but denies evidence of civilian harm in Gaza, amid growing scrutiny of technology's role in warfare.
Microsoft admits aiding Israel’s military with AI and cloud tools during the Gaza war but denies tech was used to harm civilians, citing internal review findings. Image: CH
WASHINGTON, USA – May 17, 2025:
Microsoft has confirmed that it provided artificial intelligence and cloud services to the Israeli military during the ongoing war in Gaza, but insists there is no evidence its technologies were used to target civilians or conduct attacks that violated its ethical guidelines.
In a blog post released on Thursday, the U.S. tech giant acknowledged for the first time its role in supporting Israel’s military following the October 7, 2023, Hamas attack that killed approximately 1,200 Israelis. The retaliation in Gaza has since resulted in tens of thousands of Palestinian deaths, prompting global concern and calls for accountability from corporations involved in wartime technologies.
The statement follows an Associated Press investigation that revealed Microsoft’s previously undisclosed ties with Israel’s Ministry of Defense, showing a sharp uptick in the ministry’s use of Microsoft’s Azure cloud platform after the conflict began. Azure was reportedly used to process surveillance data that could be linked to AI-enabled targeting systems.
Microsoft said its support included cloud infrastructure, cybersecurity tools, and translation services, with the primary intent of aiding hostage rescue operations. However, the company emphasized that it has “not found evidence” its products were used to deliberately harm civilians or violate international norms.
In response to rising internal and public criticism, Microsoft said it initiated an internal review and hired an unnamed external firm to investigate the matter. The company has not disclosed the firm’s findings, nor whether Israeli officials were consulted, raising transparency concerns among watchdogs and employees.
Microsoft noted that it has limited visibility into how its tools are used once they are deployed on client systems or third-party platforms, which constrains its ability to track exact usage, particularly in volatile regions like Gaza.
Other U.S. tech companies, including Google, Amazon, and Palantir, also maintain cloud and AI contracts with the Israeli government. Microsoft stated that it enforces ethical usage through its Acceptable Use Policy and AI Code of Conduct, and that no policy violations have been confirmed to date.
Despite these assurances, critics remain unconvinced. The employee-led group “No Azure for Apartheid” accused Microsoft of “corporate whitewashing,” and former employee Hossam Nasr, who was fired after organizing a Palestinian solidarity vigil, condemned the company for refusing to release the full investigation report.
Cindy Cohn of the Electronic Frontier Foundation welcomed Microsoft’s partial transparency but stressed that “key questions remain unanswered,” particularly about how Microsoft tools may have supported operations resulting in high civilian casualties.
Notably, Israeli hostage rescue raids, such as the February operation in Rafah and another in Nuseirat in June, freed some captives but also caused hundreds of Palestinian deaths — events that have intensified the debate over AI’s role in modern warfare.
Experts say Microsoft’s statement could mark a shift in corporate accountability, with Emelia Probasco of Georgetown University noting that it is rare for a tech firm to impose ethical restrictions on a government engaged in active conflict.
Still, the controversy surrounding Microsoft’s involvement highlights a growing dilemma for the tech industry: how to reconcile commercial interests and national security partnerships with the ethical risks of deploying AI in armed conflicts.