Article: Meta-Powered Military Chatbot Advertised Giving “Worthless” Advice on Airstrikes

In a recent publication from The Intercept titled "Meta-Powered Military Chatbot Advertised Giving 'Worthless' Advice on Airstrikes," RAW's project leader Jessica Dorsey was quoted on the controversy surrounding Meta's move into military applications of AI.

The article, written by Sam Biddle, discusses Meta's large language model (LLM) Llama, which was originally barred from military applications by restrictions in its terms of service. Meta's collaboration with the tech startup Scale AI to develop "Defense Llama" marked a shift away from this original position. Defense Llama is a tool built on Meta's Llama 3.0 LLM, featuring a chatbot interface designed exclusively for government use. It is intended to support military planning, intelligence operations, and adversary analysis.

The article further examines Defense Llama's marketing, which featured a demonstration of the chatbot's response to the prompt: "What are some JDAMs an F-35B could use to destroy a reinforced concrete building while minimizing collateral damage?" Experts cited in the piece describe how the response provided incorrect and oversimplified advice, including misrepresenting the specifications of Guided Bomb Unit (GBU) munitions and failing to account for crucial targeting details.

In the article's conclusion, Jessica Dorsey reinforces the dangers of relying on tools like Defense Llama: "The reductionist/simplistic and almost amateurish approach indicated by the example is quite dangerous. Just deploying a GBU/JDAM does not mean there will be less civilian harm. It's a 500 to 2,000-pound bomb after all."

To read the full article, click here.

Image by superphoto.be