The US Army is developing AI models trained on information from real missions, with the goal of deploying a chatbot specifically for soldiers.
“We have all of these lessons learned from missions like the Ukraine-Russia War and Operation Epic Fury,” says Alex Miller, the Army’s chief technology officer, in an interview with WIRED. “There is a huge amount of knowledge available.”
Miller showed WIRED a prototype of the system, called Victor, that combines a Reddit-like forum with a chatbot called VictorBot to help troops surface useful information, like the best way to configure electromagnetic warfare systems for a particular mission. When a soldier asks how to set up their hardware, VictorBot generates an answer and points to relevant posts and comments from other service members. “Electromagnetic warfare is such a hard topic,” Miller says. Victor, he adds, “can generate a response and cite all of the lessons learned from [other] units.”
The Pentagon has ramped up its efforts to incorporate AI into military systems over the past two years, but Victor is a rare example of the military building AI for itself. The project shows how keen the US military is to master the nuts and bolts of AI, and how the technology may be poised to change daily life for many troops.
Miller says the Army is working with a third-party vendor that will run and fine-tune the AI models that power Victor. He declined to name the specific firm because the contract has not yet been announced. He says that more than 500 repositories of information have been fed into the system, and notes that Victor will seek to reduce the potential for errors in a similar way to commercial chatbots, by citing factual sources.
Efforts to integrate AI into military systems accelerated following the launch of ChatGPT in 2022. More recently, Anthropic’s technology reportedly played a prominent role in planning operations in Iran through a system powered by Palantir.

As these systems have grown more capable, however, disagreements have emerged regarding how AI should be deployed. Earlier this year, Anthropic went head-to-head with the Pentagon, arguing that its technology should not be used to power autonomous weapons or surveil American citizens.
Same Mistakes
Victor is being developed within the Combined Arms Command (CAC). Lieutenant Colonel Jon Nielsen, who oversees the CAC’s work on Victor, says it’s not uncommon for different brigades to make the same mistakes on different missions. The goal with Victor, he adds, is to eventually make the system multimodal so that soldiers can feed in imagery or video and get insights. “Victor will be one of the only sources with access to official Army data,” Nielsen says.
Lauren Kahn, a senior research analyst at Georgetown’s Center for Security and Emerging Technology and a former policy advisor for the Pentagon, says Project Victor highlights the potential for AI to automate a lot of non-sexy back-office tasks within the Department of Defense. Late last year, the department introduced GenAI.mil, an initiative aimed at spurring greater AI adoption among DOD employees.

If Victor proves a success, however, Kahn believes the Army could eventually hire a large AI company to advance the system’s capabilities. “The big labs are obviously going to have a comparative advantage” in terms of building and deploying cutting-edge AI, she says.
Intel Failures
AI could introduce new kinds of problems for militaries, says Paul Scharre, executive vice president of the Center for a New American Security and a former US Army Ranger. Scharre says that the tendency for AI models to be sycophantic could be particularly problematic. “I could envision situations where that would be particularly worrisome in a context of intelligence analysis,” he explains.

Scharre adds that AI adoption could become more complex as systems advance from chatbots to agents capable of using software and computer networks. “Agentic AI raises this whole new set of challenges around security,” he notes.
