Scout AI Secures $100M to Bring “Frontier” Intelligence to the Battlefield


A new player has entered the defense tech arena with a clear, if controversial, mission: teaching artificial intelligence how to navigate and fight in the chaos of war. Scout AI, a startup founded in 2024 by Colby Adcock and Collin Otis, has announced a $100 million Series A funding round led by Align Ventures and Draper Associates.

The company, which describes itself as a “frontier lab for defense,” is not building hardware like tanks or trucks. Instead, it is building the “brains”—a sophisticated software layer designed to turn existing military assets into autonomous agents capable of complex decision-making.

From Logistics to Lethality: The “Fury” Model

At the heart of Scout AI’s strategy is “Fury,” an AI model designed to command military assets. The company is following a phased deployment strategy:
1. Phase One: Logistics. Using autonomous vehicles to handle “dull, dirty, or dangerous” tasks, such as transporting water and ammunition to remote outposts.
2. Phase Two: Combat. Transitioning toward autonomous weapons systems and reconnaissance drones.

To achieve this, Scout is utilizing Vision Language Action (VLA) models. Unlike standard Large Language Models (LLMs) that primarily process text, VLAs combine visual perception with the ability to execute physical actions. CTO Collin Otis compares this process to training a soldier: rather than starting from scratch, they take a “broadly intelligent” model and refine it into a specialized “military AGI” (Artificial General Intelligence) through real-world experience.
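The distinction between an LLM and a VLA can be made concrete: instead of mapping text to text, a VLA maps a visual observation plus a language instruction to a physical control action. The sketch below is purely illustrative; the class and field names are assumptions, not Scout AI's actual interface.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    camera_frame: bytes   # raw image from the vehicle's forward camera
    instruction: str      # natural-language task, e.g. "deliver water to outpost B"

@dataclass
class Action:
    steering: float       # -1.0 (full left) to 1.0 (full right)
    throttle: float       # 0.0 (stop) to 1.0 (full speed)

class VLAPolicy:
    """Illustrative Vision-Language-Action interface: image + instruction in,
    a physical control action out. A real model would run a vision encoder
    and a language-conditioned policy head; this stub returns a placeholder."""
    def act(self, obs: Observation) -> Action:
        # Placeholder: drive straight ahead at low speed.
        return Action(steering=0.0, throttle=0.3)
```

The "training a soldier" analogy maps onto this interface: the pretrained weights supply the broad perception and language grounding, and field data refines only the policy that produces `Action`.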

The “Bootcamp” Approach: Training in the Dirt

Unlike many AI companies that train their models in sterile data centers, Scout AI relies on physical reinforcement learning. At a high-intensity training range in central California, the company uses all-terrain vehicles (ATVs) to bridge the gap between digital logic and rugged reality.

  • Real-World Complexity: While self-driving cars in cities operate on mapped, predictable streets, Scout’s models must navigate unmarked trails, loose sand, and steep hills.
  • The Feedback Loop: Human drivers operate the vehicles for eight-hour shifts. When the AI struggles, the human takes over; these “interventions” are logged and used to retrain the model, teaching it to better handle uncertainty.
  • The Goal of Scale: Scout believes that true intelligence comes from interacting with the physical world, not just reading the internet. By using the U.S. military’s vast fleet of vehicles as a testing ground, they aim to scale their intelligence faster than labs focused solely on digital AGI.
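The intervention loop described above resembles interactive imitation learning (in the style of DAgger): whenever the safety driver takes over, the corrective action is logged against the observation that caused the failure. A minimal sketch, with hypothetical names; nothing here reflects Scout AI's actual pipeline:

```python
class InterventionLogger:
    """Toy version of the feedback loop: the AI drives until a human
    intervenes; each intervention is stored as a (observation, human_action)
    pair to be replayed during retraining."""
    def __init__(self):
        self.corrections = []

    def step(self, obs, policy_action, human_action=None):
        # If the human intervened, record the correction and defer to it;
        # otherwise execute the policy's choice.
        if human_action is not None:
            self.corrections.append((obs, human_action))
            return human_action
        return policy_action

# Example shift: the AI handles a trail fork, the human takes over on loose sand.
log = InterventionLogger()
a1 = log.step("trail-fork", "bear-left")
a2 = log.step("loose-sand", "maintain-speed", human_action="slow-down")
```

After the shift, `log.corrections` holds exactly the hard cases the model got wrong, which is what makes eight-hour human-supervised drives a data-collection strategy rather than just a safety measure.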

Bridging the Gap in Modern Warfare

The move toward autonomy is driven by a shift in modern combat, highlighted by recent conflicts in Ukraine. Military experts note that while humans can operate individual drones, they cannot scale effectively to counter “swarms” of low-cost, unmanned systems.

Scout’s vision includes a “quarterback” platform: a high-compute drone that commands a group of smaller, specialized munitions drones. This swarm could autonomously search for targets—such as enemy tanks—and strike them, providing a level of precision that traditional artillery cannot match.

“The AI people don’t want to work with the military,” Otis noted, highlighting a growing divide between Silicon Valley’s “ethical” AI restrictions and the Pentagon’s operational requirements.

The Ethical and Strategic Frontier

While the prospect of autonomous weapons often triggers intense political debate, Scout’s leadership emphasizes human-centric safeguards. They argue that their systems can be programmed with strict parameters, such as only engaging targets within specific zones or requiring human confirmation before firing.
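The two safeguards the company describes, geographic engagement zones and human confirmation, are both hard preconditions, which means they can be expressed as a simple gate in code. The following is an illustrative sketch under assumed names, not a description of any fielded system:

```python
from dataclasses import dataclass

@dataclass
class Target:
    x: float               # position in some local grid (assumed units)
    y: float
    human_confirmed: bool  # has an operator approved engagement?

def engagement_allowed(target: Target, zone: tuple, require_confirmation: bool = True) -> bool:
    """Gate modeling the stated safeguards: the target must lie inside the
    authorized zone (x_min, y_min, x_max, y_max), and, when required, a human
    must have confirmed it. Either check failing blocks engagement."""
    x_min, y_min, x_max, y_max = zone
    in_zone = x_min <= target.x <= x_max and y_min <= target.y <= y_max
    if require_confirmation and not target.human_confirmed:
        return False
    return in_zone
```

The design point is that both conditions are conjunctive: an out-of-zone target is rejected even with confirmation, and an unconfirmed target is rejected even inside the zone.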

Furthermore, they contend that autonomous systems offer a tactical advantage by removing human fear and hesitation from the equation, potentially reducing errors caused by stress in high-pressure environments.


Conclusion: Scout AI is positioning itself as the essential intelligence layer for the future of defense, betting that the ability to marry advanced reasoning with rugged, real-world autonomy will make them indispensable to the U.S. military.