
AI Drones and the Future of War: The U.S.-China Tech Race and the Defense Tech Frontier

2026-01-21 | Hamamoto

AI and autonomous drone systems are reshaping national security and the nature of warfare. Shield AI's Ryan Tseng and Skydio's Adam Bry — two founders who bet on this technology when nobody was paying attention — share their views on China's industrial threat, America's software advantage, the lessons of Ukraine, and the ethics of autonomous weapons.


This is Hamamoto from TIMEWELL Inc.

When Drones Were Just Expensive Toys

In the current geopolitical environment, AI and autonomous systems have become critical variables in the balance of power between states. Drone technology has evolved from a hobbyist curiosity into one of the most strategically significant domains of military competition — with China's industrial capacity producing drones at scale while the U.S. and its allies attempt to leverage AI and software superiority to maintain a competitive edge.

The war in Ukraine exposed the asymmetry of this new reality: cheap commercial drones destroying expensive weapons systems, and software update cycles determining battlefield outcomes.

This article draws on insights from two founders who anticipated this shift: Ryan Tseng, co-founder and CEO of Shield AI, and Adam Bry, co-founder and CEO of Skydio. Both bet on AI and autonomous drone technology when the industry was still in its infancy. Their perspectives on the technology, the competition with China, and the ethics of autonomous weapons offer a candid view of where this field is heading.

Two Founders, Two Origin Stories

Ryan Tseng built Shield AI out of a personal conviction, reinforced by his brother — a Navy SEAL who recognized that self-driving technology, applied to military systems, could protect soldiers' lives and preserve U.S. military advantage for decades. Tseng describes his search for "a noble mission, great people, and the chance to define what's possible" converging on that single question. He has spent the past ten years building AI pilots for military autonomous systems.

Adam Bry's path was more technical. As an MIT graduate student, he became obsessed with attaching computers and sensors to model aircraft to build AI systems that could outfly human pilots. When small, lightweight quadcopters began appearing around 2013-2014, he saw the scope of what was possible. The key bet Skydio made at founding: AI and autonomy embedded in small quadcopters would become powerful tools across industrial, government, and enterprise applications. Bry initially entered through the consumer market — building a platform he believed would become the foundation for government and military deployment. It worked: Skydio was awarded a U.S. Army short-range reconnaissance contract within a few years.


What Ukraine Demonstrated

Ukraine has been the most visible demonstration of how drones are transforming the character of warfare. The conflict introduced "mass" into the battlefield at scales not seen before — creating a more distributed, lethal force structure where soldiers in trucks can deploy drone swarms that attack targets from miles away.

What stood out was the asymmetry. Commercial quadcopters costing a few thousand dollars were destroying tanks and armored vehicles worth millions. This non-linear relationship between cost and effect is destabilizing for conventional military doctrine. Ukraine also demonstrated that software development speed matters tactically — forces that could update drone firmware and tactics faster than their opponents gained meaningful advantages.
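The cost asymmetry can be made concrete with a back-of-the-envelope calculation. The figures below are illustrative round numbers, not data from the article:

```python
# Illustrative cost-exchange arithmetic: cheap attackers vs. expensive targets.
# All prices and loss rates are hypothetical, chosen only to show the shape
# of the non-linear cost-to-effect relationship described above.

DRONE_COST = 3_000        # commercial FPV quadcopter, USD (assumed)
TANK_COST = 5_000_000     # armored vehicle, USD (assumed)
DRONES_PER_KILL = 10      # assume most drones are jammed, shot down, or miss

attack_cost = DRONE_COST * DRONES_PER_KILL      # total spent per kill
exchange_ratio = TANK_COST / attack_cost        # value destroyed per dollar spent

print(f"Attacker spends ${attack_cost:,} to destroy ${TANK_COST:,} of armor")
print(f"Cost-exchange ratio: roughly {exchange_ratio:.0f}:1 in the attacker's favor")
```

Even under pessimistic assumptions about drone losses, the exchange ratio remains lopsided, which is why this dynamic is destabilizing for doctrine built around expensive platforms.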

The U.S. and its allies are watching these lessons carefully. The existing structure of powerful, expensive weapons platforms has advantages — but adversaries have spent decades studying how to counter them. Autonomous, AI-equipped drones will be central to future conflict, and establishing superiority in this domain is increasingly recognized as a national security imperative.

China's Industrial Threat — and America's Response

China's drone manufacturing capacity is, by most assessments, formidable. Bry notes that China has dominated this space since the days of RC aircraft, and as drones evolved to incorporate computers, sensors, and sophisticated software alongside motors and batteries, a comprehensive manufacturing ecosystem formed around them. Skydio has manufactured in the U.S. since its founding — a difficult path that Bry describes as navigating into headwinds for years before more recent domestic manufacturing policy created some tailwind.

Tseng characterizes China's industrial capacity as "terrifying" and acknowledges the difficulty of closing the gap quickly. He frames the current moment as the equivalent of "Max Q" in aerospace — the point of maximum aerodynamic pressure on an ascending rocket — where multiple forces converge simultaneously: military transformation, the emergence of AI as a disruptive technology, and an adversary with growing industrial and military power.

But Tseng's conclusion is not that the U.S. should try to out-manufacture China on quantity alone. His argument is about what "mass" achieves versus "effect." The world is large, and targets are relatively small — throwing large numbers of weapons at problems doesn't guarantee destroying what actually matters. What the U.S. can do better than anyone is software: compressing the OODA loop (Observe, Orient, Decide, Act), deploying software updates in real time, and maximizing the effect of every minute of flight time. That combination of manufacturing recovery and software superiority is the strategy he advocates.
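The OODA loop Tseng invokes can be sketched as a simple control cycle. Everything below is a stand-in for illustration: a real system would plug perception, state estimation, planning, and flight control into these four stages.

```python
# Hypothetical sketch of an OODA (Observe-Orient-Decide-Act) cycle.
# All functions and data are invented placeholders, not Shield AI's design.

def observe(world):
    """Gather raw sensor data (stand-in: copy the world state)."""
    return dict(world)

def orient(observation, prior):
    """Fuse new observation with prior belief (stand-in: overwrite)."""
    belief = dict(prior)
    belief.update(observation)
    return belief

def decide(belief):
    """Choose an action from the current belief (stand-in policy)."""
    return "engage" if belief.get("target_visible") else "search"

def act(action, world):
    """Apply the action (stand-in: searching reveals the target)."""
    if action == "search":
        world["target_visible"] = True
    return action

world = {"target_visible": False}
belief = {}
actions = []
for _ in range(3):                      # three trips around the loop
    obs = observe(world)
    belief = orient(obs, belief)
    action = decide(belief)
    actions.append(act(action, world))

print(actions)  # first cycle searches, later cycles engage
```

"Compressing the OODA loop" means shortening the time around this cycle; a software update that improves any one stage tightens the whole loop without touching the airframe, which is the leverage the software-superiority argument rests on.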

Bry makes a parallel argument: the current generation of drones used in Ukraine still depends heavily on skilled pilots (FPV drones) or simple algorithms (fly to these coordinates, attack). The "second wave" of AI autonomy — where drones are animated by sophisticated, collaborative AI systems rather than direct human control — has not yet fully arrived. When it does, over the next decade, the battlefield dynamics will shift again. And in the AI and autonomy domain, the U.S. has genuine competitive advantages it should be actively protecting and extending.

On swarms: Bry is skeptical of the significance often attributed to China's mass formation light shows. Those displays depend on GPS and constant communication links — both of which are readily disrupted in real combat environments. True autonomy, operating in GPS-denied and communication-degraded environments, is a different and much harder problem. That is where Skydio competes with DJI on technical merit, not just price.
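One way to see why GPS-denied operation is the harder problem: without an absolute position fix, dead-reckoning error grows without bound. The toy integration below (all numbers assumed) shows how even a small constant bias in the velocity estimate accumulates over a minute of flight:

```python
# Toy dead-reckoning sketch: integrating a biased velocity estimate.
# A small sensor bias produces steadily growing position error, which is
# why GPS-denied autonomy needs visual odometry or similar corrections
# rather than inertial integration alone. All values are assumed.

DT = 0.1         # integration step, seconds (10 Hz, assumed)
BIAS = 0.1       # constant velocity estimation bias, m/s (assumed)
TRUE_VEL = 5.0   # true forward velocity, m/s (assumed)
STEPS = 600      # 60 seconds of flight at 10 Hz

true_pos = 0.0
est_pos = 0.0
for _ in range(STEPS):
    true_pos += TRUE_VEL * DT
    est_pos += (TRUE_VEL + BIAS) * DT   # the estimator sees biased velocity

error = est_pos - true_pos
print(f"Position error after 60 s: {error:.1f} m")  # ~6 m and still growing
```

A choreographed light show sidesteps this entirely by relying on GPS and a constant command link; autonomy that survives without either is a different class of problem.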

The Ethics of Autonomous Weapons

The development of Lethal Autonomous Weapons Systems (LAWS) — systems capable of selecting and engaging targets without direct human intervention — raises ethical questions that neither Tseng nor Bry avoids.

Bry's starting frame is honest: weapons systems kill people and cause immense suffering. That is the tragic reality of conflict. The ultimate goal of technology development should be deterrence — preventing conflict — and, when conflict is unavoidable, identifying targets with precision and minimizing civilian casualties. AI and autonomy, he argues, could actually improve on the status quo: a 500-pound bomb dropped to neutralize a target creates far more collateral damage than a precision, AI-guided system that identifies and engages only the intended target.

He also challenges the assumption that "autonomy" is something new. A bomb dropped from an aircraft in World War II was, in a sense, autonomous from the moment it left the plane — subject to physics, not human intervention, with no ability to recall it if the target turned out to be wrong. AI raises the stakes of this dynamic but doesn't introduce the fundamental concept.

Bry notes with some respect the seriousness with which U.S. military institutions engage with these questions internally. There are specialists who think carefully about what weapons systems imply and under what authority they should operate. The U.S. aspires to maintain ethical standards in how it conducts conflict, and Bry considers that aspiration a genuine national strength — not just rhetoric.

Tseng approaches the ethics through game theory. Human-machine teams are currently more effective than machine-only teams, and will be for years. A "human in the loop" framework is appropriate now. But he is unwilling to stop the analysis there. If circumstances arise where machine-only systems are most effective for a particular mission, the question of how to respond to that situation needs to be engaged honestly. He cites the Phalanx CIWS — a naval close-in weapons system that operates fully autonomously when activated — as an existing example of automated lethal systems at smaller scale. What does this imply at larger scale? What happens when a unit under intense attack decides to switch to "full auto"? These questions don't have comfortable answers, and Tseng warns against the complacency of assuming "humans are always in the loop."

What Would U.S. Leadership in This Domain Require?

Based on both founders' analyses, U.S. leadership in this domain would require:

  • Sustaining and advancing technological superiority: Continued R&D in AI, autonomy, and software at levels that prevent adversaries from catching up
  • Rebuilding domestic manufacturing capacity: Reducing supply chain vulnerabilities in drone components and electronics
  • Accelerating deployment: Strengthening the government-industry pipeline to get developed technology — especially software — into the field faster
  • Developing ethical and legal frameworks: Leading international norm-setting around autonomous weapons use, consistent with American values
  • Talent development: Building the human capital to lead this domain over the long term

Summary

AI drone technology has moved from hobbyist curiosity to a core axis of great-power competition. The lessons of Ukraine — asymmetric impact of cheap drones, decisive importance of software update speed, value of autonomous operation in degraded environments — are being absorbed rapidly by military planners worldwide.

  • China's threat: Industrial capacity to manufacture drones at scale; a deep and mature manufacturing ecosystem
  • U.S. advantage: AI, software, and autonomy — the ability to maximize effect per unit of hardware
  • Ukraine lessons: Non-linear cost-to-effect ratios; "software war" where update speed determines outcomes
  • The second wave: True AI autonomy in drones has not fully arrived yet — when it does, the strategic shift will be significant
  • Ethics: Human-in-the-loop remains appropriate now; the harder question is how to respond when it isn't, and the U.S. needs to engage that question honestly
  • Leadership requirements: Manufacturing recovery + software superiority + fast deployment + ethical leadership

The next decade will determine which side establishes durable advantage in autonomous systems. For both national security and the broader trajectory of AI development, few domains matter more.

Reference: https://www.youtube.com/watch?v=PZL-0yzCaSQ
