AI, Ethics and Business Collide in Anthropic’s Standoff with the Pentagon

By Molly Mitchell


After weeks of escalating tension between Anthropic and the U.S. Department of Defense, CEO Dario Amodei announced Thursday that the company would not revise its policies to allow the Pentagon to use its AI models for applications such as fully autonomous weapons or mass surveillance of American citizens — even at the risk of losing the government contract.

The clash typifies one of the defining tensions of the AI era: how to reconcile commercial opportunity and national security priorities with ethical guardrails. Anthropic’s commitment to hold firm is making waves, but negotiations remain ongoing.

The Darden Report caught up with Marc Ruggiano, director of Darden’s LaCross Institute for Ethical Artificial Intelligence in Business and a Navy veteran, who shared his insights on the issues at hand and explained how Darden teaches students to navigate the often choppy waters of business, ethics and technology.

As of Friday, Anthropic has declined the Pentagon’s request to loosen restrictions on its AI tools. Holding the line on ethical boundaries is unusual, especially with so powerful a client. What does this decision say about the AI start-up’s leadership?

Anthropic’s leadership has consistently voiced a focus on AI safety, and this has been a visible part of its public image since its founding. Amodei has been a vocal advocate, and the company has dedicated substantial resources to that mission, including teams focused on research into safety issues such as alignment and control. As a result, Claude and other Anthropic products demonstrate this focus on safety more visibly than some competitors’ products do.

The decision to decline the Pentagon’s request is not surprising. However, I don’t think Thursday’s announcement is the last word on the issue. Anthropic’s leadership team, with the apparent support of investors and board members, has outlined a position that lives up to the company’s ideals and the expectations it has set with the market, its employees and other stakeholders. That Anthropic is pushing hard on this shows that its leaders are willing to take business risks, but I expect that there will be an agreement that satisfies both sides.

I believe we are fortunate that principled leaders in AI, such as Amodei and Anthropic, are the ones engaging with the Pentagon to support our national security interests while advancing the safety of AI-enabled defense applications.

As both a veteran and the director of Darden’s LaCross Institute for Ethical Artificial Intelligence in Business, what can you tell us about the issues at hand involving AI, autonomous weapons and mass surveillance? Do you see any misunderstandings at play?

These issues are incredibly complex and discussions surrounding them can easily ignore the nuances that are very likely the focus of the dispute.

AI can and does play different roles, and some of them may be problematic while others might be beneficial.

Autonomy in the physical world, whether in weapon systems, automobiles, industrial equipment or other applications, involves a series of choices about how systems should act in new and poorly defined contexts and which outcomes are most important to achieve or avoid. With weapons systems, the fear of unintended lethal outcomes is paramount, and rightly so. We have historically entrusted humans with the responsibility to employ them properly, even autonomously. And we have equipped them to do so with rules, training and safeguards.

AI plays vital roles in autonomous applications, including in weapons systems. Anthropic’s recent announcement about NASA’s Perseverance rover is an example of AI used in autonomous systems in non-lethal ways. Similarly, AI can be used in autonomous weapons without being granted the authority to take lethal action, and it is critical that we prepare AI for these roles with rules, training and guardrails.

The discussion of AI use in autonomous weapons is much more nuanced than many people think, and in some cases AI use may prevent unintended outcomes, including lethal ones. That is also one of the reasons why I believe the Pentagon and Anthropic will reach an agreement eventually.

Darden emphasizes responsible leadership. What principles would you encourage students to use when facing high-stakes decisions where legal, profitable and ethical considerations don’t neatly align?

Responsible leadership is more important than ever, and to practice it effectively is increasingly difficult when AI is involved. A foundational need is to understand AI, including how it works and what makes it safe and effective.

As with other general-purpose technologies, it is difficult to envision all of the ways AI might be employed in the future, both beneficially and harmfully. That work starts with understanding, and AI literacy is essential for everyone in business, government and the military, especially leaders.

When AI is involved, decision-making is even more challenging for leaders who are used to thinking in static terms, weighing benefits against costs, for example. Contemporary AI is not static; it learns and evolves over time through both training and use. Leaders must therefore think about AI dynamically, in systems terms, in order to understand and plan for the impacts and outcomes that matter most.

Even when leaders consider a wide range of stakeholders, which we believe is essential to responsible leadership, decisions involving AI can still be fraught. That is not only because AI may have differing impacts on investors, employees, communities and society, but also because AI is powered by a complex ecosystem of players, and ethical considerations arise in many areas. Leaders must consider them all, from the infrastructure, data and algorithms that are the foundation of AI, to the applications themselves, to how they are managed and monitored by users, including companies and, in this case, the Pentagon.

I’d counsel leaders, both future ones like our students and current ones in corporate and military positions of authority, to keep these principles top of mind: understand AI, lean on systems thinking, take a broad view of stakeholders and consider the full value chain of ethical AI.

About the University of Virginia Darden School of Business

The University of Virginia Darden School of Business prepares responsible global leaders through unparalleled transformational learning experiences. Darden’s graduate degree programs (Full-Time MBA, Part-Time MBA, Executive MBA, MSBA and Ph.D.) and Executive Education & Lifelong Learning programs offered by the Darden School Foundation set the stage for a lifetime of career advancement and impact. Darden’s top-ranked faculty, renowned for teaching excellence, inspires and shapes modern business leadership worldwide through research, thought leadership and business publishing. Darden has Grounds in Charlottesville, Virginia, and the Washington, D.C., area and a global community that includes 20,000 alumni in 90 countries. Darden was established in 1955 at the University of Virginia, a top public university founded by Thomas Jefferson in 1819 in Charlottesville, Virginia.

Press Contact

Molly Mitchell
Senior Associate Director, Editorial and Media Relations
Darden School of Business
University of Virginia
MitchellM@darden.virginia.edu