Ethics Is the Defining Issue for the Future of AI. And Time Is Running Short.

By McGregor McCance


As investment in artificial intelligence (AI) continues to surge, a critical element is not getting enough consideration, increasing risks to people, businesses and society. Scott Beardsley believes significantly more attention must be focused on ethics as it applies to AI, in theory and in practice.

“What’s going well is that the tools, frameworks and conceptual clarity for ethical AI exist and are advancing rapidly,” said Beardsley, former dean of Darden. “What’s going poorly is implementation. Many companies still treat ethics as optional, while structural risks like bias, opacity and concentration of power remain entrenched.”

Time is running short to make a meaningful difference.

The next five years, Beardsley said, will determine whether ethics are embedded as infrastructure — or patched in too late at greater cost.

Darden is leading the effort to implement ethical AI. The subject has also emerged as a core focus for Darden in the classroom, in research and in thought leadership that helps businesses thrive.

Darden’s LaCross Institute for Ethical Artificial Intelligence in Business, launched in 2024, provides a nexus for AI-related knowledge creation and instruction across Darden and the University of Virginia.

Here are several key issues framing the urgency of the ethical AI challenge, identified from research and scholarship conducted by the LaCross Institute.

Why are AI ethics at a critical inflection point right now?

The technology is scaling faster than governance and safeguards can keep pace. AI is already shaping people’s lives, the harms are real, regulation is behind and adoption is accelerating. Decisions made now will shape how AI is embedded into society for decades.

Ethics cannot be bolted on later. Waiting until AI is fully woven into critical systems to correct bias, opacity or governance failures will be like retrofitting seatbelts after cars are already on the road. The next five years represent a window of opportunity to embed ethical frameworks — before risks become locked in and irreversible.

The United Nations’ Ethical AI Agenda 2030 frames the next five years as a window of opportunity: close enough to demand immediate action but long enough to implement structural safeguards.

What factors contributed to the current situation?

A “move fast and fix later” culture may work in consumer tech, but it is dangerous when applied to AI systems that determine creditworthiness or medical treatment. Once these systems are deployed, adding ethics after the fact is slower, costlier and harder to enforce. By 2030, AI will be so embedded in business and government infrastructure that retrofitting ethical standards may be nearly impossible.

Regulatory frameworks are fragmented and lagging. The EU AI Act, which comes fully into force in 2026, represents the first comprehensive regulatory regime. Elsewhere, the landscape is patchy: the U.S. has only partial guidance, while countries like Brazil, South Africa and Indonesia are still developing policies. AI is global, yet rules are national.

What’s the difference between AI ethics and ethical AI?

While they are related, they describe two perspectives: one theoretical, the other practical.

AI ethics is the academic and philosophical study of the moral, social and political issues raised by artificial intelligence. It is concerned with principles, frameworks and normative debates. It addresses the question: What should we do?
Ethical AI, by contrast, refers to the practical implementation of those principles in the design, development and deployment of AI systems. It is about ensuring that AI behaves in ways that are helpful, honest and harmless — not just in outputs but throughout the development lifecycle. It addresses the question: How do we actually do it?

AI ethics without ethical AI is toothless. Ethical AI without AI ethics is aimless. Both are required. The current imbalance — heavy rhetoric on ethics, lighter focus on practice — is what makes this moment particularly risky.

How does Darden approach ethical AI?

The LaCross Institute frames ethical AI as a value chain — a set of end-to-end activities where ethics must be designed in and continuously verified. In this model, there are five interconnected stages:

  • Infrastructure — compute, cloud, networks and their environmental footprint
  • Measurement & Data — sourcing, preparing and governing data
  • Models & Training — architecture, tuning and optimization choices
  • Applications & Implementation — deployment into real workflows
  • Management & Monitoring Outcomes — ongoing oversight and impact assessment

Each stage creates opportunities for value and distinct ethical risks that need controls and accountability built in from the start. The value chain operationalizes ethics. It turns “be ethical” into who does what, when and with what evidence. It’s the difference between aspirational principles and repeatable management practice — and it’s how leaders make ethics part of AI’s ROI, not a bolt-on cost. LaCross Institute Director Marc Ruggiono is the lead author of an institute white paper on the value chain of ethical AI that will be published in early 2026.

Too often, AI ethics have been treated as an afterthought rather than a core design principle. Illustration by Daniel Liévano

Are AI ethics an afterthought for many companies or organizations?

Too often, AI ethics have been treated as an afterthought rather than a core design principle. Organizations may sign on to broad “ethical principles,” but when it comes to building or deploying AI, ethics is bolted on late in the process, if at all.

When ethics is left until the end, it is always the weakest link. Companies find themselves reacting to scandals instead of building trust and resilience.

Do competitive pressures cause businesses to rush implementation of AI?

Organizations often feel pressure from investors, boards or competitors to roll out AI products quickly. When AI is rushed, small errors can scale into systemic harms. For example, biased datasets may lead to discriminatory lending or hiring practices, which can then ripple across markets. Tesla’s Autopilot illustrates how pressure to launch quickly created gaps between what the system could do and how users perceived it — resulting in accidents and regulatory scrutiny.

Speed may provide a temporary competitive edge, but it often backfires. Flawed launches damage consumer trust, attract lawsuits and invite regulatory crackdowns. This creates reputational harm that outweighs early gains. Companies that chase speed without safeguards are gambling with trust, compliance and long-term sustainability.

Is there a competitive advantage to a company committing more attention to ethical AI?

Companies that are transparent and fair build stronger trust and brand loyalty. As the LaCross Institute emphasizes, “Helpful, Honest and Harmless” AI is not a brake on innovation but a foundation for sustainable growth.

Companies that build ethics into their AI strategy gain a dual advantage: They mitigate risks while building trust as a growth engine. Ethical AI is shifting from a cost center to a strategic asset. The companies that understand this early will be better positioned for the next decade.

Where will leadership on these issues come from?

From the actors who design, buy, deploy, insure and audit AI — especially large enterprises, standards bodies, multistakeholder consortia, universities and civil society. In the near term, these players can move faster than legislation and shape norms through procurement, standards adoption and market discipline.

How is AI affecting MBAs?

AI is automating pieces of analysis and content creation, but the managerial work that MBAs are trained to do — framing problems, balancing tradeoffs, governing risk and orchestrating cross-functional execution — grows more important as AI scales.
Rather than displacing MBAs, AI is creating new management roles: AI product owner, model risk manager, AI procurement lead, responsible AI officer and data governance director. These roles reward graduates who can connect technical teams, legal and compliance functions, and profit-and-loss owners using shared frameworks and measurable controls.

AI automates some analysis, but it elevates the need for leaders who can design systems that are reliable, fair and auditable in production. The MBA stays relevant by becoming the degree that teaches how to run the business of AI or manage AI as a business function.

What is the LaCross Institute doing differently than other academic institutions focusing on AI?

The LaCross Institute stands out through its operational, managerial focus — distinct from both theoretical ethics centers and purely technical AI labs.

It treats AI ethics as an operational, leadership-driven discipline embedded in research, education and practitioner engagement. Through robust funding, a value-chain management framework, ambitious academic programming and university-wide collaboration, it equips business leaders with real-world tools to govern AI ethically and effectively.

About the University of Virginia Darden School of Business

The University of Virginia Darden School of Business prepares responsible global leaders through unparalleled transformational learning experiences. Darden’s graduate degree programs (Full-Time MBA, Part-Time MBA, Executive MBA, MSBA and Ph.D.) and Executive Education & Lifelong Learning programs offered by the Darden School Foundation set the stage for a lifetime of career advancement and impact. Darden’s top-ranked faculty, renowned for teaching excellence, inspires and shapes modern business leadership worldwide through research, thought leadership and business publishing. Darden has Grounds in Charlottesville, Virginia, and the Washington, D.C., area and a global community that includes 20,000 alumni in 90 countries. Darden was established in 1955 at the University of Virginia, a top public university founded by Thomas Jefferson in 1819 in Charlottesville, Virginia.
Press Contact

Molly Mitchell
Senior Associate Director, Editorial and Media Relations
Darden School of Business
University of Virginia
MitchellM@darden.virginia.edu