To Lead Is Human: What Does Responsible Leadership Mean in the Age of AI?

By Michael Blanding


Artificial intelligence (AI) came upon us slowly and then all at once. The rapidity with which generative AI has entered the workplace has led some to reflexively resist its incursion, while others have grappled with new ethical issues that surfaced seemingly overnight. “Whenever a new technology comes along, it has the opportunity to do good things, but also to potentially create problems,” says Dean Scott Beardsley.

“Leadership is needed at several different levels, including management, engineers, shareholders and even users of the technology, who have to figure out just what it is they are playing with.” Beardsley is fresh off a sabbatical at Oxford University’s Uehiro Centre for Practical Ethics, examining the connections between AI, ethics and human well-being.

Professor Yael Grushka-Cockayne

The first lesson for those looking to lead in an AI-powered world is to confront the reality that AI is here to stay, says Professor Yael Grushka-Cockayne, who helped convene a conference on AI leadership at Darden’s Northern Virginia campus last winter.

“There was a lot of conversation around learning to embrace AI and engage with it,” she says. “If you shut yourself off and say, ‘We’re definitely not going to use it,’ that’s irresponsible, because then you are not building the competency to understand where the dangers are.” The best thing a leader can do is to become educated about the technology and show they know what they are talking about. “That way you gain credibility and trust in the eyes of your clients, your employees and your investors,” says Grushka-Cockayne, “at the same time being able to put boundaries and limitations on it.”

AI Without Strategy: Like the Titanic Hitting the Iceberg With a More Efficient Engine

From a strategy perspective, responsible leadership is about more than just ethics, says Professor Mike Lenox, author of the new book Strategy in the Digital Age: Mastering Digital Transformation (Stanford Business Books, 2023). AI can transform businesses in many different ways, from making operations more efficient to totally changing the way they interact with customers. A company like Spotify, for example, uses AI to constantly measure the listening habits of its users, along with those of their immediate social networks and other similar listeners, to deliver customized music content. “Businesses used to have just episodic interactions with customers, maybe when they had a point of sale. Now they are constantly in contact with them,” Lenox says. That, in turn, can create new business models, leading companies to transform themselves from selling a product to managing a network or platform.
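Spotify’s production systems are proprietary, but the core idea Lenox describes (recommending what similar listeners already enjoy) can be sketched in a few lines. The following is a minimal illustration of user-based collaborative filtering, not Spotify’s actual method; the play-count matrix and the `recommend` function are entirely hypothetical.

```python
import numpy as np

# Hypothetical play-count matrix: rows are users, columns are tracks.
# Real systems are vastly richer; this only sketches the core idea of
# recommending what similar listeners already enjoy.
plays = np.array([
    [5, 3, 0, 1],   # user 0
    [4, 0, 0, 1],   # user 1
    [1, 1, 0, 5],   # user 2
    [0, 1, 5, 4],   # user 3
], dtype=float)

def recommend(user: int, k: int = 2) -> list[int]:
    """Rank unheard tracks for `user` by similar users' listening habits."""
    # Cosine similarity between this user and every other listener.
    norms = np.linalg.norm(plays, axis=1) * np.linalg.norm(plays[user])
    sims = plays @ plays[user] / np.where(norms == 0, 1, norms)
    sims[user] = 0  # ignore self-similarity

    # Score each track by similarity-weighted plays, then drop tracks
    # the user has already heard before ranking.
    scores = sims @ plays
    scores[plays[user] > 0] = -np.inf
    return [int(i) for i in np.argsort(scores)[::-1][:k]]

print(recommend(user=1))  # tracks favored by listeners similar to user 1
```

The same similarity-weighted scoring, run continuously over fresh listening data, is what turns episodic transactions into the constant contact Lenox describes.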

Professor Mike Lenox

As exciting as that is, however, Lenox warns that incorporating AI responsibly requires serious advance thinking about the big picture. “Instead of falling into the trap of immediately adopting AI, you need to think about why you are adopting it, how it is helping you and what you want it to do,” Lenox says. “If you apply AI without thinking strategically, it’s like improving the efficiency of the engines on the Titanic — all you are going to do is hit the iceberg more quickly.”

Lenox suggests leaders think about what their unique value is to their customers, and how the advanced capabilities of AI will help them better provide it. It also means considering “whether you should build, buy or buddy,” in Lenox’s alliterative words. That is, create a model yourself, purchase an existing AI model or partner with an AI company to create one together.

A Responsible Mindset: Managing AI to Be a Means to a Beneficial End

It’s important to remember that AI is “a means to an end, and not an end unto itself,” says Ben Leiner (MBA ’19), an adjunct lecturer at Darden teaching the course “Technology and Ethics.” The development of AI creates some dangerous incentives to cut corners at the expense of customers, says Leiner, who is also a marketing lead at SmartNews, a global news aggregator that uses AI to personalize news for users. “There’s often a desire to push for short-term financial gain without considering long-term societal consequences or other systemic issues that may be at play.”

The power of the technology, combined with a lagging regulatory framework in the United States, where most of the leading AI companies are located, means companies may be tempted to aggregate data for models that create a better product or service but also violate user privacy or copyright law. When dealing with large language models such as ChatGPT, managers may also overlook underlying data biases that can, in turn, cause models to produce results that harm marginalized communities.

"You have people on one side saying, ‘Go faster, faster, faster. This could really solve some of the world’s greatest problems,’ and people on the other saying, ‘This could get out of control, we need to put in some limitations and guardrails to ensure it doesn’t hurt people.'"
Darden School Dean Scott Beardsley

While companies can install safeguards in the way they train models, Leiner says, they can never completely overcome all biases, due to the simple fact that the Internet itself reflects the biases of our larger society. “We’re never going to remove all the bias from these machines,” Leiner says. “But it’s the obligation of generative AI companies to understand the models they are deploying so that they can be used in contexts where they don’t create unintended harm or perpetuate inequality.”
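Leiner’s point that the bias lives in the data itself is easy to demonstrate. The toy corpus below is invented, but counting which pronouns co-occur with which professions shows the kind of skew a model trained on web-scale text would absorb:

```python
import re
from collections import Counter

# A tiny invented corpus standing in for web-scale training data.
corpus = """
The nurse said she would check the chart. The engineer said he fixed it.
The doctor said he was on call. The nurse noted she had finished rounds.
The engineer explained his design. The doctor said she would follow up.
"""

PROFESSIONS = {"nurse", "engineer", "doctor"}
FEMALE, MALE = {"she", "her"}, {"he", "his"}

# Count pronoun co-occurrence with each profession, sentence by sentence.
# Any skew here is exactly the skew a model trained on this text absorbs.
counts = {p: Counter() for p in PROFESSIONS}
for sentence in re.split(r"[.!?]", corpus.lower()):
    words = set(re.findall(r"[a-z]+", sentence))
    for p in PROFESSIONS & words:
        counts[p]["female"] += len(FEMALE & words)
        counts[p]["male"] += len(MALE & words)

for profession, tally in counts.items():
    print(profession, dict(tally))
```

Understanding a deployed model, in Leiner’s sense, means running this kind of measurement at scale on the model’s own outputs, before users encounter them.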

Lecturer Ben Leiner (MBA ’19)

For leaders, navigating AI’s pitfalls requires a clear ethical framework that goes beyond existing regulation and law. “Regulation is where the conversation starts, not where the conversation ends,” Leiner says. “It’s, frankly, sort of sociopathic to think you should be entitled to do whatever you want whenever the law doesn’t have something explicit to say.”

Take, for example, a hypothetical case: A company is rolling out a new AI model that can identify depression in users, but the product manager in charge discovers the model produces an unusually high rate of false positives for women.

The manager could roll out the product anyway to grab market share, hoping the company can fix the model before it suffers reputational damage. Truly responsible leadership, however, requires a guiding star that looks beyond the financial.

“In class, we teach our students to ask, ‘If I launched my product now, what are the worst things that could happen across a variety of stakeholders, and how do I deal with those eventualities?’” Leiner says. “In this case, you can modify your objective and say, ‘It’s not going to be just about making the most money in the short term, but about finding opportunities to make money and do so without these false positives.’”
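Leiner’s classroom exercise translates into a concrete pre-launch check. One possible sketch, with invented validation records and an invented policy threshold, computes the false-positive rate per group and holds the release when the disparity is too large:

```python
from collections import defaultdict

# Hypothetical validation records: (group, true_label, model_prediction),
# where 1 means "model flags possible depression".
records = [
    ("women", 0, 1), ("women", 0, 0), ("women", 0, 1), ("women", 1, 1),
    ("men",   0, 0), ("men",   0, 0), ("men",   0, 1), ("men",   1, 1),
]

def false_positive_rates(records):
    """Per-group rate of flags raised on people without the condition."""
    fp, negatives = defaultdict(int), defaultdict(int)
    for group, label, prediction in records:
        if label == 0:
            negatives[group] += 1
            fp[group] += prediction
    return {group: fp[group] / negatives[group] for group in negatives}

rates = false_positive_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")

# Illustrative launch gate: ship only if no group bears a much higher
# false-positive burden than another. The threshold is a policy choice.
MAX_ACCEPTABLE_GAP = 0.10  # hypothetical
if gap > MAX_ACCEPTABLE_GAP:
    print("Hold launch: disparity exceeds the policy threshold.")
```

The code is trivial; the leadership decision is choosing the threshold and honoring it when the market-share clock is ticking.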

Who Is Responsible, and What Are They Responsible for?

With any new technology, there is a race to roll it out quickly, before it can be adequately vetted, says Beardsley. “You have people on one side saying, ‘Go faster, faster, faster. This could really solve some of the world’s greatest problems,’” he says, “and people on the other saying, ‘This could get out of control, we need to put in some limitations and guardrails to ensure it doesn’t hurt people.’” In finding the right balance, he says, leaders need to ask themselves in advance who is responsible, and what they are responsible for, with regard to immediate impacts and potentially unforeseen consequences in the future.

In cases where outcomes are uncertain, he adds, companies can slow down and implement new applications on a smaller scale by running pilots or beta tests that can minimize the impact if something goes wrong. University researchers can play a vital role in this process by helping to test applications of AI before they are rolled out on a large scale, says Grushka-Cockayne — especially in high-risk realms such as health, education or finance. “Although it feels like the world is rushing along at a million miles an hour, traditional old-fashioned experimentation in a controlled setting is still valid and helpful.”
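The controlled experimentation Grushka-Cockayne describes often takes the form of a staged rollout. The sketch below (the cohort size, complaint rates and tolerance margin are all invented) routes a small pilot group to the new feature, compares a harm-sensitive metric against the control group and pauses expansion if the pilot looks worse:

```python
import random
import statistics

random.seed(7)  # reproducible illustration

PILOT_FRACTION = 0.05   # share of users exposed to the new feature
MARGIN = 0.005          # pre-registered tolerance, also invented

def simulated_complaint(arm: str) -> int:
    # Stand-in for real telemetry; the true rates are unknown in advance.
    return int(random.random() < (0.04 if arm == "pilot" else 0.03))

outcomes = {"pilot": [], "control": []}
for _ in range(20_000):
    arm = "pilot" if random.random() < PILOT_FRACTION else "control"
    outcomes[arm].append(simulated_complaint(arm))

for arm, values in outcomes.items():
    print(arm, f"n={len(values)}", f"complaint_rate={statistics.mean(values):.4f}")

# Expand only if the pilot is not worse than control by more than the
# pre-registered margin; otherwise stop, investigate and iterate.
if statistics.mean(outcomes["pilot"]) - statistics.mean(outcomes["control"]) > MARGIN:
    print("Pause rollout: pilot shows an elevated complaint rate.")
```

Capping exposure at a small fraction is precisely what bounds the damage if something does go wrong.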

Darden is poised to confront the questions of AI ethics and leadership head-on, thanks to a gift of more than $100 million from David LaCross (MBA ’78) and his wife, Kathleen LaCross — the largest in the School’s history — to support Darden’s Artificial Intelligence Initiative. The funds will be used to support faculty research, develop courses, write cases and convene forums to examine the challenges and opportunities of AI in business. “AI is a huge business in its own right, as well as a productivity and intelligence tool in so many existing industries, including drug discovery, technology, health care and education,” Beardsley says. “It’s highly relevant to learning and to teaching the next generation what it means to be responsible leaders for the future.”

The new gift bolsters Darden’s reputation as a leader in business ethics. Many faculty members are widely recognized thought leaders examining the implications of AI and other disruptive technologies.

At the end of the day, Darden might have the most impact by educating students on the uniquely thorny issues that can come with the technology and establishing the School as a trusted place where companies can recruit the next generation of responsible AI leaders at any level in the field.

“The way our case method is set up encourages students to be leaders and raise their hands in any context they encounter in business,” Leiner says. “The question is, how do you exercise the power you have?”

About the University of Virginia Darden School of Business

The University of Virginia Darden School of Business prepares responsible global leaders through unparalleled transformational learning experiences. Darden’s graduate degree programs (MBA, MSBA and Ph.D.) and Executive Education & Lifelong Learning programs offered by the Darden School Foundation set the stage for a lifetime of career advancement and impact. Darden’s top-ranked faculty, renowned for teaching excellence, inspires and shapes modern business leadership worldwide through research, thought leadership and business publishing. Darden has Grounds in Charlottesville, Virginia, and the Washington, D.C., area and a global community that includes 18,000 alumni in 90 countries. Darden was established in 1955 at the University of Virginia, a top public university founded by Thomas Jefferson in 1819 in Charlottesville, Virginia.


Press Contact

Molly Mitchell
Associate Director of Content Marketing and Social Media
Darden School of Business
University of Virginia
MitchellM@darden.virginia.edu