In Brave New World of AI, UVA Darden Professors Make the Case That Business Ethics Is More Critical Than Ever
By Gosia Glinska
The role of artificial intelligence (AI) in business is a red-hot topic. Yet as companies ramp up their investments in AI tools and techniques, those technologies can easily be misapplied, harming individuals, organizations and societies. As AI continues to proliferate, many worry that ethical safeguards and regulations lag behind.
To explore AI ethics in business, the Human and Machine Intelligence Group at the University of Virginia recently convened a webinar hosted by Darden School of Business Professor Anton Korinek, an authority on the economic implications of advanced AI.
Panelists included two of Darden’s top thinkers on business ethics and corporate responsibility: Professor Ed Freeman, best known for his work on stakeholder theory and business ethics, and Professor Bobby Parmar, an authority on decision-making in uncertain environments.
The Risks and Dangers of AI
A top-of-mind concern among participants was Big Tech’s power to shape human behavior. Social media platforms are a case in point. Their business model is based on maximizing user engagement and showing users targeted ads. By predicting which recommendations will keep users watching or reading as long as possible, a social media platform can become an addiction machine, Freeman and Parmar said.
AI algorithms, designed to improve customer engagement, can produce troubling outcomes, like deepening political and cultural divisions. “Social media companies,” said Parmar, “are getting people addicted to being right. We’re increasingly addicted to confirmation bias. When I look at my Instagram feed and see an out-group member, it’s easy to say, ‘See, they are wrong,’ and feel morally justified in my anger at this out-group member.”
The AI Alignment Problem
Korinek was particularly concerned about what he called the AI alignment problem. “How do we make sure,” he said, “that AI systems do what we want them to do?” According to Korinek, AI systems, like corporations, act as agents in their own right. “They pursue a set of defined goals that may or may not be aligned with the goals of all their stakeholders.”
Korinek’s immediate worry is our complacency in the face of accelerated AI adoption. “What if,” asked Korinek, “we suddenly have a mix of AI systems and corporations working together, influencing what is happening, and possibly leading to unethical outcomes in our society? One of the important things that business ethics has to deal with is, ‘How do we make sure that the goals of our corporations align with the broader goals of society?’”
The Importance of Teaching AI Ethics
Addressing key ethical questions should start in the classroom, the panelists said. “Students in data science, students in engineering have to understand moral considerations from the outset of developing technology, thinking through who the stakeholders are and how they are affected,” Parmar said. “What are their rights? What are our obligations? And that language of ethics is missing in the process of developing technology.”
Teaching students how to ensure that AI systems are designed and deployed ethically is critical, given that many technology companies have a build-it-first mindset and often view ethics as a hindrance.
At Darden, said Freeman, ethics is a bedrock of the MBA curriculum. Students learn how to deal with ethical challenges in real-world business contexts. “What’s unique about Darden,” said Freeman, “is that we see ethics as one of the background disciplines of business.”
Freeman hopes to inspire his students to change the existing institutions and start new, more responsible businesses. “Of course, we want to make money,” said Freeman. “But I know students want to be associated with companies that are trying to make the world better. That kind of revolution is sweeping the business world with people talking about purpose and stakeholder capitalism.”
The Role of Stakeholder Capitalism With AI
Stakeholder capitalism, according to Freeman, is a new vision of capitalism. It is based on Freeman’s stakeholder theory, which stresses the interconnected relationships between a business and its customers, suppliers, employees, communities and others who have a stake in the organization. As Freeman argues, companies should strive to build long-term value for all stakeholders, not just shareholders.
“Companies are realizing that they have to step up and take responsibility for a lot of things,” said Freeman. “They don’t know how to do that, but they are saying, ‘One thing we can do is be accountable to our stakeholders — our customers, suppliers, employees and the communities in which we operate.’”
Two years ago, the Business Roundtable, a Washington-based association of CEOs from the country’s largest companies, issued a statement on corporate purpose. In it, nearly 200 of America’s top executives committed to “deliver value to all stakeholders.” This is where Freeman finds a glimmer of hope for the ethical and responsible use of AI.
AI’s practical applications can help us tackle an array of problems. But alongside benefits, the technology can also cause harm. As the late theoretical physicist Stephen Hawking once said, “The rise of powerful AI will be either the best, or the worst, thing ever to happen to humanity.” The question the panelists posed is whether humans have what it takes to preempt the worst.
The University of Virginia Darden School of Business delivers the world’s best business education experience to prepare entrepreneurial, global and responsible leaders through its MBA, Ph.D., MSBA and Executive Education programs. Darden’s top-ranked faculty is renowned for teaching excellence and advances practical business knowledge through research. Darden was established in 1955 at the University of Virginia, a top public university founded by Thomas Jefferson in 1819 in Charlottesville, Virginia.