LaCross Institute Conference to Explore Gaps Between AI Expectations and Reality
By McGregor McCance
Artificial intelligence is transforming business, augmenting human capabilities, streamlining operations and fueling innovation across industries. But rapid adoption presents pressing challenges: concerns over job displacement, ethical blind spots, unreliable outputs and a persistent gap between expectation and reality.
Leaders are discovering that AI presents promise and peril, placing a premium on expertise, experimentation and thoughtful strategy.
The LaCross Institute for Ethical Artificial Intelligence in Business will convene academic and business experts and researchers, practitioners and ethicists to explore these issues.
“Minding the Gap: How AI Drives Performance and What Limits Its Impact” is scheduled for Dec. 5 at The Forum Hotel at the University of Virginia Darden School of Business.
The Darden Report checked in with two LaCross Institute experts for more insight before the conference: Academic Director Raj Venkatesan and Director Marc Ruggiano.
What are examples of the “expectation-versus-reality” gap in AI adoption today?
Venkatesan:
One expectation is that AI can be easily adopted without any customization to the firm’s internal processes, and that employees can adopt and be effective with AI with minimal training.
The reality is that AI is effective when it is customized to the context of the company and is fine-tuned/trained on the firm’s proprietary data. Employees need a lot of training to adapt to AI to improve their productivity. The returns from AI take time to realize.
Ruggiano:
One expectation is that employees will wait for the organization to determine how AI should be used, which AI tools will be provided, and what guidelines must be followed.
In reality, employees are using AI tools whether their organization has endorsed their use or not and are purchasing what they believe will best enable them to succeed at work, while following their own interpretation of what practices are acceptable.
Another is that AI firms are overvalued and investments in them are unlikely to pay off as handsomely as promised. The trend follows past examples such as the cable television, internet service, and dot-com booms, and a similar “bust” is a likely outcome of the “AI bubble” as well.
While there are plenty of skeptics, some predict that AI will develop into the largest opportunity for value creation in human history. However it evolves, this kind of beneficial outcome will not materialize without a sustained commitment to ethics.
What leadership skills and cultural changes are necessary to create an “AI-savvy” organization that embeds ethics at its core?
Venkatesan:
Leaders need to provide a vision of ethical AI use in the organization. They need to clearly communicate the reasons for implementing AI: workforce productivity, competitive advantage, new business models, etc. They need to understand the entire value chain of ethical AI. Often organizations use AI in an ad hoc manner within individual teams, and there is a lack of a coherent strategy. A unified vision, and a view of the value chain of AI, will help leaders surface ethical concerns about AI more effectively.
Ruggiano:
Leaders must recognize that AI literacy is foundational in modern business to ensure that employees at all levels are confident in their ability to embrace AI-enabled activities in every function.
They must provide new growth opportunities for employees in an AI-enabled organization, allowing them to develop the functional skills, business expertise and managerial judgment that previously came from “climbing the ladder,” a path that is changing in many industries as AI performs or assists in more and more traditionally human tasks.
Companies often face trade-offs between efficiency, innovation and ethical responsibility. How can leaders navigate those while capitalizing on AI’s opportunities?
Venkatesan:
I disagree that there is a trade-off here. Firms that consider the value chain of ethical AI can achieve synergies between innovation and efficiency in an ethical manner. For example, using models that protect consumer privacy while delivering personalized services and advertising requires connecting the data, models, application, management and people aspects of AI to ensure an ethical implementation.
Ruggiano:
Some may perceive these as conflicting objectives, but leaders must push back on this mistaken view. They must understand AI, and their businesses, well enough to champion policies and practices that advance on all of these fronts. They should position their organizations appropriately on the efficient frontier of ethical AI adoption, embracing the value chain of ethical AI as a lens to explore these issues and guide their efforts. In this way, ethical AI may bear some resemblance to product safety, where efficiency, innovation and business ethics must all be satisfied for an effort to count as a success.
What major changes do you anticipate ethical AI will bring to business models and industries by 2030?
Venkatesan:
There will be new types of jobs – prompt engineer, AI ethicist, etc. We are seeing a growth in consumer products (including humanoid robots) that are household assistants. Companies are starting to build governance layers to ensure AI outputs are accurate and ethical. There will be a growth in small language models that are lightweight and fine-tuned for specific tasks.
Ruggiano:
Google’s early mission to “organize the world’s information” made search a foundational way for people to find what they are looking for, whether it is fashion or facts. In the next five years, AI will enable individuals to find information at least as easily as searching for it, and to understand and employ it for their own and their organization’s benefit, or detriment. Industries that rely on information – aggregating, organizing or employing it – will be under intense pressure as a result.
Other sectors that depend on creation of content – voice, text, video, sound and more – are already threatened as generative AI makes creation of these outputs easier and more widespread. A flood of AI-generated content is already overwhelming some businesses such as social media and threatening some jobs such as animators and character artists. However, human creativity and imagination continue to stand out while preferences for human-human interaction remain strong across these content types. Models in these businesses must adapt in ways that recognize the changes AI is driving and amplify the uniquely human aspects that create memorable moments.
What role should humans play in AI-driven activities, such as customer service interaction, credit approvals or hiring decisions?
Venkatesan:
Decisions about humans’ role in AI activities also need to consider that humans, in general, prefer to interact with other humans. Humans also have a better understanding of context and learn from experience in real time. At the moment, AI recommendations that are augmented by human input, and grounded in a specific knowledge corpus provided by humans, perform better than AI-only recommendations. Finally, sensitive industries like consumer finance, healthcare, etc., will always need humans to approve AI decisions.
Ruggiano:
When AI systems sense an impending collision in cars and in aircraft, we rely on them to take action. While they often alert drivers and pilots in advance, they don’t stop with a recommendation to “apply the brakes” or “pull up.” When the situation warrants it, they take action, and by doing so, they save human lives. Yet, we are not in a world where general purpose AI systems are trustworthy enough to assume such authority more broadly. And we have not determined, as a society, just what we are willing to delegate to AI. For the foreseeable future, humans should remain the final decision-maker in many areas and for many types of decisions. But just as with accident avoidance, there will be an increasing number of domains where we agree that AI decisions are better than the next best alternative.
The University of Virginia Darden School of Business prepares responsible global leaders through unparalleled transformational learning experiences. Darden’s graduate degree programs (Full-Time MBA, Part-Time MBA, Executive MBA, MSBA and Ph.D.) and Executive Education & Lifelong Learning programs offered by the Darden School Foundation set the stage for a lifetime of career advancement and impact. Darden’s top-ranked faculty, renowned for teaching excellence, inspires and shapes modern business leadership worldwide through research, thought leadership and business publishing. Darden has Grounds in Charlottesville, Virginia, and the Washington, D.C., area and a global community that includes 20,000 alumni in 90 countries. Darden was established in 1955 at the University of Virginia, a top public university founded by Thomas Jefferson in 1819 in Charlottesville, Virginia.
Press Contact
Molly Mitchell
Senior Associate Director, Editorial and Media Relations
Darden School of Business
University of Virginia
MitchellM@darden.virginia.edu
