Leadership in the Age of AI: Key Takeaways from UVA Data Leadership Conference
By Cooper Allen
Artificial intelligence (AI) experts shared their insights on how AI may prove to be a defining technology of our time during a full-day conference organized by the University of Virginia.
Held at UVA Darden DC Metro at Sands Family Grounds in Arlington, VA, on Dec. 5, UVA’s Conference on Leadership in Business, Data, and Intelligence also explored the many ethical questions that AI tools and their deployment can raise.
Co-hosted by the University’s Darden School of Business and School of Data Science, the gathering featured leaders from diverse sectors who opined on how AI will reshape society in the years ahead—and how it already is.
“AI has achieved unprecedented adoption and continues to develop at breakneck pace,” said Omar Garriott, Executive Director of the Batten Institute for Entrepreneurship, Innovation and Technology, which houses Darden’s AI Initiative. “This conference was about UVA using its unique convening power to explore the myriad implications of a technology with massive potential.”
Here are some key takeaways from the conference:
Building comfort and trust with AI is critical
Companies everywhere are now wondering how, and to what extent, they should incorporate AI into their operations. In the first panel discussion of the day, which focused on private sector perspectives on AI, panelists discussed the factors businesses must consider as they think about how to adopt AI tools.
“If you’re building an AI system, the number one thing you’re going to encounter is people who are afraid,” said Andrew Gamino-Cheong, co-founder and chief technology officer of Trustible AI. “And then it’s going to be on you to help build that trust.”
A critical component of establishing trust is ensuring that systems are safe – even when, and perhaps especially when, they’re operating in unanticipated ways.
“It’s not just, how do I design this system safely, securely?” said Jamie Jones, vice president of field services and technical partnerships at GitHub. “But how do I design it so that it’s safe and secure, even when it’s not doing what I expect it to do?”
Even with safeguards in place, though, Jones cautioned against releasing any AI system prematurely.
“We need to be very careful about what things we are actually pushing to market, shipping and going live with that may not exactly meet what we were trying to do,” he said. “Because, again, the blast radius of AI can become so large.”
The notion of responsibility was a common refrain. In another panel, Alex Pascal, a senior fellow with Harvard’s Ash Center for Democratic Governance and Innovation and a former Biden administration official, urged the private sector to assume that mantle.
“With great power comes great responsibility,” he said. “And right now, that responsibility for the next, I would say, at least a year to two years is in the hands of the private sector.”
How AI is defined affects how it’s governed
AI ethics and governance will be critical areas for policymakers and industry in the near and long term. Defining exactly what is meant by artificial intelligence will be important as guidelines and regulations are developed – and it’s not a settled matter.
“I would say the jury is still out,” said Renée Cummings, an assistant professor of the practice in data science at UVA, explaining that the definition tends to evolve as new technologies are released.
“I think everyone is defining AI in the way that works best for their organization, their agency, or the things they want to do,” she added.
Ron Keesing, director of artificial intelligence and machine learning at Leidos, agreed that clarity on AI’s definition remains elusive.
“I think the definition still remains very fuzzy, and it’s going to be more and more of a problem as we try to introduce governance,” he said, explaining that this is because firms and individuals will prefer their applications not be subject to regulation and will, consequently, argue their work does not constitute AI.
Given the complexities and risks involved with AI, would it be preferable to simply ignore the technology altogether? Marc Ruggiano, director of the Darden-School of Data Science Collaboratory for Applied Data Science at UVA, posed this hypothetical to panelists.
“What would we be missing out on?” he asked.
Cummings noted that many of the red flags raised by AI are not new. “The challenge with AI is the amplification,” she said. “The challenge with AI is the data.” However, she added, “seeing the extraordinary things that AI can do – it makes you realize that we’ve got to work with these tough questions.”
Jepson Taylor, chief AI strategist for Dataiku, said that passing on AI would be the historical equivalent of ignoring the printing press or the internet – only bigger.
“If you decide to skip it, for the people that are not in this room that decide not to skip it, they will run circles around all of us,” he said.
Another panel session, featuring executives from Meta and former officials of the Biden administration and Microsoft, addressed implications for leaders across sectors, namely the interplay between the public and private sectors.
Garriott, who moderated the panel, said: “We merely scratched the surface of critically important questions about the respective roles and responsibilities of the government and corporations as these technologies, which can often feel like black boxes, take root.”
We’re all AI experts
In a session on how AI impacts society, Mona Sloane, an assistant professor of data science and media studies at UVA, put it directly: “We’re all AI experts now, which means we should all have a voice in the conversation.”
Sloane explained how everyone is constantly interacting with AI systems in all facets of life, even if it’s not always visible. “We actually all are building up really important knowledges by way of this experience around AI,” she said.
The common denominator of these AI systems is “they are designed to facilitate decision-making in order to save time, resources and increase productivity,” she said.
Sloane addressed the notion that AI technology will serve as a proxy for human decisions. “AI does not necessarily replace those,” Sloane said, in reference to personal decision-making. “It shifts how they are being done.”
Sloane illustrated this by describing a research area she has explored, the use of AI in professional recruiting, which can take many forms, from screening applicants to crafting more inclusive job advertisements. She argued that “in order to avoid human bias and machine bias, we need to think about transparency — but in a contextual way.”
The psychology of AI
In a session exploring the psychology of AI, Roshni Raveendhran, an assistant professor of business administration at the UVA Darden School of Business, noted that AI is revolutionizing all aspects of how we work. She explored three different uses of AI in the workplace: for personalization, for increasing productivity and for planning and decision-making.
According to Raveendhran, whose research focuses on understanding the future of work, growing use of AI in the workplace creates a pressing need to understand “how AI impacts people and how our psychology impacts AI when AI is being deployed,” as she put it.
“To receive the full value from AI and other technologies,” said Raveendhran, “we need to be asking, ‘How can we leverage those technologies to create positive impact on organizations, on individuals and society’ and ‘How can we create workplaces in which people can thrive?’”
This is a period of ‘uncertainty’
In the conference’s closing keynote, attendees heard from Adam Ruttenberg, a partner with the law firm Cooley, who focused on the legal and regulatory environment for AI. Early in his remarks, he issued a disclaimer to the audience.
“When we talk about the tech regulatory landscape for AI, we are guessing,” he said. Why? “It’s changing every single day.”
To illustrate the delicate moment society is currently at with generative AI, Ruttenberg cited a test his law firm ran: it prompted ChatGPT to generate 10 cases for a legal scenario in which the firm knew only one case applied – and ChatGPT produced 10, because that was the precise request.
“So where does that leave us?” he asked. “It leaves us in a period of fear, uncertainty, and doubt.”
Ruttenberg described how the recent White House executive order – which includes guidelines on transparency, rules on biases, and privacy protections – currently represents the closest thing in the United States to a governing legal framework. However, the time required to implement its provisions could cause problems.
“The truth of the matter is, the technology will have changed by the time the regulations come out, because the pace of technology development, especially in AI, is huge,” he said.
He went on to explain that, currently, significant uncertainty exists around what will be covered by copyright and patent law, and that a lot will likely happen over the next 12 to 18 months to better define the lay of the land.
So, given this landscape, what should industry, developers, and regulators focus on?
“In the short term, in this period of uncertainty, we have to treat generative AI and AI generally as a tool,” Ruttenberg said.
“As human beings and responsible business owners,” he added, “we need to make sure that we’re using that tool in a responsible way – which means we need to understand how it works, we need to understand its capabilities and its limitations, and we have to be responsible for what it does.”
Looking farther down the road, Ruttenberg offered this advice: “All we can do is navigate our best, use the best amount of human oversight, and know that whatever we think we know today will change, and it’s going to change fast.”
And it’s a moment like no other
The discussions throughout the day were bookended by the leaders of the host schools: Phil Bourne, founding dean of the School of Data Science, and Jeanne Liedtka, interim dean of the Darden School.
Kicking off the conference in the morning, Bourne underscored the historic and unprecedented nature of this new AI era, saying we’re in a “Prometheus moment.”
“I don’t think there’s any doubt, as we see it sitting in our respective schools at UVA, that there’s something really game-changing going on right now,” he said.
Bourne added: “I’ve been around academia for a long time, and in government, and I don’t think there’s been quite a moment like this in my whole career.”
Liedtka, in closing the conference, acknowledged that discussions around AI, for now, may ultimately produce more questions than answers, given the complexity of the issues and challenges.
“Having the courage to ask the hard, tough questions is much better than spending your time answering the obvious, easy ones,” she said.
Liedtka also echoed Bourne’s sentiments in summing up the conference and the collaboration between the two schools: “I hope that this is just the first of many great conversations that we have together on the subject that will probably define the time we live in.”
This article originally appeared in the UVA School of Data Science news.
All photos by Avi Gerver Photography.
The University of Virginia Darden School of Business prepares responsible global leaders through unparalleled transformational learning experiences. Darden’s graduate degree programs (MBA, MSBA and Ph.D.) and Executive Education & Lifelong Learning programs offered by the Darden School Foundation set the stage for a lifetime of career advancement and impact. Darden’s top-ranked faculty, renowned for teaching excellence, inspires and shapes modern business leadership worldwide through research, thought leadership and business publishing. Darden has Grounds in Charlottesville, Virginia, and the Washington, D.C., area and a global community that includes 18,000 alumni in 90 countries. Darden was established in 1955 at the University of Virginia, a top public university founded by Thomas Jefferson in 1819 in Charlottesville, Virginia.
Associate Director of Content Marketing and Social Media
Darden School of Business
University of Virginia