The David LaCross Interview: Exploring AI’s Endless Possibilities and Darden’s Call to Action

By Lauren Foster


David LaCross’s affinity for artificial intelligence is palpable.

He speaks energetically about how ChatGPT and other large language models (LLMs) supercharge his ability to find and organize information. He talks enthusiastically about AI’s potential to individualize health care treatment, and its promise to benefit virtually any other kind of work. He voices caution over its role in amplifying misinformation and its surging power demands.

“The upside for AI as a productivity enhancement tool across so many different subject matters is going to be of immense benefit to humanity,” he says during a phone interview with The Darden Report from his home in California.

AI clearly is here to stay. And to LaCross (MBA ’78), Darden has a responsibility to help direct its growth, an effort he and his wife, Kathy, are enabling. In September, UVA and Darden announced the creation of the LaCross Institute for Ethical Artificial Intelligence in Business, made possible by the largest gift in Darden history from the LaCrosses.

For business students and professionals, LaCross’s advice is clear: embrace AI or risk becoming “roadkill” in that transformation. He’s confident it will solve complex problems and enhance human capabilities across industries. However, he stresses the importance of responsible development.

“It’s absolutely essential, from a career perspective, to be very intellectually curious about this,” he emphasizes.

The AI Revolution: A Productivity Powerhouse

LaCross’s optimism comes from experience.

His journey in AI began in the late 1990s, when Fair Isaac & Co. (FICO) acquired his company, Risk Management Technologies. FICO was then, and still is, arguably the preeminent firm in the broad AI segment known as predictive analytics. While there, he was given deep guided tours of FICO’s entire technology map, the blueprint of where a company is heading with its technology development, products and investments. LaCross describes the scale of the data, and the sophisticated approach to mining it to calculate credit scores or the likelihood of a fraudulent credit card transaction, as “truly breathtaking.”

But this area of AI, while vast, is very different from the AI getting headlines today. LLMs (sometimes referred to as generative AI) use machine learning to respond coherently to users’ text questions and, with related technology, to turn text instructions into images. The interactions can be iterative, with the AI responding to more detailed follow-up questions, for example, or modifying the last image it created.

“It’s absolutely essential, from a career perspective, to be very intellectually curious about this.”
David LaCross (MBA '78)

The game-changer, according to LaCross, came with the 2022 release of ChatGPT 3.5. As one of its first million users in the first week of release, he was instantly captivated.

“It was like honey to a honeybee for me,” he says. “When I used it for the first time, I was quite literally in awe.”

ChatGPT’s ability to handle complex queries and generate coherent, well-crafted responses marked a new era in AI capabilities.

“For non-controversial subjects – just fact-finding – I couldn’t ask for a better research assistant. It’s so much more efficient than general search, where each link must be examined, vetted, and relevant content painstakingly incorporated into the work,” LaCross says. “I now use up to four different LLMs daily and enjoy comparing and contrasting their results using the same queries.”

The scale of LLMs and their evolutionary pace give a glimpse of the technology’s enormous potential. LaCross notes that GPT-3.5, the model behind the original ChatGPT, contained about 175 billion parameters derived from its training data, all of them evaluated to predict the next word in a query response. GPT-4, released just four months later, contained 1.7 trillion parameters.

That’s tremendous progress, he says, but GPT-4o (“o” for “omni”), released in May 2024, takes LLMs to a new level and marks a move toward more natural human-computer interaction: training data can consist of any combination of text, audio, image and video, and output can likewise consist of any combination of those modes. The number of potential AI applications will explode.

AI Challenges

Even with AI’s tremendous upside, there are plenty of concerns and a growing need to mitigate risks and problems. LaCross says ethical concerns cut across AI’s progression and implementation. This includes privacy issues, bias that is perpetuated when the data informing AI itself reflects biases, sustainability impacts and more. These challenges will evolve and magnify as the technology develops.

When LLMs made their public appearance, critics pointed out that they often returned inaccurate information, a problem that, while somewhat reduced over the past couple of years, remains. But LaCross takes issue with critics’ use of the terms “hallucinating” or “making stuff up.”

“Hallucinate is the absolute wrong term, and it’s used by people who should know better. These LLMs do not hallucinate. They are simply developing AI algorithms from the training data they are fed,” he says. “When you scrape the entire internet for training data, you get the bad with the good. Generative AI applications apply the same algorithms to the entire data set to predict the next word, while considering the context of words previously selected. Results are based on algorithms – not creative thought.”
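The next-word prediction LaCross describes can be illustrated with a deliberately tiny sketch: a bigram counter rather than a trillion-parameter model. Everything here (the sample text, the function names) is illustrative, not how any production LLM is built, but the principle of predicting the next word from patterns in training data is the same.

```python
from collections import Counter, defaultdict

# Toy training corpus; real LLMs train on vastly larger text collections.
training_text = (
    "the model predicts the next word the model learns from data"
).split()

# Count how often each word follows each preceding word.
next_counts = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    next_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word most often observed after `word` in training."""
    if word not in next_counts:
        return None
    return next_counts[word].most_common(1)[0][0]

print(predict_next("the"))  # "model" follows "the" most often in this corpus
```

An LLM replaces these simple counts with billions of learned parameters and considers far more context than one preceding word, but as LaCross notes, the output is still a statistical prediction from data, not creative thought.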

This critique raises another issue with AI: Given the scale of data, how can AI results be audited, explained and validated? This so-called “black box” challenge is vexing because it’s multidimensional, and it’s now a crucial field of research and practice, LaCross says. “Much has been written regarding the need for functional audits that focus on precisely how AI results were created. Software historically has used computer-generated logs of every coding step taken to validate results. But when considering the soon-to-be trillions of parameters contained in LLMs that are being evaluated to provide the next word, alter an image, or create video,” he adds, “providing a detailed audit trail is beyond daunting.”

Still, the search is on for mechanisms to validate results, and to conduct “impact” audits to investigate the effects of an AI’s output over time on users and society as a whole.

“This is part of the AI and ethics discussion,” LaCross says.

Another concern: human-generated misinformation and AI’s multiplier effect on the problem. Types of AI-generated misinformation include embellishing details, misrepresenting uncertainties, drawing incorrect conclusions and simulating personal tones. Detecting AI-generated misinformation is a critical challenge, especially as AI becomes more adept at creating persuasive content that manipulates behavior. Deepfakes, whether audio, images or videos, are a closely related problem.

LaCross warns that we soon might not be able to trust any audio or visual information on social media. It’s a challenge requiring public education, new systems of verification and better policing by content providers.

Researchers and practitioners are exploring ways to address this challenge. Many AIs are beginning to incorporate content provenance, which indicates where the information originated, and source transparency by utilizing links and citations to allow users to confirm results.

“The problem, of course, is that those references could be fake as well,” LaCross says.

Another approach is to develop tools that can help identify when content was produced by AI, and then undertake independent fact-checking. The irony: using AI to identify content created by AI.

“The public needs to know that less and less content can be relied on to be accurate and that a healthy degree of skepticism is warranted,” LaCross says. “This challenge will only increase. As AI technology evolves, it will be harder and harder to spot AI-generated content.”

Content providers have their own issues.

“They are increasingly aware that their reputations can be damaged severely if their output loses credibility. They have a significant incentive to police their training data and to provide a mechanism for their consumers to log challenges to AI results and provide their reasons,” LaCross says. “They need to establish their brand’s trustworthiness. There will always be people who prefer to live in their echo chambers, but as more people become generally more wary of content, the needle will move in the right direction.”

Yet another challenge of AI’s continued growth is its effect on energy consumption. Here, LaCross identifies a sobering list of compounding issues that includes the surging electricity needed to power AI computing hardware and then cool those components, and the expansion of data centers that need enormous amounts of uninterrupted power. AI developers are acutely aware of this issue, he says, but the general public is far less informed.

“That needs to change, especially as AIs become more integrated and impactful in our daily lives,” LaCross says. “Ongoing efforts to optimize AI models and promote energy-efficient practices will be crucial for a sustainable future. But there remains much uncertainty. As AI continues to evolve, a balance must be struck between technological innovation and environmental sustainability.”

Reshaping Professions and Industries

A self-described “rabbit-hole” person, LaCross has found his AI dive to be his deepest and most diverse yet, given the countless side tunnels. He spends hours each day staying current, reading books, mainstream AI-focused periodicals and technical websites. LaCross estimates he spends four times as much time staying informed today as he did in 2022.

For the foreseeable future, LaCross envisions various AIs as “copilots” serving as productivity enhancers for human workers, not total labor replacements. But he acknowledges that “knowledge” professions will likely be impacted far more than was foreseen just five years ago. What makes this probable is one common denominator: the challenge professionals face in keeping up with advances in their field, such as technology, research, legal frameworks and regulation. That challenge leads to one of LaCross’s predictions for the future.

“I believe that AI is increasingly less of a technology challenge than a leadership opportunity.”
David LaCross (MBA '78)

“I believe that LLMs will remain very popular with many users, especially as a search replacement. And LLM data quality will likely improve, but the scale of the effort means vetting data will be expensive and time-consuming,” he says. “Given that, the real AI action will involve taking the core LLM technologies and applying them to thousands of much smaller domains, with a concentration on accurate training data from the outset. These more focused AIs will offer an up-to-date knowledge base that can be tapped very quickly, with relevant and accurate results relating to the information requested.”

The legal field, for example, could see paralegals, who usually undertake manual research for days or weeks, instead feeding an AI a particular set of facts and turning it loose for a couple of minutes to identify the universe of relevant precedents and propose legal arguments and strategies tailored to the known opinions of whichever judge is hearing the case. Lawyers could then debate the AI to test the soundness of its information and strategies.

In medicine, specialists and med students alike will benefit from AI’s ability to absorb the universe of relevant medical literature and provide information that addresses patient-specific data. “Text combined with video of surgical techniques could revolutionize the quality of diagnosis and treatment planning,” LaCross predicts.

Education, particularly early childhood education, is another area where LaCross sees enormous potential, despite some institutional resistance. He imagines AI multi-modal tutor bots with “infinite patience” that adapt to individual learning styles, working with children at any time of day from a very young age. Images or videos from the bot and verbal responses from students could accelerate vocabulary acquisition and later, reading comprehension.

“Also, a favorite aphorism emphasized in my Darden experience is that you can’t manage what you don’t measure,” he explains. “A significant role of any education bot will involve constant evaluations of a student’s performance, which it will report to teachers, parents and administrators. It will measure the student’s individual trajectory on different dimensions, their progress relative to their peers in class, and relative to their peer group nationwide or worldwide.”

He envisions teachers having access to a dashboard covering each student’s performance metrics measured by their respective bots and using that information to prioritize the need for personal interventions when a student goes off track.

“School districts can and should set education bot objectives and capabilities, including training them in skills that encourage student interaction for both recreation as well as education,” he says.

LaCross draws parallels with past technological revolutions, noting that fears of mass unemployment due to innovation have often been overblown. “Historically, the opposite has occurred in almost all cases,” he says. Still, he acknowledges that AI could lead to workforce reductions in some areas. But he remains optimistic: “The ones who survive and thrive will be those who become ‘query engineers’ able to extract the most productivity improvements from their copilots.”

The LaCross AI Institute

LaCross’s energy and excitement about AI in general are perhaps matched only by his enthusiasm about the potential of the new institute.

“I’ve got tunnel vision on case development and incorporating AI topics into our curriculum at the same level as marketing or finance,” he says.

The goal is to equip students with a deep understanding of AI’s potential business applications, ethical considerations, and implementation challenges, while leveraging and amplifying the array of AI-related research and instruction occurring across UVA.

As he did in 2023, when Darden announced his plan to expand his extraordinary gift to the school to $100 million, including the funding to create the institute, LaCross says Darden is perfectly positioned to answer the call for a business school to emphasize critical needs complementary to the technology.

“In fact, I believe that AI is increasingly less of a technology challenge than a leadership opportunity,” he says.

About the University of Virginia Darden School of Business

The University of Virginia Darden School of Business prepares responsible global leaders through unparalleled transformational learning experiences. Darden’s graduate degree programs (MBA, MSBA and Ph.D.) and Executive Education & Lifelong Learning programs offered by the Darden School Foundation set the stage for a lifetime of career advancement and impact. Darden’s top-ranked faculty, renowned for teaching excellence, inspires and shapes modern business leadership worldwide through research, thought leadership and business publishing. Darden has Grounds in Charlottesville, Virginia, and the Washington, D.C., area and a global community that includes 18,000 alumni in 90 countries. Darden was established in 1955 at the University of Virginia, a top public university founded by Thomas Jefferson in 1819 in Charlottesville, Virginia.


Press Contact

Molly Mitchell
Senior Associate Director, Editorial and Media Relations
Darden School of Business
University of Virginia
MitchellM@darden.virginia.edu