
Friend or foe? A&S faculty discuss benefits and concerns of AI

Posted on Thursday, January 29, 2026 in News Story, Research.

Illustration by Liz Chagnon

Artificial intelligence has become increasingly present in our everyday lives—embedded in our cell phones, popping up on social media platforms, and incorporated into many business operations.

With this rapid adoption come many emotions, ranging from excitement about having a technological aid that makes life more efficient to despair about its potential impacts on the environment and on human behavior.

Faculty experts in the College of Arts and Science are leading research to help us better understand AI’s effects on humans and the surrounding world, while also testing its use as a research tool. Their discoveries might hold the answers to some of your most pressing AI questions.

Is AI changing the way we think?

Jay Clayton, William R. Kenan, Jr. Chair and professor of English, is interested in how literature, film, and television influence public attitudes toward AI. His research began with support from a grant from the Ethical, Legal, and Social Implications Research Program at the National Institutes of Health.

Currently, Clayton is working with a team of undergraduate and graduate students to analyze 580 films, dating from 1927 to 2025, that reference artificial intelligence in its many forms, from computers and robots to cyborgs and networked intelligence. The team is examining the dataset for themes such as how film depictions of AI have changed over time, how AI has been envisioned as part of the future of medicine, and how AI might affect the environment. They plan to expand on this research by also analyzing novels and television shows that incorporate AI.

“The exaggerated hopes and fears that films raise concerning AI have had a clear influence on the degree to which people embrace AI in their lives,” Clayton said. “I think it is important that we take a balanced approach to the potential benefits and risks of AI in society. The study of literature and film can help us think critically about the exaggerations that surround AI in culture.”

Jenny L. Davis, Gertrude Conaway Vanderbilt Chair and professor of sociology, explores the ways AI reflects and shapes society at individual, cultural, organizational, and institutional levels. Her forthcoming book, The Injustice of Fairness: Algorithmic Reparation and the Case for Redress, challenges the fairness paradigm within AI and computing ethics, offering a structural and historical lens.

Her most recent projects include a study exploring community controversies around the xAI data centers in Memphis, Tennessee, a study of how people perceive risks of AI-related job loss and deskilling, and studies of AI and decision-making in hiring and military domains.

“One really interesting finding comes from an AI hiring study, where we asked participants to rate job candidates based on a resume paired with an AI-generated score. We found a robust and consistent pattern in which decision makers from all kinds of backgrounds and under all kinds of conditions were heavily influenced by the AI recommendations, rating job candidates in line with the AI-generated outputs,” Davis said. “However, when asked to explain their ratings, participants believed AI had very little to do with it, instead attributing their responses to job-relevant factors like education and work experience. This gap in self-knowledge is somewhat alarming. People are influenced by AI without realizing it’s happening.”

Looking at the rapid adoption of previous communication technologies, Davis anticipates AI will have a lasting impact on human social practices. When reflecting on the adoption of social media, Davis said society has become accustomed to even more interaction than before, moving fluidly between online and offline communication and developing grammars and norms for text, audio, and image-based mediums.

“With social media, social skills didn’t necessarily get better or worse, but the nature of social interaction changed—for both good and for ill, with significant variation between people and contexts,” Davis said. “With AI there is certainly concern about ‘social deskilling.’ AI interaction partners can be tuned to meet people’s needs, papering over the challenges and richness of human relationships. On the other hand, AI may be a way for people to practice their social skills, including difficult conversations and opening up about embarrassing or sensitive topics. More research is needed, alongside careful consideration of the way these technologies are designed, deployed, regulated, and used.”

Is AI a helpful research tool?

Allison Walker, assistant professor of chemistry and biological sciences, is working to develop AI methods that will help researchers discover new therapeutics from natural products, which are chemical compounds or substances produced by living organisms.

To accomplish this, her lab is working on assembling datasets of known natural products and their functions using large language models (LLMs) to help researchers search the literature for data more quickly.

Walker also teaches her students how to use AI methods in biology, including AlphaFold, a program that predicts protein structures.

“The largest impact of AI on my field is probably AlphaFold,” she said. “AlphaFold makes it possible to predict structures of many proteins, reducing the need for costly experiments to determine protein structure. This has really changed the way people do research in the areas of chemical biology and biochemistry.”

She envisions her field continuing to incorporate AI and developing more ways to automate experiments to collect data for AI training models, as well as to carry out experiments proposed by AI or based on AI predictions. However, she feels researchers should not become overly reliant on AI as it is not always accurate.

“My biggest concern is that researchers will become too trusting of AI models that are correct most, but not all, of the time. This is true of both AlphaFold and LLMs,” Walker said. “I worry that if researchers assume that all AlphaFold predictions are true or that all LLM-generated summaries of a field are correct, they could end up trusting a hallucination that leads them down the wrong research path. But overall, I believe that AI will have a positive effect in the fields of chemistry and biology by accelerating the rate of new discoveries and giving students tools to quickly learn about areas of research that are new to them.”

Steven Wernke, professor of anthropology, is developing new AI vision models for satellite imagery in a groundbreaking project to detect archaeological sites across the Andes of South America, with the goal of improving our understanding of Andean settlement systems and human-modified landscapes. He and his collaborator Yuankai Huo, assistant professor of computer science and electrical and computer engineering, won grants from the National Science Foundation and the National Endowment for the Humanities to support their efforts.

“Archaeologists working in the Andes, like in most areas of the world, face a huge sampling problem,” Wernke said. “We know a lot about the relatively small areas we have surveyed and excavated, but we don’t really know how representative those areas are of the whole. We’re trying to provide this other layer of information about the vast interstitial spaces between the areas we have studied systematically, so we get a continuous view of where people were and how they tended to adapt to and transform the landscape.”

Wernke said he doesn’t view AI as a hindrance to traditional archaeology, but as a helpful tool that supports the more laborious initial steps of the fieldwork process and yields more data than was previously possible.

“Archaeology has taught us much about diverse ways of organizing society through field studies of monumental sites and excavations in household contexts,” Wernke said. “But some of the greatest cultural achievements of the peoples of the Andes are their tremendous feats of sustainable landscape engineering and the associated settlement systems they supported. We get rich soundings of those cultural landscapes through fieldwork. What we’re providing is a complementary bird’s eye view of just how extensive and diverse those transformational solutions were. But we don’t just deploy AI to find sites and present the results. We don’t just use a ‘human in the loop’ approach, we use an ‘expert-and-stakeholder in the loop’ approach to generate results that are reliable and relevant.”

Haein Kang, assistant professor of art, often uses emergent technologies in her artwork, including AI. Kang began her career as a painter, but after moving to San Francisco, California, for her master’s degree, she met artists who incorporated technology in their work. This inspired her to do the same with her own art and pursue a doctorate in media engineering.

After moving to Nashville, Kang encountered the emergence of periodical cicadas, which spend 13 or 17 years underground. She decided to use AI as an artistic tool to recreate her memorable experience of seeing cicada exoskeletons covering the city. The result was a short video using Google Vertex AI that depicts the cicadas’ brief moments of life when they emerge into the light after spending long, dark periods underground.

“AI created images that closely reflected my memories and intentions. For example, AI photo-realistically reproduced the cicadas and even rendered nonexistent scenes, such as their claws catching soap bubbles, without a lens,” Kang said. “AI technology impacts almost everything today. From my perspective, we are expanding our artistic methods. Many traditional artists are still creating great paintings, but just as we started producing lens-based art when the camera was introduced, we now have AI. We are expanding our artistic toolset, not choosing one form over another. In my case, I create art with imaging technologies to explore my society and my time. Some artists observe the world a little bit differently based on their experiences. AI is just another object of observation.”

What is AI’s potential global impact?

Michael Bess, Chancellor’s Chair and professor of history, is examining the societal implications of AI. He is curious about how these kinds of technologies are likely to affect all aspects of our social world, from politics and economies to cultural life and everyday experience.

He views AI as a dual-use technology that could bring enormous benefits to society but could also cause extreme harm. In the short term, he thinks AI will continue to enhance many aspects of the scientific and technological discovery process, leading to the creation of many beneficial innovations. However, he is also wary of AI’s potential to cause mass unemployment by replacing human workers with cheaper and more effective AI systems.

“In the long term, the big question is whether AI can be controlled properly, so that it continues to serve human welfare. I am very worried about the fact that there is an ‘AI arms race’ today, both among nations and among big tech corporations,” Bess said. “Under these brutally competitive circumstances, it’s very hard to put in place safeguards and best practices that keep AI aligned with human values. The risk of AI spinning out of our control is real and increasingly worrisome to me.”

In his research, Bess analyzes what he views as the four greatest challenges to humanity: AI, climate change, nuclear weapons, and pandemics. He considers AI one of the most influential forces shaping human society that we have ever experienced.

“AI can be extremely beneficial but also extremely dangerous, and our society is not doing nearly enough to ensure that the advance of AI is being pursued with proper safety precautions,” he said. “I’m sure we’ll keep finding new ways to use it—and equally sure it will continue to present us with hard challenges as we seek to make sure its impact is beneficial rather than harmful.”

Kristy Roschke, research associate professor of the communication of science and technology, studies media literacy with a focus on how people use and consume information. She believes that proper AI usage should be included in any media literacy training.

“I think my concern is the same concern I’ve had anytime a new technology or digital tool becomes part of our everyday lives, and it’s that we don’t teach people how to use them in any sort of meaningful, process-oriented way,” she said. “AI has been widely adopted, but how many people have ever learned how it works, or why it works that way, or how it learned what it learned? And when you input something into it, how it’s taking what you wrote to teach it? The answer is not many. The tool is so easy to use, and that’s great, but if we thought the fallout from social media was bad, the potential fallout from a lack of really understanding what AI can do will be much worse.”

Roschke stressed the importance of considering how younger generations, who have never known a world without technology or AI, will grow up. The potential loss of critical thinking skills is a popular topic, but she said more work will need to be done to fully understand the repercussions of raising children exposed to AI from birth.

“One of the primary uses for AI, especially for young people, is companionship and seeking therapy-related types of interactions. It’s a proxy in some ways for many people to a relationship,” she said. “I think some of the social norms that we don’t think about teaching because we’ve just assumed that they have been inherent to human life are social skills, and now we need to teach social skills in tandem with AI skills. No public-school teacher in America knows how to address this in any way other than what their gut and experience is telling them. That’s fine, that’ll get you a certain distance, but they should be equipped with better tools.”

AI is fundamentally changing the way we live, but Roschke said it is important to remember that it is still a tool we need to learn how to use, and to consider how it is being used by others for us, against us, with us, and without us.

“As humans, we get very insecure about our expertise,” she said. “We need to figure out what it is we actually need to know. A lot of times, that’s stripping back the busy work we do and focusing our energy on where we really add value. Our challenge as humans is to figure out what we still bring to the table so that we can use these tools appropriately. And if that means all the art ends up being computer-generated in the future, we would hope that it still comes from a human heart.”
