
The Future of AI is Agentic

Saturday, April 18, 2026

“Basically with the right AI model, if you feed it as much data as you want and contribute as much computing power as possible, then it can achieve great things. But the key question is, What is next?” That’s the question Xin (Eric) Wang, an assistant professor of computer science, addressed on April 9 at the most recent installment of the UC Santa Barbara Library AI in Action speaker series.

Wang presented his work alongside Fabian Offert, an assistant professor for the history and theory of digital humanities. The speaker series continues next month with Simon Billinge, distinguished professor in the Robert Mehrabian College of Engineering and director of the California NanoSystems Institute, and Nina Miolane, assistant professor of electrical and computer engineering and co-director of REAL AI (Reliable, Efficient, and ALigned AI) for Science, speaking on May 18 at 4 p.m. in the library’s first-floor Instruction & Training Room (1312).

Wang, who directs the UCSB Center for Responsible Machine Learning, discussed his approach to the future of AI — the development of AI agents that can understand and respond to humans and to their environment, and take action on the information they receive.

“I believe the next big step is to build AI agents that can see and understand the multimodal environment, that can chat with humans in natural language, and, more importantly, take actions to interact with the environment,” he said. He sees applications for AI agents in always-on-call medical assistants, smart drones that can irrigate crops, and digital agents that people interact with through their phones and apps.

As part of his presentation, Wang described how his startup, Simular, where he is head of research, is developing AI agents that can reason more like humans do.

Many current large language models, he said, use a process called “chain-of-thought reasoning,” a step-by-step reasoning process similar to the way a person might run through a list of ideas. “But here is the problem: humans don’t actually think this way, at least not all the time,” Wang said. Thought isn’t always verbal and sequential: it can leap from idea to idea, free-associate, and veer off track before arriving at an answer.
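As a rough illustration of the technique (the helper function and the commented-out model call below are hypothetical, not Wang’s system), chain-of-thought behavior is typically elicited simply by asking a model to reason in steps before answering:

    def build_cot_prompt(question: str) -> str:
        """Wrap a question in an instruction that elicits step-by-step reasoning."""
        return (
            f"Question: {question}\n"
            "Think through the problem step by step, "
            "then give the final answer on its own line.\n"
            "Reasoning:"
        )

    prompt = build_cot_prompt(
        "A train leaves at 3:40 p.m. and arrives at 5:15 p.m. How long is the trip?"
    )
    # answer = llm.generate(prompt)  # placeholder; any text-generation model would go here
    print(prompt)

The model then produces its intermediate steps as ordinary text, one after another, which is exactly the serial quality Wang is questioning.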

To incorporate more of those looser, human-like thought processes, Wang and his research team are introducing the idea of “soft thinking” to AI, a more associative process in which the AI agent can follow different lines of inquiry simultaneously. Wang is also working on how to better evolve “soft thinking” and other traits in AI over time.
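A toy sketch can make the contrast concrete. In the spirit of soft thinking (the sizes, names, and numbers below are illustrative assumptions, not Simular’s implementation), a model can carry forward a probability-weighted blend of candidate “ideas” instead of committing to a single one at each step:

    import numpy as np

    rng = np.random.default_rng(0)
    vocab_size, dim = 5, 8
    candidates = rng.normal(size=(vocab_size, dim))  # embedding for each candidate "idea"
    logits = rng.normal(size=vocab_size)             # model's score for each candidate

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    # Hard step (chain-of-thought style): commit to the single best candidate.
    hard_step = candidates[np.argmax(logits)]

    # Soft step: carry forward a probability-weighted blend, keeping every
    # candidate line of inquiry partially alive at once.
    weights = softmax(logits)
    soft_step = weights @ candidates

    print("hard step:", np.round(hard_step, 2))
    print("soft step:", np.round(soft_step, 2))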

At the presentation, Wang discussed how most development of AI agents is modeled on biological processes — finding a successful version, then amplifying that version while discarding the others, a survival-of-the-fittest model. “But if you think about it, AI agents are not biological, they’re digital. Then why are we forcing them to evolve like biological individuals?” Wang said. “That’s very inefficient, actually. AI agents are not constrained by genetics, lineage, or reproduction. And they can directly share everything they have, including their trajectories, experience, tools, and workflows.”
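A toy comparison suggests why sharing can beat selection. The sketch below is purely illustrative, not the algorithm from the paper described next: agents that pool their discoveries end up with more tools than even the fittest individual kept in isolation.

    import random

    random.seed(1)

    class Agent:
        def __init__(self):
            self.tools = set()

        def explore(self):
            # Each episode, an agent may stumble on a useful tool or workflow.
            if random.random() < 0.5:
                self.tools.add(f"tool_{random.randint(0, 9)}")

    agents = [Agent() for _ in range(4)]
    for _ in range(5):
        for agent in agents:
            agent.explore()

    # Survival-of-the-fittest: keep only what the single best agent learned.
    fittest = max(agents, key=lambda a: len(a.tools))
    print("fittest agent's tools:", sorted(fittest.tools))

    # Direct sharing: every agent contributes to, and can draw from, one pool.
    shared_pool = set().union(*(agent.tools for agent in agents))
    print("shared pool:          ", sorted(shared_pool))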

In a recent paper posted to arXiv, he and computer science PhD students Zhaotian Weng, Antonis Antoniades, Deepak Nathani, Zhen Zhang, and Xiao Pu report that this approach, using AI agents that evolve as a group, outperforms self-evolving AI agents and matches or improves upon human-designed AI agents.

AI: Embedded in Culture

Before Wang’s presentation, Offert, director of the Center for Humanities and Machine Learning, discussed the emerging discipline of critical AI studies, a field that is the subject of his forthcoming book, Vector Media, written with Leonardo Impett, an assistant professor at the University of Cambridge in the UK.

The book focuses on the idea of embedding, the process of representing text, images, and graphs in a form that machines can process as data. One consequence of embedding is that images and text often need to be compressed, which results in lost information.

Offert showed an example: a blurry video of the movie Shrek, in which the images had been compressed but were still recognizable. “Compression exploits our inattention to certain perceptual aspects of visual data,” Offert said.

Compression also works in only one direction. “We can never retrieve the image from the embedding,” Offert said. “We can, of course, save a kind of artificial connection here and just remember that we embedded this image, but we cannot go back.”
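A small numerical sketch illustrates that one-way street. The sizes and the random projection below are illustrative assumptions, not anything from the talk: a high-dimensional “image” is projected down to a short embedding, and even the best-effort inversion recovers almost none of it.

    import numpy as np

    rng = np.random.default_rng(42)
    image = rng.normal(size=4096)             # stand-in for an image's pixel data
    projection = rng.normal(size=(64, 4096))  # embed 4,096 dimensions into 64

    embedding = projection @ image            # the compression step

    # Best-effort inversion via the pseudoinverse: many different "images"
    # map to the same 64-number embedding, so the original is unrecoverable.
    reconstruction = np.linalg.pinv(projection) @ embedding
    error = np.linalg.norm(image - reconstruction) / np.linalg.norm(image)
    print(f"relative reconstruction error: {error:.2f}")

The error comes out close to 1, meaning almost none of the original signal survives the round trip, which is Offert’s point about irreversibility.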

Offert traced the idea of compression through the early days of artificial intelligence, and discussed its connections with both vision science and political and economic history. For example, financial markets are “very abstract, and removed from the natural form of goods and services,” he said. “The price of making everything exchangeable, making everything commensurable through embedding is that we lose the characteristics of the media that we embed.”

In a question-and-answer session following the talks, Offert discussed AI’s role in different fields. In the sciences, Offert said, “Time and again, it has been shown that synthetic data [developed by iterating through AI models] is fine. It doesn’t get you all the way, but it works.” In contrast, multiple recent papers have shown that for some humanities research, “especially in terms of aesthetics, if you look at image generators, there’s a convergence on the worst kind of stock photography,” he said. “It becomes very clear what good and bad applications of AI systems are, where they really shine, which is not in making images for you, but in other domains.”

Related People: 
Xin (Eric) Wang

Fabian Offert (left) and Eric Wang spoke at the UCSB Library's AI in Action series on April 9.