Cracking the Code of AI Reasoning: A Conversation with Google Research’s Dr. Annalisa Pawlosky

When Dr. Annalisa Pawlosky, Senior Research Scientist at Google Research, stepped into our EMBA classroom, we knew we were in for something special. But what unfolded was not just a lecture – it was an invitation into the mind of a scientist at the frontier of artificial intelligence, someone who bridges the worlds of biology, language models, and human reasoning with both curiosity and candor.

In her talk, titled “Analyze, Generate, Innovate – Lessons Learned from AI’s Past to Catalyze AI’s Future,” Dr. Pawlosky took us on a journey across nearly a decade of AI development at Google – from early foundational models to today’s multi-agent systems that generate, rank, and compete with one another in real time.

A researcher with a rich background in computational biology and systems science, Dr. Pawlosky focuses on what she calls “understanding the rules” – of language models, of biological systems, and of intelligence itself. Her work explores how large-scale models behave, how they diverge from human logic, and how to engineer more adaptive, transparent AI agents. She challenged participants to consider not just what AI can do, but how it reasons – and how we, as leaders, should interpret its decisions.

“Physics, we have some sense of what it’s doing. Biology is much harder. Large language models are harder still. I’m trying to find the rules that govern them.”

In a room full of executives, Dr. Pawlosky didn’t just present slides – she sparked an experiment. Together, we co-developed a prompt to test her team’s latest multi-agent model, live. As the model generated and iterated in the background, she encouraged us to pit it against popular tools like ChatGPT and Copilot, using a real-world challenge related to legal sentencing frameworks in Switzerland. Her goal? Not to impress, but to reveal where these systems shine – and where they quietly fail.

“The model I’ll run won’t give you just one answer. It’ll produce hundreds of competing ideas while I talk. Then we’ll look at the results, and I’ll show you what went right – and what went wrong.”

True to form, she was transparent about AI’s limitations and unpredictability. She discussed how human intent often gets lost in translation when passed through large models, and how researchers like herself design boundary conditions – a kind of guardrail – to help guide AI decision-making.

Dr. Pawlosky’s academic rigor was matched by her engaging, almost playful delivery. She challenged the group to define terms, weigh ethical trade-offs, and even consider using AI to model marketing pipelines or simulate legal defenses. Her ability to toggle between technical depth and real-world application made the session both accessible and thought-provoking.

What stood out most, however, was her humility and openness. She shared failures, admitted uncertainties, and reminded us that even inside Google’s AI labs, many questions remain unanswered. But that, she argued, is what makes the field so exhilarating – and so important to approach thoughtfully.

“It’s not about human reasoning – it’s model reasoning. The difference matters. And it’s our job to understand where that gap lies.”

We’re deeply grateful to Dr. Pawlosky for joining us and giving our EMBA participants a rare behind-the-scenes look at the science – and soul – of modern AI. Her visit was not just a learning opportunity, but a vivid example of how deep research, open dialogue, and human insight can come together to shape better questions – and better leaders.