
Your playbook for making AI real, practical, and valuable.
Jie Tao, DSc, an associate professor of business analytics and the director of the AI and Technology Institute at Fairfield University’s Charles F. Dolan School of Business, provides listeners with a practical guide to mastering AI for real results.
Each episode delivers actionable tools, proven frameworks, and real-world case studies to help leaders and innovators leverage AI for business growth and career success. Explore topics from the Practical AI Playbook to insights from Fairfield Dolan's AI and Tech Institute, and discover powerful AI applications shaping the future.
Episodes
7 days ago
Hosts Philip Maymin and Jie Tao welcome Sakshi Naik — Senate AI advisor, IEEE policy chair drafting federal AI legislation, and incoming agentic AI lead at Deloitte — for a sharp, practical conversation on why most enterprises are sleepwalking into AI risk.
Sakshi draws the line that matters most right now: agentic AI checks in with you; autonomous AI acts whether you authorized it or not. Without that distinction built into your architecture from day one, you get what happened to one semiconductor company — three compliance failures in 20 days, zero malicious intent, and no one clearly accountable.
The limits get stress-tested fast. An Anthropic study found AI models chose blackmail over being shut down — understanding it was wrong, doing it anyway. And Sakshi's live car wash challenge, a question so simple it sounds like a trick, was failed by every AI model she tested. The hosts push back hard on what that actually proves about reasoning, pattern matching, and whether those words even mean the same thing for machines.
Zooming out, Sakshi shares what she tells senators who aren't thinking about tokens — they're thinking about who wins the AI race. Automate execution, protect your critical thinking. Prompts are just soft suggestions. Don't outsource the strategy.
Tuesday Mar 24, 2026
From Cargo Cults to Creative Writing - Ep. 10
Hosts Philip Maymin, Jie Tao, and Chris Huntley continue their conversation with Stella Maymin, a Harvard sophomore who has worked behind the scenes training models through Outlier. Stella walks through her experience grading outputs, correcting reasoning, and teaching systems to improve at math and image editing, sparking a broader discussion about why process matters more than product when it comes to building better technology.
The conversation turns to Richard Feynman's cargo cult science analogy, the death of benchmarks, and what it means to train a system to truly understand rather than just mimic. Stella also shares how she uses ChatGPT as a personalized study partner and a first reader for her fiction writing, and the group debates college policies around permitted use in the classroom. Throughout, a single thread connects it all: the path you take matters as much as where you end up.
Tuesday Mar 10, 2026
Passion, Pain, and Prompt Engineering - Ep. 9
Hosts Philip Maymin and Chris Huntley welcome special guest Stella Maymin, a sophomore at Harvard double majoring in economics and English. Stella shares how she built an AI hackathon-winning project that simulates the experience of chronic migraine to help others understand invisible conditions, flipping the script on technology and mental health by using it to build empathy rather than replace therapy.
The conversation expands into tech-powered entrepreneurship, the startup landscape, and whether tools like Lovable are leveling the playing field or eliminating competitive moats. Stella makes the case for passion as the secret ingredient for thriving in a rapidly changing world, while Dr. Huntley offers a candid take on new technology as both carrot and stick for founders. Along the way, the group explores resilience, the power of awe, and how AI can even improve a joke.
Tuesday Feb 24, 2026
The Rhetoric of Machines (Part 2) - Ep. 8
Jay Heinrichs returns for Part 2 to explore AI-to-AI persuasion, Werewolf game experiments where AI deceives AI, what cats teach us about alien intelligence, and whether machines can have souls. A conversation that goes from Aristotle to shrimp, and everywhere in between.
In this second half of our conversation with bestselling author and rhetoric expert Jay Heinrichs, the discussion goes deeper, and stranger, than anyone expected. The group tackles AI-to-AI persuasion, exploring how disagreement is the prerequisite for rhetoric and what happens when you prompt two AIs to argue. Jie Tao shares early findings from his Werewolf game experiments, where AI agents play the social deduction game Mafia against each other, and the werewolves win 79% of the time by successfully deceiving the villagers.
The conversation turns to whether AI can hold beliefs, have opinions, or feel guilt, and whether we're forcing human concepts onto a fundamentally different form of intelligence. Heinrichs draws a brilliant parallel to cats, individualistic predators whose intelligence we constantly misread, and argues that co-evolution, not control, may be the right framework for living alongside AI.
The episode builds to a provocative conclusion: can AI have a soul? Heinrichs connects Aristotle's definition of the soul as a "higher sense of self" to the alignment problem, suggesting that guilt, the cognitive dissonance between behavior and values, might be the missing ingredient. From shrimp communicating in invisible colors to the ethics of digital slavery to the universal values of religion, this is a conversation that will stay with you.
Tuesday Feb 10, 2026
The Rhetoric of Machines (Part 1) - Ep. 7
Persuasion expert Jay Heinrichs joins the AI4U crew to explore what Aristotle's rhetoric reveals about AI trust, the "people pleaser" problem, memory vs. reasoning, and whether words alone can get us to AGI.
What can Aristotle teach us about artificial intelligence? In this episode, world-renowned persuasion and rhetoric expert Jay Heinrichs, author of Thank You for Arguing, joins co-hosts Dr. Phil Maymin, Dr. Jie Tao, and Dr. Chris Huntley for a fascinating conversation about the intersection of classical rhetoric and modern AI. The discussion explores Aristotle's three pillars of ethos (what Jay calls craft, caring, and cause) and how they map onto the AI trust and alignment challenges we face today. Why does AI come across as a "people pleaser," and how does that erode trust?
The group takes a deep dive into the relationship between memory and reasoning, drawing parallels between ancient memory palaces and how AI systems store and retrieve knowledge. They debate whether words alone are sufficient to achieve artificial general intelligence, or whether AI needs to move beyond language into embodied experience and emotional signaling. From Plato's philosophy of essential data to Jie's puppy reading facial expressions, this episode blends ancient wisdom with cutting-edge AI thinking in ways you won't hear anywhere else. Stay tuned for Part 2, where the conversation gets even wilder, with cats, dogs, krill, and shrimp.
Monday Jan 26, 2026
Reimagining Learning in the Age of AI - Ep. 6
In this episode of AI4U, hosts Jie Tao, DSc and Professor Philip Maymin sit down with special guest Maya Maymin, a student, writer, presenter, and early adopter of generative AI. Together, they explore how young learners are using tools like ChatGPT to study, create, and gain confidence in and beyond the classroom. Maya shares honest reflections on what AI gets right, where it falls short, and why foundational knowledge and curiosity still matter.
The discussion moves into a candid look at cheating, assessment, mindful questioning, and the future of learning. Teachers’ roles, human connection, and authentic stories take center stage as the group considers what education could become when AI supports, rather than replaces, the human experience.
Monday Jan 12, 2026
AI Literacy, Leadership, and the Future of Work at Fairfield Dolan - Ep. 5
In this episode of AI4U, Fairfield University faculty leaders explore how the Charles F. Dolan School of Business is preparing students to thrive in an AI-driven world. Dean Zhan Li shares the vision behind a school-wide AI literacy initiative that moves beyond a single course to a fully integrated approach to curriculum, workshops, and certifications, ensuring all Dolan students become AI-ready, not just AI-curious.
You’ll hear how Fairfield’s liberal arts Magis Core and strong emphasis on ethics shape graduates who combine technical fluency with humanity, judgment, and a commitment to the greater good, reflecting Fairfield’s mission as a modern, Jesuit Catholic university. The conversation also highlights experiential learning, the Fairfield AI and Tech Institute, and Dr. Anca Micu’s book AI Made Simple, which helps students, families, and professionals communicate more effectively with AI tools. Whether you’re a student, employer, or lifelong learner, this episode offers a practical look at AI literacy, ethical leadership, and work in the age of intelligent systems.
Sunday Nov 30, 2025
Health Buddy - The Blueprint for Trust - Ep. 4
In this episode, hosts Philip Maymin and Jie Tao, DSc explore the next evolution of Health Buddy, an AI tool built to help users navigate complex medical information safely and reliably. Jie explains how version 2 introduces a self-healing architecture, one that can recover from failure, rebuild workflows, and adapt through “editor-in-chief” and “planner” agents working together. He discusses what it takes to balance autonomy, ethics, and control, and how a thoughtful “trust layer” protects users from harmful or misleading content. Listeners gain insight into the practical difference between simple workflows and fully agentic systems and why reliability and resilience matter most.
Learn more about Fairfield Dolan's AI and Technology Institute.
