The Reasoning Show
AI moves fast. Thinking clearly matters more.
The Reasoning Show cuts through the hype to explore how the smartest people in enterprise AI actually make decisions — the strategy, the tradeoffs, and the hard lessons no press release mentions.
Every week, hosts Aaron Delp and Brian Gracely sit down with the founders building the tools, investors funding the shift, and operators running AI in the real world. Not hype. Not panic. Just clear-headed conversations with people who have to make actual decisions.
Because the AI revolution isn't just happening. It's being reasoned through.
New shows every Wednesday and Sunday.
Topics: Enterprise AI strategy · LLMs in production · AI leadership · Agentic AI · Digital Sovereignty · Machine Learning · AI startups · Cloud Computing
Understanding RAG Systems
SUMMARY: The RAG (Retrieval Augmented Generation) pattern is one of the most frequently used approaches for augmenting LLMs with context-specific information. Let’s explore RAG; a minimal retrieve-then-generate sketch follows the show notes below.
GUEST: Roie Schwaber-Cohen, Head of Developer Relations at Pinecone
SHOW: 1018
SHOW TRANSCRIPT: The Reasoning Show #1018 Transcript
SHOW VIDEO: https://youtu.be/-kZZEMR341Q
SHOW SPONSORS:
- Nasuni - Activate your data for AI and request a demo
- ShareGate - ShareGate Protect. Microsoft 365 Governance, we got this!
SHOW NOTES:
Topic 1 - Welcome to the show. Tell us a little bit about your background and what you focus on these days at Pinecone.
Topic 2 - Let’s begin by talking about RAG systems. What are they? Why do companies choose to use them? What benefits do they provide in AI systems?
Topic 3 - At a high level, RAG sounds straightforward—retrieve relevant context, generate an answer. But in practice, where does it break first as systems scale?
Topic 4 - I’ve heard that RAG systems can return answers that are technically correct but fundamentally wrong. What’s a concrete example of that happening in production—and why does it slip past most teams?
Topic 5 - In traditional systems, we assume there’s a single source of truth. But in enterprise environments, ‘truth’ is often versioned, contextual, and conflicting. How should teams rethink ‘truth’ when building AI systems?
Topic 6 - A lot of teams assume their knowledge base is ‘good enough’ for RAG. What do they usually underestimate about the messiness of real enterprise data?
Topic 7 - There’s a growing narrative that better reasoning models can compensate for weaker retrieval. From what you’ve seen, where does that idea fall apart?
Topic 8 - If correctness depends on things like timing, policy scope, or configuration, how should teams design systems that understand context—not just content?
Topic 9 - Looking ahead, what replaces today’s RAG architectures? What patterns are emerging among teams that are actually getting this right?
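For listeners who want a concrete picture of the retrieve-then-generate flow raised in Topic 3, here is a minimal sketch of the RAG pattern. It is a hedged illustration, not Pinecone’s API and not necessarily the approach discussed in the episode: embed_text, search_index, and generate_answer are hypothetical placeholders marking where an embedding model, a vector database, and an LLM would plug in.

```python
# Minimal sketch of the Retrieval Augmented Generation (RAG) pattern.
# NOTE: embed_text, search_index, and generate_answer are hypothetical
# placeholders for an embedding model, a vector-database query, and an
# LLM call -- they do not refer to any specific vendor's API.

from typing import List


def embed_text(text: str) -> List[float]:
    """Placeholder: convert text into a dense vector with an embedding model."""
    raise NotImplementedError("Wire up your embedding model here.")


def search_index(query_vector: List[float], top_k: int = 5) -> List[str]:
    """Placeholder: return the top_k most similar document chunks from a vector store."""
    raise NotImplementedError("Wire up your vector database here.")


def generate_answer(prompt: str) -> str:
    """Placeholder: call an LLM with the augmented prompt."""
    raise NotImplementedError("Wire up your LLM here.")


def answer_with_rag(question: str) -> str:
    # 1. Retrieve: embed the question and fetch the most relevant chunks.
    query_vector = embed_text(question)
    context_chunks = search_index(query_vector, top_k=5)

    # 2. Augment: put the retrieved context into the prompt.
    context = "\n\n".join(context_chunks)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

    # 3. Generate: let the LLM answer, grounded in the retrieved context.
    return generate_answer(prompt)
```

The episode digs into where this simple loop breaks down in practice: stale or conflicting documents in the index, retrieval that returns plausible but contextually wrong chunks, and prompts that give the model no way to judge which version of "truth" applies.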
FEEDBACK?
- Email: show @ reasoning dot show
- Bluesky: @reasoningshow.bsky.social
- Twitter/X: @ReasoningShow
- Instagram: @reasoningshow
- TikTok: @reasoningshow
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
Software Defined Talk
Software Defined Talk LLC
Dithering Preview
Ben Thompson and John Gruber
Everyday AI Podcast – An AI and ChatGPT Podcast
Everyday AI
Prof G Markets
Vox Media Podcast Network
Acquired
Ben Gilbert and David Rosenthal
Decoder with Nilay Patel
The Verge
theCUBE
SiliconANGLE Media