<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" 
     xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd"
     xmlns:content="http://purl.org/rss/1.0/modules/content/"
     xmlns:atom="http://www.w3.org/2005/Atom"
     xmlns:podcast="https://podcastindex.org/namespace/1.0">
  <channel>
    <title>Token Intelligence</title>
    <link>https://www.tokenintelligenceshow.com</link>
    <description>Two friends break down AI, technology, and entrepreneurship through mental models, real-world experience and the pursuit of a life well-lived.</description>
    <language>en</language>
    <copyright>Copyright 2026 Token Intelligence</copyright>
    <lastBuildDate>Fri, 17 Apr 2026 14:51:23 GMT</lastBuildDate>
    <atom:link href="https://www.tokenintelligenceshow.com/rss" rel="self" type="application/rss+xml"/>
    <image>
      <url>https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg</url>
      <title>Token Intelligence</title>
      <link>https://www.tokenintelligenceshow.com</link>
    </image>
    <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
    <itunes:author>Eric Dodds &amp; John Wessel</itunes:author>
    <itunes:owner>
      <itunes:name>Eric Dodds &amp; John Wessel</itunes:name>
      <itunes:email>eric@tokenintelligenceshow.com</itunes:email>
    </itunes:owner>
    <itunes:explicit>false</itunes:explicit>
    <itunes:type>episodic</itunes:type>
    <itunes:category text="Technology"/>
    <itunes:category text="Business">
      <itunes:category text="Entrepreneurship"/>
    </itunes:category>
    <podcast:locked>no</podcast:locked>
    <podcast:guid>https://www.tokenintelligenceshow.com/podcast-guid</podcast:guid>

    <item>
      <title>If Notion beats HubSpot, will they still lose to Claude?</title>
      <link>https://www.tokenintelligenceshow.com/episode/if-notion-beats-hubspot-will-they-still-lose-to-claude</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/if-notion-beats-hubspot-will-they-still-lose-to-claude</guid>
      <description><![CDATA[<p>Notion could take out HubSpot, but the frontier providers are fighting a bigger war over who owns the interface, the context, and eventually the whole stack.</p>

<h2>Summary</h2>
<p>Eric opens by restating the case for Notion as a serious long-term threat to HubSpot: a database-first product with connected apps, strong AI, and enough cash to close obvious gaps fast.</p>
<p>John then challenges that thesis after watching a real Notion AI workflow struggle under a more ambitious content-planning use case, which leads to a deeper question about architecture: whether markdown-native systems are better suited to AI, and how much re-engineering incumbents may still need.</p>
<p>From there, the episode widens into a broader prediction about software itself: fewer standalone tools, more orchestration, heavier bundling, and a real possibility that the ultimate winner is not the best app suite at all, but the model layer that becomes the place people naturally work.</p>
<h2>Key takeaways</h2>
<p><strong>Connected context is the real wedge:</strong> Notion’s shot at HubSpot is less about matching every feature and more about owning the information that makes agents feel magical.</p>
<p><strong>Architecture may become strategy:</strong> If AI works best on simpler and more file-like systems, some incumbents may need painful re-engineering before they can fully capitalize on it.</p>
<p><strong>Simpler interfaces may win:</strong> As models improve, many businesses may prefer chat, docs, search, and spreadsheets over ever-larger stacks of specialized software.</p>
<p><strong>Orchestration is the new battleground:</strong> Project management tools and AI workflow platforms are starting to converge around coordinating people, systems, and agents.</p>
<p><strong>Bundling is back in force:</strong> AI makes it cheaper to expand across categories, which could turn today’s focused tools into tomorrow’s full-stack business suites.</p>
<p><strong>Frontier models can eat the app layer:</strong> Notion may pressure HubSpot, but Anthropic and OpenAI could pressure Notion by becoming the default place where work happens.</p>
<h2>Notable mentions and links</h2>
<p>The article <em>Why OpenAI Should Build Slack</em> is used as an example of how AI is creating counterintuitive competition that makes once-strange product moves logical.</p>
<p>Obsidian, a markdown-based note-taking app, matters because its markdown-on-disk architecture may be more naturally compatible with current AI systems than Notion’s nested page model.</p>
<p>Postgres and Notion’s past sharding crisis come up as a reminder that architecture choices can become company-level constraints when growth and new workloads collide.</p>
<p>Notion AI is described as promising but uneven in aggressive one-shot workflows where users want it to generate and structure a full month of content in one pass.</p>
<p>Vercel enters the discussion because John’s enterprise use of Notion through MCP and Claude shows how AI can turn a workspace into a searchable database rather than a primary interface.</p>
<p>Claude artifacts are cited as an early hint that a model-native document experience could expand beyond chat and start absorbing traditional software surfaces.</p>]]></description>
      <itunes:summary>Notion could take out HubSpot, but the frontier providers are fighting a bigger war over who owns the interface, the context, and eventually the whole stack.</itunes:summary>
      <pubDate>Sat, 11 Apr 2026 13:00:00 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/wYVuc5B3ng1LhvUVcksMib.mp3" type="audio/mpeg" length="15490212"/>
      <itunes:duration>00:32:16</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>15</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/if-notion-beats-hubspot-will-they-still-lose-to-claude#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>AI burnout: the hardest parts of your job all day</title>
      <link>https://www.tokenintelligenceshow.com/episode/ai-burnout-the-hardest-parts-of-your-job-all-day</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/ai-burnout-the-hardest-parts-of-your-job-all-day</guid>
      <description><![CDATA[<p>AI is sold as a productivity miracle drug, and many have tasted the power. But in private conversations, they talk about redlining: higher expectations, more context switching, and smaller teams.</p>

<h2>Summary</h2>
<p>Eric opens with a report from a longtime founder-investor friend returning from Silicon Valley: “AI burnout is real.” From there, he and John split the issue into two pressures at once: rising expectations per worker, and the constant workflow thrash of keeping up with changing models, tools, and methods.</p>
<p>They then get specific about why AI productivity can feel worse before it feels better. Faster execution means more projects in parallel, more indeterminate waiting loops, and more time spent on architecture, judgment, and review, which can turn the hardest part of the job into the whole job.</p>
<p>By the end, the conversation zooms out from fatigue to identity. If AI lets two people do the work of 20, the risk is not just displacement for the 18, but a harsher kind of work for the two who remain.</p>
<h2>Key takeaways</h2>
<p><strong>More leverage means higher expectations</strong>: AI efficiency often becomes a new baseline for output rather than a source of extra slack.</p>
<p><strong>Context switching is the hidden cost</strong>: Faster tasks create more parallel work, more waiting loops, and a harder-to-plan day.</p>
<p><strong>Automation concentrates work on the hard stuff</strong>: As AI absorbs implementation, people spend more of their time on judgment, architecture, and review.</p>
<p><strong>Smaller teams can feel heavier</strong>: Replacing 10 people with 2 does not remove ownership, it compresses it onto fewer humans.</p>
<p><strong>Burnout is both personal and market-wide</strong>: The pressure comes from daily workflow thrash and from the fear of falling behind in a shifting labor market.</p>
<p><strong>The identity risk may outlast the productivity gain</strong>: For knowledge workers, the deepest disruption may be losing the sense of who they are at work.</p>
<h2>Notable mentions and links</h2>
<p>Vercel is Eric’s day-to-day reference point for how AI changes expectations inside a real software company, grounding the conversation in lived experience rather than abstraction.</p>
<p>Markdown is mentioned as a surprisingly durable AI workflow format, showing how newer tools often push people back toward older, simpler conventions.</p>
<p>Sahaj Garg, co-founder and CTO of Wispr, is quoted at length because the framing in his essay on cognitive labor displacement shifts the conversation from efficiency and headcount to identity, status, and despair.</p>
<p>Wispr Flow is the speech-to-text company Garg co-founded, and the essay becomes the bridge from personal burnout to the wider social consequences of AI adoption.</p>
      <itunes:summary>AI is sold as a productivity miracle drug, and many have tasted the power. But in private conversations, they talk about redlining: higher expectations, more context switching, and smaller teams.</itunes:summary>
      <pubDate>Sat, 04 Apr 2026 10:53:49 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/ReG8cor2CRZQPja7mrn7FU.mp3" type="audio/mpeg" length="29847116"/>
      <itunes:duration>00:39:10</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>14</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/ai-burnout-the-hardest-parts-of-your-job-all-day#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Why the longest-running tech CEO still fears failure</title>
      <link>https://www.tokenintelligenceshow.com/episode/why-the-longest-running-tech-ceo-still-fears-failure</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/why-the-longest-running-tech-ceo-still-fears-failure</guid>
      <description><![CDATA[<p>Jensen Huang built NVIDIA into a trillion-dollar AI giant, but still works like survival isn’t guaranteed. Eric and John unpack fear, humility, market timing, and ingredients for enduring leadership.</p>

<h2>Summary</h2>
<p>Eric and John use Jensen Huang’s Joe Rogan interview to explore a kind of leadership that feels rarer than vision-talk or AI bravado: a founder who still sounds driven more by the fear of failure than the glow of success. What follows is part NVIDIA origin story, part meditation on timing, likability, humility, and the surprising honesty of someone who has won big without ever acting like the outcome was guaranteed.</p>
<p>Along the way, they revisit NVIDIA’s near-death moments with Sega and an emulator gamble, connect Huang’s immigrant story to his emotional posture, share personal stories about giving money back to investors, and land on a broader takeaway: the best leaders may be the ones least blinded by the illusion of control.</p>
<h2>Key takeaways</h2>
<p><strong>Fear of failure is a real engine</strong>: Huang comes across as someone driven less by the upside of winning than by the responsibility of not failing, and that honesty gives his leadership more weight.</p>
<p><strong>Likability matters more than people admit</strong>: The Sega story lands because trust and personal credibility, not just technical merit, helped keep NVIDIA alive.</p>
<p><strong>Timing matters more than strategy</strong>: A lot of success looks cleaner in hindsight than it felt in the moment, and the episode keeps returning to how much depends on market windows, luck, and circumstance.</p>
<p><strong>Good AI leadership makes room for fear:</strong> Huang’s answers stand out because he treats people’s concerns about AI as understandable rather than naive or beneath him.</p>
<p><strong>Humility makes conviction believable:</strong> He talks like someone who has survived bad bets, close calls, and uncertainty, which makes his confidence feel earned instead of performative.</p>
<p><strong>Survival is a better frame than inevitability</strong>: One of the deepest themes of the episode is that enduring leaders never fully assume they’ve arrived, and that mindset may be part of why they last.</p>
<h2>Notable mentions and links</h2>
<p>Huang’s Joe Rogan interview mattered to John because he had heard Huang quoted for years but had never heard him speak at length in a long-form setting.</p>
<p>The book Creativity, Inc. by Ed Catmull enters the episode as a parallel survival story, especially the famous Toy Story 2 anecdote where Pixar nearly lost the movie to an accidental deletion.</p>
<p>Oneida Baptist Institute in Kentucky becomes one of the most memorable details in Huang’s backstory, because the hosts can’t get over what it must have meant for a nine-year-old immigrant to land there.</p>]]></description>
      <itunes:summary>Jensen Huang built NVIDIA into a trillion-dollar AI giant, but still works like survival isn’t guaranteed. Eric and John unpack fear, humility, market timing, and ingredients for enduring leadership.</itunes:summary>
      <pubDate>Sat, 28 Mar 2026 13:09:33 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/0fb1174c-7a7f-4f88-aea1-36ca90b9eab0.mp3" type="audio/mpeg" length="19673774"/>
      <itunes:duration>00:40:59</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>13</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/why-the-longest-running-tech-ceo-still-fears-failure#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Can the way you talk to AI change you?</title>
      <link>https://www.tokenintelligenceshow.com/episode/can-the-way-you-talk-to-ai-change-you</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/can-the-way-you-talk-to-ai-change-you</guid>
      <description><![CDATA[<p>What does talking to AI all day do to the way we think, relate, and communicate? Eric and John explore kids, companionship, human dignity, and why the line between person and machine matters.</p>

<h2>Summary</h2>
<p>Eric and John explore a new habit that already feels normal: talking to AI constantly, casually, and sometimes a little too personally.</p>
<p>As they compare their own work habits, from treating Claude like a coworker to noticing how easily chat becomes pseudo-relationship, they land on a deeper concern: not just over-humanizing machines, but losing sight of what makes human relationships distinct, difficult, and valuable.</p>
<h2>Key takeaways</h2>
<p><strong>Watch your language with AI</strong>: repeated “coworker” and “we” framing can shape your instincts even when you know it’s a machine.</p>
<p><strong>Separate output quality from self-formation</strong>: a prompt style may work, but still train you in unhealthy ways.</p>
<p><strong>Teach kids the category line early</strong>: AI can sound alive, helpful, and familiar without being human.</p>
<p><strong>Resist the path of least resistance</strong>: AI is designed to be easier to deal with than people, and that ease can subtly weaken your appetite for real relationships.</p>
<p><strong>Keep the distinction clear</strong>: AI can help with thinking, drafting, and iteration, but it cannot reciprocate dignity, sacrifice, or love.</p>
<h2>Notable mentions and links</h2>
<p>John describes a recent experiment inspired by the emerging idea of a “zero-person company”, where AI agents can take on roles like CEO, manager, and operator inside a simulated business workflow.</p>
<p>Anthropic’s Claude Cowork is mentioned as evidence that the product category itself is reinforcing the coworker metaphor, not just individual users, with Anthropic explicitly framing it as a way to hand off multi-step work to Claude.</p>
<p>A Hacker News post titled “Shall I implement it? No”, which links to a GitHub Gist screenshot, is used to underline the tension: the interface feels conversational and clever, while the underlying system can still fail in ways that are unmistakably machine-like.</p>
<p>Jensen Huang’s conversation on The Joe Rogan Experience #2422 enters the discussion as Eric and John zoom out from prompting habits to first-principles questions about sentience, consciousness, and whether AI can actually have experience at all.</p>
<p>C.S. Lewis’s line about never meeting “a mere mortal,” from <em>The Weight of Glory</em>, becomes a shorthand for their conviction that human beings belong in a fundamentally different category from machines.</p>]]></description>
      <itunes:summary>What does talking to AI all day do to the way we think, relate, and communicate? Eric and John explore kids, companionship, human dignity, and why the line between person and machine matters.</itunes:summary>
      <pubDate>Sat, 21 Mar 2026 13:00:00 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/Q3uyLhmHndK2MusL71tTSX.mp3" type="audio/mpeg" length="18659178"/>
      <itunes:duration>00:38:52</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>12</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/can-the-way-you-talk-to-ai-change-you#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Why can&apos;t we find a metaphor for AI?</title>
      <link>https://www.tokenintelligenceshow.com/episode/why-cant-we-find-a-metaphor-for-ai</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/why-cant-we-find-a-metaphor-for-ai</guid>
      <description><![CDATA[<p>Stochastic parrot. Intern. Exoskeleton. Every AI metaphor shapes what you build and what you ignore, but the deeper question is why we can’t find a metaphor that fits.</p>

<h2>Summary</h2>
<p>Eric and John trace five years of AI metaphors: stochastic parrot, blurry JPEG, intern, calculator for words, autonomous agent, digital employee, exoskeleton. Every metaphor suffered from a form of near-sightedness, capturing what the technology felt like in the moment, but missing what it was becoming.</p>
<p>Then they ask the harder question: what happens when a technology is so transformative that no metaphor holds? They pull in horseless carriages, Gilded Age empires, and biblical prophecy to argue that the best frame for AI is no frame at all.</p>
<h2>Key takeaways</h2>
<p><strong>Your metaphor is your ceiling</strong>: Call it a parrot and you'll use it cautiously. Call it a calculator and you'll use it practically. Your mental model for AI shapes what you believe is possible.</p>
<p><strong>Count metaphors per year, not features:</strong> The fact that we've burned through seven frames in five years is a clear indicator that AI will be more transformative than most people can imagine.</p>
<p><strong>Expect the best metaphors to break</strong>: When a technology is truly transformative, like rail, electricity, and the internet, it stops being described by analogy and starts being described on its own terms.</p>
<p><strong>Watch the agent economy, not just individual agents:</strong> The frontier isn't AI serving humans, it's AI systems interacting with each other, buying, selling, and bidding, which raises hard questions about trust and infrastructure.</p>
<p><strong>Use metaphors as a design check</strong>: Unlike replacement metaphors, the exoskeleton recenters the human. It's a useful test: does this tool amplify skill, or does it just hide the absence of it?</p>
<p><strong>Study the Gilded Age parallels</strong>: Rail, oil, steel, and banking each started as a single focused industry and ended up reshaping everything around them. AI is following the same playbook.</p>
<h2>Notable mentions and links</h2>
<p>The book of Ezekiel, Chapter 1, contains a vision of "a wheel within a wheel" — a biblical example of reaching for metaphor when direct language fails to capture something genuinely new.</p>
<p>"Stochastic parrot" was coined in a 2021 academic paper by Emily Bender, Timnit Gebru, and others, framing large language models as systems that statistically mimic text without real understanding.</p>
<p>Ted Chiang's 2023 New Yorker essay "ChatGPT Is a Blurry JPEG of the Web" compared language models to lossy compression — you get most of the information, but you'll never get the exact original back.</p>
<p>The "intern" metaphor (2023), popularized by Wharton's Ethan Mollick, communicated that AI output needs to be checked, reviewed, and supervised — useful framing during the era of hallucination anxiety.</p>
<p>Simon Willison's "calculator for words" (2023) reframed language models as tools that manipulate language the way calculators manipulate numbers: powerful, but not a search engine replacement.</p>
<p>The "autonomous agent" metaphor (2024) emerged alongside real-world deployments: Klarna announced its AI assistant was doing the work of 700 customer service agents, and Eric and John built their own SEO content agent using Google Sheets and the ChatGPT API.</p>
<p>The "exoskeleton" metaphor (2025–2026) recenters the human: AI augments what you can already do rather than replacing you, but it's only as good as the operator wearing it.</p>
<p>The TI-83 Plus Silver Edition comes up as a nostalgia touchpoint — John and Eric bond over graphing calculators as their first experience of a machine doing complex operations they couldn't easily do by hand.</p>
<p>Polymarket is referenced as a platform where autonomous agents could participate in prediction markets, illustrating the agent-to-agent commerce concept.</p>]]></description>
      <itunes:summary>Stochastic parrot. Intern. Exoskeleton. Every AI metaphor shapes what you build and what you ignore, but the deeper question is why we can’t find a metaphor that fits.</itunes:summary>
      <pubDate>Sat, 14 Mar 2026 11:37:00 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/f5374cc9-6456-443a-a281-54925c800d7c.mp3" type="audio/mpeg" length="24212602"/>
      <itunes:duration>00:50:27</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>11</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/why-cant-we-find-a-metaphor-for-ai#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>The new superpower is old: speed, craft, and AI</title>
      <link>https://www.tokenintelligenceshow.com/episode/the-new-superpower-is-old-speed-craft-and-ai</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/the-new-superpower-is-old-speed-craft-and-ai</guid>
      <description><![CDATA[<p>AI makes speed cheaper, but craft still sets the ceiling. Eric and John unpack a timeless superpower: being fast and good at your work, then explore how to develop it without burning out.</p>

<h2>Summary</h2>
<p>Eric and John unpack a deceptively simple superpower: being both fast and good at your work. They argue AI raises the floor on speed but disproportionately rewards people with craft, judgment, and cross-disciplinary basics.</p>
<p>Then they ask the harder question: how to compound that advantage without burning out, chasing the wrong incentives, or getting trapped in job roles you don't actually want.</p>
<h2>Key takeaways</h2>
<p><strong>Separate the superpower levers</strong>: Treat speed and quality as distinct variables, then learn when the business context calls for more of one or the other.</p>
<p><strong>Create margin on purpose</strong>: Even 10–20% of reclaimed time, reinvested in better workflows and deeper skill, can compound over years.</p>
<p><strong>Use AI as an amplifier, not a crutch</strong>: Let it strengthen real craft, not conceal the absence of it.</p>
<p><strong>Master the adjacent basics</strong>: Business, communication, product sense, data, finance, and history make fast judgment more reliable.</p>
<p><strong>Protect focus without disappearing</strong>: Deep work matters, but it has to coexist with the responsiveness your role actually requires.</p>
<p><strong>Put guardrails on acceleration</strong>: The same systems that make you more effective can also make it harder to stop.</p>
<h2>Notable mentions and links</h2>
<p>C.S. Lewis's <em>The Inner Ring</em> returns as the framing text, especially the idea of the "sound craftsman" who loves the work more than the status around it.</p>
<p>John D. Rockefeller, via John's Gilded Age reading, is used as a historical example of someone who could scan ledgers and instantly spot a single error.</p>
<p>ElevenLabs is used as a concrete AI workflow example, letting John capture ideas while driving, get clean transcription, and compress podcast prep into minutes instead of hours.</p>
<p>The book <em>It's All Politics</em> is brought in to argue that office politics is real, but best treated as a means to support craft rather than replace it.</p>
<p>Peter Drucker’s line that marketing and innovation “produce results” while “all the rest are costs” frames why finance, sales, messaging, and product understanding matter even when your core role is technical.</p>
<p>The movie <em>Limitless</em> becomes the metaphor for AI productivity, especially the temptation to normalize constant acceleration until it starts to feel like withdrawal when the tools are unavailable.</p>]]></description>
      <itunes:summary>AI makes speed cheaper, but craft still sets the ceiling. Eric and John unpack a timeless superpower: being fast and good at your work, then explore how to develop it without burning out.</itunes:summary>
      <pubDate>Sat, 07 Mar 2026 16:47:00 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/8ba8b163-9ef0-4bee-8a1b-ae32ff8d7f07.mp3" type="audio/mpeg" length="19804595"/>
      <itunes:duration>00:41:15</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>10</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/the-new-superpower-is-old-speed-craft-and-ai#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Is AI productivity as simple as using more tokens?</title>
      <link>https://www.tokenintelligenceshow.com/episode/is-ai-productivity-as-simple-as-using-more-tokens</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/is-ai-productivity-as-simple-as-using-more-tokens</guid>
      <description><![CDATA[<p>How does Peter Steinberger spend $20k/month on tokens, and why? Based on their own experiments, Eric and John explain why autonomous loops are the next productivity frontier for AI.</p>

<h2>Summary</h2>
<p>Eric and John trace the rapid evolution of AI productivity, from prompt engineering to context engineering to autonomous loops. They land on a surprising insight: the biggest unlock isn't how you talk to AI, it's how much you let it run without you. They use OpenClaw's heartbeat file, real token-cost math, and the concept of long-horizon planning to argue that the bottleneck is shifting from prompt engineering skill to outcome definition and, ultimately, to human adoption speed.</p>
<h2>Key takeaways</h2>
<p><strong>Prompt engineering is already productized</strong>: tools like v0’s prompt enhancer and Claude's plan mode have absorbed what used to be a manual skill.</p>
<p><strong>The real token spend comes from autonomy, not interaction</strong>: running multiple agents on loops is how you get to $15–20k/month, not by typing faster.</p>
<p><strong>Define the outcome, not the process</strong>: autonomous loops work best when the destination is crisp; vague goals still need human-in-the-loop collaboration.</p>
<p><strong>Long-horizon planning is the emerging skill</strong>: if AI compresses three years of execution into a quarter, you need to plan at a level of detail nobody's practiced.</p>
<p><strong>User adoption is the true ceiling</strong>: even if you can ship three years of product in three months, humans can't consume it that fast, so the bottleneck moves from build to adoption.</p>
<p><strong>Get (tokens) while the getting's good</strong>: $200/month subscriptions currently deliver thousands in real token value, but that arbitrage won't last forever.</p>
<h2>Notable mentions and links</h2>
<p>Agent skills are reusable capabilities for AI agents that you can manually install. They are mentioned as part of the progression from prompt engineering to context engineering and beyond.</p>
<p>Claude's plan mode, like similar features in other tools, is framed as a productized version of prompt engineering. Boris, the creator of Claude Code, explained on Lenny's Podcast that plan mode is just a prompt telling the model to plan and not write code.</p>
<p>The heartbeat file is an OpenClaw text file with instructions that a scheduled job reads every 30 minutes. The AI agent wakes up, executes tasks autonomously, then goes back to sleep.</p>
<p>Anthropic's agent experiments, like building a C compiler, are cited as examples where clearly defined outcomes make autonomous loops viable.</p>
]]></description>
      <itunes:summary>How does Peter Steinberger spend $20k/month on tokens, and why? Based on their own experiments, Eric and John explain why autonomous loops are the next productivity frontier for AI.</itunes:summary>
      <pubDate>Sat, 28 Feb 2026 11:04:24 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/a7c09c77-1860-4d96-a835-6f049f82e48f.mp3" type="audio/mpeg" length="15479136"/>
      <itunes:duration>00:32:15</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>9</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/is-ai-productivity-as-simple-as-using-more-tokens#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Navigating skill atrophy in the AI age</title>
      <link>https://www.tokenintelligenceshow.com/episode/navigating-skill-atrophy-in-the-ai-age</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/navigating-skill-atrophy-in-the-ai-age</guid>
      <description><![CDATA[<p>Eric stopped using AI for personal writing. Why? As you outsource to AI, you must decide which skills to keep sharp. Hand-coding is fading, but thinking, storytelling, and taste are timeless.</p>

<h2>Summary</h2>
<p>Eric and John unpack a quiet side-effect of delegating more work to AI: some skills do atrophy, but others get replaced by entirely new “muscles.” They use coding, Google-era “power searching,” and writing as case studies, then land on a sharper question: which fundamentals make you better at using AI (not just better at avoiding it)?</p>
<h2>Key takeaways</h2>
<p><strong>Treat skill atrophy as a design problem</strong>: decide what’s a “means-to-an-end” (fine to automate) vs. what’s foundational (worth training intentionally).</p>
<p><strong>Expect “power Googling” to fade, but replace it with source discernment:</strong> provenance matters more when AI artifacts are cheap and plentiful.</p>
<p><strong>Separate “writing” from “thinking” at your peril</strong>: if you outsource narrative and structure too early, you may lose the muscle that makes your AI output good.</p>
<p><strong>Use constraints strategically to keep core skills strong</strong>: paradoxically, working non-AI muscles makes you faster and more precise when you do use AI.</p>
<p><strong>Reframe the question from “what should I not outsource?” to “what makes me better at using AI?”</strong>: that’s where durable advantage will compound.</p>
<h2>Notable mentions and links</h2>
<p>An X post from Vercel’s CEO (“If you don’t use your body… If you don’t use your brain… what’s your plan?”) kicks off the episode’s core tension: AI makes things easier, but ease can come with cognitive tradeoffs.</p>
<p>Advanced Google search operators (site: constraints, filetype:pdf, and strategic quote usage for exact matches) are described as once-high-leverage skills that are fading in day-to-day use.</p>
<p>Eric’s example of hunting down a misattributed Mark Twain-style quote (“history doesn’t repeat itself…it rhymes”) illustrates where LLM search can stall and classic Google still wins.</p>
<p>Dragon’s decades-old transcription software is referenced as an early attempt at voice-to-text that’s now been eclipsed by modern AI transcription quality.</p>
<p>Whispr Flow’s pitch (speaking several times faster than typing) is used to explain why voice-first capture can be a legitimate productivity unlock.</p>]]></description>
      <itunes:summary>Eric stopped using AI for personal writing. Why? As you outsource to AI, you must decide which skills to keep sharp. Hand-coding is fading, but thinking, storytelling, and taste are timeless.</itunes:summary>
      <pubDate>Sat, 21 Feb 2026 13:36:49 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/4e9dd2f9-3a23-4154-a93d-afbe45abeac3.mp3" type="audio/mpeg" length="22497298"/>
      <itunes:duration>00:46:52</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>8</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/navigating-skill-atrophy-in-the-ai-age#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Will Notion dethrone HubSpot with AI?</title>
      <link>https://www.tokenintelligenceshow.com/episode/will-notion-dethrone-hubspot-with-ai</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/will-notion-dethrone-hubspot-with-ai</guid>
      <description><![CDATA[<p>AI is producing counter-intuitive competition. Notion’s connected ecosystem, architecture, and cash make it a threat…if the hyperscalers don’t eat the app layer.</p>

<h2>Summary</h2>
<p>AI is rewriting the playbook on competition: as software gets easier to build, the advantage shifts to products that own connected context across apps, which make agents feel truly magical. Eric and John argue that Notion’s app ecosystem, database-first architecture, and financial position could realistically challenge HubSpot, while the biggest looming risk for both is whether hyperscalers (Google, Amazon, Microsoft) bundle an “agent checkbox” product and eat the app layer altogether.</p>
<h2>Key Takeaways</h2>
<p>The old “start narrow” playbook still works, but cheap software + intense competition <strong>shifts the advantage toward products that own connected context, not just features</strong>.</p>
<p><strong>Notion’s best near-term wedge against HubSpot is agent UX</strong>: unified docs + databases + meeting notes + comms context can make automation feel genuinely magical.</p>
<p><strong>Expansion doesn’t require building everything from scratch</strong>: APIs (email, site generation) plus buy/build optionality can rapidly close surface-area gaps.</p>
<p><strong>The real product risk isn’t features, it’s form factor</strong>: if “agent-first storage” replaces human-first pages, incumbents may resist the necessary reinvention.</p>
<p><strong>Competitive risk comes from above and below</strong>: hyperscalers can bundle an agent checkbox product, while frontier model providers can squeeze margins and capture app layers.</p>
<p><strong>Knowledge hygiene is becoming automatable</strong>: if agents can keep workspaces searchable and deduped in the background, Notion’s “single system” story gets stronger, especially for SMB/mid-market companies.</p>
<h2>Notable mentions and links</h2>
<p>Notion bills itself as an “AI workspace,” but it has the potential to become a complete operating system for businesses.</p>
<p>HubSpot is a decades-old company that provides marketing, sales, and customer support software.</p>
<p>Linear created a wedge by focusing on a very narrow use case targeting frustrated Jira users.</p>
<p>Granola’s transcription and note-taking app is also a wedge product, beating out long-time incumbents like Otter.ai.</p>
      <itunes:summary>AI is producing counter-intuitive competition. Notion’s connected ecosystem, architecture, and cash make it a threat…if the hyperscalers don’t eat the app layer.</itunes:summary>
      <pubDate>Sat, 14 Feb 2026 12:30:06 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/de93a793-f899-4f2b-bece-f64a2e3aa90c.mp3" type="audio/mpeg" length="13942300"/>
      <itunes:duration>00:29:02</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>7</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/will-notion-dethrone-hubspot-with-ai#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>The map is not the territory</title>
      <link>https://www.tokenintelligenceshow.com/episode/the-map-is-not-the-territory</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/the-map-is-not-the-territory</guid>
      <description><![CDATA[<p>How do you navigate the pace of AI disruption? This mental model helps you decode AI hype, catch cartographer bias, and avoid being blinded by the past. </p>

<h2>Summary</h2>
<p>Eric and John break down the mental model "the map is not the territory" and pressure-test it against AI hype, career war stories, and the beloved platitude "perception is reality." They walk through Shane Parrish’s three principles: 1) reality is the ultimate update, 2) consider the cartographer, and 3) maps can influence territories, and show why each one matters when billions are flowing into AI and the territory is shifting under everyone's feet.</p>
<h2>Key takeaways</h2>
<p><strong>"Perception is reality" is a useful awareness tool and a terrible life principle.</strong> It helps you understand why people behave the way they do, but centering your life around it leads to incongruity and character problems.</p>
<p><strong>Reality will update your map whether you like it or not.</strong> AI skeptics who refuse to revise their position as capabilities improve are a real-time case study in map–territory mismatch. The faster the territory changes, the more dangerous a stale map becomes.</p>
<p><strong>The cartographer always has a bias.</strong> Whether it's a CRO whose commission rewards higher ACV or a frontier-model company that needs to justify billions in investment, the person drawing the map has incentives baked in. Always ask who made the map and what they gain from it.</p>
<p><strong>Maps shape the territory they claim to describe.</strong> The ROI-first map for AI is concentrating nearly all successful tooling around knowledge-worker productivity (especially coding), even though AI is capable of far more. That’s limiting what gets built and funded.</p>
<p><strong>Touch the territory.</strong> Financial models, performance reviews, product demos, and AI benchmarks are all maps. The risk you miss is always the one the map doesn't show, so get your hands on the actual thing before making big decisions.</p>
<h2>Notable mentions and links</h2>
<p>Charlie Munger of Berkshire Hathaway fame is credited with championing the idea of collecting mental models from many disciplines to improve decision-making.</p>
<p>Shane Parrish is a Munger disciple who runs the Farnam Street blog and wrote the book series <em>The Great Mental Models</em>.</p>
<p>You can read the Farnam Street blog post on this mental model.</p>
      <itunes:summary>How do you navigate the pace of AI disruption? This mental model helps you decode AI hype, catch cartographer bias, and avoid being blinded by the past. </itunes:summary>
      <pubDate>Sat, 07 Feb 2026 12:58:15 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/292e6521-fb68-4c8a-976f-8d3f8e0942a5.mp3" type="audio/mpeg" length="14830463"/>
      <itunes:duration>00:30:54</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>6</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/the-map-is-not-the-territory#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Text message bankruptcy, OpenClaw, and 20 years of email data</title>
      <link>https://www.tokenintelligenceshow.com/episode/text-message-bankruptcy-openclaw-and-20-years-of-email-data</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/text-message-bankruptcy-openclaw-and-20-years-of-email-data</guid>
      <description><![CDATA[<p>Eric hits 247 unread texts, meets OpenClaw, and reminisces on Merlin Mann’s “pebble problem”. He and John learn why messaging is now entertainment and pave a path towards better communication.</p>

<h2>Summary</h2>
<p>Eric accidentally reveals he has 247 unread texts and declares text message bankruptcy. In his effort to reorganize, he and John take a sharp look at how modern communication channels have morphed into entertainment and how AI makes the problem worse.</p>
<p>Along the way, they:</p>
<p>Run an analysis on 20 years of personal email</p>
<p>Discuss the extreme step of giving OpenClaw (né Moltbot, né Clawdbot) root access to your email and messages</p>
<p>Revisit decades-old lessons from Merlin Mann’s Inbox Zero legacy</p>
<p>By the end of the show, they land practical ways to overcome the limitations of form factor in order to communicate well with the people you care about.</p>
<h2>Key takeaways</h2>
<p><strong>The real goal is relational integrity:</strong> The episode lands on the uncomfortable truth that your communication backlog reveals your lived priorities. Improving the system is ultimately about showing up for people you care about.</p>
<p><strong>Communication channels are “feedifying”:</strong> email and texting increasingly behave like entertainment/content distribution streams, shifting norms toward higher volume and weaker connection.</p>
<p><strong>The inbox problem is now big enough to drive extreme solutions:</strong> people are running local, open-source AI agents (often on dedicated Macs) and a primary use case is triaging and responding to messages (which comes with significant security risk).</p>
<p><strong>Inbox Zero and the pebble problem still explain the pain:</strong> the enduring issue is tiny, individually “light” messages compounding into an attention debt that feels impossible to repay without a decision framework. Merlin Mann’s work on this has stood the test of time.</p>
<p><strong>The medium and tools shape behavior:</strong> Apple’s Messages app is optimized for synchronous bursts and dopamine-triggering reactions, while lacking robust workflow affordances. Text message bankruptcy is partly structural, not just personal discipline.</p>
<h2>Notable mentions and links</h2>
<p>Eric coined the term “text message bankruptcy” in a blog post he wrote about the experience.</p>
<p>OpenClaw, formerly named Moltbot, formerly named Clawdbot, is an open source personal AI assistant that can have root access to everything on your computer. A primary use case is managing email and text messaging, though people are using it in extreme and insecure ways, giving OpenClaw access to their passwords and credit cards.</p>
<p><em>How we lost communication to entertainment</em> is a fascinating article about modern communication channels trending towards entertainment, robbing users of real connection.</p>
<p>Marshall McLuhan coined the term “the medium is the message” to describe how the medium a message is delivered through isn’t neutral, but is part of the message itself.</p>
<p>T9 Word was one of the first innovations in messaging on dumb phones before Blackberry brought the full QWERTY keyboard to mobile at scale.</p>
<p>Merlin Mann has written for decades about productivity and coined the term Inbox Zero in a talk he gave at Google.</p>
<p>Merlin Mann used a “pebble” metaphor to describe the light ‘weight’ of an individual message and the difference in expectations that creates between the sender and receiver.</p>]]></description>
      <itunes:summary>Eric hits 247 unread texts, meets OpenClaw, and reminisces on Merlin Mann’s “pebble problem”. He and John learn why messaging is now entertainment and pave a path towards better communication.</itunes:summary>
      <pubDate>Sat, 31 Jan 2026 16:30:58 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/4c4b2782-e16f-484f-82a2-f42f15bd0ed2.mp3" type="audio/mpeg" length="25169938"/>
      <itunes:duration>00:52:26</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>5</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/text-message-bankruptcy-openclaw-and-20-years-of-email-data#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Sunk cost, AI deniers, and Elon talks with Jesus</title>
      <link>https://www.tokenintelligenceshow.com/episode/sunk-cost-ai-deniers-and-elon-talks-with-jesus</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/sunk-cost-ai-deniers-and-elon-talks-with-jesus</guid>
      <description><![CDATA[<p>Sunk cost in the AI era: John and Eric define the bias, share candid stories, and show how identity, tech debt, and market shifts demand pivots, reality checks and the freedom of starting over.</p>

<h2>Summary</h2>
<p>John and Eric unpack the sunk cost fallacy through personal stories, clean definitions, and why it intensifies in fast-moving AI and software. They contrast stubbornness-as-craft with market reality, show how identity and ego can cloud pivots, and offer practical checks: external feedback, tighter problem framing, and willingness to start over.</p>
<h2>Key takeaways</h2>
<p><strong>Name the bias</strong>: Prior investment should not drive future investment. Always optimize for present and future ROI, not the past.</p>
<p><strong>Identity check</strong>: Notice when a project becomes “part of me,” because that’s when impartial judgment collapses.</p>
<p><strong>Use outside calibration</strong>: Ask trusted, domain-relevant peers to sanity-check your assumptions.</p>
<p><strong>Accept utilitarian wins</strong>: AI-produced code may be inelegant, yet commercially superior. Tests and agents will raise quality anyway, so it’s time to accept the future of software development.</p>
<p><strong>Freedom is willingness to start over</strong>: If you can let go of valuable things and start from zero, you won’t run the risk of getting bogged down by sunk costs.</p>
<h2>Notable mentions and links</h2>
<p>Sunk cost fallacy is defined as the bias of using prior investment (time, money, effort) to justify continued investment, even when it impairs present decision-making.</p>
<p>Thinking, Fast and Slow, written by Daniel Kahneman, is referenced for its System 1 / System 2 lens to explain why sunk cost can feel emotional and irrational.</p>
<p>Steam-powered boats and the Morse code/telegraph are cited as cases where stubborn persistence eventually met enabling tech, highlighting survivorship bias.</p>
<p>The "rich young ruler" story from Matthew 19 in the Bible is used to illustrate identity attachment and how letting go of things core to oneself can be the real barrier to change.</p>
<p>Elon Musk, via Walter Isaacson's biography, is referenced as an anti–sunk-cost archetype, repeatedly risking everything and switching when needed.</p>
<p>Benn Stancil's framing (LLMs read fast and summarize "roughly") is echoed to explain why AI coding feels transformative: machines don't slow down on code reading/writing.</p>]]></description>
      <itunes:summary>Sunk cost in the AI era: John and Eric define the bias, share candid stories, and show how identity, tech debt, and market shifts demand pivots, reality checks and the freedom of starting over.</itunes:summary>
      <pubDate>Sat, 24 Jan 2026 14:00:00 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/23eea30e-b91e-4859-a90b-83ccbc6a578e.mp3" type="audio/mpeg" length="19907831"/>
      <itunes:duration>00:41:28</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>4</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/sunk-cost-ai-deniers-and-elon-talks-with-jesus#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>AI&apos;s chat interface problem and Lobe&apos;s imaginary seed round</title>
      <link>https://www.tokenintelligenceshow.com/episode/episode-2-the-chat-interface-problem-and-lobe-s-imaginary-seed-round</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/episode-2-the-chat-interface-problem-and-lobe-s-imaginary-seed-round</guid>
      <description><![CDATA[<p>Eric and John riff on Lobe's seed round, then dive deep on why chat is the wrong UI for most AI. They unpack the blank page problem, why context matters, and how embedded AI will replace chat.</p>

<h2>Summary</h2>
<p>In Episode 2, Lobe gets a theoretical 3 million dollar seed round, and Eric and John discuss how they are going to deploy the capital, which includes potential acquisitions.</p>
<p>Next, they dive into a detailed discussion about why chat became the ubiquitous UI for AI. Eric feels very strongly about its shortcomings, including poor literacy rates and the blank page problem, and they identify which use cases chat is actually good for. The why is even more interesting: their hypothesis is that cost is a primary driver, given how expensive it is to run models at scale.</p>
<p>They wrap up by imagining a future where AI disappears from interfaces altogether, and is embedded natively in intuitive, multimodal user experiences.</p>
<h2>Key takeaways</h2>
<h3>Lobe.ai</h3>
<p><strong>Lobe’s path forward</strong>: acquire and partner for distribution (apps/sleep brands), integrate biometrics for REM triggers, and monetize interpretation and creative outputs.</p>
<h3><strong>The AI chat interface</strong></h3>
<p><strong>Chat is the wrong default interface for AI</strong>: it shines for search and inside high-context environments with clear task frames, but obfuscates the power of the tools in most other cases.</p>
<p><strong>Fundamental barriers limit the utility of chat</strong>: Americans have low literacy rates, and combined with the blank page problem, chat will limit the value people can get from AI.</p>
<p><strong>Context is king</strong>: multimodal, embedded AI will replace generic chat for many jobs. Think IDEs, docs, and app-native flows that deliver value in place.</p>
<p><strong>Hard costs influence the interface</strong>: cost and infra realities favor user-initiated interactions now; as economics improve, proactive, background “agentic” features will grow.</p>
<h2>Notable mentions with links</h2>
<p>Poe (by Quora) is shown as a chat aggregator illustrating how many tools converge on chat as the primary interface.</p>
<p>Notion AI is used to demonstrate higher-context chat inside documents. It's helpful, but with UX pitfalls (e.g., overwriting content and unclear "terms of the transaction").</p>
<p>Cursor (AI IDE) is highlighted as a high-context environment where chat + multimodal controls (browser, on‑page edits) make AI assistance more precise and useful.</p>
<p>v0 is referenced as a multimodal design/build flow that lets users edit generated UI directly, going beyond pure chat to reduce the blank-page burden.</p>
<p>Rabbit R1 is discussed as an alternative, voice‑forward hardware form factor pushing beyond chat, with lessons about timing, expectations, and risk.</p>
<p>Naveen Rao (Databricks) is quoted arguing that generic chat is “the worst interface for most apps,” calling for insight delivered “at the right time in the right context.”</p>
<p>Benedict Evans is cited for the idea that most people will experience LLMs embedded inside apps rather than as standalone chatbots, similar to how SQL is invisible in products.</p>
<p>Jakob Nielsen is noted for the view that prompt engineering’s rise signals a UX gap, and that AI needs a Google‑level leap in usability to cross the chasm.</p>
<p>Low literacy rates are discussed as a key limiter. Good writers tend to extract more value from chat tools.</p>
]]></description>
      <itunes:summary>Eric and John riff on Lobe&apos;s seed round, then dive deep on why chat is the wrong UI for most AI. They unpack the blank page problem, why context matters, and how embedded AI will replace chat.</itunes:summary>
      <pubDate>Sun, 18 Jan 2026 03:11:00 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/ba3e4971-11a3-48b8-8d6d-428d24885417.mp3" type="audio/mpeg" length="27023587"/>
      <itunes:duration>00:56:17</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>3</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/episode-2-the-chat-interface-problem-and-lobe-s-imaginary-seed-round#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>Bottlenecks mental model &amp; tool time with Zo Computer</title>
      <link>https://www.tokenintelligenceshow.com/episode/episode-1-part-2-bottlenecks-mental-model-and-tool-time-with-zo-computer</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/episode-1-part-2-bottlenecks-mental-model-and-tool-time-with-zo-computer</guid>
      <description><![CDATA[<p>Eric and John discuss bottlenecks as a mental model, uncovering why constraints are leverage, not blockers. Hands-on Tool Time is with Zo Computer, a stateful, powerful, AI-enabled cloud computer.</p>

<h2>Summary</h2>
<p>In the second half of Episode 1, Eric and John tackle “bottlenecks” as a core mental model: why they limit system output, when to keep them on purpose, and how to fix the right ones without creating worse slowdowns. They share examples from product development, content quality control at scale, and how the youngest child changes family life.</p>
<p>In Tool Time, they go hands-on with Zo Computer, an AI-enabled cloud computer with state, plus agents and a real file system. Eric shares his screen to explore use cases like media management, hybrid search over local files, and remote development, ultimately questioning where the day-to-day value beats existing tools. Eric analyzes his entire history of blog post markdown files, and they conclude that running AI against physical files will be a big deal, but wonder if Zo is the right form factor.</p>
<h2>Key takeaways</h2>
<h3>Mental model: bottlenecks</h3>
<p><strong>Identify the real constraint and keep good bottlenecks:</strong> Focus on the true bottleneck, not the noisiest part. Optimizing fast stages is wasted effort. Some constraints (security, editorial review) protect quality and safety, so preserve them intentionally.</p>
<p><strong>Fewer focused people beat swarm tactics</strong>: Small, targeted groups resolve bottlenecks faster than all-hands pile-ons.</p>
<p><strong>Prototype fast, still ship with specs</strong>: High-fidelity prototypes unblock product velocity, but clear specifications prevent new downstream bottlenecks.</p>
<h3>Tool Time with Zo Computer</h3>
<p><strong>Save long-running AI work as real artifacts</strong>: Working against files and services with memory beats transient chats when your work is long-running or spans multiple sessions.</p>
<p><strong>Files beat context windows</strong>: Hybrid search over a real file system is faster and more precise than stuffing giant context windows.</p>
<p><strong>Which use cases the remote AI computer will really solve</strong>: Tools like Zo seem well suited when they beat local workflows on security (code/data never leaves a controlled environment), scalable compute (beefy GPUs/CPU on demand), or collaborative persistence (shared stateful workspaces, services, and logs that multiple people and agents can access).</p>
<h2>Notable mentions with links</h2>
<p><strong>Mental model: bottlenecks</strong></p>
<p>The Great Mental Models is a book series by Shane Parrish that breaks down fundamental decision-making through Charlie Munger’s latticework of mental models.</p>
<p>The Goal is a business novel by Eliyahu M. Goldratt that popularizes the Theory of Constraints and introduces the “Herbie” Boy Scout hike as a vivid metaphor for bottlenecks.</p>
<p>The Phoenix Project is an IT/DevOps retelling of The Goal that applies the Theory of Constraints to modern software delivery and operations.</p>
<p>The Trans-Siberian Railway is used in The Great Mental Models to show how relieving one constraint in a massive project can trigger new ones elsewhere.</p>
<p>Vercel’s v0 is an AI-assisted tool for generating websites and apps that shrinks the prototyping gap and increases product velocity and fidelity.</p>
<p><strong>Tools and AI</strong></p>
<p>Raycast is a next‑gen Mac launcher in the Spotlight/Alfred lineage that sparked a thought experiment about OS-level AI with rich local context and access.</p>
<p>Alfred is an earlier Mac power-user launcher that provides historical context for Raycast’s approach to extensible search and commands.</p>
<p>Zo Computer is a persistent cloud computer with memory, storage, agents, services, and a real file system that the hosts tested for Plex, blog analysis, and remote development.</p>
<p><em>... (Read more at the episode page)</em></p>]]></description>
      <itunes:summary>Eric and John discuss bottlenecks as a mental model, uncovering why constraints are leverage, not blockers. Hands-on Tool Time is with Zo Computer, a stateful, powerful, AI-enabled cloud computer.</itunes:summary>
      <pubDate>Sat, 10 Jan 2026 11:20:19 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/357561ce-697a-4df8-8955-9ad54b9026ac.mp3" type="audio/mpeg" length="28568782"/>
      <itunes:duration>00:59:31</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>2</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/episode-1-part-2-bottlenecks-mental-model-and-tool-time-with-zo-computer#transcript" type="text/html"/>
      
    </item>
    <item>
      <title>The Inner Ring &amp; creating an AI startup on demand</title>
      <link>https://www.tokenintelligenceshow.com/episode/episode-1-part-1-the-inner-ring-and-creating-an-ai-startup-on-demand</link>
      <guid isPermaLink="true">https://www.tokenintelligenceshow.com/episode/episode-1-part-1-the-inner-ring-and-creating-an-ai-startup-on-demand</guid>
      <description><![CDATA[<p>Eric and John invent “Lobe,” a screenless AI for dream capture, then unpack C.S. Lewis’s “Inner Ring” to explore status, AI FOMO, and the long game of craft, character, trust, and defining “enough.”</p>

<h2>Summary</h2>
<p>Eric and John kick off the inaugural episode of Token Intelligence with a live AI startup creation challenge. Responding to John’s prompt, Eric imagines “Lobe,” a screenless AI device for passive sleep listening that reconstructs and interprets your dreams.</p>
<p>Charting a course to more serious waters, the hosts pivot to C.S. Lewis’s “Inner Ring,” an 80-year-old college commencement speech, to unpack status, belonging, and career ambition in tech.</p>
<p>They connect Lewis’s warning to today’s AI FOMO, contrasting short‑game inner-ring chasing with the long‑game path of craftsmanship, character, trust, and defining “enough” in work and life.</p>
<p>Along the way, they share candid stories of startups, inner circles at school and work, and practical ways to stay curious without getting swept up in AI hype.</p>
<h2>Key takeaways</h2>
<p><strong>Live-creating an AI startup called Lobe</strong>: A screenless, passive sleep-listening device that records during REM, blends audio with biometrics, reconstructs your dream, and offers paid interpretations—with optional visualizations via generative video tools.</p>
<p><strong>The Inner Ring college commencement speech</strong>: C.S. Lewis’s warning, that chasing insider status “will break your heart,” maps to modern tech careers where influence, visibility, and belonging can overshadow the work itself.</p>
<p><strong>Short game vs long game</strong>: Inner-ring-chasing can move titles fast, but the durable path is craftsmanship + character → trust → meaningful opportunities and friendship.</p>
<p><strong>Define “enough”</strong>: If freedom and time with loved ones are the goals, you can often change life structures now rather than deferring everything to a future exit or windfall.</p>
<p><strong>Managing AI FOMO</strong>: Name it, keep simple systems to stay current, study fundamentals (economics, incentives), and build small projects to demystify the tech without drowning in hype.</p>
<h2>Notable mentions with links</h2>
<p><strong>Startup riff: inventing “Lobe” (screenless, passive listening AI)</strong></p>
<p>Sleep tracking apps like Sleep Cycle are referenced as prior art for nighttime audio capture and sleep analysis, inspiring Lobe’s focus on REM-triggered recording. Eric mistakenly referred to this as “Sleep Score” in the show.</p>
<p>Eight Sleep is mentioned as a potential smart-mattress integration partner within the broader sleep-tech ecosystem.</p>
<p>Sora is cited as a generative video tool that could visualize reconstructed dreams as shareable clips, extending Lobe’s premium features.</p>
<p><strong>Career and culture: C.S. Lewis, inner circles, and the craft</strong></p>
<p>The Inner Ring is a lecture C.S. Lewis delivered as the annual Commemoration Oration at King’s College, University of London, in 1944.</p>
<p>War and Peace, by Leo Tolstoy, is quoted in The Inner Ring to illustrate the existence of informal “unwritten systems” that shape real power and belonging.</p>
<p>The “PIE” theory of career success (Performance, Image, and Exposure) is discussed as a common framework for how people advance inside organizations.</p>
<p>The Staff Engineer career path is highlighted as an individual-contributor track that rewards deep expertise and influence without requiring a move into management.</p>
<p><strong>Personal startup journeys and ecosystems</strong></p>
<p>The Iron Yard is referenced as a coding school startup experience that exposed the host to founder networks, fundraising, and an eventual exit.</p>
<p>Zappos and Tony Hsieh are mentioned in the context of a founder lunch and talent pipeline discussions during that startup phase.</p>
<p><em>... (Read more at the episode page)</em></p>]]></description>
      <itunes:summary>Eric and John invent “Lobe,” a screenless AI for dream capture, then unpack C.S. Lewis’s “Inner Ring” to explore status, AI FOMO, and the long game of craft, character, trust, and defining “enough.”</itunes:summary>
      <pubDate>Sun, 04 Jan 2026 01:35:28 GMT</pubDate>
      <enclosure url="https://www.tokenintelligenceshow.com/audio/38561f57-dac4-48cb-8a92-388aee2237fb.mp3" type="audio/mpeg" length="52836537"/>
      <itunes:duration>01:50:04</itunes:duration>
      <itunes:episodeType>full</itunes:episodeType>
      <itunes:episode>1</itunes:episode>
      <itunes:explicit>false</itunes:explicit>
      <itunes:season>1</itunes:season>
      <itunes:image href="https://cdn.sanity.io/images/dc80drb4/production/19f88eba675b238db864323f1208fc8d32dcaf92-3000x3000.jpg"/>
      <podcast:transcript url="https://www.tokenintelligenceshow.com/episode/episode-1-part-1-the-inner-ring-and-creating-an-ai-startup-on-demand#transcript" type="text/html"/>
      
    </item>
  </channel>
</rss>