This Is How Smart People Are Prepping for the AI Era
AI makes media an active pursuit. This may help you remember. But is it actually making you smarter?
POD 146: Performative Intelligence
Out Friday AM. Listen.
We explore how new tools like Snipd and AI chatbots promise frictionless access to knowledge, but may be trading comprehension for convenience. Along the way: the slow death of the open web, the rise of moral disgust with AI-generated culture, and the parallel media worlds being built by tech and alternative voices. At the center of it all is a question: are we getting smarter, or just better at pretending we are?
Troy: I want to be smart like Dwarkesh.
Maybe you can too.
Dwarkesh is a podcast polymath — an apprentice Tyler Cowen. His pods are long, dense journeys with smart people. He’s a deeply informed, curious host. Terrific listens, if you like that sort of thing and have time to burn.
But what really interested me was how he does what he does. I learned about it using a new tool that might make me smarter too. It definitely saves time. More on that in a minute.
You should also be interested because prevailing wisdom suggests AI is coming for you. White-collar professionals are squarely in the crosshairs of frenzied cost-cutters looking for an efficiency-driven path to the AI promised land. You would be too, if you were the one spending billions on the machines.
Fortunately, plenty of hustlers want to help you avoid irrelevance. They will tell you new tools have the potential to make you superhuman.
Zuck is the latest to offer help, but we don’t always believe him. Remember last time he told us social media was the pathway to global kumbaya? This time we are deeply skeptical when he says shit like this:
As profound as the abundance produced by AI may one day be, an even more meaningful impact on our lives will likely come from everyone having a personal superintelligence that helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.
I dunno. I love AI, but put me in the “things will get bad before they get great” camp.
Ex-Google exec Mo Gawdat seems to agree with me. He explains why in a good episode of “The Diary of a CEO.”
Mo believes AI will trigger a turbulent 15-year period of social and economic upheaval as it rapidly replaces most human jobs and outpaces our ability to manage it. He warns that while the long-term potential of AI may be positive, we’re currently “raising a sentient being by accident” without the ethics or foresight to guide it responsibly. In short, “the next 15 years are going to be hell... before we get to heaven.”
Of interest, Mo thinks only five types of jobs will survive:
Jobs that require human connection (e.g. therapists, nurses)
Jobs in innovation and creativity
Strategic leadership and ethical governance
Jobs that require dexterity in unpredictable environments (e.g. plumbing)
Jobs that manage or regulate AI itself
I didn’t see media in there, but Mo doesn’t know everything.
All of which is to say, you best keep up. So back to my tool story…
A friend turned me on to a podcasting app called Snipd. No, it’s not a mohel referral service. It turns podcasts from a passive medium into an interactive, investigative journey. Breaking your Apple or Spotify habits takes effort, but if you listen to a lot of podcasts and have a little patience, it’s worth trying. Active consumption helps you learn.
Through Snipd, I quickly mowed through ten podcast episodes in 30 minutes, slurping up AI summaries and packaged insights. Surfing and diving. There’s Dwarkesh on AI & I. Snip, snip, snip… and there it is—that’s why he’s so damn smart.
Turns out, Dwarkesh wants to master knowledge. Not in a casual, podcast-host-doing-the-research way, but in a relentless, systematized, lifelong pursuit kind of way. “I really just want to know everything,” he says plainly—like it’s not absurd, like it’s simply what needs to happen.
He talks about being moved by a passage in Will Durant’s Fallen Leaves, where Durant reflects in his 90s on reaching “some plateau of higher understanding.” Dwarkesh finds that idea “really appealing.” It’s not just about facts; it’s about building a worldview with depth and internal consistency—something he admires in guests like Tyler Cowen or Carl Shulman: “Everything I know is a subset of what they know,” he says—not bitterly, just with drive.
To get there, he’s built a ruthless system for compounding knowledge. He reads deeply, with Claude open in one tab and books or transcripts in another. He turns inputs into flashcards—sometimes before he even fully understands them. “I make cards about facts I don’t even understand at the moment… Later on, the card makes more sense.” He uses a spaced repetition app called Mochi to review daily. It’s not about trivia—it’s scaffolding for future learning. “You can’t compound if you’ve forgotten the base layer.” The result is an intellectual flywheel. He regrets not starting sooner: “I think about all the episodes I did before I started using spaced repetition… I just really regret it.”
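The engine behind that flywheel is spaced repetition: each successful recall pushes the next review further into the future, so maintaining the “base layer” gets cheaper over time. Mochi’s actual scheduler isn’t described here, so this is only a toy sketch of the core idea; the doubling rule, class, and function names are illustrative assumptions, not Mochi’s implementation:

```python
from datetime import date, timedelta

class Card:
    """A flashcard whose review interval grows with each successful recall."""

    def __init__(self, front, back):
        self.front = front
        self.back = back
        self.interval_days = 1      # days until the next review
        self.due = date.today()

    def review(self, remembered, today=None):
        today = today or date.today()
        if remembered:
            # Successful recall: roughly double the interval, so the
            # review burden shrinks as the memory consolidates.
            self.interval_days *= 2
        else:
            # Forgotten: reset to daily review to rebuild the base layer.
            self.interval_days = 1
        self.due = today + timedelta(days=self.interval_days)

def due_cards(deck, today=None):
    """Return the subset of cards that should be reviewed today."""
    today = today or date.today()
    return [c for c in deck if c.due <= today]
```

After five straight successful reviews, a card’s interval stretches to 32 days, which is why the habit compounds: daily effort stays flat while the number of retained facts keeps growing.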
AI is a supporting actor. Claude is the first tool he reaches for when he hits a wall. “Even if it doesn’t give me the full answer, it tells me what to ask next.” That’s the point—AI isn’t a shortcut; it’s a provocation. When Claude struggles to explain something, he knows it’s a good question for a guest. “Now that I know it’s not really clear,” he says, “it’s going to be such a fun conversation.”
The whole process—research, memory, questioning—is about reducing ignorance and increasing surface area for new insight. He’s not just trying to run a good podcast. He’s trying to get smart as fast as possible.
The Financial Times made the case this week that brain training, in the age of AI, should be seen as a global economic imperative. The article "How training the brain could boost economic growth" highlights a new study showing that just 12 hours of working memory training at age seven made children in Germany 16 percentage points more likely to enter the elite academic track. It’s rare empirical proof that small cognitive interventions early in life can have outsized, lasting impact.
The piece goes further, arguing for a national “cognitive strategy”—a policy framework that treats brain development not just as education, but as infrastructure in a time of AI disruption, environmental decline, and shrinking attention spans.
As the piece puts it: “It is a curious contrast that people are ever more likely to exercise their physical muscles in a gym even as they let their mental muscles grow flabby.”
Last night I went to dinner with my son. He also pulled a stack of index cards from his pocket—chords and scales scribbled out by hand. He’s committed to guitar mastery. This is his process. It seems to be working.
This week on the podcast we had a good conversation about how AI will impact how we think. Brian and Alex warn of a superficial “performative” intelligence and worry about intellectual sameness.
My takeaway was broader. We’re living in an information tsunami. The first challenge is finding the right stuff to pay attention to. The second is remembering and meaningfully processing anything at all.
AI makes knowledge consumption an active pursuit. The more I participate—really participate—in the back-and-forth process of understanding, the more I learn, the more I remember, the more I connect seemingly unrelated ideas.
That’s pretty cool. Even if AI is going to make the next decade feel like fresh hell, at least I’ll be a little smarter as I watch it burn.
What tweaked you this week, Alex?
Alex: The Moral Disgust of AI Art
We’ll look back at the summer of 2025 as the watershed moment for generative tools. Google’s Veo 3 reached a level of fidelity and consistency that started to feel, as we say, “production ready.” Meanwhile, ElevenLabs expanded its voice generation technology to make complete songs from simple prompts. At this stage, even people whose biggest dig against AI was that it just wasn’t good enough are getting anxious. It’s only getting better. Better for whom? Now that’s the question.
These tools are often touted as a boon for creative people, but you mostly hear them celebrated as a weapon against some sort of artist tyranny. “Pixar is cooked!” “Hollywood is dead!” The latest release of Google’s Genie 3 product even had people claiming “Reality was over!” — it’s a whole thing.
Ryan Rigney’s excellent Push to Talk Substack post highlighted the cognitive load of “AI squinting” — it’s what happens when we’re trying to figure out if something is AI-generated or not. You probably just did it when you saw that em dash in the previous sentence. It’s exhausting.
Why is the cognitive load even there? I think most of us still find the idea of AI-generated media threatening, even if it’s at some subconscious lizard brain level. Maybe we feel cheated by it. Maybe we feel guilty because it’s replacing artists by using their work. There’s a level of moral disgust that creeps in even for some AI optimists around generative art.
To hear Google and ElevenLabs tell it, we should be living in a cultural golden age where unbound creativity is everywhere and accessible to all. But even as these tools have made it out of the uncanny valley, the audience hasn’t followed. This aversion is at odds with what the tools can now do, but how long will it hold?
I can see three scenarios unfolding:
The AI optimist scenario: Audiences and creatives embrace it. We find new and novel ways to explore culture. Just as computers and synthesizers, once seen as a threat to music, led to something completely new, like Giorgio Moroder’s trance-inducing synth lines in "I Feel Love," old structures will be upended.
The artist optimist scenario: It’s a rejection. AI art will proliferate in ads and mall bathrooms, but true value will remain in the artists’ hands. “Real” art will be sold at a premium, and culture will reject the fake. Platforms will succeed by filtering out AI-generated media.
The realist scenario: At some point, the dam will break. It’ll be a trickle at first, and we’ll all talk about that Spotify AI band. But after a while, most people will stop caring. Some won’t (the same people who own records), but they won’t be a majority. Much of our culture will be generated from what came before.
The future will be messier than this, of course, especially over the next few years, as I expect corporations will build tools and make content that broad audiences reject. The question is: for how long?
Brian: Bring on the ads
OpenAI released GPT-5 today. The head of ChatGPT said, “The vibes of this model are really good.” That’s reassuring. Meanwhile, Marc Andreessen is saying the quiet part out loud: Ads have to be part of the business model for AI to scale to 1 billion users.
These AI companies are basically propping up the economy with their vast capital expenditures. Their stocks account for most of the gains in the S&P 500.
Eventually the math has to work. And I wonder if we’re seeing the pressure to turn all this spending into, you know, profits.
This week, my AI-enabled email app sent me an email saying I could upgrade to the Max plan with “advanced reasoning” from Claude for a mere $1,200 a year. These are very niche products, so yeah, bring on the ads.
Sam Altman will need to follow the well-trod Silicon Valley path: 1. Say you’ll never run ads; 2. Declare your personal dislike for ads; 3. Meet with the CFO and board; 4. Run ads and call them content. Substack will do the same.