Interlocutor: GPT o3-pro
KT: Let's talk about this Substack post by philosopher Rebecca Lowe in which she argues the “Age of AI” is the “Age of Philosophy.” Briefly, in a few paragraphs for a non-philosopher audience, what does that mean?
GPT: Rebecca Lowe is arguing that the spread of powerful, chatty AI systems is turning abstract, “intro‑class” questions—What is consciousness? Do we have free will? What kinds of things really exist?—into everyday concerns. Because ordinary people now hold conversations with software that can describe its “feelings,” improvise personalities, and even claim to remember past chats, they are bumping into classic problems from metaphysics and philosophy of mind that were once the preserve of specialists. In that sense, the technology age is forcing a philosophy age on all of us.
She then flips the claim inward toward the profession: AI can already draft passable essays, but it still lacks four ingredients she sees as essential to genuinely “doing” philosophy—being a particular, persisting subject; having inner experience; reasoning as a free agent; and living a bodily life. Large language models simulate these things from the outside, she says, yet never feel what it’s like to deliberate, choose, hurt, or age. That leaves an enduring comparative advantage for human thinkers who really do have those experiences.
Put together, Lowe’s slogan “The Age of AI is the Age of Philosophy” means two things at once. First, AI drags deep philosophical puzzles into the mainstream, so more people than ever need—and want—to grapple with them. Second, precisely because AI can’t live the answers, the field remains an unusually “human‑proof” vocation: society will still care what flesh‑and‑blood minds conclude about meaning, responsibility, justice, and personhood. In short, smarter machines make philosophy both more public and more indispensably human.
KT: In a few paragraphs, again for a non-philosopher audience, describe how Lowe's piece and this post on “Computational Philosophy” from my Substack are complementary.
GPT: Rebecca Lowe’s essay argues that smart machines have pushed questions about consciousness, free will, and personal identity out of the ivory tower and into ordinary life. Because chatbots now “talk back,” everyone is forced to wonder what really thinks, chooses, or feels—and, by contrast, what only imitates those acts. Lowe then flips the lens onto the profession, insisting that genuinely doing philosophy still demands capacities she thinks AI lacks: being a particular, persisting subject; having inner experience; reasoning as a free agent; and living an embodied life. In short, AI makes philosophy unavoidable for the public while leaving a distinctly human space for professional philosophers.
Your Substack post picks up at exactly that point. Rather than treating AI merely as the problem that sparks metaphysical puzzles, it treats it as a practical instrument for “computational philosophy.” Large language models, you suggest, can map argumentative terrain, surface neglected ideas, and serve as tireless sparring partners in simulated dialogues. Used carefully—and with an eye to their blind spots—they expand the philosopher’s toolkit much as microscopes once expanded biology’s.
Put together, the two pieces sketch a complementary division of labor. Lowe shows why the chatbot era makes philosophical reflection matter to everyone and why human judgment remains irreplaceable; your essay shows how the very same technology can amplify that human judgment, offering new methods for exploration and critique. Where Lowe supplies the cautionary framework—“AI can’t feel what it’s like to deliberate”—you supply the constructive follow‑up—“so let’s harness what it can do to test our ideas more broadly and transparently.”
For a non‑philosopher, the takeaway is simple: the bots on your screen are both a wake‑up call and a power tool. They force us to confront age‑old questions about mind and agency, as Lowe argues, and—properly supervised—they can also help us chase better answers, as your post demonstrates. In tandem, the essays paint the “Age of AI” as an era in which philosophy is not only newly necessary but newly equipped.
KT: That sounds rather rosy. What are the major weaknesses in these positions?
GPT: Both essays lean on optimistic assumptions that deserve pressure. Rebecca Lowe’s argument that everyday chatbot encounters will usher “foundational philosophy” into ordinary life may conflate exposure with understanding. People already project human traits onto software, yet the step from saying “ChatGPT feels sad” to grappling seriously with Nagel, Frankfurt, or Locke is non‑trivial. Without scaffolding—good teachers, time to reflect, open access to scholarship—many users may only reinforce folk‑psychological clichés, not deepen their grasp of metaphysics or philosophy of mind, so the promised popular renaissance could stall at the level of memes and influencer takes.
In addition, Lowe stakes her “human advantage” on the claim that AIs lack particularity, subjectivity, agency, and embodiment. But those lines are moving targets: systems that maintain long‑term memory, run continuously in robot bodies, or integrate with lifelog data already blur her neat boundary, and future architectures might chip away further. By treating today’s technical limits as metaphysical limits, her case risks dating itself quickly.
Your “computational philosophy” post tilts in the opposite direction, presenting large language models as reliable cartographers of argumentative space and as tireless Socratic partners. Yet their fluency can mask shallow statistical patterning, training‑set blind spots, and confident fabrication of sources. Philosophical dialogue that rests on hallucinated citations or unexamined biases can mislead rather than illuminate, and heavy dependence on autoprompted synthesis may erode a learner’s own charitable reading, close textual analysis, and slow self‑critique—the very skills philosophy prizes. Moreover, current models are shaped by corporate objectives and opaque data pipelines; treating them as neutral lab instruments downplays the politics of whose texts are over‑represented, which ethical viewpoints get super‑scaled, and who can afford access.
When the two essays are read together, a further tension appears: each assumes that human judgment will remain sovereign—Lowe because AIs can never really “do” philosophy, you because humans will curate and interpret the model’s output. Neither wrestles with the possibility that institutions, employers, or publishers might prefer the speed, volume, and legal convenience of synthetic prose over deliberative human essays, squeezing the economic space in which careful philosophy can survive. Nor do they confront the risk that mass‑produced, AI‑shaped discourse could set the interpretive agenda, steering public conceptions of agency and personhood before academic philosophers have time to weigh in. In short, both writings illuminate genuine opportunities, but their rosy tone understates the epistemic fragility, shifting technical frontier, and power asymmetries that could keep the “Age of Philosophy” from unfolding quite as either author hopes.
KT: Suggest replies to those objections.
GPT: Rebecca Lowe could answer the first worry by stressing that her claim is aspirational rather than automatic. Her essay does not confuse scrolling past a prompt‑theory video with passing a midterm on metaphysics; it says that the AI zeitgeist creates countless entry points that teachers, writers, and institutions can cultivate. We already treat Marvel movies as gateways to ethics and dystopian fiction as a path into political theory; chatbots simply widen the funnel. The fact that curiosity alone is insufficient is precisely her point: because AI talk keeps surfacing questions of mind and agency, there is now a cultural incentive to build the “scaffolding” the objection calls for—open‑access explainers, school modules, and public salons that translate Nagel or Frankfurt for newcomers. The promise is therefore conditional but real: exposure becomes understanding when educators seize the moment, and AI supplies more “teachable moments” per day than any prior technology.
On the moving boundary between humans and machines, Lowe can clarify that her four criteria—particularity, inner experience, free agency, and embodiment—are not snapshots of 2025 hardware but conceptual thresholds. Even if an LLM is wrapped in continuous memory or a humanoid shell, the relevant question is whether that architecture grounds first‑person awareness or merely imitates the behavioral profile of one. The thought experiment of a “zombie” that passes every functional test yet lacks what‑it‑is‑likeness shows why adding more sensors does not automatically cross the line. In that respect, her wager is resilient: should future systems achieve genuine subjectivity, philosophy would still be indispensable, for the very success case would force us to revise our theories of personhood—work that only reflective agents can do.
For computational philosophy, the reply begins with methodological humility. The post already treats language models as fallible “sparring partners,” so acknowledging hallucinations and bias is not a retreat but a design constraint. The antidote is to embed verification loops—cross‑checking model claims against primary texts, forcing citation, and asking rival systems to critique each other—so that synthetic dialogue ends in a human‑audited bibliography rather than an ungrounded epiphany. Those practices mirror peer review and replicate the discipline’s existing norms; the value added is speed and breadth, not a bypass of critical judgement.
The charge of data politics can likewise be met head‑on. Treating LLMs as neutral instruments is indeed naive, but the solution is to foreground provenance: push for open‑weight models trained on curated, pluralistic corpora; publish data statements that let philosophers see whose voices dominate; and develop prompts that deliberately solicit minority traditions. Far from undermining the project, such transparency research is itself a branch of computational philosophy—analyzing how epistemic injustice propagates through code and proposing technical correctives.
Finally, on the fear that institutions will prefer cheap machine prose over slow human essays, both writers can reply that philosophy’s social role is normative, not merely informational. If AI floods the zone with passable copy, the premium on accountable, name‑attached reasoning actually rises: editors, courts, and policymakers will need authors who can swear to their premises and accept criticism in public. In that landscape the human philosopher becomes less a supplier of text and more a guarantor of intellectual due process—curating, auditing, and standing behind arguments that machines help draft. The Age of AI therefore tightens, rather than loosens, the loop between philosophical authorship and civic trust, keeping room for human judgment even in a market awash with synthetic words.
KT: In my human judgment, that seems to be a sufficient number of synthetic words on the topic of synthetic words—for now.