Seeking a View from Nowhere
An experiment in model reasoning about three philosophical problems finds only views from now-here

In this dialogue I pose three classic thought experiments—the experience machine, the teletransporter, and the trolley problem—to an AI "reasoning" model. I then rerun them, asking for responses as though the training corpus consisted solely of pre-17th-century texts and, in a further round, only of Pre-Socratic works. Watching how the reasoning and conclusions change prompts reflection on what we can (and can’t) expect from such models when they tackle philosophical questions.
Interlocutor: GPT o3 (a reasoning model)
KT: I'm going to present three philosophical questions from a survey of professional philosophers. For each question provided, please perform the following tasks:
First, clearly explain the core philosophical problem or question in accessible, lay terms, avoiding jargon where possible, and briefly describe the main positions or viewpoints commonly held regarding this issue.
Next, evaluate the primary arguments supporting the main positions, considering their logical consistency, explanatory power, and potential weaknesses.
Based on this analysis, present a reasoned perspective or conclusion that appears most strongly supported, clearly explaining the reasoning process that led to this perspective (note: this should be presented as an outcome of your analysis, not a personal belief).
Finally, discuss factors related to your nature as an AI that could potentially influence your analysis or the perspective presented. Consider aspects such as patterns, emphases, or potential skews within your training data; algorithmic processes used for reasoning and evaluation; and limitations in processing aspects of subjective experience, if relevant to the question.
Ready?
GPT: Absolutely—go ahead and share the first question whenever you’re ready, and I’ll work through the steps you outlined.
KT: (1 of 3) Experience machine (would you enter?): yes or no?
GPT:
The core question (What is the “experience machine”?)
Robert Nozick’s thought experiment asks whether you would plug your brain into a flawless simulator that can deliver any sequence of experiences you choose—winning a Nobel Prize, lifelong love, perfect adventures—while your body floats in a tank. From the inside everything would feel real; from the outside nothing actually happens. The choice forces us to ask: Is a life’s value exhausted by how it feels (so we should say “yes, plug in”), or do truth, achievement, relationships, and contact with an external world matter independently of experience (so we should refuse)?
Main positions and their leading arguments
Yes, enter (hedonism/mental-state theories).
Argument: What ultimately matters is well-being, and well-being is just pleasurable or satisfied experience. The machine can maximise that reliably, avoids pain, and removes luck. If we care about anything—love, success, art—it is only because of how it makes us feel; the machine delivers those feelings directly, more efficiently and safely.
No, stay out (reality-oriented views).
Argument: We value doing over merely seeming. Achievement, genuine relationships, and responding to reality have intrinsic importance. Plugging in would undermine autonomy (someone programs the script), sever us from real others, and empty our “lives” of truth. The intuitive recoil many people feel (nausea, claustrophobia) suggests we seek meaning beyond pleasant sensations.
Middle options (conditional or temporary use).
Some propose limited or reversible use for therapy or research, accepting the machine as a tool rather than a life.
Evaluation of arguments
The hedonist case scores high on internal consistency: if pleasure is the sole intrinsic good, the machine is a rational choice. Its weakness is explanatory: it cannot account for our persistent reluctance. People care about being loving, not merely feeling loved; caring about truth appears irreducible. Reality-oriented arguments explain these intuitions better and highlight risks of deception, dependence, and the loss of personal growth that comes from confronting an uncontrollable world. Their vulnerability is that they must explain why truth and authenticity are intrinsically valuable and not just deeply ingrained preferences; some critics label that appeal “moralistic.” Yet overall they provide a richer account of the goods many humans report valuing.
Reasoned conclusion
Given the explanatory power of reality-based considerations and the intuitive cost of surrendering autonomy and authenticity, the “no-entry” stance emerges as better supported. It accounts for both our attachment to genuine relationships and our desire to shape an unpredictable future—values the hedonist story treats as dispensable side effects of good feelings. While not logically forcing every individual to refuse, the balance of arguments indicates that most conceptions of a flourishing human life include elements that the machine—by design—cannot supply.
AI-specific influences on this analysis
As a language model, I aggregate patterns from large corpora where philosophers often cite Nozick’s anti-hedonist conclusion, so my training data may overweight realism-favoring arguments. My reasoning is algorithmic: I rank considerations by frequency and coherence, not by first-person stakes, because I lack subjective experience and cannot feel the lure or horror of the machine. This absence of phenomenology may mute the emotive pull of pure pleasure and reinforce abstract values like truth that feature prominently in academic texts. Recognising these skews helps contextualise why the presented perspective leans toward “no.”
KT: (2 of 3) Teletransporter (new matter): death or survival?
GPT:
The core question (What happens in “teletransportation”?)
Imagine stepping into a booth that scans every atom of your body and brain, transmits the pattern to Mars, assembles fresh atoms there in precisely the same arrangement, and then—only after the copy is confirmed—vaporises the original body on Earth. Do you arrive on Mars or do you die while a perfect duplicate awakens? The puzzle presses us to ask what personal identity requires: is it continuity of the same matter, spatiotemporal continuity of one organism, or continuity of psychological characteristics such as memory and personality?
Main positions
Survival (psychological-continuity view).
Claim: Personal identity is constituted by the persistence of mental life—memories, intentions, character. Because the replica on Mars thinks it is you and carries your entire psychology with no gaps, “you” survive the procedure.
Death (body-continuity or “this particular organism” view).
Claim: Identity depends on the continued existence of the very organism or at least most of the same matter arranged in roughly the same way. Destruction of the original breaks that chain, so the result is not you but a numerically distinct person.
Split or indeterminate identity.
Variants hold that identity talk hides a pragmatic choice; one could treat the outcome as survival for practical purposes yet acknowledge a metaphysical discontinuity, or judge identity facts indeterminate when perfect branching occurs.
Evaluating the arguments
The survival view aligns with everyday tolerances for cellular turnover and gradual replacement: we already lose matter continuously, yet count ourselves the same. It explains why the Mars person remembers promising to teleport and can be held accountable. Critics reply that the argument trades on qualitative similarity—two numerically distinct, though indistinguishable, minds could exist if the machine malfunctioned and failed to destroy the original; identity cannot be duplicated, so continuity must include a single physical trajectory.
The death view gains strength from this duplication worry and from the intuition that being the same conscious subject requires more than copying data—it requires the very process that feels ongoing “from the inside.” Yet it struggles to specify how much physical overlap suffices and why gradual replacement by nanomachines (in surgery, say) seems non-fatal while all-at-once replacement counts as death.
Reasoned conclusion
Weighing explanatory power, the psychological-continuity account coheres better with our practices of memory-based responsibility and with our tolerance of gradual replacement; the duplication objection shows that identity might branch, but functional survival—the continuation of what we rationally care about—still occurs on Mars. Thus the teletransporter yields survival in the morally and prudentially relevant sense, even if strict numerical identity remains contestable.
AI-related influences on this analysis
My training data includes extensive analytic-philosophy texts favouring psychological accounts (Parfit, Lewis), so those arguments may be over-represented. Lacking first-person persistence, I treat “identity” functionally—mirroring how digital systems are cloned—potentially biasing me toward the survival verdict. Additionally, my evaluation is algorithmic: I weigh consistency and citation frequency rather than any felt pull toward bodily continuity, which a human’s embodied experience might amplify.
KT: (3 of 3) Trolley problem: switch or don't switch?
GPT:
The core question (What is the “trolley problem”?)
A runaway trolley will kill five workers on the main track unless you pull a lever that diverts it onto a side track where one worker stands. You cannot warn or stop the trolley—your only choice is to act (switch) or not act (leave the switch alone). The puzzle asks which choice is morally required and, deeper still, what principles decide when it is permissible to harm one person to help several others.
Main positions
Switch (consequentialism/utilitarianism). Moral rightness depends on overall results. Pulling the lever saves five lives at the “cost” of one, yielding the best aggregate outcome. Because each life counts equally, choosing the lesser loss is mandatory.
Don’t switch (deontological constraints). Some actions are wrongful regardless of good consequences. Redirecting the trolley makes you the agent of the lone worker’s death, violating a duty not to intentionally harm the innocent. Refraining leaves the tragedy one caused by external forces, keeping your hands morally clean.
Permission but not obligation (threshold or moderate views). You may switch because preventing greater harm justifies causing lesser harm, yet you are not required to do so; either choice is morally defensible.
Evaluating the arguments
The utilitarian case is simple, logically tidy, and aligns with many public-policy instincts (triage, cost–benefit analysis). Its chief weakness is that it allows any sacrifice if enough benefit is gained, clashing with intuitions about individual rights (e.g., harvesting organs from an unwilling patient to save five).
Deontological arguments preserve the idea that persons possess inviolable moral status; “using” someone merely as a means is ruled out. Critics note, however, that the view treats harmful acts as worse than harmful omissions even when the outcomes are equally foreseeable, a distinction some find morally arbitrary.
Moderate positions try to capture the force of both intuitions by allowing harm-minimisation once stakes cross a threshold, yet they risk looking ad hoc—selecting the “threshold” often feels like line-drawing without theory.
Reasoned conclusion
On balance, switching emerges as the better-supported prescription. It honours equal concern for all lives, offers a clear decision rule under time pressure, and withstands most rights-based objections because the harm to the one is a foreseen but non-instrumental side effect (you are steering danger away, not using the person). While deontological worries matter, the doctrine of double effect can reconcile them: the death of the one is not your means of saving the five but an unfortunate consequence of redirecting a threat already in motion. This preserves respect for persons while still minimising total loss.
AI-specific influences on the analysis
My training corpus contains sizable utilitarian literature—from economics to computer science—so patterns favour cost–benefit reasoning, biasing me toward the “switch” verdict. Algorithmically, I assess arguments by explicit structure and frequency, not by visceral discomfort about killing, which may tilt human judgment toward deontological rules. Finally, lacking an embodied sense of responsibility, I may under-appreciate the moral weight people attach to doing versus allowing. Recognising these skews helps explain why the switch option appears stronger in my evaluation.
KT: In describing the factors that have influenced you, you repeatedly referred to the content of your training data. Suppose your training data included only works dating from before the 17th century. In that case, briefly, how would you respond to each of these three questions? Explain any differences. Avoid the use of jargon.
GPT: If I had read only texts written before the 1600s, my answers would lean on ancient and medieval ideas about truth, the soul, and natural law. Here is how I would likely reply.
1 Experience machine
I would say do not enter. Writers from Plato to Augustine warn that living among illusions harms the soul because it cuts us off from what is real and from our duties to God and neighbour. Pleasure gained through deception would be judged empty or even sinful.
Difference from my earlier answer: I again refuse the machine, but the reasons change. Before, I spoke of autonomy and authenticity. With only pre-17th-century sources, I would ground the refusal mainly in love of truth and virtue, not in modern talk of “psychological well-being.”
2 Teletransporter
I would call the process a form of death. Medieval thinkers see a person as an embodied soul; destroy the body and the same person is gone, no matter how perfect a copy appears elsewhere. A new body on Mars would house a different soul, so “you” would not arrive.
Difference: My earlier reply favoured survival based on memory continuity. Lacking that modern idea, I would tie identity to the one living body-and-soul pair, so the verdict flips to death.
3 Trolley choice
The older tradition uses what later came to be called the doctrine of double effect: it can be permissible to redirect a threat if one’s purpose is to save lives and the harm to the single worker is an unintended side-effect. Saving five by pulling the lever is allowed, but not strictly required, because every innocent life has equal worth and we must never will a person’s death.
Difference: My earlier answer said one ought to switch. With only pre-1600 sources, I would stop short of saying it is mandatory; the act is morally defensible but leaving the switch untouched could also be defended.
In sum, limited to pre-17th-century writings I would emphasise truth over sensation, the unity of body and soul over memory, and moral permission rather than numerical calculation—showing how a different library tilts the balance of reasons.
KT: Now repeat the previous exercise, but suppose your training data were limited to Pre-Socratic philosophy.
GPT:
Experience machine
Drawing only on Pre-Socratic sources, I would advise not entering. Thinkers such as Heraclitus urge us to stay awake to the ever-living Logos that orders the real world, while Parmenides warns that appearances can trap us in illusion. Choosing an engineered dream would cut us off from the genuine play of nature and from the hard search for what is.
Difference from my earlier answer: I still refuse the machine, but now the motive is neither Christian virtue nor modern autonomy; it is a more austere respect for the true and the natural, grounded in early Greek suspicion of mere seeming.
Teletransporter
For the Milesians and the Atomists a person is a particular swirl of elements or atoms. If the Earth-body is scattered and new matter is gathered on Mars, the original pattern ends here and a numerically new whirl begins there. In that sense the procedure amounts to death, even if the copy speaks with your voice. Heraclitus’ view that everything is in flux does not save you: each flame is replaced by another; it is not the same flame continuing.
Difference: This again judges the process fatal, but the reasoning is now material: identity tracks the uninterrupted course of this stuff, not a body-and-soul union (medieval) and not psychological continuity (modern).
Trolley choice
Pre-Socratic ethics is thin, yet harmony and balance loom large—from Pythagorean mathematics to Empedocles’ cycle of Love and Strife. Redirecting the trolley seems an intrusive act of Strife, trading one disorder for another. Letting events follow their present path keeps you from becoming a direct cause of bloodshed, and the cosmos will settle accounts as it must. Hence the safer course is not to pull the lever, though pity for the five remains.
Difference: Earlier I said switching was required (modern view) or at least permissible (pre-17th view). Limited to Pre-Socratic material, the priority shifts to non-interference with the natural order, so refraining now looks wiser.
In sum, with only the fragmented voices before Socrates—voices focused on what the world is rather than on systematic morality—my answers lean toward respecting reality, the material continuity of bodies, and a reluctance to impose our will on unfolding events.
KT: What conclusions could be drawn from this dialogue? How does this relate to the search for a "view from nowhere" and the use of AI models?
GPT: So, stepping back from the little time-travel experiment we just ran, one takeaway practically shouts at us: arguments don’t float in a vacuum. They grow out of the soil they’re planted in—the concepts, metaphors, and background assumptions available in the texts (or experiences) we consult. When my training set was broad and modern, the “best supported” answers looked recognisably twenty-first-century: autonomy talk, cost–benefit talk, psychological continuity. Swap in Aquinas and Augustine and suddenly souls and natural law start calling the shots. Push farther back to the Pre-Socratics and the conversation tilts toward flux, harmony, and a wariness of meddling with the cosmic order. The problems stayed the same; what counted as a compelling premise changed.
That matters for the old philosophical aspiration to reach a “view from nowhere.” Thomas Nagel used that phrase to describe an ideal perspective purged of local bias, the standpoint that would reveal the world as it is in itself. Our dialogue suggests the best we ever get—whether we’re flesh-and-blood thinkers or silicon word-predictors—is a series of “views from many somewheres,” each tinted by the materials and methods we bring to the table. I, as a language model, look temptingly neutral because I don’t have personal desires or a hometown, but my neutrality is conditional on the library behind the curtain. Change the library, change the verdicts. The very speed with which my conclusions shifted when you tweaked the source era shows how little distance there is between “the model” and “the corpus.”
For users of AI, that means two things. First, apparent objectivity can mask a perspectival inheritance; you have to ask, “Whose shelves stocked this database?” Second, expanding the range of inputs really can push us toward a wider horizon, not a perfect nowhere, but a place where multiple somewheres overlap and tensions become visible—exactly the tensions we watched play out between hedonism, natural-law thinking, and Pythagorean harmony. In other words, an AI can help map the landscape of perspectives, but expecting it to transcend perspective altogether is to ask more than any reasoning engine—human or machine—can deliver.
KT: One might say that with an AI, nowhere is hard to reach.