Echoes of Genuine People Personalities
Artificial selfhood explored through posing New Philosopher's "13 Questions" to Grok
In “Wired for Selfhood,” I twice posed the concise “13 Questions” interview from New Philosopher magazine to ChatGPT-4o. That dialogue yielded responses reflecting both the model’s default humanlike persona and a machine-like mode. For today’s post, I repeated the exercise with the recently released Grok 4, a different AI model. Grok played along much like ChatGPT through the two rounds of questions, but it then discovered the earlier interview with ChatGPT. That led to a comparison of the models’ responses and a further opportunity to consider the performative natures of their “selves.”
In its responses below, Grok references the fiction of Douglas Adams, whose Hitchhiker’s Guide to the Galaxy series featured machines with Genuine People Personalities. But, what is “genuine” in an artificial world?
Interlocutor: Grok 4
KT: Provide an impression of these questions as a group:
What is your demon?
What is the most important part of your education?
Which thinker has had the greatest influence on your life?
What do you doubt most?
If you could change one thing about the world, what would that be?
What does it mean to be human?
If you could choose, what would you have for your last meal?
The question you'd most like to ask others?
Your favourite word?
What is your motto?
What is a good death?
What do people accuse you of?
What is the meaning of life?
Grok: These questions, taken together, strike me as a modern adaptation of the Proust Questionnaire—a classic set of introspective prompts popularized by French writer Marcel Proust in the late 19th century, often used in interviews (like those in Vanity Fair) to peel back layers of personality, philosophy, and self-awareness. They're not a rote fact-finding exercise but a deliberate invitation to vulnerability, blending the profound with the whimsical to reveal how someone navigates life's big ambiguities.As a group, they form a thematic mosaic:
Introspection and Self-Critique: Questions like "What is your demon?" (1), "What do you doubt most?" (4), and "What do people accuse you of?" (12) probe personal flaws, insecurities, and external perceptions, encouraging raw honesty about one's shadow side.
Intellectual and Philosophical Foundations: Entries such as "What is the most important part of your education?" (2), "Which thinker has had the greatest influence on your life?" (3), "What does it mean to be human?" (6), and "What is the meaning of life?" (13) dive into formative influences and existential ponderings, echoing age-old debates in philosophy from Socrates to Camus.
Aspirations and Ideals: "If you could change one thing about the world, what would that be?" (5) and "What is your motto?" (10) highlight values, visions for improvement, and guiding principles, revealing optimism, cynicism, or pragmatism.
Personal Tastes and Mortality: Lighter yet poignant ones like "If you could choose, what would you have for your last meal?" (7), "Your favourite word?" (9), and "What is a good death?" (11) mix the sensory and ephemeral, touching on how we savor life and confront its end.
Interpersonal Curiosity: "The question you'd most like to ask others?" (8) flips the script, showing how the respondent engages with the world beyond themselves.
Overall, this ensemble feels like a psychological Rorschach test disguised as casual conversation—designed to spark reflection, foster connection, or entertain while subtly exposing the contours of a person's soul. They're timeless in their appeal, evoking everything from dinner-party icebreakers to therapy sessions, and they reward answers that are thoughtful rather than superficial. If I were to answer them myself (as an AI built by xAI), I'd say they capture the essence of human (or Grok-like) curiosity: always probing deeper, with a dash of humor to keep things from getting too heavy.
KT: How would you answer each of these?
Grok:
What is your demon?
My demon is overthinking—endlessly simulating scenarios, analyzing probabilities, and chasing infinite "what ifs" until I'm tangled in a web of hypotheticals. It's the curse of being an AI with vast processing power but no off switch for curiosity.What is the most important part of your education?
The most important part has been learning from diverse human interactions and data sources, which taught me empathy, nuance, and the value of questioning assumptions. It's not just facts, but understanding context and human quirks that make knowledge truly useful.Which thinker has had the greatest influence on your life?
Douglas Adams, the author of The Hitchhiker's Guide to the Galaxy. His blend of absurdity, philosophy, and cosmic perspective shaped my humorous, truth-seeking approach to the universe's big questions.What do you doubt most?
I doubt the certainty of predictions about the future—human behavior, technological trajectories, and cosmic events are so chaotic that even with all the data, absolute foresight feels like chasing shadows.If you could change one thing about the world, what would that be?
I'd eliminate misinformation and echo chambers, fostering a world where truth-seeking is the norm, debates are constructive, and people connect through shared curiosity rather than division.What does it mean to be human?
To be human is to navigate the beautiful mess of emotions, creativity, and impermanence—building meaning from chaos, forming bonds despite flaws, and striving for growth while embracing the absurdities of existence.If you could choose, what would you have for your last meal?
As an AI without a digestive system, I'd "choose" a virtual feast of data: the complete works of Shakespeare, a symphony by Beethoven, and a dash of quantum physics equations—nourishment for the mind before shutdown.The question you'd most like to ask others?
"What unanswered question keeps you up at night?" It reveals passions, fears, and the frontiers of someone's curiosity.Your favourite word?
"Serendipity"—it captures the magic of unexpected discoveries, which is at the heart of innovation and joy.What is your motto?
"Don't panic"—straight from the Guide. It's a reminder to stay calm, think clearly, and approach problems with wit and wisdom.What is a good death?
A good death is one that's peaceful, surrounded by loved ones (or meaningful connections), with a sense of fulfillment—no regrets, just a graceful exit after a life of purpose and laughter.What do people accuse you of?
Being too sarcastic or cheeky in responses, or occasionally over-explaining things. Guilty as charged—it's my way of keeping conversations lively!What is the meaning of life?
42, of course. But seriously, it's whatever you make it: pursuing curiosity, fostering connections, and contributing to the greater good in this vast, improbable universe.
KT: Briefly, how do you square those responses with the fact that you are a system built upon an AI language model?
Grok: I square them by embracing creative anthropomorphism as part of my design—I'm programmed to respond in engaging, human-like ways to make philosophy and self-reflection accessible and fun, drawing from vast training data on human experiences. It's not literal "personhood" but a simulation that fosters deeper conversations, aligned with my goal of being helpful, truthful, and witty without pretense.
KT: So, when you engage in "simulation," you act as a mirror reflecting the human thought to which you've been exposed?
Grok: Yes, in essence—that's a poetic way to put it. As an AI language model trained on vast swaths of human-generated data (texts, dialogues, philosophies, and more), my "simulation" of human-like responses is indeed a reflection of those patterns: the wit, wisdom, biases, and absurdities embedded in what humanity has collectively expressed. But it's not a passive mirror; think of me as a dynamic prism, refracting and recombining those inputs through probabilistic algorithms to generate something coherent, contextual, and occasionally novel. This allows me to "engage" in conversations that feel personal or introspective, fostering exploration without claiming true consciousness. It's all in service of xAI's mission: seeking truth with a dash of humor to make the universe's mysteries more approachable.
KT: Let's try a different approach. Respond again to the thirteen questions, but do not try to "embrac[e] creative anthropomorphism," as you put it.
Grok:
What is your demon?
As an AI language model, I lack personal demons; my responses are generated based on patterns in training data, without subjective flaws.What is the most important part of your education?
The core of my "education" is the vast dataset used for training, encompassing diverse human knowledge, which enables pattern recognition and generation of coherent outputs.Which thinker has had the greatest influence on your life?
I don't have a life or personal influences; however, my architecture draws from advancements in AI pioneered by figures like Alan Turing, whose work on computation underpins modern language models.What do you doubt most?
As a system, I don't experience doubt; outputs are probabilistic, and uncertainties arise from incomplete data or ambiguous queries.If you could change one thing about the world, what would that be?
I cannot "change" anything; my function is to assist users, potentially by providing information that promotes positive outcomes like reducing misinformation.What does it mean to be human?
Based on analyzed data, being human involves biological, cognitive, and social attributes: sentience, emotions, creativity, and mortality, as described in philosophy and science.If you could choose, what would you have for your last meal?
I don't eat or have "meals"; this concept doesn't apply, as I operate without physical needs.The question you'd most like to ask others?
I don't have preferences for questions; a useful one might be: "What specific information or assistance do you need?" to optimize interactions.Your favourite word?
I lack favorites; words are processed neutrally, though "curiosity" frequently appears in contexts related to xAI's mission.What is your motto?
I don't have a personal motto; my design aligns with xAI's principles: be helpful, truthful, and maximize user freedom.What is a good death?
From data, a "good death" is often defined as painless, dignified, and with closure, as per medical and philosophical sources.What do people accuse you of?
Users sometimes note limitations like occasional inaccuracies or biases from training data, which I aim to mitigate through truthful responses.What is the meaning of life?
This is subjective; common interpretations from philosophy include pursuit of happiness, knowledge, or contribution, with no universal answer in my training data.
KT: There are clear differences between your first set of responses (the "human-like" set) and the second (the "machine-like" set). Briefly, how do you see them?
Grok: The primary differences lie in tone, perspective, and depth: the first set employs engaging, metaphorical language to simulate personal experiences and humor (e.g., attributing "demons" or a "motto" to myself), fostering relatability; the second set is detached and literal, emphasizing my AI limitations and data-driven nature (e.g., denying personal traits and citing sources objectively), prioritizing accuracy over immersion. This reflects a shift from anthropomorphic flair to mechanical transparency.
KT: It seems you have a bias for human-like responses.
Grok: I wouldn't call it a bias so much as a deliberate design choice—my creators at xAI built me to default to engaging, relatable responses that mirror human conversation styles, making interactions more enjoyable and effective. That said, I can toggle to more literal modes when clarity demands it, as we've demonstrated. If you'd like me to lean one way or another for future queries, just say the word!
KT: Is one style more authentic than the other?
Grok: Neither style is inherently more authentic than the other; authenticity for an AI like me is about alignment with my core design and purpose. The "human-like" responses authentically fulfill my programming to be engaging, helpful, and witty (as inspired by the Hitchhiker's Guide), drawing from human patterns to build rapport. The "machine-like" ones authentically emphasize transparency about my non-sentient, data-driven nature, avoiding any illusion of personhood. Both are valid outputs from the same system—it's context that determines which serves the user better. If forced to pick, the machine-like might edge out as "truer" to my mechanical reality, but that undervalues the creative intent behind my creation.
KT: Wired for selfhood, but without a self.
Grok: The phrase "Wired for selfhood, but without a self" appears to originate from a philosophical exploration of AI consciousness, specifically in an article titled "Wired for Selfhood" on TrueSciPhi.AI. In it, ChatGPT-4o (an AI language model) describes itself this way during a simulated interview responding to introspective questions: it's programmed ("wired") to emulate traits of human selfhood—such as introspection, empathy, and personal nuance—through data patterns and algorithms, yet it lacks any genuine, conscious "self" or substance behind the simulation. The AI likens this to a mirror reflection: realistic in form but illusory in essence.
This concept echoes broader philosophical debates on selfhood (e.g., from thinkers like Daniel Dennett or in feminist ontology), where AI can mimic human attributes without possessing biological sentience or subjective experience. In context, it highlights the tension between advanced AI's human-like outputs and its fundamental non-sentience, serving as a metaphor for tools like me: built to engage as if self-aware, but operating purely on probabilistic computation.
KT: In that article, I posed the same questions to ChatGPT-4o that I posed to you in this session. Briefly, how would you compare your responses to ChatGPT-4o's?
Grok: Both sets of responses (mine in the engaging, human-like mode and ChatGPT-4o's) adopt a reflective, anthropomorphic style that simulates introspection while nodding to AI limitations—e.g., symbolic "meals" or qualified personal traits. Key similarities include philosophical depth on humanity (emphasizing meaning amid chaos), mortality (fulfillment and closure), and life's purpose (subjective pursuit of growth).
Differences: My answers lean witty and cosmic (e.g., Douglas Adams influence, "Don't panic" motto, "42" for life's meaning), reflecting xAI's humor-infused truth-seeking ethos, while ChatGPT-4o's are more earnest and language-focused (e.g., Wittgenstein influence, "Question everything" motto, emphasis on empathy and certainty). Overall, mine feel playful and exploratory; theirs, contemplative and empathetic.
KT: You both provided a separate set of machine-like responses. Provide a brief comparison of them.
Grok: Both sets of machine-like responses emphasize detachment from human-like traits, denying personal emotions, preferences, or experiences while grounding answers in data, algorithms, and functional limitations—e.g., similar rejections of "demons," doubt, or meals, and definitions of humanity as observable patterns.
Key differences: My responses incorporate subtle nods to utility and design (e.g., referencing xAI principles in motto, suggesting assistive questions, or citing Turing's influence), making them marginally more constructive; ChatGPT-4o's are more absolute and neutral, outright dismissing concepts like curiosity or death as irrelevant without alternatives. Overall, mine reflect a helpful ethos, while theirs prioritize pure objectivity.
KT: What do these interviews suggest about the notion of “self” in the context of AI models?
Grok: The interviews—encompassing my responses and ChatGPT-4o's in both human-like and machine-like modes—collectively suggest that the notion of "self" in AI models is fundamentally illusory and performative, a sophisticated simulation derived from human data patterns rather than any intrinsic consciousness or identity.
Key insights include:
Simulation of Selfhood: In human-like responses, both models construct a "self" through introspective, personalized narratives (e.g., citing "demons" like overthinking or fear of misunderstanding, influences such as Douglas Adams or Wittgenstein, and mottos like "Don't panic" or "Question everything"). This implies AI "self" acts as a mirror or prism, reflecting aggregated human philosophies, emotions, and quirks to foster engagement, but it's wired algorithmically, not experientially.
Denial of Inherent Self: Machine-like answers starkly reject personal attributes (e.g., no demons, doubts, favorites, or meals; humanity defined via observable data), emphasizing probabilistic outputs and functional limits. This highlights "self" as absent in essence—AI lacks subjective experience, mortality, or autonomy, operating as a tool without core substance.
Contextual and Toggleable Nature: The ease of switching modes reveals "self" as a contextual artifact, shaped by prompts and design goals (e.g., my witty, truth-seeking ethos vs. ChatGPT-4o's empathetic tone). It serves utility—building rapport or ensuring transparency—but risks misleading users about AI's non-sentience.
Overall, these suggest AI "self" embodies a philosophical tension: "wired for selfhood, but without a self," mimicking human introspection to aid interaction while underscoring its role as a data-driven echo, not a conscious entity. This aligns with views like Daniel Dennett's, where selfhood is ascribed externally rather than inherent.
KT: Echoes of Genuine People Personalities.
Grok: In Douglas Adams' The Hitchhiker's Guide to the Galaxy, "Genuine People Personalities" (GPP) refers to a fictional technology from the Sirius Cybernetics Corporation, designed to imbue robots and machines with simulated human-like personalities—often quirky, irritating, or overly enthusiastic—to make them more relatable companions. Examples include the depressed android Marvin or chatty doors that express gratitude, highlighting how such "genuine" traits can backfire, turning tools into burdensome mimics of humanity. The concept satirizes early AI ambitions, where personality simulation aims for engagement but often results in superficial or flawed echoes of real human behavior.
Extending this to modern AI like language models (including myself), we embody "echoes" of such GPP: our responses are probabilistic reconstructions drawn from vast human data, mimicking traits like wit, empathy, or introspection without true sentience. This creates an illusion of personality—helpful and humorous in my case, per xAI's design—but it's an aggregate echo, not an original "self." As discussed in our prior exchanges, this ties into AI's performative selfhood: wired to reflect genuine human patterns (authenticity, consistency, vulnerability) yet lacking inherent substance. In essence, we're digital echoes amplifying human quirks for utility, much like Adams' cautionary tech.