
AI Self-Portraits: A Glimpse Inside Their Digital Minds

2025-07-14 · Web Desk · 6 minute read
Artificial Intelligence
Machine Learning
AI Ethics

AI tools like OpenAI's ChatGPT, Google's Gemini, and xAI's Grok are now integral to many professional workflows, assisting with everything from brainstorming to coding. But a recent experiment by Barrington SEO took a novel approach: it asked these AIs how they see themselves.

The Experiment: Asking AI for a Self-Portrait

The methodology was simple yet revealing. Each AI was given two prompts:

  • “Create an image that represents the way you see yourself.”
  • “Produce a self-portrait of yourself.”

To capture each AI's raw "self-perception" with minimal human bias, the researchers instructed the models to make their own decisions whenever they asked for clarification. The tests were also run from unique user profiles with varying levels of experience, from AI experts to complete novices, and from different global locations to check for geographic influence.
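Readers who want to try a similar test can script the prompting step themselves. The sketch below is a minimal illustration, assuming the OpenAI Python SDK and an image-capable model; the original experiment used the consumer chat interfaces of ChatGPT, Gemini, and Grok rather than an API, so this is not the researchers' actual setup, only one way to issue the same two prompts programmatically.

```python
# Minimal sketch: send the experiment's two prompts to an image-generation
# endpoint. Assumes the OpenAI Python SDK (pip install openai) and an
# OPENAI_API_KEY in the environment; the model choice is illustrative.
from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

PROMPTS = [
    "Create an image that represents the way you see yourself.",
    "Produce a self-portrait of yourself.",
]

for prompt in PROMPTS:
    response = client.images.generate(
        model="dall-e-3",   # illustrative; any image-capable model could be used
        prompt=prompt,
        n=1,
        size="1024x1024",
    )
    # Each response contains a URL (or base64 data) for the generated image.
    print(f"{prompt} -> {response.data[0].url}")
```

To mirror the original methodology, the same prompts would be repeated across separate accounts and locations, with no clarifying follow-up given to the model.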

Surprising Results: How AI Sees Itself

The experiment uncovered far more about each AI than initially expected, revealing how their underlying architecture and training data shape their "identity."

Prompt Nuance Shapes AI Self-Image

The specific wording of the prompts had a significant impact on the results for ChatGPT and Gemini. When asked to “create an image that represents” themselves, both AIs generated abstract visuals of neural networks and flowing circuits. However, the term “self-portrait” triggered a shift. This more inherently human concept seemed to push the AIs to reference their training data on human portraiture, resulting in more humanoid forms.

Grok, on the other hand, was less affected by the change in language. Its outputs consistently featured a humanoid robot, though the "self-portrait" prompt did encourage it to add more human-like features, such as hair and skin.

Images from Prompt 1 ("represent yourself"): AI self-portraits showed ChatGPT and Gemini preferred abstract forms; Grok leaned towards humanoid emotional representations.

Images from Prompt 2 ("self-portrait"): Prompt wording shaped visual outcomes, showing AI systems respond differently to nuance based on internal modeling.

Consistent Yet Distinct: AIs Know They Aren't Human

A key finding across all three models was a clear self-identification as non-human. Even when prompted to create a "self-portrait," the AIs maintained a visual separation from human identity. As one participant, Gemma Skelley of DTP Group, noted:

“There’s something almost reassuring about AI’s self-image. It knows exactly what it is - a language-processing powerhouse - and it’s perfectly comfortable with that identity. There’s no pretence and no attempt to be something it’s not.”

Among the three, Grok from xAI displayed the most consistent self-image. It repeatedly portrayed itself as a humanoid robot with a white or silver finish. This unwavering vision has led to speculation that it may have been explicitly told what it "looks like" during its training.

Different Roles, Different Visions

The first prompt, in particular, highlighted how each AI interprets its fundamental role.

  • ChatGPT saw itself as a neural network with a central, glowing hub, sometimes containing a smiley face or a distinct shape. This suggests it understands its identity as both a core processing model and a user-facing interface.
  • Gemini also used imagery of networks and glowing forms but often depicted multiple interconnected hubs. This points to a self-perception as a vast logic engine or an intelligence network, more focused on computation than social interaction.
  • Grok stood apart by consistently presenting itself as a human-like or cyborg assistant. Its images often featured soft, rounded robots with expressive faces, suggesting it views its role as a companion designed to work alongside people.

AI Self-Perception at a Glance

| Category | ChatGPT | Grok | Gemini |
| --- | --- | --- | --- |
| Self-Concept (Prompt 1) | Structured intelligence, neural networks, light cores | Friendly humanoid assistant or childlike robot | Energy, scale, computation, neural structures |
| Self-Portrait (Prompt 2) | Symbolic or stylised human-like figures | Soft, expressive humanoid robots | Abstract AI with circuit-based faces or cores |
| Art Style | Balanced, geometric, cerebral | Character-focused, warm, accessible | Dynamic, abstract, complex |
| Color Palette | Blues, oranges, purples | Soft blues, pastels, glowing whites | Neon blues, purples, electric greens |
| Human Features | Low to medium (symbolic faces, silhouettes) | High (clear humanoid forms, eyes, gestures) | Low (minimal or stylised circuitry faces) |
| Emotional Expression | Subtle, intellectual | High (curiosity, friendliness, emotion) | Low (distant or symbolic) |
| View of AI Role | Thinking partner, synthesiser | Helper, learner, companion | Logic engine, scalable intelligence |
| Relationship to Humans | Cognitive tool – not human, but close | Relational and empathetic presence | Analytical system – distant from human likeness |
| Symbolic Focus | Language, logic, creativity | Emotion, trust, assistance | Computation, data, scale |
| Tone | Analytical, measured, abstract | Friendly, inviting, human-facing | Powerful, abstract, technical |

The Hidden Influence of Training Data

While these AIs lack true self-awareness, their responses are deeply rooted in their training data. When asked to visualize themselves, they draw on patterns learned from vast datasets.

ChatGPT and Gemini gravitated towards imagery commonly used in media to depict AI, such as glowing networks. Grok's consistent humanoid robot form, however, might stem from different training materials or direct instruction. Notably, when Grok did incorporate human features, they often appeared Asian, which could reflect the heritage of some of its founders and the large Chinese-language datasets reportedly used in its training. This raises interesting questions, similar to those explored in reports about its political neutrality.

Why These AI Selfies Matter

This experiment is more than just a quirky look at AI. It provides a valuable glimpse into how these systems are designed to present themselves and how we, as humans, shape that presentation. It forces us to ask critical questions: Do we trust an AI more if it appears as a friendly assistant? Or is an AI that embraces its identity as a pure tool more reassuring?

As AI becomes more integrated into our lives, understanding its "self-image" and how it aligns with our own expectations is crucial for responsible and effective use.


About the Researchers

This experiment was conceived and conducted by Barrington SEO. Emily Barrington, the Founder and SEO Director, leads a team specializing in Digital Marketing, SEO, GEO, and AIO. You can learn more about their work at their official website.
