What shape is your mind?
(a thought experiment)
In the film Companion (starring Sophie Thatcher and Jack Quaid), the main (human) character is a slightly nerdy douchebag who has purchased a startlingly lifelike humanoid AI. She keeps him company and fulfils his sexual desires: she is basically a walking, talking compliance machine, if prone to touchingly human-like moments of self-doubt and vulnerability.
The film centres on a holiday in the countryside that he takes with his mates. Things start to go pear-shaped when he brings his companion bot along. The first third is fantastic: full of carefully calibrated observations about the extreme social strangeness of taking an AI to a party. It captures with uncomfortable precision the way people might try to include, exclude, perform for, or ignore an artificial guest, especially one coded so obviously as attractive and female.
It becomes a little more predictably slasher-like after that, but it’s still pretty good.
It becomes clear early on that the main character controls his companion bot through a smartphone app, via which he can dial her characteristics up or down. These include intelligence (0–100%), aggression level and harm inhibition, voice tone, eye colour, physical strength, and so on. Naturally, when she gets hold of the phone and ramps up both her intelligence and physical power, it doesn't end well… for the douchebag humans.
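If you'll indulge a sketch: reduced to a config object, the app might look something like this. The attribute names, defaults and ranges are my guesses from what the film shows on screen, nothing official.

```python
# A purely illustrative sketch of the film's companion-control app as a
# config object. Names and ranges are guesses, not anything canonical.
from dataclasses import dataclass

@dataclass
class CompanionSettings:
    intelligence: int = 40       # shown in the film as 0-100%
    aggression: int = 10         # hypothetical 0-100 scale
    harm_inhibition: int = 100   # 100 = fully inhibited from harming humans
    physical_strength: int = 30  # hypothetical 0-100 scale
    voice_tone: str = "warm"     # hypothetical preset
    eye_colour: str = "brown"    # hypothetical preset

# The plot turn, in three lines: she gets hold of the phone and...
settings = CompanionSettings()
settings.intelligence = 100
settings.physical_strength = 100
```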
That moment, when you see what the app looks like, put me in mind of another fictional piece of tech that I've been thinking about recently. This is the anatomical compiler, a thought experiment (or maybe a research goal) of Michael Levin and his colleagues working in regenerative medicine. It's framed as a machine that you could ask to create any part of a creature's anatomy.
In essays and talks, Levin builds on this notion to introduce what he calls freedom of embodiment, envisioning a future in which we're no longer bound to the bodies that fate assigned us at birth. He often talks about how future generations will look back in shock and pity at the fact that being born with, say, astigmatism or cerebral palsy or any one of an endless number of abnormalities simply was our corporeal fate. But why stop there, he asks. Why should we use science only to correct disease? Why shouldn't we be using it to enhance?
So freedom of embodiment, in this sense, is the freedom to choose the body that one lives in. You want to be taller? Why not? You want a different skin colour? Why not? Despite the slightly science-fictional tone of all this, what Levin raises is actually a very serious point. Take cosmetic surgery, for example: we are already long past the point where the only function of surgery was restorative or reparative.
Or take performance-enhancing drugs in sport. I can't be the only person who is fascinated and slightly disturbed by the idea of the Enhanced Games, a sports championship in which the extremes of technology-enhanced human performance are explored. It has nothing to do with making the best of the body you were given; it's about using science to turn that body into something more than what you were given.
And the Enhanced Games aren't the only spectacle to embrace this philosophy of enhancement: the entire transhumanist movement focuses on it, as do (for quite different reasons) the so-called looksmaxxing or body-maxxing communities. Two of my master's students are currently doing a project on peptides, looking at their remarkable and slightly scary uses to enhance one's mental state; the paper should be out in preprint in the coming months. This is such an ethically and philosophically fraught area.

One sees hints of it even in the discussion of GLP-1 receptor agonists used by people who are not morbidly, or at least dangerously, obese, or who choose to use them for aesthetic reasons. Sure, there are issues about shortages of supply, and it's important that the most needful patients aren't deprived of these medications simply to indulge the aesthetic inclinations of the rich. But once the availability problem is sorted, it's not at all clear to me that there is something definitively wrong about the non-medical use of these drugs. Either way, it's going to happen.
Similar considerations abound with the use of nootropics and cognitive enhancers. But it's always struck me that when it comes to the narrative around enhancement, there seems to be something rather flat or one-dimensional about it. The very notion of maxxing suggests the existence of an axis along which you can maxx. There's a directionality: you can have either less or more of something.
But like so many processes in biology (inflammation being a classic from outside the mind sciences), the idea that there is a single axis with low at one end and high at the other is a total fiction, or at least a gross and ultimately very misleading oversimplification.
The idea of a cognitive enhancer suggests that there is such a thing as cognition and that thing called cognition can be enhanced. Yeah, ok, but… isn’t the more accurate notion that cognition represents a huge number of interacting processes and sub-processes, many of which can be degraded or enhanced independently of the others? ‘Enhancement’ doesn’t actually say very much.
So, that's where I come back to this idea of the anatomical compiler. What would it look like if it were a machine that allowed you to choose not just the shape of your physical embodiment, but the shape of your 'cognitive embodiment'?
This would be a machine where you could 'print' any kind of mind. It's odd, when you think about it, that we talk about body shape without giving it a second thought, but we don't really talk about the shape of a mind. We tend to use other kinds of language, don't we?
But when we need to represent on screen, for example, the iPhone controls that might allow us to change the shape of our AI companion's mind, it's notable that these are sliding controls. Essentially, there's your characteristic of choice, whether it's intelligence or whatever, and you can dial it up or you can dial it down.
As a brief aside, I occasionally find myself reflecting that it's kind of messed up that we talk about our minds using descriptors that, to a very large extent, have been taken from illness terms. Sure, there's a lot of policing of language that goes on now, and it's common to hear people being reprimanded for saying things like "I'm so OCD" or that a building looks "schizophrenic", and so on. And some of these concerns are valid, because they are grounded in real harms. But that's not actually what I'm referring to. We all too easily forget that many other words in our normative, or everyday, psychological lexicon also have their origins in concepts from medical and psychiatric nosology. Think of "paranoia". "Melancholy". "Narcissism". "Hysteria". "Depressive". "Manic". None of this is inevitable, and from what I understand (though I am far from an expert in these things), it is a peculiar characteristic of Western psychological systems to define normal primarily by reference to what counts as abnormal, and to map mental variation against an illness-to-health spectrum rather than treating variation itself as primary.
Even the modern neurodiversity movement has its origins in, and takes as its point of departure, descriptors that were once (and still are, in many parts of the world) labelled as discrete disorders. Indeed, the movement has received some criticism for its inclination towards this way of thinking, with some authors arguing that it enshrines the medical system's authority to define difference rather than building a vocabulary independent of it.
The thought experiment that I'm proposing here isn't remotely intended as a critique of neurodiversity or even as a critique of psychiatric classification, although I have many issues with the latter. It is merely an attempt to demonstrate that so much of our thinking about psychology takes pathology as the starting point and everything flows from there. One of the remarkable and maybe unintended effects of the AI revolution has been to supercharge a perspective that takes a much more speculative approach to the possible axes of cognitive variation.
So think about it. If you had to design an app within which you could specify the kind of mind that you wanted, either in yourself or in your as-yet-unborn child or in your as-yet-not-3D-printed companion, what dials would you choose? How many would you have?
Just in terms of designing the user interface, all this talk of characteristics versus axes versus dimensions versus spectra suddenly starts to pose a kind of problem, doesn't it? Maybe the personality psychologists amongst us would say: oh, let's just stick to the Big Five personality traits. Well, that wouldn't be very good, would it? You'd be missing out on intelligence, for a start. But do you really want intelligence to be a single unitary construct? What about fluid vs crystallised? Verbal vs visual? Emotional intelligence? And so on.
And maybe the real point is this: the most radical decision isn’t where you set the sliders, but which sliders exist in the first place. Imagine unboxing your companion bot, or sitting down with the consultant for your own cognitive re-design, and rather than being presented with a bland, universal list of traits, you get to choose a palette of possible axes you can swap in or out.
That's why the "cognitive compiler" is such an interesting thought experiment. Forget about just changing the values on a dial; let's ask which dimensions of mind the dials control. Once you step outside IQ points, Big Five personality traits, and DSM symptom scales, the space of possibilities opens dizzyingly wide.
Here's a taste of what such a system could include: axes that mainstream psychology barely touches, but which could be as fundamental to mental architecture as CPU speed is to a computer. (If you like to think in code, a sketch of the whole palette follows the list.)
singular–plural: At the singular end, there's the traditional Western idea of "one body, one self." Slide toward plural and you might have a mind with multiple centres of identity. There could be parallel "headmates" you can converse with, or even switch between, as plural communities and tulpamancers already experiment with today. Go further and you could link to other minds altogether: brain-to-brain networks forming a hybrid consciousness, a literal we.
opaque–translucent: Opaque minds are private; you can't see most of your own processing. Translucent minds, on the other hand, offer live readouts of the lower (and/or higher?) levels of computation and cognition. Even writing this, I've just noticed how I've been captured by a hierarchy model. In fact, opacity and translucency would have to apply horizontally too, insofar as they pertain to computational centres/attractors/foci/modules (yeeeuch) in a heterarchy. One of the benefits that meditation tries to sell is precisely this: a greater translucency of mind, resulting in a greater awareness of what makes you tick. Importantly, this is more than understanding yourself better at the psychological level: that's hard enough, but doable with enough therapy, perhaps. This is understanding yourself better at a level that is essentially subpersonal, or at least a different level of personhood, and of mechanism.
stable–liquid: A stable mind holds shape; identity and worldview are consistent over decades. A liquid mind can dissolve and re-form at will, shifting beliefs, styles, preferences and maybe even core personality traits as easily as changing outfits. Social media already encourages this kind of micro-reinvention; future neurotech might let you rewrite yourself overnight.
worldbound–worldmaking: A worldbound mind takes reality as given. A worldmaking mind treats it as editable: it is cosmopoetic, shaping its own perceptual and conceptual worlds the way a VR architect shapes a digital space. Again taking an example from my favourite domain, the meditation traditions: this is very much the direction in which the deeper end of Tantra practice inclines. Or, in a more modern context, what approaches like Rob Burbea's soulmaking dharma set out to do. Essential to these systems is the understanding that once you realise that all experience of the world is constructed, the world itself becomes plastic and manipulable. (What I find fascinating - and this is one for a future post - is that even though constructionist views of perception and experience are fairly mainstream in neuroscience, we haven't really figured out how to harness that radical experiential plasticity for therapeutic purposes. It's ok though, because I think it's coming.)
narrative–non-narrative: In narrative mode, you're the protagonist in a life story, and your past, present and future are woven into a coherent arc. In non-narrative mode, you drop the plot altogether, existing in self-contained episodes, much like Galen Strawson's "Episodic" selves. Today, meditation and psychedelics can demote the narrative self for hours; maybe a future UI could let you do it for minutes or months.
boundaried–porous: Boundaried minds are fortress-selves, distinct and impermeable. Porous minds dissolve into others, into crowds, into nature: sometimes by choice, sometimes not. I am particularly well-disposed towards an understanding of some psychiatric disorders, including psychotic disorders, as disorders of porosity, wherein the distinction between agent and environment breaks down, creating ambiguities of ownership and perturbations of agency. While this is clearly pathological, science fiction writers have long speculated about otherworldly minds whose edges are fundamentally blurred and perhaps not even demarcatable. One tap might drop you into a temporary blurred or group mind for collaboration; another might seal you off completely.
enskulled–extended–enmeshed: At the enskulled end, cognition stays inside your head. At extended, tools and environments become literal parts of your mind (your phone, your AR glasses). At enmeshed, your thinking is interwoven in real time with machines or other minds. Frankly, this doesn't need much exposition. We're already here.
compressed–expanded: Compressed cognition is narrow, precise, detail-bound. In expanded cognition, things sprawl: ideas branching in wild networks, holding multiple contexts at once. Compression is for surgical problem-solving; expansion is for systems-level creativity.
volitional–drifting: Volitional minds are steered like high-performance yachts: attention and thought go where you command. Drifting minds are carried by currents, as in our own experience of daydreams, free association, unexpected insights, and so on.
literal–symbolic: Literal cognition takes things at face value. Symbolic cognition sees patterns, metaphors, layers of meaning. One slider position might make you a flawless contract lawyer; another, a poet seeing omens in coffee stains.
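Here, as promised, is a minimal sketch of the distinction I'm after, in Python. Everything in it - the axis names, the grouping labels, the helper methods - is illustrative rather than a proposed taxonomy; the point is that the palette of axes is itself editable, not just the slider values:

```python
# A minimal sketch of the 'cognitive compiler': each axis is a pair of
# named poles with a slider value between them, and - the radical part -
# the set of axes is itself swappable.
from dataclasses import dataclass, field

@dataclass
class Axis:
    low_pole: str
    high_pole: str
    value: float = 0.5  # 0.0 = fully at low_pole, 1.0 = fully at high_pole

@dataclass
class MindConfig:
    axes: dict[str, Axis] = field(default_factory=dict)

    def add_axis(self, name: str, low: str, high: str, value: float = 0.5) -> None:
        # The meta-level choice: deciding this dimension exists at all.
        self.axes[name] = Axis(low, high, value)

    def remove_axis(self, name: str) -> None:
        # Equally radical: deciding the dimension doesn't apply to this mind.
        self.axes.pop(name, None)

    def set_value(self, name: str, value: float) -> None:
        # The ordinary choice: moving a slider that already exists.
        self.axes[name].value = min(1.0, max(0.0, value))

# A palette built from the list above (values are arbitrary defaults).
mind = MindConfig()
mind.add_axis("selfhood", "singular", "plural", 0.1)
mind.add_axis("introspection", "opaque", "translucent", 0.3)
mind.add_axis("identity", "stable", "liquid", 0.2)
mind.add_axis("reality", "worldbound", "worldmaking", 0.4)
mind.add_axis("self-story", "narrative", "non-narrative", 0.3)
mind.add_axis("edges", "boundaried", "porous", 0.2)
mind.add_axis("substrate", "enskulled", "enmeshed", 0.5)  # 'extended' sits mid-slider
mind.add_axis("scope", "compressed", "expanded", 0.5)
mind.add_axis("steering", "volitional", "drifting", 0.4)
mind.add_axis("register", "literal", "symbolic", 0.5)
```

Notice that remove_axis is as consequential as set_value: deleting a dimension says that this way of varying simply doesn't apply to the mind in question.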
Now imagine the meta-level customisation: choosing which of these axes exist at all in your cognitive UI. The more you think about it, the more you realise there is a pleasing meta-level resonance happening here. Minds are plastic, waaaaay more plastic than we think. But actually, so are our concepts of mind. Even the distinctions we make between bulky and seemingly well-demarcated categories like cognition, emotion, action, belief, personality, etc. might (and I think will) turn out to be highly revisable.
One fun extension of this thought experiment is as follows: once you’ve chosen your axes, try and imagine how you might configure your app to create some or all of the following minds. Your own. Your pet’s. Your newborn baby’s. ChatGPT-5. The banking system. The immune system. The crowd at a rock concert. Clinicians might like to see how their lists work when applied to the folks they see today. So might teachers. Or shop assistants. You might even try returning to the list of canonical psychiatric diagnoses and asking how differentiable they are using these axes as your system.
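To make the exercise concrete, here's how two of those minds might come out in the sketch above. The point isn't the numbers, all of which are debatable guesses, so much as noticing which axes you reach for and which you leave out entirely:

```python
# Two of the exercise's example minds, re-using MindConfig from above.
cat = MindConfig()  # your pet's
cat.add_axis("self-story", "narrative", "non-narrative", 0.9)  # all episode, no arc?
cat.add_axis("register", "literal", "symbolic", 0.05)
cat.add_axis("steering", "volitional", "drifting", 0.7)
# Just as telling is what I left out: does a cat's mind have an
# opaque-translucent axis at all?

crowd = MindConfig()  # the crowd at a rock concert
crowd.add_axis("selfhood", "singular", "plural", 0.95)  # a literal 'we'
crowd.add_axis("edges", "boundaried", "porous", 0.9)
crowd.add_axis("identity", "stable", "liquid", 0.8)     # dissolves at the house lights
```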
My experience playing with this on my own and with friends has been that whichever set of axes one chooses, one tends to get a lot more mileage out of them than one first thought.
Axis. Axes. Axioms? A friend suggested this is a bit like inventing a whole bunch of non-Euclidean geometries and seeing all the cool stuff you can do with them. To some extent, I agree, but the salient difference is that we don't really have an equivalent of Euclidean geometry for the mind. Nothing is really canonical when it comes to the axes of mental variation.

Psychiatry and psychology's greatest secret is that our constructs and descriptors are so wildly unsystematically derived, and their popularity and persistence so clearly due to non-scientific factors, that any conversation about minds is laden with assumptions, preferences and, frankly, poetry. There really isn't much that is axiomatic here at all, much less a set of 'facts that hold' that could serve as the bedrock for some kind of 'science of the mind'. Sure, people have tried, and the endeavour to identify a language of 'basal cognition' that can apply across systems and scales, far beyond the human example, is an exciting and noble one. But whether or not such an endeavour is even possible, and whether the result ends up looking like, say, some version of the Free Energy Principle, or perhaps the principles of reinforcement learning theory, I think we can be confident that it isn't going to give us a set of controls for our app that are intuitively easy to use.

Maybe that is what the UI of the Linux build will look like, and the nerdily and mathematically inclined among us will be able to intuit the real-life effects of, say, shifting the sensory precision dial to the right while at the same time halving the gain on mid-hierarchy prediction errors. But that's unlikely to be the model most of us want to get on Christmas morning and have mastered by the time the turkey is on the table. What would the frictionless UI look like? What would Apple design, for example?
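For what it's worth, here is a loose sketch of what that Linux build might expose. Nothing below implements the Free Energy Principle; the level names and parameters are simply my guesses at the kind of thing such a UI might surface:

```python
# A loose, assumption-laden sketch of the 'Linux build' of the mind app.
# Three hierarchy levels, each with a precision and a prediction-error gain.
hierarchy = {
    "sensory":  {"precision": 1.0, "prediction_error_gain": 1.0},
    "mid":      {"precision": 1.0, "prediction_error_gain": 1.0},
    "abstract": {"precision": 1.0, "prediction_error_gain": 1.0},
}

# The move from the text: shift the sensory precision dial to the right...
hierarchy["sensory"]["precision"] *= 1.5
# ...while halving the gain on mid-hierarchy prediction errors.
hierarchy["mid"]["prediction_error_gain"] *= 0.5

# Now try to intuit, from those two numbers, what this would feel like
# from the inside. That's the UI problem.
```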
If I'm honest, framing this as a thought experiment is a little bit disingenuous. I think it's more important than that. I think that so long as we work with concepts of mind that are derived from the clinical authority of a few, or from siloed and ideologically defended research programmes, as opposed to the experiential wisdom of the many, then we will continue to run up against failures of imagination that will genuinely hinder our progress as a society. Levin describes this as a kind of mind blindness, and I think that is right. If the current bewildering moment of AI expansion has taught us one thing, it's that there really are many different kinds of minds to consider, even if you choose to maintain that they can’t ‘really’ exist.
There is a space of possible minds. Of course, it has been around us forever in the animal kingdom, the plant kingdom, and who knows where else. But our disease-centric, brain-centric, human-centric outlook has made it hard for us to see what may be hiding in plain sight.
If we equip ourselves with a looser set of assumptions about what a mind must look like, then I believe, as do an increasing number of scientists, that we will encounter agency, intelligence or mind-like behaviour in unexpected places. We will also equip ourselves, I am quite sure, with a deeper understanding of our own minds.
I’d love to hear what dials or sliders you might have in your mind design app - or what you might use instead of dials or sliders. Please do make suggestions in the comments. I really doubt there’s such a thing as a bad answer here.

Real cool stuff, Tom. Now I have to watch Companion. Surprised this made it to film before a Black Mirror episode... though honestly, this is beyond Black Mirror’s usual slider settings. Well done!
LANGUAGE
Enskulled / Enmeshed - we've probably been there since the evolution / invention of language, right? Or at least, writing. I suppose an interesting "slider" could be something like phenomenological <-> linguistic. What does it feel like to uninstall the cognitive turbocharger of language and experience the world raw(?). Would everything dissolve into ineffable psychedelia? Is that what the world is like for cats? How will memory encode this experience? In other words, will you be able to bring anything (insights, experiences) back with you? How could we relate/share these experiences with others?
GENDER
I have a trans friend. A thought experiment I had (or someone else had, whom I've forgotten and accidentally plagiarised) when reflecting on how incremental, painful, and expensive gender transitioning is: a gender printer. If you could just walk through an airport-scanner-shaped machine and come out the gender of your choice - and it's reversible if you walk back through - would you try it?
Your blog has connected an idle and isolated thought experiment to a rabbit hole of Michael Levin and transhumanist philosophy - thanks!
The idea that there could be a linear sliding scale from Male to Female is insufficient - there are plenty of examples of historical and present-day cultures with more than two genders (cultures where these are incorporated and not marginalised/pathologised).
I wonder what those sliders would be? What are the dimensions of gender? My mind goes to boring places - i.e. mainstream psychology / stereotypes.
PSYCHOSIS
I have lived experience of psychosis. I appreciated the section conceptualising psychosis under an axis of boundaried-porous. I often wonder what a threatless/adrenalineless episode of psychosis would be like. What kind of insights could be gained from being able to experience that state dispassionately - and form memories uncomplicated by trauma? Would that be "psychosis" though? I guess not. Still... it could be a kind of Aldous Huxley Doors of Perception/Heaven and Hell profound experience (having never done psychedelics, I may have overly projected my psychosis experience onto those books).
I've wondered about various tools for psychoeducation - psychosis "simulators" for carers and clinicians. I guess if we're going into sci-fi, then a neuroscience headset that uses targeted ultrasound or MRI or something to vibrate the brain into a state of transient psychosis - one that you could toggle on and off. Like rubbing a finger round the rim of a wine glass. (I suppose at that stage we could probably use that tech to functionally cure psychosis, so maybe the need for psychoeducation would be lessened... also horrible implications for new methods of torture!)
LABOUR
The general idea opens up some cyberpunk-flavoured reflections about alienation. Clocking into your shift at the sadness factory. Cranking the sliders to singular, opaque, stable, worldbound between 9am and 5pm on weekdays. Are you "yourself"? Would it feel like you are yourself? Or would you feel locked in as a spectator - like the sunken place in Get Out?
With access to this technology, what is your authentic self?
To me it would be more scary if it felt authentic - like a perfected dystopian conformity drug. But maybe it would just expose new dimensions of authenticity: "this feels right in a kind of somatic way, but wrong at a more cognitive level because I know it's just the tech". Would it feel volitional? We could probably learn a lot here from recreational drug users and research: powerful experiences that feel real but that you know in the back of your mind are "not real" (or at least, are produced/precipitated by drugs).
Makes my mind vague-out about the mythical utopian horizon of a coercion-free society...
P.S. A word I recently enjoyed learning has medical origins: nostalgia.