Microphysics of prompting
Learning to prompt isn’t just a new skill. It’s a mirror of the kind of relationship we accept with power, language, and technology.

Maybe prompting has now become a new act of speech. But it is not just expression; it is operation. A prompt is an instruction, a gesture of delegation, a strange hybrid of desire and code.
In the language of speech act theory, this would be an illocutionary act: we don’t just say something about the world, we do something in the world through saying. In models like ChatGPT, that act is captured, transformed, and returned, but always mediated by filters, weights, and thresholds of legibility. (A reminder: the theory was written about people, and I believe it fits perfectly here.)
We might start with Wittgenstein: “meaning is use.” Language has no hidden essence; it takes shape in action, when you use your body to say something. In a model like ChatGPT, this becomes literal: prompts are parsed, tokenized, weighted. The model does not understand; it reacts. Language, here, is entirely performative. This is the moment where we separate ourselves from the machine: we have ethics and a judgment to make. How are we going to react, if we react at all? (Spoiler: not reacting is also an act.)
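That mechanical reaction can be sketched in a few lines. This is a toy illustration, not a real LLM pipeline: the vocabulary here is invented, while real models use learned subword vocabularies with tens of thousands of entries. The point it makes is literal: before anything else happens, the model sees numbers, not meanings.

```python
# Toy sketch: a prompt is split apart and mapped to ids before the
# model "reacts". No understanding is involved at this stage.
prompt = "meaning is use"

# Hypothetical toy vocabulary (real models learn subword vocabularies).
vocab = {"meaning": 0, "is": 1, "use": 2, "<unk>": 3}

tokens = prompt.split()                               # "parsed"
ids = [vocab.get(t, vocab["<unk>"]) for t in tokens]  # "tokenized"

print(ids)  # → [0, 1, 2]: the model only ever sees these integers
```

Anything outside the vocabulary collapses into `<unk>`: the system does not fail to understand you, it simply has no slot for you. That is legibility in miniature.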
But if prompting is use, it is also a technique of power.
Deleuze, reading Foucault, spoke of a microphysics of power: power lies not just in laws or institutions, but in the often invisible mechanisms by which behaviors, categories, and norms are shaped (meaning: you also hold power; it usually lies in what stands as representative of yourself and of others). Prompting is now one of those mechanisms. It is a lever. It chooses what is included, what is excluded, who speaks and who is spoken for.
In that sense, prompting is not neutral. It carries a politics of voice. Of authorship. Of agency. A prompt is a decision about what counts as sense.
A practical example from my work: storytelling comes in very different forms: text, visual design, proposed actions, frameworks, business decisions, relationships with people. All of these are strategies that add to the narrative. This is what I was talking about here (Time is un-LLM-able), where I argue that not everything meaningful can be computed, prompted, or packaged.
And what if we are not careful? We risk becoming Justine, the virtuous sufferer from Sade’s novel. Justine obeys every rule and seeks only to do good, but the system she inhabits has no place for such submission. I remember reading it and spending a while afraid of priests, of powerful people who seem virtuous, and mainly of natural death (touching windows during storms is maybe still my nightmare; sorry for the spoiler). What destroys Justine is her faith in structure. Likewise, we may believe that careful prompting guarantees fairness, truth, or alignment, but these models do not reward virtue. They reward legibility (you need to talk to them as someone who understands what they are doing, not as a mechanical person who copies and pastes). Compliance becomes a form of exposure.
So comes Björk, as the voice of refusal:
“Self-sufficiency, please! Get to work!”
Army of Me is not just a song of resistance; it’s a refusal to collapse into dependency. At a moment when we fear that AI speaks for us, this lyric reasserts a self that doesn’t delegate its meaning.
So maybe the real question is not: How do I get better at prompting? But rather: What am I giving up when I let the model speak for me?
(Have you orchestrated the “army of me”, or are you being orchestrated?)