
On vibe-prompting

The term “vibe coding” came up in a thread I read last week, and it felt like it captured something. This piece is less a definition than a reflection on what happens between the body that types and the model that responds.


I recently came across a thread on “vibe coding” that tried to explain why CTA promises so often fall short: the result is rarely as fast or as precise as a skilled person executing directly, unless someone is able to cross the border and build the bridge of co-thinking with AI.


You write like you’re tuning an instrument: trying variations, holding a certain tone, sensing when something resonates. It’s not just about logic; it’s about rhythm, density, and atmosphere.


It’s not prompt engineering; it’s prompt composition. But here’s the thing: the vibe isn’t in the code. The exchange works not because the model understands vibes, but because you do.



The body that types

I’ve been thinking about how much this depends on the body. A body that holds memory, rhythm, hesitation. A body that knows what it’s trying to feel. And that is very different from what the model is calculating.

When you prompt, you’re not just giving instructions. You’re creating an atmosphere, a pulse, a frame of expectation. You’re setting the scene: what will happen, what should be felt, what could fail, and what kind of emergence you’re allowing.


When I was finishing my undergraduate degree in Communications, I wrote about how a photographer’s expectations shape the image they capture. At that decisive moment, the click isn’t neutral. It’s a gesture of relation with the world, with what the photographer anticipates, with how others will perceive the image. (That’s when Bergson first entered my thinking.)


Prompting isn’t so different. When someone uses a tool like Sora to generate visuals, they don’t just describe an image; they perform a scene. Sora rewards a 3D modeler’s rigor: precise structure, aesthetic settings, layered logic, written almost like choreography in JSON-like text.
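To make that “choreography in JSON-like text” concrete, here is a hypothetical sketch of what such a structured scene prompt might look like. The field names are illustrative only, not Sora’s actual schema:

```python
import json

# Hypothetical structured scene prompt; the fields are illustrative,
# not an actual Sora schema.
scene = {
    "subject": "a dancer mid-turn, backlit",
    "camera": {"movement": "slow dolly-in", "lens": "35mm"},
    "lighting": {"mood": "amber dusk", "contrast": "high"},
    "timing": {"duration_s": 6, "pace": "languid"},
}

# Serializing it shows the choreography: structure first,
# with atmosphere layered inside each field.
prompt_text = json.dumps(scene, indent=2)
print(prompt_text)
```

The point is less the exact keys than the gesture: the scene is staged, not merely described.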


The thread I mentioned earlier analyzes how a workflow that breaks work into bite-sized tasks can help AI build what you actually want. And it hit me: this is how any good planning works. Decompose the tasks, understand how each one fits your strategy, then feed them into the workflow, evaluating every step along the way.
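That decompose-then-evaluate loop can be sketched in a few lines. Everything here is a hypothetical stand-in: `ask_model` represents whatever model call you use, and `looks_good` represents your own judgment of each step:

```python
# Hypothetical sketch of a decompose -> execute -> evaluate loop.
# ask_model stands in for a real model call; here it just echoes.
def ask_model(prompt: str) -> str:
    return f"[draft for: {prompt}]"

def looks_good(result: str) -> bool:
    # The evaluation is yours: check each result against your intent.
    return result.startswith("[draft")

def run_workflow(goal: str, subtasks: list[str]) -> list[str]:
    results = []
    for task in subtasks:
        # Each bite-sized task carries the larger strategy as context.
        result = ask_model(f"Goal: {goal}. Step: {task}")
        if not looks_good(result):  # evaluate before moving on
            result = ask_model(f"Retry. Goal: {goal}. Step: {task}")
        results.append(result)
    return results

steps = ["outline the scene", "set the tone", "draft the copy"]
outputs = run_workflow("write a landing page", steps)
```

The shape is the argument: strategy lives in the decomposition and the evaluation, not in any single prompt.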


In both cases: The AI follows, but the mood is yours. Not just what you say — but how you shape it. Vibe coding isn’t the model understanding the vibe. It’s you transmitting one.



This is not optimization

Most AI tools today turn prompting into productivity metrics: Write faster. Automate more. Ship now. But vibe coding seems to resist this logic. It values slower attention. It’s not about controlling the model. It’s about engaging with it, as if in a dance. A shared improvisation or a choreography of not-knowing.


You’re not asking AI for answers. You’re co-creating the conditions for something meaningful to appear.

If AI can’t feel, why does it sometimes seem like it does? Because your prompt carried enough weight, enough rhythm, that the model could echo it.


That’s resonance.



Post-scriptum: on memory and resistance

French philosopher Henri Bergson once said that memory is not just a function of the mind but a force of internal resistance. In his words:

The complex organization of the nervous system, which seems to ensure a greater independence of the living being from matter, only symbolizes this independence materially, that is, the internal force that allows the being to free itself from the rhythm of flowing things, to retain more and more of the past in order to influence the future more deeply. In the special sense we give this word: its memory.


This is what prompting carries, too: not just information, but intention.


To vibe-code is to withhold the instant, to let something breathe. It’s to resist the pace of automation with the weight of a body that remembers — and dares to slow down.