There’s a special kind of madness that only graphical AI can induce. Not the philosophical “Will AI replace us?” kind. No, I’m talking about the real psychological warfare: trying to get an image model to follow a simple, kindergarten-level instruction. AI Idiocy.
You know the drill.
You ask it to rotate an object.
Or make a cat fall backward.
Or—heaven forbid—move something two inches to the left.
And what do you get?
A surrealist fever dream that looks like Salvador Dalí tried to storyboard a migraine.
Meanwhile, the AI is happily munching through your tokens like a toddler with a bucket of Skittles.
🎨 The Illusion of Competence
Text-based AI? Sharp. Articulate. Helpful.
Graphical AI? A golden retriever with a paintbrush taped to its paw.
Ask a text model to write a prompt for the image model, and it will confidently produce a beautifully structured, hyper-detailed, technically precise masterpiece of instruction.
Then you feed that prompt to the image model, and it responds with:
- A cat that is not falling backward, but instead growing extra legs
- An object that is not rotated, but duplicated, melted, or replaced with a Victorian teapot
- A scene that looks like it was generated by someone who skimmed your prompt while sprinting past it at 40 mph
It’s like watching two coworkers who have never met try to collaborate on a group project.
🐈 The Cat Problem
Let’s talk about the cat.
You want a cat falling backward.
Simple. Gravity exists. Cats exist. Cameras exist.
But graphical AI?
Graphical AI hears “cat falling backward” and decides:
- The cat should be sideways
- Or upside down
- Or fused with a chair
- Or replaced entirely with a fox, because “close enough”
And if you dare to ask for a specific angle?
Congratulations—you’ve just unlocked a new form of digital chaos.
🔄 The Rotation Curse
“Rotate the object 90 degrees.”
You’d think this is the easiest possible instruction.
Humans do it instinctively.
Even toddlers rotate blocks.
But graphical AI?
It will:
- Rotate the background
- Rotate the camera
- Rotate the concept of time
- Rotate everything except the one object you asked for
And then it will proudly present the result like a child handing you a drawing of a purple stick figure and saying, “It’s you!”
🍬 Token Gluttony
The best part?
Every failed attempt costs tokens.
Every. Single. One.
You watch your balance drain as the AI confidently produces:
- Wrong angles
- Wrong objects
- Wrong species
- Wrong universe
It’s like paying someone by the hour to ignore you.
And the moment you think, “Fine, I’ll ask the AI to write a better prompt,” the text model steps in like an overconfident intern:
“I’ve crafted a highly detailed, technically accurate prompt that the image model will surely understand.”
Spoiler:
It won’t.
The image model reads that prompt the way a cat reads a tax form.
🤖 Why Does This Happen?
Because graphical AI isn’t actually “seeing” the way we do.
AI is not rotating objects—it’s hallucinating pixels.
It’s not understanding physics—it’s remixing patterns.
It’s not following instructions—it’s guessing.
And sometimes it guesses well.
But when it doesn’t?
You get a cat with three spines and a shadow that belongs to someone else.
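For contrast, here's a minimal sketch of what an actual, deterministic rotation looks like in Python (assuming Pillow is installed; the "cat.png" file name is just a placeholder). No tokens, no guessing, no Victorian teapots. Just math moving pixels.

```python
# A minimal sketch of a real rotation, assuming Pillow is installed
# (pip install pillow). "cat.png" is a placeholder file name.
from PIL import Image

def rotate_object(path: str, degrees: float = 90) -> Image.Image:
    """Rotate an image by a fixed angle. Deterministic, no guessing involved."""
    img = Image.open(path)
    # expand=True grows the canvas so the rotated content is not cropped
    return img.rotate(degrees, expand=True)

if __name__ == "__main__":
    rotate_object("cat.png", 90).save("cat_rotated.png")
```

A generative image model has no such transform to call. It can only regenerate the scene from scratch and hope the new pixels land roughly where you wanted them.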
🧠 The Real Joke
The funniest part is that we keep trying.
We keep believing that this time the model will understand.
OK, this time it will rotate the object.
Maybe this time the cat will fall backward.
I got it, this time the tokens will be worth it.
But graphical AI is like that friend who nods while you explain something, then immediately does the opposite.
And we love it anyway.
Because when it does get it right?
It feels like magic.
Chaotic, unpredictable, token-devouring magic.
Here are a few fun things I've been frustrated over:
Flight Shorts – Turbulence – YouTube
Frank & Matilda – The Blender
and don’t miss my other posts in the Blog Archive.