Yesterday I wrote about the AI sinkhole. Today I am going to dive into that sinkhole on purpose. Every time the AI finishes a run and suggests what to do next, I say yes: no independent thought, no redirecting, just following the what’s-next suggestions completely. I will document what happens after saying “yes, continue” ten times in a row.
The story is written and the page spreads of the first version are ready at low resolution. So I have everything ready for a first 4K test run that outputs an actual hardcover book.
However, when I attempted a first run I immediately ran into problems: the Nano Banana Pro API was timing out on every request. It had done so before, but I had attributed that to server load. After retrying a couple of runs at different hours I still got the same timeouts every time, so I had Codex switch over to the Nano Banana 2 model instead.
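For context, the retry loop I was effectively running by hand can be sketched like this (`call_with_retries` and `request_fn` are my own illustrative names, not anything from the actual pipeline):

```python
import time

def call_with_retries(request_fn, attempts=4, base_delay=2.0):
    """Retry a flaky API call with exponential backoff.
    If every attempt times out, re-raise the last error --
    which is exactly what kept happening here."""
    last_err = None
    for attempt in range(attempts):
        try:
            return request_fn()
        except TimeoutError as err:
            last_err = err
            time.sleep(base_delay * (2 ** attempt))
    raise last_err
```

When a backoff loop like this still fails at every hour of the day, the problem is probably not transient load, which is what pushed me toward swapping models.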
After that change I ran into the same problems again, so I asked ChatGPT what could be wrong with these requests: why was Gemini returning a transient error while editing?
If it happens consistently on the same request, the message may be misleading and the real cause could be payload size, malformed input, a tool-specific bug, or a workflow-specific limitation.
So I fed the diagnosis directly to Codex.
Your point is exactly what we need to verify. Right now we only persist the Gemini message, not the raw HTTP status/body metadata.
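What Codex is pointing at could be captured with something as small as this sketch (the `log_failure` helper and the log filename are illustrative, not the project’s real code):

```python
import json
import time

def log_failure(response, log_path="gemini_failures.jsonl"):
    """Append the raw HTTP status and body of a failed request,
    not just the model's error message, so client-side failures
    (400/401/413) can later be told apart from service-side ones
    (5xx)."""
    record = {
        "timestamp": time.time(),
        "status": response.status_code,   # e.g. 503 vs 400/413
        "body": response.text[:2000],     # truncated raw body
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

A few lines of metadata like this are what turn “transient error” guesswork into an actual diagnosis.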
The verified status is HTTP 503, so this is service-side/backend class, not a 400/401/413 client-side failure. Still, because it happens on the same spread, I’m going one step further: shrink the reference images sent into each edit request while keeping the output at 4K. That reduces payload and model workload without changing print resolution.
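The shrinking step might look like this, assuming Pillow is available; `shrink_reference` and the 1024-pixel cap are my own illustrative choices, not the actual values Codex used:

```python
from io import BytesIO

from PIL import Image

def shrink_reference(image_bytes: bytes, max_side: int = 1024) -> bytes:
    """Downscale a reference image so its longest side is at most
    max_side pixels. Only the inputs shrink; the edit request still
    asks for 4K output, so print resolution is unchanged while
    payload size and model workload go down."""
    img = Image.open(BytesIO(image_bytes))
    img.thumbnail((max_side, max_side))  # in-place, preserves aspect ratio
    buf = BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=85)
    return buf.getvalue()
```

Re-encoding as moderately compressed JPEG on top of the downscale is the cheap lever here: reference images only need to convey composition and style, not print-grade detail.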
Key Insight
With no guidance from me, the “yes, continue” sinkhole I was attempting stopped making progress. The yes-continue loop by itself couldn’t surface the root cause of this problem. The experiment didn’t fail. It hit its limit. There’s a difference.