When an AI agent completes a run, it almost always tells you what it could do next. This makes the tool feel more useful: it takes away the need to think about what the next step should be. I call this behavior the “what’s-next sinkhole”. The sinkhole swallows you gradually, until you don’t notice it anymore. You’re already in it.
I have a lot of trust in the current capabilities of AI tools like Codex and Claude Code. These tools often know what to do next better than I do. And it makes me a bit anxious to know that my own thinking may be diminishing in the process.
Then I realized AI is making people dumber, especially when they outsource next-step thinking. And it isn’t just about next steps in code; it’s spreading into how we arrive at knowledge itself.
Last week I started to see this dependency spreading further. Andrej Karpathy posted on X that he was using LLMs to build personal knowledge bases.
In my experience, having AI think through things with me helps me structure my thoughts. It also spots patterns I wasn’t aware of, which is super helpful, since those patterns would otherwise have stayed hidden. But does having a pattern handed to you cost you something that finding it yourself would have given you?
Key Insight
The danger of AI tools is not that AI thinks for you. It’s that you stop noticing that you’ve stopped thinking for yourself.