In the near future, programming may no longer be needed. AI tools are likely to overtake the 10x engineer soon, if they have not already done so. This will make writing code very cheap; let's say, for the sake of argument, that it costs nothing. So I was wondering: what is the value of things like PRDs and other technical documents in this new world?
These kinds of documents exist because software developers need help knowing what kind of program to write. Because programming cycles used to be expensive, it made a lot of sense to have the specifications very clear before the programmer started writing the actual code.
For this experiment I want to test whether it makes sense to have AI write a PRD document that I then feed to another AI tool to start building.
Is the extra translation layer (prompt > PRD > code) worth anything compared to just going straight (prompt > code)?
I started by having ChatGPT create a PRD document with a simple prompt:
Write me a PRD for a new feature on my website: hugomelis.nl I want a “random experiment” button, that shows a random experiment and takes me there.
The result was a very detailed PRD document that included things like goals, non-goals, tests, and code examples.
I then gave the PRD to Codex (Claude was at its usage limits again), which started building the feature for my website.
In another instance of Codex, I directly asked it to build the feature in a separate git branch:
I want a “random experiment” button on my website, that shows a random experiment and takes me there.
The first thing that caught my attention was that both of the processes took a while to complete:
- PRD version: 18 minutes and 52 seconds.
- Non-PRD version: 15 minutes and 28 seconds.
I had thought of the feature as simple: just a button showing a random experiment, right? Well, it turned out the feature wasn't that simple.
I could have avoided this by writing a very basic specification into the PRD myself. Both of the AI processes (the PRD and non-PRD versions) over-complicated the feature, in my opinion, with edge cases like hiding the button when there are no experiments, or handling a missing experiment page. I know these can't happen in production, and if they can, I have bigger problems.
When both versions were completed the results differed quite a bit.
The non-PRD version put a Random button in the menu bar that, when clicked, navigated to a random experiment detail page.

The PRD-based version had a similar button, but placed it in the hero section of the website, with no button in the menu bar.

The menu bar on this website is already a bit crowded on mobile, so I favored the hero-section button over the menu-bar option.
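Stripped of the placement question and the edge-case handling, the logic both versions produced presumably boils down to something like this sketch. The experiment URLs and function names here are illustrative assumptions, not the actual generated code:

```javascript
// Hypothetical list of experiment pages; the real site would derive
// this from its experiment archive.
const experiments = [
  "/experiments/prd-vs-prompt",
  "/experiments/random-button",
  "/experiments/ai-editor",
];

// Pick one experiment URL uniformly at random.
function pickRandomExperiment(list) {
  // Guard against an empty list: the edge case both AI versions
  // insisted on handling.
  if (list.length === 0) return null;
  return list[Math.floor(Math.random() * list.length)];
}

// In the browser, a click handler would then navigate, e.g.:
// button.addEventListener("click", () => {
//   const target = pickRandomExperiment(experiments);
//   if (target) window.location.href = target;
// });
```

A few lines of logic, which is why the 15-plus-minute build times surprised me.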
I prefer the PRD-created version. I am not sure what made the PRD version place the button in the right spot (suiting my preferences). Was the PRD reasoning about this specifically, or was it just dumb luck that it landed there (this time)? Reading the PRD, I found this section:
5.2 Button Placement
The button should be visible in at least one of the following locations:
Option A (recommended)
• Homepage hero section
• Label: “Random Experiment”
Option B
• Navigation bar
• Sidebar
So it actually mentioned both placements, but recommended the homepage hero section. Interesting. Why did it do this? Because the menu is crowded, or for another reason? This is harder to pin down, so I asked ChatGPT directly.
It came back with a ton of reasons: visibility, better engagement with the archive, mobile UX. All of the reasons were well articulated, but none of them mentioned the crowded menu. The PRD applied general best practices and happened to land on the right answer without actually knowing the real reason (my crowded menu).
In the end, in this experimental setting, it made sense to have an intermediary step to create the PRD first. However, I must be honest: I think it could easily have turned out the other way.
Of course you can now try out the button on the homepage yourself.
Key Insight
The PRD created from a (short) prompt didn't understand my problem any better than the version where I used the same prompt to build the feature directly. The question I think is most important: is the extra surface area worth the extra time and complexity? I would say no. Even though the PRD version made the better choice, it did so by being thorough rather than smart. If you can't trust the process to reliably produce better results, the extra time does not pay for itself.