Your Job Is Safe
I recently had to do an integration between a Chinese robot and a fleet management platform.
There was some time pressure, so I started vibe coding it: I shared the PDFs and links to the documentation of both platforms with Cursor and ChatGPT and told them to figure it out.
Big Mistake
I spent 10+ hours trial-and-erroring: clicking run in Cursor or saying yes when it suggested changes, without making any effort to understand what it was doing, iterating error after error towards a solution.
Turns out, Cursor didn't know what it was doing either.
The documentation and instructions were quite poor for both platforms, and it turns out the adage garbage in, garbage out also applies to LLMs.
But I didn't know that at the time.
Desperate, I continued clicking, intimidated by the time pressure and the size of the codebase.
Slowly I developed a sixth sense for when the LLM was trying to bullshit its way through and lead me down irrelevant paths.
In the beginning it only dawned on me after I pressed accept in Cursor, but I learned to recognise it earlier and earlier.
Now I have half-assed code, some of which I understand and most of which I don't.
And that would be okay if this were the end. But I have to keep building on it.
And with every new step I have to implement, I realise how poorly I understand what's already there, and how much extra time I need to spend figuring out where I am.
This isn't a call to write better documentation (though there will always be bad documentation, just like there will always be bad companies).
It's a call to use LLMs in a smart way. They can help you understand codebases faster, they can help you figure out documentation faster, and sometimes - just sometimes - the shortcut works and winging it pays off.
But I wouldn't count on it.
I learned to let the LLM guide me, but not to let it steer me.
Especially when the LLM is as clueless as I am, with the difference that it doesn't admit it.
Even with good documentation, like AWS, which is extensively documented across millions of pages, it's still important to understand what the underlying code is doing.
I don't need to understand how electricity works to turn on a light switch. But we're not at that point yet. Maybe in a distant future, our job will be to act as architects: to decide what we want and how we want it, describe that in detail, and let the LLM run with it.
Writing your ideas down and having someone else execute them. Isn't that what we all want?
But for now, I believe your engineering job is safe.