- cross-posted to:
- programming@programming.dev
Yeah, they’re cooked.
No shit.
So we “fixed” the easiest part of software development (writing code) and now humans have to clean up the AI slop.
I’ll bet this lovely new career field comes with a pay cut.
I would charge more. Fixing my own code is easier than fixing someone else’s code.
I think I might go insane if that was my career.
They really want to enforce that quote, except the “as cleverly as possible” part doesn’t really apply anymore. Let the idiot write the code and have the more experienced person debug it. I feel like we’ve already seen this with airline pilots: a huge shortage, mainly caused by retirements and regulation changes that made it harder to get into the field. I guess their hope is that by the time the same thing happens with programmers, AI won’t suck.
At least this won’t be true anymore.
I occasionally check what various code generators will do when I don’t immediately know the answer. It’s almost always wrong, though recently it was arguably correct, just surprisingly convoluted: it broke the problem into about six functions iterating through many steps, walking the data through various intermediate forms. It seemed odd that such a common operation would be so involved, so I did a quick Internet search, ignored the AI-generated result, and found the core language built-in designed to handle my use case directly. There was one detail that wasn’t clear in the documentation, so I went back to the LLM to ask about it, and it gave the exact wrong answer.
I’m willing to buy that, with IDE integration, it can probably offer much richer function completion for small, easy stuff and save some time, but I just haven’t gotten used to the idea of asking for help on things I already know how to do.
I’ve found the same thing.
Whenever I ask an LLM for a pointer, I end up spending just as long (if not longer) refining the question as I would have spent figuring it out myself with a search on SO or other online resources.
But even the IDE integration is getting annoying. I write a class with some functionality baked in, and the whole time it’s prompting me with a shitload of irrelevant suggested code. I get the class done, then I go to spin up a unit test. It knows which class I’m trying to create a unit test for, which is cool. But the suggested code is usually either completely wrong or much more convoluted than it needs to be. In the latter case, the first several characters of the suggestion are good, but then there are several lines of shite after them. And hitting tab injects all of it, which I then have to delete. So almost every time, I end up hitting escape anyway.
I’ve heard a few people rave about ‘vibe coding’ - usually people with no or little programming experience. I have to assume that generated code was either for very simple atomic actions and/or it’s spaghettified, inefficient garbage.
I had been planning to try it, but I’ve been lazy about enabling it in my IDE setup while giving it the benefit of the doubt. Your feedback resonates with how much I end up fighting autocomplete/autocorrect in normal language, and I can see it ruining current code completion (which I sometimes have to fight, but which on balance helps more than it annoys). I suppose I’ll still give it a shot, but with even more skepticism. Maybe it can at least provide an OK draft of API documentation… sometimes…
On ‘vibe coding’: in the cases I’ve seen detailed, people accomplish something that, to them, is a magical outcome from technologies that intimidated them. But it’s generally pretty entry-level stuff for anyone familiar with the tools of the trade, things you can find already done dozens of times on GitHub almost verbatim, with very light bespoke customization.

Of course there’s a market for this; think of all the ‘no code’/‘low code’ products striving to make very basic apps approachable, which just end up worse than learning to code. When a project manager struggles to build a dashboard out of that sort of sensibility, a dashboard that really has no business being custom but that tooling has convinced everyone they need as their own snowflake, it’s a pain. Maybe AI can help them generate their dashboard, but being a human subjected to the workflows those PMs dream up is a nightmare. It’s bad enough that at my work there are hundreds of custom issue fields, a dozen issue types, and 50 issue states, with maddening project-to-project workflows connecting the meaning of it all; I don’t like AI emboldening people to customize further.
The thing about ‘vibe coding’ is what happens when they get stuck, confused and frustrated about why the LLM stopped giving them what they want. One story was someone vibe coding a racing game. He likely marveled as his vision materialized: by typing prose, without understanding how to code, he got some sort of 3D game with cars and tracks and controls. That would have struck him as incredibly difficult otherwise, yet it was reachable through ‘vibe coding’. Then he wanted to add tire marks when the player did something (maybe a hard turn), and it utterly couldn’t do it. After all the super hard stuff, why couldn’t the LLM do this conceptually much simpler thing? It ultimately spat out that he needed to develop the logic himself, claiming it was refraining because it would be better for him to learn, but I’m wagering that’s just the text it generated after repeated attempts at code the LLM simply could not produce.
I’m actually quite enjoying watching the LLM evangelists fall into the trough of despair after their initial inflated expectations of what they thought stochastic text generation would achieve for the business. After a while you get used to the waves of magic bullet solutions that promise to revolutionise the industry but introduce as many new problems as they solve.
But the only way to learn debugging is to have experience coding. So if we let AI do the coding then all the entry level coding jobs go away and no one learns to debug.
This isn’t just a code thing. This is all kinds of professions. AI will kill the entry level which will prevent new people from getting experience which will have downstream effects throughout entire industries.
It already started happening before LLM AI. Have you heard the joke that we used to teach our parents how to use printers and PCs with a mouse and keyboard, and now we have to do the same with our children? It’s really not a joke. We are the last generation to have seen it all evolve before our eyes; we know the fundamentals of each layer of abstraction the current technology is built upon. Learning all of this was a natural process for us, and now we suddenly expect “fresh people” to grasp 50 or so years of progress in 5?
Interesting times ahead of us.
Good point
Have you used any AI for programming? There is zero chance entry-level jobs will be replaced. AI only works well if what it needs to do is well defined, and as a dev, that is almost never the case. Also, companies understand that to create a senior dev they need a junior dev they can train. And corporations do not trust Google, OpenAI, Meta, etc. with their intellectual property; my company made it a fireable offense if they catch you uploading IP to an AI.
> Also, companies understand that to create a senior dev they need a junior dev they can train.
We live in a world where every company wants people who can hit the ground running and requires 5 years of experience for an entry-level job in a language that’s only been out for three. On-the-job training died long ago.
In my experience, LLMs are good for code snippets and input on best practices.
I use it as a tool to speed up my work, but I don’t see it replacing even entry jobs any time soon.
The junior devs at my job are way better at debugging than AI, lol. Granted, they’re top-talent hires, because no one else can break in these days.
“AI” is good for pattern matching, generating boilerplate/template code and text, and generating images. Maybe also translation. That’s about it. And it’s of course often flawed/inaccurate, so it needs human oversight. Everything else is basically a sales scam. A very profitable one.
So, AI gets to create problems, and actually capable people get to deal with the consequences. Yeah that sounds about right
And it’ll be used to suppress wages, because “you’re not making new stuff, just fixing some problems in existing code.” That you have to rewrite most of it is conveniently not counted.
That’s at least what was tried with movie writers.
Most programmers agree debugging can be harder than writing code, so basically the easy part is automated, while the more challenging and interesting parts, architecture and debugging, remain for programmers. Still, it’s possible they’ll try to sell it to programmers as less work.
> while the more challenging and interesting parts, architecture and debugging, remain for programmers
And it’s made harder for them, because it turns out the “easy” part is not that easy to do correctly, and when it’s done badly it makes maintaining the thing miserable.
Additionally, as others have said in the thread, programmers learn the skills required for debugging at least partially from writing code. So there goes a big part of the learning curve, turning into a bell curve.
As a very experienced Python developer, I have tried using ChatGPT for debugging and vibe coding multiple times, and you just end up going in circles and never get to a working solution. It ends up being a lot faster to just do it yourself.
Absolutely agree. I just use it for simple stuff like “for every nth row in a pandas dataframe, slice a string from x to y if column z is True”, or something like that (roughly what the sketch below does). That kind of logic takes time to write, and GPT usually comes up with a correct solution, or one that doesn’t need much modification.

But debugging or analyzing an error? No thanks.
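For what it’s worth, a minimal sketch of that kind of pandas one-off; the dataframe contents and the values of n, x, and y are all invented for illustration:

```python
import pandas as pd

# Toy data; column names and values are made up for this sketch.
df = pd.DataFrame({
    "text": ["alphabet", "binomial", "calendar", "dinosaur", "elephant", "firmware"],
    "z": [True, False, True, True, False, True],
})

n, x, y = 2, 1, 4  # every 2nd row, keep characters 1 through 3

# Positional mask: every nth row where column z is also True.
every_nth = pd.Series(range(len(df)), index=df.index) % n == 0
mask = every_nth & df["z"]

# Slice the string only on the selected rows; the rest stay untouched.
df.loc[mask, "text"] = df.loc[mask, "text"].str[x:y]
print(df)
```

Exactly the kind of fiddly but well-defined transformation that’s quick to verify once generated.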
I have on multiple occasions told it exactly what the error is and how to fix it. The AI agrees, apologizes, and gives me the same broken code again. It takes the same amount of time to describe the error as it would have for me to fix it.
This is my experience as well. Best case scenario it gives me a rough idea of what functions to use or how to set up the logic, but then it always screws up the actual implementation. I’ve never asked ChatGPT for coding help and gotten something I can use off the bat. I always have to rewrite it before it’s functional.
My rule of thumb is: if it doesn’t give you the solution right off the bat, it won’t give you one. If that happens, either fix it yourself or start a new chat and reformulate the question completely.
But trust me Bro, AGI is around the corner. In the meantime have this new groundbreaking feature https://decrypt.co/314380/chatgpt-total-recall-openai-memory-upgrade /s
LLMs are so fundamentally different to AGI, it’s a wonder people believe that balderdash
Can AI fix itself so that it gets better at a task? I don’t see how that could be possible; it would just fall into a feedback loop where it gets stranger and stranger.
Personally, I will always lie to AI when asked for feedback.
It is worse. People can’t even fix AI so that it gets better at a task.
That’s been one of the things that has really stumped a team that wanted to go all in on some AI offering. They go to customer evaluations and really there’s just nothing they can do about the problems reported. They can try to train and hope for the best, but that likely won’t work and could also make other things worse.
Ars Technica would die of an aneurysm if it stopped posting about generative AI for even 30 seconds
As they’re the authority on tech, and all they write about is shitty generative AI from 2017, that means shitty generative AI from 2017 is the only tech worth writing about.
I’m full luddite on this. And fuck all of us.
“Give me some good warning message css” was a pretty nice use case. It’s a nice tool, approaching the importance of Google search.
But you have to know when its answers are good and when they’re useless or harmful. That requires a developer.
the tool can’t replace the person or whatever
Are those researchers human, or is this just an AI that’s too lazy to do the work?