My company is going wild like this too. I mean, it makes sense - more AI use means code is being written faster and thus we’re making more money.
But they go weirdly quiet when devs ask if AI has actually been speeding up the time to get a finished product in front of customers. You know, the only metric you should care about?
That’s an… interesting correlation they’re making, more code = more money. I know it’s not you personally making that comparison, but man is it strange. That’s a very business school way of thinking.
What good is “more code” from the LLMs, if I have to scrutinize it for bugs and vulnerabilities? More code only means more surface area, more points of failure. And of the AI I’ve tried, every single one writes far far far too much code. And all that time in code review, QA, user acceptance testing, that absolutely does not make the company more money - it costs them more money, in paying for labor. And it doesn’t get the product to the end user faster anyway.
I’m just ranting and this is a minor point, but speed is also not the only metric I would care about. I’d also care about making sure the user doesn’t experience many bugs - preferably no bugs at all. The classic engineer’s triangle still holds: “Fast, Cheap, and Good: choose 2.” And AI seems to pick “Fast” twice. XD
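To make the “more code, same behavior” complaint concrete, here’s a contrived toy sketch (my own example, not output from any particular model): two functions that do exactly the same thing, where the longer one just adds review surface without adding value.

```python
# Two functionally identical implementations of "sum the even numbers".
# The verbose one mimics the padded style people complain about above.

def sum_evens_concise(numbers):
    """Sum the even integers in `numbers`."""
    return sum(n for n in numbers if n % 2 == 0)

def sum_evens_verbose(numbers):
    """Same behavior, several times the lines: more to review, more places to slip."""
    result = 0
    if numbers is None:  # defensive check no caller actually needed
        return result
    for index in range(len(numbers)):
        current = numbers[index]
        is_even = (current % 2) == 0
        if is_even:
            result = result + current
    return result

data = [1, 2, 3, 4, 5, 6]
assert sum_evens_concise(data) == sum_evens_verbose(data) == 12
```

Every extra line in the second version is something a reviewer has to read and QA has to cover, which is exactly the labor cost the comment above is describing.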
> More code only means more surface area, more points of failure. And of the AI I’ve tried, every single one writes far far far too much code. And all that time in code review, QA, user acceptance testing, that absolutely does not make the company more money - it costs them more money, in paying for labor. And it doesn’t get the product to the end user faster anyway.
Duh, just have the LLM do code review, QA, and testing for you! And then blindly ship it to production once that’s done.
> I mean, it makes sense - more AI use means code is being written faster
This AI you speak of - is it in the room with us right now?
Because if you mean to say LLM slop machines, then “code is being written faster” does not imply the code is in any way useful.
> Duh, just have the LLM do code review, QA, and testing for you! And then blindly ship it to production once that’s done.

Silly me, why didn’t I think of that lol
> This AI you speak of - is it in the room with us right now? Because if you mean to say LLM slop machines, then “code is being written faster” does not imply the code is in any way useful.
/>thatsthepointtheyweremaking.jpg
Nothing has changed much, huh? Plenty of companies used lines of code written as a productivity metric long before all this.