Yeah fake. No way you can get 90%+ using ChatGPT without understanding code. LLMs barf out so much nonsense when it comes to code. You have to correct them frequently to make them spit out working code.
If we’re talking about freshman CS 101, where every assignment is the same year-over-year and it’s all machine graded, yes, 90% is definitely possible because an LLM can essentially act as a database of all problems and all solutions. A grad student TA can probably see through his “explanations”, but they’re probably tired from their endless stack of work, so why bother?
If we’re talking about a 400 level CS class, this kid’s screwed, and even someone who’s mastered the fundamentals will struggle with advanced algorithms and with reconciling math ideas with hands-on-keyboard software.
deepseek runs solid, autoapprove works sometimes lol
i guess the new new gpt actually makes code that works on the first try
You mean o3 mini? Wasn’t it on the level of o1, just much faster and cheaper? I noticed no increase in code quality, perhaps even a decrease. For example, it forgets things far more often, like variables that have a different name. It also easily ignores a bunch of my very specific and enumerated requests.
o3 something… i think the bigger version….
but, i saw a video where it wrote a working game of snake, and then wrote an ai training algorithm to make an ai that could play snake… all of the code ran on the first try….
could be a lie though, i dunno….
Asking it to write a program that already exists in its entirety, with source code publicly posted, and having that work is not impressive.
That’s just copy pasting
he asked it by describing the rules of the game, and then asked it to write an ai to learn the game….
it’s still basic but not copy pasta
These things work by remembering how likely words are to appear next to certain other words. Do you know how many tutorials covering those exact rules it must have scanned?
that’s not how these things work
That’s exactly how LLMs work.
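For what it’s worth, the “how likely words are to appear next to other words” idea can be shown with a toy sketch. This is a hypothetical bigram-counting example I made up, nothing like a real LLM’s architecture (those use neural networks over subword tokens), but it captures the next-word-probability framing being argued about:

```python
from collections import Counter, defaultdict

# Toy corpus: count which word follows which, then predict the
# most frequent next word. This is the crudest possible version
# of "how likely other words are to appear next to certain words".
corpus = "the cat sat on the mat the cat ate the fish".split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(word):
    # Return the word most often seen right after `word`.
    return following[word].most_common(1)[0][0]

print(predict("the"))  # "cat" follows "the" twice, "mat" and "fish" once each
```

Real models differ in almost every detail (learned weights instead of raw counts, long-range context instead of one previous word), which is partly why both sides of this argument have a point.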
o3 yes perhaps, we will see then. Would be amazing.