• SkunkWorkz@lemmy.world · 1 month ago

    Yeah, fake. No way you can get 90%+ using ChatGPT without understanding code. LLMs barf out so much nonsense when it comes to code. You have to correct them frequently to make them spit out working code.

    • Artyom@lemm.ee · 1 month ago

      If we’re talking about freshman CS 101, where every assignment is the same year-over-year and it’s all machine graded, yes, 90% is definitely possible because an LLM can essentially act as a database of all problems and all solutions. A grad student TA can probably see through his “explanations”, but they’re probably tired from their endless stack of work, so why bother?

      If we’re talking about a 400-level CS class, this kid’s screwed: even someone who’s mastered the fundamentals will struggle with advanced algorithms and with reconciling math ideas with hands-on-keyboard software.

    • AeonFelis@lemmy.world · 1 month ago
      1. Ask ChatGPT for a solution.
      2. Try to run the solution. It doesn’t work.
      3. Post the solution online as something you wrote all on your own, and ask people what’s wrong with it.
      4. Copy-paste the fixed-by-actual-human solution from the replies.
      • Eheran@lemmy.world · 1 month ago

        You mean o3-mini? Wasn’t it on the level of o1, just much faster and cheaper? I noticed no increase in code quality, perhaps even a decrease. For example, it forgets things far more often, like variables that were given a different name. It also readily ignores a bunch of my very specific, enumerated requests.

        • xor@lemmy.dbzer0.com · 1 month ago

          o3 something… i think the bigger version….
          but, i saw a video where it wrote a working game of snake, and then wrote a training algorithm to make an ai that could play snake… all of the code ran on the first try….
          could be a lie though, i dunno….

          • Bronzebeard@lemm.ee · 1 month ago

            Asking it to write a program that already exists in its entirety, with source code publicly posted, and having that work is not impressive.

            That’s just copy-pasting.

            • xor@lemmy.dbzer0.com · 1 month ago

              he asked it by describing the rules of the game, and then asked it to write an ai to learn the game….
              it’s still basic, but not copy-pasting

              • Bronzebeard@lemm.ee · 1 month ago

                These things work by predicting how likely certain words are to appear next to other words. Do you know how many tutorials on how to code those exact rules it must have scanned?
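The "likely next words" idea above can be sketched as a toy bigram model: count which word tends to follow which, then always pick the most frequent follower. This is a huge simplification of a real transformer, and the tiny corpus here is made up purely for illustration:

```python
from collections import Counter, defaultdict

# Made-up training text; a real model sees billions of words
corpus = "the snake eats the apple and the snake grows".split()

# Count, for each word, which words follow it and how often
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev):
    # Return the most frequent word seen after `prev` in training
    return following[prev].most_common(1)[0][0]

print(predict("the"))  # "snake" followed "the" twice, "apple" only once
```

A model like this can only regurgitate patterns from its training data, which is the point being made: code it has seen many times (like snake tutorials) comes out looking original.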

    • Maggoty@lemmy.world · 1 month ago

      Usually this joke runs with a second point of view asking: do I tell them, or let them keep thinking this is cheating?