• Free_Opinions@feddit.uk · 5 days ago

    We’ve had a definition for AGI for decades: a system that can perform any cognitive task as well as a human can, or better. Humans are “generally intelligent”; replicate that ability artificially and you’ve got AGI.

    • LifeInMultipleChoice@lemmy.ml · 5 days ago

      So say you give a human and a system the same 10 tasks. The human completes 3 correctly, gets 5 wrong, and fails to finish the other 2, while the software completes 9 correctly and fails to finish 1. What does that mean? In general I’d say the tasks need to be defined, because I can name plenty of tasks right now that language models can solve and most people can’t, yet language models still aren’t AGI in my opinion.
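
      To make that concrete, here’s a minimal sketch that just tallies the hypothetical outcomes above; the raw pass rate is easy to compute, but it says nothing about which tasks were chosen:

      ```python
      # Tally the hypothetical outcomes from the comment above. The raw pass
      # rate favours the system, but says nothing about the task selection.
      from collections import Counter

      human = Counter(correct=3, incorrect=5, incomplete=2)
      system = Counter(correct=9, incorrect=0, incomplete=1)

      for name, tally in (("human", human), ("system", system)):
          total = sum(tally.values())
          print(f"{name}: {tally['correct']}/{total} correct ({tally['correct'] / total:.0%})")
      ```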

      • hendrik@palaver.p3x.de · 5 days ago

        Agreed. And the tasks can’t be tailored to the AI just so it has a chance. It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary, and come home with groceries and cook dinner. Or at least do something comparable to what a human does. Just wording emails and writing boilerplate code isn’t enough in my eyes, especially since it still struggles even with that. It’s the “general” that is missing.

        • Free_Opinions@feddit.uk · 4 days ago

          > It needs to drive to work, fix the computers/plumbing/whatever there, earn a decent salary, and come home with groceries and cook dinner.

          This is more about robotics than AGI. A system can be generally intelligent without having a physical body.

          • hendrik@palaver.p3x.de · 4 days ago

            You’re right, of course. Though I’m always a bit unsure about exactly that. We also don’t attribute intelligence to books. An encyclopedia, or Wikipedia, stores a lot of knowledge, yet it isn’t intelligent. That makes me believe being intelligent has something to do with being able to apply knowledge and do something with it. And outputting text is just one very limited form of interacting with the world.

            And since we’re using humans as the benchmark for the “general” part of AGI: humans have several senses and can interact with their environment in lots of ways, and 90% of that isn’t drawing or communicating with words. That makes me wonder where exactly the boundary lies between an encyclopedia and an intelligent entity. Is intelligence a useful metric if we exclude being able to do anything useful with it? And how much do we exclude by not factoring in the environment/world?

            And is there a difference between being book-smart and being intelligent? Because LLMs certainly get all of their information second-hand, filtered in some way. They can’t really see the world itself, smell it, touch it, or manipulate something and observe the consequences; they only get a textual description of what someone did, put into words in some book or text on the internet. Is that a minor or a major limitation, and do we know for sure it doesn’t matter?

            (Plus, I think we need to get “hallucinations” under control. That’s not strictly an “intelligence” problem, but it cuts into actual usefulness if the intelligence isn’t reliably there.)

        • NeverNudeNo13@lemmings.world · 4 days ago

          On the other hand, “fluently translate this email into 10 random and distinct languages” is a task that 99.999% of humans would fail at, but one a language model should be able to hit.

          • hendrik@palaver.p3x.de · 4 days ago

            Agreed. That’s a super useful thing LLMs can do. I’m still waiting for Mozilla to integrate Japanese and a few other (distant, for me) languages into my browser’s translation feature. It’s a huge step up from Google Translate: it can handle (to a degree) proverbs, nuance, tone… There are a few things AI or machine learning can do very well, outperforming any human by a decent margin.

            On the other hand, we’re talking about general intelligence here, and translation is just one niche task; by definition that’s narrow intelligence. But it’s indeed very useful to have, and I hope it will connect people and broaden their (and my) horizons.

    • rational_lib@lemmy.world · 4 days ago

      So then how do we define natural general intelligence? I’d argue it’s when something can do better than chance at solving a task without prior training data particular to that task. If a person plays Tetris for the first time, maybe they don’t do very well, but they probably do better than a random sequence of button presses.

      Likewise with AGI: say you feed an LLM text about the rules of Tetris, but no button presses or actual game data, and then hook it up to play the game. Will it do significantly better than chance? My guess is no, but it would be interesting to try. (A rough sketch of such a harness is below.)
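
      A rough sketch of what that experiment could look like; `TetrisEnv` and `ask_llm` are hypothetical placeholders for a real game environment and a real model call, not an existing library:

      ```python
      # Hedged sketch: compare an LLM given only the written rules of Tetris
      # against a uniform-random baseline. TetrisEnv and ask_llm are placeholders.
      import random

      ACTIONS = ["left", "right", "rotate", "drop"]
      RULES_TEXT = "(the written rules of Tetris, as plain text, go here)"

      class TetrisEnv:
          """Placeholder; swap in a real Tetris implementation that scores lines."""
          def __init__(self):
              self.steps, self.score = 0, 0

          def reset(self):
              self.steps, self.score = 0, 0
              return "empty board"

          def step(self, action):
              self.steps += 1
              self.score += random.randint(0, 1)  # stand-in scoring, not real rules
              return "board state", self.score, self.steps >= 50

      def ask_llm(prompt: str) -> str:
          """Placeholder for a call to whatever model is being tested."""
          return "drop"

      def random_player(obs: str) -> str:
          return random.choice(ACTIONS)

      def llm_player(obs: str) -> str:
          move = ask_llm(f"{RULES_TEXT}\n\nBoard:\n{obs}\nNext move, one of {ACTIONS}:")
          return move if move in ACTIONS else "drop"  # guard against junk output

      def mean_score(player, episodes: int = 100) -> float:
          totals = []
          for _ in range(episodes):
              env = TetrisEnv()
              obs, score, done = env.reset(), 0, False
              while not done:
                  obs, score, done = env.step(player(obs))
              totals.append(score)
          return sum(totals) / len(totals)

      print("random baseline:", mean_score(random_player))
      print("LLM player:     ", mean_score(llm_player))
      ```

      If, in a real environment, the LLM’s mean score reliably beats the random baseline, it did better than chance without task-specific training data, which is the bar proposed above.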

      • Free_Opinions@feddit.uk · 4 days ago

        It should be able to perform any cognitive task a human can. We already have AI systems that beat humans at individual tasks; it’s the full breadth that’s missing.

    • ipkpjersi@lemmy.ml · 5 days ago

      That’s kind of too broad, though. It’s too generic a description.

      • Entropywins@lemmy.world · 5 days ago

        The key word here is “general”, friend. We can’t define “general” any more narrowly, or it would no longer be general.

      • CheeseNoodle@lemmy.world · 4 days ago

        That’s the idea: humans can adapt to a broad range of tasks, so AGI should too. Proof of a lack of specialization, as it were.