Just out of curiosity. I have no moral stance on it, if a tool works for you I’m definitely not judging anyone for using it. Do whatever you can to get your work done!

  • diffuselight@lemmy.world · 1 year ago

    That may have been their plan, but Meta fucked them from behind and released Llama, which now runs on local machines at up to 30B parameter size, and by end of the year will run at better-than-GPT-3.5 ability on an iPhone.

    Local LLMs, like airoboros, WizardLM, StableVicuna, or StableCode, are real alternatives in many domains.

    • cwagner@lemmy.cwagner.me · 1 year ago

      Uh, Llama, at least the versions I can run (up to 64B on CPU, if I’m into waiting an hour for the reply), is far behind GPT-3.5, and that’s without even considering GPT-4. Even GPT-3.5 is a toy compared to 4.

      Llama 2 is supposedly better, but still not quite at GPT-3.5 levels. Of course, that’s amazing considering the resource difference, but if all you care about is the end result, then you still have to wait for some advancements.