• Kalash@feddit.ch

    It’s not that I hate it, but like, ChatGPT sucks.

    There was this huge hype around it, then we started using it … and it just makes so many errors that it literally generates more work. We scrapped it after less than a week. It’s modern snake oil.

    • DustyNipples@lemmy.world

      Bard is the same. I asked it questions about two of my favourite bands that I know a lot about, and it omitted facts and invented things that were not true!

      • Kalash@feddit.ch

        We used it for code generation, but we ended up spending more time fixing and debugging the generated code than it would have taken us to just write it ourselves. It also introduces the most annoying type of bugs. Once it misspelled a property name, but only at one point in the code; it got it right everywhere else (illustrative example below).
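
        Purely as an illustration (this is not the actual generated code, just the same kind of mistake), the annoyance is that nothing flags the typo until that exact code path runs:

        ```python
        # Hypothetical example of the bug type described above: one misspelled
        # attribute name that only fails at runtime, on one specific code path.
        class Order:
            def __init__(self, total_price):
                self.total_price = total_price

            def apply_discount(self, rate):
                return self.total_price * (1 - rate)   # spelled correctly here

            def summary(self):
                # Misspelled only here; raises AttributeError the first time summary() is called.
                return f"Total: {self.totl_price}"
        ```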

      • XEAL@lemm.ee

        That’s why, in the case of a GPT model, you would feed it your own data using something like LlamaIndex (rough sketch below). I don’t know if there’s an API available for Bard, though.

        You’re wrong in assuming that the free models we have at our disposal are the only possible, or the best, implementations of these LLMs.
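
        A rough sketch of what I mean, assuming the llama-index package (its import paths change between versions) and an OpenAI API key in the environment; the directory path and the question are just placeholders:

        ```python
        # Minimal retrieval sketch: index your own documents and let the model answer from them.
        # Assumes llama-index's classic top-level imports; newer releases move these to llama_index.core.
        from llama_index import SimpleDirectoryReader, VectorStoreIndex

        documents = SimpleDirectoryReader("my_band_notes/").load_data()   # your own source files (placeholder path)
        index = VectorStoreIndex.from_documents(documents)                # embeds and indexes them

        query_engine = index.as_query_engine()
        print(query_engine.query("Which studio albums did the band release, and in what years?"))
        ```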

    • Mojo@ttrpg.network

      What! I have the opposite experience.
      I’m a tabletop roleplaying gamemaster, and it has helped me immensely with translations, formatting text, compiling and keeping track of my players’ character backgrounds, and even coming up with plots and scenes suited to each player.

    • XEAL@lemm.ee

      What did you use it for? It helps me a lot with coding, scripting, translations, terminology… Sometimes it makes mistakes, but other times it produces working code that accomplishes what I asked for.

      In any case, ChatGPT is just a demo that uses the GPT-3.5 Turbo model. Many people are being misled into assuming that the ChatGPT research preview is all the model has to offer. You can also try the improved GPT-4 model, but it’s not free.

      If you really want to get its full potential, you need a custom implementation in Python that works against the API and does things like fine-tuning the model, using embeddings, feeding it custom data, or giving it access to tools with LangChain (see the sketch at the end of this comment).

      Of course that’s not easy to do, but don’t assume that the ChatGPT web/app is the GPT models’ full potential.
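
      For example, just talking to the API directly already gives you control that the web app hides. This is only a sketch: it assumes the openai Python package’s v1-style client and an OPENAI_API_KEY in the environment, and the model name and prompts are placeholders:

      ```python
      # Bare-bones call against the API instead of the ChatGPT web UI:
      # you choose the model, the system prompt and the sampling temperature yourself.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment

      response = client.chat.completions.create(
          model="gpt-4",        # pick the model; the free web demo pins you to GPT-3.5 Turbo
          temperature=0,        # lower temperature for more deterministic answers
          messages=[
              {"role": "system", "content": "You are a careful assistant that answers concisely."},
              {"role": "user", "content": "Explain what an embedding is in one paragraph."},
          ],
      )
      print(response.choices[0].message.content)
      ```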