Slow June, people voting with their feet amid this AI craze, or something else?

  • Platomus@lemm.ee · 1 year ago

    It’s because it’s summer and students aren’t using it to cheat on their assignments anymore.

    • TheEllimist@lemmy.world · 1 year ago

      It’s definitely this. Except for the kids taking summer classes, who statistically probably have higher rates of cheating.

  • i_lost_my_bagel@seriously.iamincredibly.gay · 1 year ago

    I tried it for about 20 minutes

    Had it do a few funny things

    Thought huh that’s neat

    Went on with life

    Since then, the only times I’ve thought about ChatGPT have been when I see people using it in classes I’m in and sit there thinking “this is a fucking introductory course and you’re already cheating?”

    • idolofdust@lemmy.world · 1 year ago

      I’m in discrete mathematics right now and have overheard way too many students hitting a brick wall with the current state of AI chatbots, as if that’s what they’d used almost exclusively up to this point.

  • wackypants@kbin.social · 1 year ago

    It’s summer. Students are on break, lots of people are on vacation, etc. Let’s wait to see if the trend persists before declaring another AI winter.

  • Magiwarriorx@lemmy.world · 1 year ago

    I still use free GPT-3 as a sort of high-level search engine, but lately I’m far more interested in local models. I haven’t used them for much beyond SillyTavern chatbots yet, but some aren’t terribly far off from GPT-3 from what I’ve seen (EDIT: though the models are much smaller at 13bn to 33bn parameters, vs GPT-3’s 175bn parameters). Responses are faster on my hardware than on OpenAI’s website and it’s far less restrictive, with no “as a large language model…” warnings. Definitely more interesting than sanitized corporate models.

    The hardware requirements are pretty high, around 24GB of VRAM to run a 13bn-parameter model at 8k context, but unless you plan on using it for hundreds of hours you can rent a RunPod or something for less than the cost of a used 3090.
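
    For anyone curious what running one of these locally can look like in practice, here’s a minimal sketch using the llama-cpp-python bindings. The library choice, model filename, and settings below are placeholder assumptions for illustration, not a record of this particular setup:

    ```python
    # Hypothetical example: load a quantized 13B model with an 8k context
    # window and offload its layers to the GPU. The path below is a
    # placeholder; swap in whatever checkpoint you actually downloaded.
    from llama_cpp import Llama

    llm = Llama(
        model_path="./models/some-13b-model.Q4_K_M.gguf",  # placeholder path
        n_ctx=8192,        # 8k context window
        n_gpu_layers=-1,   # offload all layers to the GPU if VRAM allows
    )

    out = llm(
        "Write a short greeting for a SillyTavern-style chatbot character.",
        max_tokens=256,
    )
    print(out["choices"][0]["text"])
    ```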

  • Poob@lemmy.ca · 1 year ago

    It’s really fucking annoying getting “As an AI language model, I don’t have personal opinions, emotions, or preferences. I can provide you with information and different perspectives on…” at the beginning of every response, followed by the driest, most bland answer imaginable.

    • afraid_of_zombies2@lemmy.world · 1 year ago

      It definitely has its uses, but it also has massive annoyances, as you pointed out. One thing really bothered me: I asked it a factual question about Mohammed, the founder of Islam. This is how I, a human not from a Muslim background, would answer:

      “Ok wikipedia says this ____”

      It answered in this long-winded way that had all these things like “blessed prophet of Allah”. Basically the answer I would expect from an imam.

      I lost a lot of trust in it when I saw that. It assumed this authoritative tone. When I heard about that case of a lawyer citing made-up case law from it, I took it as confirmation. I don’t know how it happened, but for some questions it has this very authoritative tone, like it knows the answer without any doubt.

    • theneverfox@pawb.social · 1 year ago

      Yeah, it’s boring as shit. If you want a conversation partner there are better (if less reliable) options out there, and groups like personal.ai that repackage it for conversation. There are even scripts to break through the “guardrails”.

      I love the boring. Every other day, I think “man, I really don’t want to do this annoying task.” I’m not sure if it even saves much time since I have to look over the work, but it’s a hell of a lot less mentally exhausting.

      Plus, it’s fun having it Trumpify speeches. It’s tremendous. I’ve spent hours reading the bigglyest speeches. Historical speeches, speeches about AI, graduation speeches where bears attack midway through… Seriously, it never gets old

  • randon31415@lemmy.world · 1 year ago

    On that, what would people recommend for a locally hosted (I have a graphics card) ChatGPT-like LLM that is open source and doesn’t require a lot of other things to install?

    (Just one CMD line installation! That is, if you have pip, pip3, Python, PyTorch, CUDA, conda, Jupyter notebooks, Microsoft Visual Studio, C++, a Linux partition, and Docker. Other than that, it is just a one-line installation!)

  • Meow.tar.gz@lemmy.goblackcat.com · 1 year ago

    ChatGPT has mostly given me very poor or patently wrong answers. Only once did it really surprise me by showing me how I configured BGP routing wrong for a network. I was tearing my hair out and googling endlessly for hours. ChatGPT solved it in 30 seconds or less. I am sure this is the exception rather than the rule though.

    • zeppo@lemmy.world · 1 year ago

      It all depends on the training data. If you pick a topic that it happens to have been well trained on, it will give you accurate, great answers. If not, it just makes things up. It’s been somewhat amusing, or perhaps confounding, seeing people use it thinking it’s an oracle of knowledge and wisdom that knows everything. Maybe someday.

  • gaiussabinus@lemmy.world · 1 year ago

    I have a number of language models running locally. I am really liking the gpt4all install with the Hermes model. So in my case I used ChatGPT right up until I had one I could keep private.

    • ClemaX@lemm.ee · 1 year ago

      How does it compare with ChatGPT (GPT-3.5), quality- and speed-wise?

      • gaiussabinus@lemmy.world · 1 year ago

        Depends how you get it accomplished: if you use the Python bindings it’s slow, but using the gpt4all app it’s quick, and there is a gpt4all API should you wish to build a private assistant. I like that one, but it’s still run by a company so mileage may vary; there are a few projects on GitHub for use with open-source models. I can get better quality from the Hermes model than I can from GPT-3.5 IMO, but some models are better than others depending on what you are trying to do. If you have done any work with Stable Diffusion, you’ll have seen lots of different models popping up right now for different use cases, like on civit.ai. A good coding bot is probably going to be a bit shit in conversation.
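
        For reference, a rough sketch of what the Python bindings mentioned above can look like in use. The exact model filename is a placeholder assumption; substitute whichever Hermes build gpt4all actually offers:

        ```python
        # Hypothetical example using the gpt4all Python bindings. The model
        # name below is a placeholder; gpt4all downloads the file on first
        # use if it isn't already present locally.
        from gpt4all import GPT4All

        model = GPT4All("nous-hermes-13b.Q4_0.gguf")  # placeholder model name

        reply = model.generate(
            "Summarize the pros and cons of running an LLM locally.",
            max_tokens=200,
        )
        print(reply)
        ```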

  • SuperSleuth@lemm.ee · 1 year ago

    The novelty has worn off. I jumped on board and tried out every bot when they were first released: Bard, Bing, Snapchat, GPT. I’ve given them all a go.

    It was a fun experience, asking them to write poems or delve into the mysteries of consciousness, as I got to know their individual personalities. But now, I mainly use them for searching niche topics or checking grammar, maybe the occasional writing.

    In fact, this very comment was reformatted in Bard, for instance. Though since Google integrated their LLM into search (via Labs), I use them even less.