I might be a bit late to the party, but for those of you who like ERP and fiction writing:

Introducing Pygmalion-2

The people from Pygmalion have released a new model, usable for roleplaying, conversation and storywriting. It is based on Llama 2 and has been trained on SFW and NSFW roleplay, fictional stories and instruction-following conversations. It is available in two sizes: 7B and 13B parameters. They’re also releasing a merge with MythoMax-L2 called Mythalion 13B.

Furthermore, they’re (once again) announcing a website with character sharing and inference (coming later in October).

For reference: Pygmalion-6b was a well-known dialogue model for (lewd) roleplay in the days before LLaMA. It was followed by an underwhelming successor based on LLaMA (Pygmalion-7b). In their new blog post they promise that the new model is a real improvement.

(Personally, I’m curious how it performs compared to MythoMax. There aren’t many models around that excel at roleplay or were designed specifically for that use case.)

    • rufus@discuss.tchncs.de (OP)

      Sorry. I was under the impression that everyone interested in new models has TheBloke’s HuggingFace profile on speed-dial. I should have linked them ;-)

  • justynasty@lemmy.kya.moe

    The pyg 6b will always be my first love. For lewd roleplay, a Kimiko 7B GGUF was released as well. I am now using Kimiko more than pyg or vicuna. I tested the new Pygmalion-2 7B; here is a quote: “I was supposed to help my father in his research. But he didn’t care about me. He was too busy to look after me. One day when he came back home and saw that I was missing… He died of heart attack. My mother also passed away later because she couldn’t find me anywhere. That’s why I ended up here. She got on her knees You may do whatever you want with me!” The character confessed right away, but this is normal for small models. It feels different from the old model, not worse, but different.

    • rufus@discuss.tchncs.de (OP)

      Same here. Pygmalion-6b was one of the reasons I got started playing around with LLMs as a hobby. And then came the leak of the first LLaMA, followed by Alpaca, and me finding out about llama.cpp.

      But we’ve come a long way. I remember fine-tuning character descriptions for days to make Pygmalion understand how to play that character. And it could barely follow narration. But I think I was happy with the adult stuff. I suppose that’s also simplistic by today’s standards.

      A model of today gets a fair amount of the nuances and consequences of a character’s personality right. And it easily follows narration without me repeating every third sentence that we’re still sitting at the kitchen table and talking… I’m always amazed when an advancement is big enough that I can actually feel things getting more intelligent and capable.

      I haven’t yet tried the model/fine-tune you mentioned. I’m currently on MythoMax.

  • impiri@lemm.ee

    Very cool that they have a mix with MythoMax right out of the gate. It’ll be interesting to see the differences between MythoMax/Pygmalion-2/Mythalion as everyone kicks the tires.

    • ffhein@lemmy.world

      I did some quick testing yesterday and my initial impressions were that Mythalion and Pyg2 (13B q5_K_M versions, btw) were a bit more eloquent and verbose in some situations, but they would often take this too far and start writing novels instead of dialogue. It also felt like they were more prone to take a sentence and repeat it verbatim as part of all their turns. It’s possible that these issues could be toned down by adjusting generation parameters (sketch below), but MythoMax has been very easy to get good results out of.
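
      Roughly the kind of thing I mean, as a sketch only: these are llama.cpp-style sampler names, and the values are untested starting points from me, not recommendations.

      ```python
      # Sketch: sampler settings one might try to rein in the novel-writing
      # and the verbatim repetition. Names follow llama.cpp / llama-cpp-python
      # conventions; the values are guesses to tune from, not tested numbers.
      gen_params = {
          "temperature": 0.8,      # slightly lower -> less rambling
          "repeat_penalty": 1.15,  # >1.0 penalizes repeating recent tokens
          "max_tokens": 300,       # hard cap so one turn can't become a novel
          "stop": ["<|user|>"],    # cut off before it writes the user's next turn
      }
      ```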

      It’s interesting that you can specify which “mode” pyg2 should operate in as part of the system prompt, but I didn’t test how much difference it actually makes on generation. I told it to be in “instruction following mode” and it seemed good enough at general tasks as well.

      If I understand pyg2’s model card correctly, you’re supposed to prefix all turns with <|user|> or <|model|>, which I didn’t manage to get text-generation-webui to do in chat-instruct mode, so I just used the notebook tab instead.
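
      For anyone who hasn’t read the model card: my understanding is that the format looks roughly like this (a sketch; the role tokens are from the card, the persona and messages are made up by me):

      ```python
      # Rough sketch of the Pygmalion-2 prompt format as I read the model card:
      # three role tokens (<|system|>, <|user|>, <|model|>), with each turn
      # prefixed by its token. Persona and message text are placeholders.
      system = "<|system|>Enter RP mode. Pretend to be Alice and stay in character."
      prompt = (
          system
          + "<|user|>Hi Alice, what are you reading?"
          + "<|model|>"  # generation continues from here, in character
      )
      ```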

      • micheal65536@lemmy.micheal65536.duckdns.org

        text-generation-webui’s “chat” and “chat-instruct” modes are… weird and badly documented when it comes to using a specific prompt template. If you don’t want to use the notebook mode, use “instruct” mode, set your turn template with the required tags, and include your system prompt in the context (? I forget what it is labeled as) box.

        EDIT: Actually I think text-generation-webui might use <|user|> as a special string meaning “substitute the user prefix set in the box directly above the turn template box”. Why they have a turn template field with “macro” functionality and then separate fields for the user and bot prefixes, when you could just… put the prefix directly in the turn template, I have no idea. It’s not as though you would ever want or need to change one without the other anyway. But it’s possible that as a result of this you can’t actually use <|user|> itself in the turn template…
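
        To illustrate the clash I mean (a toy example, not webui’s actual code):

        ```python
        # Toy illustration, not text-generation-webui's implementation. Suppose
        # the frontend expands "<|user|>" in the turn template as a macro for
        # the user-prefix field:
        turn_template = "<|user|>\n<|user-message|>\n<|bot|>\n<|bot-message|>\n"
        user_prefix = "<|user|>"  # the literal token pyg2 wants
        bot_prefix = "<|model|>"

        rendered = (turn_template
                    .replace("<|user|>", user_prefix)
                    .replace("<|bot|>", bot_prefix))
        # Setting the prefix field to the literal token happens to work here,
        # but there is no way to put a literal "<|user|>" into the template
        # independently of the macro, because the macro pass rewrites every
        # occurrence of that string.
        ```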

        • rufus@discuss.tchncs.de (OP)

          Seems easier with SillyTavern. They’ve included screenshots with recommended settings for that in the blog post.

          • micheal65536@lemmy.micheal65536.duckdns.org

            TBH my experience with SillyTavern was that it merely added another layer of complexity/confusion to the prompt formatting/template experience, as it runs on top of text-generation-webui anyway. It was easy for me to end up with configurations where e.g. the SillyTavern turn template would be wrapped inside the text-generation-webui one, and it is very difficult to verify what the prompt actually looks like by the time it reaches the model as this is not displayed in any UI or logs anywhere.

            For most purposes I have given up on any UI/frontend and I just work with llama-cpp-python directly. I don’t even trust text-generation-webui’s “notebook” mode to use my configured sampling settings or to not insert extra end-of-text tokens or whatever.
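
            A minimal sketch of what that looks like with llama-cpp-python’s high-level API (model path, prompt text and sampler values are placeholders, not recommendations):

            ```python
            from llama_cpp import Llama

            # Load a local GGUF model; path and context size are placeholders.
            llm = Llama(model_path="./pygmalion-2-13b.Q5_K_M.gguf", n_ctx=4096)

            # This string is exactly what reaches the model: no frontend
            # template silently wrapping it.
            prompt = "<|system|>Enter RP mode.<|user|>Hello there.<|model|>"

            out = llm(
                prompt,
                max_tokens=250,
                temperature=0.8,
                repeat_penalty=1.15,
                stop=["<|user|>"],  # don't let it write the user's next turn
            )
            print(out["choices"][0]["text"])
            ```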

            • rufus@discuss.tchncs.de (OP)

              I’ve had exactly the same experience. I use Koboldcpp, and oftentimes the notebook mode as well. SillyTavern is super complex and difficult to understand. In this case it’s okay; I can copy the settings from the screenshots (unless the UI changes).