Best GPT
Khaled57@lemmy.world to Ask Lemmy@lemmy.world · 10 months ago
j4k3@lemmy.world · English · edited · 10 months ago
Uncensored Llama2 70B has the most flexibility of any model without additional training, IMO. Mixtral 8×7B is a close second, with faster inference and only minor technical issues compared to the 70B. I don't like the tone of Mixtral's alignment, though.
I use them for code snippets in Python, Bash scripting, nftables, awk, sed, and regex; CS questions; chat; waifu; spell check; an uncompromised search engine; talking through recipes and cooking ideas; basically whatever I feel like.