• 0 Posts
  • 21 Comments
Joined 2 years ago
Cake day: June 18th, 2023


  • 100% agree.

    For anyone who may disagree, consider thinking of excess wealth as excess food.

    If you were in a stadium full of people representing all of humanity, and you had more food than you could ever eat in multiple lifetimes, would you not be an evil person for not sharing it with those who are literally starving to death?

    These are people with enough wealth to easily pay a team of people to plan out how to appropriately give away most of it, so they have no “excess food” left by the time they die, without it impacting their day-to-day lifestyle. Instead, they let others starve.


  • I was also not sure what this meant, so I asked Google’s Gemini, and I think this clears it up for me:


    This means that the creators of Llama 3.3 have chosen to release only the version of the model that has been fine-tuned for following instructions. They are not making the original, “pretrained” version available.

    Here’s a breakdown of why this is significant:

    • Pretrained models: These are large language models (LLMs) trained on a massive dataset of text and code. They have learned to predict the next word in a sequence, and in doing so, have developed a broad understanding of language and a wide range of general knowledge. However, they may not be very good at following instructions or performing specific tasks.
    • Instruction-tuned models: These models are further trained on a dataset of instructions and desired outputs. This fine-tuning process teaches them to follow instructions more effectively, generate more relevant and helpful responses, and perform specific tasks with greater accuracy.

    In the case of Llama 3.3 70B, you only have access to the model that has already been optimized for following instructions and engaging in dialogue. You cannot access the initial pretrained model that was used as the foundation for this instruction-tuned version.

    Possible reasons why Meta (the creators of Llama) might have made this decision:

    • Focus on specific use cases: By releasing only the instruction-tuned model, Meta might be encouraging developers to use Llama 3.3 for assistant-like chat applications and other tasks where following instructions is crucial.
    • Competitive advantage: The pretrained model might be considered more valuable intellectual property, and Meta may want to keep it private to maintain a competitive advantage.
    • Safety and responsibility: Releasing the pretrained model could potentially lead to its misuse for generating harmful or misleading content. By releasing only the instruction-tuned version, Meta might be trying to mitigate these risks.

    Ultimately, the decision to release only the instruction-tuned model reflects Meta’s strategic goals for Llama 3.3 and their approach to responsible AI development.
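
    To make the practical effect concrete: since only the instruction-tuned checkpoint is published, any code you write ends up targeting that variant and its chat format. Here’s a minimal sketch, assuming the publicly listed Hugging Face repo id meta-llama/Llama-3.3-70B-Instruct and the standard transformers workflow (this is my illustration, not something from the release itself):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Only the instruct variant is published; there is no pretrained/base repo to point at.
    model_id = "meta-llama/Llama-3.3-70B-Instruct"

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    # Instruction-tuned checkpoints expect a chat-formatted prompt, so we apply the
    # tokenizer's chat template instead of feeding raw continuation text.
    messages = [{"role": "user", "content": "Explain pretrained vs. instruction-tuned models in two sentences."}]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(input_ids, max_new_tokens=128)
    print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))

    If Meta had also released the base model, you would expect a second repo without the “-Instruct” suffix, the way earlier Llama releases had both; with Llama 3.3 there is only the one.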






  • Go fuck yourself with this pacifist attitude

    there you go again, assuming you know anything about who i am, who i support, and what actions i’ve taken/not taken.

    my critique of your attitude is that you blame others in a “matter of fact” way without knowing anything about them. childish, immature, and edgy are appropriate descriptions for your reactions. i’m no longer responding after this as clearly you are quick to anger and anger doesn’t lead to rational thought. if you’re not already seeing a therapist, i strongly suggest it. lashing out and projecting doesn’t hurt anyone but yourself.


  • I’ll be honest, there aren’t a lot of pros to using Firefox if you don’t care about using the best ad blocker (uBlock Origin). That said, if you haven’t tried it recently, you should give it a shot. I would recommend installing uBlock Origin and Dark Reader (if you like your pages dark) for the best experience.

    It’s not as fast as Chrome (though the difference isn’t noticeable on newer devices), but I personally disagree with giving Google so much power over the web. I’m ideologically opposed to using anything based on Chromium or not open source, so my options are limited.





  • Uhh. Billions of ways?

    Okay, I’m exaggerating for effect. That said, I’m not saying it isn’t gruesome. I’m saying the “sacrifice” pales in comparison to the everyday horrors that happen to real people who aren’t promised eternal bliss for the loss of a single weekend. Just look at what the cartels post online; there are much worse options out there.