- cross-posted to:
- hackernews@derp.foo
- technews@radiation.party
If you just want to run LLMs quickly on your computer from the command line, this is about as simple as it gets. Ollama provides an easy CLI to generate text, and there’s also a Raycast extension for more powerful usage.
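Beyond the CLI, Ollama also runs a local HTTP server you can script against. Here’s a minimal Python sketch, assuming Ollama’s default port (11434) and that you’ve already pulled a model (using "llama2" purely as an example name):

```python
import json
import urllib.request

# Minimal sketch: ask a locally running Ollama server for a completion.
# Assumes Ollama is listening on its default port 11434 and that the
# model named below has already been pulled (e.g. `ollama pull llama2`).
payload = {
    "model": "llama2",               # swap in whichever model you have pulled
    "prompt": "Why is the sky blue?",
    "stream": False,                 # single JSON response instead of a stream
}

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
    print(body["response"])          # the generated text
```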
When running models locally, I presume the models are trained elsewhere and the weights are exported as a “model” file, for example Meta’s Llama model.
Do these models get updated, with new versions released? I don’t quite understand how that works.