  • User perspective:

    If you want something big, I’d pitch NixOS, as in the core distribution. Its documentation is a nightmare: as a user I found myself going through the options search and trying to figure out what the options actually mean far more often than I found comprehensive documentation.

    That would be half writing and half coordinating writers, though, I suspect.

    Another great project with mixed documentation quality is openHAB. It fits the bill of being more backend-heavy, and the devs are very open in my experience. I actually see it as superior in its core concepts to the far more popular Home Assistant in every aspect except documentation!

    That said: thanks for putting the effort in! ♥



  • You don’t! You observe the result. When no interaction happens, the resulting pattern is well described by wave functions. If an interaction happens to determine which slit it is traveling through, the double-band result is seen and can be described by classical particle mechanics.

    This “we have math for both results” got interpreted as “has properties of both wave and particle”. Which I guess was one press release away from “it’s both, and it depends on whether I’m looking!”
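
    A minimal sketch of the math behind those two regimes (the amplitude names ψ₁ and ψ₂ are mine, not from the thread; they stand for the paths through slit 1 and slit 2):

    ```latex
    % No which-path interaction: amplitudes add first, then get squared.
    % The cross term is the interference (fringe) pattern.
    I_{\text{no measurement}} = |\psi_1 + \psi_2|^2
                              = |\psi_1|^2 + |\psi_2|^2 + 2\,\mathrm{Re}(\psi_1^* \psi_2)

    % Which-path interaction happens: the cross term is gone and only the
    % two-band "particle" pattern remains.
    I_{\text{which-path}} = |\psi_1|^2 + |\psi_2|^2
    ```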









  • According to their page it’s a pure SearXNG instance. I didn’t see anything change on my own instance, so there are three options I can see:

    • The mentioned server-side changes (e.g. the server move you mentioned, but it could also be server settings, provider settings, etc.).
    • Client-side changes: somehow your Firefox provides different information that alters the results.
    • Subjective change: it’s always a possibility that either what you searched for or your perception was more attuned to Cyrillic.

    And then there’s the obligatory “none or all of the above”.

    Personally I’d guess it’s just a fluke. I gave it a few searches from Firefox mobile set to “all languages” and got a mix of mainly English with a bit of German and French in the results.

    Edit: if you’re comfortable with that, feel free to share some search terms and we can compare results. I’d be curious myself!
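
    If you want to take the browser out of the equation entirely, something like the sketch below could compare instances with the language pinned. The instance URLs are placeholders, and the JSON output only works if an instance has the json format enabled in its settings:

    ```python
    # Rough sketch: query two SearXNG instances with the same query and a fixed
    # language setting so browser headers can't influence the result set.
    # Assumptions (not from the thread): the instance URLs, and that both
    # instances allow the JSON output format.
    import requests

    INSTANCES = [
        "https://searx.example.org",        # placeholder for the instance in question
        "https://my-instance.example.net",  # placeholder for your own instance
    ]

    def top_results(instance: str, query: str, language: str = "all", n: int = 5):
        resp = requests.get(
            f"{instance}/search",
            params={"q": query, "language": language, "format": "json"},
            timeout=10,
        )
        resp.raise_for_status()
        return [r.get("url") for r in resp.json().get("results", [])[:n]]

    query = "example search term"
    for instance in INSTANCES:
        print(instance)
        for url in top_results(instance, query):
            print("   ", url)
    ```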




  • I’d try ChatGPT for that! :)

    But to give you a very brief rundown: if you have no experience in any of these aspects and are self-learning, you should expect a long ramp-up phase! Perhaps there is an easier route, but if there is I’m not familiar with it.

    First, familiarize yourself with server setups. If you only want to host this you won’t have to go into the network details, but they can become a source of errors at some point, so be warned! The usual tip here is to get familiar enough with Docker that you can read and understand docker compose files. The de facto standard for self-hosting is a Linux machine, but I have read of people who used macOS and even Windows successfully.

    One aspect quite unique to the model landscape is the hardware requirements. As much as it hurts my Nvidia-despising heart, at this point in time they are the de facto monopolist. Get yourself a card with 12 GB of VRAM or more (everything below that will be painful, if you get things running at all; I’ve tried running smaller models on an 8 GB card but experienced a lot of waiting and crashes). Read a bit about CUDA on your chosen OS and what the drivers need; there’s a small sketch at the end of this comment for checking what your system actually sees.

    Once you understand the whole port, container, path-mapping and environment-variable business, you’re most of the way there.

    Then it’s a matter of going to the GitHub page linked, following their guide and starting a container. Loading models is actually the easier part once you have the infrastructure running.
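
    As a quick sanity check for the hardware side mentioned above, something like this can confirm that the driver and CUDA stack are visible and how much VRAM the card has. It assumes PyTorch is installed, which is just one convenient way to check and not something the guide prescribes:

    ```python
    # Minimal CUDA / VRAM sanity check (assumes PyTorch is installed).
    import torch

    if not torch.cuda.is_available():
        print("No CUDA device visible - check the driver or GPU passthrough.")
    else:
        props = torch.cuda.get_device_properties(0)
        vram_gb = props.total_memory / 1024**3
        print(f"{props.name}: {vram_gb:.1f} GB VRAM")
        if vram_gb < 12:
            print("Below the ~12 GB recommended above; expect tight fits or crashes.")
    ```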