• 0 Posts
  • 32 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • Lots of answers here already cover use cases for additional WiFi networks, so I won’t go into that, but I haven’t seen the downsides mentioned. While you can technically run lots of WiFi networks off of the same router/AP, each SSID takes a bit of airtime to broadcast. That might sound insignificant, since only a tiny amount of information is transmitted, but it matters more than one might expect: the SSIDs are broadcast quite often, and for compatibility reasons the beacons are always transmitted at the lowest supported speed, meaning they need far more airtime than normal WiFi traffic would for the same amount of data. This is also why it is a good idea to disable older WiFi standards (such as 54 Mbit/s 802.11g) if no legacy clients need them.

    Having two networks is usually fine and doesn’t cause noticeable performance degradation; having four or more usually does, particularly in an already crowded area with lots of WiFi networks. The rough numbers below give an idea of the scale.
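
    To get a feel for those numbers, here is a back-of-the-envelope sketch in Python. The ~300-byte beacon size and the default 102.4 ms beacon interval are assumed typical values, and per-frame preamble overhead is ignored, so treat the output as an order-of-magnitude estimate only:

        # Rough estimate of how much airtime beacon frames eat up.
        # Beacon size and interval are assumed typical values.
        BEACON_BYTES = 300          # typical beacon frame size (varies with enabled features)
        BEACON_INTERVAL_S = 0.1024  # default beacon interval: 102.4 ms

        def beacon_airtime_fraction(num_ssids: int, rate_mbps: float) -> float:
            """Fraction of total airtime spent on beacons for num_ssids networks."""
            tx_time_s = (BEACON_BYTES * 8) / (rate_mbps * 1e6)
            beacons_per_second = 1 / BEACON_INTERVAL_S
            return num_ssids * beacons_per_second * tx_time_s

        for ssids in (1, 2, 4, 8):
            low = beacon_airtime_fraction(ssids, 1.0)   # legacy 1 Mbit/s beacon rate
            high = beacon_airtime_fraction(ssids, 6.0)  # 6 Mbit/s with legacy rates disabled
            print(f"{ssids} SSIDs: {low:.1%} of airtime at 1 Mbit/s, {high:.1%} at 6 Mbit/s")

    Even with these simplifications the trend is clear: at the legacy 1 Mbit/s beacon rate, eight SSIDs already burn close to a fifth of the airtime, while raising the minimum rate cuts that down considerably.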


  • When it comes to privacy (and also security), using a router provided by the cable company is a concern, because that router can see and access all devices on your local network, and you can’t be sure that security issues are patched in a timely fashion, if ever… Using a modem provided by the cable company, on the other hand, is not much of an issue, because you have to trust the company anyway when it comes to your traffic to/from the Internet. These days most Internet traffic is encrypted (except DNS, which is often still unencrypted), so that is not a big deal. Of course there can be other reasons to use a different modem.

    In either case, it makes sense to switch to a non-ISP DNS server, preferably an encrypted one (DNS-over-TLS or DNS-over-HTTPS), so the ISP can’t see which websites you are accessing.
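
    As a concrete illustration of the DNS-over-HTTPS part, here is a small Python sketch that resolves a name via Cloudflare’s public DoH resolver and its JSON API; the choice of resolver is only an example, any DoH provider works the same way:

        # Minimal DNS-over-HTTPS lookup against Cloudflare's public resolver,
        # using its JSON API (application/dns-json). The point is that the
        # query travels inside HTTPS instead of plaintext UDP port 53.
        import json
        import urllib.parse
        import urllib.request

        def doh_lookup(name: str, record_type: str = "A") -> list[str]:
            query = urllib.parse.urlencode({"name": name, "type": record_type})
            req = urllib.request.Request(
                f"https://cloudflare-dns.com/dns-query?{query}",
                headers={"Accept": "application/dns-json"},
            )
            with urllib.request.urlopen(req) as resp:
                answer = json.load(resp)
            return [record["data"] for record in answer.get("Answer", [])]

        print(doh_lookup("example.com"))

    In practice you would configure DNS-over-TLS or DNS-over-HTTPS in the router or the OS resolver rather than doing lookups by hand; the sketch just shows what the encrypted query looks like at the application level.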


  • Compared to other SBCs, Raspberry Pis have been pretty inefficient for a while. A Pi 5 idles at about 3 W, which is pretty bad for this class of board: you can get x86 PCs that idle at 3 W and are way more powerful, and other ARM SBCs use less than half that at idle and similarly less under load.

    There are probably multiple reasons for that. The Pi’s SoCs have always used rather old process nodes, which are more power hungry than the more modern ones used by other single-board computers and PCs - 16 nm for the Pi 5 SoC and 28 nm for the Pi 4. Also, the Pi 5 has that additional “south bridge” chip attached via PCIe, which consumes additional power, and for some reason the PCIe link is configured such that it never enters its power-saving states.

    Also, the power supply setup of the Pi 5 is far from ideal with its 5 V / 5 A power supply: such a low voltage at such a high current easily causes additional losses in the cable. That’s mostly relevant under high load, though.
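
    A quick calculation shows why the high current is the problem rather than the power itself. The 0.1 Ω round-trip cable/connector resistance and the 25 W load below are assumed ballpark figures, not measurements:

        # Resistive cable loss (P = I^2 * R) for the same delivered power
        # at different supply voltages. Resistance and load are assumptions.
        CABLE_RESISTANCE_OHM = 0.1  # assumed round-trip resistance of cable + connectors
        POWER_W = 25                # roughly a 5 V / 5 A supply at full tilt

        for voltage in (5.0, 9.0, 12.0, 20.0):
            current = POWER_W / voltage
            loss_w = current ** 2 * CABLE_RESISTANCE_OHM   # power burned in the cable
            drop_v = current * CABLE_RESISTANCE_OHM        # voltage sag at the board
            print(f"{voltage:4.0f} V: {current:.1f} A, "
                  f"{loss_w:.2f} W lost in the cable, {drop_v:.2f} V drop")

    With the assumed resistance, delivering 25 W at 5 V pushes 5 A through the cable and loses about 2.5 W and half a volt in it, while the same 25 W at 12 V or 20 V loses only a fraction of that. That is why higher-voltage USB-PD profiles are the more comfortable way to deliver this kind of power.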





  • Disabling the root login gains nothing regarding security.

    This is usually not the reason people recommend disabling root login. Root is an anonymous account not tied to an actual person, so in a corporate setting you don’t really know who used that account if root login is allowed. Whether this is relevant for a personal home network is for you to decide. I would say there is no strong argument for it in that setting.




  • There is quite a significant difference. An SSH server - even when running on a non-default port - is easily detectable by scanning for it. With a properly configured WireGuard setup that is not the case: for someone scanning from the outside, it is impossible to tell whether WireGuard is listening at all, since it simply won’t send any reply if you don’t have the correct key. And because it uses UDP, it isn’t even possible to tell whether any service is running on a given UDP port.
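
    A quick way to see this from the client side is a minimal probe like the Python sketch below; the address is a placeholder and 51820 is merely WireGuard’s default port:

        # Sending junk to a UDP port and getting silence looks exactly the same
        # whether nothing is listening there or a WireGuard peer is listening
        # but silently dropping unauthenticated packets.
        import socket

        def probe_udp(host: str, port: int, timeout: float = 2.0) -> str:
            with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
                sock.settimeout(timeout)
                sock.connect((host, port))        # connect so ICMP errors are reported
                sock.send(b"\x00" * 32)           # garbage, not a valid handshake
                try:
                    sock.recv(4096)
                    return "got a reply - some service answered"
                except socket.timeout:
                    return "silence - closed, firewalled, or WireGuard ignoring us?"
                except ConnectionRefusedError:
                    return "ICMP port unreachable - nothing listening there"

        print(probe_udp("203.0.113.1", 51820))    # placeholder address, default WireGuard port

    A timeout tells you nothing useful: the port might be firewalled, unused, or have a WireGuard instance behind it that simply discards anything that doesn’t authenticate.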


  • I have always found the software updates of AVM - the manufacturer of those "Fritz!Box"es - to be of questionable quality. If you request the GPL’ed sources they are obliged to release and take a look at them, you’ll notice that they use ancient versions of the Linux kernel, Busybox and other tools - ancient meaning many years old and unsupported by upstream for years. Also, they only publish those sources manually when someone asks for them, which doesn’t bode well for their internal development processes. If they used CI/CD pipelines, they could easily push out those sources with every new release…