It blocked YouTube ads when ads were served from other domains or subdomains. Now that they’re served from the same subdomains as the videos, it doesn’t block them anymore.
I think it has to do with Apple Intelligence, which requires 16 GB.
Thank you, I had this problem for a while, without actively looking to fix it. Your message gave me everything I needed to reprocess all my failed imported emails!
Let’s automate EVERYTHING!
Arctic is really cool (pun intended). I’m using the beta version for now, but the released one is already really good!
Orion allows you to install extensions. It works so-so, but that’s a first step.
It seems to have it as it’s based on yt-dlp.
Looks good, I’ll need to give it a try!
88? Only?
Beware, it’s “Super Meat Boy Forever” that is free for a limited time on the Epic Games store, not “Super Meat Boy” that becomes forever free…
Indeed, I installed it and it’s fixed!
They will be exempted, as well as residents and professionals.
Your mouth was salty? What’s the problem then?
M-O, the cleaning robot in WALL-E
It’s done to keep a strong contrast between each letter and its background, so you can read all the information at all times (except in edge cases: for the seconds in the video you show, the background is roughly 50-50 black and white, so the text stays black and can be a bit hard to read).
I’m using the Jeeaaasus/youtube-dl docker image, which is basically a cron wrapper around yt-dlp.
I’m downloading channels and playlists into a directory structure, with proper naming (every video is named S[YY]E[MMDD], so a video released today would be episode 1129 of season 23), that’s usable by Plex, so I can access everything both locally and remotely. I’m pretty sure Emby or Jellyfin could do the same on this front.
I also added little scripts to update the poster images so the channels appear nicely in my Plex interface.
The youtube-dl channel configuration goes through a text file that’s not really easy to access. I’d like a nicer interface (the docker image provides one, but it’s just a simple text editor, so it’s easier to open the file in a terminal); that said, once the list of subscriptions, channels and playlists is set, I don’t change it very often.
That’s my setup. I know it’s not a fit for everybody, but it suits me well. Feel free to ask me about the config and script files in detail if you’re interested.
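If you’re curious about the naming part, here’s a rough sketch of what it boils down to, using the yt-dlp Python API directly rather than my actual cron/config setup (the paths and channel URL below are placeholders):

```python
# Rough sketch: reproduce the S[YY]E[MMDD] naming with a yt-dlp output template,
# so Plex sees each video as "season = two-digit year, episode = month+day"
# (a video uploaded on 2023-11-29 becomes S23E1129).
from yt_dlp import YoutubeDL

opts = {
    "outtmpl": (
        "/media/youtube/%(channel)s/"
        "%(channel)s - S%(upload_date>%y)sE%(upload_date>%m%d)s - %(title)s.%(ext)s"
    ),
    # remember what's already been grabbed so periodic runs only fetch new videos
    "download_archive": "/media/youtube/archive.txt",
}

with YoutubeDL(opts) as ydl:
    ydl.download(["https://www.youtube.com/@SomeChannel"])
```

The equivalent `-o` and `--download-archive` options exist on the yt-dlp command line too, which is roughly what the docker image runs on its schedule.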
A forward slash in file names is the way to go if you want to make a mess of your filesystem. It may be allowed on Windows (I don’t even know if that’s really the case), but it’s the path delimiter on Linux, so it’s probably being replaced by another character that isn’t printable in the console.
I suggest you try renaming your files before sending them, using hyphens (-) for example; you can also use a proper ISO 8601 date and time representation, which gives you chronologically sorted files just by listing them in alphabetical order.
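If it helps, here’s a rough sketch of that kind of renaming in Python (the folder path is a placeholder, and the date used is just the file’s modification time):

```python
# Rough sketch: swap path-hostile characters for hyphens and prefix each file
# name with an ISO 8601 date, so an alphabetical listing is also chronological.
from datetime import datetime
from pathlib import Path

folder = Path("/path/to/files")  # placeholder

for f in folder.iterdir():
    if not f.is_file():
        continue
    clean = f.name
    for bad in ("/", "\\", ":"):  # separators that confuse at least one OS
        clean = clean.replace(bad, "-")
    # prefix with the modification date, e.g. "2023-11-29 - report.pdf"
    stamp = datetime.fromtimestamp(f.stat().st_mtime).strftime("%Y-%m-%d")
    f.rename(f.with_name(f"{stamp} - {clean}"))
```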
The information you enter in the detail panel is what’s needed to access the service.
Say you have a default installation of Vaultwarden on port 1234: you can access it directly, over plain HTTP, at http://hostname:1234, and therefore you need to configure NPM so that the new proxy host reaches the service with the http scheme, on the hostname host, on port 1234.
Now, say you change your Vaultwarden installation and add all the necessary TLS public and private keys it requires; you then need to access your instance directly with the https scheme, at https://hostname:1234, so that the TLS handshake can be performed and a secure connection established. The NPM configuration then needs to use the https scheme as well to reach the service, otherwise NPM won’t be able to connect to it properly and it will fail.
That’s the “internal” part of your configuration. You can still serve it externally with TLS certificates, force TLS and so on; that’s the external part of your service. If you trust your network, the communications inside it, or the device that hosts all your services, it’s totally fine to use the http scheme to access a service internally. But if you have to reach it through a network you don’t trust (say, all the communications are unencrypted and your NPM host is not the same machine as your Vaultwarden one), then you should definitely go through the hassle of setting up all the TLS encryption directly inside Vaultwarden first, and access it only with the https scheme.
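If you’re not sure which value belongs in NPM’s Scheme field, a quick sanity check is to probe the backend directly from the NPM host. A rough Python sketch (hostname and port are placeholders):

```python
# Rough sketch: check which scheme the backend actually answers on; that's the
# same value to pick for the proxy host in NPM.
import requests

host, port = "hostname", 1234  # placeholders

for scheme in ("http", "https"):
    url = f"{scheme}://{host}:{port}"
    try:
        # verify=False because an internal instance may use a self-signed cert
        r = requests.get(url, timeout=3, verify=False)
        print(f"{scheme}: got HTTP {r.status_code} -> use '{scheme}' in NPM")
    except requests.exceptions.SSLError:
        print(f"{scheme}: TLS handshake failed, the backend is probably plain HTTP")
    except requests.exceptions.ConnectionError:
        print(f"{scheme}: no usable answer on this scheme")
```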
Trump’s hand makes me laugh, and not because of its size!