What might be interesting would be to have it displayed, but grouped by instance. That way we could see some data and potentially uncover troll instances or attempts to brigade the conversation without opening ourselves up to personal attacks.
Ah, I was hoping for something native as I access it from multiple devices. Thanks though, I’ll check it out!
What theme is that? I’ve tried a few but they never look that good.
You could run Firefox in a container attached to the VPN for browsing. You could then connect to it from your workstation over your LAN.
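If it helps, a setup like that could be sketched roughly as below. This is a minimal example, not my exact config — the images (`qmcgaw/gluetun`, `jlesage/firefox`), ports, and provider settings are assumptions you'd adapt:

```yaml
# Sketch: Firefox whose traffic exits via the VPN, with its UI on the LAN.
services:
  vpn:
    image: qmcgaw/gluetun            # WireGuard/OpenVPN client container
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=custom  # fill in your provider/credentials
    ports:
      - "5800:5800"                  # browser UI, published on the LAN host
  firefox:
    image: jlesage/firefox           # serves Firefox over VNC/noVNC
    network_mode: "service:vpn"      # share the VPN container's network stack
    depends_on:
      - vpn
```

Because the browser container has no network of its own, everything it does goes out through the VPN container, while you reach the UI from your workstation over the LAN.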
I tried that once. They never watched the show and didn’t give back the USB. 🙁
In a similar vein, I’ve seen a lot of auto-moderator implementations created. If, instead of creating yet another project, people contributed to existing ones, we’d have a good core set of functionality that could be shared across instances. Competing implementations are fine, but at some point the effort gets spread so thin that progress is limited.
I’m not sure what’s worse: the engineer who thought this would work, or the company that doesn’t do code reviews.
Squirt me some tunes, bro!
It’s based on WireGuard with some added benefits, and it’s free for up to 3 users. I’ve had no issues with it and even use it for corporate networks. An alternative is ZeroTier; I haven’t used it myself, but I hear a lot of people recommend it too.
I get what they’re saying and it may be ‘technically correct’, but the issue is more nuanced than that. In my experience, some trackers have strict requirements or restricted auth tokens (e.g. can’t browse & download from different IPs). Proxying may be the solution, but I’d have to look at how it decides what traffic gets routed where.
There’s some overlap with my torrrents.py and qbitmanage, but some of its other features sound nice. It also led me to Apprise, which might be the notifications solution I’ve been looking for!
Some of the arr-scripts already handle syncing the settings. I had to turn them off because it kept overwriting mine, but Recyclarr might be more configurable.
Thanks!
The problem I’ve found is that the services will query indexers, and not all of the trackers allow you to use multiple IPs. That’s why I found it easier to route all outbound requests through the VPN, so I didn’t get in trouble. It’s also why I have the Firefox container set up inside that network, exposed over the local network as a VNC session, so I can browse the sites while maintaining a single IP.
I do have qbittorrent set up with a kill switch on the VPN interface managed by Gluetun.
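For anyone curious, that kill-switch pattern usually looks something like this. It’s a minimal sketch, not my literal file — the images and the custom-provider variables are placeholders for your own VPN details:

```yaml
# Sketch: qBittorrent that can only reach the internet through Gluetun.
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN
    environment:
      - VPN_SERVICE_PROVIDER=custom  # your WireGuard config goes here
    ports:
      - "8080:8080"                  # qBittorrent web UI, published via gluetun
  qbittorrent:
    image: linuxserver/qbittorrent
    network_mode: "service:gluetun"  # no route except through the VPN container
    depends_on:
      - gluetun
```

The “kill switch” comes for free: the torrent container has no network interface of its own, and Gluetun firewalls off everything except the tunnel, so if the VPN drops, nothing leaks.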
The server itself is running nothing but the hypervisor. I have a few VMs running on it that make it easy to provision isolated environments. Additionally, it’s made it easy to snapshot a VM before performing maintenance in case I need to roll back. The containers provide isolation from the environment itself in the event of a service going awry.
Coming from cloud environments where everything is a VM, I’m not sure what issues you’re referring to. The performance penalty is almost non-existent while the benefits are plenty.
The wiki is a great place to start. Also, most of the services have pretty good documentation.
The biggest tip would be to start with Docker. I had originally started running the services directly in the VM, but quickly ran into problems with state getting corrupted somewhere. After enough headaches I switched to Docker. I then had to spend a lot of time remapping all of the files to get it working again. Knowing where the state lives on your filesystem and that the service will always restart from a known point is great. It also makes upgrades or swapping components a breeze.
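To illustrate the “knowing where the state lives” part, a service definition with explicit bind mounts might look like this (the paths and image here are just examples, not my actual layout):

```yaml
# Sketch: all mutable state bind-mounted to known host paths.
services:
  sonarr:
    image: linuxserver/sonarr    # swap in whichever service you run
    volumes:
      - ./sonarr/config:/config  # the service's state lives here on the host
      - /mnt/media:/data         # shared media library
    restart: unless-stopped      # restarts from a known point after reboots
```

With that, backing up before an upgrade is just copying the config directory, and swapping the container out doesn’t touch your data.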
Everyone has to start somewhere. Just take it slow and don’t be afraid to make mistakes. Good luck and have fun! 😀
If you have the time and resources, I highly recommend it. Once it’s all running, it becomes mostly a ‘set it and forget it’ situation. You don’t have to scroll through pages of search results to find content; it’ll automatically grab releases based on your configured quality profile (or upgrade them to better quality). Additionally, you can easily stream to any device on your home network (or remotely with a VPN).
You don’t have to do it all at once. Start with a single service you’re interested in and slowly add more over time.
For a long time, that was the case. Then the greed nation attacked. Now they’ve reproduced the cable model on the web, and more than half of the services have terrible clients and infrastructure.
If I could pay for a single service that operated similarly to this setup, I probably would sign up for it. That’s what made Netflix so successful, until all of the studios thought they could do better. And now the consumer has to suffer the consequences.
Good point, updated with HQ link.
Each service is a separate docker-compose.yml, but they’re more-or-less the same as the example configs provided by each service. I did it this way, as opposed to a single file, to make it easier to add or remove services following this pattern.
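Roughly speaking, the layout looks like this (the service names are just examples):

```
stack/
├── sonarr/
│   └── docker-compose.yml   # one self-contained compose file per service
├── radarr/
│   └── docker-compose.yml
└── qbittorrent/
    └── docker-compose.yml
```

Bringing a service up is a `docker compose up -d` inside its directory, and removing one is just deleting its folder — nothing else in the stack has to change.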
I do have a higher quality version of the diagram, but had to downsize it a lot to get pictrs to accept it…
Likely because it’s $current_year and there are better choices available.