If you need to add a pinch of salt, get better coffee
Wrangler relaxed fit boot cut
Apple changing their browser was the final straw that killed the internet for ya?
If protests worked they'd be illegal
I'm using Kopia with AWS S3 for about 400GB and it runs a bit less than $4/mo. If you set up a .storageconfig file, it lets you assign a storage class based on blob name prefixes. Kopia conveniently names the less frequently accessed blobs with a leading "p", so you can put those in the "infrequently accessed" class while blobs that are accessed more often stay in standard storage:
{ "blobOptions": [ { "prefix": "p", "storageClass": "STANDARD_IA" }, { "storageClass": "STANDARD" } ] }
I've been using OneDev. It's really easy to set up, kinda just works out of the box
Are these western chauvinists in the room with us right now?
As of now, it's just one person getting some clarity on the definition of poverty from another person, and you. You must really want to peddle your paywalled links, because that's quite the comment hair-trigger you've got there.
Elon's wet fucking dream
It needs to run the McD app... JFC....
I use it in a homelab, I don't need to apply prod/team/high-availability solutions to my Audiobookshelf or Mealie servers. If an upgrade goes wrong, I'll restore from backup. Honestly, in the handful of years I've been doing this, only one upgrade of an Immich container caused me trouble and I just needed to change something in the compose file and that was it.
I get using these strategies if you're hosting something important or just want to play with new shiny stuff but, in my humble opinion, any extra effort or innovating in a homelab should be spent on backups. It's all fun and games until your data goes poof!
Komodo is a big topic so I'll leave this here: komo.do.
In a nutshell, though, all of Komodo is backed by a TOML-based config. You can get the config for your entire setup from a button on the dashboard. If you have all of your compose files inline (using the editor in the UI) and you version control this file, you can basically spin up your entire environment from config (thus my Terraform/Cloudformation comparison). You can then either edit the file and commit, which will allow a "Resource Sync" to pick it up and make changes to the system, or you can enable "managed mode" and allow committing changes from the UI to the repo.
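For a rough idea of what that exported config looks like, here's a hedged sketch of a single stack with an inline compose file. The field names are from memory and may not match the current Komodo schema exactly; the dashboard export at komo.do's docs is the authoritative source:

```toml
# Hypothetical Komodo resource sync fragment: one stack, compose inline.
[[stack]]
name = "mealie"

[stack.config]
server = "local"
file_contents = """
services:
  mealie:
    image: ghcr.io/mealie-recipes/mealie:latest
    ports:
      - "9925:9000"
"""
```

With everything inline like this, a single versioned TOML file describes every stack on the host, which is what makes the Terraform/Cloudformation comparison fit.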
EDIT: I'm not really sure how necessary the inline compose is, that's just how I do it. I would assume, if you keep the compose files in another repo, the Resource Sync wouldn't be able to detect the changes in the repo and react ¯\_(ツ)_/¯
I guess I don't get that granular. It will respect the current docker compose image path. So, if you have the latest tag, that's what it will use. Komodo is a big topic: https://komo.do/
Not sure why Renovate is necessary when Komodo has built-in functionality to update Docker images/containers. I wish there was an option to check less often (like once a day); the longest interval it offers is hourly.
Also, if you're using Komodo and have one big repo of compose files, consider just saving your entire config toml to a repo instead. You end up with something akin to Terraform or Cloudformation for your Docker hosts
It's almost like punctuation was made for a reason...
I, apparently, have the pleasure of introducing you to Cave Johnson: https://youtu.be/NyLUU3O4zW8
Oh no, they might have to flip over another couch cushion to find that kind of money
I used nextcloud for a while but ended up with a combo of syncthing and filebrowser to similar effect
Multiple sag layers? What is this, the early 90's?
You're the reason we can't have nice things
...to understand Donald Trump...
I'll just stop you right there, nobody needs this

Proxmox Backup Server network traffic
Hey all! I'm running Proxmox VE with the tteck PBS LXC and I can't figure out why there is this constant network traffic on PBS. I have backups set to run in the early morning and the screenshot is from when it should be idle. Any ideas? I know I'm not providing much info here so any clarifying questions are welcome since I don't know what would be important for troubleshooting. Thanks!

Docker network internet access
Hey all! I'm having an issue that's probably simple but I can't seem to work it out.
For some history (just in case it matters): I have a simple server running Docker, with all services defined in docker-compose files. Probably doesn't matter, but I've switched between a few management UIs (Portainer, Dokemon, currently Dockge). Initially, I set everything up in Portainer (including the main network) and migrated everything over to Dockge. I was using Traefik labels, but it was getting a bit annoying since I tend to tinker on a tablet. I wanted something a bit more UI-focused, so I switched to NPM.
Now I'm going through all of my compose files and cleaning up a bunch of things like Traefik labels, homepage labels, etc... but I'm also trying to clean up my Docker network situation.
My containers are all on the same network, and I want to slice things up a little better, e.g. I have the Cloudflared container and want to be selective about what containers it has access to network-wise.
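A hedged sketch of the kind of segmentation described above (service and network names are made up): give Cloudflared its own network and attach only the containers it should be able to reach, keeping everything else on a separate network it isn't part of:

```yaml
# Hypothetical compose snippet: "cloudflared" can only reach "app";
# "db" sits on a backend network that cloudflared isn't attached to.
services:
  cloudflared:
    image: cloudflare/cloudflared:latest
    networks: [edge]
  app:
    image: nginx:latest
    networks: [edge, backend]
  db:
    image: postgres:16
    networks: [backend]

networks:
  edge:
  backend:
    internal: true   # containers on this network get no outbound internet
```

Docker's network isolation means cloudflared simply has no route to `db`, without any firewall rules.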

Traefik conditional certificate for same URL
Hey all!
I have a bunch of services running on my home server and was looking to expose some of them publicly via Cloudflare tunnel. This is done and working great using the origin server certificate and strict TLS.
Up until now, I've been using self-signed certs internally but now I don't want to deal with the "proceed anyway" crap on browsers. I have Traefik set up to get certs from Cloudflare using DNS challenge and that seems to be working.
So, now my problem is: how do I switch between these certificates for the same URL when I'm internal vs public? I'd rather keep that traffic local if I'm at home, which is also working; I just can't figure out how to get Traefik to use the appropriate certificate depending on whether the request is coming from my LAN or Cloudflare.
Any suggestions? Is there a better way to accomplish what I want to do?
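For context, a hedged sketch of the DNS-challenge setup mentioned above, as a Traefik static-config fragment (the resolver name, email, and storage path are placeholders; the Cloudflare provider reads an API token from the `CF_DNS_API_TOKEN` environment variable):

```yaml
# Hypothetical traefik.yml fragment: Let's Encrypt certs via
# Cloudflare DNS-01 challenge. Resolver name "cf" is made up.
certificatesResolvers:
  cf:
    acme:
      email: you@example.com          # placeholder
      storage: /letsencrypt/acme.json # placeholder path
      dnsChallenge:
        provider: cloudflare
```

Routers then opt in with `tls.certresolver: cf`; since the challenge happens over DNS, it works even when the traffic itself never leaves the LAN.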
EDIT: Looks like I'm just going full Cloudflare on this one, thanks for your help everyone!