Posts: 236 · Comments: 531 · Joined: 2 yr. ago
  • I was hoping the distros would just do the scrub/balance work for you - makes it no effort then! Good to know OpenSUSE does it for ya. From searching, it looks like Fedora doesn’t have anything built in sadly, but the posts are 1+ yr old so maaaybe they’ve added something since.

  • It’s great for single drive, RAID 0, and RAID 1 (RAID 10 is obviously fine too). Don’t use it for anything beyond that - it can still lose data on RAID 5/6.

    I’m not sure which tools Fedora includes to manage BTRFS, but these scripts are great: https://github.com/kdave/btrfsmaintenance - you use them to schedule scrubs and balances. Balance redistributes block groups, and scrub checks whether bits have unexpectedly changed due to bit rot (a hardware issue or a cosmic ray). Scrub weekly for essential photos, important docs, and the like; monthly for everything else. Balance monthly, or on demand if free drive space is tight and you want to claw back a bit more (example commands at the end of this comment).

    RAID 1 will give you bit rot detection with scrub and self-repair of said bit rot (assuming both drives don’t mystically have the same bit flip, which is very unlikely). A single drive will only detect it.

    BTRFS snapshot then send/receive is excellent for a quick backup.

    Remember that a BTRFS snapshot will keep all files in the snapshot, even if you delete them off the live drive. Deleted 500 GB of stuff but the free space didn’t go up? Probably a snapshot is remembering that 500 GB. Delete the snapshot and your space is back.

    You can make subvolumes inside a BTRFS volume, which are basically folders except you can snapshot just them. Useful for scrubbing your essential docs folder more often than everything else, or snapshotting it more often too.

    Lastly, you can disable copy-on-write (CoW) for specific directories/subvolumes. It reduces their safety but increases write speed - good for caches, and I’ve read VM disk images need it disabled for decent performance.

    Overall, great. Built in, with no need to muck with ZFS’s extra install steps, but you get the benefits ZFS has (as long as you’re OK being limited to RAID 1).
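
    To make the scrub/balance/snapshot stuff above concrete, here’s a rough sketch of the underlying btrfs commands - paths and names are made up for the example, and the btrfsmaintenance scripts schedule the first two for you:

    # check for bit rot (weekly/monthly per above), then see the result
    sudo btrfs scrub start /mnt/pool
    sudo btrfs scrub status /mnt/pool

    # balance, only rewriting block groups that are under 50% full
    sudo btrfs balance start -dusage=50 -musage=50 /mnt/pool

    # read-only snapshot of a subvolume, then send it to another BTRFS drive as a backup
    sudo btrfs subvolume snapshot -r /mnt/pool/docs /mnt/pool/.snapshots/docs-$(date +%F)
    sudo btrfs send /mnt/pool/.snapshots/docs-$(date +%F) | sudo btrfs receive /mnt/backup/snapshots

    # disable copy-on-write on a fresh, empty directory (only affects files created afterwards)
    sudo chattr +C /mnt/pool/vm-images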

  • Nice OC very relatable 11/10

  • Me and who??

  • Carol want what carol want

  • Odd, I’ll try to deploy this when I can and see!

    I’ve never had a problem with a volume being on the host system, except when user permissions were messed up. But if you haven’t given it a user parameter it’s running as root and shouldn’t have a problem. So I’ll check sometime and get back to you!

  • I try to slap read_only on anything I’d expose to the Internet to further restrict exploit possibilities - it would be abs great if you could make it work! I just follow all the recommendations on the security cheat sheet, read_only being one of them: https://cheatsheetseries.owasp.org/cheatsheets/Docker_Security_Cheat_Sheet.html

    With how simple it is, I guessed that running as a user and restricting with cap_drop: all wouldn’t be a problem.

    For read_only, many containers just need tmpfs: /tmp in addition to the volume for the db. I think many images deliberately keep temporary file writes to one directory to make applying read_only easier (rough sketch below).

    So again, I’d abs use it with read_only when you get the time to tune it!!
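
    To be concrete about the shape I mean, a minimal sketch as plain docker run flags (the image name, volume, and UID are placeholders, not your project’s actual values; the compose equivalents are read_only, tmpfs, cap_drop, user, volumes):

    docker run -d --name app \
      --read-only \
      --tmpfs /tmp \
      --cap-drop=ALL \
      --user 1000:1000 \
      -v app-db:/var/lib/app \
      some/image:latest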

  • Looks awesome and very efficient - does it also run with read_only: true (with a db volume provided, of course!)? Many containers just need a /tmp, but not always.

  • I trust the check restic -r '/path/to/repo' --cache-dir '/path/to/cache' check --read-data-subset=2000M --password-file '/path/to/passfile' --verbose. The --read-data-subset flag does the structural integrity check while also verifying a chunk of the actual data. If I had more bandwidth, I'd check more.

    When I set up a new repo, I restore some stuff to make sure it's there with restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' restore latest --target /tmp/restored --include '/some/folder/with/stuff'.

    You could automate that and regularly make sure some essential-but-not-often-changing files match, by restoring them and comparing them (sketch at the end of this comment). I would do that if I wasn't lazy, I guess, just to make sure I'm not missing some key-but-slowly-changing files. Slowly/not-often-changing because a diff would fail if the file changes hourly and you back up daily, etc.

    Or you could do as others have suggested and mount it locally, then just traverse it to make sure some key stuff works and is there: sudo mkdir -p '/mnt/restic'; sudo restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' mount '/mnt/restic'.
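
    If you wanted to automate that restore-and-compare idea, a minimal sketch (same placeholder repo/cache/passfile paths as above, and the folder to verify is made up; note restic restores under the original absolute path inside the target):

    restic -r '/path/to/repo' --cache-dir '/path/to/cache' --password-file '/path/to/passfile' \
      restore latest --target '/tmp/restic-verify' --include '/some/folder/with/stuff'
    diff -r '/some/folder/with/stuff' '/tmp/restic-verify/some/folder/with/stuff' \
      && echo "backup matches" || echo "MISMATCH - investigate"
    rm -rf '/tmp/restic-verify'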

  • It was DNS, but how?

  • I have my router (OPNsense) redirect all DNS requests to Pi-hole/AdGuard Home. AdGuard Home is easier for this since you can have it redirect a wildcard *.local.domain, while Pi-hole wants every single name individually (uptime.local.domain, dockage.local.domain). With that combo - the router not letting DNS out to upstream servers, and my local DNS servers set up to point *.local.domain at the correct location(s) - my DNS requests inside my local network never get out to where an upstream DNS could tell you to kick rocks.

    I combined the above with a (hella cheap for 10 yr) paid domain, got a wildcard certificate for the domain without exposing anything to the WAN (no IP recorded, but the cert is accepted by devices), and have all *.local.domain requests land on a single Caddy instance on one server that does the final routing to the specific services.

    I’m not fully sure what you’ve got cooking, but I hope typing out what works for me helps you figure it out on your end! Basically the router doesn’t let any DNS get by to be fucked with by the ISP (a quick check for that is below).
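
    A quick way to check that kind of setup (hostnames and resolver IPs here are just examples): ask your local resolver, then try to go around it - if the router’s port 53 redirect is doing its job, both answers come back from your local DNS.

    # ask the local resolver directly
    dig +short uptime.local.domain @192.168.1.53
    # try an outside resolver; the router should intercept it and answer locally anyway
    dig +short uptime.local.domain @1.1.1.1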

  • I’m surprised no one’s mentioned Incus - it’s a hypervisor like Proxmox, but it’s designed to install onto plain Debian no prob. It does VMs and containers just like Proxmox, and snapshots too (example commands at the end of this comment). The web UI is essential; you add a repo for it.

    Proxmox isn’t reliable if you’re not paying them - the free users are the test users - and a bit back they pushed a bad update that broke things. If I’d updated before they pulled it, I’d have been hosed.

    Basically you want a box where you don’t have to worry about applying updates, because updates are good for security. And Proxmox ain’t that.

    On top of their custom kernel and stuff, it just has fewer eyes on it than, say, the kernel Debian ships. Proxmox isn’t worth the lock-in and brittleness just for making VMs.

    So to summarize: Debian with Incus installed. BTRFS if you’re happy with 1 drive or 2 drives in RAID 1 - BTRFS gets you scrubbing and bit-rot detection (and repair with RAID 1). ZFS for more drives. Toss Cockpit on too.

    If you want something less hands-on, go with OpenMediaVault. No room for Proxmox in my view, esp. if you’re not clustering.

    Also, the iGPU on the 6600K is likely good enough for whatever transcoding you’d do (esp. if it’s rare and 1080p - it’ll do 4K no prob, and multiple streams at once). The Nvidia card is just wasting power.
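
    For a taste of the Incus workflow (image alias and instance names are just examples, and the exact snapshot subcommands can differ a bit between versions):

    incus launch images:debian/12 web            # system container
    incus launch images:debian/12 winbox --vm    # full virtual machine
    incus snapshot create web before-upgrade     # snapshot before a risky change
    incus snapshot restore web before-upgrade    # roll back if it goes sideways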

  • I too wish for an in-depth blog post, but the GitHub answer is at least succinct enough.

  • This answers all of your questions: https://github.com/containers/podman/discussions/13728 (link edited - I accidentally linked a Red Hat blog post that didn’t answer your question directly, though it does make clear that specifying a user in rootless Podman is important for security if the user running the rootless container does more than just run that container).

    So the best defense plus ease of use is rootful Podman assigning non-root UIDs to the containers (sketch below). You can do the same with Docker, but Docker with non-root UIDs assigned still carries the risk of the root-level Docker daemon being hacked and exploited. Podman does not have a daemon to be hacked and exploited, meaning root Podman with non-root UIDs assigned has no downsides!
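
    As a rough sketch of what that looks like (image names are placeholders): rootful Podman, but the process inside runs as an unprivileged UID - or let Podman pick an isolated UID range per container with --userns=auto.

    sudo podman run -d --name app --user 1000:1000 --cap-drop=ALL some/image:latest
    # or give each container its own throwaway UID/GID range
    # (needs subuid/subgid ranges for the "containers" user on some distros)
    sudo podman run -d --name app2 --userns=auto some/image:latest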

  • I would trust my life to this genius math dog’s calculations

  • Look, I’m not perverted, I’m just Italian

  • 196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    Duality of humankind rule

  • This is shit. I checked the cadmium/lead numbers in the lab reports https://gmoscience.org/wp-content/uploads/2025/01/GSC-HeavyMetalsReports.pdf against the EU limits https://eur-lex.europa.eu/legal-content/en/TXT/?uri=CELEX%3A32023R0915 (mg/kg == ppm, ug/kg == ppb), and their heavy metal amounts are very low.

    For the aluminum, the EU recommends staying under about 1 mg per kg of body weight per week on average - but this EU report makes clear that ~10 mg/kg of aluminum in baked goods is the norm: https://efsa.onlinelibrary.wiley.com/doi/epdf/10.2903/j.efsa.2008.754 . So that’s fine too.

    I don’t care to go into the pesticides, but since the metal levels are good to fine yet presented as horrendous, I suspect the pesticide levels are overinflated as well.

  • 196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    bottomless (🥺) breadsticks rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    rule

  • I don’t have many books, and yet you have quite a few of them as well - clearly you have exquisite taste

  • I see - do you know of a way in Docker (or Podman) to bind to a specific network interface on the host? (So that a container could use a macvlan adapter on the host)

    Or are you more advocating for putting the Docker/Podman containers inside of a VM/LXC that has the macvlan adapter (or fancy incus bridge adapter) attached?

  • Selfhosted @lemmy.world
    glizzyguzzler @lemmy.blahaj.zone

    How to get a unique MAC/DHCP IP for a Docker/Podman container without MACVLAN?

    I have a bridge device set up with systemd, br0, that replaces my primary ethernet eth0. With the br0 bridge device, Incus is able to create containers/VMs that have unique MAC addresses that are then assigned IP addresses by my DHCP server. (sudo incus profile device add <profileName> eth0 nic nictype=bridged parent=br0) Additionally, the containers/VMs can directly contact the host, unlike with MACVLAN.

    With Docker, I can't see a way to get the same feature set with their options. I have MACVLAN working (rough sketch below), but it is even shoddier than the Incus implementation since it can't do DHCP without a poorly-maintained plugin. And the host cannot contact the container due to how MACVLAN works, which precludes running a container like a DNS server that the host itself would want to rely on.
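
    For reference, the working-but-shoddy MACVLAN setup looks roughly like this (subnet/addresses are examples; note the static --ip, since Docker's own IPAM hands out the address rather than my DHCP server):

    docker network create -d macvlan \
      --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
      -o parent=eth0 macnet0
    docker run -d --network macnet0 --ip 192.168.1.50 --mac-address 02:42:c0:a8:01:32 some/image:latest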

    Is there a way I've missed with the bridge driver to specify a specific parent device? Can I make another bridge device off of br0 and bind to that one host-like? Searching really fell apart when I got to

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    butts rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    tithe rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    praxis rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    who is Sandy Loam rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    pov rule: posting to the people's onehundruledninetysix

    Context is:

    • I was luckily banned from the fallen onehundredninetysix for vehemently rejecting the orchestrated hoodwinking
    • luckily banned because i'd have posted boston's sloppiest there like three times before it properly made it to the people's onehundredninetysix
    • I use the default web UI which is aggressively broken on my old phone like the pleb I am

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    🤤🤤rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    war never changes rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    this is a real brand sold in eurulpe with a real backstory

    Is the backstory a culinarified and gussied-up version of the 1969 movie Easy Rider, which had Jack Nicholson in the cast?

    Or is the backstory what a ghostless version of Ghost Rider starring Nick Cage would look like?

    The Maltese-ified run-on sentence Has?

    So many questions, like why is Nick The Easy Rider Pancake Mix in my good Prussian German market?

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    hiberuletion

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    rulep

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    choose your adventure rule

    196 @lemmy.blahaj.zone
    glizzyguzzler @lemmy.blahaj.zone

    drone rule