Posts 10 · Comments 279 · Joined 2 yr. ago
  • I use RSS feeds, bump version numbers when a new release is out, git commit/push and the CI does the rest (or I'll run the ansible playbook manually).

    I do check the release notes for breaking changes, and sometimes hold back updates for some time (days/weeks) when the release affects a "critical" feature, or when config tweaks are needed, and/or run these against a testing/staging environment first.
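
    In practice the "bump version number" part is just something like this (the version variable, role name and file names below are made-up examples):

    ```
    # bump the pinned version in the playbook's variables (hypothetical names)
    sed -i 's/^gitea_version: .*/gitea_version: "1.22.3"/' group_vars/all.yml
    git commit -am "gitea: bump to 1.22.3" && git push    # CI picks it up and runs the playbook
    # or run it manually against the target hosts:
    ansible-playbook -i inventory/hosts.yml site.yml --tags gitea
    ```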

  • Fail2ban is a Free/Open-Source program that parses logs and takes action based on their content. The most common use case is to detect authentication failures in logs and issue a firewall-level ban based on that. It uses regex filters to parse the logs, and policies called jails to determine which action to take (wait for more failures, run command xyz...). It's old, basic, customizable, does its job.
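
    A minimal jail looks something like this (values are just examples; the sshd filter ships with fail2ban):

    ```
    # /etc/fail2ban/jail.local
    [sshd]
    # use the sshd regex filter shipped in filter.d/sshd.conf
    enabled  = true
    filter   = sshd
    # ban after 5 failures within 10 minutes, for 1 hour
    maxretry = 5
    findtime = 10m
    bantime  = 1h
    ```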

    Crowdsec is a commercial service [1] with a free offering, and some Free/Open-Source components. The architecture is quite different [2]: it connects to Crowdsec's (the company) servers to crowd-source detections, their service establishes a "threat score" for each IP based on the detections they receive, and in exchange they provide [3] some of these threat feeds/blocklists back to their users. A separate crowdsec-bouncer process takes action based on your configuration.

    If you want to build your own private shared/global blocklist based on crowdsec detections, you'll need to set up a crowdsec API server and configure all your crowdsec instances to use it. If you want to do this with fail2ban you'll need to set up your own sync mechanism (there are multiple options; I use a cron job+script that pulls IPs from all fail2ban instances using fail2ban-client status, builds an ipset, and pushes it to all my servers). If you need crowdsourced blocklists, there are multiple free options ([4] can be used directly by ipset).
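
    Roughly, that sync job looks like this (not my exact script - hostnames, paths and the ipset name are made up, it assumes root SSH access to each server, and it handles IPv4 only for simplicity):

    ```
    #!/bin/sh
    set -eu
    HOSTS="web1 web2 mail1"
    OUT=/tmp/shared_banned.txt
    : > "$OUT"

    # collect banned IPs from every jail on every host
    for host in $HOSTS; do
        jails=$(ssh "$host" fail2ban-client status \
                | awk -F: '/Jail list/ {gsub(/,/, "", $2); print $2}')
        for jail in $jails; do
            ssh "$host" fail2ban-client status "$jail" \
                | awk -F: '/Banned IP list/ {print $2}' | tr -s ' \t' '\n'
        done
    done | awk 'NF' | sort -u >> "$OUT"

    # build an ipset restore file and push/apply it everywhere
    {
        echo "create shared_blocklist hash:ip"
        sed 's/^/add shared_blocklist /' "$OUT"
    } > /tmp/blocklist.restore

    for host in $HOSTS; do
        scp -q /tmp/blocklist.restore "$host:/tmp/" &&
        ssh "$host" "ipset -exist restore < /tmp/blocklist.restore"
    done
    # each server also needs a firewall rule matching the set, e.g.:
    # iptables -I INPUT -m set --match-set shared_blocklist src -j DROP
    ```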

    Both can be used for roughly the same purpose, but they are very different in how they work and in the commercial model (or lack thereof) behind the scenes.

  • Odoo major version upgrades are a pain in the ass. Wouldn't recommend.

  • Fail2ban unless you need the features that crowdsec provides. They are different tools with different purposes and different features.

  • Tested SMS Import/Export (installed from F-Droid), works fine.

  • > Ansible should only run to make changes to an existing system.

    No. Ansible is fine for provisioning and initial deployment.

  • Back up your git service/repositories to offline storage.

  • Right, I just spent 10 minutes looking for documentation that doesn't involve shitty expensive SaaS/PaaS, couldn't find anything. That disqualifies it for me as well, sorry for wasting your time.

    I'll keep watching this thread, relevant to my interests as well. At work we let ansible (in pull mode) handle the Linux fleet, for Android we don't have enough devices to bother, and we're looking at Jamf for Macs. But I'd love to find a FOSS solution too, our requirements are simple enough (as you said, install/remove stuff, change basic settings).

  • My prod and testing environments are 2 libvirt VMs on the same hypervisor. They run the same services, deployed and managed by ansible. The testing VM just gets less disk/CPU/RAM resources, and is powered off most of the time. Simple config changes? Straight to prod. New feature, risky change? Testing first.

  • Ionos works for me. I've used OVH and Scaleway as well, no problems.

  • https://fleetdm.com/ doesn't look bad, would this work?

  • BuyFromEU @feddit.org
    vegetaaaaaaa @lemmy.world
    Selfhosted @lemmy.world
    vegetaaaaaaa @lemmy.world

    Organic Maps migrates to Forgejo due to its GitHub account being blocked by Microsoft.

  • Data loss is not a problem specific to self-hosting.

    Whenever you administer a system that contains valuable data (a self-hosted network service/application, your personal computer, phone...), think about a backup and recovery strategy for common (and less common) data loss cases:

    1. you delete a valuable file by accident
    2. a bad actor deletes or encrypts the data (ransomware)
    3. the device gets stolen, or destroyed (hardware failure, power surge, fire, flood, hosting provider closing your account)
    4. anything you can think of

    For each of these scenarios, try to find a working backup/restore strategy. For me it goes like this:

    1. Automatic, daily local backups (anything on my server gets backed up once a day to a backups directory using rsnapshot). Note that file sync like Nextcloud won't protect you against this risk: if you delete a file on the Nextcloud client, it's also gone on the Nextcloud server (though there is a recycle bin). Local backups are quick and easy to restore after a simple mistake like this. They won't protect you against 2 and 3.
    2. Assuming an attacker gains access to your machine, they will also destroy or encrypt your local backups. My strategy against this is to pull a copy of the latest local backup, weekly, to a USB drive, through another computer, using rsync/rsnapshot (a rough sketch of this step is at the end of this comment). Then I unplug the USB drive, store it somewhere safe outside my home, and plug in a second USB drive. I rotate the drives every week (or every 2 weeks when I'm lazy - I have set up a notification to nag me to rotate the drive every Saturday, but I sometimes ignore it).
    3. The USB strategy also protects me against 3. If both my server and main computer burn down, the second drive is still out there, safely encrypted. It's the worst-case scenario; I'd probably spend quite some time setting up everything again (though most of the setup is automated), and at that point I'd have bigger problems like, you know, a burned-down house. But I'd still have my data.

    There are other strategies, tools, etc.; this one works for me. It's cheap (the USB drives are a one-time investment), and the only manual step is to rotate the drives every week or so.
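
    The weekly pull-to-USB step from point 2 boils down to something like this (device, hostname and paths are examples, not my actual setup; run from the secondary computer):

    ```
    #!/bin/sh
    set -eu
    MOUNTPOINT=/mnt/offline-backup

    # unlock and mount the LUKS-encrypted USB drive - adjust the device name
    cryptsetup open /dev/sdb1 offline-backup
    mount /dev/mapper/offline-backup "$MOUNTPOINT"

    # pull the latest daily snapshot from the server's rsnapshot directory
    # (-H preserves hard links, which rsnapshot relies on)
    rsync -aH --delete server.example.org:/var/backups/rsnapshot/daily.0/ \
          "$MOUNTPOINT/daily.0/"

    umount "$MOUNTPOINT"
    cryptsetup close offline-backup
    ```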

  • If you're interested, I wrote a quick HOWTO to migrate TT-RSS data from MySQL to Postgres a while ago. Ctrl+F search for Migrating tt-rss data to Postgresql from a MySQL-based installation here.

    I still use that same migrated database 4 years later.

  • upgrades:

    • distribution packages: unattended-upgrades
    • third-party software: subscribe to the release RSS feed (in tt-rss or rss2email), read release notes, bump the version number in my ansible playbook, run the playbook, done.

    vulnerabilities:

    • debsecan for distribution packages
    • trivy for third-party applications/libraries/OCI images
    • wazuh for larger (work) setups
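
    Typical invocations look like this (package/image names are placeholders; the trivy flags are from current releases):

    ```
    # CVEs affecting installed Debian packages, limited to those with a fix available
    debsecan --suite "$(lsb_release -cs)" --only-fixed

    # scan an OCI image, or a directory containing application dependencies
    trivy image nginx:1.27
    trivy fs --scanners vuln /srv/myapp
    ```
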
  • Sometimes you need to understand the basics first. The points I listed are sysadmin 101. If you don't understand these very basic concepts, there is no chance you will be able to keep any kind of server running, understand how it works, debug certificate problems and so on. Once you're comfortable with that? Sure, use something "simpler" (a.k.a. another abstraction layer), Caddy is nice. The same point was made in the past about Apache ("just use nginx, it's simpler"). Meanwhile I still use Apache, but if needed I'm able to configure any kind of web server, because it taught me the fundamentals.

    At some point we have to refuse the temptation to go the "easy" way when working with complex systems - IT and networking are complex. Just try the hard way first, read the docs, and if it's too complex/overwhelming/time-consuming, only then go for a more "noob-friendly" solution (I mean, we're on c/selfhosted, why not just buy a commercial NAS or use a hosted service instead? It's easier). I use firewalld, but I learned the basics of iptables a while ago. I don't build Apache from source when I need to upgrade, but I would know how to get 75% there - the docs would teach me the rest.

  • By default nginx will serve the contents of the /var/www/html directory (a.k.a. the document root) regardless of what domain is used to access it. So you could build your static site using the tool of your choice (hugo, sphinx, jekyll, ...), put your index.html and all other files directly under that directory, access your server at http://ip_address, and have your static site served like that.

    Step 2 is to automate the process of rebuilding your site and placing the files under the correct directory with the correct ownership and permissions. A basic shell script will do it.
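
    For example (assuming hugo; the paths are placeholders):

    ```
    #!/bin/sh
    set -eu
    cd /home/me/mysite
    hugo --minify                          # builds the site into ./public
    rsync -a --delete public/ /var/www/html/
    chown -R www-data:www-data /var/www/html
    find /var/www/html -type d -exec chmod 755 {} +
    find /var/www/html -type f -exec chmod 644 {} +
    ```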

    Step 3 is to point your domain (DNS record) at your server's public IP address and to forward public port 80 to your server's port 80. From there you will be able to access the site from the internet at http://mydomain.org/

    Step 4 is to configure nginx for proper virtualhost handling (that is, direct requests made for mydomain.org to your site under the /var/www/html/ directory, and all other requests like http://public_ip to a default, blank virtualhost. You may as well use an empty /var/www/html for the default site, and move your static site to a dedicated directory.) This is not a strict requirement, but it will help in case you need to host multiple sites, it is the best practice, and it is a requirement for the following step.
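
    A minimal example of that split, using the Debian-style sites-available layout (domain and paths are placeholders):

    ```
    # /etc/nginx/sites-available/mydomain.org
    # catch-all default: requests by bare IP or unknown Host header get the empty default root
    server {
        listen 80 default_server;
        server_name _;
        root /var/www/html;
    }

    # the actual site
    server {
        listen 80;
        server_name mydomain.org;
        root /var/www/mydomain.org;
        index index.html;
    }
    ```

    Enable it with ln -s /etc/nginx/sites-available/mydomain.org /etc/nginx/sites-enabled/, then nginx -t && systemctl reload nginx.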

    Step 5 is to set up SSL/TLS certificates to serve your site at https://my_domain (HTTPS). Nowadays this is mostly done using an automatic certificate generation service such as Let's Encrypt or any other ACME provider. certbot is the most well-known tool to do this (but not necessarily the simplest).
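
    With the nginx plugin that usually looks like this (Debian/Ubuntu package names shown):

    ```
    apt install certbot python3-certbot-nginx
    certbot --nginx -d mydomain.org
    certbot renew --dry-run    # renewal itself is handled by the packaged timer/cron job
    ```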

    Step 6 is what you should have done at step 1: harden your server, set up a firewall, fail2ban, SSH keys and anything you can find to make it harder for an attacker to gain write access to your server, or read access to places they shouldn't be able to read.
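
    A few example hardening steps (firewalld shown, adapt to whatever firewall you use; the SSH settings go in /etc/ssh/sshd_config):

    ```
    # firewall: only expose what you need
    firewall-cmd --permanent --add-service=http --add-service=https
    firewall-cmd --reload

    # SSH: key-based logins only
    #   PasswordAuthentication no
    #   PermitRootLogin prohibit-password
    systemctl reload sshd      # the unit may be named "ssh" on Debian/Ubuntu

    # ban repeated authentication failures
    apt install fail2ban
    ```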

    Step 7 is to destroy everything and do it again from scratch. You've documented or scripted all the steps, right?

    As for the question "how do I actually implement all this? Which config files, and what do I put in them?", the answer is the same old one: RTFM. Yes, even the boring nginx docs, manpages and 1990s Linux stuff. Each step will bring its own challenges and teach you a few concepts, one at a time. Reading guides can still be a good start for a quick and dirty setup, and will at least show you what can be done. The first time you do this, it can take a few days/weeks. After a few months of practice you will be able to do all that in less than 10 minutes.

  • I wrote my own, using plain HTML/CSS. Actually the final .html file gets templated by ansible depending on what's installed on the server, but you can easily pick just the parts you need from the j2 template.

  • Selfhosted @lemmy.world
    vegetaaaaaaa @lemmy.world
    etherarp.net Routeable Loopback Addresses

    Today we will learn about loopback addresses that can be reached from the outside via routing. This is useful for running services on a router. In a previous post, I talked about the loopback interface and how we can locally bind services to any address in the 127.0.0.0/8 range.

    Old article I found in my bookmarks. Although I didn't have a use for it, I thought it was interesting.
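
    For anyone curious, a common variant of the idea looks like this (addresses from the documentation ranges, not from the article, which covers the 127.0.0.0/8 case specifically):

    ```
    # on the router: bind a dedicated service address to the loopback interface
    ip addr add 192.0.2.1/32 dev lo
    # on other hosts (or via your routing protocol): route that address to the router
    ip route add 192.0.2.1/32 via 10.0.0.1   # 10.0.0.1 = the router's LAN address
    ```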

    Selfhosted @lemmy.world
    vegetaaaaaaa @lemmy.world

    Consider Security First | The Changelog

    Selfhosted @lemmy.world
    vegetaaaaaaa @lemmy.world

    A new home and license (AGPL) for Synapse and friends

    Synapse and Dendrite relicensed to AGPLv3

    Selfhosted @lemmy.world
    vegetaaaaaaa @lemmy.world

    awesome-selfhosted.net now has subpages for each platform/language

    Hi c/selfhosted,

    I just wanted to let you know that I have added a frequently requested feature to https://awesome-selfhosted.net - the ability to filter the list by programming language or deployment platform. For example:

    You can navigate between platforms/languages by clicking the relevant link in each software project's metadata. There is no main list of platforms, but if someone creates an issue for it, it can be looked into (please provide details on where/how you expect the platforms list to show up).

    A quick update on project news since the new website was released (https://lemmy.world/post/3622280): a lot of [curation work](https://github.com/awesome-selfhosted/awesome-selfhosted-data/pulls?q=is%3Apr+label%3

    Selfhosted @lemmy.world
    vegetaaaaaaa @lemmy.world

    Searx is no longer maintained

    Selfhosted @lemmy.world
    vegetaaaaaaa @lemmy.world
    scotthelme.co.uk Cryptographic Agility Part 1: Server Certificates

    We've encountered a lot of problems of our own making in the TLS/PKI ecosystem in recent years, and whilst we've got better at dealing with them and even avoiding them, there's still a way to go. Certificate Lifetime The focus of these blog posts will be on the maximum

    Blog post about TLS certificate lifetimes

    Selfhosted @lemmy.world
    vegetaaaaaaa @lemmy.world

    awesome-selfhosted.net - a list of Free Software network services and web applications which can be hosted on your own server(s)

    This is a new, improved version of https://github.com/awesome-selfhosted/awesome-selfhosted/

    Please check the release announcement for more details.

    Maintainer here, happy to answer questions.

    Linux Audio @waveform.social
    vegetaaaaaaa @lemmy.world

    awesome-linuxaudio · A list of software and resources for professional audio/video/live events production on Linux.