
  • HA doesn't need either of these, but if you want an SSL certificate (to run over HTTPS instead of plain HTTP), it is bound to a domain name, which must be public unless you want to enter the territory of adding your own custom certificate authority to each of your devices. That name is resolved by a public DNS. You asked how to use it when the internet is down: in that case a public DNS is not reachable, so you need your own DNS on the local network.

    The reverse proxy is useful when you have a bunch of web services and want to protect all of them with HTTPS. Instead of deploying a certificate to each of them, you add the HTTPS layer at the reverse proxy, and it queries the servers behind it in plain HTTP. The reverse proxy also has the benefit of making subdomains easier to handle: instead of distinguishing the different services by port number, you can give each one a name like https://ha.my.domain/ and https://feedreader.my.domain/
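    This setup can be sketched with a minimal Caddyfile. The hostnames, upstream IPs, and ports below are placeholders, not anything from the original post:

    ```
    # Hypothetical Caddyfile: TLS terminates at the proxy,
    # backends are queried in plain HTTP on the LAN.
    ha.my.domain {
        reverse_proxy 192.168.1.10:8123
    }

    feedreader.my.domain {
        reverse_proxy 192.168.1.11:8080
    }
    ```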

    If you just have Home Assistant and don't care about HTTPS, the easiest option is local resolution: modern OSes advertise the name of the device on the network, and it can be resolved on the .local domain. But if you configured HTTPS to use https://name.duckdns.org/ you'll see an error when you try to use https://name.local/, because your browser sees a mismatch between the name in the certificate and the name you are trying to connect to. You can always ignore this error and move on, but that mostly defeats the point of HTTPS.

  • It makes sense to handle certificate renewal where your reverse proxy is, simply because certificates are easier to install that way. With a single Home Assistant instance, let it handle renewal itself. The day you start hosting more stuff and put all of it behind a single reverse proxy (Caddy or nginx are the most popular options), you can move certificate handling to the machine running the reverse proxy.

    To make your Home Assistant reachable even when the internet is down, you just need a local DNS that resolves yourdomain.duckdns.org to your local IP. This is usually easiest to configure on the router, but many stock firmwares don't allow it. Another option is to install a DNS server (Pi-hole is the most famous; I personally use blocky) somewhere and configure your router to advertise that DNS instead of its own.
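    With dnsmasq (which Pi-hole uses under the hood), the override is a one-liner. The hostname and IP here are placeholders:

    ```
    # Hypothetical dnsmasq snippet: answer queries for the public name
    # with the Home Assistant box's local address, so HTTPS keeps
    # working even when the internet (and public DNS) is down.
    address=/yourdomain.duckdns.org/192.168.1.10
    ```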

  • I use https://mycorrhiza.wiki/. It is really lightweight and stores data in a git repo, so it is terribly easy to export and back up. The only drawback is that it uses its own markdown dialect.

  • It was a huge codebase in C# with a single file in VB.net with a comment at the top: "copied from codinghorrors.com/…". I never dared to try understanding what that file was supposed to do, or why nobody had bothered converting it to C#.

  • I have an initramfs script which knows half of the decryption key and fetches the other half from the internet.

    My threat model is: I want to be able to dispose of my drives safely, and if someone steals my NAS they need to connect it to a network similar to mine (same gateway and subnet) before I delete the second half of the key, or they can't get my data.

  • How frequently do you send these updates? Most dynDNS providers rate-limit the updates you can send, so it is possible that you send a bunch of useless updates when the IP didn't change, and the one update that actually matters gets discarded because you hit the limit.

    Do you log your script's errors somewhere? Are you sure the IP changes that frequently?

    I know at least 3 European fiber providers that offer static IPs. For always-on broadband connections, IP changes should be pretty rare.
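    The rate-limit problem goes away if the script only contacts the provider when the IP actually changed. A minimal sketch of that check, assuming a cache file path of my choosing (the provider API call itself is left out):

    ```python
    # Rate-limit-friendly dynDNS helper: compare the current public IP
    # against the last one we sent, and only report "update needed"
    # when it differs. The cache path is a placeholder.
    from pathlib import Path

    CACHE = Path("/var/cache/dyndns-last-ip")

    def should_update(current_ip: str, cache: Path = CACHE) -> bool:
        """Return True only when the IP differs from the cached one."""
        last = cache.read_text().strip() if cache.exists() else None
        if current_ip == last:
            return False          # no change: skip the API call entirely
        cache.write_text(current_ip)  # remember what we are about to send
        return True
    ```

    Run from cron, this sends at most one real update per IP change instead of one per invocation.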

  • It is not just a matter of how many ports are open; it is about the attack surface. You can have only port 443 open behind the best reverse proxy, but if behind it there is a crappy app that allows remote code execution, you are fucked no matter what.

    Each open port exposes one or more services on the internet. You have to decide how much you trust each of those services to be secure, and how much you trust your password.

    While we can agree that SSH is a very safe service, if you allow password login for root and the password is "root", the first scanner that passes by will get control of your server.

    As others mentioned, having everything behind a VPN is the best way to reduce the attack surface: VPN software is usually written with security in mind, so you reduce the risk of zero-day attacks. Also, many VPNs use certificates to authenticate the user, making guessed access virtually impossible.

  • Ikea and Lidl zigbee devices work well for me and are reasonably cheap. Ikea's look a little better, but I have only one because it is a pretty recent product.

  • I have automated it with a small initramfs script which holds half of the password and downloads the other half from the internet. My threat model is protection from a random thief: they would have to connect it to a network similar to mine (same netmask and gateway) and boot it before I can remove the half-key from the internet.

    Some hardening still on my TODO list: allow fetching the half-key only from my home IP, and add some sort of alert for when it is fetched.
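    The two-half scheme can be sketched like this. The local half, the fetch URL, and the SHA-256 derivation are illustrative assumptions, not the author's actual script:

    ```python
    # Two-half unlock sketch: one half ships in the initramfs, the
    # other is fetched over the network at boot. Deleting the remote
    # half makes a stolen, powered-off machine undecryptable.
    import hashlib

    LOCAL_HALF = b"half-kept-in-initramfs"  # placeholder value

    def derive_key(local_half: bytes, remote_half: bytes) -> bytes:
        """Combine both halves into the actual passphrase material."""
        return hashlib.sha256(local_half + remote_half).digest()

    # At boot the initramfs would do something like:
    #   remote_half = urllib.request.urlopen("https://example.com/half").read()
    #   passphrase = derive_key(LOCAL_HALF, remote_half)
    ```

    Hashing the concatenation (rather than concatenating directly) means neither half alone leaks any structure of the final passphrase.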

  • The really important things (essentially only photos) are backed up on a different USB drive and remotely on Backblaze. Around one terabyte costs $2-3 per month (you pay per operation, so it also depends on how frequently you trigger the backup). You want to search for "cold storage", which is the name for cloud storage that is infrequently accessed (in other words, more storage than bandwidth). As a bonus, if you use rclone you can encrypt your data before sending it to the cloud.

  • Backup and encryption. Encryption prevents the thief from seeing my data; backup allows me to build a new server. Furthermore, as others pointed out, I don't expect a common thief to see a lot of value in a small black box on top of a shelf.

  • I remember reading a post on Mastodon explaining that no motherboard validates the secure boot keys' expiration dates, because otherwise it wouldn't boot the first time the BIOS battery ran empty and the internal clock got reset. The post was well written and cited some sources, but I didn't try to verify its assertions.

  • I'm not familiar with this setup. But do you want the server to boot as soon as it receives any packet addressed to its IP?

  • You need to send the WOL packet to the broadcast address of your network, not to the machine's IP address. This way all the machines on the network will receive it, including the ones that have been powered off for a while.
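    A minimal sender, assuming a typical /24 broadcast address (the MAC and broadcast address are placeholders):

    ```python
    # Wake-on-LAN: the magic packet is 6 bytes of 0xFF followed by the
    # target MAC repeated 16 times. It is sent to the broadcast address
    # because a powered-off machine has no IP — only its NIC, listening
    # for this exact pattern at layer 2.
    import socket

    def magic_packet(mac: str) -> bytes:
        mac_bytes = bytes.fromhex(mac.replace(":", ""))
        return b"\xff" * 6 + mac_bytes * 16

    def send_wol(mac: str, broadcast: str = "192.168.1.255", port: int = 9) -> None:
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
            s.sendto(magic_packet(mac), (broadcast, port))
    ```

    Usage would be something like `send_wol("aa:bb:cc:dd:ee:ff")` from any machine on the same LAN.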

  • I remember searching for a similar workaround in the past. I'm not sure parallel will work, because if I recall correctly the whole automation is blocked on error. A workaround I found suggested on the HA website (but never tried) was to put the command that may error in a script and run the script as "fire and forget" from the automation: if the automation doesn't wait for the script to finish, it won't detect the error either. But, as others pointed out, try to make the zigbee network more stable first.

  • Also, since zigbee is a mesh network, the fix could be as easy as adding a smart plug halfway between the controller and the light. Every zigbee device not running on battery works as a repeater too

  • Just use the directory listing of your favourite web server: you get an HTTP read-only view of a directory and all of its content. If you self-host, you likely already have a reverse proxy, so it is just a matter of updating its configuration. I'm sure it is supported by Apache, nginx, lighttpd, and Caddy, but I would expect every web server to support it. Caddy is the easiest to use if you need to start from scratch.
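    In Caddy this is two directives. The domain and path below are placeholders:

    ```
    # Hypothetical Caddyfile: serve a directory tree read-only over
    # HTTP, with a browsable listing generated for each folder.
    files.my.domain {
        root * /srv/shared
        file_server browse
    }
    ```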

  • I use Filestash. I like it because it can connect to so many backends. In my setup it uses Samba behind the scenes, so all the share permissions are in a single configuration and I don't have to worry about a separate set of user credentials.

  • I'd say that the most important takeaway of this approach is to stop all the containers before the backup. Some applications (like databases) are extremely sensitive to data corruption: if you simply `cp` while they are running, you may copy files of the same program at different points in time and get a corrupted backup. It is also worth mentioning that a backup is only good if you verify that you can restore it. There are so many issues you can discover the first time you restore a backup; you want to be sure you discover them while you still have the original data.
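    The stop-copy-start ordering can be sketched like this; the compose commands and paths are placeholders, and the runner is injectable so the sequence can be checked without Docker installed:

    ```python
    # Stop containers, copy the data directory, then restart — so every
    # file in the backup comes from the same point in time. The
    # try/finally guarantees the services come back up even if the
    # copy step fails.
    import subprocess

    def backup(run=subprocess.run) -> None:
        run(["docker", "compose", "stop"], check=True)   # quiesce databases
        try:
            run(["rsync", "-a", "/srv/appdata/", "/mnt/backup/appdata/"],
                check=True)                              # consistent copy
        finally:
            run(["docker", "compose", "start"], check=True)  # always restart
    ```

    The try/finally is the design point: a failed rsync should never leave your services down.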