trilobite @lemmy.ml
Posts 16 · Comments 62 · Joined 1 yr. ago
  • Ah yes, thanks for raising it. I had forgotten. So we have three FOSS contenders:

    • Frigate
    • Motioneye
    • Zoneminder

    I've only tried ME, about 6 years ago, with an RPi camera. It was alright, but I'm sure things have moved on.

  • Are you able to expand? What exactly did not impress you? Motioneye (which I briefly used 3-4 years ago with a dodgy camera) and Frigate seem to be the ones mostly used as OSS solutions and hooked up to either OpenHab or HA.

  • homeassistant @lemmy.world
    trilobite @lemmy.ml

    A good compromise between cost, integration and reliability

    Starting to think about setting up my home camera surveillance system, and I want to avoid making poor decisions, so I need to tap into community experience ... you guys :-)

    Frigate seems like the direction to take if I want to strike a good balance between cost, integration and reliability. Hardware is a key issue. I could install HA and Frigate on a VM running on my Truenas server, but I feel I'd be putting all my eggs in one basket, so I'm thinking of a dedicated machine for HA + Frigate. The Frigate website advertises these little beasts with a Coral PCIe unit. That's about 300€ if I'm lucky, but I could live with that.

    For cameras I definitely do not want cheap Chinese call-home stuff. The Lorytas advertised on the Frigate website seem to be difficult to get hold of in Europe (aren't these Chinese too, btw?), so I was wondering what other people's experience is. Cameras need to be comparable to these Lorytas in terms of

  • As I'm thinking of getting HA to look after my IP cameras with Frigate, I'm thinking of a mini PC with a PCIe Coral device installed. Would this make sense, and are there others doing this, i.e. HA on a mini PC with a Coral PCIe installed and Frigate on top?
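
    For reference, below is roughly how Frigate tends to be run in Docker with a PCIe Coral passed through. Treat it as a sketch rather than a recipe: the paths and ports are assumptions to adapt, and the detector snippet assumes a current Frigate config layout.

      # the PCIe Coral needs the gasket/apex driver on the host; it then appears as /dev/apex_0
      ls /dev/apex_0

      # run Frigate and pass the TPU through (example paths and ports)
      docker run -d --name frigate --restart unless-stopped \
        --shm-size=256m \
        --device /dev/apex_0:/dev/apex_0 \
        -v /opt/frigate/config:/config \
        -v /opt/frigate/media:/media/frigate \
        -p 5000:5000 -p 8554:8554 \
        ghcr.io/blakeblackshear/frigate:stable

      # and in config.yml point the detector at the PCIe TPU:
      #   detectors:
      #     coral:
      #       type: edgetpu
      #       device: pci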

  • But setting up a VPN on a VPS is not really going to do much for privacy, is it? It wouldn't take much to work out who is renting the VPS, and the VPS provider has no incentive to hold back any info if they were issued a search warrant.

    Feels like it's becoming more and more challenging to live on the Internet without leaving breadcrumbs all over the place.

  • Privacy @lemmy.ml
    trilobite @lemmy.ml

    Why is privacy a luxury

    I've been noticing over the last few years that it is becoming more and more difficult to log in to accounts through my VPN, whether a bank account, a membership account, or sometimes even just browsing shopping websites. Is this just my impression, or is there something going on whereby services keep lists of VPN servers that are then sold to banks, so these parties can keep out anyone trying to log in via a VPN? It feels like the general consensus is "VPN = malicious" rather than "VPN = this guy is just trying to protect his privacy". I use AirVPN but was wondering if there are VPN services that are more sophisticated and try to circumvent these VPN server blocks. It's becoming a real pain, to the point where I'm wondering what the point of paying for a VPN is if I find myself having to log in through my ISP IP rather than my VPN IP.

  • I think some form of monetization is going to have to come to the fediverse at some point. Money is what drives 95% of the internet. The point is making it sustainable and ensuring the money goes to the creators rather than a couple of greedy pigs who own the massive corporations that have literally taken over the Internet. The internet has become what we see in geopolitics today: lots of authoritarian, sovereign, non-democratic countries. The Internet was born as an expression of democracy. In liberal democracies, you have (sustainable) capitalism that does not leave anyone behind (social welfare). The fediverse is our hope to take the internet back for its users, but extremist ideas (no monetization at all) will make this a missed opportunity. I hope someone is thinking about this.

  • Self Hosted - Self-hosting your services. @lemmy.ml
    trilobite @lemmy.ml

    [QUESTION] Tools to manage homelab

    I've noticed that with time my homelab is growing, and with it the complexity and the time required to maintain it. A big challenge is keeping on top of firmware and key component updates (router and NAS, running pfsense and Truenas Scale respectively). What are people doing to ensure they keep on top of their homelab?

    Self Hosted - Self-hosting your services. @lemmy.ml
    trilobite @lemmy.ml

    LMS to Lyrion migration in docker

    I have LMS installed on a VM at its latest version (8.5.3). I hadn't updated for a while and so decided to do so. To my surprise, I learnt that Logitech Media Server is now called Lyrion Music Server. Has anyone migrated from one to the other in a non-painful way? I did a quick search: there are some guides for Synology and QNAP servers, but I was after a more generic guide for pure docker. Anyone come across any?
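
    For anyone in the same spot: the rename looks mostly cosmetic on the Docker side, and the community image keeps your library and settings as long as it is pointed at the same config volume. A rough sketch (the image name is the one announced for the rename and the paths are examples, so double-check both on Docker Hub):

      # stop and remove the old LMS container, keeping its config volume on disk
      docker stop lms && docker rm lms

      # start the renamed server against the same volumes
      docker run -d --name lyrion --restart unless-stopped \
        --network host \
        -v /srv/lms/config:/config \
        -v /srv/music:/music:ro \
        lmscommunity/lyrionmusicserver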

  • Sounds like a great journey you have taken. Well done. I did more or less the same transition, except for the iPhone bit, and using the same apps you use. The difference is that it took me 10 years to complete the journey in full, but that reflects my age more than anything else. Gen X.

  • Self Hosted - Self-hosting your services. @lemmy.ml
    trilobite @lemmy.ml

    pfBlockerNG requires a login for ASN lookup? What about privacy?

    I recently updated pfBlockerNG on my pfsense box, and after logging in several days later I have loads of messages saying: "pfBlockerNG ASN - To utilize the ASN functionality, you must register for a free IPinfo Account. Review IP Tab for more information." Once I register, are they going to start collecting data every time pfSense queries their ASN database?

  • Let me get this straight. You're saying that on the laptop I have two instances, one for me and one for my wife, and they both sync to the NAS instance? I have Syncthing installed via apt on my Debian laptop. How do you get two instances going?
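
    In case it helps, on Debian the Syncthing package ships a per-user systemd template, so each Linux account runs its own instance with its own config and device ID; something like this (the usernames are placeholders, and the second instance may need a different GUI address, e.g. 127.0.0.1:8385 instead of the default 8384):

      # one instance per Linux user, started at boot
      sudo systemctl enable --now syncthing@me.service
      sudo systemctl enable --now syncthing@wife.service

      # each instance then pairs with the NAS instance using its own device ID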

  • Selfhosted @lemmy.world
    trilobite @lemmy.ml

    Should I go back to Nextcloud?

    I moved from Nextcloud to Syncthing some months back. I had Nextcloud as an app on Truenas Scale. Several times after app updates, Nextcloud would stop running and I would have to set everything up again.

    Syncthing is OK but 2 things annoy me:

    A. I get huge numbers of conflict files generated that use up space

    B. File sharing with family is complicated. I tried to set up a shared account that everyone uses, but as Syncthing works with device IDs, it refuses two accounts from the same machine. I share my Linux laptop with my wife; we each have our own Linux account. I've got Syncthing running, but I can't even get my wife's account to sync because I get errors that the device ID already exists.

    I don't want to go back to Nextcloud just for file sharing. I generally don't like the idea of relying on one service for multiple objectives (calendar, file sharing, etc.).

    Is there a way to get syncthing to do what I want?
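
    On point A, at least the conflict copies follow a predictable naming pattern, so they are easy to find and prune by hand or from cron; a quick sketch (the folder path is an example):

      # list sync-conflict copies with their sizes
      find ~/Sync -name '*.sync-conflict-*' -printf '%s\t%p\n' | sort -n

      # once you're sure nothing important is in them, delete
      find ~/Sync -name '*.sync-conflict-*' -delete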

  • I think Hoarder must be similar to Wallabag, which I use solely for preserving content I like. I use Floccus plus WebDAV for saving bookmarks because Floccus is available for Android too, so bookmarks are the same on all my devices. Sounds like Linkwarden does both, but I can't find an Android app, which is a killer for me. Wallabag is also available for Android, where I do most of my reading.

  • Selfhosted @lemmy.world
    trilobite @lemmy.ml

    Installing Proxmox on a DELL Optiplex 3020

    I've been running VMs on an old DELL T110ii but realise that I've loaded it a bit too much, so I want to leave it doing the job of NAS with Truenas Scale and move all my VMs to Proxmox. The idea is that I would have two Optiplexes providing redundancy. Truenas Scale has got me used to ZFS, but that clearly may not be an option with the Optiplex 3020, as ZFS is pointless with one SSD. Has anyone got a similar arrangement, with VMs and containers running on these simple desktop machines? How are you managing high availability and resilience?
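
    On the HA question, clustering the two Optiplexes in Proxmox is only a couple of commands; the catch with two nodes is quorum, which is usually solved with a small external QDevice (a Pi or any always-on box running corosync-qnetd). A sketch with placeholder names and IPs:

      # on the first Optiplex: create the cluster
      pvecm create homelab

      # on the second Optiplex: join it, pointing at the first node's IP
      pvecm add 192.168.0.11

      # add an external QDevice so the cluster keeps quorum when one node is off
      # (corosync-qdevice on the nodes, corosync-qnetd on the third box)
      pvecm qdevice setup 192.168.0.12

      # verify
      pvecm status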

  • make me shake ... brrr

    I'm going to try and see if I can get a VM running on the second Truenas server using the replicated dataset. I only use the second machine to duplicate datasets in case the first machine fails and I have to rebuild it.
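
    One gotcha: the replication target is normally kept read-only, so the usual trick is to clone the latest replicated snapshot into a writable dataset and point the test VM at that. A sketch with made-up pool, dataset and snapshot names:

      # see which snapshots replication has created on the second box
      zfs list -t snapshot -r tank2/vm-dataset

      # clone the most recent one into a writable copy for the test boot
      zfs clone tank2/vm-dataset@auto-2024-01-01_00-00 tank2/vm-test

      # destroy the clone when done (it keeps a reference to its source snapshot)
      zfs destroy tank2/vm-test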

  • Selfhosted @lemmy.world
    trilobite @lemmy.ml

    ZFS snapshots of VM Truenas datasets - am I safe?

    Hi folks, I've got a VM that is running my Firefly III instance and Paperless instance as containers. A lot of work and time goes into managing these tools and I want to make sure I don't lose them. This is my setup:

    Truenas Scale machine 1 -> VM1 - Docker containers. The VM sits on its own dataset in Truenas.

    I replicate the dataset to Truenas Scale machine 2 once a week, and that machine is only powered on on Sundays to save power.

    I rsync the dataset to a third machine with a hard disk that I store offsite.

    I recognize that I could lose up to one week of work, but that is nothing compared to the human hours spent building those databases from scratch.

    Apart from snapshotting and rsyncing every day, what else could I do to make this more resilient without increasing CAPEX and OPEX costs?
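
    For what it's worth, the snapshot-plus-incremental-send loop is cheap enough to run daily, which is the easiest way to shrink that one-week window, and dumping the databases inside the VM before the snapshot makes the copy application-consistent rather than just crash-consistent. A sketch with placeholder pool, dataset, host and container names:

      # inside the VM: dump the databases first (container/user/db names are examples)
      docker exec paperless-db pg_dump -U paperless paperless > /backups/paperless.sql

      # on Truenas 1: take a daily snapshot of the VM dataset (copy-on-write, cheap)
      zfs snapshot tank/vm-dataset@daily-$(date +%F)

      # push only the delta since the previous snapshot to the second box
      zfs send -i tank/vm-dataset@daily-2024-01-01 tank/vm-dataset@daily-2024-01-02 \
        | ssh truenas2 zfs recv -F tank2/vm-dataset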

  • I've been asking myself the same question for a while. The container-inside-a-VM approach is my setup too. It feels like the container in the VM in the OS is a bit of an onion approach, which has pros and cons. If you are on low-powered hardware, I suspect having too many onion layers just eats up the little resources you have. On the other hand, as Scott@lem.free.as suggests, it's easier to run, update and generally maintain the system. It would be good to have other opinions on this. Note that not everyone with a homelab has powerful hardware. I'm still using two T110s (32GB ECC RAM) that are now quite dated but are sufficient for my uses. They have Truenas Scale installed and one VM running 6 containers. It's not fast, but it's reliable.

  • TrueNAS @programming.dev
    trilobite @lemmy.ml

    Sharing NFS share that belongs to root

    Hi, I have Immich installed as an app on TrueNAS-SCALE-22.12.4.2. I'm trying to share the Immich folder via NFS to my client so that I can rsync it across for backup purposes. While I don't seem to have any problems mounting the dataset on the client (it's not showing any errors), the folder shows up empty. The Immich dataset belongs to root on Truenas and permissions are set as u(rwx), g(r-x), o(r-x). I thought that because "other" has read permission on the dataset, I should be able to at least read the contents of the dataset folder. This is all I need for backup purposes. Any thoughts? Clearly I can't start messing around with permissions or changing the owner of the Immich dataset, or I risk Immich not working anymore.
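
    A couple of quick checks narrow this down; in particular, with ZFS the app data often lives in child datasets, which are separate mountpoints and do not show up through an NFS export of the parent unless they are exported (or crossed into) explicitly. A sketch with placeholder paths and IPs:

      # on the client: confirm what the server actually exports
      showmount -e 192.168.0.2

      # mount it and look; an empty directory usually means the export points at the
      # parent dataset while the data sits in nested child datasets below it
      sudo mount -t nfs 192.168.0.2:/mnt/tank/immich /mnt/immich
      ls -la /mnt/immich

      # on the TrueNAS box: check whether the Immich data is in nested datasets
      zfs list -r tank/immich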

    Privacy @lemmy.ml
    trilobite @lemmy.ml

    Polycentric and Harbour

    Hi,

    anyone come across and used the Polycentric + Harbour option for managing digital ID? What do you think about it? Does it really manage IDs in a private and secure way? I came across FLUTO, who seem to be great promoters of "software for the benefit of humanity", but you always wonder how much you can trust these third parties ... what about when they decide to sell your data?

    Selfhosted @lemmy.world
    trilobite @lemmy.ml

    Connecting pfsense directly to 1GBit ZTE ONT

    Hi, I have my TIM (Italy) ONT installed (it's a ZXHN F6005, which I think is also installed by OpenFibre in the UK). This is connected to a TIM router and then to a mini PC that has pfsense installed. I believe the ZTE ONT can be connected directly to the WAN port of the pfSense machine by setting PPPoE on the WAN interface. That way I can drop the intermediate TIM router, which is simply sucking up energy. I tried setting up a PPPoE connection on the pfsense machine by giving it the user ID and password, but the connection never comes up. Strangely, even when leaving the WAN interface set to PPPoE on pfsense and reconnecting it to the intermediate TIM router, the connection comes up (i.e. it doesn't seem to be a requirement).

    Any thoughts?
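
    One thing worth checking before blaming the ONT: several Italian FTTH profiles expect the PPPoE session inside a tagged VLAN (835 is the tag commonly reported for TIM), so plain PPPoE on the untagged WAN never finds a concentrator. In pfSense that means creating the VLAN on the WAN NIC and attaching the PPPoE interface to it. You can sanity-check from a Linux laptop plugged straight into the ONT; the interface name and VLAN ID below are assumptions:

      # tag a VLAN on the NIC connected to the ONT
      sudo ip link add link enp3s0 name enp3s0.835 type vlan id 835
      sudo ip link set enp3s0.835 up

      # look for PPPoE access concentrators on that VLAN (pppoe-discovery is in rp-pppoe);
      # if one answers here, the same VLAN + PPPoE combination should come up in pfSense
      sudo pppoe-discovery -I enp3s0.835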

    Selfhosted @lemmy.world
    trilobite @lemmy.ml

    Now I have 1 GBit fiber and can't benefit :-(

    My old setup was:

    VDSL modem -> pfsense on mini J1900 Celeron (2 GHz) -> CISCO SG300 10MPP switch -> Ruckus R310 wifi -> Laptop

    Current setup

    Fiber modem -> pfsense on mini J1900 Celeron (2 GHz) -> CISCO SG300 10MPP switch -> Ruckus R310 wifi -> Laptop

    Today I got my 1GBit fiber installed (a big deal for those like me living in rural areas), only to discover that my current network setup is not allowing me to benefit from it.

    I was on VDSL copper before, probably in the region of 50-60 MBit/s with the above setup. Even when removing the wifi bottleneck and linking the laptop directly to the switch with Cat5 UTP cable, I'm not getting major improvements.

    When I got the fiber installed this morning I was disappointed to see only a marginal gain, running at 80 MBit/s (c. +30 MBit). So I decided to connect the laptop via LAN cable directly to the modem. I got a startling 900 MBit/s. So, somewhere along my network I have bottlenecks.
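
    The usual way to pin down the slow hop is to run iperf3 between adjacent devices, one link at a time (a J1900 doing PPPoE plus firewall rules is a common suspect at gigabit speeds). A sketch with placeholder IPs:

      # on the machine behind the suspect hop (e.g. a box on the LAN side of pfsense)
      iperf3 -s

      # from the laptop on the other side of that hop
      iperf3 -c 192.168.0.50        # forward direction
      iperf3 -c 192.168.0.50 -R     # reverse direction

      # repeat across each link (laptop-switch, switch-pfsense, pfsense-modem)
      # until the number drops; that link is the bottleneck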

    The first one I tested was my little pfsense machine. I installed the

    Selfhosted @lemmy.world
    trilobite @lemmy.ml

    Getting Radicale to work system wide

    Hi folks,

    I installed Radicale earlier today as a user, as described on the homepage, using $ python3 -m pip install --upgrade radicale.

    I initially created local storage and ran it as a normal user with $ python3 -m radicale --storage-filesystem-folder=~/.var/lib/radicale/collections. I was able to see the web page when I typed in the server address (a VM on Truenas), http://192.168.0.2:5234, so the install went well. But I wanted to set it up system-wide so that I can have multiple users logging in (family members).

    So I did the following:

    • $ sudo useradd --system --user-group --home-dir / --shell /sbin/nologin radicale
    • $ sudo mkdir -p /var/lib/radicale/collections && sudo chown -R radicale:radicale /var/lib/radicale/collections
    • $ sudo mkdir -p /etc/radicale && sudo chown -R radicale:radicale /etc/radicale

    Then I created the config file which looks like:

    [server]
    # Bind all addresses
    hosts = 192.168.0.2:5234, [::]:5234
    max_connections = 10
    # 100 MB
    max_cont
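
    (The config gets cut off here.) The step that usually comes after it is a system service so Radicale starts at boot under the dedicated user. A sketch along the lines of the unit in the Radicale docs; note it assumes the radicale module is importable by the radicale system user (i.e. installed system-wide or into a venv referenced in ExecStart), which a per-user pip install is not. Create /etc/systemd/system/radicale.service with:

      [Unit]
      Description=Radicale CalDAV/CardDAV server
      After=network.target

      [Service]
      User=radicale
      Group=radicale
      ExecStart=/usr/bin/env python3 -m radicale
      Restart=on-failure

      [Install]
      WantedBy=multi-user.target

    then enable it:

      $ sudo systemctl daemon-reload
      $ sudo systemctl enable --now radicale
      $ sudo systemctl status radicale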
      
    TrueNAS @programming.dev
    trilobite @lemmy.ml

    I'm trying to get my head around this. If I have a media folder of videos that I mount via NFS so that I can access it from my laptop, my understanding is that I need to disable host path validation ("Configuring Host Path Validation") if I then also have an app like Jellyfin reading that folder to serve videos to my family. Is this correct, or am I misunderstanding?

    The alternative is that I would need two different datasets, one for the NFS share and one for Jellyfin, but this defeats the purpose, plus it's an incredible waste of space. Please tell me I have it all wrong ...

    Selfhosted @lemmy.world
    trilobite @lemmy.ml

    Syncthing ... where are the users?

    Just installed Syncthing on my Scale server. It looks like it doesn't have users but rather folder IDs that are then used to sync devices. One of the cool features of Nextcloud is the ability to share files with other users. Can this be done with Syncthing?

    Selfhosted @lemmy.world
    trilobite @lemmy.ml

    Is Radicale the way forward?

    Just thinking of ditching Nextcloud; it's just too much for my family use. All I need is CardDAV, CalDAV and file sync. I have a Debian VM running on Scale and was thinking of using the Cloudron docker install. Is this the way others are installing it on VMs?