A recent storm damaged the siding of my house, so I'll have to have it replaced. It occurred to me to run some network cabling behind the new siding (and likely new insulation) while it's all pulled off. If I do so, should I run standard riser cabling or outdoor-rated cabling?
Obviously the ideal solution is standard in-wall cabling, but I don't have the appetite for such a project: half the house was built in the 19th century, and I know the undertaking would involve quite a few surprises that I almost certainly lack the know-how to handle. I'll also probably be moving in a couple of years, so I don't want to invest too much time or money in the endeavor.
Alternatively, is there a good type of conduit I could run instead?
I’m looking to switch. I like to tinker and try new things.
Replies I’d like to see:
Tell me what OS you’re using! What do you like about it? What don’t you like? What is your primary use for it?
I don’t just want recommendations for my use cases, though I’ll list them below. I want to learn what’s out there and what’s possible.
My Use Cases:
I’m currently using QTS that was preloaded on a QNAP NAS that I got used.
My main goal is to do more self-hosting. I want to be as independent and self-sustainable as possible. Like many of us, I’m burnt out on being treated as a product by big tech.
I’d also like to try to set up a game server for my Steam library.
Really, I just like to tinker. I like when things break or don’t go according to plan. I like to research a problem and fix it!
So I need help with a split DNS approach, or a direct fix. Normally, when running my tunnel with the simplest configuration, I get this error:
Couldn't resolve SRV record &{region1.v2.argotunnel.com. 7844 1 1}: lookup region1.v2.argotunnel.com. on 10.43.0.10:53: read udp 172.16.91.156:54443->10.43.0.10:53: i/o timeout
When I tried changing the nameserver to Cloudflare to make the record resolvable, I get this error instead:
2025-04-07T10:06:38Z ERR error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: dial tcp: lookup traefik on 1.1.1.1:53: no such host" connIndex=3 event=1 ingressRule=3 originService=http://traefik/
2025-04-07T10:06:38Z ERR Request failed error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: dial tcp: lookup traefik on 1.1.1.1:53: no such host" connIndex=3 dest=https://nextcloud.spidershomelab.xyz/index.php/204 event=0 ip=198.41.200.23
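One split-DNS sketch, assuming this is k3s (the 10.43.0.10 resolver suggests its default CoreDNS service) and that k3s's optional `coredns-custom` ConfigMap import is available: forward only the Argo tunnel domain to public resolvers, so internal names like `traefik` keep resolving through cluster DNS. The ConfigMap name and key are assumptions based on the k3s convention, not something confirmed by the logs above.

```yaml
# Hypothetical coredns-custom ConfigMap for k3s; verify your distribution
# actually imports *.server files from it before relying on this.
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  argotunnel.server: |
    # Only Cloudflare tunnel lookups go upstream; everything else,
    # including cluster-internal service names, stays on cluster DNS.
    argotunnel.com:53 {
        forward . 1.1.1.1 1.0.0.1
    }
```

With this in place, `cloudflared` can keep using the cluster resolver for its ingress targets while the SRV lookup for `region1.v2.argotunnel.com` is answered by 1.1.1.1.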
So I have an issue. Whenever I access my services via a path like 192.168.1.22/wordpress, the /wordpress prefix gets forwarded to the WordPress backend as-is, leading to a "page not found". When I strip the initial prefix, I can reach the base page, but as soon as WordPress requests CSS or other assets, it looks for them at 192.168.1.22/assets, which won't work. Basically, I need a way to emulate the URL paths so requests don't go to places that don't exist and resources aren't fetched the wrong way. I know siteurl exists for WP, but I want a catch-all solution that also helps my other services.
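One path-based sketch in nginx, assuming a hypothetical upstream container named `wordpress` on port 80: strip the prefix when proxying, then rewrite absolute links in the HTML so the browser asks for assets under the prefix again. (The usual catch-all is actually one subdomain per service, which avoids this rewriting entirely.)

```nginx
# Sketch only; upstream name "wordpress" and the filter patterns are
# assumptions and will need tuning per application.
location /wordpress/ {
    proxy_pass http://wordpress:80/;   # trailing slash strips /wordpress/
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

    # sub_filter can only edit uncompressed responses.
    proxy_set_header Accept-Encoding "";

    # Rewrite root-relative links so /assets/... becomes /wordpress/assets/...
    sub_filter 'href="/' 'href="/wordpress/';
    sub_filter 'src="/'  'src="/wordpress/';
    sub_filter_once off;
}
```

This kind of response rewriting is fragile for apps that build URLs in JavaScript, which is why subdomain-per-service is the more robust catch-all.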
As the title says I’m curious what quality people choose. I chose Ultra-HD and am a little stunned when I look at the file sizes for some movies at 100GB.
Another question is how can I change the quality and will Radarr remove the existing download and then get a smaller one?
If it helps I have a GPU capable of hardware transcoding.
Today, the ingress-nginx maintainers have released patches for a batch of critical vulnerabilities that could make it easy for attackers to take over your Kubernetes cluster. If you are among the over 40% of Kubernetes administrators using ingress-nginx, you should take action immediately to protect...
When combined with today’s other vulnerabilities, CVE-2025-1974 means that anything on the Pod network has a good chance of taking over your Kubernetes cluster, with no credentials or administrative access required.
I got a server recently and now I need storage and I’m a little upset seeing the prices of HDDs.
My aim is for 4x10TB to run on RAID 5.
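For reference, RAID 5 spends one disk's worth of capacity on parity, so the usable space works out like this (quick sketch):

```python
def raid5_usable_tb(disks: int, size_tb: float) -> float:
    # RAID 5 stripes one disk's worth of parity across the array,
    # so usable capacity is (n - 1) * disk size.
    return (disks - 1) * size_tb

print(raid5_usable_tb(4, 10))  # 30.0 TB usable from 4 x 10 TB
```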
I saw other people linking serverpartdeals, and they do ship to the UK, but the shipping is insane, particularly as I want to buy the drives one at a time to spread the cost.
So what would you guys suggest? New ones are hella expensive, but I can’t seem to find any decent places selling manufacturer recertified ones.
Hey community! Do you use any database or anything to manage your environments? I started a relational Notion database a while back but never dug in. Object types for things like hardware, software, deployments, technologies, and tie-ins to my other LifeOS databases. (e.g., the Inkbird aquarium thermostat in Smart Life/Tuya via Home Assistant is linked to my aquarium database as a gear object.)
I'm rebuilding half my lab right now and thought it might be worth seeing if there's a better method before returning to my half-assed system.
(I pay for Notion - I also failed at making my super complex Obsidian implementation work across my devices and platforms. I'm in the market for a replacement for that whole universe, so if this thread turns into a referendum on Notion, I get it. Still, I'm open to discussion, so snark is unnecessary.)
A process for automating Docker container base image updates. (GitHub: beatkind/watchtower)
I am upgrading my drives and have created a new pool. The original pool had a drive fail, but it has since been replaced. Each drive's CKSUM value is at 4.04k, and the pool had two files with permanent errors. I deleted those, but it now shows this:
errors: Permanent errors have been detected in the following files:
zfsa:<0x8220>
zfsa:<0x8056>
When I try to create a snapshot and send it to the new pool, it fails after a few terabytes with this error: warning: cannot send 'zfsa@zpool_transfer': insufficient replicas.
CKSUM was always at zero until the first drive failed. The data is not important and I don't care about whatever is corrupt, I just want to get the data to the new pool.
Edit: Forgot to mention, I have scrubbed the pool many times.
Another edit: I've also tried zpool clear. Despite the high checksum errors, I've had no issues outside the two now-deleted files. The pool is used for a media server, which has been working perfectly.
I was looking to upgrade my storage and was recommended to go with used SAS drives and an LSI SAS controller. I purchased an LSI 9211-8i HBA, 8TB Seagate Exos drives, and these cables. The drives are not spinning up at all when connected to the power supply. Are these cables not the right choice for this?
Edit: I have confirmed that a regular SATA drive works when connected with an SFF-8087-to-SATA cable. Either I've somehow received 10 dead drives or I'm not powering them right.
Hi all!
I posted something similar in selfhosted, but it was deleted because it was only hardware-related, so here's a new attempt in the correct sub.
I want to reorganize my home server landscape a bit. A Proxmox server is to get an LXC with Ollama and Open WebUI.
This will be used by other containers that categorize via AI (paperless-ai, Hoarder, actualbudget-ai, maybe Home Assistant speech) and also for the occasional chat. Speed is not so important to me; the focus is on low idle power and models up to 100 GB. It's OK to wait several minutes for answers. I don't want a GPU.
(Currently I successfully run models up to 32b on a Lenovo M920x with an i7-8700 and 64 GB RAM. On the new build, those models should run faster, and models up to ~100b should at least run, even if slowly.)
I want to spend €2000, or up to €3000 if necessary (mainboard, CPU, RAM).
My research showed that the bottleneck is always the memory bandwidth of the CPU. I would take 128GB RAM and want to use all channels.
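That bandwidth bottleneck can be put into rough numbers: each generated token has to stream the full set of active weights through RAM, so decode speed is approximately memory bandwidth divided by model size in memory. A back-of-the-envelope sketch (the bandwidth figure is the theoretical quad-channel DDR4-2933 peak, not a measurement):

```python
def tokens_per_sec(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on decode speed for a dense model."""
    return bandwidth_gb_s / model_size_gb

# Quad-channel DDR4-2933: channels * MT/s * 8 bytes per transfer.
quad_ddr4 = 4 * 2933e6 * 8 / 1e9   # ~93.9 GB/s theoretical peak

print(round(quad_ddr4, 1))                      # ~93.9 GB/s
print(round(tokens_per_sec(quad_ddr4, 60), 2))  # ~1.56 tok/s, 60 GB model
print(round(tokens_per_sec(quad_ddr4, 100), 2)) # ~0.94 tok/s, 100 GB model
```

Real throughput lands below this bound, but it shows why channel count matters more than core count for this workload.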
Current variants:
Intel i9-10940X, 14C/28T, 3.30
I have been given an old T630 which has spent a bit of time in a light industrial environment, some exposure to dust and heat. It works OK except when it turns off and it doesn't log anything in th...
I have a T630 that has started powering off after a random amount of time, usually less than 12 hours. When it powers off, the backlight of the front-panel LCD goes off, as do all lights on the case, and iDRAC stops working too. So it looks like there's a problem in the power. Dell support seem to have run out of ideas, presumably because they don't want to suggest that I replace parts and they know I'm not going to pay Dell support.
I suspect it could be a faulty power backplane board (J14R7 / 0J14R7). How would I test for that?
I’ve been running a little army of Raspberry Pi and Libre Computer Le Potato boards for many years now.
Some died of overheating; one died because the microSD card failed so hard that some kind of electrical short took out the whole Pi.
I’m looking at the current trend: replace all that with a single mini PC or a 2-node cluster of mini PCs.
The point is I still want to consume as little electricity as possible. So a low-TDP CPU (10 to 15 W) is my most important criterion, then 2 disk bays (I don’t care about the form factor or connector).
Reading buyer comments on Amazon suggests that cheap Chinese mini PCs have their SSDs, motherboards, or power supplies dying quickly, sometimes in months, not even a year.
Could you please recommend a low-power mini PC? It may be Chinese, but from a reputable brand (which I've failed to identify so far).
I'm looking to upgrade some of my internal systems to 10 gigabit, and seeing some patchy/conflicting/outdated info. Does anyone have any experience with local fiber? This would be entirely isolated to within my LAN, to enable faster access to my fileserver.
Current existing hardware:
MikroTik CSS326-24G-2S+RM, featuring 2 SFP+ ports capable of 10GbE
File server with a consumer-grade desktop PC motherboard. I have multiple options for this one going forward, but all will have at least 1 open PCIe x4+ slot
This file server already has an LSI SAS x8 card connected to an external DAS
Additional consumer-grade desktop PC, also featuring an open PCIe x4 slot.
Physical access to run a fiber cable through the ceiling/walls
My primary goal is to have these connected as fast as possible to each other, while also allowing access to the rest of the LAN. I'm reluctant to use Cat6a (which is what these are currently using) due to reports of excessive heat and instability from the SFP
I'm currently planning to build a low-power NAS for my upcoming mini rack (10").
It's going to store daily proxmox vm disk snapshots, some image files and some backups from my laptop, all via NFS. Plus some more in the future, but generally, it's going to idle 95% of the day. Not decided on the OS yet, probably TrueNAS Core or OMV.
I already have an Olmaster 5.25" JBOD enclosure in which I'll put 3 × 2.5" 2 TB SSDs via SATA. The JBOD needs a single Molex connector to power all the SSDs. So I need at least 3 SATA ports plus boot.
Some research led me to this post, and I tend towards a similar build with a J4105-ITX (cheaper, probably a little less power consumption, enough CPU for a NAS).
These boards are officially limited to 8 GB RAM but seem to work fine with more as long as you don't update the BIOS, which is not optimal but acceptable if everything else works.
I want to establish a second LAN at home. It's supposed to host different services on different infrastructure (VMs, k8s, Docker) and mostly serve as a lab.
I want to separate this from the default ISP router LAN (192.168.x.0/24).
I have a Proxmox machine with 2 NICs (eno1 plugged into the ISP router, and eno2), both with corresponding bridges. I already set up the eno2 bridge with a 10.x.x.x IP and installed an OPNsense VM that has eno1 as the WAN interface in the 192 network and eno2 as the LAN interface in the 10. network, with a DHCP server.
I connected a laptop (no Wi-Fi) to eno2, got a DHCP lease, and can reach the OPNsense interface, machines in the 192 network, and the internet; the same goes for a VM on the eno2 bridge, so that part is working. There's a Pi-hole in the 192 network that I successfully set as the DNS server in OPNsense.
Here's what I am trying to achieve and where I'm not sure about how to properly do it:
Block access from the 10 network to 192 network except