😇 A Docker Compose bundle to run on servers with spare CPU, RAM, disk, and bandwidth to help the world. Includes Tor, ArchiveWarrior, BOINC, and more…

Is TrueNAS VM useful in my use case?
BLUF: Do I really need to run a TrueNAS VM?
Newbie here, running Proxmox to host Plex and the associated *arr servers, Nextcloud, DNS, and about a dozen other services. I am running TrueNAS Core in a VM.
I have three ZFS pools: one for the OS install, one for cloud storage, and one for media storage.
When configuring my media storage pool, I passed the disks from Proxmox to the TrueNAS VM and created an SMB share. Then, apparently to mount the share for Plex, I needed to pass the pool from TrueNAS back to Proxmox. This seems overcomplicated to me, but I'm not sure if my thinking is correct.
Basically, I'd like some sort of storage management GUI, just enough to show me how full the storage is, any errors, and whether a disk has gone bad.
If I do get rid of TrueNAS, how do I properly mount the disks back into Proxmox without losing the TBs of data I have on my media storage pool?
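Since both TrueNAS and Proxmox use OpenZFS, the pool itself can normally just be imported on the Proxmox host once the disks are no longer passed through to the VM; the data stays on the pool. A rough sketch (the pool name is a placeholder, and have backups before touching anything):
# Shut the TrueNAS VM down first so nothing else has the pool imported.
zpool import               # lists pools found on the attached disks
zpool import -f mediapool  # imports the pool; -f may be needed because it was last used by the VM
zpool status mediapool     # check health, errors, and which disks belong to it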
I prefer KeePass over Bitwarden because it is just a simple database file; there is less that can go wrong (no server component).
I am the original author of the Rust library for decrypting and modifying KeePass databases. The current best implementation of KeePass, KeePassXC, is written in C++, so there could theoretically be security-relevant memory corruption bugs in it (though the developers of the project are excellent and I don't think it is super likely). Rust is a language that does not have that class of issues by design, so I thought it would be interesting to see how far I could get. So far, I am still having fun and adding features bit by bit, and it is quite cool to me to be able to write one codebase that deploys to Windows, Linux, macOS, Android (potentially iOS), and any modern web browser.
Our son is fortunately very relaxed; he eats and sleeps a lot, so I can get some coding done while he is sleeping. Germany has decent parental leave, so my partner and I are both not working for the first two months of his life.

Running Frigate in a VM that in turn runs on Proxmox
Hi, I want to get Frigate installed on a Dell OptiPlex 3020. Given it's a 4th-gen Intel i5, I suspect I would be asking too much if I installed it in a VM running on Proxmox? From the Frigate website: "Frigate runs best with Docker installed on bare metal Debian-based distributions. For ideal performance, Frigate needs low overhead access to underlying hardware for the Coral and GPU devices. Running Frigate in a VM on top of Proxmox, ESXi, Virtualbox, etc. is not recommended though some users have had success with Proxmox." Anyone had any luck getting it up and running in a VM?
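For reference, the Docker setup Frigate's docs describe looks roughly like the sketch below; whether it performs well in a VM depends on the iGPU (and Coral, if any) actually being passed through from Proxmox. Paths, ports and device nodes here are assumptions to adapt:
# Sketch only: host paths are placeholders; /dev/dri (iGPU for hardware decode)
# and /dev/bus/usb (USB Coral, if present) must exist inside the VM, i.e. be
# passed through from Proxmox. The web UI port has changed between Frigate versions.
docker run -d \
  --name frigate \
  --restart unless-stopped \
  --shm-size=128m \
  --device /dev/dri/renderD128 \
  --device /dev/bus/usb:/dev/bus/usb \
  -v /opt/frigate/config:/config \
  -v /opt/frigate/media:/media/frigate \
  -p 5000:5000 \
  ghcr.io/blakeblackshear/frigate:stable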

Why does a Docker container have access to a directory on my system not explicitly mounted as volume?
I am in the process of migrating my Nextcloud instance from one server to another. I copied the Borg archive to one mountpoint, /mnt/ncbackup, and intend to keep my data in /mnt/ncdata.
I couldn't really find out what to mount the backup directory to, so I just fired it up as described in the documentation, and I was able to retrieve my backups from the non-mounted directory.
So this reveals a fundamental flaw in my understanding of how Docker works - I had assumed the container only had access to whatever was explicitly mounted. But I guess I am wrong?
This is the command I run:
sudo docker run \
  --init \
  --sig-proxy=false \
  --name nextcloud-aio-mastercontainer \
  --restart always \
  --publish 8080:8080 \
  --env APACHE_PORT=11000 \
  --env APACHE_IP_BINDING=0.0.0.0 \
  --env APACHE_ADDITIONAL_NETWORK="" \
  --env SKIP_DOMAIN_VALIDATION=false \
  --env NEXTCLOUD_DATADIR="/mnt/ncdata" \
  --volume nextcloud_aio_mastercontainer:/mnt/docker-aio-config \
  --volume /var/run/docker.sock:/var/r
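For context on the question above: the AIO mastercontainer mounts /var/run/docker.sock, so it can start additional containers that themselves mount the host paths you configure (such as NEXTCLOUD_DATADIR or the backup location), even though those paths are not volumes on this one container. One way to see what a given container really has mounted (the container name below is an example; check docker ps for the real ones):
# list running containers, then show the mounts of one of them
docker ps --format '{{.Names}}'
docker inspect nextcloud-aio-nextcloud --format '{{json .Mounts}}'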
For a media server:
- Audiobookshelf for audiobooks and podcasts (for podcasts it can fetch them online from an RSS link and download them, so you don't need to download them manually)
- Jellyfin for films, series and music (for music you can use Jellyfin as the backend and another app as the frontend if you don't like Jellyfin's music player; a lot of people find it lacking)
- Komga for reading comics and manga (there's also Kavita but I haven't tried it)
- Komf for fetching metadata for comics with Komga or Kavita
- Suwayomi Server for manga (it doesn't only act as a reader: with extensions it can find manga online and download them; it can sync your reading progress with AniList, and it's compatible with Tachiyomi if you need that)
- Haven't found one yet for ebooks. I passionately hate Calibre and wouldn't touch it again with a ten-foot pole, but a lot of people swear by it, so you might give it a try and see whether you love it or hate it (it's usually one of the two). Be warned though: it will automatically rename all your books and sort them into subfolders in a very stupid way, making it difficult to find anything again manually. So if you want to test it, do it on a copy of your ebooks first; that way, if you don't like it, you won't be stuck with everything in your ebook library renamed weirdly (speaking from experience -_-).
Cloud:
- Nextcloud: your very own locally hosted cloud.
Everything can be run in docker containers so your distro or even OS doesn't matter.
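As a rough illustration, here is a stripped-down docker-compose sketch for two of the services above (the image names are the commonly used ones, but the ports and host paths are placeholders to adapt):
services:
  jellyfin:
    image: jellyfin/jellyfin
    ports:
      - "8096:8096"            # web UI
    volumes:
      - ./jellyfin/config:/config
      - /path/to/media:/media:ro
    restart: unless-stopped
  audiobookshelf:
    image: ghcr.io/advplyr/audiobookshelf
    ports:
      - "13378:80"             # web UI on host port 13378
    volumes:
      - /path/to/audiobooks:/audiobooks
      - /path/to/podcasts:/podcasts
      - ./abs/config:/config
      - ./abs/metadata:/metadata
    restart: unless-stopped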
Hardware:
- Personally I run everything from my NAS in docker containers but it's starting to get overloaded so I'm planning to make a dedicated media server on a cheap mini PC like a refurbished Dell OptiPlex SFF.
- You could also go for something like an OrangePi or RaspberryPi if you don't mind using ARM.

That's working as intended; as the compose docs state, the command passed to run overrides the command defined in the service configuration, so it wouldn't normally be possible to shut down all the containers and then use docker compose run to interact with one of them. run doesn't start anything in the container other than the command you pass to it.
I'm not familiar with funkwhale, but they probably meant either to (a) shut down all the containers except postgres so that running pg_dump has something to connect to, or (b) use exec as you have done.
Personally, I do what you did and use exec most of the time to do database dumps. AFAIK, postgres doesn't require that all other connections to it are closed before using pg_dump. It begins a transaction at the time you run it, so it doesn't interfere with anything else going on while it produces output (see the relevant SO answer here). You could probably just leave your entire funkwhale stack up when you use docker compose exec to run pg_dump.
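For what it's worth, the exec-based dump usually looks something like the line below; the service, database and user names are guesses, so check your compose file / .env for the real ones:
# -T disables the pseudo-TTY so the redirect captures clean SQL output
docker compose exec -T postgres pg_dump -U funkwhale funkwhale > funkwhale_$(date +%F).sql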

“docker compose run” isn’t doing what I think it’s supposed to
I’m running funkwhale in docker. This consists of half a dozen docker containers, one of which is postgres.
To run a backup, funkwhale suggests shutting down all of the containers and then using docker compose run to run pg_dump in the postgres container. Presumably this is to copy the database when nobody is accessing it.
For some reason when I do this, I get an error like:
pg_dump: error: connection to server on socket "/var/run/postgresql/.s.PGSQL.5432" failed: No such file or directory
Is the server running locally and accepting connections on that socket?
It would seem that postgres isn’t running. I see the same error with other commands such as psql.
If I fully boot the container and then try exec-ing the command, it works fine.
So it would seem that the run command isn’t fully booting the instance before running the command? What’s going on here?
The container is built from postgres:15-alpine

good-karma-kit: self host on spare compute
So this includes a diverse set of services one could self-host. For example:
- tor relay
- scientific compute nodes for protein folding
- ipfs node (I would not host that one with my current knowledge)
Thoughts on this?
A mini PC is probably going to be the cheapest and most power-efficient option. Check out the Minisforum Refurbished page.

On Home Server Hardware
I'm looking into building a small media server. I will be the only user of it, so I don't need the ability to do simultaneous streams. I am planning on running Jellyfin.
Since I am on a more limited budget, I was thinking of an older desktop and transcoding my files (on my powerful desktop) into well-supported formats so they play smoothly.
Are there any specific recommendations you have for cheap hardware I should keep an eye out for?

Any mobile keyboards (iOS or Android) that can transcribe using a self-hosted Whisper instance?
Just got Whisper working on my local server, so I can send it audio files via a curl POST request and receive transcribed text.
Are there any keyboard plugins for phones that could be pointed at a personal server running Whisper, to replace functions like Siri/Google Assistant voice transcription?
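For reference, this is roughly the request such a keyboard or shortcut would need to make, assuming the server exposes an OpenAI-compatible transcription endpoint (many self-hosted Whisper wrappers do); the host, port, and field values below are placeholders:
# send an audio file and get the transcription back as JSON
curl -s http://whisper.home.lan:8000/v1/audio/transcriptions \
  -F file=@recording.wav \
  -F model=whisper-1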
No. I really need Production to be stable because other people will be using it. And I do not want my own playing around to cause issues.
I am really considering both @[email protected]'s idea of a smaller NAS from the same company and @[email protected]'s idea of just creating a new volume on the same NAS.

If you want to have two environments (let's call them Sandbox and Production), and you want to have a NAS in Production (TerraMaster, Ugreen, etc.), what do you put in your Sandbox environment?
I am trying to plan my home lab to satisfy two different needs:
- I want a stable environment where I will put a relatively expensive NAS and maybe some other Zima boards.
- I also want to try new versions and configurations in an env where I can break stuff BEFORE trying things on my Production environment. I would also like to use that environment to try other things like playing with Kubernetes, Docker, Iceberg, etc. I am a backend software engineer so this is very useful to me. Besides being fun.
So, I am just trying to gather ideas on how to configure this both in terms of software and hardware.

Real printers used real paper...


Is VPS still considered self-hosting?
The services are maybe hosted by myself, but the servers aren't mine. I'm only borrowing a small chunk of resources from some company, so can it still be considered self-hosting?

Asking for recommendations regarding cold database storage
I run a ClickHouse database. My use case is 99% writes and 1% reads - I rarely query the database. Currently, the tables (excluding system logs) use 6GB of the 80GB on my Ionos VPS, with the VPS having 50GB of free space in total.
In the far future, when that 50GB starts to run out, are there any cheap storage services out there that support a filesystem or a database? Due to querying the data so rarely, read speed isn't that big a deal, and if the storage is on HDD, so be it.
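One possibility worth knowing about: ClickHouse itself can tier MergeTree data onto an S3-compatible bucket (Backblaze B2, Wasabi, etc.), which fits a write-heavy, rarely-read workload. Very roughly, and with the endpoint and credentials as obvious placeholders, the server-side storage configuration looks something like this:
<clickhouse>
  <storage_configuration>
    <disks>
      <s3_cold>
        <type>s3</type>
        <!-- placeholder bucket URL; must end with a trailing slash -->
        <endpoint>https://s3.example.com/my-bucket/clickhouse/</endpoint>
        <access_key_id>PLACEHOLDER</access_key_id>
        <secret_access_key>PLACEHOLDER</secret_access_key>
      </s3_cold>
    </disks>
    <policies>
      <hot_and_cold>
        <volumes>
          <hot>
            <disk>default</disk>
          </hot>
          <cold>
            <disk>s3_cold</disk>
          </cold>
        </volumes>
      </hot_and_cold>
    </policies>
  </storage_configuration>
</clickhouse>
Tables would then use SETTINGS storage_policy = 'hot_and_cold', optionally with a TTL ... TO VOLUME 'cold' rule, so older parts move off the VPS disk. Treat this as a sketch to check against the ClickHouse docs rather than a drop-in config.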

Join Finamp's First Hackathon - Starting Today!
cross-posted from: https://feddit.nl/post/31222548
TL;DR:
Digital Hackathon for Finamp, an open source Jellyfin music client.
From today until April 6th, so two weekends and the week in-between. Looking for designers and developers, as well as anyone else interested in contributing! Check out the Finamplify GitHub project and our Discord server for more info!
Hey everyone!
Today's the day, Finamp's first-ever Hackathon - called "Finamplify" - is starting! Let's have a week of hacking together on your favorite open source music client for Jellyfin :D
This is a digital event happening on Finamp's GitHub repository and our beta Discord server.
Check out our previous post for some background information, including the Whys and

MAZANOKE: A self-hosted local image compressor that runs in your browser
cross-posted from: https://lemmy.world/post/27452084
MAZANOKE is a simple image compressor and converter that runs entirely in your browser. No external uploads, works offline as a web app, and is powered by the "Browser Image Compression" library.
Github project page: https://github.com/civilblur/mazanoke
Features
- 🚀 Compress & Convert Images Instantly In Your Browser
- Adjust image quality (0-100%).
- Set a target file size.
- Set max dimensions, to not exceed a certain width/height.
- Convert between JPG, PNG, and WebP.
- 🌍 Installable Web App
- Use as a Progressive Web App (PWA).
- Dark and light mode.
- Fully responsive for desktop, tablet, and mobile.
- 🔒 Privacy-Focused
- Works offline.
- All image processing happens locally.
- No data is uploaded to external servers. Your files stay on your device.
Use case
This app is designed to co

Join Finamp's first Hackathon Next Week!
cross-posted from: https://feddit.nl/post/30905225
TL;DR:
Digital Hackathon for Finamp, an open source Jellyfin music client.
Saturday, 2025-03-29 to Sunday, 2025-04-06, so two weekends and the week in-between. Looking for designers and developers, as well as anyone else interested in contributing! Check out the GitHub repository and our Discord server for more info when the time comes!
Hey everyone!
I'm thrilled to announce that Finamp, an open source Jellyfin music player, will have its first Hackathon next week, starting on Saturday, March 29th and continuing until Sunday, April 6th!
Get ready for over a week of improvements to your favorite open source music client for Jellyfin :D
This is a digital event happening on Finamp's GitHub repository and our beta Discord server.
Why Should I

@justanotherperson I don't yet have HA installed. I was trying to figure out the best way to do it when I saw that in the docs. Good to know that's not so true anymore.

@justanotherperson I read a little on the HomeAssistant website, and it didn't sound like you could use addons if you installed in Docker.

Wondering if I should switch my #RaspberryPi OS from #Stormux, based on #ArchLinuxARM, to #HomeAssistantOS. I mostly work with it over SSH anyway and this might allow me to do more with it. What do others who #SelfHost think?
#SelfHosting #SelfHosted #Linux
@selfhost @selfhosting @selfhosted

@RareBird15 @selfhost @selfhosted @selfhosting
I have an rpi3b+ with inn2 newsserver (connected out via uucp), postfix, ngircd, thelounge, and a few other things
I also have a vps (does that count as self-hosted?) with a similar setup (can't get uucp to work yet), but also znc and bitlbee
I also had a pihole/unbound dns server, but that one died a while ago

I'm curious to hear what others are #SelfHosting! Here's my current setup:
Hardware & OS
- Hardware: #RaspberryPi500 (8 GB RAM, 512 GB SD card) #RPi #RPi500 #SingleBoardComputers #HomeLab
- OS: #Stormux, an accessible #Linux distro based on #ArchLinuxARM #LinuxAccessibility #AccessibleTech
Infrastructure & Networking


Hmm, I wonder...does "my favourite pirate streaming service" count as avoiding big tech?
Droid-ify is a Material Design F-Droid client though. And it is better.
Completely agree, but I just swapped over to Obtainium and now get releases directly from GitHub. Highly recommend.

Here is my list, 15 out of 17 (the other 2 are not applicable because I don't use anything like that):

I'm not sure if DuckDuckGo and TMap count because they're both just alternatives but still from big tech.

What is your logging setup?
As the title says, what logging and/or alerting setup do you have? I've used Graylog in the past, but find it a bit too complex and "heavy". I would like something a bit more lightweight. Alternatives I've looked into:
- Dozzle - this looks nice, and would have been a perfect fit, but it looks like it's only for Docker containers, and I would like to collect all syslogs and everything in one place
- Grafana Loki - Haven't looked too much into this, but considering replacing Graylog with it (rough sketch of that stack below). I don't know if it's any less complex, so I'm a bit on the fence.
Any other recommendations?
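For reference, the Loki route usually ends up as Loki + Promtail (plus Grafana for viewing), where Promtail can tail both syslog/journal files and Docker container logs. A minimal, unhardened compose sketch; image tags, ports and config paths are placeholders:
services:
  loki:
    image: grafana/loki:2.9.4
    command: -config.file=/etc/loki/local-config.yaml   # config shipped in the image
    ports:
      - "3100:3100"
    restart: unless-stopped
  promtail:
    image: grafana/promtail:2.9.4
    command: -config.file=/etc/promtail/config.yml
    volumes:
      - ./promtail-config.yml:/etc/promtail/config.yml:ro
      - /var/log:/var/log:ro                              # host syslog files
      - /var/lib/docker/containers:/var/lib/docker/containers:ro  # container logs
    restart: unless-stopped
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    restart: unless-stopped
Promtail still needs a small config of its own telling it which paths to scrape and to push to http://loki:3100/loki/api/v1/push.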
Have a nose around lowendbox.com; you can find some nice deals on there. Lots available that are not US-based.

LocalAI iOS Client
I’m trying to find an iOS client that lets me point to my self hosted LocalAI instance. Thanks!
- You really don't need to explicitly set all these misc headers. Caddy takes care of 99% of them by default regardless, and for the most part they're really not doing much for you considering these are self-hosted services.
- br is mostly inferior to zstd.
Your API endpoint doesn't exist, so something isn't configured correctly here:
❯ xhs https://bookmarks.laniecarmelo.tech/api/v1/auth
HTTP/2.0 404 Not Found
alt-svc: h3=":443"; ma=2592000
content-encoding: gzip
content-security-policy: default-src 'self' https: 'unsafe-inline' 'unsafe-eval'; img-src https: data:; font-src 'self' https: data:; frame-src 'self' https:; object-src 'none'
content-type: text/html; charset=utf-8
date: Sun, 09 Mar 2025 02:31:59 GMT
etag: "55v7hh2i2t1fq"
referrer-policy: strict-origin-when-cross-origin
server: Caddy
strict-transport-security: max-age=31536000; includeSubDomains; preload
vary: Accept-Encoding
x-content-type-options: nosniff
x-powered-by: Next.js
x-xss-protection: 1; mode=block
Check the docker config and ensure that two webservers aren't being spawned here: one for the frontend that reverse_proxy 127.0.0.1:3009 points at, and an additional one for the API server on a different port.
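In other words, a Caddyfile site block for a service like this can usually stay as small as the following (domain and upstream port taken from this thread; Caddy's defaults cover the rest):
bookmarks.laniecarmelo.tech {
    # optional response compression; Caddy handles TLS, HSTS, etc. on its own
    encode zstd gzip
    reverse_proxy 127.0.0.1:3009
}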

@justanotherperson These latest logs are from the service in docker, not the browser.
linkwarden | 03/08/2025 08:02:06 PM [0]   code: 'ERR_STREAM_WRITE_AFTER_END'
linkwarden | 03/08/2025 08:02:06 PM [0] }
linkwarden | 03/08/2025 08:02:06 PM [0] Node.js v18.18.2

linkwarden | 03/08/2025 08:02:18 PM [0] Error [ERR_HTTP_HEADERS_SENT]: Cannot set headers after they are sent to the client
    at new NodeError (node:internal/errors:405:5)
    at ServerResponse.setHeader (node:_http_outgoing:648:11)
    at _res.setHeader (/data/node_modules/next/dist/server/base-server.js:306:24)
    at sendJson (/data/node_modules/next/dist/server/api-utils/node.js:226:9)
    at apiRes.json (/data/node_modules/next/dist/server/api-utils/node.js:445:31)
    at users (/data/.next/server/pages/api/v1/users.js:325:43)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5) {
  code: 'ERR_HTTP_HEADERS_SENT'
}

linkwarden | 03/08/2025 08:02:18 PM [0] node:events:495
      throw er; // Unhandled 'error' event
      ^
Error [ERR_STREAM_WRITE_AFTER_END]: write after end
    at new NodeError (node:internal/errors:405:5)
    at ServerResponse.end (node:_http_outgoing:1017:15)
    at ServerResponse.end (/data/node_modules/next/dist/compiled/compression/index.js:22:783)
    at apiRes.end (/data/node_modules/next/dist/server/api-utils/node.js:441:32)
    at sendError (/data/node_modules/next/dist/server/api-utils/index.js:165:9)
    at apiResolver (/data/node_modules/next/dist/server/api-utils/node.js:489:34)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async NextNodeServer.runApi (/data/node_modules/next/dist/server/next-server.js:674:9)
    at async Object.fn (/data/node_modules/next/dist/server/next-server.js:1141:35)
    at async Router.execute (/data/node_modules/next/dist/server/router.js:315:32)
Emitted 'error' event on ServerResponse instance at:
    at emitErrorNt (node:_http_outgoing:853:9)
    at process.processTicksAndRejections (node:internal/process/task_queues:83:21) {
  code: 'ERR_STREAM_WRITE_AFTER_END'
}
Node.js v18.18.2

(The same ERR_HTTP_HEADERS_SENT / ERR_STREAM_WRITE_AFTER_END pair repeats at 08:04:20 PM, 08:08:11 PM, and 08:14:32 PM.)

linkwarden | 03/08/2025 08:14:32 PM [0] Error: socket hang up
    at connResetException (node:internal/errors:720:14)
    at Socket.socketOnEnd (node:_http_client:525:23)
    at Socket.emit (node:events:529:35)
    at endReadableNT (node:internal/streams/readable:1368:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21) {
  code: 'ECONNRESET'
}

(The socket hang up / ECONNRESET error is logged three times at 08:14:32 PM.)

@catloaf Once I create an account, I plan to turn off registrations. I wanted to be able to access it with an easy to remember domain rather than an IP address and port. That's why I'm exposing it.

@nick Thanks. I thought I redacted all of that.

@justanotherperson All I know is that all of my other services work fine. Nothing changes in the docker logs, but when I click the sign up button this shows up in the console.
Error
POST https://bookmarks.laniecarmelo.tech/api/v1/users 400 (Bad Request)
t.js:1 Failed to load resource: the server responded with a status of 400 ()
menu.js:6 [Violation] Added non-passive event listener to a scroll-blocking 'wheel' event. Consider marking event handler as 'passive' to make the page more responsive. See https://www.chromestatus.com/feature/5745543795965952
[Violation] Forced reflow while executing JavaScript took 30ms
I can try taking the services out of the wildcard block and see if that helps.

@justanotherperson I did. I went to bookmarks.laniecarmelo.tech, clicked sign up, entered my details, and clicked the sign up button. Nothing happened.

@justanotherperson No error that I can see:
[lanie@stormux linkwarden] $ docker logs linkwarden
[0] Warning: For production Image Optimization with Next.js, the optional 'sharp' package is strongly recommended. Run 'yarn add sharp', and Next.js will use it automatically for Image Optimization.
[0] Read more: https://nextjs.org/docs/messages/sharp-missing-in-production

Hi all. Hoping someone in the #SelfHosting community can help. I'm trying to set up #Linkwarden in #Docker behind #Caddy. The service is running, but I'm unable to create a user account. This is what I see in my browser console when I try:
register:1 [Intervention] Images loaded lazily and replaced with placeholders. Load events are deferred. See https://go.microsoft.com/fwlink/?linkid=2048113
register:1 [DOM] Input elements should have autocomplete attributes (suggested: "new-password"): (More info: https://www.chromium.org/developers/design-documents/create-amazing-password-forms) <input data-testid="password-input" type="password" placeholder="••••••••••••••" class="w-full rounded-md p-2 border-neutral-content border-solid border outline-none focus:border-primary duration-100 bg-base-100" value="ty

Which prosody docker image are you using and why?
The official docker image is still at v0.11 and was last updated in 2016. It looks like maybe trunk is at 0.12 but, as far as I can tell, that's a rolling release. My prosody install is too important to go with a rolling release.
Both the alternatives that they point to are similarly old:
- v0.11.13 --- https://github.com/OpusVL/prosody-docker/
- v0.11.x (?) --- https://github.com/unclev/prosody-docker-extended
My server is languishing on the unclev image. I'd like to migrate to something with 0.12 and have a bit more confidence in its resilience.
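If nothing better maintained turns up, one fallback is building a small image yourself on a distro that already packages 0.12. A rough sketch, assuming Debian bookworm's prosody 0.12 is new enough for you; the config and data layout follow the standard Debian package:
FROM debian:bookworm-slim

# Debian 12 packages prosody 0.12.x, so no third-party image is needed
RUN apt-get update \
    && apt-get install -y --no-install-recommends prosody \
    && rm -rf /var/lib/apt/lists/*

# 5222 c2s, 5269 s2s, 5280/5281 for the HTTP(S) modules
EXPOSE 5222 5269 5280 5281

# Mount your own prosody.cfg.lua, certs and data here. In the config, set
# daemonize = false so the container stays in the foreground, and unset (or
# point somewhere writable) the pidfile the stock Debian config defines.
VOLUME ["/etc/prosody", "/var/lib/prosody"]

USER prosody
CMD ["prosody"]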