


I also have backup accounts on these instances:
https://beehaw.org/u/lodion
https://sh.itjust.works/u/lodion
https://lemmy.world/u/lodion
https://lemm.ee/u/lodion
https://reddthat.com/u/lodion

His term has covered 4 months so far... Jan, Feb, Mar, Apr.

No idea where it's from, but I used it as a kid too. Also "munted", though that has fallen out of use because it sounds similar to another word.

We've now caught up with LW, and should stay in sync barring any issues or a massive growth in activity from LW:


Ha just finished watching a movie and was going to post this, you beat me :)


Still on track to be caught up late tomorrow :)


At the rate we're catching up, I think we'll be all caught up Saturday some time:


It's a simple tick box in the admin settings interface. Trivially easy to enable, and presumably disable. Doesn't appear to even require a service restart.

It's not so much about cost, but impact on user experience when the server is busy serving non-logged in users.

It's not usually a concern, but having this in place would have mitigated the recent "attacks" we experienced without me having to do anything.

I'm not talking about making this community private. It's valuable as is.
0.19.11 adds the ability for an instance to not display content to non-logged-in users. Non-AZ users would retain access to all AZ-homed communities etc.
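For reference, the toggle lives in the site settings, so it could also be flipped via the admin API rather than the web UI. A minimal sketch in Python, assuming a hypothetical setting name `require_authenticated_view` (the real field name in 0.19.11 may differ) and an admin login token:

```python
import requests

INSTANCE = "https://aussie.zone"
ADMIN_JWT = "..."  # admin account's login token

# "require_authenticated_view" is a placeholder name; check the 0.19.11
# site settings schema for the actual key before relying on this.
resp = requests.put(
    f"{INSTANCE}/api/v3/site",
    json={"require_authenticated_view": True},
    headers={"Authorization": f"Bearer {ADMIN_JWT}"},
    timeout=30,
)
resp.raise_for_status()
```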

Go Private?
Lemmy 0.19.11 (which I've just upgraded AZ to) has a new feature that allows regular federation but requires users to be logged in to view content.
I'd like to gauge feedback from users on this. It will not add privacy or limit the propagation of posts/comments etc. But it will limit AZ server resource consumption by bots or users that are not logged in.
Thoughts/concerns on enabling this feature?
Update: thank you all for your thoughts and feedback on this. We'll leave AZ as it is, though may use this feature in future if we need to mitigate attacks or other malicious traffic.

As posted elsewhere, LW admins enabled concurrent outgoing federation ~12 hours ago, which has greatly improved things:


They turned this on overnight after it came up in an admin chat channel; we're no longer falling behind and are catching up at a good rate. Once caught up, federated activity from LW -> AZ should be pretty snappy and remain that way.
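For anyone who wants to watch the catch-up themselves, here's a rough sketch of how you might estimate the lag, assuming the 0.19 `/api/v3/federated_instances` endpoint exposes per-instance send state (the field names below are my best guess at that response shape):

```python
import requests
from datetime import datetime, timezone

# Ask lemmy.world how far its outgoing queue towards aussie.zone has progressed.
resp = requests.get("https://lemmy.world/api/v3/federated_instances", timeout=30)
resp.raise_for_status()

for inst in resp.json()["federated_instances"]["linked"]:
    if inst.get("domain") == "aussie.zone":
        state = inst.get("federation_state") or {}
        last = state.get("last_successful_published_time")
        if last:
            sent = datetime.fromisoformat(last.replace("Z", "+00:00"))
            print("approx lag:", datetime.now(timezone.utc) - sent)
        break
```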


Yep, /c/meta is the perfect place to discuss this. Or anything else meta relating to AZ.

Have reached out to LW admins, they've enabled 2 send threads... looking better now... will keep an eye on it.

You're wrong, I'll leave it at that. Won't be replying any further.

Thanks, I hadn't seen this one. I've updated our nginx config and AZ now passes the test on that page. No idea if it will help with GIFs etc.

With the resources available, it's not feasible for AZ to develop/deploy custom solutions to problems that remote instances can resolve with trivial configuration changes.
I'm not going to address specific parts of your post, suffice to say I disagree on almost everything you said.
As I said previously, if you have a workable solution please do develop it and submit a PR to the lemmy devs. I'd be happy to try your suggestion should they roll it in.

You're contradicting yourself there. By definition, adding an external service is a customisation to lemmy. I'm not interested in running un-vetted software from a third party.
This has been discussed previously, with an offer from a reputable source to batch content from LW. That setup required an additional server for AZ, close to LW, and for LW to send their outgoing federation traffic for AZ to it, which would then batch it and send it on to the real AZ server. This offer was declined, though appreciated.
I've been transparent and open about the situation. You seem to think this is the fault of AZ, and we're willfully not taking an action that we should be taking. This is not the case.
As it stands the issue is inherent to single-threaded lemmy federation, which is why the devs added the option for multiple concurrent threads. Until LW enable this feature, we'll see delayed content from them when their activity volume is greater than what can be federated with a single thread. To imply this is the fault of the receiving instances is disingenuous at best, and deliberately misleading at worst.
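For the curious, "multiple concurrent threads" is just a setting in lemmy.hjson on the sending instance. A sketch, assuming the key name used in recent 0.19.x releases (check the defaults.hjson shipped with your version before copying):

```hjson
{
  federation: {
    # Default is 1: a single, strictly ordered send queue per remote instance.
    # Raising it lets a busy instance push activities to a remote in parallel.
    concurrent_sends_per_instance: 8
  }
}
```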

Note I said lemmy AND the activitypub protocol, ie lemmy does not currently have this capability. If it were added to mainline lemmy I'd be open to configuring it, but it's not, so I can't.
The root cause of the issue is well understood, the solution is available in lemmy already: multiple concurrent outgoing federation connections to remote instances. AZ has had this configured since it was available. LW have not yet enabled this, though they're now running a version that has it available.
Appreciate the offer, but I'm not interested in customising the AZ server configuration more than it already is. If you write it up and submit a PR that the main lemmy devs incorporate, I'd be happy to look at it.

That isn't how lemmy and the activitypub protocol work. The source instance pushes metadata about new content; the remote instance then needs to pull it. If we've not received the push yet, we can't pull the additional info.
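To make the ordering concrete, here's a rough sketch (not Lemmy's actual code) of the receiving side: the source instance POSTs an activity to our inbox that mostly just names the object, and only then can we dereference it for the full content.

```python
import requests

def handle_inbox_push(activity: dict) -> dict:
    """Sketch of the receive side of ActivityPub federation.

    The source instance pushes an activity to our /inbox containing
    little more than the object's URL. We then have to fetch that URL
    ourselves to get the full post/comment. If the push never arrives,
    there is nothing for us to pull.
    """
    obj = activity["object"]
    object_url = obj if isinstance(obj, str) else obj["id"]

    # Pull step: dereference the object from the source instance.
    resp = requests.get(
        object_url,
        headers={"Accept": "application/activity+json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```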

Issues 23/3/25
Not entirely clear to me what is going on, but we've seen a large influx of traffic from overseas today. This has led to high CPU usage and performance issues.
I've put in place a block on what seems to be the source of the traffic, but it's not perfect and may cause other issues. If you see/hear of any, please let me know here.

Upgrade to lemmy 0.19.8
I'm about to restart services for this upgrade. Shouldn't be down longer than a few minutes.

Upgrade complete to 0.19.6
I'll be working on upgrading aussie.zone to lemmy 0.19.6 today. All going well, disruption will be brief, but there may be some performance issues related to back end DB changes required as part of the upgrade.
I'll unpin this once complete.

Alternate Web UI for aussie.zone
I've spun this up for fun, to see how it compares to the base lemmy UI. Give it a whirl, and post any feedback in this thread. Enjoy!
It could go down at any time, as it looks as though the dev is no longer maintaining it...
edit: using this https://github.com/rystaf/mlmym
UPDATE Tuesday 12/11: I've killed this off for now. Unclear why, but I was seeing a huge number of requests from this frontend to the lemmy server back end. Today it alone sent ~40% more requests than all clients and federation messages combined.

Nerd Update 25/10/24
It's been 6 months or so... figure it's time for another of these. Keep in mind there have been some major config changes in the last week, which have resulted in the oddities below.
Graphs below cover 2 months, except Cloudflare, which only goes back 30 days on free accounts.
CPU:

Memory:

Network:

Storage:

Cloudflare caching:

Pictures are broken
I'm in the process of migrating images to a properly configured object storage setup. This involves an offline migration of files. Once complete, I'll start up pict-rs again. Until then, most images will be broken.
All going well, this will finish by morning Perth time, and once it's up and running again it may help with the ongoing issues we've had with images.

Emails from Aussie Zone
After some users have had issues recently, I've finally gotten around to putting in place a better solution for outbound email from this instance. It now sends out via Amazon SES, rather than directly from our OVH VPS.
The result is that emails should actually get to more people now, rather than being blocked by over-enthusiastic spam filters... looking at you, Outlook and Gmail.
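For the curious, the switch is really just the SMTP block in lemmy.hjson pointed at SES instead of sending directly from the VPS. A sketch with placeholder credentials, region and from-address, assuming the usual 0.19-era key names:

```hjson
{
  email: {
    # Amazon SES SMTP endpoint for the region in use (placeholder region).
    smtp_server: "email-smtp.us-east-1.amazonaws.com:587"
    smtp_login: "SES_SMTP_USERNAME"
    smtp_password: "SES_SMTP_PASSWORD"
    # Placeholder address; must be verified in SES.
    smtp_from_address: "noreply@aussie.zone"
    tls_type: "starttls"
  }
}
```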

REBOOTING
About to reboot the server, hold onto your hats.

Lemmy 0.19.4
Hey all, following the work over the weekend we're now running Lemmy 0.19.4. Please post any comments, questions, feedback or issues in this thread.
One of the major features added is the ability to proxy third-party images, which I've enabled. I'll be keeping a closer eye on our server utilisation to see how this goes...
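For reference, the image proxying is a lemmy.hjson option rather than anything exotic. A sketch, assuming the 0.19.4 key names (double-check the defaults.hjson for your version):

```hjson
{
  pictrs: {
    # ProxyAllImages rewrites remote image URLs so clients fetch them
    # through this instance instead of hot-linking the origin server.
    image_mode: ProxyAllImages
  }
}
```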

Maintenance
This weekend I'll be working to upgrade AZ to lemmy 0.19.4, which requires changes to some other back end supporting systems.
Expect occasional errors/slowdowns, broken images etc.
Once complete, I'll be making further changes to enable/tweak some of the new features.
UPDATE: one of the back end component upgrades requires dumping and reimporting the entire lemmy database. This will require ~1 hour of total downtime for the site. I expect this to kick off tonight ~9pm Perth time.
UPDATE2: DB dump/re-import going to happen ~6pm Perth time, ie about 10 minutes from this edit.
UPDATE3: we're back after the postgres upgrade. Next will be a brief outage for the lemmy upgrade itself... after I've had dinner.
UPDATE4: We're on lemmy 0.19.4 now. I'll be looking at the new features/settings and playing around with them.
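For anyone wondering what the dump/re-import step involves, it's roughly this shape (a sketch with assumed container and database names, not the exact commands run):

```sh
# Stop lemmy so nothing writes to the DB during the dump.
docker compose stop lemmy

# Dump the whole cluster from the old postgres container.
docker compose exec -T postgres pg_dumpall -U lemmy > lemmy_dump.sql

# Swap the postgres image/volume over to the new major version, then re-import.
docker compose up -d postgres
docker compose exec -T postgres psql -U lemmy -d postgres < lemmy_dump.sql

docker compose start lemmy
```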

Nerd Update 20/4/24
It's been a little while since I posted stuff :)
CPU:

Memory:

Network:

Storage:

Cloudflare caching:

Comments:
Not much has changed in quite a while. I still have a cron job running to restart Lemmy every day due to memory leaks; hopefully this improves with future updates. Outside of that, CPU, mem
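For anyone curious, the daily restart is nothing fancy, just a cron entry along these lines (a sketch assuming a docker compose deployment; the real path and service name may differ):

```sh
# m h  dom mon dow   command
# Restart the lemmy backend each morning to work around the memory leak.
0 4 * * *  cd /opt/lemmy && docker compose restart lemmy
```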

Current issues (lemmy 0.19)
The upgrade to lemmy 0.19 has introduced some issues that are being investigated, but we currently have no fixes for:
- thumbnails break after a time. Caused by memory exhaustion killing object storage processes.
- messages to/from lemmy instances not yet running 0.19 are not federating. I believe this requires bugfixes from the devs.
~~I've re-enabled 2-hourly lemmy restarts. Hopefully this will help with both issues, though it will result in a disruption to the site every couple of hours.
When the restarts are disabled I'll unpin this post. As any other issues are identified, I'll post them here too.~~
Update: I've disabled the 2-hourly restart after upgrading to 0.19.2... let's see how this goes...
Update2: no issues seen since the upgrade, looks to have resolved both the memory leak and the federation issues. Hooray :)

Lemmy 0.19 Upgrade
I'll be restarting AZ today for an update to lemmy 0.19.
Upgrade complete.
This is a major upgrade, so I expect there to be some issues. Strap in, enjoy the ride.
Expect:
- further restarts
- bugs
- slowdowns
- logouts
- 2FA being disabled
- possible issues with images, as pictrs is being upgraded to 0.4 at the same time

Family tourist activities
Hi all,
Looks like I'll be visiting Melbourne in winter 2024 for a wedding with my family. Any must-see tourist attractions you can suggest?
We'll definitely be hitting up Melbourne zoo, and maybe Werribee.
Thinking I'll have to hire a car... though I'd prefer to stay central and use public transport and Uber when required.

Upgrade complete
I'm kicking off a storage upgrade on the server, expect it to proceed in the next 15 minutes and be back online shortly after.

Financials August 2023
Here we are, another month down... still kicking.
As usual I'm stating full dollar figures for simplicity, but they're all rounded/approximate to the nearest dollar. I'm not an accountant and it's close enough for our purposes.
Income
$98 AUD thanks to 13 generous donors
Expenses
$45 OVH server fees
$10 (~$6 USD) Wasabi object storage
$39 ($25 USD) Domain registration extended to 6th August 2025
= $94
Balance
+$486 carried forward from July
+$98 income for August
-$94 expenses paid in August
= $490 current balance
Future
Baseline storage usage on the server is now at ~70% at the low point of the day; time permitting, I'll upgrade shortly to ensure sufficient room for growth. Doubling the current storage will cost an additional ~$12 per month.
THANK YOU
to everyone that has contributed to the running costs of the site.
A very special thank you to @[email protected] for eve

Nerd update 2/9/23
CPU:

Memory:

Network:

Storage:

Cloudflare caching:

Summary:
Not much to call out. The storage drop was due to purging a day's worth of images and clearing the entire object storage cache. When I have time, I'll upgrade the VPS to add storage.