previously @jrgd@lemm.ee, @jrgd@kbin.social
Lemmy.zip
1 + 2:
There's typically not much involved in burning an ISO to a flash drive, booting from it, and installing. Booting one on a Mac is a bit different, though. If you have an M-series Mac, you will mainly be restricted to anything with the experimental Asahi Linux kernel. If you have an Intel-based Mac, you should generally be good to go. When booting a Linux installer, you'll generally be able to try out the system before installing. It's a good time to check that things like backlight brightness and wireless capabilities work out of the box on your distro of choice.
Accessing the boot menu on a Mac
3:
OpenSUSE Tumbleweed and Fedora are generally good picks. I recommend going for KDE unless you have a strong preference for how GNOME works. As good as those distros are, I generally recommend staying away from distros like Linux Mint (for now), as Cinnamon's implementation of the newer display system called Wayland is not yet complete. Desktops like KDE and GNOME have functional implementations and will overall provide a solid experience.
4:
You'll see mixed opinions all over the place with this. Personally, I do sit in the GrapheneOS camp at this point. If you don't want to purchase a secondhand Google phone, I'd wait and see about the partnered device that the GrapheneOS devs are working on with a currently undisclosed manufacturer.
I'll repeat the core points the GrapheneOS devs drone on about regarding other Android (AOSP) distributions, but without the hyperbole the devs constantly put in. Yes, /e/OS does generally have security problems, some of which stem from its use of microG and from how microG has to function on the device. It is a trade-off in security for some privacy gained. If you really don't need anything from Google Play Services at all, you could always go for straight LineageOS without any Google services package installed.
5:
By all means, older laptops can definitely still be functional for lighter or alternative tasks. Even if one isn't a good workstation anymore, it could be fun to experiment with. Older phones (especially Android devices) really do have a set lifespan, though, and past it I'd recommend no longer using them as daily drivers. Once the manufacturers stop supporting them, they can be horrifically vulnerable devices as exploits are found over time. You might still get use out of one without using its networking capabilities: it likely still has functional storage, screen, cameras, etc. If you're lucky, you might be able to play around with straight Linux projects like postmarketOS.
For new stuff, Linux-centric vendors can be nice (though a lot of them seem to just rebadge Clevo laptops with a decent markup) as a guarantee of good hardware support. Most business laptops make for good Linux laptops. I personally bought a Framework 13 a few years back and that's my primary laptop. Though if you want to stay away from United States-based projects, your initial choices are probably a good fit. By the same principle, you might also lean more toward OpenSUSE than Fedora.
My set of recommendations:
RPMFusion is recommended to add to your system. It's the best way to get Steam and certain drivers (NVidia, v4l2loopback, etc.) as needed.
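For reference, RPMFusion publishes its repo-release packages at a fixed URL scheme, and its documented setup is a single command (`rpm -E %fedora` expands to your Fedora release number):

```shell
# Enable the RPMFusion free and nonfree repositories on Fedora
sudo dnf install \
  "https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm" \
  "https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm"
```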
SELinux is present, but the default policy sets are unlikely to impede your usage. The SELinux applet (seapplet) is a useful diagnostic tool for the very rare chance that you hit a permission denial somewhere that cannot otherwise be explained.
If you pull most of your software as flatpaks from Flathub already, your day-to-day experience won't be much different from Debian.
Fedora's equivalent to LTS releases would be the downstream LTS releases provided by Red Hat, Rocky Linux, AlmaLinux, and others. They don't have the same package sets as base Fedora and may need extra repositories to get some of the less essential but still 'core' software back. Ultimately there's not much reason to run them on a desktop workstation for personal use.
Upgrading is pretty seamless. It's as easy as graphical updates now or otherwise using the system upgrade module in dnf. I generally have the policy of waiting 2-4 weeks for any minor bugs that made it into a new release to settle. I have been expediting my upgrades for the past few releases in order to catch bugs before friends and family upgrade their machines and haven't found any large problems regardless.
Fedora doesn't inherently expect a system to upgrade forever without maintenance, with 5 years being a typical target for things that may break. With that said, it is good to read the release notes before upgrading to the next edition, as there can rarely be something (like the recent recommendation and changed default for a larger /boot partition) that may require maintenance on a long-term system before upgrading. That said, you do have time to hold off on upgrading the distro, as the general lifetime of each release is ~13 months, giving 1 month overlap into a release two releases ahead. For instance, Fedora 43 will still be maintained up to a month into Fedora 45's release.
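For anyone who hasn't used the dnf route mentioned above, one upgrade cycle looks roughly like this sketch (the release number 43 is just an example; substitute the version you're moving to):

```shell
# Bring the current release fully up to date first
sudo dnf upgrade --refresh
# Download the next release's packages, then reboot into the offline upgrade
sudo dnf system-upgrade download --releasever=43
sudo dnf system-upgrade reboot
```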
Video capabilities aren't currently enabled on the main instance. According to commenters in the meta issue, it is (mostly?) implemented and if you are selfhosting a Stoat server instance, you can enable it for that instance.
There does appear to be some level of curve within your bed mesh, but the skew of the bed definitely shows that the bed is not level across the X axis. If you have manual bed leveling, work on the adjustment across the X axis before taking new mesh samples. Mesh compensation will definitely do a decent job compensating for a skew even as large as four layers worth, but the dimensional accuracy of the bottom of your parts will take a good hit printing like that.
It does depend on the connection type, but the general rule is not completely, barring some connection types like DSL. Given it sounds like you have fiber, DOCSIS, or similar, you likely fall under the general rule. That said, you can absolutely tune and test above the typical 10-15% safety margin many guides start with without actually incurring any noticeable bufferbloat. The 10-15% is usually a good value for ISPs that fluctuate heavily in the bandwidth available to the customer, but for more consistent connections (or those overprovisioned enough that the bandwidth fluctuations sit outside the range the customer is actually paying for), you can absolutely get much closer to your rated connection speed, if not meet or even pass it.
The general process is to tune one value at a time (starting with the bandwidth allocations for your pipes), apply the change while noting the previous value, and perform a bufferbloat test with Waveform's or others' testing tools. Optionally (this will drastically slow down the process, but can be worth it), you can hammer the network with real load for a good few hours while testing some real-world applications that are sensitive to bufferbloat. Doing this between tweaked values will help expose how stable or unstable your ISP's connection truly is over time.
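On a Linux-based router (OpenWrt or similar), one iteration of that loop might look like the following sketch; the interface name `eth0` and the bandwidth figure are assumptions to substitute with your own values:

```shell
# Cap the WAN egress at ~90% of the rated upstream with CAKE,
# then re-run a bufferbloat test before adjusting further.
tc qdisc replace dev eth0 root cake bandwidth 18mbit
# Inspect queue statistics (drops, backlog) under load
tc -s qdisc show dev eth0
```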
Yeah, not having CAKE SQM is the one thing that will probably kill OPNsense as a choice for some people. That's not to say you cannot get excellent results with fq_codel, because you absolutely can (I actively use both OpenWRT and OPNsense in different network applications personally). It is definitely more work to get good results, though. OPNsense's WireGuard support has been excellent for a number of years now, and it's exclusively what I use for tunneling in a VPC I rent.
If you're particularly constricted on host hardware and need a lightweight router to manage multiple other VMs on said host, I could definitely see the benefits of running a minimal OpenWRT over OPNSense in that case.
I mean, the mini PCs don't come with a managed switch, and often lack the good wireless connectivity that most home routers come equipped with. So with Wi-Fi APs and a decent switch added, it's definitely more than €100 in total.
Also unrelated, but if you're running an x86 system with gigabytes of RAM, why not run OPNsense at that point?
Looking up the router, it was allegedly produced in 2024, according to the OpenWRT wiki. Barring any outliers, OpenWRT generally only sunsets hardware when a new version has higher hardware requirements than a device provides. The supported devices page lists the hard requirements as well as recommendations. Currently 8 MiB flash storage is the minimum, with 16+ MiB recommended (for additional functions, user addons, etc.). 64 MiB is the minimum RAM target, with 128+ MiB recommended. According to the router's wiki page, your chosen router exceeds both recommended requirements. Overall, the router should be suitable for a good while, barring any severe hardware- or bootloader-level exploitable vulnerabilities being discovered in the device. There is no explicit date when your router will no longer be supported, but you can check the history of the supported devices page to get the general trend of when OpenWRT bumps up the minimum requirements. For instance, it was just 4/8+ MiB flash storage and 32/64+ MiB RAM in early 2017.
Depending on what you want to do with the router, getting something with more RAM and a stronger CPU could be beneficial for various tasks (e.g. adblock-fast, cake sqm, etc.). Definitely do research on what you want your router to do though before choosing to go with higher specs or not.
With LosslessCut, I've had good success with doing keyframe cuts with h.264 footage in MKV containers. Frame cuts end up in broken outputs pretty much every time. There's also Avidemux, which might be worth a try. More than likely though, if you want frame-precision in your cuts, you'll have to re-encode, at which point you could use something as minimal as Handbrake or a full NLE editor like Kdenlive.
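Under the hood, a keyframe-aligned lossless cut is just a stream copy; doing it with ffmpeg directly looks something like this sketch (the timestamps and filenames are placeholders). Because `-c copy` skips re-encoding, the cut snaps to the nearest keyframe rather than the exact frame requested:

```shell
# Copy the streams between the two timestamps without re-encoding
ffmpeg -ss 00:01:00 -to 00:02:30 -i input.mkv -c copy output.mkv
```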
Permanently Deleted
In reference to the article we're discussing, I am not entirely talking about vulnerabilities in the implementations, but specifically about the lack of standard security features allegedly not present by design in D-Bus. Namely things like namespace reservation, access controls, and fully-defined transport encryption implementations.
In an environment where desktop security is starting to be taken seriously (see Wayland, freedesktop protocols), D-Bus is lacking by comparison. Pulling from the article, any userland application that implements its own access to the user D-Bus can just dump the contents of your keychain (browser-stored passwords, Signal encryption keys, user contacts, manually stored secrets, etc.). I'd argue that any untrustworthy application (deliberately run or not) shouldn't be able to do something like that, or otherwise tamper with any application it may feel like.
Flatpak does seem to have ways to limit what applications can access through D-Bus, though I am not entirely sure of the extent of what limits are enforceable. I'll have to read more into Flatpak's D-Bus filtering to know exactly what it can and cannot do.
Additionally, D-Bus policies are indeed a form of access control. Unfortunately, there are limitations. The first is that they are statically defined config files. If an application desires D-Bus access restriction, the only way for that to happen is for a D-Bus policy file to be installed via the package manager alongside the software. Applications are not allowed control over access to their endpoints through D-Bus itself. Applications can absolutely build an authentication or access control layer on top of their D-Bus endpoints, though without a defined standard this quickly gets into 'vendor-specific behavior is encouraged' territory. (To note, KDE Wallet does this exact thing with an optional access control panel with snitching ability when applications access the user keyring.)
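For illustration, a static policy of the kind described is an XML file shipped with the package; a minimal hypothetical example (the bus name `org.example.Secrets` and the user are made up) might look like:

```xml
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <!-- Deny everyone by default... -->
  <policy context="default">
    <deny own="org.example.Secrets"/>
    <deny send_destination="org.example.Secrets"/>
  </policy>
  <!-- ...then allow a specific user to own and call the service -->
  <policy user="alice">
    <allow own="org.example.Secrets"/>
    <allow send_destination="org.example.Secrets"/>
  </policy>
</busconfig>
```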
As for the default user session policy (where applications like the user keyring are accessed), things aren't looking that great. At least on OpenSUSE Leap 16, the session policy is left completely open with zero restrictions by default. This does mean that instead of security being standard, every application that wants to use D-Bus is largely left to fend for itself, which I have no doubt means that the level of security afforded can vary wildly between application sets (GNOME, KDE, Hyprland, COSMIC, Cinnamon, etc.). I'd argue this shouldn't be the case, and application developers shouldn't have to work around D-Bus in the goal of securely interfacing with it.
Permanently Deleted
To be fair, D-Bus is a protocol, and proper documentation and standards are half of an implementation. Without well-defined standards, a protocol is essentially useless and/or lawless. While not every case of non-compliance is the fault of D-Bus, the generally lax nature of how endpoints are intended to be defined, as well as the incompleteness of the actual standards applications should adhere to, is a significant factor in why many applications are the way they are. As for the severe security flaws in D-Bus, fixes could be written as extensions to the protocol, becoming a new standard. Though if the problems are as deeply rooted as they seem, it's not entirely out of the question to create another standard that isn't D-Bus.
Revolt rebranded as Stoat and does have voice chat with Discord-style hotseat channels available in beta. I still wouldn't recommend it yet due to client bugs, but it's getting there.
Referencing my comment in the other thread, Facepunch employees keep being disingenuous about this claim. Even if it is true, the claim does not have enough evidence behind it, given how unplayable Facepunch made the Linux build shortly before they axed it for "a rampant cheating problem". The linked comment goes into more detail, but it is insane how much the developers keep doubling down on their disinformation.
To note, even if the claim of 'more cheaters than Linux players' at the end of the lifecycle is true, it is a blatant lie by omission. I played Rust from 2016 until shortly after the game left Early Access. I stopped playing because Facepunch had completely ruined the Linux builds of the game by removing the long-standing OpenGL output and forcing the (at the time) new-to-Unity and completely untested Vulkan output as the only option on Linux. For anyone unfortunate enough to experience playing Rust at the end of its Linux run, the game would regularly have major graphical glitches and various rendering errors, including graphical artifacting that could be seizure inducing. If you are prone to epilepsy or otherwise sensitive to bright or flashing lights, please do not click this link. To note, the attached video is a mild case of what commonly happened when playing. That is, when the crashes, and much hardware no longer being able to launch the game properly, didn't get in the way first.
Given all of that, I genuinely wouldn't be surprised if the only "people" running the Linux client were actually cheat bots because there is no way many people were actually still playing the absolute rugpull of a game toward the end of its life.
Actually, you don't have to do it via the terminal! For OpenSUSE, you can use YaST to enable Packman, and RPMFusion provides instructions to download the primary repo packages in a browser. Additionally, there is a more generic and slightly more technical way of providing repo URLs and managing additional repos from within PackageKit frontends like Discover. There is currently a point against RPMFusion in that the AppStream data isn't automatically configured upon update after adding the repos, due to a bug in dnf5. Supposedly this is fixed now, but I haven't verified the functionality again in a fresh setup. I'll update this post later if it is indeed fixed.
Edit:
Tested Fedora 43 and Tumbleweed in VMs for quirk updates.
Tumbleweed's third-party repos (NVidia, Packman at least) still don't have Appstream data, meaning packages have to be installed through YaST, but can be updated through PackageKit frontends.
The particular DNF5 bug is fixed and functional, but PackageKit frontends don't actually pull the appropriate packages in (perform group updates). This does mean that unfortunately at least one terminal command (dnf update @core) is needed before jumping back to the GUI and going from there.
So, mostly terminal-free on Fedora and still terminal-free on OpenSUSE, just with little freedom of installer choice.
If you happen to remember, what DE's/WM's did you use back when testing with your NVidia cards (particularly the 2080 and 3070)? I've been trying to gauge a lot of differences in DE usability and driver versions. In my recent testing, one user on Fedora KDE 42 with the NVidia-open drivers and a 4070 has had a nearly flawless experience that would be pretty much on par with AMD or Intel. Meanwhile, a 1080ti user genuinely had major problems with both KDE and GNOME on the same distro with the standard proprietary drivers.
As for how much the average user needs to use the terminal on modern distros, especially with some of the graphical tools available, it genuinely is very little, if any at all. I think the bigger problem is how many written guides go for the lowest-common-denominator approach of straight terminal commands for every tweak or fix somebody might look up. It has gotten to the point where I might start attempting to write a series of guides and/or short-form videos for the more common 'how-to' topics and frequent problems that many users encounter, for at least GNOME and KDE.
I definitely forgot about it when writing, but a criterion for me when choosing my current desktop distro and my lineage of server distros was having some sort of MAC component (SELinux or AppArmor) with configured policies available in the distro repos. While it could be argued that a MAC component isn't that necessary on the desktop, I do believe that with the rising marketshare of the Linux desktop, having that second stage of exploit protection will help mitigate some of the more severe malware attacks.
I do wonder about PikaOS and CachyOS as recommendations, specifically for how packaging and rollback availability are done on them. I'll be taking a look at both later in VMs to see how they function for an end-user. CachyOS seems to rebuild the Arch packages for newer x86 architecture levels and other optimizations, among other tweaks such as the modified kernel. Then there is PikaOS, which is based on Debian Sid but apparently carries patches on top. I am not currently sure how extensive the patching is and whether the project attempts to catch breakages and regressions that make it into Sid.
There is the other point I have about more 'niche' distros like PikaOS, CachyOS, and Nobara (and Bazzite to a lesser extent): I do wonder about the longevity of many of them, if not from developer burnout, financials, or the other standard culprits, then from much of what makes these distros currently unique being absorbed by more mainstream distros. The work that projects like CachyOS, Nobara, and PikaOS do is certainly important, but I feel that things like the higher x86 build targets, kernel patches, etc. will eventually make it into the upstream projects as well. PikaOS will probably have a longer lifespan than, say, CachyOS, because Debian will likely be among the last distros to drop support for older x86_64 processors, but I think the point stands. Will the current 'testbed' distros still remain in, say, 5-10 years?
My big thing with recommending Arch and 'direct' derivatives (those that don't repackage the Arch repositories with their own package versions) is that Arch explicitly recommends users always read the latest release notes on the Arch Wiki homepage before any upgrade, due to breakages sometimes being let in. This means every user either needs to be their own system maintainer and apply their own judgement to each update, or needs snapshots to restore to and the hope that breaking changes will eventually sort themselves out, if they don't want to reconfigure parts of their OS themselves. If a direct derivative implements automatic btrfs system snapshots that can be selected at boot, like OpenSUSE Tumbleweed does, I think such a derivative could be recommended to more experienced computer users in lieu of other distros like Fedora or Tumbleweed.
I definitely think COSMIC will be quite good, based on the progress so far. We'll have to see how many of the relevant Wayland protocols are implemented in the stable releases, but it could be a good recommendation given that System76 seems to care about not breaking desktop applications. I just don't recommend it now because it is still in beta.
Distro Recommendation Discussion (Not a 'What Distro Should I Use?' Post)
In addition to the other reply on the fundamentals of why not in general, maybe we don't recommend daily driving one of DHH's pet projects.
If anyone is out of the loop on who DHH is, tons of people have covered the topic, but I think Niccolò Venerandi's coverage is the most comprehensive and digestible, if anyone cares to read or watch it.