

There has never been a low/no ecological impact mining operation. Anyone who tells you this time it will be different is selling you a bridge.
Agreed, the left joystick + right trackball looks like an absolutely incredible setup. The keyboard layout gives me some pause though, those two buttons on the outer bottom edge look uncomfortable/difficult.
Honestly though, I'm in the market for another ergo split and I think I'm going to give this a go.
There are definitely some physical manifestations of your strongest emotions. Strong feelings of fear or anger trigger muscular reactions in your belly, strong feelings of anxiety or tension in your neck, love and contentment in your chest, etc.
Perhaps they were trying to find those physical connections to gauge the emotion or intensity?
No, Richard, it's 'Linux', not 'GNU/Linux'. The most important contributions that the FSF made to Linux were the creation of the GPL and the GCC compiler. Those are fine and inspired products. GCC is a monumental achievement and has earned you, RMS, and the Free Software Foundation countless kudos and much appreciation.
Following are some reasons for you to mull over, including some already answered in your FAQ.
One guy, Linus Torvalds, used GCC to make his operating system (yes, Linux is an OS -- more on this later). He named it 'Linux' with a little help from his friends. Why doesn't he call it GNU/Linux? Because he wrote it, with more help from his friends, not you. You named your stuff, I named my stuff -- including the software I wrote using GCC -- and Linus named his stuff. The proper name is Linux because Linus Torvalds says so. Linus has spoken. Accept his authority. To do otherwise is to become a nag. You don't want to be known as a nag, do you?
(An operating system) != (a distribution). Linux is an operating system. By my definition, an operating system is that software which provides and limits access to hardware resources on a computer. That definition applies wherever you see Linux in use. However, Linux is usually distributed with a collection of utilities and applications to make it easily configurable as a desktop system, a server, a development box, a graphics workstation, or whatever the user needs. In such a configuration, we have a Linux (based) distribution. Therein lies your strongest argument for the unwieldy title 'GNU/Linux' (when said bundled software is largely from the FSF). Go bug the distribution makers on that one. Take your beef to Red Hat, Mandrake, and Slackware. At least there you have an argument. Linux alone is an operating system that can be used in various applications without any GNU software whatsoever. Embedded applications come to mind as an obvious example.
Next, even if we limit the GNU/Linux title to the GNU-based Linux distributions, we run into another obvious problem. XFree86 may well be more important to a particular Linux installation than the sum of all the GNU contributions. More properly, shouldn't the distribution be called XFree86/Linux? Or, at a minimum, XFree86/GNU/Linux? Of course, it would be rather arbitrary to draw the line there when many other fine contributions go unlisted. Yes, I know you've heard this one before. Get used to it. You'll keep hearing it until you can cleanly counter it.
You seem to like the lines-of-code metric. There are many lines of GNU code in a typical Linux distribution. You seem to suggest that (more LOC) == (more important). However, I submit to you that raw LOC numbers do not directly correlate with importance. I would suggest that clock cycles spent on code is a better metric. For example, if my system spends 90% of its time executing XFree86 code, XFree86 is probably the single most important collection of code on my system. Even if I loaded ten times as many lines of useless bloatware on my system and never executed that bloatware, it certainly isn't more important code than XFree86. Obviously, this metric isn't perfect either, but LOC really, really sucks. Please refrain from using it ever again in supporting any argument.
Last, I'd like to point out that we Linux and GNU users shouldn't be fighting among ourselves over naming other people's software. But what the heck, I'm in a bad mood now. I think I'm feeling sufficiently obnoxious to make the point that GCC is so very famous and, yes, so very useful only because Linux was developed. In a show of proper respect and gratitude, shouldn't you and everyone refer to GCC as 'the Linux compiler'? Or at least, 'Linux GCC'? Seriously, where would your masterpiece be without Linux? Languishing with the HURD?
If there is a moral buried in this rant, maybe it is this:
Be grateful for your abilities and your incredible success and your considerable fame. Continue to use that success and fame for good, not evil. Also, be especially grateful for Linux's huge contribution to that success. You, RMS, the Free Software Foundation, and GNU software have reached your current high profiles largely on the back of Linux. You have changed the world. Now, go forth and don't be a nag.
Thanks for listening.
Have the pawns revolt and institute a constitutional monarchy.
Bethesda was notorious back in the day for using uncompressed textures. Not lossless textures, just fully uncompressed bitmaps. One of the first mods after every game release just compressed and dynamically decompressed these to get massive improvements in load times and memory management.
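Not Bethesda's actual asset pipeline, just a minimal sketch of the general trade those mods made: store texture data compressed on disk and decompress it at load time, swapping a little CPU work for much smaller files and less disk I/O. The texture dimensions and the use of zlib here are illustrative assumptions.

```python
import zlib

# Hypothetical raw texture: 2048x2048 RGBA at 8 bits per channel (~16 MiB uncompressed).
width, height, channels = 2048, 2048, 4
raw_texture = bytes(width * height * channels)  # stand-in for real pixel data

# Done once at packaging time: lossless compression of the raw bitmap.
packed = zlib.compress(raw_texture, 6)

# Done at load time: decompress into memory, typically cheaper than the extra disk reads.
unpacked = zlib.decompress(packed)
assert unpacked == raw_texture  # lossless round trip

# Note: all-zero bytes compress absurdly well; real pixel data shrinks far less,
# but the uncompressed-vs-compressed gap is the whole point.
print(f"uncompressed: {len(raw_texture) / 2**20:.1f} MiB")
print(f"compressed:   {len(packed) / 2**20:.3f} MiB")
```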
All respect to JetBrains, I've loved several of their IDEs... This was a dumb idea from the start. Way too niche and specialized. Honestly, the allure of a purpose-built, language-specific IDE is losing its luster as well, with modern architectures often blending several languages, configuration frameworks, IaC...
Born in the nineteen hundreds, as they say.
Yeah, but Zelenskyy wasn't wearing a suit or ending every sentence with "thank you", duh. Oh right and he isn't a malignant fascist.
This is canon in the Doc Ock as Spider-Man arc.
2 GHz doesn't measure its computing power though, only the cycle speed. Two very different things.
An objective measure is a simple benchmark:
Here's a quad-core 1.5 GHz RISC-V SoC (noted as VisionFive 2) vs a quad-core 1.8 GHz ARM chip (noted as Raspberry Pi 400).
It's not even remotely close to usable for all but the most basic of tasks: https://www.phoronix.com/review/visionfive2-riscv-benchmarks/6
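To make the clock-speed point concrete, here's a toy timing sketch (nothing to do with the Phoronix methodology, just an illustration): run a fixed amount of work and report throughput. Two chips at similar GHz can land far apart because they complete different amounts of work per cycle.

```python
import time

N = 5_000_000

def fixed_workload(n: int) -> int:
    # A fixed amount of work: the same computation regardless of the machine.
    total = 0
    for i in range(n):
        total += i * i
    return total

start = time.perf_counter()
fixed_workload(N)
elapsed = time.perf_counter() - start

# The interesting number is work per second, not the clock printed on the box:
# a 1.5 GHz core with a wide, well-fed pipeline can beat a 1.8 GHz core that stalls.
print(f"{elapsed:.3f} s -> {N / elapsed / 1e6:.2f} M iterations/s")
```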
Well that's true if you have a live animal producing your meat. Not sure that applies if the meat is lab grown though?
100% they absolutely were.
Give geneticists 20 years, we'll have lab grown T-Rex in the grocery store
I have been that cinderblock for my dad many a time.
If the women don't find you handsome, at least they can find you handy.
Depends on your goals. For raw tokens per second, yeah you want an Nvidia card with enough(tm) memory for your target model(s).
But if you don't care so much about speed beyond a certain point, or you're okay sacrificing some speed for economy, the AMD RX 7900 XT/XTX or RX 9070 both work pretty well for small to mid-sized local models.
Otherwise you can look at the SoC-type solutions like AMD Strix Halo or Nvidia DGX for more model size at the cost of speed, but always look for reputable benchmarks showing 'enough' speed for your use case.
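For the "enough memory" part, a rough back-of-the-envelope I find useful (ballpark assumptions, not vendor specs): weight memory is roughly parameter count times bytes per parameter at your chosen quantization, plus headroom for the KV cache and runtime overhead.

```python
def estimate_vram_gb(params_billion: float, bits_per_weight: float, overhead_gb: float = 2.0) -> float:
    """Ballpark memory to hold a model's weights plus some runtime headroom (assumed figures)."""
    weight_gb = params_billion * bits_per_weight / 8  # 1e9 params * (bits/8) bytes ~= GB
    return weight_gb + overhead_gb

# Illustrative sizes at an assumed 4-bit quantization; real builds vary.
for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    print(f"{name} @ 4-bit: ~{estimate_vram_gb(params, 4):.0f} GB")
```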
So that means the prices that just got hiked will come back down, right? ...Right?
Yeah... I mean, I did hedge by saying "depends on your CPU and your risk profile", but I understand your point and will edit my comment to caution readers before playing with foot finding firearms.
From my understanding it's a mixed bag. Some of those vulnerabilities were little more than theoretical exploits from within high levels of trust, like this one. Important if you're running PaaS/IaaS workloads like AWS, GCP, etc., where you need to keep unknown workloads safe, and your hypervisor safe from unknown workloads.
Others were super scary direct access to in-memory processes type vulnerabilities. On Linux you can disable certain mitigations while not disabling others, so in theory you could find your way to better performance at a near zero threat increase, but yes, better safe than sorry.
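If you want to see what your kernel is actually doing before changing anything, Linux reports per-vulnerability status under /sys/devices/system/cpu/vulnerabilities/. A read-only sketch (it only prints, changes nothing; actually toggling mitigations is done through kernel boot parameters, which you should check against your distro's docs):

```python
from pathlib import Path

# Each file here (spectre_v1, spectre_v2, meltdown, mds, ...) says whether the CPU is
# affected and which mitigation, if any, the running kernel has applied.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name:20s} {entry.read_text().strip()}")
```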
I apologize for being glib.
Agreed, shouldn't affect performance. But also depends on how they see best to patch the vulnerability. The microcode patch mechanism is the currently understood vector, but might not be the only way to exploit the actual underlying vulnerability.
I remember the early days of Spectre when the mitigation was "disable branch prediction", then later they patched a more targeted, performant solution in.
no performance change
You must be new here.
Joking. In reality it depends.
The first iteration of this comment had a cheeky observation about the performance impact of these CPU mitigations on Linux, some of which pose nearly no real-world threat to people not running cloud providers.
And while that's true to a degree, tests that disable some or all of the modern mitigation set show that most have become highly optimized, and the CPUs themselves have iterated over time to make the mitigations cheaper as well.
And many of these CPU vulnerabilities actually saw in-the-wild use and can still do horrible things with very little surface exposure from your system. Apologies to the people who read the first version of this comment and took the time to rightly push back.
Foxes at Highgate Cemetery in London 🦊
cross-posted from: https://discuss.online/post/16074774