I'm actually exhausted at the hypersexualization of media.
I'm not a prude, I'm glad to live in a time where a sex scene can show up uncensored in a TV show. Adult content made for adults, sure, go nuts.
But must everything now have as much sex as possible to sell? Why must every game, every social media feed, every manga, every book be filled with excessive porn? The problem is not gooner bait existing; the problem is everything becoming gooner bait.
There's an opening for a position you're interested in, so you fill out one of those long "everything is already in your CV, but you need to retype it into our form anyway" registrations.
A week later, they contact you via email. A very enthusiastic person, signing with their own name, no automated HR shenanigans, claims to have enjoyed what they saw, and you're through to the next step. There's some corporate "we are an amazing workplace for exceptional people" fluff, but nothing terrible, so let's proceed.
You now have a week to write and send a document that has nothing to do with the position or your technical skills. You need to write a biography; you need to describe what you were like in high school (were you social?); you need to show proof of your high school grades, then college; you need to produce a biographical memoir of your life. But sure, it's Canonical, right? Great for your career, so you proceed.
So then a technical interview comes up. You have a time limit to fill it out, but the questions aren't actually deep enough to test your skills - they'd only screen out somebody with zero idea of what's going on - so it becomes tedious. A child with an AI chatbot could probably score well enough.
So then you move on to an IQ test, with baffling things such as reaction-time tests (if I ever need fast reaction times in my field of work, ring the bell, because a zombie apocalypse just breached our office building).
You're tired of the BS, but they email you three times in a row reminding you about the deadlines. Now somebody wants to speak with you, and guess what: they haven't checked your CV, your biography, or your test results, so get used to explaining everything again.
You'll have quite a few meetings like this, always moving up through "higher-ups" who are equally unaware of who you are, until you reach a VP. And then they put you on hold... so hope things work out, because they can genuinely leave you on hold forever, reply that the position is no longer available, or finally hire you. They have KPIs that reward having "candidates being evaluated", which means keeping you in limbo at the end of the process looks great on their dashboards. Oh, and don't trust the "oh, we loved you, you're in, let's sign next week" - the probability of not signing is still high.
After experiencing Canonical's recruitment process, which they claim to be extremely proud of, I can only imagine that if the other departments operate with a similar mindset, the entire company is a nonsensical hell.
I can't help but feel like all these monthly headlines of "insiders claim AI became sentient" or "they are TOO AFRAID of the NEWEST HIDDEN MODEL they're testing" are just a marketing campaign.
The idea is to create the illusion that they're constantly on the brink of delivering the incredible new model that finally thinks, reasons, and executes like a sentient being. And of course they're nowhere near that (and apparently LLMs will never reach that point anyway), so they need to keep generating these headlines so that CEOs read just the title during lunch and think, "man, I'm glad my company is paying for Claude, we don't want to be left out when this goes public".
Just use any LLM. Use Claude. It's ridiculous: within five minutes you can detect the patterns each of them uses to write, you can identify their flawed "logic", their limitations, the fact that their output is liquid ass. They're just bad. Look at the hyped GPT-5 release - as dumb and annoying as GPT-4, only with a few safeguards built in and a shift in tone. That's it, that's the "new model". You can show me benchmarks of the new model being 45% better than the previous one, but put it through any test that isn't a well-known, publicly available benchmark and it fails catastrophically, because it's dumb and the training set prepared it for that specific test.
So I don't give a shit how many senior employees at OpenAI, Anthropic, Google, or Twitter end up resigning - it's a marketing stunt. They collect their generous compensation for leaving, retire, and live happily, and the company gets a fresh headline about how amazing their new model is, to the point that it scares humans, which is all they need to secure a new round of funding. They're not profitable, so it doesn't matter whether the product ever exists; they just want to leverage investor FOMO forever.
Somehow, I really am not interested in how any LLM responds to ethical problems, much like I don't care about a rock's thoughts on the weather, or an ocean wave's views on sandwiches.
When you point out online how absurd Trump is, Americans quickly come back with the "nooo, he doesn't represent the true Americans, he just so happened to get elected" card.
Then a chart like this comes out. The fact that this guy has anything above 2% approval, anywhere, means you're a broken society. The fact that there's a lot of fucking green here is beyond ludicrous. The fact that the average approval is above 30% says that, indeed, he does represent you. No excuses.
For those unfamiliar with the saga of Clawdbot, er, Moltbot, no, wait, OpenClaw (it keeps changing names): it's an open-source, vibe-coded agentic AI platform.
Mozilla really baffles me with some of its decision-making. They can watch something like Proton become huge with 5% of Mozilla's brand awareness, and still they cancel even the most basic of projects, like their dedicated password manager.
I mean, it's not hard to prove it will without resorting to predicting the future.
Just look at the ROI they'd have to achieve, in a very short time, to sustain their current spending; then look at how much money they generate now; then look at any previous industry in modern history. The amount of money these companies need to make is ridiculous, and they need to make it faster than anything else ever has.
https://www.youtube.com/watch?v=eN-oYyd9fWg