705,809 births and about 1.61 million deaths, leading to a net population decrease of 899,845. Wikipedia says the total population is about 123.4 million, so they lost about 0.73%.
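The cited percentage checks out; a quick sketch of the arithmetic, using the rounded population figure from the comment:

```python
# Check the net-decrease percentage using the figures cited above.
net_decrease = 899_845
population = 123_400_000  # approximate total population from the comment

pct = net_decrease / population * 100
print(f"{pct:.2f}%")  # about 0.73%
```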
This looks like someone took regular expressions, expanded them to a full programming language, and used Unicode to deal with the explosion of required symbols. I have a hard enough time reading my own regular expressions. I can't imagine writing full programs like this.
This and putting lightning in rocks to do math.
The result of all this may be catastrophic. Should a worst-case scenario ever occur — a cyberattack, a natural disaster, an internet outage — there may be no human workers left with the skills that once kept food on the shelves.
Very nerdy of me, but this reminds me of the Stargate SG-1 episode "The Sentinel." The team travels to a planet whose civilization relies on fully automated technology. The people don't have to operate or maintain it (normally), so their society has completely forgotten how. In the episode, one set of antagonists sabotages their defense system, and another set sees the opportunity and invades. The protagonists then have to figure out the defense system and fix it.
We don't live in a TV series. There aren't benevolent outsiders who will swoop down and save our systems in the nick of time when they break down. We're headed in a bad direction.
Pittman’s decision on Tuesday came after a series of pretrial rulings penalizing lawyers for the defense. In December, he ordered three defense attorneys to each pay a $500 fine for filing aggressive motions for discovery. He also nearly blocked George Lobb, an attorney, from representing one of the defendants, saying he had not met the residency requirements to practice in the district. Lobb eventually withdrew from the federal case and Clayton replaced him.
After declaring the mistrial, Pittman gave a short speech decrying partisan division in the country, saying he was “absolutely disgusted” by it and that “we have to find a way to turn down the anger”.
I'm noticing a bias, and it ain't from the jury.
Change the problem from 3 doors to a million. Kids pick a door, and the host opens 999,998 doors, leaving theirs and one other door closed. One of the closed doors is the winner. Do they want to switch now?
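The million-door version can be simulated directly. A minimal sketch (the host's behavior is modeled implicitly: since he always leaves exactly one other door closed, switching wins precisely when your first pick was wrong):

```python
import random

def monty_hall_trial(n_doors: int, switch: bool) -> bool:
    """One round: pick a door; the host opens every losing door except one."""
    winner = random.randrange(n_doors)
    pick = random.randrange(n_doors)
    if switch:
        # The host leaves exactly one other door closed. If the first pick
        # was wrong (probability (n-1)/n), that remaining door is the winner.
        return pick != winner
    return pick == winner

trials = 100_000
stay = sum(monty_hall_trial(1_000_000, switch=False) for _ in range(trials)) / trials
swap = sum(monty_hall_trial(1_000_000, switch=True) for _ in range(trials)) / trials
print(f"stay wins: {stay:.4%}   switch wins: {swap:.4%}")
```

Staying wins about 1 in a million; switching wins essentially every time, which makes the 3-door intuition click.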
This will probably create a Constitutional crisis
Pretty certain a judge doing something the constitution allows won't create a constitutional crisis. We're already in the middle of one. People are either propagandized enough they don't believe it, or aren't in a position to do much about it.
I didn't want to be a buzz kill, but if that's supposed to be the top of the vase around its neck, that would mean it climbed into the vase and got its head stuck trying to get out?
First, you don't need to respond to arguments made in bad faith. There's no net positive outcome possible for the person who comes to the conversation in good faith.
Second, not having their own solution does not invalidate anything critical they say about the one under discussion. There doesn't have to be a "better" solution to justify not implementing something that has no positive impact on the targeted problem and severe negative impacts elsewhere.
How does the age inference model work?
We leverage an advanced machine learning model developed at Discord to predict whether a user falls into a particular age group based on patterns of user behavior and several other signals associated with their account on Discord. We only use these signals to assign users to an age group when our confidence level is high; when it isn't, users go through our standard age assurance flow to confirm their age. We do not use your message content in the age estimation model.
Completely opaque explanation of how they use AI to guess your age with a claim that message content is not used. With no independent way to actually verify that claim, I don't trust them at all.
That's their "safety" category in his rankings. They talk about moderation tools and risks like bad actors posting illicit content quite a bit, actually.
I'm not super familiar with BASIC. Can you explain how the definition of One works? If variables have random values when declared but not set due to memory garbage, I would expect ZeroBit to be "true" half the time and "false" the other half. When it's false, I can reason through it, but when it's true I don't see it. I imagine it has something to do with the details of ABS()?
Determinism means performing the same way every time it is run with the same inputs. It doesn't mean it follows your mental model of how it should run. The article you cite talks about aggressive compiler optimization causing unexpected crashes. Unexpected, not unpredictable. The author found the root cause and addressed it. Nothing there was nondeterministic. It was just not what the developer expected, or personally thought was an appropriate implementation, but it performed the same way every time. I think you keyed on the word "randomly" and missed "seemed to," which completely changes the meaning of the sentence.
LLMs often behave truly nondeterministically. You can create a fresh session, feed it exactly the same prompt, and get a different output. That unpredictability is poison for building a quality, maintainable product with dynamic LLM code generation in the pipeline.
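One source of that behavior is easy to show with a toy model (this is a sketch of temperature sampling in general, not any particular LLM's implementation): the model produces the same probabilities for the same prompt, but the final token choice is a random draw.

```python
import math
import random

def sample_next_token(logits: dict[str, float], temperature: float = 1.0) -> str:
    """Toy version of next-token selection: softmax over logits, then a
    random draw. With temperature > 0 the draw is stochastic, so identical
    inputs (the same logits) can yield different outputs run to run."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())  # subtract max for numerical stability
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    total = sum(weights.values())
    r = random.random() * total
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point rounding

# Hypothetical logits for illustration; the "prompt" never changes.
logits = {"foo": 1.2, "bar": 1.0, "baz": 0.3}
outputs = {sample_next_token(logits) for _ in range(1000)}
print(outputs)  # more than one distinct token: same input, different outputs
```

Setting temperature to 0 (greedy decoding) removes this particular source of randomness, though real deployments have others.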
It's a lot harder to perpetuate historical knowledge when you don't get support from the educational system. The government sets educational standards and subject matter, so it's not surprising they de-emphasize the record of their own actions against the public they are teaching.
Universities are more independent (but definitely not completely, and they come with their own set of problems), so students there tend to be more exposed to topics like this. But then you get political movements villainizing universities.
Bluesky is one, single platform. It stores the complete data for any given user post in its databases and provides that through its data stream and APIs. This means every different client someone writes has access to all the same data as every other client, because they're all going through Bluesky. This also means if Bluesky doesn't support some feature, no clients can either.
The architecture of the Fediverse is different. Forgetting ActivityPub for a moment, Mastodon is one platform and Pixelfed is another. This means each one has its own data model, internal storage architecture, and streams/APIs. Because they were built for different purposes, they support different features. I don't use either, but I expect there are image-related features in Pixelfed that are just not possible in a Mastodon client, not because someone hasn't written a client capable of it, but because Mastodon doesn't have the internal data storage or API to support it in any client.
Where ActivityPub comes in is a unified stream language. When a post pops up on a platform, that platform has the complete data and translates as much as it can into an ActivityPub message to send to other platforms. Some platforms haven't figured out yet how to pack all of their relevant data into an ActivityPub message, so some data may be lost in the sending. And different platforms may not support storing all the data in a given ActivityPub message they receive, especially if it's from a feature they don't provide, so some data may be lost in the receiving.
Ultimately this means even with ActivityPub linking things together, the data flow isn't perfect/complete. So different data is available to any even theoretical Mastodon client compared to a Pixelfed client because the backend platforms are different. Their APIs expose different data in different, often incompatible ways, so even if someone wrote an image-focused client for Mastodon, it wouldn't be possible to do everything an image-focused client for Pixelfed could do, because the backend platforms focus on different things.
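The lossy translation described above can be sketched in a few lines. The field names here are hypothetical, not the real Mastodon or Pixelfed schemas; the point is only that a platform can express just the fields the shared vocabulary covers, so platform-specific data drops out:

```python
# Fields our hypothetical ActivityPub-style Note can carry.
# (Illustrative subset; the real vocabulary is much larger.)
ACTIVITYPUB_FIELDS = {"content", "attachment", "published", "attributedTo"}

def to_activitypub(post: dict) -> dict:
    """Translate a platform-internal post into an ActivityPub-style Note,
    dropping any platform-specific fields that have no mapping."""
    note = {"type": "Note"}
    note.update({k: v for k, v in post.items() if k in ACTIVITYPUB_FIELDS})
    return note

# Hypothetical internal post from an image-focused platform.
pixelfed_post = {
    "content": "sunset",
    "attachment": ["photo.jpg"],
    "published": "2025-01-01T00:00:00Z",
    "attributedTo": "https://pixelfed.example/users/alice",
    "filter_preset": "vivid",   # made-up image-editing metadata
    "photo_album_id": 42,       # made-up album feature
}

note = to_activitypub(pixelfed_post)
# The image-specific fields are lost in the sending; a receiving platform
# never sees them, no matter how capable its clients are.
```

The receiving side has the mirror-image problem: even fields that do arrive get dropped if its own data model has nowhere to store them.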
I think she's saying she could have allocated the GPUs to Azure to game the metrics, but Microsoft chose to allocate them to internal projects, which is a form of self-investment. She's not saying they made the wrong decision, she's saying their decision in this longer-term investment makes the short-term metrics worse.
The walled garden (microservices in an isolated network) is the first line of defense. If a malicious actor finds a way into that network, the second line of defense is authenticating the service-to-service traffic, so the microservices reject direct requests from clients they aren't expecting.
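A minimal sketch of that second line of defense, using an HMAC-signed request body. This is illustrative only: in practice you'd typically use mTLS or signed tokens, and the shared secret would come from a secrets manager, not a literal in the code.

```python
import hmac
import hashlib

# Illustrative only: real deployments fetch this from a vault, not source code.
SHARED_SECRET = b"example-secret-do-not-hardcode"

def sign_request(body: bytes) -> str:
    """Caller service attaches this signature as a header on its request."""
    return hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()

def verify_request(body: bytes, signature: str) -> bool:
    """Receiving service rejects any request whose signature doesn't match."""
    expected = sign_request(body)
    return hmac.compare_digest(expected, signature)

body = b'{"order_id": 7}'
sig = sign_request(body)
assert verify_request(body, sig)                    # expected service traffic
assert not verify_request(b'{"order_id": 8}', sig)  # tampered or unsigned caller
```

An intruder who reaches the network but lacks the secret can't forge a valid signature, so their direct requests are rejected.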
Before blaming me for not reading the article, maybe read my whole comment? I listed 2 effects. Yes, mean versus median is one. The other is a cognitive bias related to how humans estimate percentages.
Medians showing the same effect but reduced is exactly what I would expect when you account for one of the two phenomena but not the other.
This is mostly an example of a kind of survey bias I'm having trouble finding the name for, plus a counterintuitive effect of averaging.
For the bias, when people are asked to estimate a percentage, they tend to estimate by large fractions of the whole, like by quarters, fifths, or tenths. This means you'll see survey estimates closer to 50% than the real value, with the effect more pronounced for real values closer to 100% or 0%.
For the averaging phenomenon, when looking at the averaged responses across all questions of a survey, you can quite easily get a collection that wouldn't make sense as a set of responses for "the average" (that is, the typical) person. You can have 3 different responders who each think California, Texas, or Florida has more people than they actually do, and then when you average those responses it looks like all responders think all three of those states have more people than they do, even when no one response was biased that way.
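The three-responder example works out like this (population figures in millions, rounded and purely illustrative; each responder overestimates exactly one state and is accurate on the other two):

```python
from statistics import mean

# Approximate true populations, in millions (illustrative, rounded).
true_pop = {"California": 39, "Texas": 30, "Florida": 22}

responses = [
    {"California": 55, "Texas": 30, "Florida": 22},  # overestimates CA only
    {"California": 39, "Texas": 45, "Florida": 22},  # overestimates TX only
    {"California": 39, "Texas": 30, "Florida": 35},  # overestimates FL only
]

for state, truth in true_pop.items():
    avg = mean(r[state] for r in responses)
    print(f"{state}: true {truth}M, mean estimate {avg:.1f}M")
# Every state's mean estimate exceeds the truth, yet no single responder
# overestimated more than one state.
```

The averaged column overstates all three populations, which is exactly the composite "average American" that no actual responder resembles.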
With these two together, this survey makes the average (statistical mean) American look much less informed than the typical (statistical median) American.
The article discusses that IP-based limiting doesn't work as well as it used to. Because of NATs, proxies, etc., IP addresses are a lot more ephemeral and flexible, so they've seen the same big perpetrators adapt and change IPs when rate-limited. I expect we will start to see support for anonymous downloads go away in the next several months in many major OSS registries.