theluddite

I write about technology at theluddite.org

Posts
66
Comments
400
Joined
2 yr. ago
  • Honestly I should just get that slide tattooed to my forehead next to a QR code to Weizenbaum's book. It'd save me a lot of talking!

  • I agree with you so strongly that I went ahead and updated my comment. The problem is general and out of control. Orwell said it best: "Journalism is printing something that someone does not want printed. Everything else is public relations."

  • These articles frustrate the shit out of me. They accept both the company's own framing and its selectively-released data at face value. If you get to pick your own framing and selectively release the data that suits you, you can justify anything.

  • I am once again begging journalists to be more critical of tech companies.

    But as this happens, it’s crucial to keep the denominator in mind. Since 2020, Waymo has reported roughly 60 crashes serious enough to trigger an airbag or cause an injury. But those crashes occurred over more than 50 million miles of driverless operations. If you randomly selected 50 million miles of human driving—that’s roughly 70 lifetimes behind the wheel—you would likely see far more serious crashes than Waymo has experienced to date.

    [...] Waymo knows exactly how many times its vehicles have crashed. What’s tricky is figuring out the appropriate human baseline, since human drivers don’t necessarily report every crash. Waymo has tried to address this by estimating human crash rates in its two biggest markets—Phoenix and San Francisco. Waymo’s analysis focused on the 44 million miles Waymo had driven in these cities through December, ignoring its smaller operations in Los Angeles and Austin.

    This is the wrong comparison. These are taxis, which means they're driving taxi miles. They should be compared to taxis, not normal people who drive almost exclusively during their commutes (which is probably the most dangerous time to drive since it's precisely when they're all driving).

    We also need to know how often Waymo intervenes in its supposedly autonomous operations. The latest we have on this, which was leaked a while back, is that Cruise (a different company) cars are actually less autonomous than taxis, requiring more than one employee per car.

    edit: The leaked data on human interventions was from Cruise, not Waymo. I'm open to self-driving cars being safer than humans, but I don't believe a fucking word from tech companies until there's been an independent audit with full access to their facilities and data. So long as we rely on Waymo's own publishing without knowing how the sausage is made, they can spin their data however they want.

    edit2: Updated to say that journalists should be more critical in general, not just about tech companies.

  • David Graeber's Debt: The First 5000 Years. We all take debt for granted. It's fascinating to learn how differently we've thought about it over the millennia and how much of our modern world makes more sense when understood through its lens.

  • No need to apologize for length with me basically ever!

    I was thinking of how you did it in the second paragraph, but even more stripped down. The algorithm has N content buckets to choose from; once it chooses, success is how much of the video the user watched. For simplicity, users can only keep watching or log off. For small N, I think that @[email protected] is right that it's the multi-armed bandit problem, if we assume that user preferences are static. If we introduce the complexity that users prefer familiar things, which I think is pretty fair (so users are more likely to keep watching from a bucket if it's a familiar one), I assume that exploration gets heavily disincentivized and exploitation becomes much more favorable, producing some pretty weird behavior. What I like about this is that, with only a small deviation from a classic problem, it would help explain what you also explain, which is getting stuck in corners.

    Once you allow user choice beyond consume/log off, I think your way of thinking about it, as a turn based game, is exactly right, and your point about bin refinement is great and I hadn't thought of that.
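A quick way to see the stripped-down model's behavior is to simulate it. This is purely a hypothetical sketch, not anyone's actual recommender: an epsilon-greedy agent choosing among buckets, where the made-up `familiarity_boost` parameter makes a bucket more rewarding the more it has been served.

```python
import random

def simulate(n_buckets=5, steps=2000, eps=0.1, familiarity_boost=0.02, seed=0):
    """Epsilon-greedy agent picking among content buckets, where the
    reward (fraction of the video watched) grows with how familiar the
    user already is with a bucket -- the 'users prefer familiar things'
    assumption, with made-up numbers."""
    rng = random.Random(seed)
    base = [rng.random() for _ in range(n_buckets)]  # innate appeal of each bucket
    counts = [0] * n_buckets                         # times each bucket was served
    values = [0.0] * n_buckets                       # running mean reward estimates
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n_buckets)                        # explore
        else:
            arm = max(range(n_buckets), key=lambda a: values[a])  # exploit
        # Watch fraction: innate appeal plus a familiarity bonus, capped at 1.
        reward = min(1.0, base[arm] + familiarity_boost * counts[arm])
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

counts = simulate()
# The familiarity bonus makes the exploited bucket ever more rewarding,
# so nearly all pulls collapse onto a single bucket.
print(sorted(counts, reverse=True))
```

Because the exploited bucket's reward keeps rising with every serve, no exploratory sample can dislodge it, which is one way to read "getting stuck in corners."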

  • Yeah I really couldn't agree more. I really harped on the importance of other properties of the medium, like brevity, when I reviewed the book #HashtagActivism, and how those too are structurally right wing. There are a lot of scholars doing these kinds of network studies, and imo they way too often emphasize user-user dynamics and de-emphasize, if not totally omit, the fact that all these interactions are heavily mediated. Just this week I watched a talk that I thought had many of these same problems.

  • I knew you were the person to call :)

  • Thanks!

    I feel enlightened now that you called out the self-reinforcing nature of the algorithms. It makes sense that an RL agent solving the bandits problem would create its own bubbles out of laziness.

    You're totally right that it's like a multi-armed bandit problem, but maybe with so many possibilities that searching is prohibitively expensive, since the space of options to search is much bigger than the rate that humans can consume content. In other ways, though, there's a dissimilarity because the agent's reward depends on its past choices (people watch more of what they're recommended). It would be really interesting to know if anyone has modeled a multi-armed bandit problem with this kind of self-dependency. I bet that, in that case, the exploration behavior is pretty chaotic. @[email protected] this seems like something you might just know off the top of your head!
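For reference, the classic stationary bandit being compared against can be sketched as follows. Everything here is illustrative (epsilon-greedy with Bernoulli arms and made-up means), but it shows the key property: when rewards don't depend on the agent's past choices, a little exploration reliably finds the best arm.

```python
import random

def run_bandit(true_means, steps=5000, eps=0.1, seed=1):
    """Classic epsilon-greedy on a stationary multi-armed bandit: each
    arm pays a Bernoulli reward with a fixed mean, independent of what
    the agent chose before."""
    rng = random.Random(seed)
    n = len(true_means)
    counts = [0] * n    # pulls per arm
    values = [0.0] * n  # running mean reward estimates
    for _ in range(steps):
        if rng.random() < eps:
            arm = rng.randrange(n)                        # explore
        else:
            arm = max(range(n), key=lambda a: values[a])  # exploit
        reward = 1.0 if rng.random() < true_means[arm] else 0.0
        counts[arm] += 1
        values[arm] += (reward - values[arm]) / counts[arm]
    return counts

# With fixed, history-independent rewards, occasional exploration is
# enough to identify and then mostly pull the highest-mean arm.
counts = run_bandit([0.1, 0.9, 0.2])
print(counts)
```

The self-reinforcing variant described above breaks the stationarity assumption this behavior relies on, which is why its exploration dynamics could plausibly be much stranger.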

    Maybe we can take advantage of that laziness to incept critical thinking back into social media, or at least have it eat itself.

    If you have any ideas for how to turn social media against itself, I'd love to hear them. I worked on this post unusually long for a lot of reasons, but one of them was trying to think of a counter strategy. I came up with nothing though!

  • Technology @lemmy.world
    theluddite @lemmy.ml

    Permanently Deleted

  • Yup. Silicon-washing genocidal intention is almost certainly the most profitable use of AI we've come up with so far.

  • Luddite @lemmy.ml
    theluddite @lemmy.ml

    Permanently Deleted

  • taps the sign

  • I'd say that's mostly right, but it's less about opportunities, and more about design. To return to the example of the factory: Let's say that there was a communist revolution and the workers now own the factory. The machines still have them facing away from each other. If they want to face each other, they'll have to rebuild the machine. The values of the old system are literally physically present in the machine.

    So it's not that you can do different things with a technology based on your values, but that different values produce technology differently. This actually limits future possibilities. Those workers physically cannot face each other on that machine, even if they want to use it that way. The past's values are frozen in that machine.

  • No problem!

    Technology is constrained by the rules of the physical world, but that is an underconstraint.

    Example: Let's say that there's a factory, and the factory has a machine that makes whatever. The machine takes 2 people to operate. The thing needs to get made, so that limits the number of possible designs, but there are still many open questions like, for example, should the workers face each other or face away from each other? The boss might make them face away from each other, that way they don't chat and get distracted. If the workers get to choose, they'd prefer to face each other to make the work more pleasant. In this way, the values of society are embedded in the design of the machine itself.

    I struggle with the idea that a tool (like a computer) is bad because it's too general purpose. Society, and hence the people and their values, defines how the tool is used. Would you elaborate on that? I'd like to understand the idea.

    I love computers! It's not that they're bad, but that, because they're so general purpose, more cultural values get embedded. Like in the example above, there are decisions that aren't determined by the goals of what you're trying to accomplish, but because computers are so much more open ended than physical robots, there are more decisions like that, and you have even more leeway in how they're decided.

    I agree with you that good/evil is not a productive way to think about it, just like I don't think neutrality is right either. Instead, I think that our technology contains within it a reflection of who got to make those many design decisions, like which direction should the workers sit. These decisions accumulate. I personally think that capitalism sucks, so technology under capitalism, after a few hundred years, also sucks, since that technology contains within it hundreds of years of capitalist decision-making.

  • I didn't find the article particularly insightful but I don't like your way of thinking about tech. Technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that the pens example doesn't help us explain. Similarly, society shapes the way that we make technology. Technology is constrained by the rules of the physical world, but that is an underconstraint. The leftover space (i.e. the vast majority) is the process through which we embed social values into the technology. To return to the example of mass transit vs cars, these obviously have different embedded values within them, which then go on to shape the world that we make around them.

    This way of thinking helps explain why computer technology specifically is so awful: Computers are shockingly general purpose in a way that has no parallel in physical products. This means that the underconstraining is more pronounced, so social values have an even more outsized say in how they get made. This is why every other software product is just the pure manifestation of capitalism in a way that a robotic arm could never be.

    edit to add that this argument is adapted from Andrew Feenberg's "Transforming Technology"

  • I'd recommend renting a car (or driving here) and going from the south to the Northeast Kingdom (the northeasternmost and most rural part of the state). It's a small state, so it won't take that long. Burlington is a nice town, but imo totally fine to skip. Vermont's real charm is its small towns and their breweries, farms, restaurants, etc.

    If you like dairy, try the local milk and ice cream at different farms that make, process, and sell on site. Lots of small dairies here have milk from breeds of cows you've probably never tried (Jerseys mainly but other kinds too). It's much tastier and creamier, and varies from farm to farm. Any brewery that has a bar is probably worth your time, and, when it comes to food/drink, we generally punch well above our weight for such a small place. Our maple syrup is, of course, legendary. Pro tip from someone who boils their own: The darker stuff is better, and the smaller the operation, the better the syrup, because bigger operations use fancy machines to extract water, whereas small ones rely entirely on boiling, so that syrup spends a lot more time cooking.

    If you like hiking, you'll drive by lots of good hiking in the process, but the better hiking is in the Whites in New Hampshire or in the Adirondacks in NY, though those are worse places in general ;).

    Happy to answer specific questions.

  • The exact same stuff on FPF, especially the pets. I kind of hate how heartbreaking that part can be, honestly.

    Anyway, that's a wonderful tradition. I hope that you're able to keep it going forever.

  • Yes, and worse, even if they are true to that vision, other bigger players will be offering huge piles of cash to buy the thing. There will be a perpetual temptation in its current structure. Just look at another beloved Vermont brand, Ben & Jerry's, now owned by Unilever.

  • I'm too old to know what emojis mean.

  • Yes absolutely. It's just a mailing list! There are bajillions of functioning and wonderful mailing lists all around the world, for neighborhood activities or otherwise. If you wanted to right now, you could make a mailing list and drop off a flyer with a QR code at all your neighbors' houses. You'd have your own version of this set up in an afternoon, so long as you and other volunteers can find the time to moderate it. My advice to anyone who wants to start one that's a little more formal, like this one, with paid moderators and staff, is to build your values into its structure. Do you want it to serve the community? Then the community should own it. Think about who you want to serve and make sure that it's who the company will always be accountable to.

  • No don't! I'm glad you posted it! I do think that the story of FPF is worth telling because it actually is really useful and pleasant. The internet doesn't inherently make us into assholes, but companies on the internet design their products to bring out the worst in us.

  • Luddite @lemmy.ml
    theluddite @lemmy.ml

    Dávila's "Blockchain Radicals" argues that the left ought to embrace blockchain. Here's my two-part review. The first part critiques the book's approach to argumentation, and the second examines Dávila's own Breadchain Cooperative.

    This is my longest post yet because the theory the book presents is palatable to developers. It does to political theory what tech people always do: Confidently assume their skills apply in a field they don't bother to understand. The consequences are predictable. This, then, is an intervention directed at that mode of thinking, an examination of how bad theory leads to bad practice, and, most importantly, an attempt to stop would-be activists from getting caught up in this mess.

    tl;dr Breadchain's use of the term "cooperative" is fraudulent, and it is, structurally, a grift, whatever his intentions might be.

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Though wrapped in the aesthetic of science, this paper is a pure expression of the AI hype's ideology, including its reliance on invisible, alienated labor. Its data was manufactured to spec to support the authors' pre-existing beliefs, and its conclusions are nothing but a re-articulation of their arrogance and ideological impoverishment.

    Permacomputing @lemmy.sdf.org
    theluddite @lemmy.ml
    Technology @lemmy.ml
    theluddite @lemmy.ml

    Did Twitter Make Us Better? A Critical Review of the Book "#HashtagActivism"

    #HashtagActivism is a robust and thorough defense of its namesake practice. It argues that Twitter disintermediated public discourse, analyzing networks of user interactions in that context, but its analysis overlooks that Twitter is actually a heavy-handed intermediary. It imposes strict requirements on content, like a character limit, and controls who sees what and in what context. Reintroducing Twitter as the medium and reinterpreting the analysis exposes serious flaws. Similarly, their defense of hashtag activism relies almost exclusively on Twitter engagement data, but offers no theory of change stemming from that engagement. By reexamining their evidence, I argue that hashtag activism is not just ineffective, but its institutional dynamics are structurally conservative and inherently anti-democratic.

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Regulating tech is hard, in part because computers can do so many things. This makes them useful but also complicated. Companies hide in that complexity, rendering undesirable behavior illegible to regulation: Regulating tech becomes regulating unlicensed taxis, mass surveillance, illegal hotels, social media, etc.

    If we actually want accountable tech, I argue that we should focus on the tech itself, not its downstream consequences. Here's my (non-environmental) case for rationing computation.

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Capture Platforms

    Until recently, platforms like Tinder and Uber couldn't exist. They need the intimate data that only mobile devices can provide, which they use to mediate human relationships. They never own anything. In some ways, this simplifies their task, because owning things is hard, but human activities are complicated, making them illegible to computers. As tech companies become more powerful and push deeper into our lives, here's a post about that tension and its consequences.

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Are Tech Stocks Overvalued?

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    A Response to Futurism's "CEOs Could Easily Be Replaced With AI, Experts Argue" and Similar Articles

    I've seen a few articles like this one from Futurism: "CEOs Could Easily Be Replaced With AI, Experts Argue." I totally get the appeal, but these articles are more anti-labor than anti-CEO. Because CEOs can't actually be disciplined with threats of automation, these articles further entrench an inherently anti-labor logic, telling readers that losing our livelihoods to automation is part of some natural order, rather than the result of political decisions that benefit capital.

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Why Is There an AI Hype?

    Lots of skeptics are writing lots of good things about the AI hype, but so far, I've encountered relatively few attempts to explain why it's happening at all. Here's my contribution, mostly based on Philip Agre's work on the (so-called) internet revolution, which focuses less on the capabilities of the tech itself, as most in the mainstream did (and still do), and more on the role of a new technology in the ever-present and continuous renegotiation of power within human institutions.

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    A Response to Mark Rober's Apologia for the Military-Industrial Complex in "Vortex Cannon vs Drone"

    The video opens with Rober standing in front of a fancy-looking box, saying:

    Hiding inside this box is an absolute marvel of engineering you might just find protecting you the next time you're at a public event that's got a lot of people.

    When he says "protecting you," the video momentarily cuts to stock footage of a packed sports stadium, the first of many "war on terror"-coded editorial decisions, before returning to the box, which opens and releases a drone. This is no ordinary drone, he says, but a particularly heavy and fast drone, designed to smash "bad guy drones trying to do bad guy things." He explains how "it's only a matter of time" before these bad guys' drones attack infrastructure "or worse," cutting to a photo of a stadium for the third time in just 30 seconds.

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Mass Protests and the Danger of Social Media

    In "If We Burn," Vincent Bevins recaps the mass protests of the 2010s. He argues that they're communicative acts, but power has no way of negotiating with or interpreting them. They're "illegible."

    Here's a "yes and" to Bevins. I argue that social media companies have a detailed map of all protesters' connections, communications, topics of interests, locations, etc., such that, to them, there has never been a more legible form of social organization, giving them too much power over ostensibly leaderless movements.

    I also want to plug Bevins's book, independently of my post. It's extremely well researched. For many of the things that he describes, he was there, and he productively challenges many core values of the movements in which I and many others probably reading this have participated.

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    The TikTok "Ban" and the Missing Leftist Response

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Nature's Folly: A Response to Nature's "Loneliness and suicide mitigation for students using GPT3-enabled chatbots"

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Daylight Savings and the Case for the Pre-Julian Calendar

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Reddit Will License Its Data to Train LLMs, So We Made a Firefox Extension That Lets You Replace Your Comments With Any (Non-Copyrighted) Text

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    Need, Want, and Agency: Mapping the Digital User Experience

    Luddite @lemmy.ml
    theluddite @lemmy.ml

    A Response to Nature's "Google AI has better bedside manner than human doctors — and makes better diagnoses"