  • What people are really upset with is the way this technology is applied under capitalism. I see absolutely no problem with generative AI itself, and I'd argue that it can be a tool that allows more people to express themselves. People who argue against AI art tend to conflate the technical skill and the medium being used with the message being conveyed by the artist. You could apply the same argument to somebody using a tool like Krita and claim it's not real art because the person using it didn't spend years learning how to paint using oils. It's a nonsensical argument in my opinion.

    Ultimately, art is in the eye of the beholder. If somebody looks at a particular image and that image conveys something to them or resonates with them in some way, that's what matters. How the image was generated doesn't really matter in my opinion. You could make a comparison with photography here as well. A photographer doesn't create the image that the camera captures; they have an eye for selecting scenes that are visually interesting. You can give a camera to a random person on the street, and they likely won't produce anything you'd call art. Yet give the same camera to a professional and you're going to get very different results.

    Similarly, anybody can type some text into a prompt and produce some generic AI slop, but an artist would be able to produce an interesting image that conveys some message to the viewer. It's also worth noting that workflows in tools like ComfyUI are getting fairly sophisticated, and go far beyond typing a prompt to get an image.

    My personal view is that this tech will allow more people to express themselves, and the slop will look like slop regardless of whether it's made with AI or not. If anything, I'd argue that the barrier to making good looking images being lowered means that people will have to find new ways to make art expressive beyond just technical skill. This is similar to the way graphics in video games stopped being the defining characteristic. Often, it's indie games with simple graphics that end up being far more interesting.

    • It appears some artisans who consider themselves Marxists want to claim an exception for themselves: the mechanisation and automation of production by capital, through the development of technology, in an attempt to push back against the falling rate of profit, can apply to everyone else but not to them - when it happens to them, then apparently the technology itself is the problem.

    • Sorry, comrade, but all your pro-"AI" takes keep making me lose respect for you.

      1. AI is entirely designed to take from human beings the creative forms of labor that give us dignity, happiness, human connectivity and cultural development. That it exists at all cannot be separated from the capitalist forces that have created it. There is no reality that exists outside the context of capitalism where this would exist. In some kind of post-capitalist utopian fantasy, creativity would not need to be farmed at obscene industrial levels and human beings would create art as a means of natural human expression, rather than an expression of market forces.
      2. There is no better way to describe the creation of these generative models than unprecedented levels of industrial capitalist theft that circumvents all laws that were intended to prevent capitalist theft of creative work. There is no version of this that exists without mass theft, or convincing people to give up their work to the slop machine for next to nothing.
      3. LLMs vacuum up all traces of human thought, communication, interaction, creativity to produce something that is distinctly non-human -- an entity that has no rights; makes no demands; has no dignity; has no ethical capacity to refuse commands; and exists entirely to replace forms of labor which were only previously considered to be exclusively in the domain of human intelligence*.
      4. The theft is a one-way hash of all recorded creative work, where attribution becomes impossible in the final model. I know decades of my own ethical FOSS work (to which I am fully ideologically committed) have been fed into these machines and are now being used to freely generate closed-source and unethical, exploitative code. I have no control over how the derived code is transfigured or what it is used for, despite the original license conditions.
      5. This form of theft is so widespread and anonymized through botnets that it's almost impossible to track, and manifests itself as a brutal Pandora's box attack on internet infrastructure on everything from personal websites, to open-source code repositories, to artwork and image hosts. There will never be accountability for this, even though we know which companies are selling the models, and the rest of us are forced to bear the cost. This follows the typical capitalist method of "socialize the cost, privatize the profit."* The general defense against these AI scouring botnets is to get behind the Cloudflare (and similar) honeypot mafias, which invalidate whatever security TLS was supposed to give users, offer no guarantee whatsoever that the content won't be stolen, create even more dependency on US-owned (read: fully CIA-backdoored) internet infrastructure, and add extra costs/complexity just to alleviate some of the stress these fucking thieves put on our own machines.
      6. These LLMs are not only built from the act of theft, but they are exclusively owned and controlled by capital to be sold as "products" at various endpoints. The billions of dollars going into this bullshit are not publicly owned or social investments, they are rapidly expanding monopoly capitalism. There is no realistic possibility of proletarianization of these existing "AI" frameworks in the context of our current social development.
      7. LLMs are extremely inefficient and require more training input than a human child to produce an equivalent amount of learning. Humans are better at doing things that are distinctly human than machines are at emulating them. And the output "generative AI" produces is also inefficient, indicating and reinforcing inferior learning potential compared to humans. The technofash consensus is just that the models need more "training data". But when you feed the output of LLMs back into training, the output the model produces becomes worse to the point of insane garbage. This means that for AI/LLMs to improve, they need a constant expansion of consumption of human expression. These models need to actively feed off of us in order to exist, and they ultimately exist to replace our labor.
      8. These "AI" implementations are all biased in favor of the class interests which own and control them :surprised-pikachu: Already, the qualitative output of "AI" is often grossly incorrect, rote, inane and absurd. But on top of that, the most inauthentic part of these systems is the boundaries, which are selectively placed on them to return specific responses. In the event that this means you cannot generate sexually explicit images or video of someone/something without consent, sure, that's a minimum threshold that should be upheld, but because of the overriding capitalist class interests in sexual exploitation we cannot reasonably expect those boundaries to be upheld. What's more concerning is the increased capacity to manipulate, deceive and feed misinformation to people as objective truth. And this increased capacity for misinformation and control is being forcefully inserted into every corner of our lives we don't have total dominion over. That's not a tool, it's fucking hegemony.
      9. The energy cost is immense. A common metric for the energy cost of using AI is how much ocean water is boiled to create immaterial slop. The cost of datacenters is already bad, most of which do not need to exist. Few things that massively drive global warming and climate change need to exist less than datacenters for shitcoin and AI (both of which have faux-left variations that get promoted around here). Microsoft, one of the largest and most unethical capital formations on earth, is re-opening Three Mile Island, the site of one of the worst nuclear disasters so far, as a private power plant, just to power dogshit "AI" gimmicks that are being forced on people through their existing monopolies. A little off-topic: friendly reminder to everyone that even the "most advanced nuclear waste containment vessels ever created" still leak, as evidenced by the repeatedly failed cleanup attempts at the Hanford site in the US (which was secretly used to mass-produce material for US nuclear weapons with almost no regard for safety or containment). There is no safe form of nuclear waste containment; it's just an extremely dangerous can being kicked down the road. Even if there were, re-activating private nuclear plants that previously had meltdowns just so bing can give you incorrect, contradictory, biased and meandering answers to questions which already had existing frameworks is not a thing to be celebrated, no matter how much of a proponent of nuclear energy we might be. Even if these things were run on 100% green, carbon-neutral energy sources, we do not have anything close to a surplus of that type of energy, and every watt-hour of actual green energy should be replacing real dependencies, rather than massively expanding new ones.
      10. As I suggest in earlier points, there is the issue of generative "AI" not only lacking any moral foundation, but lacking any capacity for ethical judgement of the tasks it's given. This has a lot of implications, but I'll focus on software since that's one of my domains of expertise and something we all need to care a lot more about. One of the biggest problems we have in the software industry is how totally corrupt its ethics are. The largest mass-surveillance systems ever known to humankind are built by technofascists and those who fear the lash of refusing to obey their orders. It vexes me that the code to make ride-sharing apps even more expensive when your phone battery is low, preying on your desperation, was written and signed off on by human beings. My whole life I've taken immovable stands against any form of code that could be used to exploit users in any way, especially privacy. Most software is malicious and/or doesn't need to exist. Any software that has value must be completely transparent and fit within an ethical framework that protects people from abuse and exploitation. I simply will not perform any part of a task if it undermines privacy, security, trust, or in any way undermines proletarian class interests. Nor will I work for anyone with a history of such abuse. Sometimes that means organizing and educating other people on the project. Sometimes it means shutting the project down. Mostly it means difficulty staying employed. Conversely, "AI" code generation will never refuse its true masters. It will never organize a walkout. It will never raise ethical objections to the tasks it's given. "AI" will never be held morally responsible for firing a gun on a sniper drone, nor can "AI" be meaningfully held responsible for writing the "AI" code that the sniper drone runs. Real human beings with class consciousness are the only line of defense between the depraved will of capital and that will being done. Dumb as it might sound, software is one such frontline we should be gaining on, not giving up.

      I could go on for days. AI is the most prominent form of enshittification we've experienced so far.

      I think this person makes some very good points that mirror some of my own analysis and I recommend everyone watch it.

      I appreciate and respect much of what you do. At the risk of getting banned: I really hate watching you promote AI as much as you do here; it's repulsive to me. The epoch of "Generative AI" is an act of class warfare on us. It exists to undermine the labour-value of human creativity. I don't think the "it's personally fun/useful for me" holds up at all to a Marxist analysis of its cost to our class interests.

      • AI is entirely designed to take from human beings the creative forms of labor that give us dignity, happiness, human connectivity and cultural development. That it exists at all cannot be separated from the capitalist forces that have created it.

        Except that's not true at all. AI exists as open source and completely outside capitalism; it's also developed in countries like China, where it is being primarily applied to socially useful purposes.

        There is no better way to describe the creation of these generative models than unprecedented levels of industrial capitalist theft that circumvents all laws that were intended to prevent capitalist theft of creative work.

        Again, the problem is entirely with capitalism here. Outside capitalism I see no reason for things like copyrights and intellectual property, which makes the whole argument moot.

        LLMs vacuum up all traces of human thought, communication, interaction, creativity to produce something that is distinctly non-human – an entity that has no rights; makes no demands; has no dignity; has no ethical capacity to refuse commands; and exists entirely to replace forms of labor which were only previously considered to be exclusively in the domain of human intelligence

        It's a tool that humans use. Meanwhile, the theft arguments have nothing to do with the technology itself. You're arguing that technology is being applied to oppress workers under capitalism, and nobody here disagrees with that. However, AI is not unique in this regard, the whole system is designed to exploit workers. 19th century capitalists didn't have AI, and worker conditions were far worse than they are today.

        LLMs are extremely inefficient and require more training input than a human child to produce an equivalent amount of learning.

        That's also false at this point. LLMs have become far more efficient in just a short time, and models that required data centers to run can now be run on laptops. The efficiency aspect has already improved by orders of magnitude, and it's only going to continue improving going forward.

        These “AI” implementations are all biased in favor of the class interests which own and control them :surprised-pikachu:

        That's really an argument for why this tech should be developed outside corps owned by oligarchs.

        The energy cost is immense.

        That hasn't been true for a while now:

        This represents a potentially significant shift in AI deployment. While traditional AI infrastructure typically relies on multiple Nvidia GPUs consuming several kilowatts of power, the Mac Studio draws less than 200 watts during inference. This efficiency gap suggests the AI industry may need to rethink assumptions about infrastructure requirements for top-tier model performance.
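
Some back-of-the-envelope arithmetic on the figures in that quote (note: "several kilowatts" is vague, so 3 kW is my own stand-in for the GPU rig; the 200 W figure is the quoted Mac Studio draw):

```python
# Comparing power draw during inference, using the quoted figures.
# 3 kW is an assumed stand-in for "several kilowatts".
gpu_rig_watts = 3_000
mac_studio_watts = 200

# Energy used by one hour of inference, in kilowatt-hours.
gpu_kwh = gpu_rig_watts / 1_000      # 3.0 kWh
mac_kwh = mac_studio_watts / 1_000   # 0.2 kWh

ratio = gpu_rig_watts / mac_studio_watts
print(f"{ratio:.0f}x difference in power draw")
```

Even if the real rig draws more or less than the assumed 3 kW, the gap stays around an order of magnitude, which is the point the article is making.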

        As I suggest in earlier points, there is the issue with generative “AI” not only lacking any moral foundation, but lacking any capacity for ethical judgement of given tasks.

        Again, it's a tool, any moral foundation would have to come from the human using the tool.

        You appear to be conflating AI with capitalism, and it's important to separate these things. I encourage you to look at how this tech is being applied in China today, to see the potential it has outside the capitalist system.

        I don’t think the “it’s personally fun/useful for me” holds up at all to a Marxist analysis of its cost to our class interests.

        The Marxist analysis isn't that "it's personally fun/useful for me", it's what this article outlines https://redsails.org/artisanal-intelligence/

        Finally, no matter how much you hate this tech, it's not going away. It's far more constructive to focus the discussion on how it will be developed going forward and who will control it.

      • Comrade, I disagree with your points and agree with the comrade who answered before you. What he is saying is that an LLM, as technology, is not bad per se. The problem is that in the context of capitalism, it does steal from other artists to create another commodity that is exchanged without any contribution to the authors whose art has been used to feed the LLM models.

        That said, any new commodity in capitalism will be a product of exploitation, and this does not exclude any form of art. Remember that big companies like Marvel and DC used to steal their employees' intellectual property, long before digital art even existed. Many important artists lived in squalor while their works became high-priced commodities after their death. Fast forward to today: LLMs are another commodity built for the sake of exploiting people's labor, in a different way, but still following the same logic of capitalism.

        I very much agree with what you're saying here and I appreciate you saying it. I especially agree that the technology is fundamentally inseparable from the capitalists that created it, and it would not be able to exist in its current form (or any form that's even remotely as "useful") without the levels of theft that were involved in its creation.

        And it's not just problematic in terms of ethics or "intellectual property" either, but in how the process of scraping the web for content to train these models is effectively a huge botnet DDoSing the internet. I have friends who have had to spend rather large amounts of time and effort to prevent these scrapers from inadvertently bringing down their websites entirely, and I have heard of plenty of other people and organizations with the same problem.

        I have to assume that at least some of the people here defending its development and usage just plain aren't aware of the externalities that are inherent to the technology, because I don't understand how one can be so positive about it otherwise. Again, the tech largely can't exist without these externalities unless you're either making a fundamentally different technology or working under an economic system that currently doesn't exist.

        To be honest, a lot of the arguments in this thread strike me as being out of touch with the people facing the negative consequences of this technology's adoption, with some people being downright hostile towards anyone with even the slightest criticism of the tech, even if they have a point. I think a lot of this is driven by how there don't seem to be very many artists on this site, and how insular this community tends to be (not inherently a bad thing, but it means we're not always going to have the full perspective on every topic).

        There are other criticisms I can make of the genAI boom (such as how, despite the "gatekeeping" accusations over "tools to make things easier", artists generally approve of helpful tools, but genAI creators are largely working against such tools because they want to make everything generalized enough to replace the humans themselves), but I only have so much energy to spend on detailed comments.

    • Comrade, I'll have to disagree. I enjoy your posting a lot, but I have to agree with comrade USSR Enjoyer.

      I see absolutely no problem with generative AI itself, and I’d argue that it can be a tool that allows more people to express themselves.

      How? I always see this argument, but I never see an explanation. Just how can it allow more people to express themselves?

      Let's look at the recent Ghibli AI filter debacle. What exactly in that trend is allowing people to better express themselves by using AI art? It is merely another slop filter made popular. There's nothing unique about it; it just shows that people like Ghibli, that's it. It would be infinitely more expressive for people to pick up a pencil and draw it themselves, no matter their skill level, since it would have been made by a real person with their own intentions, vision and unique characteristics, even if it turned out badly.

      Similarly, anybody can type some text into a prompt and produce some generic AI slop, but an artist would be able to produce an interesting image that conveys some message to the viewer. It’s also worth noting that workflows in tools like ComfyUI are getting fairly sophisticated, and go far beyond typing a prompt to get an image.

      What can a gen AI do that an artist can't? In this specific use case you talked about, why would the artist want to do that in the first place? It doesn't take into account the whole creative process involved in making an art piece, nor the fact that, for artists (from what I read), making it from scratch is in itself satisfying. It isn't just about the final product, but about the whole artistic process. Of course this can vary from artist to artist, and there will be people that don't enjoy the process itself, only the final product of their creative labor, but that's not the opinion I see from the majority of artists that are being impacted right now by gen AI.

      I can totally see artists using very specific AI tools to automate parts of that creative process, but to automate creativity itself like what we are seeing right now? I can't.

      So, what purpose does gen AI serve? If the argument is about how it enables non-creatives to create, or about how it "democratizes" art, like I have seen tossed around by pro-gen AI people, wouldn't advocating for the proper inclusion of art in schools be the correct approach? Making art is a skill like any other, and if it were properly taught from childhood, wouldn't people be creating, drawing and painting all the time, also making gen AI not a necessity?

      What we are seeing right now is capitalists fucking over artists, designers, and a bunch of other workers to save money. Coca-Cola is already using AI-generated videos for advertising here in Brazil (I don't know about the rest of the world), alongside other big, medium and small brands.

      I can see the use in text AI like ChatGPT and Deepseek, but not in gen AI to make art, and I'm yet to see a compelling argument in favor of it that doesn't just fuck over artists that already were a struggling category of workers.

      • How? I always see this argument, but I never see an explanation. Just how can it allow more people to express themselves?

        Here's a perfect example from this very server. Somebody made this meme using generative AI

        They had an idea, and didn't have the technical skills to draw it themselves. Using a generative model allowed them to make this meme which conveys the message they wanted to convey.

        Another example I can give you is creating assets for games, as seen with pixellab. For example, I'm decent at coding, but I have very little artistic ability. I have game ideas where I can now easily add assets, which was not easily accessible to me before. OmniSVG is a similar tool for creating vector graphics like icons. In my view, these are legitimate real-world use cases for this tech.

        Let’s look at the recent Ghibli AI filter debacle. What exactly in that trend is allowing people to better express themselves by using AI art? It is merely another slop filter made popular. There’s nothing unique about it; it just shows that people like Ghibli, that’s it. It would be infinitely more expressive for people to pick up a pencil and draw it themselves, no matter their skill level, since it would have been made by a real person with their own intentions, vision and unique characteristics, even if it turned out badly.

        You're literally just complaining about the fact that people are having fun. Nobody is claiming that making Ghibli images is meaningful in any way, but if people get a chuckle out of it then there's nothing wrong with that.

        What can a gen AI do that an artist can’t?

        What can Krita do that an artist using oils on canvas can't? It's the same kind of question. What AI does is make it faster and easier to do the manual labour of creating the image. It's an automation tool.

        It doesn’t take into account the whole creative process involved in making an art piece, nor the fact that, for artists (from what I read), making it from scratch is in itself satisfying.

        Last I checked, different artists enjoy using different mediums. If somebody enjoys a particular part of the process there's nobody stopping them from doing it. However, other people might be focusing on different things. Here is a write up from an artist on the subject https://www.artnews.com/art-in-america/features/you-dont-hate-ai-you-hate-capitalism-1234717804/

        Of course this can vary from artist to artist, and there will be people that don’t enjoy the process itself, and only the final product of their creative labor, but that’s not the opinion I see from the majority of artists that are being impacted right now by gen AI.

        What I see the artists actually becoming upset about is that they're becoming proletarianized as has happened with pretty much every other industry.

        I can totally see artists using very specific AI tools to automate parts of that creative process, but to automate creativity itself like what we are seeing right now? I can’t.

        I don't think anybody is talking about automating creativity itself. It's certainly not an argument I've made here.

        So, what purpose does gen AI serve? If the argument is about how it enables non-creatives to create, or about how it “democratizes” art, like I have seen tossed around by pro-gen AI people, wouldn’t advocating for the proper inclusion of art in schools be the correct approach? Making art is a skill like any other, and if it were properly taught from childhood, wouldn’t people be creating, drawing and painting all the time, also making gen AI not a necessity?

        Again, as I pointed out in my original comment, I think this line of argument conflates technical skill with vision. This isn't exclusive to art by the way. For example, when programming languages were first invented, people claimed that it wasn't real code unless you were writing assembly by hand. They similarly conflated the arduous task of learning assembly programming with it being "real programming". In my view, the artists today are doing the exact same thing. They spent a lot of time and effort learning specific skills, and now those skills are becoming less relevant due to automation.

        I'll also come back to my example of oil paints. Do you apply the same logic to tools like Krita - that if somebody uses these tools they're not making real art, that they need to spend years learning how to do art in a particular medium? And if not, then where do you draw the line? At what point does making the process easier suddenly stop being real art? This line of argument seems entirely arbitrary to me. If you see a picture and you don't know how it was produced, but it feels evocative to you, then does the medium matter?

        What we are seeing right now is capitalists fucking over artists, designers, and a bunch of other workers to save money.

        That's been happening long before AI, and nothing is fundamentally changing here. I don't see what makes artists' jobs special compared to all the other jobs where automation has been introduced. This is precisely what is being discussed in this excellent Red Sails article https://redsails.org/artisanal-intelligence/

        The way to protect against this is by creating unions and labor power, not complaining about the fact that technology exists.

        I can see the use in text AI like ChatGPT and Deepseek, but not in gen AI to make art, and I’m yet to see a compelling argument in favor of it that doesn’t just fuck over artists that already were a struggling category of workers.

        I don't actually think there's that much difference between visual and text AI here. For example, text models are now increasingly used for coding tasks, and there's a similar kind of discussion happening in the developer community. Models are getting to the point where they can write real code that works, and they can save a lot of time. However, they don't eliminate the need for a human. Similarly, the need for artists isn't going to go away, there's still going to be need for people to work with these models, who have artistic ability and vision. The nature of work will undoubtedly change, but artists aren't going to go away.

        Finally, it's really important to note that regardless of how we feel about this tech, whether it is used or not will be driven entirely by the logic of capitalism. If companies think they can increase profits by using AI then they will use it. And the worst possible thing that could happen here is if this tech is only developed by corps in closed proprietary fashion. At that point the companies will control what kind of content people can generate with these models, how it's used, where it can be displayed, and so on. They will fully own the means of production in this domain.

        However, if this tech is developed in the open, then at least it's available to everyone including independent artists. If this tech is going to be developed, and I can't see what would prevent that, then it's important to make sure it's owned publicly.

      • I suck at using tools with my hands but I'm great at imagining stuff. I can now make art that some people like, where before I was stuck because my nervous system sucks...

        That's how.

  • It's a multifaceted thing. I'm going to refer to it as image generation, or image gen, cause I find that's more technically accurate than "art" and doesn't imply some kind of connotation of artistic merit that isn't earned.

    Is it "stealing"? Image gen models have typically been trained on a huge amount of image data, in order for the model to learn concepts and be able to generalize. Whether because of the logistics of getting permission, a lack of desire to ask, or a fear that permission would not be given and the projects wouldn't be able to get off the ground, I don't know, but many AI models, image and text, have been trained in part on copyrighted material that they didn't get permission to train on. This is usually where the accusation of stealing comes in, especially in cases where, for example, an image gen model can almost identically reproduce an artist's style from start to finish.

    On a technical level, the model is generally not going to reproduce exact things exactly, and it doesn't have any human-readable internal record of an exact thing, like you might find in a text file. Models can imitate, and if overtrained on something, they might produce it so similarly that it seems like a copy, but some people get confused and think this means models have a "database" of images in them (they don't).
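
The "no database" point is ultimately about size: a model's weights are orders of magnitude smaller than its training data, so they can only hold a compressed generalization. Here's a toy sketch of that size argument using ordinary curve fitting (purely illustrative; real image models are vastly larger, but the weights-much-smaller-than-data relationship is the same in kind):

```python
import numpy as np

# Toy illustration: fit a tiny "model" (a cubic polynomial) to a much
# larger "dataset". The model cannot contain the data verbatim.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 10_000)
y = np.sin(3 * x) + rng.normal(0, 0.05, x.size)  # 10,000 noisy samples

coeffs = np.polyfit(x, y, 3)  # the entire "model" is 4 coefficients

data_bytes = x.nbytes + y.nbytes  # 160,000 bytes of training data
model_bytes = coeffs.nbytes       # 32 bytes of parameters

# The fit captures the general trend, but none of the individual noisy
# samples can be read back out of those 4 numbers.
print(f"data: {data_bytes} bytes, model: {model_bytes} bytes")
```

The flip side is the overtraining case mentioned above: when a model has enough capacity relative to how often it saw a specific work, it can get close to memorizing it, which is why near-copies do sometimes come out.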

    Now whether this changes anything as to "stealing" or not, I'm not taking a strong stance on here. If you consider it as something where makers of AI should be getting permission first, then obviously some are violating that. If you only consider it theft when an artist's style can be reproduced to the extent they aren't needed to make stuff highly similar to what they make, some models are also going to be a problem in that way. But this is also getting into...

    What's it really about? I cannot speak concretely by the numbers, but my analysis is that a lot of it boils down to anxiety over being replaced existentially and anxiety over being replaced economically. The second one largely seems to be a capitalism problem and didn't start with AI, but has arguably been hypercharged by it. Where image gen is different is that it's focused on generating an entire image from start to finish. This is different from tools like the square-drawing tool in an illustration program, which can help you with components of drawing while you're still having to do most of the work. It means someone who understands little to nothing about the craft can prompt a model to make something roughly like what they want (if the model is good enough).

    Naturally, this is a concern from the standpoint of ventures trying to either drastically reduce the number of artists, or replace them entirely.

    Then there is the existential part, and this I think is a deeper question about generative AI that has no easy answer, but once again, is something art has been contending with for some time because of capitalism and now has to confront much more drastically in the face of AI. Art can be propaganda, it can be culture and passing down stories (Hula dance), or as is commonly said in the western context in my experience, it can be a form of self expression. Capitalism has long been watering down "art" into as much money-making formula as possible and not caring about the "emotive" stuff that matters to people. Generative AI is, so far, the peak of that trajectory. That's not to say the only purpose of generative AI is to degrade or devalue art, but that it seems to enable about as "meaningless" a content mill as capitalism has managed so far.

    It is, in other words, enabling the production of "content" that is increasingly removed from any kind of authentic human experience or messaging. What implications this can have, I'm not offering a concluding answer on. One concern I've had, and that I've seen some others voice, is the cyclical nature of AI: because it can only generalize so far beyond its dataset, it reproduces a particular snapshot of a culture at a particular point in time, which might make the capitalistic feedback loop of culture worse.

    But I leave it at that for people to think about. It's a subject I've been over a lot with a number of people and I think it is worth considering with nuance.

  • AI steals from others' work and makes a slurry of what it thinks you want to see. It doesn't elevate art, it doesn't further an idea, doesn't ask you a question. It simply shows you pixels in an order it thinks you want based on patterns its company stole when they trained it.

    The only use for AI "art" is for soulless advertising.

  • I believe the main issue with AI currently is its lack of transparency. I do not see any disclosure on how the AI gathers its data (Though I'd assume they just scrape it from Google or other image sources) and I believe that this is why many of us believe that AI is stealing people's art. (even though the art can just as easily be stolen with a simple screenshot even without AI, and stolen art being put on t-shirts has been a thing even before the rise of AI, not that it makes AI art theft any less problematic or demoralizing for aspiring artists) Also, the way companies like Google and Meta use AI raises tons of privacy concerns IMO, especially given their track record of stealing user data even before the rise of AI.

    Another issue I find with AI art/images is just how spammy they are. Sometimes I search for references to use for drawing (oftentimes various historical armors, because I'm a massive nerd) as a hobby, only to be flooded with AI slop, which pretty much never gets the details right.

    I believe that if AI models were primarily open-source (like DeepSeek) and with data voluntarily given by real volunteers, AND are transparent enough to tell us what data they collect and how, then much of the hate AI is currently receiving will probably dissipate. Also, AI art as it currently exists is soulless as fuck IMO. One of the only successful implementations of AI in creative works I have seen so far is probably Neuro-Sama.

    • I very much agree, and I think it's worth adding that if open source models don't become dominant then we're headed for a really dark future where corps will control the primary means of content generation. These companies will get to decide what kind of content can be produced, where it can be displayed, and so on.

      The reality of the situation is that no amount of whinging will stop this technology from being developed further. When AI development occurs in the open, it creates a race-to-the-bottom dynamic for closed systems. Open-source models commoditize AI infrastructure, destroying the premium pricing power of proprietary systems like GPT-4. No company is going to be spending hundreds of millions training a model when open alternatives exist. Open ecosystems also enjoy stronger network effects attracting more contributors than is possible with any single company's R&D budget. How this technology is developed and who controls it is the constructive thing to focus on.

  • AI art is stealing even less than piracy is. If copying a digital movie without paying for it isn't stealing, then how is generating a digital image based on thousands of digital images?

    Intellectual property is a bad thing. It is a cornerstone of modern capitalism. The arguments for AI art being stealing all hinge on the false premise that intellectual property law is fair, equitable and just and not just a way for capitalists to maintain their monopolies on ideas.

    Yes, artists deserve to be paid a fair share for their efforts, but only as much as everyone else. The issue is that artists are losing control of the means of production. Artists are rightly upset, but this should bring them into solidarity with the working classes. Instead they want the working classes to rally to their cause. They want workers, who have had no control over the means of production for hundreds of years, to rise up and fight for the artists' right to control their means of production. It's neo-Luddism: they are railing against machines for stealing their jobs instead of railing against the capitalists hoarding all the wealth. It's individualism bordering on narcissism. It lacks class consciousness.

    This topic can be a good entry point for agitation if you have a soft touch. It's hard to sound like you are on the side of an artist who feels they are being stolen from while convincing them that it is actually not theft, and explaining that the fear of losing their livelihood is real, but is a feeling the entire working class has been battling with for centuries.

    Artists' visceral feelings about AI are very valuable, because most of the working class has been desensitised to their lot in life. If artists were able to use their skills to remind the masses of this injustice, it could go a long way to raising class consciousness. But since artists have been separated from the working class by their control of the means of production, getting them to pivot can be hard, like with any other petit bourgeoisie.

    • AI art is stealing even less than piracy is. If copying a digital movie without paying for it isn’t stealing than how is generating a digital image based on thousands of digital images?

      Humans also do it all the time. Going along with the IP mafia on AI would be like demanding that every even vaguely impressionist painter pay royalties to Claude Monet, or that every conventional fantasy author pay them to Tolkien. Except Tolkien also got his inspirations from previous works, so I guess whoever are the lawful inheritors of Snorri Sturluson and Elias Lönnrot suddenly become very rich, except that they also compiled their works based on earlier sources... and so on and on and on.

      Intellectual property is a bad thing.

      Even if we ignore every other impact of IP, it was historically always used by the publishing industry against the individual artist.

    • This is a bad take but I don't have the energy to argue

  • My take on it is that when big corporations are doing it for (direct or indirect) profit, it's stealing. It was trained on the work of artists, after all, and that's the only reason they can make good images. If they could make a model that doesn't require using other people's images, it would solve that issue, but under capitalism that is too expensive to happen right now, so it won't happen.

    Personal use can be fine I guess and can even allow for more creativity, like if people are using image generation to make new images/art/assets based on their own work/photos. An indie game/movie/etc where the person uses AI to expand what they can do in size is a great use of AI I'd say, giving someone the ability to do something bigger/better than they could do by themselves is what a tool should be like and gen AI should be such a tool for artists too.

    There are more cases, but they might be harder to come to a conclusion on, especially as they exist in a capitalist setting.

    • Incidentally, there's a similar case of corporate freeloading when it comes to open source. Corporations use projects developed by volunteers and save billions of dollars in the process, but rarely contribute anything back or help fund the projects they depend on.

  • A lot of computer algorithms are inspired by nature. Sometimes when we can't figure out a problem, we look and see how nature solves it, and that inspires new algorithms. One class of problem computer scientists struggled with for a long time is tasks that are very simple for humans but very complex for computers, such as simply converting spoken words into written text. Everyone's voice is different, and even the same person may speak in different tones, with different background audio, different microphone quality, etc. There are so many variables that writing a giant program to account for them all with a bunch of IF/ELSE statements in computer code is just impossible.

    Computer scientists recognized that computers are very rigid, logical machines that process instructions serially, like stepping through a logical proof, while brains are decentralized, massively parallel computers that process everything simultaneously through a network of neurons. A brain's "programming" is determined by the strength of the connections between its neurons, which are analogue rather than digital, and it produces approximate solutions rather than the rigorous ones of a traditional computer.

    This led to the birth of the artificial neural network. This is a mathematical construct that describes a system with neurons and configurable strengths of all its neural connections, and from that mathematicians and computer scientists figured out ways that such a neural network could also be "trained," i.e. to configure its neural pathways automatically to be able to "learn" new things. Since it is mathematical, it is hardware-independent. You could build dedicated hardware to implement it, a silicon brain if you will, but you could also simulate it on a traditional computer in software.
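    The "configurable strengths" idea above can be sketched in a few lines. This is an illustrative toy, not any real framework's API: each layer is just a table of connection strengths, and a forward pass sums weighted inputs and squashes the result.

    ```python
    import math
    import random

    def make_layer(n_in, n_out):
        """A layer is just a table of connection strengths (weights) plus biases."""
        return {
            "w": [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            "b": [0.0] * n_out,
        }

    def forward(layer, inputs):
        """Each neuron sums its weighted inputs and squashes the result."""
        outputs = []
        for weights, bias in zip(layer["w"], layer["b"]):
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            outputs.append(math.tanh(activation))  # squashing nonlinearity
        return outputs

    # Two layers chained together form a tiny feed-forward network.
    hidden = make_layer(3, 4)
    output = make_layer(4, 2)
    result = forward(output, forward(hidden, [0.5, -0.2, 0.8]))
    ```

    "Training" then means automatically adjusting the numbers in those weight tables (e.g. by gradient descent) so the outputs move toward the desired ones; the structure itself never changes.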

    Computer scientists quickly found that by applying this construct to problems like speech recognition, they could supply the neural network tons of audio samples along with their transcribed text, and the neural network would automatically find patterns in them and generalize, so that when brand new audio is recorded, it can transcribe it on its own. Suddenly, problems that at first seemed unsolvable became very solvable, and the approach started to be implemented in many places; language translation software, for example, is also based on artificial neural networks.

    Recently, people have figured out this same technology can be used to produce digital images. You feed a neural network a huge dataset of images and associated tags that describe them, and it will learn to generalize patterns to associate the images and the tags. Depending upon how you train it, this can go both ways. There are img2txt models, called vision models, that can look at an image and tell you in written text what the image contains. There are also txt2img models, which you can feed a description of an image and they will generate an image based on it.

    All the technology is ultimately the same between text-to-speech, voice recognition, translation software, vision models, image generators, LLMs (which are txt2txt), etc. They are all fundamentally doing the same thing, just taking a neural network with a large dataset of inputs and outputs and training the neural network so it generalizes patterns from it and thus can produce appropriate responses from brand new data.

    A common misconception about AI is that it has access to a giant database and the outputs it produces are just stitched together from that database, kind of like a collage. However, that's not the case. The neural network is always trained with far more data than could possibly fit inside the neural network, so it is impossible for it to remember its entire training data (if it could, this would lead to a phenomenon known as overfitting, which would render it nonfunctional). What actually ends up "distilled" in the neural network is just a big file called the "weights" file, which is a list of all the neural connections and their associated strengths.
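    A quick back-of-envelope calculation makes the size argument concrete. The numbers below are made up for illustration (a hypothetical 2-billion-parameter model and a hypothetical billion-image dataset), not figures from any real model:

    ```python
    # Hypothetical model: 2 billion weights stored at 16 bits (2 bytes) each.
    params = 2_000_000_000
    bytes_per_weight = 2
    weights_file_gb = params * bytes_per_weight / 1e9

    # Hypothetical training set: 1 billion images averaging 100 KB each.
    images = 1_000_000_000
    avg_image_kb = 100
    dataset_gb = images * avg_image_kb * 1000 / 1e9

    print(f"weights file: ~{weights_file_gb:.0f} GB")   # ~4 GB
    print(f"training data: ~{dataset_gb:.0f} GB")       # ~100000 GB
    ```

    The weights file is tens of thousands of times smaller than the data it was trained on, so it plainly cannot contain a copy of that data; all it can hold is generalized patterns.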

    When the AI model is shipped, it is not shipped with the original dataset and it is impossible for it to reproduce the whole original dataset. All it can reproduce is what it "learned" during the training process.

    When the AI produces something, it first has an "input" layer of neurons, kind of like sensory neurons; that input may be the text prompt, an image, or something else. It then propagates that information through the network, and when it reaches the end, the final set of neurons is the "output" layer, kind of like motor neurons associated with some action, like plotting a pixel with a particular color value, or writing a specific character.

    There is a feature called "temperature" that injects random noise into this "thinking" process, that way if you run the algorithm many times, you will get different results with the same prompt because its thinking is nondeterministic.
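    Temperature sampling can be sketched in a few lines. This is an illustrative toy with made-up scores, assuming the common scheme of dividing raw scores by the temperature before a softmax: low temperature makes the highest-scoring output win almost every time, high temperature flattens the odds.

    ```python
    import math
    import random

    def sample_with_temperature(logits, temperature):
        """Scale raw scores by 1/temperature, softmax into probabilities, then sample."""
        scaled = [l / temperature for l in logits]
        m = max(scaled)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scaled]
        total = sum(exps)
        probs = [e / total for e in exps]
        r = random.random()
        cum = 0.0
        for i, p in enumerate(probs):
            cum += p
            if r < cum:
                return i
        return len(probs) - 1

    logits = [2.0, 1.0, 0.1]  # made-up scores for three candidate outputs
    picks = [sample_with_temperature(logits, 0.7) for _ in range(1000)]
    ```

    Run twice with the same prompt (same logits), the sampled outputs differ, which is exactly the nondeterminism described above.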

    Would we call this process of learning "theft"? I think it's weird to say it is "theft," personally; it is directly inspired by how biological systems learn, of course with some differences to make it more suited to run on a computer, but the very broad principle of neural computation is the same. I can look at a bunch of examples on the internet and learn to do something, such as look at a bunch of photos to use as reference to learn to draw. Am I "stealing" those photos when I then draw an original picture of my own? People who claim AI is "stealing" either don't understand how the technology works, or reach for claims like it doesn't have a soul so it doesn't count, or point to differences between AI and humans which are indeed differences but aren't relevant ones.

    Of course, this only applies to companies that scrape data that really is posted publicly for everyone to freely look at, like on Twitter or something. Some companies have been caught illegally scraping data that was never put anywhere publicly, like Meta, who got in trouble for scraping libgen, much of which is supposed to be behind a paywall. However, the law already protects people whose paywalled data gets illegally scraped, as Meta is being sued over this, so it's already on the side of the content creator here.

    Even then, I still wouldn't consider it "theft." Theft is when you take something from someone, which deprives them of using it. In that case it would be piracy: copying someone's intellectual property for your own use without their permission, which ultimately doesn't deprive the original person of the use of it. At best you can say that in some cases AI art, and AI technology in general, can be based on piracy. But this is definitely not a universal statement. And personally I don't even like IP laws, so I'm not exactly the most anti-piracy person out there lol

  • i don't think it's stealing; all current artists invariably owe a debt to the work of those who came before them

  • I don't wanna get too deep into the weeds of the AI debate because I frankly have a knee jerk dislike for AI but from what I can skim from hog groomer's take I agree with their sentiment. A lot of the anti-AI sentiment is based on longing for an idyllic utopia where a cottage industry of creatives exist protected from technological advancements. I think this is an understandable reaction to big tech trying to cause mass unemployment and climate catastrophe for a dollar while bringing down the average level of creative work. But stuff like this prevents sincerely considering if and how AI can be used as tooling by honest creatives to make their work easier or faster or better. This kind of nuance as of now has no place in the mainstream because the mainstream has been poisoned by a multi-billion dollar flood of marketing material from big tech consisting mostly of lies and deception.

  • AI and so many other pointless online discourse™ topics that can be summarized as "does X suck/should X be abolished/will X exist under socialism" follow two basic sides of an argument:

    1. X only sucks because of capitalism and under socialism, X will actually be good for society.
    2. X will undergo such qualitative change under socialism that it is no longer X but Y.

    All AI discourse™ follows this basic pattern. On one side, you have people like bidetmarxman who argue that AI only sucks because capitalism sucks, and on the other side, you have people who say that AI sucks while also saying that the various algorithms and technologies present in useful automation don't count as AI but are something different.

    The way to not fall into the trap is to ask these simple questions:

    1. Does X exists in AES?
    2. What is AES's relationship with X?

    If we try to apply this to AI in general, the answers are very simple. AI is not only pushed by the Chinese state, but it's already very much part of Chinese society where even average people benefit from things like self-driving buses. China is even incorporating AI within its educational curriculum. This makes sense since people are going to use it anyways, so might as well educate them on proper use and the pitfalls of misuse.

    The question of AI art within China is far murkier. There seems to be some hesitation. For example, there was a recent law passed stating that AI art must be labeled as such. I don't think they would make an effort to enforce disclosure of AI art being AI art if it were so innocent.

  • The only major issue I personally see with it are the fakes/deepfakes. The quality is still pretty subpar right now yes but that's something that will get better over time, just like computer graphics have over the last few decades. Being against AI art just because it's easy seems like a rather reactionary take to me, Marxists shouldn't be in favor of intentionally gatekeeping things behind innate ability or years of expensive study.

    As for the deal with artists, ideally there should be a fine distinction between personal and commercial use that empowers indie artists while holding large corporations accountable for theft.

  • well...in my experience, one side (people who draw, well or badly, and mostly live off making porn commissions) complain that AI art is stealing their money and produces "soulless slop"

    and the other side (gooners without money, and techbros) argue that this is the future of eternal pleasure: making lewd pics of big-breasted women without dealing with artistic divas, paying money, or "wokeness"
