For what it’s worth I’m with you on this one. There’s a lot of robot-bashing on the left when fundamentally the complaints are about the commodification of art, not about the tools used to make it. It’s frankly unmarxist to stand against AI, at least generally. I really hope no one here will die on the hill that “intellectual property” is real.
IP and copyright aren’t the same. IP is a way for companies to own the idea behind a work, a character, a setting, etc.
Copyright is a protection from plagiarism. We as leftists support the person who does labor getting the value from that labor. Copyright, when used right, is just protecting that.
If you write a book, someone can’t come along and photocopy it and start selling your book under their own name, for example.
I am not “anti-AI,” despite what the person you responded to tried to make it sound like. I am anti-plagiarism of hardworking artists by huge companies. I would be perfectly fine with an artist, for example, feeding an AI model exclusively their own work to train it, or public domain works, and then using it to help them with tedious parts of drawing or something like that.
My point in mentioning that AI needs human-made art as input to work with is that it’s essentially a fancy photocopier with extra abilities. But companies are acting as if the AI is “making” things on its own, and stealing for themselves all the labor value that went into the art it trained on.
Copyright is one of the 4 types of intellectual property. Your misguided defense of the individual author strengthens publishing companies instead, since they own the means of production to copy and have the lawyers to litigate such violations.
Also you misunderstand how the technology works. Generative AI does not function by copying the data it was trained on, but by using the trends it noticed in that data to piece together something original. Examine the code of whichever LLM and you will never find any books, pictures, or movies stored within. It’s a sophisticated network of associations and dissociations.
Now you might then argue that these generalized statistics also constitute plagiarism, but consider what that entails. If mimicry is criminal, should it then be illegal for artists to imitate another’s style? Should musicians be able to patent chord progressions and leitmotifs? Should genres be property?
Your stance against AI is boxed within the existing bourgeois framework of creative ownership which I hope you agree is awful. I understand the precarity that this tech creates for artists but expanding IP will empower, not weaken, the companies that exploit them.
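To make the “statistics, not storage” point concrete, here is a toy sketch (my own illustration, nothing like a production LLM): a character-level bigram model trained on a string keeps only a table of transition counts, and generates new text by sampling from those counts. The training text itself is never stored verbatim in the model.

```python
from collections import defaultdict
import random

def train(text):
    # The trained "model" is only a table of character-transition
    # counts (statistics about the text), not a copy of the text.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    return counts

def generate(model, start, n):
    # Sample new text from the learned statistics alone.
    out = [start]
    for _ in range(n):
        followers = model.get(out[-1])
        if not followers:
            break
        chars, weights = zip(*followers.items())
        out.append(random.choices(chars, weights=weights)[0])
    return "".join(out)

model = train("the theory of the thing")
print(generate(model, "t", 20))
```

Real LLMs replace the count table with billions of learned weights, but the shape of the argument is the same: what is retained is a compressed statistical summary, not the works themselves.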
You seem to be missing the entire point. An artist makes a work, a company takes that work without paying them, feeds it to an AI, and produces other works which they can use as their own.
Examine the code of whichever LLM
The exact mechanisms behind how it works do not matter. Not to mention the fact that not even the people who make LLMs fully understand how their models work. So telling me to examine the code is ridiculous.
This is not about the resulting work being similar; it is about the original work by the artist being used to train the AI without their consent, and without compensating them.
When I said the AI is essentially a photocopier, I wasn’t talking about the technology behind it. That should have been obvious. I was talking about the material reality of what happens.
Photocopier: original work is scanned -> new work is created from it.
Generative AI: original work is scanned -> new work is created from it.
They are the same in this regard. Obviously I was not implying that they are the same mechanically.
The part that matters is that the original work is where the labor value is put in. It takes labor to create the original work, but it does not take labor to produce the new work, be that on a photocopier making copies or an AI generating stuff.
To pretend that the AI is just the same as some other artist mimicking a style is to show that you have no understanding of the labor theory of value, or that you simply do not care for it.
If another artist is mimicking a style, they are putting in their own labor to do so. They are adding labor value themselves. They are also using the original work in a consensual manner. When an artist puts out work, they are consenting to others viewing it and perhaps taking inspiration from it. What they are NOT consenting to is that work being scraped from the internet, fed into an AI, and used to pump out unlimited new works for someone else’s profit. Just as they are not consenting to someone photocopying their work and doing the same thing.
To try and argue that I’m the one supporting a bourgeois framework when you are the one who is seemingly completely ignoring where the actual value here comes from (the labor) is comical.
You continue to argue against things I never said as well: implying I advocated for expanding IP, and ignoring the fact that I very clearly made a distinction that I don’t support plagiarism done by companies. Then implying that whatever I would set up in place of the current system, which I never specified, would somehow benefit companies instead of artists. Funny how you just seem to imagine things I think or say when they aren’t true, then argue against those instead of what I actually said. Isn’t there a word for that?
You forgot one part of the LTV: when a thing can be created with less labor, all such things will have less labor value.
If an original gets photocopied 1000x, all of the copies will be worthless, as there was no labor put in. But the copies are indistinguishable from the original, so the original is now also worthless.
Copyright is taking something worthless (copies) and trying to artificially make it worth something, by suppressing people’s ability to share.
Now that LLMs exist, making something that could also be made by an LLM has the same labor value as when someone uses an LLM to create it (very little).
Well done for highlighting the LTV the way you did; it is an excellent examination of the case against proprietorship. (I was deliberating against commenting because it may feel like ganging up on the relative OP here, but I felt an upvote was not enough.)
Would you approve of AI if companies bought licenses from the artists in their training data? Is this the underlying issue for you, that capitalists aren’t playing by their own rules? This is a reasonable grievance but it’s hardly communist.
Regarding labor, don’t you want people to work less? Yes, the machine requires less manpower than a human would. That’s potentially liberatory. Of course, under bourgeois rule this technology is used to suppress the wages of artists. But that’s true for everything, which is why the problem is capitalism itself and why we shouldn’t cede control of this new technology to capitalists.
Also I don’t mean to put words in your mouth. I asked those questions to get you to think about how the anti-plagiarism laws you want for AI would manifest in real life. And I said that you’re an advocate for expanding intellectual property because you’re implying that artists should have more protections against having their work copied. When an artist’s work cannot be copied without the right granted to you, then they hold the copyright, a form of IP. This is shortsighted because those who are most able to defend their IPs and who have the most IPs to defend are not solo artists, but corporations. Broaden copyright laws and you’re directly giving power to Disney and the like.
P.S. chill out, damn. You’re being snarky as hell when both of us have been formal with you. I’m not trying to dunk on you and this isn’t reddit.
Also I don’t mean to put words in your mouth. I asked those questions to get you to think about how the anti-plagiarism laws you want for AI would manifest in real life.
Making the abstract concrete is always a good dialectical materialist approach. It is how we should often dissect a problem as marxists. For anyone still lurking and interested in further reading:
https://redsails.org/artisanal-intelligence/
https://polclarissou.com/boudoir/archive.html
For me, it’s one of those “oh, a lot of people talk when they don’t know what they’re talking about” moments. Sometimes you don’t see it until people start talking about a subject you know pretty well and have spent a lot of time around. Mind you, I don’t mean to imply I’m some expert on the subject matter and others should defer to me and take me at my word on all of it (I’m not, like, an actual ML engineer, which is a whole other level of familiarity). But I do have quite a bit of hobbyist knowledge of it and have had many discussions, with nuanced and varying perspectives on generative AI, among people who use it (which is an important part, because people sometimes talk about the subject like anyone who uses AI is a mindless tech bro cultist praying for the singularity to save humanity, and that’s simply not the case).
So yeah, I appreciate the nuance on it. And it’s something I always try to encourage when AI comes up. Though sadly, as you can see from the mood of voting and posting in this thread and others where AI has come up, there tends to be a significant amount of reactive passion outside of communities where people actually use AI and are relatively okay with it. Consistently, I see plenty of thoughtful takes on AI among people who use it, with mixed feelings about its problems and its strengths, and mixed feelings about what they are okay with using it for or not and why. That should tell you all you need to know about the nature of AI: that like anything new and disruptive, it can be for better or worse, and needs evaluation along the way. And that’s where we come in, making sure we find a way to take part in what happens with it, not let the capitalists dictate how it goes down. But to do that, we have to understand it properly. We can’t do surface-level moralistic evaluations and call it a day. Even setting aside the effectiveness of that as a way to engage with technology, we don’t have the organized power and messaging for that to actually mean anything anyway.
I hesitate to post this cause I’m afraid it’s going to sound like I’m insulting people in this thread, but I guess it’s somewhat of a vent post in a way. It is genuinely tiring that when AI comes up, outside of very niche communities with measured support of it, it starts feeling like people who are normally on the same side are ready to throw hands. I’m not exaggerating when I say that for me, it is stressful to engage with in this community.
It for sure seems like this topic sucks the theory out of comrades and turns them into mini Mickey mice, ready to kill to protect the sanctity of their IP. It’s either that or they’ll suddenly embrace idealism because pictures are only meaningful when they’re metaphysically imbued with human spirit or whatever.
In real life, this doesn’t bother me because I’m surrounded by libs. But it is aggravating how common reaction is on here and hexbear. I don’t understand how avid pirates can be so attached to intellectual property laws.
Good points. I’m ngl, it’s kind of disillusioning. I know there are problems with the western left, but it’s one thing to understand it in the abstract and another to see it so starkly in action, that people can be reduced to this over a single issue. It would be more understandable if it was an issue like the many we see under capitalism and imperialism that involve direct and obvious violence. Instead, we see people popping off about an automation process whose societal impact and effects are still under-investigated. We know there are problems with AI, at least in the short term, some of which are easily observable, but we can also easily observe some benefits, again at least in the short term. Contending with this as “scientific socialists” does have a certain conscious ideological bias (such as in favor of the working class), which I emphasize to say that it is not purely “objective” or something, but be that as it may, it also needs to be grounded in investigation, not mere navel-gazing.
And there is a noticeable lack of investigation relative to the amount of passionate screeds about AI. On this subject, it is especially noticeable to me because I have done investigation, even if informally, and that makes it obvious by contrast when others have not. Some, I can only guess, could perceive this as disrespect if they think they are informed and I am not being fair to them, but to that I say, “if the shoe fits,” as the saying goes. If someone has done the investigation, they should be able to back up their words with more than pure theory, and should not be taking personal offense at the observation that some people aren’t investigating. Marx didn’t merely write Capital via navel-gazing and then call it a day. He observed revolutionary movements, their successes and failures, and adjusted theory based on that. As did others who followed.
Takes on AI, like any other topic, need to be informed by an understanding of what is actually happening with AI in substance and not only via a cursory read of mainstream headlines and the opining of a primarily online reaction (I say primarily online because from everything I’ve personally seen and heard from others, the whole thing of AI being so controversial appears to be a primarily online thing and it’s more common that people in RL simply don’t care much about it one way or another, if they are even aware of its development). Even if everyone did investigate the substance of what is happening with AI, or at least read up on the investigations of others, there would still be disagreements of course, but I suspect they would be more measured and impersonal disagreements, as the discussions tend to be in spaces where AI is a shared hobby of a kind.
I feel similarly disillusioned about Westerners’ revolutionary potential and this AI “debate” is not helping. It feels like such a basic aspect of marxism.
Every time one wants to clarify that it is capitalism and not the technology, people do mental gymnastics, which usually ends up being some combination of a defense of proprietorship and a mysticizing of creativity (including the quality of AI output; if the output were of high enough quality, would that mean they would then support AI?).
There’s zero self-awareness of how Nietzschean they sound and how they are effectively saying that the automation of other people’s jobs is fine, but not theirs, because of some inherent superiority they bring to the table of humanity (usually artisanship). There is no examination of their own fear of proletarianization.
It really feels like their marxism, at least in this field, is vibes-based.
There could be lots of interesting discussions, for example how we could seize the means of production of AI, or how it could be used to organise or create agitprop, or to help progress towards the DOTP, but all of that is lost because people refuse to leave their liberal myopic bubble.
(The downvotes I’m not bothered about. Lemmygrad is a place for learning and pushing my understanding of theory. I want someone to show me the error of my ways so I can learn, but this one is such an easy “dunk” against reactionary takes. My own field (I don’t want to say which, to keep anonymity) is under threat, so I am less sympathetic when other so-called marxists refuse to expand their horizons.)
There’s zero self-awareness of how Nietzschean they sound and how they are effectively saying that the automation of other people’s jobs is fine, but not theirs, because of some inherent superiority they bring to the table of humanity (usually artisanship). There is no examination of their own fear of proletarianization.
You know, when you put it this way, it makes me wonder how much of it is fueled by a quiet elitism that people are not even being conscious of. The OP pic I think touches on this, albeit unintentionally; the implication is that the annoying parts of existence are fine to automate, but the cool parts shouldn’t be, and that the annoying parts are icky manual labor and that the cool parts are expressing yourself. This I don’t think is inherently elitist on the face of it, but it does imply a very particular view about the world, which isn’t necessarily shared by everyone and one that arguably derives, at least in part, from certain elitist societal structures. It’s wealthy people who, automation or no, can have other people do the icky parts and then they do the fun parts. And the idea of others being able to do this too because of AI leaves out the ugly and inconvenient reality that if you automate the “icky” jobs, but you don’t address class/caste issues along with it, what you get is a bunch of people who were already on the lower rungs of class and caste, who are now out of work and have no replacement job.
Nowhere in the online screeds about AI do I recall seeing mention of this problem, but plenty is said about “art.” The unspoken implication seems to be that it’s fine to leave the factory worker types high and dry, but don’t you dare come for the “creatives.” It is at times talked about almost as if the history of automation started with the AI transformer model proof-of-concept paper Attention Is All You Need, and suddenly “creatives” rose to the occasion, all of a sudden realizing what is wrong with automation.
It would be more understandable, I think, if people who suddenly feel so strongly about “AI” were consistently speaking up about the automation of “icky” jobs in the class strata as well. This does not appear to occur, though, and it doesn’t come across to me as an intentional, selective blindness. It comes across, as in the OP pic, like it simply doesn’t occur to people, because they are so used to viewing “icky” jobs as inherently unwanted that of course it’s fine to automate them, since “no one really wants to be doing it anyway.” Which would be fine if there was an actual answer for what those people are supposed to do with their lives that allows them to have food and shelter in a capitalist world. I mean, there was a period when the big thing was “learn to code”; now the coding field is saturated to hell with code bootcamps and other such stuff. It is more competitive than ever, which is probably better for the employer and not so much for the employee. And now automation is coming for coding too.
The lesson should not be that “X is the one untouchable field and others are okay to automate.” The lesson should be that there is no “safe” job to hide out from a system like capitalism and exist outside of its problems. That we need to organize with each other about it and stop pretending we can be one of the “elites” on the edges, as a spectator.
The OP pic I think touches on this, albeit unintentionally; the implication is that the annoying parts of existence are fine to automate, but the cool parts shouldn’t be, and that the annoying parts are icky manual labor and that the cool parts are expressing yourself.
There is certainly an undercurrent of this: labour-aristocratic mores.
For what it’s worth I’m with you on this one. There’s a lot of robot-bashing on the left when fundamentally the complaints are about the commodification of art, not about the tools used to make it. It’s frankly unmarxist to stand against AI, at least generally. I really hope no one here will die on the hill that “intellectual property” is real.
IP and copywrite arent the same. IP is a way for companies to own the idea behind a work, a character, setting, etc.
Copywrite (copyright? Idk) is a protection from plagarism. We as leftists support the person who does labor getting the value from that labor. Copywrite when used right is just protecting that idea.
If you write a book someone cant come along and photo copy it and start selling your book under their own name for example.
I am not “anti-AI” despite what the person you responded to tried to make it sound like. I am anti plagarism of hard working artists by huge companies. I would be perfectly fine with an artist for example feeding an AI model exclusively their own work to train it, or public domain works, and then using it to help them with tedious parts of drawing or something like that.
My point of mentioning that AI needs human made art as input to work with is that its essentially a fancy photo copier with extra abilites. But companies are acting as if the AI is “making” things on its own and stealing all the labor value that went into the art it trained on for themselves.
Copyright is one of the 4 types of intellectual property. Your misguided defense of the individual author strengthens publishing companies instead, since they own the means of production to copy and have the lawyers to litigate such violations.
Also you misunderstand how the technology works. Generative AI does not function by copying the data it was trained on, but by using the trends it noticed in that data to piece together something original. Examine the code of whichever LLM and you will never find any books, pictures, or movies stored within. It’s a sophisticated network of associations and dissociations.
Now you might then argue that these generalized statistics also constitute plagiarism, but consider what that entails. If mimicry is criminal, should it then be illegal for artists to imitate another’s style? Should musicians be able to patent chord progressions and leitmotifs? Should genres be property?
Your stance against AI is boxed within the existing bourgeois framework of creative ownership which I hope you agree is awful. I understand the precarity that this tech creates for artists but expanding IP will empower, not weaken, the companies that exploit them.
You seem to be missing the entire point. An artist makes a work, a company takes that work without paying them, feeds it to an AI, and produces other works which they can use as their own.
The exact mechanisms behind how it works does not matter. Not to mention the fact that not even the people who make LLMs know how their code works. So telling me to examine the code is ridiculous.
This is not about the resulting work being similar it is about the original work by the artist being used to train the AI without their consent, and without compensating them.
When i said the AI is essentially a photo copier i wasn’t talking about the technology behind it. That should have been obvious. I was talking about the material reality of what happens.
Photo copier: Original work is scanned -> new work is created from it.
Generative AI: Original work is scanned -> new work is created from it.
They are the same in this regard. Obviously i was not implying that they are the same mechanically.
The part that matters is that the original work is where the labor value is put in. It takes labor to create the original work, but does not take labor to produce the new work. Be that on a photo copier making copies, or on an AI generating stuff.
To pretend as if the AI is just the same as some other artist mimicing a style is to show you have no understanding of the labor theory of value, or you simply do not care for it.
If another artist is mimicing a style they are putting in their own labor to do so. They are adding labor value themselves. They are also using the original work in a consentual manner. When an artist puts out work they are consenting to others viewing it and perhaps taking inspiration from it. What they are NOT consenting to is that work being scraped from the internet, fed into an AI, and used to pump out unlimited new works for someone elses profit. Just as they are not cosenting to someone photo copying their work and doing the same thing.
To try and argue that I’m the one supporting a bourgeois framework when you are the one who is seemingly completely ignoring where the actual value here comes from (the labor) is comical.
You continue to argue against things i never said aswell. Implying i advocated for expaning IP, and ignoring the fact i very clearly made a distinction that i don’t support plagarism done by companies. Then implying that whatever i would setup in place of the current system, which i never specified, would somehow benefit companies instead of artists. Funny how you just seem to imagine things I think or say when they arent true. Then argue against those instead of what i actually said. Isn’t there a word for that?
You forgot one part in the LTV. When a thing can be created with less labor, all of those things will have less labor value.
If an original gets photocopied 1000x, all of the copies will be worthless, as there was no labor put in. But the copies are indistinguishable from the original, so the original is now also worthless.
Copyright is taking something worthless (copies) and trying to artificially make it worth something, by suppressing people’s ability to share.
Now that LLMs exist, making something that could also be made by a LLM has the same labor value as when someone uses a LLM to create it (very little).
Well done for highlighting the LTV the way you did; it is an excellent examination of the defense against proprietorship. (I was deliberating against commenting because it may feel like ganging up on the relative OP here but I felt an upvote was not enough)
Would you approve of AI if companies bought licenses from the artists in their training data? Is this the underlying issue for you, that capitalists aren’t playing by their own rules? This is a reasonable grievance but it’s hardly communist.
Regarding labor, don’t you want people to work less? Yes the machine takes less manpower than a human. That’s potentially liberatory. Of course under bourgeois rule, this technology is used to suppress the wages of artists. But that’s true for everything, which is why the problem is capitalism itself and why we shouldn’t cede control of this new technology to capitalists.
Also I don’t mean to put words in your mouth. I asked those questions to get you to think about how the anti-plagiarism laws you want for AI would manifest in real life. And I said that you’re an advocate for expanding intellectual property because you’re implying that artists should have more protections against having their work copied. When an artist’s work cannot be copied without the right granted to you, then they hold the copyright, a form of IP. This is shortsighted because those who are most able to defend their IPs and who have the most IPs to defend are not solo artists, but corporations. Broaden copyright laws and you’re directly giving power to Disney and the like.
P.S. chill out, damn. You’re being snarky as hell when both me are memorable have been formal with you. I’m not trying to dunk on you and this isn’t reddit.
Making the abstract concrete is always a good dialectial materialist approach. It is how we should often dissect a problem as marxists. For anyone still lurking and interested in further reading:
https://redsails.org/artisanal-intelligence/
https://polclarissou.com/boudoir/archive.html
Yet another red sails banger. 🔥🔥🔥
For me, it’s one of those “oh a lot of people talk when they don’t know what they’re talking about” moments. Sometimes you don’t see it until people start talking about a subject you know pretty well and have a spent a lot of time around. Mind you, I don’t mean to imply I’m some expert on the subject matter and others should defer to me and take me at my word on all of it (I’m not, like, an actual ML engineer, which is a whole other level of familiarity). But I do have quite a bit of hobbyist knowledge of it and have had many discussions with people, with nuanced and varying perspectives on generative AI, among people who use it (which is an important part, because people sometimes talk about the subject like anyone who uses AI is a mindless tech bro cultist praying for the singularity to save humanity and that’s simply not the case).
So yeah, I appreciate the nuance on it. And it’s something I always try to encourage when AI comes up. Though sadly, as you can see from the mood of voting and posting in this thread and others where AI has come up, there tends to be a significant amount of reactive passion outside of communities where people actually use AI and are relatively okay with it. Consistently, I see plenty of thoughtful takes on AI among people who use AI, with mixed feelings about its problems and its strengths, and mixed feelings on what they are okay on using it for or not using it for and why, and that should tell you all you need to know about the nature of AI. That like anything new and disruptive, it can be for better or worse, and needs evaluation along the way. And that’s where we come in, making sure we find a way to take part in what happens with it, not let the capitalists dictate how it goes down. But to do that, we have to understand it properly. We can’t do surface level moralistic evaluations and call it a day. Even setting aside the effectiveness of that as a way to engage with technology, we don’t have the organized power and messaging for that to actually mean anything anyway.
I hesitate to post this cause I’m afraid it’s going to sound like I’m insulting people in this thread, but I guess it’s somewhat of a vent post in a way. It is genuinely tiring that when AI comes up, outside of very niche communities with measured support of it, it starts feeling like people who are normally on the same side are ready to throw hands. I’m not exaggerating when I say that for me, it is stressful to engage with in this community.
It for sure seems like this topic sucks the theory out of comrades and turns them into mini Mickey mice, ready to kill to protect the sanctity of their IP. It’s either that or they’ll suddenly embrace idealism because pictures are only meaningful when they’re metaphysically imbued with human spirit or whatever.
In real life, this doesn’t bother me because I’m surrounded by libs. But it is aggravating how common reaction is on here and hexbear. I don’t understand how avid pirates can be so attached to intellectual property laws.
Good points. I’m ngl, it’s kind of disillusioning. I know there are problems with the western left, but it’s one thing to understand it in the abstract and it’s another thing to see it so starkly in action, that people can be reduced to this over a single issue. It would be more understandable if it was an issue like the many we see under capitalism and imperialism that involve direct and obvious violence. Instead, we see people popping off about what is an under-investigated automation process going on, as to the extent of its societal impact and effects. We know there are problems with AI at least in the short-term, some of which is easily observable, but we can also easily observe some benefits, again, at least in the short-term. Contending with this as “scientific socialists” does have a certain conscious ideological bias (such as in favor of the working class), which I emphasize to say that it is not purely “objective” or something, but be that as it may, it also needs to be grounded in investigation, not merely navel-gazing.
And there is a noticeable lack of investigation relative to the amount of passionate creeds about AI. On this subject, it is especially noticeable to me because I have done investigation, even if informally, and it makes it obvious by contrast when others have not. Some I can only guess could perceive this as a disrespect to them if they think they are informed and I am not being fair to them, but to that I say, “if the shoe fits” as the saying goes. If someone has done the investigation, they should be able to back up their words with more than pure theory and should not be taking personal offense to the accusation that some people aren’t investigating. Marx didn’t merely write Capital via navel-gazing and then call it a day. He observed revolutionary movements, their successes and failures, and adjusted theory based on that. As did others who followed.
Takes on AI, like any other topic, need to be informed by an understanding of what is actually happening with AI in substance and not only via a cursory read of mainstream headlines and the opining of a primarily online reaction (I say primarily online because from everything I’ve personally seen and heard from others, the whole thing of AI being so controversial appears to be a primarily online thing and it’s more common that people in RL simply don’t care much about it one way or another, if they are even aware of its development). Even if everyone did investigate the substance of what is happening with AI, or at least read up on the investigations of others, there would still be disagreements of course, but I suspect they would be more measured and impersonal disagreements, as the discussions tend to be in spaces where AI is a shared hobby of a kind.
I feel similarly disillusioned about Westerners’ revolutionary potential and this AI “debate” is not helping. It feels like such a basic aspect of marxism.
Every time one wants to clarify that it is capitalism and not the technology, people do mental gymnastics that usually end up being some combination of defending proprietorship and mystifying creativity (including the quality of AI output; if the output were of high enough quality, would that mean they would then support AI?).
There’s zero self-awareness of how Nietzschean they sound, and of how, effectively, they are saying the automation of other people’s jobs is fine but not theirs, because of some inherent superiority they bring to the table of humanity (usually artisanship). There is no examination of their own fear of proletarianization.
It really feels like their marxism, at least in this area, is vibes-based.
There could be lots of interesting discussions, for example about how we could seize the means of production of AI, how it could be used to organise or create agitprop, or how it could help progress towards the DOTP, but all of that is lost because people refuse to leave their liberal, myopic bubble.
(I’m not bothered about the downvotes. Lemmygrad is a place for learning and pushing my understanding of theory. I want someone to show me the error of my ways so I can learn, but this one is such an easy “dunk” against reactionary takes. My own field (I won’t say which, to keep my anonymity) is under threat, so I am less sympathetic when other so-called marxists refuse to expand their horizons.)
You know, when you put it this way, it makes me wonder how much of it is fueled by a quiet elitism that people are not even conscious of. The OP pic, I think, touches on this, albeit unintentionally; the implication is that the annoying parts of existence are fine to automate, but the cool parts shouldn’t be, and that the annoying parts are icky manual labor while the cool parts are expressing yourself. This I don’t think is inherently elitist on the face of it, but it does imply a very particular view of the world, one which isn’t necessarily shared by everyone and which arguably derives, at least in part, from certain elitist societal structures. It’s wealthy people who, automation or no, can have other people do the icky parts while they do the fun parts. And the idea of others being able to do this too because of AI leaves out the ugly and inconvenient reality that if you automate the “icky” jobs but don’t address class/caste issues along with it, what you get is a bunch of people who were already on the lower rungs of class and caste, who are now out of work and have no replacement job.
Nowhere in the online screeds about AI do I recall seeing mention of this problem, but plenty is said about “art.” The unspoken implication seems to be that it’s fine to leave the factory worker types high and dry, but don’t you dare come for the “creatives.” It is at times talked about almost as if the history of automation started with the transformer model proof-of-concept paper Attention Is All You Need, and the “creatives” suddenly rose to the occasion, all at once realizing what is wrong with automation.
It would be more understandable, I think, if people who suddenly feel so strongly about “AI” were consistently speaking up about the automation of “icky” jobs across the class strata as well. This does not appear to occur, though, and it doesn’t come across to me as an intentional, selective blindness. It comes across, such as in the concept of the OP pic, like it simply doesn’t occur to people, because they are so used to viewing “icky” jobs as this inherently unwanted thing that of course it’s fine to want to automate them because “no one really wants to be doing it anyway.” Which would be fine if there were an actual answer for what those people are supposed to do with their lives that allows them to have food and shelter in a capitalist world. I mean, there was a period when the big thing was “learn to code”; now the coding field is saturated to hell with code bootcamps and other such stuff. It is more competitive than ever, which is probably better for the employer and not so much for the employee. And now automation is coming for coding too.
The lesson should not be that “X is the one untouchable field and others are okay to automate.” The lesson should be that there is no “safe” job to hide out from a system like capitalism and exist outside of its problems. That we need to organize with each other about it and stop pretending we can be one of the “elites” on the edges, as a spectator.
There is certainly an undercurrent of this: labour-aristocratic mores.
https://lemmygrad.ml/post/7917393/6397188