Elon Musk’s X failed to block a California law that requires social media companies to disclose their content-moderation policies.
U.S. District Judge William Shubb rejected the company’s request in an eight-age ruling on Thursday.
“While the reporting requirement does appear to place a substantial compliance burden on social media companies, it does not appear that the requirement is unjustified or unduly burdensome within the context of First Amendment law,” Shubb wrote, per Reuters.
The legislation, signed into law in 2022 by California Gov. Gavin Newsom, requires social media companies to publicly post their policies regarding hate speech, disinformation, harassment and extremism on their platforms. They must also report data on their enforcement of those policies.
Am I correct in saying that the law is a transparency law and not a moderation law?
It would appear so, but anything to do with digital spaces is murky.
We kind of treat digital space the way we do physical space: since the digital space is owned, the people who own it get to set the rules and policies which govern the space… But just as a shopping mall can’t eject you for the sole reason of being a specific race, certain justifications within moderation policies are theoretically grounds for constitutional protections.
However, it is a fucking mess to try and use a court to actually enforce these laws the way we do in physical spaces. Here in Canada, for instance, uttering threats, performing hate speech to a crowd, and scribbling swastikas on things are illegal. But do that over a video game chat or some form of anonymizing social media and suddenly you’re dealing with citizens of other countries with different laws, plus a layer of difficulty in determining the source that would require a warrant to obtain. Even if both people are Canadian, you would need a court date, documentation that the law was appropriately followed in obtaining all your evidence, proof of guilt, and a decision on where the defendant must physically show up to defend themselves. And even if you do prove assault by uttering threats, or hate speech violations… they would probably just get a fine or community service.
Nobody has time for that.
So if you want to enforce the protections of these laws, either you hold the platform responsible for internal policing of the law and determine whether it is discharging its duty properly, by giving citizens a means to check for and report violations of its own internal policies for later review and a means to pursue civil cases… or you go hands-off and give a platform’s users the means to check and make informed choices based on their own personal standards and ethical principles. Every moderation policy leaves a burden on someone; the question is who.
So it might be a transparency law, but it also opens the door for applying constitutional/civil rights law protections to users by holding the business accountable if there are glaring oversights in their digital fiefdoms… but such laws are basically inert until someone tries to challenge them.
That’s fascinating. Thanks for taking the time to explain all this. TIL
Thanks!
deleted by creator
In Canada it’s a partial mix of protections granted by Charter rights and expanded by the Human Rights Act to apply more universally, but in the US you’re right: it’s covered just under the Civil Rights Act, I think?
I may have slipped into a common error by mentioning constitutional affairs where they don’t belong.
As well as what the other comment says, it also allows people and businesses to see whether a platform’s moderation is appropriate for them and decide whether or not to use it on that basis. Transparency can drive moderation.
no, it’s transparency about moderation:
Under AB 587, a “social media company” that meets the revenue threshold must provide to the California AG:

- A copy of its current terms of service.
- Semiannual reports on content moderation.

The semiannual reports must include: (i) how the terms of service define certain categories of content (e.g., hate speech, extremism, disinformation, harassment and foreign political interference); (ii) how automated content moderation is enforced; (iii) how the company responds to reports of violations of the terms of service; and (iv) how the company responds to content or persons violating the terms of service.

The reports must also provide detailed breakdowns of flagged content, including:

- the number of flagged items;
- the types of flagged content;
- the number of times flagged content was shared and viewed;
- whether action was taken by the social media company (such as removal, demonetization or deprioritization); and
- how the company responded.
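To make the shape of those semiannual reports concrete, here is a minimal sketch of the kind of data model they describe. This is purely illustrative: the statute defines the categories and required disclosures, not any schema, and every class and field name below is my own invention.

```python
from dataclasses import dataclass

# Illustrative only: AB 587 specifies what must be reported, not how it is
# structured. All names here are hypothetical.

@dataclass
class FlaggedContentStats:
    """Per-category breakdown of flagged content in a semiannual report."""
    items_flagged: int
    times_shared: int
    times_viewed: int
    action_taken: str  # e.g. "removed", "demonetized", "deprioritized", "none"

@dataclass
class SemiannualReport:
    """Rough shape of one AB 587 semiannual report to the California AG."""
    terms_of_service: str                  # copy of the current ToS
    category_definitions: dict[str, str]   # e.g. how "hate speech" is defined
    automated_moderation: str              # how automated enforcement works
    report_handling: str                   # how user reports are processed
    violation_handling: str                # how violating content/users are handled
    flagged_content: dict[str, FlaggedContentStats]

# Example report with made-up numbers, just to show the structure.
report = SemiannualReport(
    terms_of_service="(full current ToS text)",
    category_definitions={"hate speech": "content attacking a protected class"},
    automated_moderation="classifier flags posts for human review",
    report_handling="user reports are triaged within 48 hours",
    violation_handling="strikes, then suspension",
    flagged_content={
        "hate speech": FlaggedContentStats(
            items_flagged=1200,
            times_shared=5400,
            times_viewed=910000,
            action_taken="removed",
        )
    },
)
print(report.flagged_content["hate speech"].items_flagged)  # prints 1200
```

The point is just that the law demands structured, auditable disclosures rather than any particular moderation outcome.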
It’s baffling that some people are convinced that he’s fighting the good fight for them. The absolute donuts.
Oh no, how unfortunate. Poor Elon!
If you don’t put /s then that’s your real opinion. People actually hold that stance.
I think it was obvious enough that this wasn’t serious
You’re 100% right. Ignore that dork.
“We just want you to be honest with us.”
“What? That’s outrageous! We’d never do another dime of business if we aren’t allowed to lie!”
And it comes after the EU decided to actually start enforcing its moderation laws! #BrusselsEffect
What is an “eight-age ruling?”
Eight-page is what I’m assuming it’s supposed to say.
Ah, thanks
Government officials and the rich need to guarantee everything is underage before interacting with it
deleted by creator
What’s up with his face? Did he have a lightsaber battle with Mace Windu?
“Pure evil” —the Boer