Sometime this month, Reddit will go public at a valuation of $6.5bn. Select Redditors were offered the chance to buy stock at the initial listing price, which it hasn’t announced yet but is expected to be in the range of $31-34 per share. Regardless of the actual price,
I think the advancement of LLMs, which culminated in the creation of ChatGPT, is this generation’s Eternal September. In a couple of decades, we’ll talk about how the internet “used to be” before free, public websites were abandoned because our CAPTCHAs could no longer filter out bots, and device attestation and continuous micropayments became the only way to keep platforms spam free.
Even when Microsoft and OpenAI stop hemorrhaging money by giving away stuff like ChatGPT basically for free, the spam farms will soon be running this stuff themselves. I expect a wave of internet users to get upset and call paying for the services they use “enshittification”, because people don’t realise how much running these AI models actually costs.
I think this will also start the transition to AI being sold like Netflix subscriptions or mobile data caps, and to an “every company that doesn’t get the most expensive AI will start lagging behind” economy. After all, AI only needs to cost a little less than the manpower it’s replacing. Any internet-facing company needs good AI to outwit the AI trying to abuse the cheap or free services (like trials) it may offer.
We’re probably lucky that AI spammers haven’t discovered the Fediverse yet, but if the Fediverse does actually become big enough for mainstream use, we’ll see Twitter-level reaction spam in no time, and no amount of CAPTCHAs will be able to stop it.
Part of what makes Twitter, Reddit, etc. such easy targets for bot spammers is that they’re single-point-of-entry. You join, you have access to everyone, and then you exhaust an account before spinning up 10 more.
The Fediverse has some advantages and disadvantages here. One significant advantage is that – particularly if, when the dust finally settles, it’s a big network of a large number of small sites – it’s relatively easy to cut off nodes that aren’t keeping the bots out. One disadvantage, though, is that it can create a ton of parallel work if spam botters target a large number of sites to sign up on.
A big advantage, though, is that most Fediverse sites are manually moderated and administered. By and large, sites aren’t looking to offload this responsibility to automated systems, so what needs to get beaten is not some algorithmic puzzle, but human intuition. Though, the downside to this is that mods and admins can become burned out dealing with an unending stream of scammers.
We had a bunch of Japanese teenagers run scripts on their computers and half the Fediverse was full of spam. If someone really cared about spamming, this shit wouldn’t stop as quickly.
No Fediverse tools have sufficient spam prevention measures right now. The best we have is individually blocking every server, but there are thousands of servers that can be abused by a very basic account creation + spam script.
Manual moderation will lead to small/single user instances getting barred from participating, leading back to centralisation on a few vetted servers. We need automated tools, across all parts of the Fediverse, or the network will be in a constant flux between waves of spam and overbearing defederation to fight the spam waves. Especially once spammers start bypassing CAPTCHAs.
The upside of that attack is that instance admins had to raise their game, and now most of the big instances are running anti-spam bots and sharing intelligence. Next time we’ll be able to move quickly and shut it all down, whereas this time we were scrambling to catch up. Then the spammers will evolve their attack and we’ll raise our game again.
It’s true that the toolset isn’t here now, and the network is actually very fragile at the moment.
It’s also true that platform builders don’t seem to want to deal with these kinds of tools, for raisins.
But it’s also true that temporary blocks are both effective and not that big of a deal.
I’m not sure why you’d think that manual moderation will lead to small instances getting barred, though. Unless you’re predicting that federation will move to whitelisting, rather than blacklisting? That’s historically been the tool of corporate services, not personal or community ones.
Lemmy has been using whitelist based federation right up until people started moving over from Reddit, so it’s not exactly a new approach.
With new domains costing anywhere between $3 and nothing at all, setting up thousands of spam servers isn’t that difficult or expensive. There’s already a tool that’s designed to allow bypassing blocks automatically by simply feeding it a second domain. If spammers actually cared about the Fediverse, they’d be all over it in no time.
But the big danger right now is that free, open servers, big or small, don’t have much in the way of verification or bot prevention. Some instances don’t have any protection at all (which the Japanese spam wave abused), others are using basic CAPTCHAs that Copilot will happily solve for you. On centralised services this problem can be fixed temporarily by using technologies like strict device attestation (rip Linux/custom ROM/super cheap phone users), but in a decentralised environment this won’t work. Then there are the many, many servers that never received patches and still have the Mastodon account takeover vulnerability, for instance.
Small servers will have to prove themselves to the servers they want to federate with, or abuse will be too easy.
I don’t think temporary blocks are a solution. So far, the attacks have focused on tiny servers with one or a couple of users, but with the rise of AI I don’t think the bigger servers will be able to stop dedicated spammers either. The spam wave is over for now, mostly because a few of the Japanese kids got arrested or had their parents find out. Right up until the very end, Lemmy and Mastodon were full of spam.
I don’t want this recentralisation to happen, but I think the Fediverse will end up like email: strict, often arbitrary spam prevention systems that make running your own very difficult. After all, email is the original federated digital network, and it’s absolutely full of stupid restrictions and spam. ActivityPub may have signatures to authenticate users, something that even DKIM still lacks, but the “short message + picture” nature of most Fediverse content makes it very difficult to write good spam detection rules for. Maybe someone will create some kind of AI solution, who knows, but I expect deliverability to become as problematic as with email, or maybe even worse.
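To illustrate why rule-based filtering struggles with short posts: a toy keyword-score filter (all names and word lists made up for this sketch) has almost nothing to work with in a one-line toot, so a genuine post trips it as easily as spam does.

```python
# Toy spam heuristic: fraction of words that appear on a blocklist.
# SPAM_WORDS and the scoring scheme are illustrative, not a real filter.
SPAM_WORDS = {"free", "crypto", "airdrop", "winner"}

def spam_score(text: str) -> float:
    """Return the fraction of words in `text` that look spammy."""
    words = text.lower().split()
    if not words:
        return 0.0
    hits = sum(1 for w in words if w.strip(".,!") in SPAM_WORDS)
    return hits / len(words)

# With only a handful of words per post, one unlucky word dominates:
# "free software is great" scores 0.25, the same order of magnitude
# as an actual scam post. Long-form email gives a filter far more signal.
```

With so few tokens per message, any threshold either misses spam or flags ordinary posts, which is the core of the deliverability worry above.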
I can’t think of a good solution here. Our best bet may be hoping that people won’t be too dickish, or to keep the Fediverse out of the mainstream so all the spammers go to Threads and Bluesky first.
If it really ramps up, we could share block lists too, like with ad blockers. So if a friend (or nth-degree friend) blocks someone, then you would block them automatically.
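The friend-of-friend block sharing above could be sketched as a simple graph walk (all names hypothetical; real implementations would need to weigh trust per hop):

```python
# Minimal sketch of nth-degree block-list sharing: collect the blocks of
# everyone within `max_degree` follow-hops of `me`.
from collections import deque

def inherited_blocks(me, follows, blocks, max_degree=2):
    """Return `me`'s own blocks plus those of friends up to `max_degree` hops away."""
    seen = {me}
    queue = deque([(me, 0)])
    result = set(blocks.get(me, ()))
    while queue:
        user, depth = queue.popleft()
        if depth == max_degree:
            continue  # don't walk past the trust horizon
        for friend in follows.get(user, ()):
            if friend not in seen:
                seen.add(friend)
                result |= set(blocks.get(friend, ()))
                queue.append((friend, depth + 1))
    return result
```

For example, if alice follows bob and bob follows carol, alice inherits both of their blocks at `max_degree=2` but only bob’s at `max_degree=1`.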
That work has already started with Fediseer. It’s not automatic, but it’s really easy, which is probably the best we’ll get for a while.
I am so tired of this bullshit. Every time I’ve turned around, for the past thirty years now, I’ve seen some variation on this same basic song and dance.
Yet somehow, in spite of supposedly being burdened with so much expense and not given their due by a selfish, ignorant public, these companies still manage to build plush offices on some of the most expensive real estate on the planet and pay eight- or even nine-figure salaries to a raft of executive parasites.
When they start selling assets and cutting executive salaries, or better yet laying them off, then I’ll entertain the possibility that they need more revenue. Until then, fuck 'em.
These companies collect investment money from either investors or other parts of the company that do make money. They give away their product for free to create a user base, and figure out proper monetization later.
When the economy takes a dive and borrowing money costs money again (for years, banks had negative interest rates on huge loans, which meant they effectively paid you to take their money), the funds of venture capitalists suddenly dry up and companies like Netflix and Uber suddenly need to raise prices.
Nine figure salaries are nothing compared to how much training AI costs. The same goes for most services, to be honest.
I don’t get where the entitlement comes from, to be honest. Why should companies keep giving away shit for free? They’re neither governments nor charities. These companies are flushing billions down the drain giving away free stuff to gain market share and attract more money they can put into free services, until they can grow no more. That’s unsustainable and impossible to compete with fairly.
It’s good for the internet to cost money. If customers need to pay for the stuff they’re using, we maintain the possibility of fair competition. Without competition, billionaires and hedge funds control the internet. If you demand that everything come for free, you’re only playing into Google’s/Facebook’s/Microsoft’s/Apple’s hands.
What “entitlement?”
I don’t expect anyone to start a web site or service or to give me or anyone else access to it at all, much less for free.
I’m just making the very narrow point that when a company chooses to do all of that, and manages to make enough money to build a plush corporate headquarters on some of the most expensive real estate on the planet and pay its executives millions or even tens or hundreds of millions of dollars, then starts crying about not making enough money, that’s self-evident bullshit.
If anybody’s acting “entitled” in that scenario, it’s the greedy corporate weasels who spend billions on their own privilege, then expect us to cover their asses when they come up short.
I was thinking about this the other day. We might have to move to a whitelist federation model with invite-only instances at some point.
The downside of that approach is that AI can pretend to be humans wanting to join quite well. It’s possible to set up a lobste.rs-like system where there’s a tree of people you’ve invited, so admins can cull entire spam groups at once, but that also has its downsides (e.g. it’s impossible to join if none of your friends have already joined, or if you don’t want to attach your online socials to your friends).
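The invite-tree culling described above amounts to banning one account and everything downstream of it. A minimal sketch (account names and the `invited_by` mapping are hypothetical):

```python
# Each account records who invited it; banning a root then sweeps up
# every account whose invite chain leads back to that root.
invited_by = {
    "mallory": "eve",     # eve invited mallory
    "bot1": "mallory",    # mallory invited the bots
    "bot2": "mallory",
    "dave": "alice",      # unrelated branch of the tree
}

def cull_subtree(root, invited_by):
    """Return `root` plus every account transitively invited by it."""
    banned = {root}
    changed = True
    while changed:
        changed = False
        for account, inviter in invited_by.items():
            if inviter in banned and account not in banned:
                banned.add(account)
                changed = True
    return banned
```

Banning “eve” here takes out mallory and both bots in one sweep, while dave (invited through a different branch) is untouched.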
It’s a trade off that we’ll probably have to take unless we want to deanonymize the internet.
I don’t think that’s a perfect system anyway, though: spammers could create a massive tree of fake accounts and only use a small proportion of them for spam.
Use a number of compromised user accounts to set this up and it becomes a nightmare.
And that is how you get singular point of view echo chamber.
Most of the internet is made up of echo chambers now even though anyone and everyone can access a majority of it. I don’t think being selective in who we allow into communities worsens the pre-existing echo chamber issue. If anything it may help to be more selective. It can sometimes be impossible to tell the difference between trolls, bots, and real people, so I feel like we assume every person we disagree with is a troll or bot. The issue with that is that we may be outright dismissing real opinions. In theory, everyone in a selective community is a real person who is expressing their true thoughts and feelings.
Instead of being this gen’s September 1993, I feel like the changes being sped up by the introduction of generative models are finally forcing us into October 1993. As in: they’re reverting some aspects of the internet to how they used to be.
That spells tragedy of the commons for those companies. Them ruining themselves will probably have a mixed impact on us [Internet users in general].