What do you mean by “this stuff?” Machine learning models are a fundamental part of spam prevention and have been for years. The concept is just flipping it around for use by the individual, not the platform.
If by reliably you mean 99% certainty about one particular review, yeah, I wouldn’t believe it either. A 95% confidence interval on what proportion of a given page’s reviews are bots, now that’s plausible. If a human can tell a review was botted, you can certainly train a model to do so as well.
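To put a rough number on that second claim, here’s a quick sketch of a 95% interval for the flagged proportion using the normal approximation; the counts are made up and not tied to any particular detector:

```rust
// Rough sketch: 95% confidence interval for the share of botted reviews on a page,
// using the normal approximation. The counts below are made up for illustration.
fn main() {
    let flagged = 180.0_f64; // reviews a model flags as botted (hypothetical)
    let total = 1_000.0_f64; // reviews sampled from the page (hypothetical)

    let p = flagged / total; // point estimate of the proportion
    let se = (p * (1.0 - p) / total).sqrt(); // standard error of the proportion
    let margin = 1.96 * se; // z-score for a 95% interval

    println!(
        "{:.1}% flagged as bots, 95% CI: {:.1}%..{:.1}%",
        100.0 * p,
        100.0 * (p - margin),
        100.0 * (p + margin)
    );
}
```

The point is that a per-page aggregate comes with honest error bars, which is a much easier promise to keep than certainty about any single review.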
Cool it with the universal AI hate. There are many kinds of AI, detecting fake reviews is a totally reasonable and useful case.
If you read carefully, this is actually very similar to the Steam news. I doubt Valve or GOG care, but generally the games are “sold” by the publisher as non-transferable licenses for you to play them. So the part that matters isn’t up to them.
Note the versions: none of the results give you the official operators page for the current version, 16. They give version 9, which went EOL in 2021.
Codeberg is run off of donations; they have no service-contract revenue. Nobody, much less a volunteer, wants to commit to a 5 or 10 year service plan like that, and it’s not sustainable for a small project from a non-profit.
CLAs can be abusive, but not necessarily. Apache Foundation contributors need to sign CLAs, which essentially codify in contract form the terms of the Apache 2.0 license. It’s a precaution in case some jurisdiction doesn’t uphold the passive licensing scheme used otherwise. There’s also a relicensing clause, but it’s restricted to keeping with the spirit of the original license; they can’t close the source.
Arch for stuff I have physical access to. Nothing’s ever gone wrong, so it’s worth it for the immediate updates and consistency with my other systems. For VPS I use Debian though, occasionally the unstable/Sid branch if I really need the latest updates. There are almost always Debian images available on a VPS.
Forgejo is my go-to. I ran it on a GCP micro instance, which has 768 MB of RAM and a piddling processor. One of my friends works for a company that had all their devs run a local instance in addition to the main repo; it was that light.
Gitea is the former go-to, but it was hijacked and stolen from the community by a for-profit company. Forgejo is currently a drop-in replacement fork, but with added privacy features, future federation options, and a reputable parent organization.
Architecture emulation for current-gen games is exceptionally unlikely right now. At a fundamental level, wine/proton doesn’t change the instructions the code describes; it translates the inputs and outputs. It’s a reimplementation of the Windows interfaces the program calls into, so the program’s own machine code still runs natively. For architecture crossing you’d either have to create virtual hardware, which adds tremendous overhead, or recompile the binary. Recompilation is theoretically possible, but for x86_64 to ARM64, for games no less, it’s beyond the realm of mortals. It’s like how some jokes can’t be translated between languages; the structure and vocabulary are just too different.
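To make the distinction concrete, here’s a toy sketch of the translation-layer idea; the function name is invented and this is nothing like Wine’s actual code. The point is that the caller’s machine code runs untouched, and only the OS-facing call is reimplemented on top of the host:

```rust
// Toy illustration of an API translation layer. The name is hypothetical,
// not Wine's real API. The calling program keeps executing its original
// x86_64 instructions; only the Windows-style call is re-routed to the host OS.
use std::fs::File;
use std::io;

/// Stand-in for a Win32-flavored file-open call, backed by the host's native open.
fn create_file_compat(path: &str) -> io::Result<File> {
    // Real Windows semantics involve access flags, sharing modes, etc.;
    // this only shows the "same instructions, different plumbing" idea.
    File::open(path)
}

fn main() -> io::Result<()> {
    let _handle = create_file_compat("/etc/hostname")?;
    println!("opened via the compat shim");
    Ok(())
}
```

Crossing architectures is a different problem entirely: every instruction, not just the OS-facing calls, would have to be emulated or recompiled.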
Sidebery; it’s like Tree Style Tabs but IMO much more configurable and refined. It’s honestly changed the way I use browsers: being able to bookmark entire trees of tabs, toggle between tab sets, and manually load/unload trees and groups. I legitimately worry about the extension API changing and disallowing it.
I use netdata; it’s very good at digesting thousands of metrics into something actionable. The cloud portion is proprietary, but you can toggle off the data collection. I did turn on the cloud portion, though, since I get email notifications when something breaks. Might sound counter to the self-hosted mantra, but a self-hosted monitoring system isn’t very helpful when your own systems go down.
Key detail in the actual memo is that they’re not using just an LLM. “Wallach anticipates proposals that include novel combinations of software analysis, such as static and dynamic analysis, and large language models.”
They’re also clearly aware of scope limitations. They explicitly call out some software, like entire kernels or pointer-arithmetic-heavy code, as being out of scope. They also don’t seem to anticipate 100% automation.
So with context, they seem open to any solution to “how can we convert legacy C to Rust.” Obviously LLMs and machine learning are attractive avenues of investigation; current models are demonstrably able to write some valid Rust and transliterate some code (small example below). I use them, and they work more often than not for simpler tasks.
TL;DR: they want to accelerate converting C to Rust. LLMs and machine learning are some techniques they’re investigating as components.
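For a sense of what “transliterate” means in practice, here’s a toy example of the kind of small C-to-Rust conversion current models usually get right; the C snippet is my own, not anything from the memo:

```rust
// Toy C-to-Rust transliteration example (my own, not from the DARPA memo).
// Original C (hypothetical input):
//
//   int sum_positive(const int *xs, size_t n) {
//       int total = 0;
//       for (size_t i = 0; i < n; i++)
//           if (xs[i] > 0) total += xs[i];
//       return total;
//   }
//
// A straightforward safe-Rust rendering: the pointer/length pair becomes a slice,
// so the bounds are carried by the type instead of a separate argument.
fn sum_positive(xs: &[i32]) -> i32 {
    xs.iter().filter(|&&x| x > 0).sum()
}

fn main() {
    let xs = [3, -1, 4, -1, 5];
    println!("{}", sum_positive(&xs)); // prints 12
}
```

The hard part the memo is after isn’t snippets like this; it’s doing that conversion at scale while preserving behavior, which is presumably where the static and dynamic analysis comes in.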