SpellGPT is an archdemon, and it’s pronounced as written, not as an acronym.
Presumably it has money, sex and power as bait instead of on-demand musical fanfics and light conversation, because fiction isn’t that dumb.
Formerly u/CanadaPlus101 on Reddit.
I think vibe casting would be when you summon an imp for a few minutes with instructions to work on the big spell, then summon another imp and tell it to do the same.
Likely to produce a much more literal development hell.
The thing is, it’s kind of an inflexible black-box technology, and that’s easier said than done. In one fell swoop we’ve gotten all that soft, fuzzy common-sense stuff that people were chasing for decades inside a computer, but it’s ironically still beyond our reach to fully use.
From here, I either expect that steady progress will be made in finding more clever and constrained ways of using the raw neural net output, or we’re back to an AI winter. I suppose it’s possible a new architecture and/or training scheme will come along, but it doesn’t seem imminent.
Is there a chance that’s right around the time the code no longer fits into the LLM’s input window of tokens? The basic technology doesn’t actually have a long-term memory of any kind (at least outside of the training phase).
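For concreteness, here’s roughly the kind of check I mean, a sketch assuming the tiktoken library, a made-up 128k-token window, and a hypothetical project directory:

```python
# Rough check: does a codebase still fit in a model's context window?
# Assumes the tiktoken library; the 128k limit and project path are made up.
from pathlib import Path

import tiktoken

CONTEXT_WINDOW = 128_000  # assumed limit; varies by model
enc = tiktoken.get_encoding("cl100k_base")

total = sum(
    len(enc.encode(path.read_text(errors="ignore")))
    for path in Path("my_project").rglob("*.py")  # hypothetical project
)
print(f"{total:,} tokens; still fits: {total <= CONTEXT_WINDOW}")
```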
Agreed. They started out trying to make artificial nerves, but then made something totally different. The fact that we see the same biases and failure mechanisms emerging in them, now that we’re measuring them at scale, is actually a huge surprise. It probably says something deep and fundamental about the geometry of randomly chosen high-dimensional function spaces, regardless of how they’re implemented.
Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that’s before even considering the chemical component of the brain.
I wouldn’t say none. What the axons, dendrites and synapses are doing is very well understood down to the molecular level, so that’s the input and output part. I’m aware that knowledge of the biological equivalents of the other stuff (the ReLU function and backpropagation) is incomplete. I do assume some things are clear even there, although you’d have to ask a neuroscientist for details.
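To be concrete about what I mean by “the other stuff”, here’s a toy sketch (made-up numbers, plain NumPy) of a ReLU activation and a single backpropagation step for one artificial neuron:

```python
# Toy sketch: ReLU plus one backpropagation step for a single neuron.
# All numbers are made up; this is the textbook mechanism, nothing more.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def relu_grad(z):
    return (z > 0).astype(float)

x = np.array([0.5, -1.0, 2.0])  # incoming synapse activations
w = np.array([0.1, 0.4, 0.2])   # synapse weights
b, lr, target = 0.05, 0.01, 1.0

z = w @ x + b                   # weighted sum of inputs
y = relu(z)                     # neuron output
loss = 0.5 * (y - target) ** 2

# Chain rule, from the loss back to the weights:
grad_w = (y - target) * relu_grad(z) * x
grad_b = (y - target) * relu_grad(z)
w -= lr * grad_w
b -= lr * grad_b
```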
I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that’s not good enough, it’s easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you’re more interested in ignoring any empirical evidence, though.
Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating-point number, and outgoing synapses are calculated from incoming synapses all at once (there’s no notion of time; it’s not dynamic). Biological neurons are binary: they either fire or they don’t. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion, but it’s dynamic; they can peak at any time, and downstream neurons can begin to fire “early”.
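A toy sketch of that contrast, with made-up constants (a plain weighted-sum neuron versus a crude leaky integrate-and-fire model, nothing biophysically accurate):

```python
# Contrast sketch: static float-valued neuron vs. a crude spiking neuron.
# The leaky integrate-and-fire model here is a cartoon, not real biology.
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """Static: one weighted sum, output can be any float."""
    return max(0.0, float(np.dot(weights, inputs)) + bias)

def spiking_neuron(currents, threshold=1.0, leak=0.9):
    """Dynamic: binary spikes whenever potential crosses a threshold."""
    potential, spike_times = 0.0, []
    for t, i in enumerate(currents):
        potential = potential * leak + i  # integrate with leak
        if potential >= threshold:
            spike_times.append(t)         # fire...
            potential = 0.0               # ...and reset
    return spike_times

rng = np.random.default_rng(0)
print(artificial_neuron(rng.normal(size=4), rng.normal(size=4), 0.1))
print(spiking_neuron(rng.uniform(0.0, 0.3, size=50)))
```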
They do seem to be equivalent in some way, although AFAIK it’s unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
You know, I’d be interested to know what critical size you can get to with that approach before it becomes useless.
Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.
You got the “originality” part there, right? I’m talking about tasks that never came close to being in the training data. Would you like me to link some of the research?
Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.
Given that both biological and computer neural nets vary by orders of magnitude in size, that tells us pretty little. It’s true that one is based on continuous floats and the other on dynamic peaks, but the end result is often remarkably similar in function and behavior.
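Back-of-envelope, if you read “efficiency” as energy per unit of thinking and take the commonly cited ~20 W figure for the brain (both assumptions on my part):

```python
# Back-of-envelope: what 4 vs. 6 orders of magnitude would mean in watts,
# assuming the oft-quoted ~20 W power draw of a human brain.
BRAIN_WATTS = 20

for orders, source in [(4, "Carmack"), (6, "Karpathy")]:
    print(f"{source}: ~{BRAIN_WATTS * 10**orders:,} W for brain-equivalent compute")
# i.e. ~200 kW vs. ~20 MW, against a brain's two light bulbs' worth.
```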
I mean, there’s about a billion ways it’s been shown to have actual coherent originality at this point, and so it must have understanding of some kind. That’s how I know I and other humans have understanding, after all.
What it’s not is aligned to care about anything other than making plausible-looking text.
That doesn’t sound vibey to me, though. You expect people to actually code? /s
You’d have a point if this was an artist community, but coding AI as it exists does not work that well.
I’d give a better example, but most of the technologies that didn’t actually work are lost to history. Hmm, maybe repeating crossbows and that giant 40-reme boat that the one Greek king built?
They’re hard to get for anything besides the internet.
Has anyone else noticed a sudden surge of ads for AI-powered website building tools?
Yeah, not surprised. An experienced software engineer in the US won’t have to do unskilled labour unless there’s something else massive going on with them.
I couldn’t actually tell you all of what the Gates Foundation does. Greedwashing exists, but as you say, I don’t think Gates is doing it.
It gives away way too much to serve either purpose well.
I mean, according to this, the plan is to not be a billionaire. If his net life transaction ends up being bilking Western technophobes to pay for mosquito nets and clean water, that’s cool.
What about just the bandwidth and storage requirements? Any news on that?
A task it couldn’t have seen in the training data, I mean.
Obviously, that goes for the natural intelligences too, so it’s not really a fair thing to require.