• 14 Posts
  • 62 Comments
Joined 6 years ago
Cake day: August 24th, 2019

  • the code is often weirdly overengineered

    The GPT + deepseek combo fixes that problem (in my layman’s eyes): deepseek has a habit of overengineering, especially if you don’t scope it correctly, but if it works on an initial script it will mostly stay within it. It’s also able to simplify GPT code.

    For the life of me I have never been able to learn JavaScript, so when I need to code something for ProleWiki I just throw the problem at them. You can look at https://en.prolewiki.org/wiki/MediaWiki:Common.js, our script file, to see what they came up with - the color scheme picker, the typeface picker, and the testimonials code. The color picker is a bit overengineered because we have a theme that should only be available for one month out of the year, and it was simpler to hardcode it in.

    I still have to think about the problem; if I don’t scope it well enough, they will come up with whatever they feel like. For the color schemes, once I had the first custom theme working, I told them to refactor the code so that we could add new themes in an array, and they came up with everything. Now I can add unlimited themes just by adding a line to the array.

    A potential simplification that they don’t think of (and hence why it’s good to know how the generated code works) is that there’s no need for className and LabelId in the array; both could be generated from the value. But eh, it works.
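
    To illustrate the pattern (a hypothetical sketch, not the actual ProleWiki Common.js - THEMES, buildThemePicker and the field names are made up), a themes array where the class name and label id are derived from the value could look like this:

    ```javascript
    // Hypothetical sketch of a themes-in-an-array picker; not the real ProleWiki code.
    var THEMES = [
        { value: 'default', label: 'Default' },
        { value: 'night', label: 'Night' }
        // Adding a theme is just one more line here.
    ];

    function buildThemePicker( container ) {
        THEMES.forEach( function ( theme ) {
            // Derived from value instead of being stored in the array.
            var className = 'theme-' + theme.value;
            var labelId = 'theme-label-' + theme.value;

            var button = document.createElement( 'button' );
            button.id = labelId;
            button.textContent = theme.label;
            button.addEventListener( 'click', function () {
                // Drop any previous theme-* class, then apply the selected one.
                document.body.className = document.body.className
                    .replace( /\btheme-\S+/g, '' )
                    .trim();
                document.body.classList.add( className );
            } );
            container.appendChild( button );
        } );
    }
    ```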

    edit - using it as a “copilot” is also a good use case, though I find that sometimes it just utterly fails. It’s still an RNG machine at heart. But when you get it to work, it can really help you unlock your other skills instead of getting stuck on one part of the process.


  • I mean, AI as a word has been used in tons of different ways. We still say “NPC AI” in video games, and that’s just a whole bunch of if statements, no LLM involved. On the other end of the spectrum, we still talk about AI in movies like I, Robot, with fully sentient machines. When I say “AI” with no qualifier, I mean neural networks - the parameters we hear so much about.
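
    Just to illustrate the “bunch of if statements” point, here’s a toy, made-up guard NPC (not code from any real game or engine):

    ```javascript
    // Classic NPC "AI": a hand-written decision ladder, no model or training involved.
    function updateGuard( npc, player ) {
        var distance = Math.hypot( player.x - npc.x, player.y - npc.y );

        if ( npc.health < 20 ) {
            npc.state = 'flee';   // badly hurt, run away
        } else if ( distance < 5 ) {
            npc.state = 'attack'; // player is in melee range
        } else if ( distance < 15 ) {
            npc.state = 'chase';  // player spotted nearby
        } else {
            npc.state = 'patrol'; // nothing going on, walk the route
        }
    }
    ```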

    And I don’t think GenAI is “the” grift like the tweet implies, because the AI they describe (machine learning, ML) is exactly the same thing - neural networks with trained models. They talk about the Kinect using ML - you can do machine learning without a neural network - but was the Kinect not “wasteful”, “unethical”, and “useless”, to use their words? It was an expensive system that worked okay (in some tech demos) but barely had 3 games. The EyeToy for the PS2 was more fun.

    A lot of the conversation around Generative AI centers on image generation, and I feel that’s harmfully reductive. It focuses heavily on artists and the purity of their art (as if they’re the only people impacted by AI) when there’s so much more to talk about. GenAI can do code - there was a whole discussion about it here the other day - and maybe it’s not super great code, but it can do code nonetheless, and for people who don’t code and need something, it gets the job done.

    Yes, there is also a whole lot of stuff you can do with AI without LLMs. In fact, I’m not sure how LLMs specifically became so ubiquitous, because you can do neural network AI stuff without ever needing an LLM. I remember that back when AI became big (2022 or so), China announced they’d used an AI to map the wiring on a new ship model: what took an engineer one year to do was done by the AI in 24 hours.

    GenAI, including LLMs, has hard limitations that I think are, conversely, overlooked. LLM AI will not do everything, but it can get you part of the way there. The grift is more so tech companies trying to pretend their toy is a panacea. When asked in an interview about AI making stuff up, OpenAI’s CTO said: “well you know, it’s very human in that regard [emphasis mine], because when we don’t know something, what do we do? We make stuff up”. They admit it themselves: they have a bullshit generator. But when it works, it works - you can use GPT-4o or o4 or whatever the new model is called as a tutor, for example for Photoshop, guitar, or whatever other hobby you have. It works great! You can ask it any question you have, like a tutor, instead of being limited to what the page cares to tell you about! And yes, I could ask someone, but: a- people are not necessarily available the moment I have a question, and b- Google is crap now, and if you ask on most forums they will tell you to google it. So chatGPT it is. We just have to take into account that it might be making stuff up to the point that you need to double-check, and that OpenAI clearly has no plans to fix that (not that they even could).

    For coding, my approach nowadays is to start with chatGPT and then pass it over to deepseek once I have the prototype; it works great.


  • The part about cold fusion was strange, and I completely omitted it from my original article (the OP). I think he had to mention it because he had to find a way for these nukes, if nukes were indeed used on Gaza, to be inconspicuous. Cold fusion would allow for payloads that, as he said, would be no bigger than a baseball bat.

    But the findings stand on their own. For example, I don’t believe Busby is lying when he says he analyzed air vent samples and soil samples and found what he found. They definitely require further investigation, and Al Mayadeen was looking for more air ventilation filters from vehicles and long hair samples from people that have been around “Israeli” bomb craters, to be analyzed by another researcher.