25+ yr Java/JS dev
Linux novice - running Ubuntu (no windows/mac)

  • 0 Posts
  • 662 Comments
Joined 1 year ago
Cake day: June 14th, 2023


  • She knows not to trust it. If the AI had suggested “God did it” or metaphysical bullshit I’d reevaluate. But I’m not sure how to even describe that to a Google search. Sending a picture and asking about it is really fucking easy. Important answers aren’t easy.

    I mean, I agree with you. It’s bullshit and untrustworthy. We have conversations about this. We have lots of conversations about it, actually, because I caught her cheating at school using it, so there’s a lot of supervision and talk about which uses are appropriate and which aren’t. And about how we can inadvertently bias it by the questions we ask. It’s actually a great tool for learning skepticism.

    But for some things, a reasonable answer just to satisfy your brain is fine whether it’s right or not. I remember spending an entire year in chemistry learning absolute bullshit, only to be told the next year that it was all garbage and here’s how it really works. It’s fine.


  • I don’t buy into it, but it’s so quick and easy to get an answer that, if it’s not something important, I’m guilty of using an LLM and calling it good enough.

    There are no ads and no SEO. Yeah, it might very well be bullshit, but most Google results are also bullshit, depending on the subject. If it doesn’t matter, and it isn’t easy to know whether I’m getting bullshit from a website, an LLM is good enough.

    I took a picture of discolorations on a sidewalk and asked ChatGPT what was causing them, because my daughter was curious. It said metal left on the surface rusts and leaves behind those streaks. But they all had holes in the middle, so we decided there were metallic rocks mixed into the surface that had rusted away.

    Is that for sure right? I don’t know. I don’t really care. My daughter was happy with an answer, and I’ve already warned her it could be bullshit. But her curiosity was satisfied.