  • It’s funny because I’m probably in the minority, but I strongly prefer JetBrains IDEs.

    Which, ironically, are much more of a “walled garden”: closed-source and subscription-based, with only a limited subset of components and plugins open-source. But JetBrains has a good track record of not enshittifying, and because you actually pay for their product, they can run a profitable business without doing so.



  • I replaced it with online docs, GitHub Issues, Reddit, and Stack Overflow.

    Many languages/libraries/tools have great documentation now; 10 years ago this wasn’t the case, or at least I didn’t know how to find/read documentation. 10 years ago Stack Overflow answers were also better, but now many are obsolete due to being 10 years old :).

    Good documentation is both more concise and more thorough than any Q&A or ChatGPT output, and more likely to be accurate (it certainly should be in any half-decent documentation, but sometimes it isn’t).

    If online documentation doesn’t have the answer, I try to find it on GitHub Issues, Reddit, or a different forum. And sometimes that forum is Stack Overflow. More recently I’ve seen more questions where the most upvoted answer has been edited to reflect recent changes; and even when an answer is out of date, there’s usually a comment that says so.

    Now, I never post on Stack Overflow, nor do I usually answer: there are way too many bad questions out there, most of the good ones already have answers or are really tricky, and the community still has its rude reputation. Though I will say the other Stack Exchange sites are much better.

    So far, I’ve only used LLMs when my question was so detailed that I couldn’t search for it, and/or I had run out of options. There are a few issues: I don’t like writing out the full question (although I’m sure GPT-4 works with query terms too; I’ll probably try that); GPT-4’s output is too verbose, and it explains basic context I already know, so much of it is filler; and I still have a hard time trusting GPT-4, because I’ve had it hallucinate before.

    With documentation you have the expectation that the information is accurate, and on forums other people will comment if an answer is wrong, but with LLMs you have neither.