what is my purpose?
I don’t understand any of these analogies at all
For practical purposes, it’s probably good enough. You could write a program to check whether it’s non-repeating up to N digits, so just set N high enough that it will last you for a few thousand releases…
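A sketch of one way to do that check, under my own loose interpretation ("non-repeating" meaning the first N digits don't end in a cycle repeated back-to-back; the function name is mine):

```python
def repeats(digits: str) -> bool:
    """True if `digits` ends in some cycle repeated at least twice in a row."""
    n = len(digits)
    for period in range(1, n // 2 + 1):
        cycle = digits[-period:]
        # Compare the tail cycle against the chunk immediately before it.
        if digits[-2 * period:-period] == cycle:
            return True
    return False

print(repeats("123123"))     # trailing cycle "123" repeats → True
print(repeats("314159265"))  # no trailing cycle → False
```

Run it over your number's first N digits and bump N until you're comfortable.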
Apologies, citizen. You may pass. I am but a humble autocop.
Return to your hovel, citizen
I learned MIPS as an undergrad. Pretty neat little RISC architecture.
I have a similar story. I started a new job and inherited a ball of mud written in Python while the creator was out for a few weeks. When he got back, he was grumpy about my changes. I guess he preferred it with more bugs 🤷‍♂️
Get out of my office
I mean technically I could write an interpreter that assigns semantics to HTML constructs.
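Tongue in cheek, but it's doable with nothing beyond the standard library. A toy sketch where I invent the semantics myself (a `<ul>` evaluates to the sum of its `<li>` children):

```python
from html.parser import HTMLParser

# Made-up semantics: each <li>'s text is an integer literal,
# and the surrounding <ul> "evaluates" to their sum.
class SumParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.total = 0
        self.in_li = False

    def handle_starttag(self, tag, attrs):
        if tag == "li":
            self.in_li = True

    def handle_endtag(self, tag):
        if tag == "li":
            self.in_li = False

    def handle_data(self, data):
        if self.in_li and data.strip():
            self.total += int(data.strip())

p = SumParser()
p.feed("<ul><li>1</li><li>2</li><li>3</li></ul>")
print(p.total)  # → 6
```

Whether that makes HTML a "programming language" is left to the reader.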
Aha, I didn’t realize compromising availability was sufficient for the CVE definition of security vulnerability. Projects I’ve worked on have typically excluded availability, though that may not be the norm.
And I see your point about some exploits being highly asymmetric in the attacker’s favor, compared to classic [D]DoS.
The chances of the coin flip yielding heads are roughly 50%, if coins don’t not exist.
Maybe I’m misunderstanding you, but DoS is exactly the same thing as “denial of service”.
My point is that memory leaks can only degrade availability; they are categorically distinct from security vulnerabilities.
I had to look it up to check my memory. Yup! https://about.gitlab.com/blog/2015/06/05/how-gitlab-uses-unicorn-and-unicorn-worker-killer/
I don’t think memory leaks could ever amount to a security vulnerability, but the workaround just feels yucky. I guess I shouldn’t cast stones, I write C++ at work.
Git kinda has it? Have you seen git notes? https://git-scm.com/docs/git-notes
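For anyone who hasn't tried it, here's roughly what it looks like in a throwaway repo (assumes git is on your PATH; the note text is made up):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "initial"

# Attach metadata to the commit without rewriting its hash:
git notes add -m "benchmarked: 42ms"

# Read it back later:
git notes show HEAD
```

Notes live under `refs/notes/`, so they ride along without touching history.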
I used to host a GitLab instance at work. It was dog slow, so I started digging into it and discovered they had a serious memory leak in some of their “unicorns,” a.k.a. Ruby tasks. Instead of fixing the source of the leak, they tacked on a “unicorn killer” that periodically killed tasks. The tasks were supposed to be atomic anyway, so this is technically fine (and maybe a good thing in the long run for correctness, à la Netflix’s Chaos Monkey), but I found myself kind of disgusted by the solution. I dropped it and went for a much sparser Git repo web server.
In this case, the models are given part of the text from the training data and asked to predict the next word. This appears to work decently well on the pre-2023 internet; after all, it brought us ChatGPT and friends.
This paper is claiming that training LLMs on the output of other LLMs produces garbage. The problem is that the quality of each guess is judged against the training data itself, not by some external, intelligent judge.
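The next-word objective above can be sketched with a toy bigram model. The corpus is made up, and real LLMs use neural networks over long contexts, but the training signal has the same shape:

```python
from collections import Counter, defaultdict

# Tiny made-up "training data".
text = "the cat sat on the mat and the cat slept"
words = text.split()

# Count, for each word, what followed it in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Guess the word that most often followed `word` in training."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # → "cat" (followed "the" twice; "mat" only once)
```

The "judge" here is just the frequency table built from the training text itself, which is exactly the self-referential loop the paper worries about.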
the test environment
The test environment? I don’t miss the web dev world. It’s so nice to be able to run end-to-end tests entirely locally.
Oh dang, sorry about that. I’ve used rclone with great results (slurping content out of Dropbox, Google Drive, etc.), but I never actually tried the Google Photos backend.
You could try using rclone’s Google Photos backend. It’s a command line tool, sort of like rsync but for cloud storage. https://rclone.org/
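If it works for you, the usage looks something like this (the paths under the `gphotos:` remote are from my memory of the docs, so double-check them there):

```shell
# One-time setup: walks you through an OAuth flow and saves
# a remote, which I've named "gphotos" here.
rclone config

# List the top-level directories the backend exposes (albums, media, ...).
rclone lsd gphotos:

# Copy a year's worth of media to a local folder.
rclone copy gphotos:media/by-year/2023 ./photos-2023
```

Fair warning: last I read, the Google Photos API hands rclone recompressed images rather than the original bytes, so verify the quality before deleting anything.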
Am I being dense? I don’t get it.