• 0 Posts
  • 45 Comments
Joined 1 year ago
Cake day: June 14th, 2023


  • Because if you disable browser autocomplete, what’s obviously going to happen is that everyone will keep a text file open with every single one of their passwords in it, so that they can copy-paste them. So prevent that. But if you prevent that, everyone will choose terrible, weak passwords instead. Something like September2025! probably meets the ‘complexity’ requirement…


  • addie@feddit.uk to Programmer Humor@programming.dev · Psychopath Dev · edited · 2 months ago

    A bit like when we renamed all the master/slave terminology using different phrasing that’s frankly more useful a lot of the time, I think it’s about time we got rid of this “child” task nonsense. I suggest “subtask”. Then we can reword these books into something that no-one can make stupid jokes about any more, like “how to keep your subs in line” and “how to punish your subs when they’ve misbehaved”.


  • Well now. When we’ve been setting password requirements at work, we’ve had to enforce a bizarre combination of “you must have a certain level of complexity”, but also, “you must be slightly vague about what the requirements actually are, because otherwise it lets an attacker tune a dictionary attack against you”. Which just strikes me as a way to piss off our users, but the security team say it’s a requirement, therefore it’s a requirement, no arguing.

    “One” special character is crazy; I’d have guessed that was a catch-all for the other strange password requirements (a rough sketch of checking them comes after the list):

    • can’t have the same character more than twice in a row
    • can’t be one of the ten-thousand most popular passwords (which is mostly a big list of swears in Russian)
    • all whitespace must be condensed into a single character before checking against the other rules
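
    Roughly what checking those rules comes out as - an illustration only, not our actual validator, with a tiny made-up COMMON_PASSWORDS set standing in for the real ten-thousand-entry list:

```python
import re

# Made-up stand-in for the real list of the ten-thousand most popular passwords.
COMMON_PASSWORDS = {"password", "123456", "qwerty"}

def check_password(pw: str) -> list[str]:
    """Return a list of rule violations; an empty list means the password passes."""
    problems = []

    # All whitespace is condensed into a single character before the other checks.
    pw = re.sub(r"\s+", " ", pw)

    # Must contain at least one special (non-alphanumeric) character.
    if all(c.isalnum() for c in pw):
        problems.append("needs at least one special character")

    # Can't have the same character more than twice in a row.
    if re.search(r"(.)\1\1", pw):
        problems.append("same character more than twice in a row")

    # Can't be one of the most popular passwords.
    if pw.lower() in COMMON_PASSWORDS:
        problems.append("far too popular")

    return problems

print(check_password("September2025!"))   # [] - passes the letter of the rules
print(check_password("password"))         # two violations
```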

    We’ve had customers’ own security teams asking us if we can enforce “no right click” / “no autocomplete” to stop their in-house users doing such things; I’ve been trying to push back on that as a security misfeature, but you can’t question the cult thinking.


  • It’s a simple alphabet for computing because most of the early developers of computing worked in it, and therefore it’s supported everywhere. If the Vikings had developed early computers then we’d be using the 24 Futhark runes, wouldn’t have upper and lower case to worry about, and wouldn’t need to render curves in fonts because it’s all straight lines.

    But yeah, agreed. Very widely spoken. But don’t translate programming languages automatically; VBA does that for keywords and it’s an utter nightmare.


  • If you move past the ‘brute force’ method of solving into the ‘constraints’ level, it’s fairly easy to check whether there are multiple possible valid solutions. Using a programming language with a good sets implementation (Python!) makes this easy - for each cell, generate a set of all the values that could possibly go there. If there’s only one, fill it in and remove that value from all the sets in the same row/column/block. If there are no cells left that only take a unique value, choose the cell with the fewest possibilities and evaluate all of them, recursively. Even a fairly dumb implementation will do the whole problem space in milliseconds. This is a very easy problem to parallelise, too, but it’s hardly worth it for 9x9 sudokus - maybe if you’re generating 16x16 or 25x25 ‘alphabet’ puzzles, but you’ll quickly generate problems beyond the ability of humans to solve.
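
    A rough sketch of what I mean, in Python - the grid as a dict of (row, col) → value with 0 for an empty cell, completely untuned:

```python
def candidates(grid, r, c):
    """Set of values that could legally go in the empty cell (r, c)."""
    used = {grid[r, k] for k in range(9)} | {grid[k, c] for k in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i, bc + j] for i in range(3) for j in range(3)}
    return set(range(1, 10)) - used

def solve(grid):
    """Return a completed grid, or None if there is no valid solution."""
    grid = dict(grid)                        # work on a copy
    # Constraint pass: fill in every cell that only takes a unique value.
    progress = True
    while progress:
        progress = False
        for cell in grid:
            if grid[cell] == 0:
                opts = candidates(grid, *cell)
                if not opts:
                    return None              # dead end
                if len(opts) == 1:
                    grid[cell] = opts.pop()
                    progress = True
    empties = [cell for cell in grid if grid[cell] == 0]
    if not empties:
        return grid                          # solved
    # Otherwise, choose the cell with the fewest possibilities and recurse.
    cell = min(empties, key=lambda e: len(candidates(grid, *e)))
    for value in candidates(grid, *cell):
        attempt = dict(grid)
        attempt[cell] = value
        result = solve(attempt)
        if result is not None:
            return result
    return None

# Solving a blank grid just generates some valid completed sudoku.
blank = {(r, c): 0 for r in range(9) for c in range(9)}
print(solve(blank)[(0, 0)])
```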

    The method in the article for generating ‘difficult’ puzzles seems mighty inefficient to me - generate a valid solution, and then randomly remove numbers until the puzzle is no longer ‘unique’. That’s a very calculation-heavy way of doing it - you need to evaluate the whole puzzle at every step. It must be the case that a ‘unique’ sudoku has at least 8 distinct numbers among the starting clues, because otherwise there would be at least two solutions, with the two missing numbers swapped over. Preferring to remove numbers equal to values that you’ve already removed ought to get you to a hard puzzle faster?
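
    For what it’s worth, roughly what that stripping loop comes out as - a sketch of the approach as described, not the article’s actual code, with the candidates() helper repeated from the snippet above so it stands alone:

```python
import random

def candidates(grid, r, c):
    """Set of values that could legally go in the empty cell (r, c)."""
    used = {grid[r, k] for k in range(9)} | {grid[k, c] for k in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i, bc + j] for i in range(3) for j in range(3)}
    return set(range(1, 10)) - used

def count_solutions(grid, limit=2):
    """Count completions of the puzzle, giving up as soon as `limit` is reached."""
    empties = [cell for cell in grid if grid[cell] == 0]
    if not empties:
        return 1
    cell = min(empties, key=lambda e: len(candidates(grid, *e)))
    total = 0
    for value in candidates(grid, *cell):
        attempt = dict(grid)
        attempt[cell] = value
        total += count_solutions(attempt, limit - total)
        if total >= limit:
            break
    return total

def make_puzzle(solution):
    """Strip clues from a completed grid for as long as the puzzle stays unique."""
    puzzle = dict(solution)
    cells = list(puzzle)
    random.shuffle(cells)
    for cell in cells:
        saved = puzzle[cell]
        puzzle[cell] = 0
        if count_solutions(puzzle) > 1:      # whole-puzzle check, every single step
            puzzle[cell] = saved             # that removal broke uniqueness, put it back
    return puzzle

# puzzle = make_puzzle(solve(blank))         # e.g. using the solve() sketch above
```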



  • You can write an unmaintainable fucking mess in any language. Rust won’t save you from cryptic variable naming, copy-paste code, a complete absence of design patterns, dreadful algorithms, large classes of security issues, unfathomable UX, or a hundred other things. “Clean code” is (mostly) a separate issue from choice of language.

    Don’t get me wrong - I don’t like this book. It manages to be long-winded and facile at the same time. A lot of people seem to read it and take exactly the wrong lessons about maintainability from it. I think that it would mostly benefit from being written in pseudocode - concentrating on any particular language might distract from the message. But having a few examples of what a shitfest looks like in a few specific languages might help.



  • My old job had a lot of embedded programming - hard real-time Z80 programming, for processors like the Z800 and eZ80, to control industrial devices. Actually quite pleasant to do bit-twiddling in, and it’s great to be able to step through the debugger and see that what the CPU is running is literally your source code, opcode by opcode.

    Back when computers were very simple things - I’m thinking of a ZX Spectrum, where you can read directly from the input ports and write directly into the framebuffer, with no OS in your way, just code - assembly made a lot of sense, and was even fun. On modern computers, it is not so fun:

    • x64 is just a fucking mess

    • you cannot just read and write what you want, the kernel won’t let you. So you’re going to be spending a lot of your time calling system routines.

    • 99% of your code will just be arranging data to suit the calling convention of your OS, and doing pointless busywork like stack pointer alignment. Writing some macros to do it for you makes your code look like C. Might as well just use C, in that case.

    Writing assembly makes some sense sometimes - it’s required for embedded work, you might be writing something very security-conscious where timing is essential, or you might be lining up some data for vectorisation where higher-level languages don’t have the constructs to get it right - but these are very small bits of code. You would be mad to consider “making the whole apple pie” in assembly.




  • I used to work with a Greek guy called Argyros Argyros - cool guy, but I suspect he was an outlier. Named after his dad, so certainly some people are named that way. Icelandic, for instance, would traditionally use “Given Name” “Patronym from father” - Magnus Magnusson was quite famous in the UK; Björk Guðmundsdóttir might be the most famous internationally, but she’s not a “double”. There are quite a few cultures - Hungarian, Chinese, Japanese, … - that write their names as “Family Name” “Given Name”, as opposed to the other way around, if that’s what you mean?



  • There’s also the financial risk to be considered. A mainstream film release from 1970 might have been produced by fifty people, cast and crew combined. The crew for Barbie, as per the image above, was close to a thousand people. That’s expensive. If you have to put in twenty times the ante to be in the game, and all the payoff is in established properties that you already know have an audience, it would be foolish to do otherwise.

    Like you say, if people actually did what they said they wanted, and went and took a punt on the new stuff rather than watching the same-old, then it would be different. But you can’t complain about it when that’s what you spend your money on.


  • That’s almost exactly the problem. English forms the future tense exclusively with helper words, and indeed uses helper words like ‘to’ to form an infinitive. ‘Will’ is the helper word to show that something is a fact, that it is definite - grammatically, it is indicative. (The sun will rise tomorrow.) ‘Would’ is the helper word to show that something is an opinion, or dependent on something else - grammatically, it is subjunctive. (If you pushed that, it would fall; if it were cheaper, I would buy it.)

    Spanish has both helper words for future tense (conjugations of ‘ir’, analogous to ‘going to’, often used in speech) and straight-up conjugations for future tense (doesn’t exist in English; often used in writing). It also conjugates verbs differently if they’re indicative, subjunctive, or imperative (asking or telling someone to do something). This is how Spanish manages to have fifty-odd ways to conjugate every verb, which is very confusing to English speakers who make do with three ways and helper words.

    Translating a ‘future tense sentence’ for Duolingo requires you to have psychic powers about whether something is fact or opinion, which helper words are wanted, and so on, and it usually comes down to guessing between multiple ‘correct’ answers, which Duo will reject all but one of.


  • Absolutely this. I’d have argued that ‘every day’ is a more idiomatic translation than ‘daily’, and what native speakers would say, but that’s irrelevant. English tends to emphasise the end of sentences as the most important part, so all these translations are correct depending on the nuance that you intend:

    • Daily in Hamburg, many ships arrive (as opposed to eg. cars, or few ships)
    • Daily, many ships arrive in Hamburg / Many ships arrive daily in Hamburg (as opposed to eg. Bremen)
    • Many ships arrive in Hamburg daily (as opposed to eg. weekly)

    Wouldn’t question any of those constructions as a native speaker. In fact, the original responder’s example was why I gave up on Duolingo myself some years ago. Translating ‘future tense’ sentences from Spanish into English or back again is always going to be a matter of opinion, since English doesn’t have the verb conjugations that Spanish does. Guessing the ‘sanctified answer’ is tedious, when a lot of the time it’s not even the most natural form of the sentence.


  • Ah, I thought that some of these were being posted with the location of the studio - thought some of our bands might have recorded in the States once they got bigger, might have been some better-equipped studios over there. But it makes sense that it’s just a mistake. Priest are from Birmingham and Steel was recorded in Ascot, so the Midlands and the south of England. As 100% British as you like.