Usually it’s a bunch of different string hashes of the text content. They could be different hashing algorithms, but it’s more common to take a single hash algorithm and simply create a bunch of hash functions that operate on different parts of the data.
If it’s not text data, there’s a whole range of other hashing strategies, but I’ve only ever seen bloom filters used with text.
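A minimal sketch of that idea in Python (the salt-per-function scheme, SHA-256, and the sizes are just illustrative choices, not the only way to do it):

    import hashlib

    class BloomFilter:
        def __init__(self, num_bits=1 << 20, num_hashes=7):
            self.num_bits = num_bits
            self.num_hashes = num_hashes
            self.bits = bytearray(num_bits // 8 + 1)

        def _indexes(self, text):
            # One hash algorithm (SHA-256), many hash functions: prepend a
            # different salt for each function and reduce to a bit index.
            for salt in range(self.num_hashes):
                digest = hashlib.sha256(f"{salt}:{text}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.num_bits

        def add(self, text):
            for i in self._indexes(text):
                self.bits[i // 8] |= 1 << (i % 8)

        def __contains__(self, text):
            return all(self.bits[i // 8] & (1 << (i % 8)) for i in self._indexes(text))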
People aren’t misunderstanding the issue. Third-party cookie support is being dropped by all browsers. Chrome is also dropping them, but replacing them with Topics. Sure, Topics is less invasive than third-party cookies, but it is still more invasive than the obvious user-friendly approach of not having an invasive tracker built into your browser. No other major browser vendor is considering supporting Topics. So they’re doing an objectively user-unfriendly thing here. This is the shit that happens when the world’s largest internet advertising company also controls the browser.
A classic use for them is spam filtering.
Suppose you have a set of spam detection systems/rules which are somewhat expensive to execute, e.g., an ML model or a keyword blocklist. Spam tends to come in waves, and frequently it can be as simple as the same message being reposted dozens of times.
Once your systems determine a piece of content is spam (or you manually flag content), it’s a good idea to insert the content into a bloom filter. This means that future posts of the identical content will be flagged without needing to execute the expensive checks, especially if there’s a surge of content stressing your systems.
Since it’s probabilistic, you can’t use this unless you have some sort of manual review queue or system, as it’s possible for false positives to be flagged. However, you can also run more intensive checks once you’ve flagged content, to catch false positives.
The false positives can also be a feature, not a bug: with careful choice of hash functions, your bloom filter can actually detect slightly modified content, since most of the hashes may still be the same.
I’ve worked at companies which use this strategy so it’s very real world.
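A rough sketch of that flow, reusing the BloomFilter sketch above (expensive_spam_check and the review queue are made-up placeholders for whatever your real systems are):

    import queue

    seen_spam = BloomFilter()            # from the sketch above
    manual_review_queue = queue.Queue()

    def expensive_spam_check(content):
        # Placeholder for the real ML model / keyword blocklist pass.
        return "limited time offer" in content.lower()

    def handle_post(content):
        if content in seen_spam:
            # Probable repost of already-flagged spam: skip the expensive
            # checks. Because bloom filters have false positives, route it
            # to review (or a second, more intensive check) rather than
            # rejecting it outright.
            manual_review_queue.put(content)
            return "flagged (bloom hit)"
        if expensive_spam_check(content):
            seen_spam.add(content)       # future identical posts hit the filter
            return "flagged"
        return "ok"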
I’d argue that’s not true. That’s what the extern keyword is for. If you do #include <stdio.h>, you don’t get the actual printf function defined by the preprocessor. You just get an extern declaration (though extern is optional for function declarations). The preprocessed source code that is fed to cc is still not complete, and cannot be used until it is linked against an object file that defines printf. So really, the unnamed “C preprocessor output language” can access functions or values from elsewhere.
I know this is a joke, but assuming you’re the author, you’re under no obligation to follow the license. Only people to whom you transmitted the code are bound by its terms.
They probably know what it is, but it’s a bad point if they’re trying to paint DAGs as esoteric CS stuff for the average programmer. I needed to use a topological sort at work two weeks ago, and any time you’re using a build system, even one as simple as Make, you’re using DAGs. Acting like it’s a tough concept makes me wonder why I should accept the rest of the argument.
Can’t say I have a strong feeling about Gradle though 🤷♀️
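For what it’s worth, a topological sort really is only a handful of lines. Here’s a minimal sketch of Kahn’s algorithm in Python (the build-step example at the bottom is made up):

    from collections import deque

    def topo_sort(deps):
        # deps maps each node to the nodes it depends on (its prerequisites).
        indegree = {node: 0 for node in deps}
        dependents = {node: [] for node in deps}
        for node, prereqs in deps.items():
            for prereq in prereqs:
                indegree[node] += 1
                dependents[prereq].append(node)

        ready = deque(node for node, deg in indegree.items() if deg == 0)
        order = []
        while ready:
            node = ready.popleft()
            order.append(node)
            for dep in dependents[node]:
                indegree[dep] -= 1
                if indegree[dep] == 0:
                    ready.append(dep)

        if len(order) != len(deps):
            raise ValueError("cycle detected: not a DAG")
        return order

    # Build-system flavour: link after compiling, compile after generating headers.
    print(topo_sort({"generate": [], "compile": ["generate"], "link": ["compile"]}))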
Can’t speak for the whole country but my employment is at-will, meaning it can be terminated by either side at any moment with no notice.
It is considered polite and relatively standard to give two weeks’ notice prior to leaving your job, but there’s no requirement in any of the jobs I’ve had.
Of course, employers don’t hold themselves to that same “polite standard” of two weeks; it’s not unheard of for people to be fired on the spot, though it’s definitely unusual. For broader layoffs, it’s pretty common to get several weeks of notice and pay.
It’s a cathartic, but not particularly productive, vent.
Yes, there are stupid lines of time.sleep(1) written in some tests and codebases. But there are also test setUp() methods which do expensive work per-test, so the runtime grows too fast with the number of tests. There are situations where there was a smarter algorithm and the original author said “fuck it” and did the N^2 one. There are container-oriented workflows that take a long time to spin up in order to run the same tests. There are stupid DNS resolution timeouts because you didn’t realize that the third-party library you used would try to connect to an API which is not reachable in your test environment… And the list goes on…
I feel like it’s the “easy way out” to create some boogeyman, the stupid engineer who writes slow, shitty code. I think it’s far more likely that these issues come about because a capable person wrote software under one set of assumptions, and then the assumptions changed, and now the code is slow because the assumptions were violated. There’s no bad guy here, just people doing their best.
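To make the setUp() case concrete: per-test setup is usually cheap when it’s written and only becomes a bottleneck as the suite grows. A minimal unittest sketch (expensive_fixture is a stand-in for whatever is slow in your codebase):

    import unittest

    def expensive_fixture():
        # Stand-in for the slow part: seeding a database, starting a
        # container, loading a model, etc.
        return object()

    class SlowTests(unittest.TestCase):
        def setUp(self):
            # Runs before EVERY test, so total runtime grows with the
            # number of tests even when the fixture could be shared.
            self.fixture = expensive_fixture()

        def test_one(self):
            self.assertIsNotNone(self.fixture)

    class FasterTests(unittest.TestCase):
        @classmethod
        def setUpClass(cls):
            # Runs once per class; fine as long as tests don't mutate the fixture.
            cls.fixture = expensive_fixture()

        def test_one(self):
            self.assertIsNotNone(self.fixture)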
I have an air compressor which is powered by the 12V DC outlet in a car. They are quite cost effective and easy to buy. I use it all the time to refill my tires. Much better than some odd exhaust pressure solution.