

Well, I doubt they’ll release one for my clippers since they’re discontinued, so that inspired me to go ahead and model a variable-depth one for myself. Based on some of the comments here, I thickened the comb blades to make them print more easily.
They haven’t released one for the razor I have, but honestly I might try modeling them myself. It doesn’t seem impossible, and I’ve been wanting a deeper comb than they sell.
No, no, of course not, but it’s a compatibility layer for Windows inside Linux.
That…is Wine.
…and for anyone like me who was unsure, yes it works equivalently for AMD. I think Intel as well, but I’m not sure about that.
Well, you will have excess solar power during the day, so just keep it plugged in to the solar while solar is available. Then, just unplug the laptop in the evening until you get to 15-20%.
Trying to force the laptop to discharge while plugged in is colossally more trouble than it’s worth.
I would assume they left the MX off of the laptop GPU names since, until recently, they were all MX cards. Regardless, the “card of the right approximate era” approach should work unless there are patches specific to your card, which is unlikely.
It seems to me that the offending dialog would only be triggered if you did a full fresh install. During the previous iteration of the testing, they probably had a VM somewhere with it installed; since the underlying packages were already present, the dialog would never have popped up.
Yup. Even for technical writing, markdown with embedded LaTeX is great in most cases, thanks largely to Pandoc and its ability to convert the markdown into pure LaTeX. There are even manuscript-focused Markdown editors, like Zettlr.
Ubuntu 16.04, dual booted on my laptop before I knew how much of a hassle that could be! Fortunately, never had any of the infamous issues.
A new iteration of open-source drivers for NVIDIA cards which aims to work better and be more feature-complete. Original announcement post here which explains a bit better.
There are currently 252 Catholic cardinals, but only 135 are eligible to cast ballots: those over the age of 80 can take part in debate but cannot vote.
You’re telling me the Catholic church has more term limits than the US Supreme Court?
Maybe the graph mode of logseq?
Not somebody who knows a lot about this stuff, as I’m a bit of an AI Luddite, but I know just enough to answer this!
“Tokens” are essentially just a unit of work – instead of interacting directly with the user’s input, the model first “tokenizes” the user’s input, simplifying it down into a unit which the actual ML model can process more efficiently. The model then spits out a token or series of tokens as a response, which are then expanded back into text or whatever the output of the model is.
I think tokens are used because most models use them, and use them in a similar way, so they’re the lowest-level common unit of work where you can compare across devices and models.
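To make the idea concrete, here's a minimal toy sketch of the tokenize → process → detokenize round trip. This is purely illustrative: real models use learned subword vocabularies (e.g. BPE), not a naive word list, and the function names here are made up for the example.

```python
# Toy tokenizer: maps whitespace-separated words to integer IDs and back.
# Real LLM tokenizers use learned subword units, not whole words.

def build_vocab(corpus):
    """Assign each unique word in the corpus an integer ID."""
    vocab = {}
    for word in corpus.split():
        if word not in vocab:
            vocab[word] = len(vocab)
    return vocab

def tokenize(text, vocab):
    """Map text to token IDs; unknown words get a reserved 'unknown' ID."""
    unk = len(vocab)  # one ID past the vocabulary for out-of-vocab words
    return [vocab.get(word, unk) for word in text.split()]

def detokenize(ids, vocab):
    """Expand token IDs back into text (the inverse of tokenize)."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse.get(i, "<unk>") for i in ids)

vocab = build_vocab("the model reads the input")
ids = tokenize("the input", vocab)
print(ids)                     # [0, 3]
print(detokenize(ids, vocab))  # the input
```

The model itself only ever sees the integer IDs (usually mapped to embedding vectors), which is why tokens make a convenient common unit for benchmarking: tokens-per-second measures the same kind of work regardless of what the text says.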
Agreed! I’m just not sure TOPS is the right metric for a CPU, given how different the CPU data pipeline is from a GPU’s. Pipeline bubbles and stalls in the instruction stream are one thing, but the dominant instruction type in a workload also significantly affects how many instructions can run per clock cycle, whereas on matrix-optimized silicon it’s a lot more fair to generalize over a bulk workload.
Generally, I think it’s fundamentally challenging to produce a single, generally applicable number that represents CPU performance across different workloads.
I mean, sure, but largely GPU-based TOPS isn’t that good a comparison with a CPU+GPU mixture. Most tasks can’t be parallelized that well, so comparing TOPS between an APU and a TPU/GPU is not apples to apples (heh).
Obsidian isn’t open source, but it’s so solid I almost don’t care…
I just came across the lines in the openSUSE 42 .bashrc to connect to Palm Pilots today…what a flashback.
They could have at least renamed it to Radeon Operational Compute method or something…
Yeah, Mint uses Synaptic. Works well in my experience.