Epi/mono facts: "f is mono" means "(f .) is injective"; "f is epi" means "(. f) is injective". "f is split mono" means "(. f) is surjective"; "f is split epi" means "(f .) is surjective". Split X implies X. Split mono ∧ epi = split epi ∧ mono = iso.
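The last fact is a one-line chase. If $r \circ f = \mathrm{id}$ (split mono) and $f$ is epi:

```latex
(f \circ r) \circ f \;=\; f \circ (r \circ f) \;=\; f \;=\; \mathrm{id} \circ f
\;\Longrightarrow\; f \circ r = \mathrm{id}
\quad (\text{cancel } f \text{ on the right, since } f \text{ is epi}),
```

so $r$ is a two-sided inverse and $f$ is iso. The split-epi-plus-mono case is dual.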
Reading an unfamiliar program is so much easier if, at the top of each file, there's a comment describing what the file is about, and what the high-level ideas are. This is one of the most valuable kinds of comments and I don't know why it isn't more common.
For Richardson's theorem style reasons, integrals of even very simple functions can have weird behavior. Here's an example I came up with in college. Bonus points if anyone has a simpler proof than the one I found.
If I had to make up a task that's trivial to do in a cross-platform way, it would be something like "read in some files and emit another file based on their contents". This seems so easy that being cross-platform should be automatic. And yet cross-compiling is often a nightmare?
My answer: In the two-generals problem, the deadline arrives, and each general has to decide "yes" or "no". In atomic commit protocols, a transaction participant is also allowed to say "I don't know yet", and wait to hear from other participants before it's sure.
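To make the contrast concrete, a toy sketch (all names and return values invented for illustration):

```python
def general_decide(heard_ack, deadline_passed):
    # A general is forced to output a final answer at the deadline, even
    # if it never learned what the other general will actually do.
    if deadline_passed:
        return "attack" if heard_ack else "retreat"
    return None  # still exchanging messengers

def participant_decide(voted_yes, coordinator_decision):
    # A 2PC participant that voted yes enters an *uncertain* state: it
    # outputs neither commit nor abort until the coordinator's decision
    # (or a recovery protocol) resolves it. That third option is exactly
    # what the generals are denied.
    if not voted_yes:
        return "abort"
    if coordinator_decision is None:
        return "uncertain"
    return coordinator_decision
```

The cost of that third option is, of course, blocking: a yes-voting participant may have to hold its locks indefinitely while uncertain.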
An AC power grid made way more sense in the 20th century, but switch-mode DC->DC converters have gotten *so* insanely cheap, small, and efficient, especially with GaN FETs. Would a DC power grid (say, ~20 kV distribution, ~300 V in your house) make more sense in 2025?
Apparently when machine learning people say "convolution" they usually mean "cross-correlation"? It was confusing trying to make sense of the expression I was seeing!
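Concretely (a self-contained sketch, not any particular library's API): true convolution is just cross-correlation with the kernel reversed, so the two coincide exactly when the kernel is symmetric — which is why the naming slip is usually harmless for learned kernels.

```python
def cross_correlate(x, k):
    """'Valid'-mode cross-correlation: slide k across x WITHOUT flipping it.
    This is what ML frameworks typically compute and call 'convolution'."""
    n = len(k)
    return [sum(x[i + j] * k[j] for j in range(n))
            for i in range(len(x) - n + 1)]

def convolve(x, k):
    """True (valid-mode) convolution: the same sliding dot product, but
    with the kernel reversed."""
    return cross_correlate(x, k[::-1])
```

For an antisymmetric kernel the sign flips: `cross_correlate([1, 2, 3, 4], [1, 0, -1])` gives `[-2, -2]`, while `convolve` on the same inputs gives `[2, 2]`. Since a learned kernel is arbitrary anyway, the distinction only bites when you compare against a math textbook — which is where my confusion came from.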
I think I want to switch back to a laptop running GNU quotient Linux. What laptops are people recommending these days? I mostly just want a long battery life, and a large screen, and otherwise want it to be relatively slim. I don't need or want a discrete GPU or anything fancy.
Is there a rank-select bitmap algorithm that I should have in my mind as "canonical" (reasonably simple and practical)? I know there are a bunch of them but I don't really know how any of them work in detail, and I vaguely remember seeing some pretty complicated constructions.
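For what it's worth, the simplest version I know how to state (a sketch of one level of the usual counter idea, with made-up names — real implementations add a second "superblock" level so the counters take o(n) bits, and answer select by sampled/binary search over them):

```python
WORD = 64

class RankBitmap:
    """Rank in O(1): cumulative popcount before each 64-bit word,
    plus one masked popcount at query time."""

    def __init__(self, bits):  # bits: sequence of 0/1
        self.words = [
            sum(b << j for j, b in enumerate(bits[i:i + WORD]))
            for i in range(0, len(bits), WORD)
        ]
        # prefix[w] = number of 1s strictly before word w
        self.prefix = [0]
        for v in self.words:
            self.prefix.append(self.prefix[-1] + bin(v).count("1"))

    def rank1(self, i):
        """Number of 1 bits in bits[0:i]."""
        w, r = divmod(i, WORD)
        if r == 0:
            return self.prefix[w]
        return self.prefix[w] + bin(self.words[w] & ((1 << r) - 1)).count("1")
```

The complicated constructions are mostly about squeezing the counters (and select) into strictly sublinear extra space while keeping worst-case O(1); if you don't need that, this plus binary search over `prefix` for select gets you surprisingly far.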
A similar puzzle in memory models: Peterson's algorithm implements two-process mutual exclusion for writes to shared state, but it's implemented on hardware units communicating via shared state (voltages on wires) that don't natively provide that abstraction. How?
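For reference, here's Peterson's algorithm as a sketch. I'm writing it in Python, where CPython's GIL happens to make plain loads and stores appear sequentially consistent, so it actually works; on bare hardware the store to `flag[me]` can sit in a store buffer past the load of `flag[other]`, letting both threads enter — and the resolution of the puzzle is that real hardware exposes fences/atomic instructions that restore exactly the ordering the algorithm assumes.

```python
import threading

# Peterson's two-thread mutual exclusion (thread ids 0 and 1).
# Correct under sequential consistency, which CPython's GIL provides
# here but plain hardware loads/stores do not.
flag = [False, False]
turn = 0

def lock(me):
    global turn
    other = 1 - me
    flag[me] = True      # announce intent
    turn = other         # yield ties to the other thread
    while flag[other] and turn == other:
        pass             # spin until the other thread defers or leaves

def unlock(me):
    flag[me] = False
```

Under contention the `turn` variable forces the two threads to alternate, which is also why this is deadlock- and starvation-free for two threads (but doesn't generalize past two without the filter-lock construction).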
Vague thought: Could the kinds of heuristics used in branch predictors apply to SAT solvers for choosing a literal assignment on (frequent) restarts? "phase saving" (just use the last value) is a common strategy, but does it make sense to do something more sophisticated?
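For concreteness, the baseline looks something like this (a toy sketch with invented names, not any real solver's internals):

```python
class PhaseSaver:
    """Phase saving: remember the polarity of every assignment (decisions
    AND propagations); when the solver later decides on a variable --
    e.g. right after a restart -- reuse the saved polarity instead of a
    fixed or random default."""

    def __init__(self, default=False):
        self.default = default
        self.saved = {}  # var -> last assigned polarity

    def on_assign(self, var, value):
        self.saved[var] = value

    def pick_phase(self, var):
        return self.saved.get(var, self.default)
```

A branch-predictor-flavored variant of my vague thought might keep, say, a two-bit saturating counter per variable and only flip the preferred polarity after repeated disagreement, rather than on every assignment.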
Good parsers for "full" languages are *so* easy once you know a couple of honestly quite simple tricks (such as precedence climbing) that it's absurd how often people think they need parser generators. And, in their defense, I also used to think it was hard! Maybe I'll write a post...
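Since I'll inevitably be asked: here is the whole precedence-climbing trick, sketched over a pre-tokenized input (the operator table and token conventions are made up for the example).

```python
# Operator -> (precedence, associativity). Higher binds tighter.
PREC = {"+": (1, "left"), "-": (1, "left"),
        "*": (2, "left"), "/": (2, "left"),
        "^": (3, "right")}

def parse(tokens):
    pos = 0

    def peek():
        return tokens[pos] if pos < len(tokens) else None

    def next_tok():
        nonlocal pos
        t = tokens[pos]
        pos += 1
        return t

    def parse_atom():
        t = next_tok()
        if t == "(":
            e = parse_expr(1)
            assert next_tok() == ")", "expected closing paren"
            return e
        return t  # a number or identifier

    def parse_expr(min_prec):
        # The one clever bit: only consume operators at least as tight as
        # min_prec; recurse with a higher floor for left-assoc operators.
        lhs = parse_atom()
        while peek() in PREC and PREC[peek()][0] >= min_prec:
            op = next_tok()
            prec, assoc = PREC[op]
            rhs = parse_expr(prec + 1 if assoc == "left" else prec)
            lhs = (op, lhs, rhs)  # build an AST node
        return lhs

    return parse_expr(1)
```

That's the whole thing: `parse(["1", "+", "2", "*", "3"])` yields `("+", "1", ("*", "2", "3"))`, and right-associativity falls out of recursing with `prec` instead of `prec + 1`. Statements, `if`/`while`, etc. are then just plain recursive descent around this core.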