dawn on a clear day
#bluesky #sunrise
— rob mahurin (@robtasm.bsky.social) February 22, 2026 at 8:20 AM
[image or embed]
Here's a really excellent weather forecast graphic that the NOAA website has been producing for ages. I haven't found anyone else who makes one as useful. To find yours, go to weather.gov (easier to remember than to bookmark), put in your zip code, and search the forecast page for the word "hourly."

I talk frequently about my wife, who says "I believe weather reports, like 'it rained.'" But if I want to dissect how much to believe a forecast, this is the best one. Let's break it down.
I have been reminding myself about Sphinx, a content management system for static file trees which is most commonly used for Python documentation. I keep repeating the same changes to the default setup. Here is a summary.
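As a sketch of the kind of conf.py changes I mean (the extension list and theme here are illustrative, not a canonical checklist):

```python
# conf.py tweaks I find myself re-applying after sphinx-quickstart.
# These particular choices are examples, not a fixed recipe.
project = "my-project"

extensions = [
    "sphinx.ext.autodoc",    # pull API docs out of docstrings
    "sphinx.ext.mathjax",    # render LaTeX math in HTML output
    "sphinx.ext.viewcode",   # link documentation to highlighted source
    "sphinx.ext.napoleon",   # accept Google/NumPy-style docstrings
]

# The default theme is alabaster; swapping it is usually change number one.
html_theme = "alabaster"
```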
One of my favorite people has a birthday today, and received the standard message:
HIPY PAPY BTHUTHDTH THUTHDA BTHUTHDY
A couple of months ago I listened to Brandon Sanderson's "Oathbringer" as an audiobook. I found myself wanting to be a snob about it (there's plenty of self-important silliness), but I kept thinking "wow, that's a real banger of a line." Eventually I started scribbling them on scraps of paper, which I then promptly lost. But here are a few.
Failure is the mark of a life well-lived.
Blasphemy! Art is not art if it has a function.
Sometimes a hypocrite is nothing more than a man who is in the process of changing.
I was reminded today, by a video from "Mathemaniac", about the James-Stein estimator.
Suppose I'm trying to estimate a number $n$ of independent parameters simultaneously, by taking one sample from each of $n$ one-dimensional normal distributions with unknown means $\mu_i$ and unit standard deviations $\sigma_i = 1$. The naïve estimator is to use each sample $x_i$ as the estimate $\hat\mu_i$ of its mean. However, if the number of parameters is large enough (it turns out $n \ge 3$ suffices), the shrink-toward-zero estimator
$$ \left(\begin{array}{c} \hat \mu_1 \\ \vdots \\ \hat \mu_n \end{array}\right) = \left( 1-\frac{n-2}{x_1^2 + \cdots + x_n^2} \right) \left(\begin{array}{c} x_1 \\ \vdots \\ x_n \end{array}\right) $$
actually produces a smaller mean-squared error on the ensemble as a whole.
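Here's a quick simulation of that claim (a sketch; the true means are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 10, 100_000

# Arbitrary true means, one per parameter.
mu = np.linspace(-1, 1, n)

# One unit-variance sample per parameter, per trial.
x = rng.normal(mu, 1.0, size=(trials, n))

# Naive estimator: each sample estimates its own mean.
mse_naive = np.mean(np.sum((x - mu) ** 2, axis=1))

# James-Stein: shrink every sample toward zero by a common factor.
shrink = 1 - (n - 2) / np.sum(x ** 2, axis=1, keepdims=True)
mse_js = np.mean(np.sum((shrink * x - mu) ** 2, axis=1))

print(f"naive MSE:       {mse_naive:.3f}")  # close to n = 10
print(f"James-Stein MSE: {mse_js:.3f}")     # reliably smaller
```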

My current book is A City On Mars, by the Weinersmiths. The short-short summary is that space travel is really hard, and that people who are optimistic about a permanent human presence in space have mostly not thought about the problem hard enough.

Found a monthly "board game club" today. Met some strangers and played Gravwell (review, amazon), a space-flavored racing game whose core mechanic is playing cards that move you toward or away from the other players. Everybody puts down their cards at the same time, so there is a risk that the other players' positions will change before it's your turn to move, and you'll move the wrong way.
Pretty fun, honestly.
A lovely explanation of in-camera digital photo processing:
Before:
After:
Far from being an “unedited” photo: there’s a huge amount of math that’s gone into making an image that nicely represents what the subject looks like in person. […] There’s nothing wrong with tweaking the image when the automated algorithms make the wrong call.
I thought it would be cute to follow last night's post with a dawn … but I slept through it. So, another dusk.
I learned the other day about sunwait, a command-line tool for predicting local sunrise and sunset times.

And so the sun sets on 2025.

Watch the reflection of the sun move from the center of the field of view to the Indian Ocean and become an ocean sunset thousands of miles across.
Then, because it's solstice season, the twilight zips around Antarctica on its way to becoming the dawn.
(source)
Are you missing some adjectives? They might be in this derisively mocking 2013 review of fictional novel Inferno by renowned and unjustly ridiculed author Dan Brown.
By Jenny Joseph, 1961:
When I am an old woman I shall wear purple
With a red hat that doesn't go, and doesn't suit me,
And I shall spend my pension on brandy and summer gloves
And satin sandals, and say we've no money for butter.
I shall sit down on the pavement when I am tired,
And gobble up samples in shops and press alarm bells,
And run my stick along the public railings,
And make up for the sobriety of my youth.
I shall go out in my slippers in the rain
And pick the flowers in other people's gardens,
And learn to spit.
You can wear terrible shirts and grow more fat,
And eat three pounds of sausages at a go,
Or only bread and pickle for a week,
And hoard pens and pencils and beer mats and things in boxes.
But now we must have clothes that keep us dry,
And pay our rent and not swear in the street,
And set a good example for the children.
We will have friends to dinner and read the papers.
But maybe I ought to practise a little now?
So people who know me are not too shocked and surprised
When suddenly I am old and start to wear purple!
I'm trying to decide which turn of phrase I like better here: the terrifically informative
the boiling temperature of paraffin wax is hotter than its autoignition temperature
or the low-hanging fruit that is
Don't try this at home! Do it at your friend's house first.

(source)
Consider the following decimal expansions:
$$\begin{aligned}
\textstyle \frac{1}{1} &= 1.0 &
\textstyle \frac{1}{11} &= 0.[09] &
\textstyle \frac{1}{21} &= 0.[04761\,9] \\
\textstyle \frac{1}{2} &= 0.5 &
\textstyle \frac{1}{12} &= 0.08[3] &
\textstyle \frac{1}{22} &= 0.0[45] \\
\textstyle \frac{1}{3} &= 0.[3] &
\textstyle \frac{1}{13} &= 0.[07692\,3] &
\textstyle \frac{1}{23} &= 0.[04347\,82608\,69565\,21739\,13] \\
\textstyle \frac{1}{4} &= 0.25 &
\textstyle \frac{1}{14} &= 0.0[71428\,5] &
\textstyle \frac{1}{24} &= 0.041[6] \\
\textstyle \frac{1}{5} &= 0.2 &
\textstyle \frac{1}{15} &= 0.0[6] &
\textstyle \frac{1}{25} &= 0.04 \\
\textstyle \frac{1}{6} &= 0.1[6] &
\textstyle \frac{1}{16} &= 0.0625 &
\textstyle \frac{1}{26} &= 0.0[38461\,5] \\
\textstyle \frac{1}{7} &= 0.[14285\,7] &
\textstyle \frac{1}{17} &= 0.[05882\,35294\,11764\,7] &
\textstyle \frac{1}{27} &= 0.[037] \\
\textstyle \frac{1}{8} &= 0.125 &
\textstyle \frac{1}{18} &= 0.0[5] &
\textstyle \frac{1}{28} &= 0.035[71428\,5] \\
\textstyle \frac{1}{9} &= 0.[1] &
\textstyle \frac{1}{19} &= 0.[05263\,15789\,47368\,421] &
\textstyle \frac{1}{29} &= 0.[03448\,27586\,20689\,65517\,24137\,931] \\
\textstyle \frac{1}{10} &= 0.1 &
\textstyle \frac{1}{20} &= 0.05 &
\textstyle \frac{1}{30} &= 0.0[3]
\end{aligned}$$
Each of these either terminates, like $\frac18 = 0.125$, or repeats, like $\frac{1}{27} = 0.[037]$. What determines the length of these repeating sequences?
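One way to explore: the repeating block of $1/n$ has length equal to the multiplicative order of 10 modulo $n$, once the factors of 2 and 5 are stripped out. A short script to check this against the table above:

```python
def repetend_length(n: int) -> int:
    """Length of the repeating block in the decimal expansion of 1/n."""
    # Factors of 2 and 5 only shift the expansion; strip them out.
    for p in (2, 5):
        while n % p == 0:
            n //= p
    if n == 1:
        return 0  # the expansion terminates
    # The period is the smallest k with 10**k = 1 (mod n).
    k, r = 1, 10 % n
    while r != 1:
        r = (10 * r) % n
        k += 1
    return k

for n in (7, 17, 23, 27, 29):
    print(n, repetend_length(n))  # 6, 16, 22, 3, 28 -- matching the table
```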
From this year's "summer of math exposition" collection, an explainer on the posit, a variable-precision alternative to standard floating-point number formats.
The posit uses fewer bits to encode small exponents, so numbers whose magnitude is roughly unity can get a few extra bits of precision in the mantissa.
The intuition that most floating-point values have magnitude roughly unity is certainly consistent with my experience, and also consistent with guidance that I have received from various computing mentors. But it would be interesting to build a virtual CPU and monitor which values enter the FPU during normal operation, to check.
This excellent visualization of vaccine effectiveness made the rounds over the past few days:


This book looks like a lot of fun. An excerpt (emphasis added):
"Etymology for its own sake is of little importance, even if it has curiosity value … the chief difficulty is that there can be no 'true' or 'original' meaning since human language stretches back too far." We have to agree with the latter — but the former seems absurd.
It's worth asking at this point why etymology is so seductive. For most people it represents their first (and frequently only) encounter with linguistics. As we know, words are at once completely prosaic — we use them every day, mostly without thinking — and rather mysterious. As a result it's natural to ask where they come from. We weave stories around their origin, both patently false ("lol" and "golf") and more plausible ("decimate" and "educate"). That curiosity shouldn't be dismissed: it's a knocking at the door of linguistics. If they shut it in people's faces, the guardians of knowledge about language risk closing off a route to both enlightenment and wonder. As a result, people seek their wonder elsewhere — in false accounts of how language works.
In any case, it isn't right to see etymology as some poor relation to "proper" linguistics. An attempt to explain why the meanings of words change is an attempt to explain how the mind works, how language works, and how society works. Perhaps this is why it has been deemed out of bounds.
Here's a delightful little story about the SR-71 Blackbird, attributed to Major Brian Shul, USAF (Retired), which I'm going to duplicate here so that it's less likely to disappear from the internet.
You can listen to Shul tell the story in a YouTube video, if that floats your boat (or flies your plane, I suppose). Doing both is amusing because the reported speed seems to have gotten larger as Shul has retold it. All good stories grow in the retelling.
There were a lot of things we couldn't do in an SR-71, but we were the fastest guys on the block and loved reminding our fellow aviators of this fact. People often asked us if, because of this fact, it was fun to fly the jet. Fun would not be the first word I would use to describe flying this plane. Intense, maybe. Even cerebral. But there was one day in our Sled experience when we would have to say that it was pure fun to be the fastest guys out there, at least for a moment.
I wrote a while ago about tornados with nested funnels, but a few days ago I found this lovely example on Bluesky.

I haven't been putting stuff here on this website, but I should change that.

On the dangers of naïve extrapolation. From Mark Twain’s “Life On The Mississippi,” 1883, via Project Gutenberg.
There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact.
Here's a quick-n-dirty Gaussian curve fitter, which I seem to reinvent about twice a year. There are a number of canned solutions that I can never remember how to use, but I also frequently find myself wanting to fit weird functions.
Below is a minimal working example. But the short-short version is:
    def func(x, *params):
        ...

    x, y = read_some_data()
    guess_params = [...]

    from scipy import optimize
    better_params, covariance = optimize.curve_fit(
        func, x, y, p0=guess_params)
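And here is a runnable version of the same pattern (the Gaussian model, synthetic data, and starting guess are all illustrative):

```python
import numpy as np
from scipy import optimize

def gaussian(x, amplitude, center, width, offset):
    return amplitude * np.exp(-0.5 * ((x - center) / width) ** 2) + offset

# Synthetic "measured" data: a known Gaussian plus a little noise.
rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 200)
y = gaussian(x, 2.0, 0.5, 1.2, 0.1) + rng.normal(0, 0.05, x.size)

guess = [1.0, 0.0, 1.0, 0.0]  # a rough eyeball guess is usually enough
params, covariance = optimize.curve_fit(gaussian, x, y, p0=guess)

print(params)                        # close to (2.0, 0.5, 1.2, 0.1)
print(np.sqrt(np.diag(covariance)))  # one-sigma parameter uncertainties
```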
In the Mandelbrot set — that is, the complex numbers $c$ for which the recurrence
$$\begin{aligned} z_{n+1} = z_n^2 + c \end{aligned}$$
remains finite when iterated from $z_0 = 0$ — there are different regions of recurrence. Here's the classic picture. Let's play a bit.
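A minimal escape-time sketch of that picture (the grid bounds, resolution, and iteration cap are arbitrary choices):

```python
import numpy as np
import matplotlib.pyplot as plt

xs = np.linspace(-2.0, 0.6, 800)
ys = np.linspace(-1.2, 1.2, 800)
c = xs[None, :] + 1j * ys[:, None]

z = np.zeros_like(c)
count = np.zeros(c.shape, dtype=int)   # iterations until |z| escapes 2
alive = np.ones(c.shape, dtype=bool)   # points still being iterated
for i in range(1, 101):
    z[alive] = z[alive] ** 2 + c[alive]
    escaped = alive & (np.abs(z) > 2)
    count[escaped] = i
    alive &= ~escaped

# Points that never escape (count == 0) belong to the set itself.
plt.imshow(count, extent=(-2.0, 0.6, -1.2, 1.2), origin="lower", cmap="magma")
plt.show()
```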

I am interested in hearing from perfect-pitchers and ethnomusicologists about this delightful physics-of-music video.
I'm particularly interested in an offhand observation that the "least dissonant" major third based on this overtone analysis is a little flat relative to the equal-temperament major third. By contrast, in choral singing the conductor is always complaining that the major third needs to be tuned a bit higher.
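For scale (assuming the "least dissonant" third here means the just-intonation ratio $\frac{5}{4}$, which is how I read it), the just third sits at
$$ 1200 \log_2 \frac{5}{4} \approx 386.3 \text{ cents}, $$
about 14 cents below the equal-temperament major third of exactly 400 cents, so the two complaints really do pull in opposite directions.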
Suppose I have some random process described by a binomial distribution, with "success" probability $p$. In $n$ trials, the number of "successes" $k$ is distributed as
$$\begin{aligned} P(k) &= {n\choose k} p^k (1-p)^{n-k} \end{aligned}$$
Now suppose I do a bunch of different sets of trials, such as practice exams of varying lengths. I want to model each practice exam as being drawn from a distribution of the appropriate size with the same probability. What's the right way to combine them?
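One natural answer, assuming every exam really does share the same $p$: a sum of independent binomials with a common $p$ is itself binomial, so the pooled maximum-likelihood estimate is just total successes over total trials. A sketch, with made-up exam scores:

```python
import numpy as np

# Hypothetical practice exams: (number correct, number of questions).
exams = [(18, 20), (41, 50), (88, 100)]
k = np.array([correct for correct, total in exams])
n = np.array([total for correct, total in exams])

# Binomial(n1, p) + Binomial(n2, p) = Binomial(n1 + n2, p),
# so the pooled MLE is total successes over total trials.
p_hat = k.sum() / n.sum()

# Rough one-sigma uncertainty from the pooled binomial variance.
sigma = np.sqrt(p_hat * (1 - p_hat) / n.sum())
print(f"p = {p_hat:.3f} +/- {sigma:.3f}")
```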
It doesn't matter whether you "view source" or not. Although on my current browser, "view source" uses different rules for word-wrapping.
The other day I was playing with tmux and learned that I can drive stuff in remote windows. But for doing this programmatically, the right tool is Python, as usual. Specifically, the tmuxp library, which includes an interactive shell and a parser for readable config-files.
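tmuxp is built on the libtmux library, which is what you want when scripting tmux directly. A minimal sketch (the session and window names and the command are illustrative, and method names vary a little between libtmux versions):

```python
import libtmux

# Connect to the running tmux server (or the default socket).
server = libtmux.Server()

# Create a fresh session, replacing any old one with the same name.
session = server.new_session(session_name="demo", kill_session=True)

# Open a window, split it, and type a command into the new pane.
window = session.new_window(window_name="work")
pane = window.split_window()
pane.send_keys("uptime", enter=True)
```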