The attempt at building niri for development got me reading a Rust book.

The Rust programming language is fundamentally about empowerment: no matter what kind of code you are writing now, Rust empowers you to reach farther, to program with confidence in a wider variety of domains than you did before.

Rust isn’t limited to low-level systems programming. It’s expressive and ergonomic enough to make CLI apps, web servers, and many other kinds of code quite pleasant to write.
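
As a minimal sketch of that ergonomics (my own toy example, not from the book), here is a tiny CLI greeter in Rust:

    // Toy CLI: `cargo run -- world` prints "hello, world".
    use std::env;

    fn main() {
        // Everything after the program name becomes the greeting target.
        let args: Vec<String> = env::args().skip(1).collect();
        let name = if args.is_empty() { "there".to_string() } else { args.join(" ") };
        println!("hello, {name}");
    }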


On the topic of ‘infrastructure as code’ there arises the organizational question of whether to put everything in one repository (i.e. all the infrastructure for every app) or to divvy it up into the individual app repos. There will always be overlap and interdependencies, so there’s no easy answer. Some like to deploy infrastructure changes along with the application code, and the per-app-repo approach suits that. That worries me, though, because it puts more variables in flux at once. I’d rather have infrastructure changes roll out in advance of application changes (unless there’s some reason they can’t), and so the single shared repo is my preference.


One thing I haven’t found for Linux is a good ‘random Unicode symbol’ entry system. On OS X I could easily use built-in key combos for the semi-weird characters (like bullets) and then pull up an “emoji/special characters” window to select the even weirder ones.

In citations to the Summa, c. => contra and co. => corpus. The contra is the short sed contra est (On the contrary) section after the opening objections. The corpus is the respondeo dicendum (I answer that) section where St. Thomas elaborates his answer.

in ipsa forma non est potentia ad non esse (“in the form itself there is no potency toward non-being”) I, q.9, a.2, co.

Science may be described as the art of systematic oversimplification.

  • Karl Popper

Since the fabric of the universe is most perfect and the work of a most wise Creator, nothing whatsoever takes place in the universe in which some rule of maximum or minimum does not appear.

  • L. Euler

The term for the idea that AI could improve itself indefinitely is “recursive self-improvement” or RSI.

A related term, often used for a system that has achieved (or could sustain) this kind of self-improvement, is “Artificial General Intelligence” or AGI.

RSI’s impossibility stems from epistemological limits (no self-bootstrapping without grounding), computational bounds (diminishing returns, uncomputability), and philosophical subjectivity (“improvement” as human-relative, not machine-quantifiable). For instance, if “the good” is an abstraction requiring nuanced values, pure recursion can’t capture it without human-like qualia or external input.

Related reading: The Illusion of Self-Improvement: Why AI Can’t Think Its Way to Genius