In science, unanchored agentic AI risks becoming its own form of vibe coding—speed without accountability. Spec Science is the alternative: structured, reproducible, and grounded in Research Objects (ROs).
Humanity As Rationalizing The Irrational: The HARTI Hypothesis
The HARTI hypothesis situates humanity’s uniqueness not in pure logic, speech, or play, but in the general-purpose ability to transform the irreducible into the intelligible. Whether viewed as the root of our greatness or the source of our downfall, rationalization remains central to any account of what it means to be human.
TSM-12: RELIGN: A Homoiconic Language for Synchronous, Stateful Reactive Hardware Design
ChatGPT prompt (condensed): As the inventor of Verilog, give a keynote at DAC about an ideal hardware design language based on TBC and Hexons that combines the synchronicity of SIGNAL with the statefulness of Erlang.

Opening: Setting the Stage. Phil: Good morning, everyone. It’s great to be here at DAC—a conference that brings together the best... Continue Reading →
TSM-10.3: Hexons – Unifying Hardware and Software Through a Post-Object Model
This idea builds on a concept I’ve long championed: **software and hardware aren’t distinct entities but two expressions of the same fundamental processes**. Hexons aim to reflect this by collapsing the boundary between the two, offering a new kind of computational atom that works equally well at the hardware and software levels.
TSM-10.1: HLIR – Homoiconic, High-Level Intermediate Representation
HLIR encodes instructions in a homoiconic form. It represents a novel synthesis in compiler design, bridging the gap between human and machine representations of programs. By combining monadic composition with homoiconic structure, HLIR allows developers to express computational intent with minimal syntax while maintaining direct mappings to MLIR's powerful optimization framework. This marriage of high-level semantics with low-level compilation produces a uniquely ergonomic intermediate representation: one where code is data, transformations are first-class citizens, and optimization becomes natural rather than imposed. The result is a language that is both easy for humans to reason about and efficient for compilers to transform, potentially setting a new standard for intermediate representations in modern compiler design.
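To make "code is data" concrete, here is a minimal Python sketch of a homoiconic IR in this spirit. It is an illustration only: the tuple encoding and the `const_fold` pass are hypothetical, not HLIR's actual syntax or API.

```python
# Hypothetical sketch of a homoiconic IR (not HLIR's actual encoding):
# programs are plain data, so transformations are ordinary functions.
from typing import Union

Expr = Union[int, tuple]  # an expression is a literal or (op, *operands)

def const_fold(expr: Expr) -> Expr:
    """A first-class transformation: rewrite ('add', 2, 3) to 5."""
    if not isinstance(expr, tuple):
        return expr
    op, *args = expr
    args = [const_fold(a) for a in args]
    if op == "add" and all(isinstance(a, int) for a in args):
        return sum(args)
    return (op, *args)

# Code is data: the structure we would execute is the one we rewrite.
program = ("mul", ("add", 2, 3), ("add", 1, 0))
print(const_fold(program))  # ('mul', 5, 1)
```

Because a pass is just a function over data, composing passes is ordinary function composition, which is the ergonomic property the excerpt describes.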
TSM-4: Total Computing with Pres — The Future of Safe, Expressive Software
For decades, Turing-complete computing has been the bedrock of modern programming. While this has empowered developers to create powerful, general-purpose systems, it has also forced us to accept a troubling reality: bugs, crashes, and unpredictable behavior are often seen as inevitable. These issues are typically viewed as the price we pay for the flexibility and... Continue Reading →
TSM-1: The Shannon Machine — Better Than Turing Complete?
The Shannon Machine is a decider (it halts on every input) that uses bit-level word operations, rather than high-level computation, to perform arithmetic. The goal is to model practical computation in a way that is more realistic than, yet still as formal as, the Linear Bounded Automaton, which has a similar level of computational power.
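As a rough illustration of the bit-level flavor (an assumption-laden sketch, not the Shannon Machine's formal definition: the word width, mask, and `add` routine are invented for the example), fixed-width addition can be built from AND, XOR, and shift alone, and it always halts because the loop is bounded by the word width:

```python
# Illustrative only: fixed-width arithmetic from bit-level word operations.
WIDTH = 8
MASK = (1 << WIDTH) - 1

def add(a: int, b: int) -> int:
    """Fixed-width addition using only AND, XOR, and shift."""
    a, b = a & MASK, b & MASK
    for _ in range(WIDTH):       # bounded loop: always terminates
        carry = (a & b) << 1     # bit positions that generate a carry
        a = (a ^ b) & MASK       # sum ignoring carries
        b = carry & MASK
    return a

print(add(5, 9))                      # 14
assert add(200, 100) == (300 & MASK)  # wraps modulo 2**WIDTH
```

Because carries can propagate at most WIDTH positions, the loop bound makes this a decider in miniature: it terminates on every input, unlike unbounded Turing-complete loops.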
