Diagrammatic type theory
(+ the fundamental principles of mathematics)



Abstract. Diagrammatic (observational) type theory frames traditional type-theoretic ideas in diagrammatic terms. Along the way, some new ideas arise. It does away with the complications of inductive types to unveil a (the) deeper mystery of canonical isotopies.

In nine short stories about diagrammatic higher categories the idea of “diagrammatic” (or “geometric”) type theory was introduced. While it so far remains a vision-to-be-implemented, the approach is meant to dissect key principles of (higher-dimensionally infused) type theory as a foundation for mathematics.

The basic principles

I’ll focus on three main principles.

  1. Logical rules as basic geometric operations … and mathematics giving us “infinite room” to work with
  2. Inductive constructions as free composition in the universe … allowing us to make “anything we want”
  3. Equality as a structural property… “observed” by looking inside stuff.

(Note: in the original article, these were split into five principles… the difference is purely cosmetic.)

Principle 1: We got infinite room to work with!

To do new mathematics we compose existing mathematics. Composition can be understood, naturally, diagrammatically. Here’s a version of my favorite table from the ninth story:

| Logic / type theory | Diagrammatic mathematics |
| --- | --- |
| (Coherent) composition of terms | Diagrammatic composition (+ isotopies) |
| Working with (“non-dependent”) contexts | Working with symmetric monoidal diagrams (*) |
| Reflection from universes | Reflection from “boundary” diagrams |

A quick way to summarize what’s going on here is that we are working with the $(\infty, \infty)$-category of (actually small) $(\infty, \infty)$-categories. This “universe”:

  • is a category so we can compose its morphisms.
  • is symmetric monoidal (so whether you consider $x : A, y : B$ or $y : B, x : A$ doesn’t really matter)… and we also have a unit $1$!
  • has a (slightly smaller) version $\mathcal{U}$ of itself inside itself; diagrams between $1$ and $\mathcal{U}$ are representations of what’s in the “inside” universe $\mathcal{U}$, which can be “reflected” into the “outside” universe.

Yes, the last point naturally hints at a cumulative hierarchy of universes (and so a proof strength of something like ZFC + countably many inaccessible cardinals, see here).
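As a very down-to-earth (and strictly 1-categorical) analogue of a “universe inside the universe”, proof assistants already implement such hierarchies. Here is a minimal Lean 4 sketch of my own (Lean’s hierarchy is non-cumulative, but the picture of each universe being a small object of the next one is the same):

```lean
-- Each universe of types is itself a (small) object of the next universe.
#check (Nat : Type)        -- an ordinary small type lives in `Type`
#check (Type : Type 1)     -- `Type` itself is an object of `Type 1`
#check (Type 1 : Type 2)   -- ... and so on, giving a hierarchy of universes
```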

Moreover, the last point is where dependent types come into play. Indeed, an $A$-dependent type is nothing but a functor $A \to \mathcal{U}$. A traditional context with dependencies is not easily represented as a simple diagram, but rather must first “reflect all data down” to the same level. This is actually where it gets more interesting (and some new ideas may lurk around the corner when trying to make the correspondence precise): for example, the dependent context $x : A, y : B(x)$ can be understood diagrammatically as a natural transformation $\mathrm{const}_1 \to B \circ \mathrm{const}_x$.
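In proof-assistant terms part of this dictionary is already familiar. A hedged Lean 4 toy example of my own (with `Type` standing in for $\mathcal{U}$): a dependent type over `A` is literally a function `A → Type`, and the dependent context $x : A, y : B(x)$ “reflected down” to a single type is a dependent pair.

```lean
-- A dependent type over A is literally a function into the universe:
def B : Nat → Type := fun n => Fin n    -- the family  n ↦ {0, 1, …, n-1}

-- The dependent context  x : A, y : B(x),  reflected down to a single type
-- (a dependent pair / Sigma type):
def Ctx : Type := (n : Nat) × B n

-- An inhabitant of the context: choose x = 3 and y = 2 (an element of Fin 3).
example : Ctx := ⟨3, (2 : Fin 3)⟩
```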

(*) Thankfully, infinite-dimensional room can be modelled with finite-dimensional room.

Principle 2: We can make anything we want!

The second principle is that, to do mathematics, we can really create whatever we want. In a way, mathematics is “free”. A symptom of this observation is that universes are cocomplete (in contrast, completeness is, to some extent, a consequence of symmetric monoidality as discussed above).

Another way of expressing the principle is: our type theory has inductive types; i.e., types that are “defined inductively”. To cite the nLab:

A general way to construct free objects is with a transfinite construction of free algebras (in set-theoretic foundations), or with an inductive type or higher inductive type (in type-theoretic foundations).

Now inductive types are great … in theory. But in full generality, we should take them to mean directed higher inductive types, which are still not well understood!
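Ordinary (undirected, 1-dimensional) inductive types are, of course, the bread and butter of existing proof assistants. A minimal Lean 4 sketch of the “free construction” flavour, using the natural numbers as the type freely generated by `zero` and `succ`:

```lean
-- The natural numbers, freely generated by two constructors.
inductive MyNat : Type where
  | zero : MyNat
  | succ : MyNat → MyNat

-- Freeness in action: a map out of MyNat is determined by where the
-- generators go; this is exactly the recursion principle.
def toNat : MyNat → Nat
  | .zero   => 0
  | .succ n => toNat n + 1
```

A genuinely directed higher inductive type, such as the poset $(\mathbb{N},\leq)$ appearing below, has no counterpart in such systems yet; that is exactly the “not well understood” part.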

One obvious reason for studying directed higher type theory (especially in the $n = \infty$, i.e. $(\infty,\infty)$-categorical, case) is generality: $(\infty,\infty)$-categories are among the most general higher categorical structures. Concretely, this offers the following conceptual advantages.

  • Universe types can retain their higher directed structure. Universe types are (small) ‘internal’ reflections of the category of all types of our type theory, but without sufficient higher structure in the theory, this reflection process must forget structure present in the external category (for instance, the category of sets can only contain a ‘set of sets’ as an object which does not remember functions; similarly, the universe type in homotopy type theory must forget about maps between types that are not equivalences).
  • Dependent function types become ‘just’ function types. In the presence of higher directed structure, equipped with a universe type $\mathcal{U}$ and unit type $\mathbf{1}$, dependent function types $\Pi_{x : A} F(x)$, for $F$ a type family $F : A \to \mathcal{U}$, can be understood in terms of ‘just’ function types, namely, as $\mathsf{Fun} _ {\mathsf{Fun}(A,\mathcal{U})}(\mathrm{const} _ {\mathbf{1}},F)$ (where $\mathrm{const}_{\mathbf{1}} : A \to \mathcal{U}$ is the constant functor with image $\mathbf{1}$).
  • Inductive types become substantially more expressive. Inductive types with dependent constructors can also be expressed in terms of non-dependent constructors: for instance, for $F$ a type family $F : A \to \mathcal{U}$, the dependent pair type $\Sigma _ {x : A} F(x)$ can be introduced with a single constructor $\mathrm{in} : F \to \mathrm{const} _ {\Sigma_{x : A} F(x)}$ (which is a map in the function type $\mathsf{Fun}(A, \mathcal{U})$ and should thus be thought of as a natural transformation between functors); see the sketch after this list. Moreover, there is the topic of higher inductive types; a useful example of a ‘true’ directed higher inductive type is the poset type $(\mathbb{N},\leq)$ with constructors $0 : \mathbf{1} \to \mathbb{N}$, $\mathsf{succ} : \mathbb{N} \to \mathbb{N}$, and $\mathsf{less} : \mathrm{id}_{\mathbb{N}} \to \mathsf{succ}$.

  • Formalization of (higher) categorical semantics. On a more practical side, access to higher directed structure could allow us to formalize the categorical semantics of other type theories, such as homotopy type theory.
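To make the second and third points a touch more concrete, here is a hedged, merely 1-categorical Lean 4 approximation of my own (with `Type`-valued families standing in for $F : A \to \mathcal{U}$ and `Unit` for $\mathbf{1}$; the genuinely directed ingredients, like the $\mathsf{less}$ constructor, have no counterpart here):

```lean
-- Π-types as 'natural transformations' out of the constant family at 1:
-- a term of  (x : A) → F x  carries the same data as a family of maps  Unit → F x.
def toTransform {A : Type} (F : A → Type) (f : (x : A) → F x) :
    (x : A) → Unit → F x :=
  fun x _ => f x

def ofTransform {A : Type} (F : A → Type) (g : (x : A) → Unit → F x) :
    (x : A) → F x :=
  fun x => g x ()

-- Σ-types via a single 'non-dependent' constructor  in : F ⟶ const_{ΣF}:
-- for every  x : A, a map  F x → Σ F  (this is Lean's built-in `Sigma.mk`).
def intro {A : Type} (F : A → Type) (x : A) : F x → (y : A) × F y :=
  fun fx => ⟨x, fx⟩
```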

Ok, so this is why higher inductive types are so cool.

But they are certainly not easy. We will revisit a deeper problem in a bit. For now, I’ll add that defining inductive types is just one side of the story here: the other side is to actually work with inductive types, e.g. constructing functors on them (in the ninth story these constructions were called “inferred definitions”). The key term that makes this other side work is … pattern matching. This is a key principle of all of mathematics really, and certainly deserves its own treatment: in essence, the principle allows us to construct morphisms by specifying how to resolve any input term into an output term, thereby avoiding the introduction of any new terms in our existing types. This is orthogonal to the purpose of inductive definitions, which explicitly introduce new terms!
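A tiny Lean 4 illustration of the contrast (my own example): an inductive definition introduces new terms, while a pattern-matching definition only explains where existing terms go.

```lean
-- Pattern matching: a map out of an inductive type is specified by resolving
-- each shape of input term to an output term; no new terms are introduced.
def parity : Nat → Bool
  | 0     => true
  | n + 1 => !parity n

#eval parity 4   -- true
```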

Principle 3: Things are the same if they look the same

Unlike many other types of mathematicians, type theorists have known for a long time that equality by itself is a surprisingly deep concept. Deciding when and how stuff is equal is, to some extent, the bread and butter of modern type theories.

When I wrote nine short stories about diagrammatic higher categories, I struggled a bit in the last story (the vaguest of them all) when trying to express what I wanted from equality in my diagrammatic type theory. It seems to me that observational type theory takes a perspective that fruitfully resolves some of these struggles.

I’ll just mention the basic idea: two diagrams are the same iff they look the same (or unpack to the same thing). Easy. Ok, slightly more seriously: observational equality feels like a sibling of pattern matching to me, in that two things are equal if they have the same constituents. For example, $(a,b) = (c,d)$ iff $a = c$ and $b = d$. There’s more to be explored here for sure, especially from a diagrammatic viewpoint (where we could express “definitions”, like $X := A \times B$, as diagrams themselves … visually this would look like “packing”/”unpacking” diagrams into points). But overall, making this work properly sounds like a great idea! I’m looking forward to reading about what the HOTT community will come up with.
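The pair example is something today’s proof assistants can already “observe”: the auto-generated injectivity lemma for the pair constructor is precisely the componentwise unfolding. A minimal Lean 4 check (my own sketch):

```lean
-- Equality of pairs unfolds, observationally, to equality of the components.
example {A B : Type} (a c : A) (b d : B) :
    (a, b) = (c, d) ↔ a = c ∧ b = d := by
  simp [Prod.mk.injEq]
```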

HIT Focus: the basic problem, solution, and problem

Ok, let’s get back to focus on higher inductive types for the remainder of this text, and make the connection to diagrammatic categories a bit more tangible.

A key problem for working with higher inductive types is that they are “small types”, while correctly defining their higher compositional structure usually requires “big machinery”. But, as we’ve previously discussed, diagrammatic categories (or more precisely, diagrammatic computads) solve this problem. So … you should be able to build a cool higher directed type theory with them?

There’s only one difficulty (here, I see us in a situation of debugging our human approach to mathematics: we solve one problem to uncover a deeper one): canonicity. In order to make our type theory “compute” (i.e., fully resolve terms into expressions of generators) we must be able to guarantee that isotopies (i.e., diagrammatic coherences) decompose (at least non-constructively) into finite, classifiable generators.

This reduces to the classification of isotopies that we previously discussed, and thus deeply entangles (ha!) the isotopy classification problem with canonicity of diagrammatic type theory.

PS.

(But: even in the absence of canonicity, you could likely go ahead and formulate some sort of diagrammatic observational type theory … in which things are “true until disproven” :D)