Principles of higher-dimensional logic
Abstract. We allow ourselves to speculate a bit about how the 1-dimensional logical constructs that one frequently encounters in practical approaches to mathematical foundations may in fact emerge from a set of “more fundamental” principles of compositionality in higher-dimensional logic.
Warning. This note discusses some ideas about higher-categorical foundations at a philosophical level of rigor. An improved (and less rambly) version was later written up in the story ‘towards geometric type theory’.
Prologue
Here is the basic motivation for what is to come. Mathematical logic started off as a bunch of rules about how to deal with mathematical statements, largely inspired by human language: indeed, most early logical connectives could very well be expressed in human language (such as “and”, “or”, “for all”, “there exists”, …). Of course, more work needed to be done to make rules formal, and to describe frameworks for how such rules could be applied (natural deduction and sequent calculus are examples of such “proof calculi”). Besides picking your preferred set of rules, one may hope that there is some sort of “universal” set of rules that can describe all of the mathematics there is (from a platonic perspective, this would be something like the “ground truths of nature”). A damper to this ambition are Gödel’s incompleteness results: whatever reasonable set of rules you come up with, there will be statements that it cannot deal with, but that a bigger set of rules potentially can. While this may be annoying for platonists, formalists will feel validated: in the end, is mathematics just some manipulation of symbols by made-up “rules of some game”? Alas, a little later in the 20th century category theory entered the stage, and with it a well-organized toolset that allows us to take a bird’s-eye perspective on all such choices of rules at once. This seemed to suggest that there was more to mathematical foundations than just picking the rules of a game: indeed, there now was a unifying mathematical theory for all such games.
Very roughly speaking, this works as follows. Category theory abstractly studies structures that can be composed. Our choices for the “rules of logic” above produce such structures, since rules compose, and they can thus be studied by the means of category theory (see e.g. syntactic categories). Up to maybe some technicalities (for instance, how we deal with issues of “size”), this gives a pretty clear picture of how to study logic from a categorical perspective—in fact, this is part of a fundamental correspondence of foundational phenomena called the computational trilogy or the Curry–Howard–Lambek correspondence. To summarize, logics (i.e. systems of logical rules) give rise to categories which can then be studied in category theory. Crucially however, in doing so the mathematics of category theory transcends the principles of these logics: collectively, the categories of logics form a new type of structure called a “2-dimensional category”, or “2-category” for short. In contrast, the categories associated to the logics that we started with are “1-dimensional categories”, or “1-categories”. Unlike 1-categories, the structure of a 2-category contains objects that, compositionally, behave like 2-dimensional geometric objects. Previously, we had encountered only 1-dimensional compositional behaviour: rules compose linearly by chaining them one after the other (roughly speaking, at least).
Early on it was realized that we have, at this point, encountered a repeating pattern: 1-categories collectively organize into a 2-category, but 2-categories in turn collectively organize into a 3-category, 3-categories into a 4-category, and so on. While this “meta-principle” is intuitively evident, it is not at all straightforward to write down a definition of what an $n$-category actually is in order to make the principle mathematically precise. The difficulty in writing down the required definitions hides in the intricacies of higher-dimensional geometry, which allows objects to have complicated “knots” in higher dimensions (in fact, starting in dimension 3). Consequently, while 2-categorical logics (i.e. the logics that, via the computational trilogy, correspond to 2-categories) have still been considered in some detail, 3-categorical logics have already proven to be rather unfathomable. Instead, most practical foundational approaches to mathematics work by “projecting” higher dimensions to lower ones—most readers will be familiar with at least one example of such a projection: functions between sets themselves form a set, and in this way functions may be described using the theory of sets and their elements. Importantly however, functions, unlike generic elements of sets, have a notion of composition: such (higher) compositional behaviour still needs to be accounted for in this approach, and we may do so by introducing appropriate logical connectives and their rules (such as those of cartesian closed categories). The end result of this projection process is that, equipped with enough rules, we can describe the logics of 1-categories inside a 1-categorical logic (such as that of the category of sets). In fact, many practical foundations for mathematics presently work along these lines (in most cases, using 0-categorical and 1-categorical logic). We will mention another example of such a projection later on.
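To make the set-level instance of this projection concrete, here is a minimal sketch in Lean (the names are ours, chosen purely for illustration): function types are themselves terms of a universe, and, unlike generic elements, their elements compose.

```lean
-- Functions between types themselves form a type: `A → B` is again a term
-- of the universe `Type`, just like `A` and `B` are.
example (A B : Type) : Type := A → B

-- Unlike generic elements, functions carry a notion of composition; the
-- projected connectives must account for this extra structure.
def compose {A B C : Type} (g : B → C) (f : A → B) : A → C :=
  fun a => g (f a)
```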
Unfortunately, looking at logical connectives that arise from projections of higher-dimensional categorical behaviour in this way very quickly gets complicated: while 1-categorical behaviours (such as functions and their compositions) are still relatively easy to describe by connectives, in general we will run into difficulties rooted in the aforementioned intricacies of higher-dimensional geometry. For example, if we were to describe 3-categorical logics we would have to introduce a lot of language to describe 4-categorical behaviour, and this would be difficult (of course, this similarly applies to any other finite dimension $n > 2$). There is really only one way out of this dilemma: namely, letting $n$ go to $\infty$ and describing the logic of $\infty$-categories (meant here in an $(\infty,\infty)$-sense), thereby avoiding logical connectives that are projections of higher-categorical behaviour—instead we describe this behaviour directly (from which, if we want, we could recover connectives; we shall sketch an example of this later). Letting $n$ go to $\infty$ also comes with a second comforting observation, which concerns the more technical concept of higher object classifiers: while, for finite $n$, such classifiers are structures that witness the inclusion of lower-dimensional into higher-dimensional logic (for instance, the 1-category of sets contains a set for 0-categorical logic: namely, the set with elements “true” and “false”), for $n = \infty$ we instead observe the inclusion of $\infty$-logic into $\infty$-logic (in other words, collectively, $\infty$-categories form again an $\infty$-category). On one hand, this once more avoids ever having to deal with $n$-logic for finite $n$. On the other hand, and more importantly, we have found a natural way to recover the structures of “hierarchical universes” $U_i : U_{i+1}$ which feature heavily in many practical foundations to avoid the issues of size mentioned earlier.
So, in summary, why is this line of investigation into higher-dimensional logic potentially attractive? As sketched above, the hope of higher-categorical logic would be to provide an approach to mathematical foundations that is “free of connectives and axioms” in the traditional sense altogether, and instead is based purely on (higher) compositional principles. This would realize the previously described “meta-principle” across all dimensions at a foundational level. It would also circumvent the historical hope of finding a “complete axiomatization” of mathematics, and make many other constructions more natural (such as the hierarchical universes mentioned above). We will hopefully be able to clarify some of these points further in the rest of the note.
TL;DR. Here is maybe another (rather condensed) way of invoking the underlying ideas of the above discussion in the reader’s mind. Are, say, dependent types the ultimate “natural” language of mathematical reality? Very probably not. (Are they very useful? Yes.) So where do they come from? Maybe they rather directly derive from a (more) “natural” higher-categorical foundational language. We shall discuss the possible existence and study of the latter here.
Main principles
We now outline more concretely some of the central intuitions and ingredients that one would probably want to be part of higher-categorical foundations. We then also give an informal specification of a system that aims to fulfil these desiderata. The approach described here is strongly geometrically inspired: namely, we will work in the framework of diagrammatic higher category theory based on the theory of manifold diagrams. Indeed, this will mean that a lot of structure comes for free (such as the naturality of natural transformations and their higher counterparts, associativity coherences and their higher counterparts, and also certain operations for variable substitution).
Useful keep-in-minds
We start by briefly mentioning three (hopefully clarifying, but a bit vague) intuitions about the approach that we are thinking of here.
A first point of clarification concerns the topic of inference rules vs. assumptions (or axioms). It is often useful to distinguish between rules and assumptions: the “inference rules” govern the logical calculus we work with, but within this calculus we may “assume” additional statements in various ways (while useful, the distinction is sometimes arbitrary, and in the previous section we omitted to make it). For example, we may set ourselves up to work with the rules of simply typed lambda calculus and later decide that we want to assume further type and term constants. Such assumptions can be thought of as creating new objects and morphisms in the syntactic category, but the rules of the internal language of the category (in this case, the inference rules of minimal logic) stay the same. In contrast, from the perspective of higher-categorical foundations the concept of logical inference rules disappears essentially completely (well, there are still rules governing higher compositions as we will see shortly, but these are not the rules of a “traditional” logical framework)—the only thing we are left with to do is make assumptions. In particular, from the meta-perspective of higher-categorical foundations, traditional calculi of inference rules may then be studied and described by making appropriate assumptions about the structure of some type/category. (Aside #1: there is a useful distinction between small and big types; roughly, “small” types are constructed by making assumptions about their objects, while “big” types are constructed by considering the universe of all types with additional assumptions on those types; for instance, the category of finite groups, i.e. of finite sets with group structures, is big, but each individual finite group is small—the “rules-as-assumptions” intuition tends to play out slightly differently for big and small types.)
A second (related!) clarification concerns the topic of toposes vs. the prototypical topos. The prototypical 1-topos is the 1-category of sets. Its 1-categorical logic is that of structural set theory (which is essentially equivalent to what is commonly understood by the term “set theory”). However, there are many other 1-toposes (all of them can be obtained by looking at certain (reflective) subcategories of parametrized sets). What they share with the category of sets is a large part of their internal language (but not all of it; for instance, the category of sets has a natural number object while a general 1-topos need not have one). For the approach to higher-categorical foundations sketched here, we are not really interested in describing internal languages shared across higher toposes. Instead, we want to describe the compositional principles governing the collection of (arbitrarily high-dimensional) categories, the functors between them, etc.—and this is akin to a direct description of the workings of the prototypical higher topos of such higher categories. This makes our approach rather different from the usual “internal language of general toposes” approach. (Aside #2: a relevant intuition in this regard is that prototypical toposes are free cocompletions of the trivial object (for instance, the category of sets is the free cocompletion of the singleton set); freely making assumptions about our objects of study (thereby introducing them by giving their “constructors”) starting from a trivial assumption is, intuitively, parallel to a free cocompletion starting from a trivial object—note that in finite dimensions this intuition doesn’t work as nicely due to the presence of projected connectives.)
Thirdly, a brief comment on finite vs. infinite dimensions. While the higher-categorical foundations sketched here aim to describe categorical dimensions as $n \to \infty$, of course any assumption made in practice will have to be an object of finite dimension. This is an important point: while we must work in finite dimensions, there is no bound on how high those dimensions may be.
First ingredients
We next mention three basic ingredients for our higher-categorical foundations (really, we expect these to be part of any formalization of the idea of higher-categorical foundations). As remarked earlier, our approach will heavily rely on the idea of diagrammatic higher category theory via manifold diagrams. But there are also two other mathematical ingredients which we probably cannot dispense with. Together this yields the following list.

Types and boundaries. Based on everything said so far it will come as no surprise that we are looking for a sort of “higher-categorical type theory”. Slightly deviating from standard terminology, we will say the “type of a $k$-term” to mean the “category of a $k$-morphism” (where $k$ may be $0$); we will say the “boundary of a term” to refer to the terms that it maps between (i.e. its domain and codomain as a morphism)—this is always of the form “domain $\to$ codomain” (for $k = 0$, this is empty). We indicate types by writing $f : A$, and boundaries by writing $\partial f = a \to b$.

Dependencies on variables. Variables seem to be rather indispensable in mathematical foundations, and we will allow objects to depend on variables in several (constrained) ways. We will use turnstile “$\vdash$” notation (combined with functional “$()$” notation) to indicate dependencies on variables; i.e. we write $x : A \vdash f(x) : B$ or something similar to mean that $f(x)$ is a term of type $B$ depending on a variable term $x$ of type $A$. If there is no dependency, the left-hand side of the turnstile will be left empty.

Diagrams and coherences. If objects are of the appropriate type then they may be composed… in a higher-categorical fashion, namely, via manifold diagrams. In this way, higher-categorical coherence laws will be “built into” our approach, as they are witnessed geometrically by the isotopies of manifold diagrams. (For instance, natural transformations between functors will automatically satisfy the naturality law, as this is witnessed by the braid coherence in the universe type of higher categories.)
Informal specification
Keeping the previous intuitions and ingredients in mind, let us now sketch a specification of a higher-categorical type theory made in their image. The sketch is given in the ordered list below; importantly, note that later points in that list may affect earlier points (in a somewhat “mutually inductive” way). Though the sketch is rough around the edges, most of the points will hopefully make sense at least to those readers who have some familiarity with the definitions of manifold diagrams and their singularities.

Universes. There are basic types as follows: there is a $\lZ$-indexed collection of “universe types” $U_i$, such that $U_i : U_{i+1}$, i.e. $U_i$ is a term in the universe $U_{i+1}$.

Trivialities. Each universe $U_i$ has a “trivial” type, denoted by $\mathbf{1} : U_i$. (Recall, we are hunting for the “free cocompletion” of $\mathbf{1}$, so assuming this type as a primitive is, unsurprisingly, important.) The property characterizing $\mathbf{1}$ is that any term or variable in it is forced to be trivial.

Assumptions. If the boundary of a manifold $n$-diagram singularity can be consistently labelled with terms in a type $A$ that were already assumed or constructed, then we can assume a new term in $A$ which labels this singularity. The new term may depend on variables in a context, but (importantly) its boundary is not allowed to depend on any variables—the next example illustrates the rationale behind this constraint.
Example: if we first assume the 0-terms $\vdash A, B, C : U_1$ (with empty boundary), then next we may assume a 1-term $\vdash f : U_1$ with $\partial f = B \to C$, or a dependent 1-term $x : A \vdash g(x) : U_1$ with $\partial g(x) = B \to C$—but we cannot, say, first assume $x : A \vdash B'(x) : U_1$ and $x : A \vdash C'(x) : U_1$ and then assume a 1-term $x : A \vdash g'(x) : U_1$ with $\partial g'(x) = B'(x) \to C'(x)$ whose boundary is variable-dependent. Indeed, what the latter situation really asks for is the assumption of 1-terms $\vdash B', C' : U_1$ with $\partial B' = \partial C' = A \to U_0$ and a 2-term $\vdash g' : U_1$ with $\partial g' = B' \to C'$.
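For readers who think in ordinary dependent type theory, this repackaging is roughly the familiar currying of dependent boundary data into maps into a universe. A sketch in Lean, with names mirroring the example above:

```lean
-- Variable-dependent boundary data becomes a pair of maps into a universe:
-- x : A ⊢ B'(x) and x : A ⊢ C'(x) repackage as terms B', C' : A → Type.
variable (A : Type)
variable (B' C' : A → Type)

-- The would-be term g'(x) with boundary B'(x) → C'(x) then becomes a single
-- term one dimension higher, going from B' to C' componentwise:
variable (g' : (x : A) → B' x → C' x)
```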

Compositions. All assumed and constructed terms in a type may be composed: concretely, any manifold diagram that can be consistently labelled by existing terms describes such a composite—the resulting labelled diagram is called a “composite term” (in contrast, “assumed terms” in the sense of the previous point are “non-composite” terms). Types of composite terms are once more given by their boundaries as manifold diagrams. Labels of deepest strata in a composite term may again depend on variables.

Homs. Given two (potentially composite) $k$-terms $a, b : A$ with the same boundary in a type $A : U_i$, we may construct their “hom type” as a term $(a \to b) : U_i$ in the universe. It is important to emphasize that we can have $k > 0$, i.e. $a$ and $b$ may correspond to (higher) morphisms in the category $A$.

Substitutions. If a (potentially composite) term has dependencies on variables (say, $x : A \vdash f(x) : B$) then we can substitute for these variables. The formal process of substitution works singularity-by-singularity. For each singularity $f(x)$ we take its “product” with the substituting manifold diagram: that is, for $a : A$ a (potentially composite) term, the substitution $f(x := a)$ is defined to be the product of the diagram singularity $f$ and the diagram $a$, labelled such that boundaries are kept constant and such that internal labels are the labels of $a$ with “$f()$” applied to them. Note that dependencies in the resulting labelled diagram $f(x := a)$ automatically again satisfy that only deepest strata depend on variables.

Reflections. There are “reflections” between the terms of a type and the terms of its hom types (and the triviality $\mathbf{1}$ gets to play a special role). Namely, (potentially composite) terms $a : A$ are reflections of terms $a : U_i$ with $\partial a = \mathbf{1} \to A$. Similarly, terms $f : a \to b$ with $a, b : A$ are reflections of terms $f : A$ with $\partial f = a \to b$. The diagrams of terms and their reflections differ in general, as they must account for a change of boundaries (we implicitly claim that there is a reasonable way to accomplish this translation). Finally, there are also the usual reflections of universes, allowing us to reflect $A : U_i$ as $A : U_{i+1}$.

Computations. So far, there is no real way to “compute”, since to do so we need equality. We allow the assumption of (“definitional”) equality between non-universe terms in the same type and with the same boundary: that is, if, say, $\vdash a, b : A$ and $\partial a = \partial b$, then we may assume $\vdash a = b$. After this assumption, the terms $a$ and $b$ can in effect be used interchangeably for all purposes of the previous points.
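In proof-assistant terms, assumed computation rules of this kind play a role analogous to definitional equalities; a tiny (and merely suggestive) Lean analogue:

```lean
-- Two closed terms of the same type (and, trivially, the same boundary)...
def a : Nat := 2 + 3
def b : Nat := 5

-- ...that are definitionally equal: `rfl` type-checks because both sides
-- reduce to the same normal form, so `a` and `b` are interchangeable.
example : a = b := rfl
```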
There are several details which the above leaves very much unaddressed, for instance: reflections may need more syntax (somewhat similar to Tarski-style universes); the relation between variable dependencies and morphisms between types is rather weak (morphisms between types can be turned into variable dependencies and vice versa by using reflection, composition and computation, but maybe this relation should be “tighter”); we omitted mentioning “ambiguities up to isotopy” in both the substitution and computation principles above; we omitted discussing how to deal with multiple independent variable dependencies; etc. We shall, however, leave things at this incomplete state for now—in fact, this will suffice to already sketch an important example in the next section.
An example: $\Pi_{x: \mathbb{N}}F(x)$
We shall construct a $\Pi$-type for an $\lN$-indexed type family. The construction is given by the following sequence of assumptions.

Assume $\vdash \lN : U_1$ (its boundary is empty, i.e. $\lN$ is a 0-term in $U_1$).

Assume a 1-term $\vdash 0 : U_1$ with boundary $\partial 0 = \mathbf{1} \to \lN$ (equivalently, by reflection, this is the assumption $\vdash 0 : \lN$).

Assume a 1-term $\vdash s : U_1$ with boundary $\partial s = \lN \to \lN$.

Note we can now form composite 1-terms $s(s(…s(0)…)) : U_1$ which have boundary $\mathbf{1} \to \lN$ (and thus, by reflection, these are equivalently 0-terms in $\lN$).

Next, assume a 1-term $\vdash F : U_1$ with boundary $\partial F = \lN \to U_0$. This can be thought of as an “$\lN$-indexed type family”.

A canonical such type family (namely, the trivial one) is the following. Assume $\vdash \mathrm{const}_{\mathbf{1}} : U_1$ with $\partial \mathrm{const}_{\mathbf{1}} = \lN \to U_0$. Then assume, using computation and reflection, that $x : \lN \vdash \mathrm{const}_{\mathbf{1}} \circ x = \mathbf{1}$.
(Indeed, by reflection $x : \lN$ is equivalently a term $x : U_1$ with boundary $\mathbf{1} \to \lN$ and thus can be composed with $\mathrm{const}_{\mathbf{1}}$. By reflection again, the composite 1-term $\mathrm{const}_{\mathbf{1}} \circ x$ is a 0-term in $U_0$, and thus of the same type as $\mathbf{1} : U_0$. We can therefore definitionally equate the two.)

We can now introduce the “$\Pi$-type”. Assume $\vdash \Pi_\lN F : U_1$ and define it to be the hom type $\mathrm{const}_{\mathbf{1}} \to F$, that is, introduce the computation rule $\vdash \Pi_\lN F = (\mathrm{const}_{\mathbf{1}} \to F)$ (note that both are terms with the same boundary in $U_1$ and can thus be equated).
We leave further “playing around” with the resulting sort of $\Pi$-type to the reader—many of the expected behaviours of a $\Pi$-type can be recovered from the principles sketched in the previous section!
Note, crucially, that our construction of $\Pi_\lN F$ above requires access to 2-dimensional hom types (namely, the hom type between $\mathrm{const}_{\mathbf{1}}$ and $F$). Thus, the same construction cannot be given in a purely 1-categorical setting. Nonetheless, while the construction is not 1-dimensional, the assumptions that $\Pi_\lN F$ depends on (namely, an object $\lN : U_1$ and a morphism $F : U_1$) are 1-dimensional—thus, when “projecting” out higher dimensions, it makes sense to retain a $\Pi$-type that mirrors this behaviour. Indeed, $\Pi$-types are common in 1-categorical logics such as dependent type theory (and there are of course also abstract category-theoretic reasons for why $\Pi$-types are a particularly good idea).
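As a sanity check, here is a sketch (in Lean; all names are ours, introduced as hypothetical stand-ins for the assumptions above) of how the construction projects down to ordinary dependent type theory: the hom type $\mathrm{const}_{\mathbf{1}} \to F$ becomes the type of componentwise maps $\mathrm{const}_{\mathbf{1}}(x) \to F(x)$, which, since $\mathrm{const}_{\mathbf{1}}(x)$ is trivial, is just the type of sections of $F$.

```lean
-- ℕ with 0 and s, assumed as constants (mirroring the assumption steps above):
axiom N : Type
axiom zero : N
axiom s : N → N

-- An ℕ-indexed type family F, and the trivial family const₁:
axiom F : N → Type
def const₁ : N → Type := fun _ => Unit  -- const₁ ∘ x = 𝟏, definitionally

-- The "hom type const₁ → F", projected down: a family of maps const₁ x → F x.
def PiNF : Type := (x : N) → const₁ x → F x

-- Since const₁ x is trivial, an element of PiNF is exactly a section of F,
-- i.e. an element of the usual Π-type (x : N) → F x.
def toSection (p : PiNF) : (x : N) → F x := fun x => p x ()
```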
Epilogue
Let us end by giving a brief summary of the main features of our above discussion of higher-categorical foundations.

No “traditional rules of logic” are assumed in higher-categorical foundations; instead, the principles at work are of a compositional/geometric nature—this is attractive for the usual reasons in theoretical work, and it could be a rather appealing state for mathematical foundations.

Nonetheless, one may recover traditional logical syntax as the “projection” of constructions in higher dimensions; our motivating example for this was that of a $\Pi$-type, which is a common construct in traditional 1-categorical logic. In this way, higher-categorical foundations may reproduce classical lower-categorical foundations.

In contrast, classical “issues” (like Gödel’s incompleteness theorems) are almost all non-starters in higher-categorical foundations: we circumvent the historical hope of finding a “complete axiomatization” of mathematics altogether, and instead base ourselves only on the meta-perspective of higher category theory (governed by the aforementioned compositional principles).

This also shines new light on questions such as consistency or canonicity—both questions are now (to some extent) removed from mathematical foundations: they no longer apply to the underlying foundational system at work but to the assumptions that the practicing mathematician makes within it.
Again, let us be very clear that all of the above is somewhat speculative (albeit hopefully in an entertaining way). The base theory as sketched above lacks expressiveness and detail. Nonetheless, I feel like overall this could be a promising foundational approach, and I wanted to use this note to discuss and flesh out the promise of it.