Taking nonlogical concepts seriously

Tags: vision, language, philosophy

Published: 2024-10-11

Abstract

What is the right math to capture our concepts which don’t have formal definitions? Recent work in philosophy of language clarifies the relationship between logic and good reasoning, with consequences for science, applied math, and AI. In this post we will introduce logical expressivism, its underlying mathematics, and a vision for how it can support software with novel affordances for communication and integration.

1 Reasoning vs (purely) logical reasoning

This section is largely an exposition of material in (Hlobil and Brandom 2024).

1.1 Logical validity

Logic has historically been a model of good reasoning. Let X,Y,... \vdash Z denote that the premises X, Y, ... taken together are a good reason for Z. Alternatively: it’s incoherent to accept all of the premises while rejecting the conclusion. As part of codifying the \vdash (‘turnstile’) relation, one uses logical rules, such as A,B \vdash A\land B (the introduction rule for conjunction, for generic A and B) and A \land B \vdash A (one of the two elimination rules for conjunction). In trying to reason about whether some particular inference (e.g. p \vdash q \land r) is good, we also want to be able to assume things about nonlogical content. The space of possibilities for the nonlogical content of p,q,r is shown below:

|               | p | q | r |
|---------------|---|---|---|
| \top\ /\ \bot | ? | ? | ? |

We care about good reasoning for specific cases, such as when p is “1+1=2”, q is “The moon is made of cheese”, and r is “Paris is in France”. Here, we can represent our nonlogical assumptions in the following table:

|               | p    | q    | r    |
|---------------|------|------|------|
| \top\ /\ \bot | \top | \bot | \top |

Any logically valid inference, e.g. p \vee q,\ p \land r \vdash p, is reliable precisely because it does not presuppose its subject matter. Not presupposing anything about p,q,r here means it’s incoherent to accept the premises while rejecting the conclusion in any of the eight scenarios. Logical validity requires (willful) ignorance of the contents of nonlogical symbols, whereas good reasoning will also consider the nonlogical contents.
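
To make ‘in any of the eight scenarios’ concrete, here is a brute-force check in plain Julia (a throwaway sketch, not part of the implementation discussed later in this post):

valid = all(!((p || q) && (p && r) && !p)   # premises true while conclusion false?
            for p in (true, false), q in (true, false), r in (true, false))
@assert valid   # in none of the 8 scenarios do the premises hold and the conclusion fail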

This post will argue that traditional attitudes towards logic fail on both counts: they do in fact presuppose things about nonlogical contents, and their focus on logically-good reasoning (to the exclusion of good reasoning generally) is detrimental.

Sequents with multiple conclusions

We are used to sequents which have a single conclusion (and perhaps an empty conclusion, as in A,B\vdash, which signifies the incompatibility of A and B and is also sometimes written A,B\vdash \bot). However, we can perfectly well make sense of multiple conclusions using the same reading as earlier: “It’s incoherent to reject everything on the right-hand side while accepting everything on the left-hand side.” This gives commas a conjunctive flavor on the left-hand side and a disjunctive flavor on the right-hand side. For example, in propositional logic, we have {A,B \vdash C,D} iff {\ \vdash (A\land B) \rightarrow (C\vee D)}.
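
For atomic A, B, C, D, this equivalence is easy to verify by brute force over the sixteen valuations (again a throwaway plain-Julia sketch):

vals = Iterators.product(fill((true, false), 4)...)
# The sequent A, B ⊢ C, D fails only when both premises are accepted and
# both conclusions are rejected; the conditional is checked as a tautology.
sequent_ok   = all(!(a && b && !c && !d)        for (a, b, c, d) in vals)
tautology_ok = all(((a && b) ? (c || d) : true) for (a, b, c, d) in vals)
@assert sequent_ok == tautology_ok == true   # both hold, and they coincide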

1.2 Nonlogical content

Suppose we buy into relational rather than atomistic thinking: there is more to a claimable p than whether it is true or false; rather, the nature of p is bound up in its relationships to the other nonlogical contents. It may be the case that p is a good reason for q. The inference from “California is to the west of New York” to “New York is to the east of California” is good due to the meanings of ‘east’ and ‘west’, not logic.1 It’s clearly not a logically-good inference (terminology: call it a materially-good inference, marked with a squiggly turnstile, p\ {\mid\hspace{-.2em}\thicksim}\ q). The 16 possibilities for the nonlogical (i.e. material) content of p and q are depicted in the grid below, where we use + to mark a claimable as a “premise” and - to mark it as a “conclusion”.

| \checkmark\ /\ \times | 0 | p^- | q^- | p^-q^- |
|---|---|---|---|---|
| 0 | \overset{?}{\mid\hspace{-.2em}\thicksim} | \overset{?}{\mid\hspace{-.2em}\thicksim}\ p | \overset{?}{\mid\hspace{-.2em}\thicksim}\ q | \overset{?}{\mid\hspace{-.2em}\thicksim}\ p,q |
| p^+ | p\ \overset{?}{\mid\hspace{-.2em}\thicksim} | p\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ p | p\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ q | p\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ p,q |
| q^+ | q\ \overset{?}{\mid\hspace{-.2em}\thicksim} | q\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ p | q\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ q | q\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ p,q |
| p^+q^+ | p,q\ \overset{?}{\mid\hspace{-.2em}\thicksim} | p,q\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ p | p,q\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ q | p,q\ \overset{?}{\mid\hspace{-.2em}\thicksim}\ p,q |
Navigating the grid of possible implications

Suppose we wanted to find the cell corresponding to p\ {\mid\hspace{-.2em}\thicksim}\ p,q. We should look at the row labeled p^+ (“p is the premise”) and the column labeled p^-q^- (“p,q is the conclusion”).

As an example, let p be “It’s a cat” and q be “It has four legs”.2

| \checkmark\ /\ \times | 0 | p^- | q^- | p^-q^- |
|---|---|---|---|---|
| 0 | \checkmark | \boxed{\times} | \boxed{\times} | \boxed{\times} |
| p^+ | \boxed{\times} | \checkmark | \boxed{\checkmark} | \checkmark |
| q^+ | \boxed{\times} | \boxed{\times} | \checkmark | \checkmark |
| p^+q^+ | \boxed{\times} | \checkmark | \checkmark | \checkmark |

We represent p\ {\mid\hspace{-.2em}\thicksim}\ q above as the \boxed{\checkmark} in the second row, third column. We represent p, q\ {\not\mid\hspace{-0.2em}\sim} (i.e. that it is not incompatible for it to both be a cat and to have four legs) above as the \boxed{\times} in the fourth row, first column. Boxes are drawn around the ‘interesting’ cells in the table. The others are trivial3 or have overlap between the premises and conclusions. Because we interpret \Gamma\ {\mid\hspace{-.2em}\thicksim}\ \Delta as “It’s incoherent to reject everything in \Delta while accepting everything in \Gamma”, the overlap cells are checkmarks insofar as it’s incoherent to simultaneously accept and reject any particular proposition. This (optional, but common) assumption is called containment.

If our turnstile is to help us do bookkeeping for good reasoning, then suddenly it may seem wrong to force ourselves to ignore nonlogical content: p\vdash q is not provable in classical logic, so we miss out on good inferences like “It’s a cat” {\mid\hspace{-.2em}\thicksim} “It has four legs” by fixating on logically-good inferences.

Not only do we miss good inferences but we can also derive bad ones. Treating nonlogical content atomistically is tied up with monotonicity: the idea that adding more premises cannot remove any conclusions (p\vdash q therefore p,r\vdash q). For example, let r be “It lost a leg”. Clearly “It’s a cat”, “It lost a leg” {\not\mid\hspace{-0.2em}\sim} “It has four legs”. This is depicted in the following table (where we hold p^+ fixed, i.e. we’re talking about a cat).

| \checkmark\ /\ \times | 0 | q^- | r^- | q^-r^- |
|---|---|---|---|---|
| 0 | \checkmark | \boxed{\checkmark} | \boxed{\times} | \boxed{\checkmark} |
| q^+ | \boxed{\times} | \checkmark | \boxed{\times} | \checkmark |
| r^+ | \boxed{\times} | \boxed{\times} | \checkmark | \checkmark |
| q^+r^+ | \boxed{\checkmark} | \checkmark | \checkmark | \checkmark |

The \boxed{\times} in the third row, second column here is the interesting bit of nonmonotonicity: adding the premise r defeated a good inference.
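
This bookkeeping is mechanical. A minimal sketch in plain Julia (the encoding and names here are ours; a fuller implementation is described in the Math section below):

# The material frame over {q, r} with p⁺ ("It's a cat") held fixed, generated
# by three good implications plus containment and the empty-sequent convention.
generators = Set([
    (Set{Symbol}(), Set([:q])),        #       |~ q     (by default: four legs)
    (Set{Symbol}(), Set([:q, :r])),    #       |~ q, r  (can't reject both)
    (Set([:q, :r]), Set{Symbol}()),    # q, r  |~       (jointly incoherent)
])
isgood(Γ, Δ) = (Γ, Δ) in generators || !isempty(intersect(Γ, Δ)) ||
               (isempty(Γ) && isempty(Δ))

@assert isgood(Set{Symbol}(), Set([:q]))   #   |~ q : good
@assert !isgood(Set([:r]), Set([:q]))      # r |~ q : defeated by the extra premise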

Some logics forgo monotonicity, but almost all still presuppose something about nonlogical contents, namely that the inferential relationships between them satisfy cumulative transitivity: \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A and \Gamma, A \ {\mid\hspace{-.2em}\thicksim}\ B together entail \Gamma \ {\mid\hspace{-.2em}\thicksim}\ B. Not only is this a strong assumption about how purely nonlogical contents relate to each other; under mild additional assumptions it is enough to recover monotonicity.4

1.3 Logical expressivism

If what follows from what, among the material concepts, is actually a matter of the input data rather than something we logically compute, what is left over for logic to do? Consider the inferences possible when we regard b (“Bachelors are unmarried”) in isolation.

| \checkmark\ /\ \times | 0 | b^- |
|---|---|---|
| 0 | \checkmark | \boxed{\checkmark} |
| b^+ | \boxed{\times} | \checkmark |

Even without premises, one has a good reason for b (hence the \boxed{\checkmark}), and b is not self-incompatible (hence the \boxed{\times}). If we extend the set of things we can say to include \neg b, the overall {\mid\hspace{-.2em}\thicksim} relation can be mechanically determined:

| \checkmark\ /\ \times | 0 | b^- | \neg b^- | b^-\neg b^- |
|---|---|---|---|---|
| 0 | \checkmark | \boxed{\checkmark} | \boxed{\times} | \boxed{\checkmark} |
| b^+ | \boxed{\times} | \checkmark | \boxed{\times} | \checkmark |
| \neg b^+ | \boxed{\checkmark} | \boxed{\checkmark} | \checkmark | \checkmark |
| b^+\neg b^+ | \boxed{\checkmark} | \checkmark | \checkmark | \checkmark |

It would seem like there are six (interesting, i.e. non-containment) decisions to make, but our hand is forced if we accept a regulative principle5 for the use of negation, called incoherence-incompatibility: \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A iff \Gamma, \neg A\ {\mid\hspace{-.2em}\thicksim} (if A is a conclusion in some context \Gamma, then \neg A is incompatible with \Gamma, and conversely). This is appropriately generalized in the multi-conclusion setting (where a “context” includes a set \Gamma of premises on the left as well as a set \Delta of conclusion possibilities on the right) to \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A, \Delta iff \Gamma, \neg A\ {\mid\hspace{-.2em}\thicksim}\ \Delta. Negation has the functional role of swapping a claim from one side of the turnstile to the other.6 So, to take {\mid\hspace{-.2em}\thicksim}\ b, \neg b as an example, we can evaluate whether or not this is a good implication via moving the \neg b to the other side, obtaining b\ {\mid\hspace{-.2em}\thicksim}\ b, which was a good implication in the base vocabulary.
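
Here is a small self-contained check of that determination in plain Julia (the encoding of \neg b as :nb and the function names are ours):

# Base frame over {b}: every implication is good except  b |~  (b is not
# self-incompatible, and |~ b is good since "Bachelors are unmarried").
base_good(Γ, Δ) = !(Γ == Set([:b]) && isempty(Δ))

# Eliminate ¬b (encoded :nb) by swapping it across the turnstile:
#   Γ, ¬A |~ Δ  iff  Γ |~ A, Δ      and      Γ |~ ¬A, Δ  iff  Γ, A |~ Δ
function good(Γ, Δ)
    Γ′, Δ′ = setdiff(Γ, [:nb]), setdiff(Δ, [:nb])
    :nb in Γ && (Δ′ = union(Δ′, [:b]))
    :nb in Δ && (Γ′ = union(Γ′, [:b]))
    base_good(Γ′, Δ′)
end

@assert good(Set([:nb]), Set{Symbol}())     # ¬b |~      : ¬b is self-incompatible
@assert !good(Set{Symbol}(), Set([:nb]))    #    |~ ¬b   : no reason for ¬b
@assert good(Set{Symbol}(), Set([:b, :nb])) #    |~ b,¬b : can't reject both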

Let’s see another example, now extending our cat example with the sentence p \rightarrow q (“If it’s a cat, then it has four legs”). Here again we have no freedom in the goodness of inferences involving the sentence p \rightarrow q: it is fully determined by the goodness of inferences involving just p and q. The property we take to be constitutive of being a conditional is deduction detachment: \Gamma, A\ {\mid\hspace{-.2em}\thicksim}\ B, \Delta iff \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A \rightarrow B, \Delta.

| \checkmark\ /\ \times | 0 | p^- | q^- | p^-q^- | (p\rightarrow q)^- |
|---|---|---|---|---|---|
| 0 | \checkmark | \boxed{\times} | \boxed{\times} | \boxed{\times} | \boxed{\checkmark} |
| p^+ | \boxed{\times} | \checkmark | \boxed{\checkmark} | \checkmark | \boxed{\checkmark} |
| q^+ | \boxed{\times} | \boxed{\times} | \checkmark | \checkmark | \boxed{\checkmark} |
| p^+q^+ | \boxed{\times} | \checkmark | \checkmark | \checkmark | \boxed{\checkmark} |

(Note: only an abbreviated portion of the full 8×8 grid is shown.)
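
The new column can likewise be computed mechanically, in the same style as the negation check above (again a plain-Julia sketch with our own encoding):

# Base cat frame over {p, q} (the table in section 1.2): good iff the sides
# overlap, both are empty, or it is the material inference  p |~ q.
base_good(Γ, Δ) = !isempty(intersect(Γ, Δ)) || (isempty(Γ) && isempty(Δ)) ||
                  (Γ, Δ) == (Set([:p]), Set([:q]))

# Deduction detachment:  Γ |~ p→q, Δ   iff   Γ, p |~ q, Δ
impl_good(Γ, Δ) = base_good(union(Γ, [:p]), union(Δ, [:q]))

@assert impl_good(Set{Symbol}(), Set{Symbol}())  #      |~ p→q : good
@assert impl_good(Set([:q]), Set{Symbol}())      # q    |~ p→q : good
@assert impl_good(Set([:p, :q]), Set{Symbol}())  # p, q |~ p→q : good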

To summarize: logic does not determine what follows from what (logic shouldn’t presuppose anything about material contents and purely material inferential relations). Rather, logic gives us a vocabulary to describe those relationships among the material contents.7 In the beginning we may have had “It’s a cat”{\mid\hspace{-.2em}\thicksim}“It has four legs”, but we didn’t have a sentence saying that “If it’s a cat, then it has four legs” until we introduced logical vocabulary. This put the inference into the form of a sentence (which can then itself be denied or taken as a premise, if we wanted to talk about our reasoning and have the kind of dialogue necessary for us to reform it).

This is both intuitive and radical at once. A pervasive idea, called logicism about reasons, is that logic underwrites or constitutes what good reasoning is.8 This post has taken the converse order of explanation, called logical expressivism: ‘good reason’ is conceptually-prior to ‘logically-good reason’. We start with a notion of what follows from what simply due to the nonlogical meanings involved (this is contextual, such as “In the theology seminar” or “In a chemistry lab”), which is the data of a {\mid\hspace{-.2em}\thicksim} relation. Then, the functional role of logic is to make explicit such a {{\mid\hspace{-.2em}\thicksim}} relation. We can understand these relations both as something we create and something we discover, as described in the previous post which exposits (Brandom 2013).

Familiar logical rules need to be readjusted if they are to also grapple with arbitrary nonlogical content without presupposing its structure (see the Math section for the syntax and model theory that emerges from such an enterprise). However, the wide variety of existing logics can be thought of as being perfectly appropriate for explicitating specific {\mid\hspace{-.2em}\thicksim} relations with characteristic structure.9

There are clear benefits to knowing one is working in a domain with monotonic structure (such as being able to make judgments in a context-free way): for this reason, when we artificially (by fiat) give semantics to expressions, as in mathematics, we deliberately stipulate such structure. However, it is wishful thinking to impose this structure on the semantics of natural language, which is conceptually prior to our formal languages.

2 Broader consequences

2.1 Resolving tensions in how we think about science

The logicist idea that our judgments are informal approximations of covertly logical judgments is connected to the role of scientific judgments, which are prime candidates for the precise, indefeasible statements we were approximating. One understands what someone means by “Cats have four legs” by translating it appropriately into a scientific idiom via some context-independent definition. And, consequently, one hasn’t really said something if such a translation isn’t possible.

Scientific foundations

The overwhelming success of logic and mathematics can lead to blind spots in areas where these techniques are not effective: concepts governed by norms that resist formalization, such as ethical and aesthetic questions. Relegating these concepts to being subjective or arbitrary, as an explanation for why they lie outside the scope of formalization, can implicitly deny their significance. Scientific objectivity is thought to minimize the role of normative values, yet this is in tension with a begrudging acknowledgment that the development of science hinges essentially on what we consider to be ‘significant’, what factors are ‘proper’ to control for, and what metrics ‘ought’ to be measured. In fact, any scientific term, if pressed hard enough, will have a fuzzy boundary that must be socially negotiated. For example, a statistician may be frustrated that ‘outlier’ has no formal definition, despite the exclusion of outliers being essential to scientific practice.

Although the naive empiricist picture of scientific objectivity has been repeatedly challenged by philosophers of science (Sellars 1956; Feyerabend 1962; Hanson 1979; Kuhn 1997), the worldview and practices of scientists remain largely unchanged. However, logical expressivism offers a formal, computational model for how we can reason with nonlogical concepts. This allows us to acknowledge the (respectable, essential) role nonlogical concepts play in science.

‘All models are wrong, but some are useful’

This aphorism is at odds with the fact that science has a genuine authority, and scientific statements have genuine semantic content and are ‘about the world’. It’s tempting to informally cash out this aboutness in something like first-order models (an atomistic semantics: names pick out elements of some ‘set of objects in the world’, predicates pick out subsets, sentences are made true independently of each other). For example, the inference “It’s a cat”\vdash“It’s a mammal” is explained as a good inference because ⟦cat⟧ \subseteq ⟦mammal⟧. This can be seen as an explanation for why any concepts of ours (that actually refer to the world) have monotonic / transitive inferential relations. Any concept of “cat” which has a nonmonotonic inferential profile must be informal and therefore ‘just a way of talking’. It can’t be something serious, such as picking out the actual set of things which are cats. Thus, we can’t in fact have actually good reasons for these inferences, though we can imagine there are good reasons which we were approximating.
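
A toy illustration of that explanation (plain Julia; the data is invented): with subset denotations, entailment is automatically monotonic, and the defeasible cat inference is simply unavailable rather than defeasibly good:

# Atomistic semantics: predicates denote sets; Γ ⊢ c holds when the
# intersection of the premises' denotations is contained in ⟦c⟧.
cats        = Set(["tom", "mittens", "tripod"])
mammals     = union(cats, Set(["rex"]))
four_legged = Set(["tom", "mittens", "rex"])    # tripod lost a leg

entails(premises, conclusion) = issubset(reduce(intersect, premises), conclusion)

@assert entails([cats], mammals)               # cat ⊢ mammal
@assert entails([cats, four_legged], mammals)  # extra premises can never defeat it
@assert !entails([cats], four_legged)          # the defeasible inference is simply lost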

Deep problems follow from this worldview, such as the pessimistic induction about the failures of past scientific theories (how can we refer to electrons if future scientists inevitably will show that there exists nothing that has the properties we ascribe to ‘electron’?) and the paradox of analysis (if our good inferences are logically valid, how can they be contentful?). Some vocabulary ought to be thought of in this representational, logical way, but one’s general story for semantics must be broader.10

Black box science

Above I argued for a theory-laden (or norm-laden) picture of science; however, the role of machine learning in science is connected to a ‘theory-free’ model of science (Andrews 2023): a lack of responsibility in interrogating the assumptions that go into ML-produced results follows from a belief that ML instantiates the objective ideal of science. To the extent this view of science is promoted as a coherent ideal we strive for, there will be a drive towards handing scientific authority to ML, despite the fact that ML often obfuscates rather than eliminates the dependence of science on nonlogical concepts and normativity. Allowing such norms to be explicit and rationally criticized is an essential component of the practice of science, so there will be negative consequences if scientific culture loses this ability in exchange for models with high predictive success (success in a purely formal sense, since the informal interpretation of the predictions and the encoding of the informally-stated assumptions are precisely what make the predictions useful).

2.2 AI Safety

We can expect that AI will be increasingly used to generate code. This code will be inscrutably complex,11 which is risky if we are worried about malicious or unintentional harm potentially hidden in the code. One way to address this is via formal verification: one writes a specification (in a formal language, such as dependent type theory) and only accepts programs which provably meet the specification. However, something can seem like what we want (we have a proof that this program meets the specification of “this robot will make paperclips”) but turn out not to be what we want (e.g. the robot does a lot of evil things in order to acquire materials for the paperclips). This is the classic rule-following paradox, which is a problem if one takes a closed, atomistic approach to semantics (one will always realize one has ‘suppressed premises’ if the only kind of validity is logical validity). We cannot express an indefeasible intention in a finite number of logical expressions. Our material concepts do not adhere to the structural principles of monotonicity and transitivity which will be taken for granted by a logical encoding.

Logical expressivism supports an open-ended semantics (one can effectively change the inferential role of a concept by introducing new concepts). However, its semantics is less deterministic12 and harder to compute with than traditional logics; future research may show these to be merely technological issues which can be mitigated while retaining the expressive capabilities necessary for important material concepts, e.g. ethical concepts.

2.3 Applied mathematics and interoperable modeling

A domain expert approaches a mathematician for guidance,13 and the mathematician reframes what the expert was saying in a way that makes things click. However, if the mathematician is told a series of nonmonotonic or nontransitive “follows from” statements, it’s reasonable for the mathematician to respond “you’ve made some sort of mistake”, “your thinking must be a bit unclear”, or “you were suppressing premises”. This is because traditional logical reasoning is incapable of representing material concepts: the know-how of the expert can’t be put into such a formal system. However, we want to build technology that permits collaboration between domain experts of many different domains, not merely those who trade in concepts which are crystallized enough to be faithfully captured by definitions of the sort found in mathematics. Thus, acknowledging formal (but not logical) concepts is important for those who wish to work on the border of mathematics and application (particularly in domains studied by the social sciences and the humanities).

3 A vision for software

These ideas can be implemented in software, which could lead to an important tool for scientific communication, where both formal and informal content need to be simultaneously negotiated at scale in principled, transparent ways.

One approach to formalization in science is to go fully formal and logical, e.g. encoding all of our chemical concepts within dependent type theory (Bobbin et al. 2024). If we accept that our concepts will be material rather than logical, this will seem like a hopeless endeavor (though still very valuable in restricted contexts). On the opposite end of the spectrum, current scientific communication works best through face-to-face conversations, lectures, and scientific articles. Here we lack the formality to reason transparently and at scale.

Somewhere in between these extremes lie discourse graphs (Chan et al. 2024): these are representations which fix an ontology for discourse that includes claims, data, hypotheses, results, support, and opposition. However, the content of these claims is expressed in natural language, preventing any reliable mechanized analysis.

(Figure reproduced from Protocol Labs Research (2023).)

In a future post, I will outline a vision for expressing these building blocks of theories, data, claims, and supporting/opposition relationships in the language of logical expressivism (in terms of vocabularies, interpretations, and inferential roles, as described below). This approach will be compositional (common design patterns for vocabularies are expressible via universal properties) and formal, in the sense of the meaning of some piece of data (or some part of a theory or claim) being amenable to mechanized analysis via computation of its inferential role. This would make explicit how the meaning of the contents of our claims depends on our data, and dually how the meaning of our data depends on our theories.

4 Math

This section is largely an exposition of material in (Hlobil and Brandom 2024).

There is much to say about the mathematics underlying logical expressivism, and there is a lot of interesting future work to do. A future blog post will methodically go over this, but this section will just give a preview.

Vocabularies

An implication frame (or: vocabulary) is the data of a {\mid\hspace{-.2em}\thicksim} relation, i.e. a lexicon \mathcal{L} (a set of claimables: things that can be said) and a set of incoherences \mathbb{I}\subseteq \mathcal{P}(\mathcal{L}+\mathcal{L}): the good implications, i.e. the pairs (\Gamma, \Delta) of subsets of the lexicon for which it is incoherent to deny the conclusions \Delta while accepting the premises \Gamma.

Given any base vocabulary X=(\mathcal{L}_X,\mathbb{I}_X), we can introduce a logically-elaborated vocabulary whose lexicon includes \mathcal{L}_X but is also closed under \neg, \rightarrow, \land, \vee. The \mathbb{I} of the logically-elaborated relation is precisely \mathbb{I}_X when restricted to the nonlogical vocabulary (i.e. logical vocabulary must be harmonious), and the following sequent rules indicate how the goodness of implications including logical vocabulary is determined by implications without logical vocabulary.
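
The original post displays these rules as a figure of double-bar sequent rules; here is a reconstruction in ‘iff’ form (each rule can be read off from the table of implicational roles given further below, via the correspondence noted there):

  • L\neg:\ \Gamma, \neg A\ {\mid\hspace{-.2em}\thicksim}\ \Delta iff \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A, \Delta
  • R\neg:\ \Gamma\ {\mid\hspace{-.2em}\thicksim}\ \neg A, \Delta iff \Gamma, A\ {\mid\hspace{-.2em}\thicksim}\ \Delta
  • L\land:\ \Gamma, A\land B\ {\mid\hspace{-.2em}\thicksim}\ \Delta iff \Gamma, A, B\ {\mid\hspace{-.2em}\thicksim}\ \Delta
  • R\land:\ \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A\land B, \Delta iff \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A, \Delta and \Gamma\ {\mid\hspace{-.2em}\thicksim}\ B, \Delta and \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A, B, \Delta
  • L\vee:\ \Gamma, A\vee B\ {\mid\hspace{-.2em}\thicksim}\ \Delta iff \Gamma, A\ {\mid\hspace{-.2em}\thicksim}\ \Delta and \Gamma, B\ {\mid\hspace{-.2em}\thicksim}\ \Delta and \Gamma, A, B\ {\mid\hspace{-.2em}\thicksim}\ \Delta
  • R\vee:\ \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A\vee B, \Delta iff \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A, B, \Delta
  • L\rightarrow:\ \Gamma, A\rightarrow B\ {\mid\hspace{-.2em}\thicksim}\ \Delta iff \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A, \Delta and \Gamma, B\ {\mid\hspace{-.2em}\thicksim}\ \Delta and \Gamma, B\ {\mid\hspace{-.2em}\thicksim}\ A, \Delta
  • R\rightarrow:\ \Gamma\ {\mid\hspace{-.2em}\thicksim}\ A\rightarrow B, \Delta iff \Gamma, A\ {\mid\hspace{-.2em}\thicksim}\ B, \Delta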

The rules are bidirectional meta-inferences (each ‘iff’ above corresponds to a double inference bar in the original figure): thus they provide both introduction and elimination rules for each connective, used as a premise or as a conclusion. They are quantified over all possible sets \Gamma and \Delta. The connective-free side of each rule makes no reference to logical vocabulary, so the logical expressions can be seen as making explicit the implications of the non-logical vocabulary.

Vocabularies can be given a semantics based on implicational roles, where the role of a candidate implication a\ {\mid\hspace{-.2em}\thicksim}\ b is the set of contexts (\Gamma, \Delta) whose addition to it yields a good implication:

(a\ {\mid\hspace{-.2em}\thicksim}\ b)^*:=\{(\Gamma,\Delta)\ |\ \Gamma, a\ {\mid\hspace{-.2em}\thicksim}\ b,\Delta \in \mathbb{I}\}

The role of an implication can also be called its range of subjunctive robustness.

To see an example, first let’s remind ourselves of our q (“The cat has four legs”) and r (“The cat lost a leg”) example, a vocabulary which we’ll call C = (\mathcal{L}_C=\{q,r\}, \mathbb{I}_C):

| \mathbb{I}_C | 0 | q^- | r^- | q^-r^- |
|---|---|---|---|---|
| 0 | \checkmark | \boxed{\checkmark} | \boxed{\times} | \boxed{\checkmark} |
| q^+ | \boxed{\times} | \checkmark | \boxed{\times} | \checkmark |
| r^+ | \boxed{\times} | \boxed{\times} | \checkmark | \checkmark |
| q^+r^+ | \boxed{\checkmark} | \checkmark | \checkmark | \checkmark |

The role of q^- (i.e. {\mid\hspace{-.2em}\thicksim}\ q) in vocabulary C is the set of all 16 possible implications except for r\ {\mid\hspace{-.2em}\thicksim} and r\ {\mid\hspace{-.2em}\thicksim}\ q.
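
We can verify this by brute force in plain Julia (a self-contained sketch reusing the encoding from section 1.2; the names subsets, generators, and isgood are ours, not the package used later):

# All subsets of the lexicon, encoded as Sets of Symbols.
subsets(xs) = [Set(xs[filter(i -> (m >> (i - 1)) % 2 == 1, 1:length(xs))])
               for m in 0:(2^length(xs) - 1)]

generators = Set([(Set{Symbol}(), Set([:q])), (Set{Symbol}(), Set([:q, :r])),
                  (Set([:q, :r]), Set{Symbol}())])
isgood(Γ, Δ) = (Γ, Δ) in generators || !isempty(intersect(Γ, Δ)) ||
               (isempty(Γ) && isempty(Δ))

# Role of |~ q: the contexts (Γ, Δ) such that Γ |~ q, Δ is a good implication.
role = [(Γ, Δ) for Γ in subsets([:q, :r]) for Δ in subsets([:q, :r])
        if isgood(Γ, union(Set([:q]), Δ))]
@assert length(role) == 14   # all 16 contexts except (r |~) and (r |~ q)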

The role of a set of implications is defined as the intersection of the roles of each element:

{\Gamma^*:=\bigcap_{\gamma \in \Gamma} \gamma^*}

The power set of the set of candidate implications on a given lexicon has a quantale structure, with the \otimes operation:

\Gamma\otimes \Delta := \{\gamma \cup \delta\ |\ (\gamma,\delta) \in \Gamma \times \Delta\}

Roles are naturally combined via a dual operation, \sqcup:

r_1\sqcup r_2 := (r_1^* \otimes r_2^*)^*

A pair of roles (a premisory role and a conclusory role) is called a conceptual content: to see why pairs of roles are important, consider how the sequent rules for logical connectives are quite different for introducing a logically complex term on the left vs the right of the turnstile; in general, the inferential role of a sentence is different depending on whether it is being used as a premise or a conclusion. Any sentence a \in \mathcal{L} has its premisory and conclusory roles as a canonical conceptual content, written in typewriter font:

\texttt{a} := ((a\ {\mid\hspace{-.2em}\thicksim})^* ,( {\mid\hspace{-.2em}\thicksim}\ a)^*)

Below are recursive semantic formulas for logical connectives: given arbitrary conceptual contents {\texttt{A}=(a^+,a^-)} and {\texttt{B}=(b^+,b^-)}, we define the premisory and conclusory roles of logical combinations of \texttt{A} and \texttt{B}. Because \sqcup is an operation that depends on all of \mathbb{I}, this is both a compositional and a holistic semantics.

| Connective | Premisory role | Conclusory role |
|---|---|---|
| \neg \texttt{A} | a^- | a^+ |
| \texttt{A} \land \texttt{B} | a^+\sqcup b^+ | a^-\cap b^- \cap (a^-\sqcup b^-) |
| \texttt{A} \vee \texttt{B} | a^+\cap b^+ \cap (a^+\sqcup b^+) | a^-\sqcup b^- |
| \texttt{A} \rightarrow \texttt{B} | a^-\cap b^+ \cap (a^-\sqcup b^+) | a^+\sqcup b^- |

Note: each cell in this table corresponds directly to a sequent rule further above, where combination of sentences within a sequent corresponds to \sqcup, and multiple sequents are combined via \cap.

There are other operators we can define on conceptual contents, such as {\texttt{A}^+:=(a^+,a^+)} and {\texttt{A} \sqcup \texttt{B}:=(a^+\sqcup b^+, a^-\sqcup b^-)}.

Given two sets {\texttt{G}=\{\texttt{g}_1,...,\texttt{g}_m\}}, {\texttt{D}=\{\texttt{d}_1,...,\texttt{d}_n\}} of conceptual contents, we can define content entailment:

\texttt{G}\ {\mid\hspace{-0.1em}\mid\hspace{-0.2em}\sim}\ \texttt{D} := \mathbb{I}^* \subseteq \texttt{g}_1^+ \sqcup ... \sqcup \texttt{g}_m^+ \sqcup \texttt{d}_1^-\sqcup... \sqcup \texttt{d}_n^-

A preliminary computational implementation (in Julia, available on GitHub) supports declaring implication frames, computing conceptual roles / contents, and computing the logical combinations and entailments of these contents. This can be used to demonstrate that this is a supraclassical logic:14 this semantics validates all the tautologies of classical logic while also giving one the ability to reason about the entailment of nonlogical contents (or combinations of both logical and nonlogical contents).

C = ImpFrame([[]=>[:q], []=>[:q,:r], [:q,:r]=>[]], [:q,:r]; containment=true)
𝐪, 𝐫 = contents(C)            # canonical contents for the bearers q and r
∅ = nothing                   # empty set of contents
@test ∅ ⊩ (((𝐪 → 𝐫) → 𝐪) → 𝐪) # Peirce's law
@test ∅ ⊮ ((𝐪 → 𝐫) → 𝐪)       # not Peirce's law

Interpretations

We can interpret a lexicon in another vocabulary. An interpretation function {\lbrack\hspace{-0.15em}\lbrack{-} \rbrack\hspace{-0.15em}\rbrack: A \rightarrow B} between vocabularies assigns conceptual contents in B to sentences of A. We often want the interpretation function to be compatible with the structure of the domain and codomain: it is sound if for any candidate implication in A, we have {\Gamma\ {\mid\hspace{-.2em}\thicksim}_A\ \Delta} iff {\lbrack\hspace{-0.15em}\lbrack{\Gamma} \rbrack\hspace{-0.15em}\rbrack \ {\mid\hspace{-0.1em}\mid\hspace{-0.2em}\sim}_B\ \lbrack\hspace{-0.15em}\lbrack{\Delta} \rbrack\hspace{-0.15em}\rbrack}.

To see an example of interpretations, let’s first define a new vocabulary S with \mathcal{L}_S=\{x,y,z\}.

  • x: “It started in state s”
  • y: “It’s presently in state s”
  • z: “There has been a net change in state”.
| \mathbb{I}_S | 0 | x^- | y^- | z^- | x^-y^- | x^-z^- | y^-z^- | x^-y^-z^- |
|---|---|---|---|---|---|---|---|---|
| 0 | \checkmark | \boxed{\times} | \boxed{\times} | \boxed{\times} | \boxed{\times} | \boxed{\times} | \boxed{\times} | \boxed{\times} |
| x^+ | \boxed{\times} | \checkmark | \boxed{\checkmark} | \boxed{\times} | \checkmark | \checkmark | \boxed{\checkmark} | \checkmark |
| y^+ | \boxed{\times} | \boxed{\times} | \checkmark | \boxed{\times} | \checkmark | \boxed{\times} | \checkmark | \checkmark |
| z^+ | \boxed{\times} | \boxed{\times} | \boxed{\times} | \checkmark | \boxed{\times} | \checkmark | \checkmark | \checkmark |
| x^+y^+ | \boxed{\times} | \checkmark | \checkmark | \boxed{\times} | \checkmark | \checkmark | \checkmark | \checkmark |
| x^+z^+ | \boxed{\times} | \checkmark | \boxed{\times} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |
| y^+z^+ | \boxed{\times} | \boxed{\times} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |
| x^+y^+z^+ | \boxed{\checkmark} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |

S claims it is part of our concept of ‘state’ that something stays in a given state unless its state has changed (hence there is a similar non-monotonicity to the one in C, but now with x\ {\mid\hspace{-.2em}\thicksim}\ y and x,z\ {\not\mid\hspace{-0.2em}\sim}\ y). We can understand what someone is saying by q or r in terms of interpreting these claimables in S. The interpretation function q\mapsto \texttt{x}^+ \sqcup \texttt{y} and r\mapsto \texttt{x}^+ \sqcup \texttt{z} is sound. We can offer a full account of what we meant by our talk about cats and legs in terms of the concepts of states and change.

We could also start with a lexicon \mathcal{L}_D=\{x,y,z\} and the interpretation function q\mapsto \texttt{x} \rightarrow \texttt{y} and r\mapsto \texttt{x} \rightarrow \texttt{z}; we can compute what structure \mathbb{I}_{D} must have in order for us to see \mathbb{I}_C as generated by the interpretation of q,r in D. Below, \boxed{?} means that it doesn’t matter whether that implication is in \mathbb{I}_D:

| \mathbb{I}_{D} | 0 | x^- | y^- | z^- | x^-y^- | x^-z^- | y^-z^- | x^-y^-z^- |
|---|---|---|---|---|---|---|---|---|
| 0 | \checkmark | \boxed{\checkmark} | \boxed{?} | \boxed{?} | \boxed{?} | \boxed{?} | \boxed{?} | \boxed{?} |
| x^+ | \boxed{\times} | \checkmark | \boxed{\checkmark} | \boxed{\times} | \checkmark | \checkmark | \boxed{\checkmark} | \checkmark |
| y^+ | \boxed{\times} | \boxed{\checkmark} | \checkmark | \boxed{?} | \checkmark | \boxed{?} | \checkmark | \checkmark |
| z^+ | \boxed{?} | \boxed{\checkmark} | \boxed{?} | \checkmark | \boxed{?} | \checkmark | \checkmark | \checkmark |
| x^+y^+ | \boxed{?} | \checkmark | \checkmark | \boxed{\times} | \checkmark | \checkmark | \checkmark | \checkmark |
| x^+z^+ | \boxed{?} | \checkmark | \boxed{\times} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |
| y^+z^+ | \boxed{\checkmark} | \boxed{\checkmark} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |
| x^+y^+z^+ | \boxed{?} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |

We can do the same with {q\mapsto \texttt{x} \land \texttt{y}} and {r\mapsto \texttt{x} \land \texttt{z}}.

| \mathbb{I}_D | 0 | x^- | y^- | z^- | x^-y^- | x^-z^- | y^-z^- | x^-y^-z^- |
|---|---|---|---|---|---|---|---|---|
| 0 | \checkmark | \boxed{\checkmark} | \boxed{\checkmark} | \boxed{\times} | \boxed{\checkmark} | \boxed{\checkmark} | \boxed{\checkmark} | \boxed{\checkmark} |
| x^+ | \boxed{?} | \checkmark | \boxed{?} | \boxed{?} | \checkmark | \checkmark | \boxed{?} | \checkmark |
| y^+ | \boxed{?} | \boxed{?} | \checkmark | \boxed{?} | \checkmark | \boxed{?} | \checkmark | \checkmark |
| z^+ | \boxed{?} | \boxed{?} | \boxed{?} | \checkmark | \boxed{?} | \checkmark | \checkmark | \checkmark |
| x^+y^+ | \boxed{\times} | \checkmark | \checkmark | \boxed{\times} | \checkmark | \checkmark | \checkmark | \checkmark |
| x^+z^+ | \boxed{\times} | \checkmark | \boxed{\times} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |
| y^+z^+ | \boxed{\times} | \boxed{\times} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |
| x^+y^+z^+ | \boxed{\checkmark} | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark | \checkmark |

Interpretation functions can be used to generate vocabularies via a sound_dom function in our software implementation, which constructs the domain \mathbb{I} of an interpretation function under the assumption that it is sound. The following code shows how we recover our earlier vocabulary C via interpretation functions into the vocabularies S and D above.

S = ImpFrame([[:x]=>[:y], [:x]=>[:y,:z], [:x,:y,:z]=>[]]; containment=true)
𝐱, 𝐲, 𝐳 = contents(S)
𝐱⁺ = Content(prem(𝐱), prem(𝐱))     # the ⁺ operator: premisory role in both slots
f = Interp(q = 𝐱⁺ ⊔ 𝐲, r = 𝐱⁺ ⊔ 𝐳)
@test sound_dom(f) == C            # recovers the cat/legs vocabulary C

D = ImpFrame([[]=>[:x], []=>[:y], []=>[:x,:y], []=>[:x,:z],
              []=>[:y,:z],[]=>[:x,:y,:z],[:x,:y,:z]=>[]]; containment=true)
𝐱, 𝐲, 𝐳 = contents(D)
f = Interp(q = 𝐱 ∧ 𝐲, r = 𝐱 ∧ 𝐳)
@test sound_dom(f) == C

Category theory

What role can category theory play in developing or clarifying the above notions of vocabulary, implicational role, conceptual content, interpretation function, and logical elaboration? A lot of progress has been made towards understanding various flavors of categories of vocabularies (both with ‘simple’ maps, which send sentences to sentences, and with interpretation functions as morphisms), with intuitive constructions emerging from (co)limits. The semantics of implicational roles is closely related to Girard’s phase semantics for linear logic. Describing this preliminary work is also left to a sequel post.

5 References

Andrews, Mel. 2023. “The Devil in the Data: Machine Learning & the Theory-Free Ideal.”
Bobbin, Maxwell P, Samiha Sharlin, Parivash Feyzishendi, An Hong Dang, Catherine M Wraback, and Tyler R Josephson. 2024. “Formalizing Chemical Physics Using the Lean Theorem Prover.” Digital Discovery 3 (2): 264–80.
Brandom, Robert. 2013. “A Hegelian Model of Legal Concept Determination.” In Pragmatism, Law, and Language. Routledge.
Chan, Joel, Matthew Akamatsu, David Vargas, Lukas Kawerau, and Michael Gartner. 2024. “Steps Towards an Infrastructure for Scholarly Synthesis.” arXiv Preprint arXiv:2407.20666.
Feyerabend, Paul K. 1962. “Explanation, Reduction, and Empiricism.”
Fine, Kit. 2017. “A Theory of Truthmaker Content I: Conjunction, Disjunction and Negation.” Journal of Philosophical Logic 46: 625–74.
Hanson, Norwood Russell. 1979. Patterns of Discovery: An Inquiry into the Conceptual Foundations of Science. CUP Archive.
Hlobil, Ulf, and Robert Brandom. 2024. Reasons for Logic, Logic for Reasons: Pragmatics, Semantics, and Conceptual Roles. Taylor & Francis.
Kuhn, Thomas S. 1997. The Structure of Scientific Revolutions. University of Chicago Press.
Murzi, Julian, and Florian Steinberger. 2017. “Inferentialism.” In A Companion to the Philosophy of Language, 197–224.
Protocol Labs Research. 2023. “Discourse Graphs and the Future of Science.” Protocol Labs Research. https://research.protocol.ai/blog/2023/discourse-graphs-and-the-future-of-science/.
Sellars, Wilfrid. 1956. “Empiricism and the Philosophy of Mind.”

Footnotes

  1. The goodness of this inference is conceptually prior to any attempts to come up with formal definitions of ‘east’ and ‘west’ such that we can recover the inferences we already knew to be good.↩︎

  2. This is an inference you must master in order to grasp the concept of ‘cat’. If you were to teach a child about cats, you’d say “Cats have four legs”; our theory of semantics should allow making sense of this statement as good reasoning, even if in certain contexts it might be proper to say “This cat has fewer than four legs”.↩︎

  3. Not much hinges on whether the empty sequent, \varnothing\ {\mid\hspace{-.2em}\thicksim}\ \varnothing, is counted as a good implication or not. Some math works out cleaner if we take it to be one, so we do this by default.↩︎

  4. Logic shouldn’t presume any structure of nonlogical content, even cautious monotonicity, i.e. from \Gamma \ {\mid\hspace{-.2em}\thicksim}\ A and \Gamma \ {\mid\hspace{-.2em}\thicksim}\ B one is tempted to say \Gamma, A \ {\mid\hspace{-.2em}\thicksim}\ B (the inference to B might be infirmed by some arbitrary claimable, but at least it should be undefeated by anything which was already a consequence of \Gamma). By rejecting even cautious monotonicity, this becomes more radical than most approaches to nonmonotonic logic; however, there are plausible cases where making something explicit (taking a consequence of \Gamma, which was only implicit in \Gamma, and turning it into an explicit premise) changes consequences that are properly drawn from \Gamma.↩︎

  5. We might think of logical symbols as arbitrary, thinking one can define a logic where \neg plays the role of conjunction, e.g. A \neg B \vdash A. However, in order to actually be a negation operator (in order for someone to mean negation by their operator), one doesn’t have complete freedom. There is a responsibility to pay to the past usage of \neg, as described in a previous post.↩︎

  6. The sides of the turnstile represent acceptance and rejection according to \Gamma\ {\mid\hspace{-.2em}\thicksim}\ \Delta meaning “Incoherent to accept all of \Gamma while rejecting all of \Delta”, so the duality of negation comes from the duality of acceptance and rejection.↩︎

  7. Thoroughgoing relational thinking, a semantic theory called inferentialism (Murzi and Steinberger 2017), would say the conceptual content of something like “cat” or “electron” or “ought” is exhaustively determined by its inferential relations to other such contents.↩︎

  8. We have a desire to see the cat inference as good because of the logical inference “It is a cat”, “All cats have four legs” \vdash “It has four legs”. Under this picture, the original ‘good’ inference was only superficially good; it was technically a bad inference because of the missing premise. But in that case, since “All cats have four legs” must hold indefeasibly, a cat that loses a leg is no longer a cat. Recognizing this as bad, we start iteratively adding premises ad infinitum (“It has not lost a leg”, “It didn’t grow a leg after exposure to radiation”, “It was born with four legs”, etc.). One who thinks logic underwrites good reasoning is committed to there being some final list of premises for which the conclusion follows. Taking this seriously, none of our inferences (even our best scientific ones, which we accept may be shown wrong by future science) are good inferences. We must wonder what notion of ‘good reasoning’ logic as such is codifying at all.↩︎

  9. In (Hlobil and Brandom 2024), various flavors of propositional logic (classical, intuitionistic, LP, K3, ST, TS) are shown to make explicit particular types of {\mid\hspace{-.2em}\thicksim} relations.↩︎

  10. Logical expressivists have a holistic theory of how language and the (objective) world connect. Chapter 4 of (Hlobil and Brandom 2024) describes an isomorphism between the pragmatist idiom of “A entails B because it is normatively out of bounds to reject B while accepting A” and a semantic, representationalist idiom of “A entails B because it is impossible for the world to make A true while making B false.” This latter kind of model comes from Kit Fine’s truthmaker semantics (Fine 2017), which generalizes possible-world semantics in order to handle hyperintensional meaning (we need something finer-grained in order to distinguish “1+1=2” and “\pi is irrational”, which are true in all the same possible worlds).↩︎

  11. At Topos, we believe this is not necessary. AI should be generating structured models (which are intelligible and composable). If executable code is needed, we can give a computational semantics to an entire class of models, e.g. mass-action kinetics for Petri nets.↩︎

  12. See the Harman point: the perspective shift of logical expressivism means that logic does not tell us how to update our beliefs in the face of contradiction; this directionality truly comes from the material inferential relations, which we must supply. Logical expressivism shows that the consequence relation is deterministic (there is a unique consequence set to compute from any set of premises) only under very specific conditions.↩︎

  13. There are many wonderful examples in David Spivak’s “What are we tracking?” talk.↩︎

  14. Caveat: the base vocabulary must satisfy containment for this property to hold. When the only good implications in the base vocabulary are those forced by containment, one precisely recovers the classical logic consequence relation.↩︎
