Christine Ladd-Franklin

Christine Ladd-Franklin (1847-1930) was a philosopher, logician, and psychologist. A student of C. S. Peirce, she made numerous contributions to psychology and mathematical logic despite a tenuous attachment to professional academia. Misogyny being what it was (and is) in academic hiring, Ladd-Franklin never secured a reliable academic post:

Laurel Furumoto notes that "her inability to secure a regular academic position was a predictable consequence, in that time period, of her decision to marry" (Furumoto, 1994, p. 97). Given that "she never held a regular academic appointment" (Furumoto, 1994, p. 93), her long and active academic career is all the more remarkable. [source]

Plenty of interesting biographical work on Ladd-Franklin exists. I also recently heard Jessica Gordon-Roth speak on the implicit sexism in the (often lengthy) biographical preamble that tends to precede discussion of women in philosophy, so I won’t rehash Ladd-Franklin’s life story here. Recently I read her paper “Epistemology for the Logician” [link]. In it she claims that philosophy, unlike science, does not make steady progress, and she argues that this is because science has a “common groundwork” (what she later calls explicit primitives) that commands the assent of everyone involved. Here’s a reconstruction of the argument:

  1. If X is a branch of knowledge, then the knowledge produced by X is cumulative.

  2. If the knowledge produced by some branch of knowledge is cumulative, then there is some set of explicit primitives that are the common ground among practitioners of that branch.

  3. So, either philosophy is not a branch of knowledge or there is a set of explicit primitives that are philosophy’s common ground.

Since she wants to reject the first disjunct of the conclusion, Ladd-Franklin sets out to describe what the set of explicit primitives must be like, assuming they exist. For now I’m less interested in the content of the common ground than I am in the claim that philosophy, or science for that matter, must have one.

It seems to me that Ladd-Franklin is on to the same thing expressed by Peter Spirtes, Clark Glymour, and Richard Scheines in Causation, Prediction and Search when they distinguish between Platonic and Euclidean analyses of causation:

One approach to clarifying the notion of causation—the philosophers’ approach ever since Plato—is to try to define “causation” in other terms, to provide necessary and sufficient and noncircular conditions for one thing, or feature or event or circumstance, to cause another, the way one can define “bachelor” as “unmarried adult male human.” Another approach to the same problem—the mathematician’s approach ever since Euclid—is to provide axioms that use the notion of causation without defining it, and to investigate the necessary consequences of those assumptions.

Ladd-Franklin is calling for a set of philosophical axioms or, at the very least, a set of methodological axioms for philosophy. It’s clear that Spirtes et al. heartily endorse the Euclidean approach over the Platonic one. But if we understand logic, and philosophy more generally, as the normative study of methods—methods of inquiry, methods of political organization, methods of living—why expect that study to have a static method? In particular, fixing a metaphilosophical (or metametaphilosophical, or…) method once and for all would be to answer in advance the very questions philosophy poses.

This isn’t to say that philosophical subfields such as ethics, politics, or the metaphysics of causation can’t do the thing Ladd-Franklin wants them to do. They clearly can! It’s just that taking Ladd-Franklin’s proposal as a general aim for philosophy as such seems wrong to me.

Regarding the argument reconstruction above, I’d want to deny premise (1), though only on the grounds that I think the implied universal quantifier doesn’t scope over philosophy. Or, rather, that philosophy isn’t a branch of knowledge in the same way that physics, biology, or ethics are. Philosophy is the trunk. The evidence of its cumulative development of knowledge isn’t in the advancement of better theories or more accurate predictions; it is in the proliferation of its branches.

- - -


"If you're happy and you know it, clap your hands!"

How should we translate this into a logical sentence in order to capture everything that’s going on here?

First, we could just translate it as “P,” but that seems way too easy. We can at least capture the conditional and conjunction structure like this:

P = “You’re happy.” | Q = “You know it.” | R = “Clap your hands.”

(P ∧ Q) → R

So far so good. But there are at least three things in those atoms that we ought to try to find logical vocabulary for. First, let’s consider the epistemic modality. To represent that a subject S knows X, we can introduce an epistemic modal operator, K. We’re not going to worry about multi-agent modeling, so we just need one operator.

p = “You’re happy.” | Kp = “You know (that you’re happy).” | r = “Clap your hands.”

(p ∧ Kp) → r

We could also represent the relationship of the object (‘you’) and its states or properties (‘being happy’) with first-order predication.

y = you | H = “…is happy.” | Kϕ = “S knows that ϕ.” | r = “Clap your hands.”

(Hy ∧ KHy) → r

Okay, that’s two of the three features of the sentence whose logical structure seems obviously worth capturing. Last, there’s the imperative: “Clap your hands.” We could just leave it as is and include an imperative operator (‘!’) to indicate that it’s a non-propositional subformula. It would be better, I think, to incorporate the first-order structure we have so far. In that case, we can think of imperatives as being issued to specific objects defined in our first-order language. We might as well do the same for our epistemic modal operator:

y = you | H = “…is happy.” | Kˢϕ = “S knows that ϕ.” | !ˢϕ = “S, do ϕ!” | C = “Clap …’s hands.”

(Hy ∧ KʸHy) → !ʸCy

This is, I think, the best translation we can give without getting into worries about second-person indexical pronouns. It’s interesting to me that this sentence, which is ostensibly so simple that we teach children to sing it, requires a non-propositional, first-order epistemic modal logic in order to capture its structure.
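To make the final translation concrete, here’s a toy Python sketch that evaluates the antecedent Hy ∧ KʸHy in a two-fact model and reports whether the imperative !ʸCy is in force. The `Model` class and its field names are my own illustration of the idea, not any standard machinery:

```python
from dataclasses import dataclass

@dataclass
class Model:
    happy: bool        # Hy: y is happy
    knows_happy: bool  # K^y Hy: y knows that y is happy

def command_issued(m: Model) -> bool:
    """The imperative !^y Cy is 'in force' exactly when the
    antecedent Hy ∧ K^y Hy holds in the model."""
    return m.happy and m.knows_happy

# y is happy but unaware of it: no command to clap.
print(command_issued(Model(happy=True, knows_happy=False)))  # False
# y is happy and knows it: clap your hands!
print(command_issued(Model(happy=True, knows_happy=True)))   # True
```

The design choice worth noting: the conditional’s consequent is an imperative, so the model doesn’t assign it a truth value; it only determines whether the command is triggered.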

♩♫ 𝅘𝅥𝅯 𝄽 ♬ 𝄽♪ 𝄽 ♫ 𝅘𝅥𝅯 ♪♬ 𝅘𝅥𝅯 ♬ ♪ ♫ ♩ ♪ ♫ ♬ ♫ 𝅘𝅥𝅯 ♪ 𝄽 ♬ 𝅘𝅥𝅯 ♬ ♪ ♫ ♫ 𝅘𝅥𝅯 𝄽 ♪♬ 𝅘𝅥𝅯 ♬ ♪ ♫

(Hy∧KʸHy) → !ʸCy, (Hy∧KʸHy) → !ʸCy, (Hy∧KʸHy) → Fy, (Hy∧KʸHy) → !ʸCy

(Hy∧KʸHy) → !ʸSy, (Hy∧KʸHy) → !ʸSy, (Hy∧KʸHy) → Fy, (Hy∧KʸHy) → !ʸSy

(Hy∧KʸHy) → !ʸRy, (Hy∧KʸHy) → !ʸRy, (Hy∧KʸHy) → Fy, (Hy∧KʸHy) → !ʸRy

(Hy∧KʸHy) → !ʸ(Cy∧Sy∧Ry), (Hy∧KʸHy) → !ʸ(Cy∧Sy∧Ry), (Hy∧KʸHy) → Fy,
(Hy∧KʸHy) → !ʸ(Cy∧Sy∧Ry)

♩♫ 𝅘𝅥𝅯 𝄽 ♬ 𝄽♪ 𝄽 ♫ 𝅘𝅥𝅯 ♪♬ 𝅘𝅥𝅯 ♬ ♪ ♫ ♩ ♪ ♫ ♬ ♫ 𝅘𝅥𝅯 ♪ 𝄽 ♬ 𝅘𝅥𝅯 ♬ ♪ ♫ ♫ 𝅘𝅥𝅯 𝄽 ♪♬ 𝅘𝅥𝅯 ♬ ♪ ♫

Designated Values

In logic, a designated value is a value that sentences can take and that is preserved by entailment. In other words, if we define a logic L=⟨P,F,⊢⟩ with atomic sentences in P and operations in F, then the entailment relation ⊢ takes (a set of) sentences to another (set of) sentence(s) in the language subject to the following condition: if the input sentences all have the designated value, then the output sentence(s) will too. When the designated value isn’t “preserved” in this way, the entailment relation doesn’t obtain between the sentences.

Typically, we think of the designated value as “Truth” and, in the case of bivalent logics with only two possible values, the other value is “Falsity.” Falsity is undesignated so it isn’t preserved by the entailment relation. But Truth isn’t the only thing that we might want to preserve across some entailment-like relation. There are a few ways we might extend this idea.
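For small logics, the preservation condition can be checked mechanically. Here’s a hedged Python sketch using strong-Kleene three-valued tables, with the set of designated values left as a parameter. The function names and the encoding of values as floats are assumptions of the sketch, not standard notation:

```python
from itertools import product

T, N, F = 1.0, 0.5, 0.0  # true, neither, false

def impl(a, b):
    # Strong-Kleene material conditional
    return max(1.0 - a, b)

def entails(premises, conclusion, designated, atoms=("p", "q")):
    """premises/conclusion map a valuation dict to a value.
    Entailment holds iff every valuation that designates all
    premises also designates the conclusion."""
    for vals in product((T, N, F), repeat=len(atoms)):
        v = dict(zip(atoms, vals))
        if all(prem(v) in designated for prem in premises):
            if conclusion(v) not in designated:
                return False
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
p_implies_q = lambda v: impl(v["p"], v["q"])

# With only T designated (K3-style), modus ponens preserves designation:
print(entails([p, p_implies_q], q, designated={T}))     # True
# With T and N both designated (LP-style), it fails:
print(entails([p, p_implies_q], q, designated={T, N}))  # False
```

The second call fails on the valuation p = N, q = F: both premises come out designated but the conclusion doesn’t, which is exactly the sense in which changing the designated set changes the entailment relation.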

First, there might be other values that we could assign to sentences in our logic. If the sentences aren’t declaratives—but instead they’re imperatives or interrogatives—then we might preserve the force of the commands or the answerability(?) of the questions. Another way of taking this idea forward would be to preserve something other than, or in addition to, truth in declarative sentences. Truth might not be the only valuation we can give to declarative sentences and other valuations might have their own structure. We could preserve their (pragmatic or evidential) assertability. Or we could preserve their believability or justifiedness. Other than broadly epistemic values, we might care about preserving the beauty or pith or pragmatic value of some sentences. We could restrict our attention to only sentences of a certain kind, such as descriptions of moves in a game, and identify those sets that preserve the possibility of a win.

One question about this extension is whether entailments can mix. If we have two orthogonal valuations with two designated values, then do we need distinct entailment relations for each designated value? Could these mix in interesting (i.e. non-additive) ways?

A problem with this approach, however, is that identifying alternative valuations and designated values may just be a roundabout way of creating a kind of modal logic. We could just as well introduce a “believable” modal operator ◻ and read ‘◻P’ as “it is true that P is believable.” All these other modalities and moods can then collapse back into a single designated value: Truth.

A second extension we could try is to assign sentences a gradient of distinct measures of a single value. If we invent multiple possible valuations and we think the values are all measuring the same thing, then we’re faced with a decision. Should we fix a threshold such that that value and the ones ‘above’ it are designated, or should we treat designation (and entailment) as something that comes in degrees? Both directions make sacrifices. The former essentially reproduces the division between Truth and Falsity, just with extra steps. Furthermore, it permits us to treat a relation between sentences that is ‘lossy’ as if it were an entailment. This might be a desirable property for a logic of a process that is lossy in this sense; perhaps we can never be certain of X, so any inference to X (even from X itself) will only entail an ‘above-the-threshold’ valuation.

The latter approach is more suggestive, although it’s hard to see how designation and entailment could come in degrees. It could be that under certain conditions the entailment relation returns an output that is no less designated than the least designated input. This would be some kind of deductive or super-deductive entailment relation, in the sense that we could actually improve on the level of designation we started with. Or we could have entailment that returns the mean, or some other function, of the designations of the inputs. Perhaps instead the entailment relation returns an output that is at best the level of the highest designated input. This looks more inductive. In any case, what entailment preserves on this approach is less clear. The point of the entailment relation seems to be that it functions as a guarantee: if you put good things in and play by the rules, then good things will come out. But weakening entailment this way yields a relation that only guarantees the result is ‘minimally bad’ in some way, rather than really preserving the designated value.
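The min-preserving option is easy to sketch. Below, graded designation is just a number in [0, 1], and an inference ‘holds’ only if the conclusion’s degree is no less than the least designated premise. All the names here are illustrative, not an established formalism:

```python
def min_preserving(premise_values):
    """The lower bound a conclusion must meet for the inference
    to count as degree-preserving: the least designated input."""
    return min(premise_values)

def holds(premise_values, conclusion_value):
    # Degree-preserving 'entailment': no designation is lost.
    return conclusion_value >= min_preserving(premise_values)

print(holds([0.9, 0.7], 0.7))  # True: conclusion keeps the floor
print(holds([0.9, 0.7], 0.4))  # False: degree was lost
```

Swapping `min` for the mean, or for `max`, gives the other two options mentioned above, with correspondingly weaker guarantees.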

The last thing to consider is other contexts in which we care about preserving values across some operation or relation. Take cooking, for example. Abstracted from any specific content, we can think of cooking as just “take some ingredients, apply some operations to them, produce some food.” Not only is “edibility” preserved across (some) cooking relations, but another valuation of food that we care about—tastiness—should increase when the relation successfully obtains. This is a limitation of the classical entailment relation: the output can never be any better than the inputs. There are lots of cases of practical reasoning—morality, economic or political strategy—where we want inference relations of this sort to be ampliative, that is, to improve upon or go beyond their inputs. Whatever alternative valuations we can imagine, it seems to me that ampliative inferences, while valuable, are much more context-dependent. Whether they are so context-dependent that they no longer merit the term “logical,” I’m not sure.

- - -


Revisiting the Categories

When we think about giving a “logic” of something, what guides our choices? What makes some concepts logically apt while others seem to resist formalization?

Kant

In the Critique of Pure Reason Kant says the following:

In such a way there arise exactly as many pure concepts of the understanding, which apply to objects of intuition in general a priori, as there were logical functions of all possible judgments in the previous table: for the understanding is completely exhausted and its capacity entirely measured by these functions. Following Aristotle we will call these concepts categories, for our aim is basically identical with his although very distant from it in execution. [A79-80; B105]

Kant proceeds to list off the four types of category—Quantity, Quality, Relation, Modality—that correspond to the four types of judgment. Of the judgments, there are

  1. Quantity: Universal, Particular, Singular

  2. Quality: Affirmative, Negative, Infinite

  3. Relation: Categorical, Hypothetical, Disjunctive

  4. Modality: Problematic, Assertoric, Apodictic

These correspond to the categories:

  1. Quantity: Unity, Plurality, Totality

  2. Quality: Reality, Negation, Limitation

  3. Relation: Substance, Causality, Community

  4. Modality: Possibility, Existence, Necessity

Frege

Nearly a century later, in §4 of the Begriffsschrift, Frege attempts to “explain the significance for our purposes of the distinctions that we introduce among judgments.” Of these distinctions, Frege dismisses the entire class of relational judgments/categories: “The distinction between categoric, hypothetic, and disjunctive judgments seems to me to have only grammatical significance.”

He is likewise dismissive of the class of modal judgments/categories, saying that “since this does not affect the conceptual content of the judgment, the form of the apodictic judgment has no significance for us” and that when a speaker frames an assertion as possible, “either the speaker is suspending judgment by suggesting that he knows no laws from which the negation of the proposition would follow or he says that the generalization of this negation is false.”

Frege essentially clears the road of obstacles to a ‘mathematical’ logic by eliminating the super-categories of Relation and Modality as not up for logical analysis. What is left can be given a purely extensional analysis.

Peirce

It was more than a decade before Frege published the Begriffsschrift that Peirce introduced his three fundamental categories: Firstness, Secondness, and Thirdness. Peirce investigated the Kantian categories and concluded that “the fundamental categories of thought really have that sort of dependence upon formal logic that Kant asserted.” He says that he “became thoroughly convinced that such a relation really did and must exist.” [CP 1.561]

Peirce went on to consider the possibility that the Kantian categories are “part of a larger system of conceptions.” [CP 1.563] In particular, he thought that they might have a kind of intersecting nature in which, for example:

the categories of relation… are so many different modes of necessity, which is a category of modality; and in like manner, the categories of quality… are so many relations of inherence, which is a category of relation. Thus, as the categories of the third group are to those of the fourth, so are those of the second to those of the third; and I fancied, at least, that the categories of quantity… were, in like manner, different intrinsic attributions of quality. [CP 1.563]

The point seems to be that each of the Kantian fundamental categories relates to, or is a form of, each of the others. Since there are four fundamental categories, each one has three others it can relate to. This reinforces Peirce’s triadic approach to the categories and the idea that the source of the fundamental categories is (the concepts of) the first three ordinals. These are the “modes of being” that Peirce uses to investigate and explain the ‘downstream’ accidental categories: quality, relation, representation.

Questions about the categories are interesting in their own right, but I’m interested in them primarily as the atoms of logical formalism. For Aristotle, the categories are “in no way composite” (Categories §1, part 3). Kant, Frege, and Peirce attempted to classify and supplement the Aristotelian categories. The explosion of intensional logics after C. I. Lewis has given us the tools to represent the logical fine structure of more of the categories than ever. We can represent all kinds of possibility and necessity and all kinds of modal contexts. Graphical models give us formal tools to dig into the structure of relations as well.

I suspect, but can’t yet prove, that the logic of graphical causal models finally puts the categories of action and being acted upon within the scope of purely formal logic. At the least, it seems to me that access to the fine structure of causal relations is what sets graphical modeling apart from purely predicative or probabilistic analyses of causation.

- - -


Properties of Relations

I often want a list of properties that can obtain for two-place relations and don’t have a good place to look them up. Here is an ongoing list.

First, some notation. Consider a binary relation R on a (possibly infinite) set X. That is:

R ⊆ X × X

Equivalence Properties

A relation R is an equivalence relation if and only if R is reflexive, symmetric, and transitive.

  1. R is reflexive iff ∀x∈X(Rxx)

  2. R is symmetric iff ∀x,y∈X(Rxy → Ryx)

  3. R is transitive iff ∀x,y,z∈X((Rxy ∧ Ryz) → Rxz)

Each of these properties also has variants:

  1. R is irreflexive iff ∀x∈X(¬Rxx)

  2. R is quasi-reflexive iff ∀x,y∈X(Rxy → (Rxx ∧ Ryy))

  3. R is antisymmetric iff ∀x,y∈X((Rxy ∧ Ryx) → x=y)

  4. R is antitransitive iff ∀x,y,z∈X((Rxy ∧ Ryz) → ¬Rxz)

Other Properties

  1. R is connected iff ∀x,y∈X(x≠y → (Rxy ∨ Ryx)); if Rxy ∨ Ryx holds even when x=y, R is strongly connected (total)

  2. R is right Euclidean iff ∀x,y,z∈X((Rxy ∧ Rxz) → Ryz)

  3. R is left Euclidean iff ∀x,y,z∈X((Ryx ∧ Rzx) → Ryz)
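For finite sets, each of these properties can be checked by brute force. Here’s a Python sketch, assuming R is represented as a set of ordered pairs (the helper names are mine):

```python
from itertools import product

def is_reflexive(R, X):     return all((x, x) in R for x in X)
def is_irreflexive(R, X):   return all((x, x) not in R for x in X)
def is_symmetric(R, X):     return all((y, x) in R for (x, y) in R)
def is_antisymmetric(R, X): return all(x == y for (x, y) in R if (y, x) in R)

def is_transitive(R, X):
    # (Rxy ∧ Ryz) → Rxz, checked over every chained pair in R
    return all((x, z) in R for (x, y) in R for (w, z) in R if y == w)

def is_right_euclidean(R, X):
    # (Rxy ∧ Rxz) → Ryz
    return all((y, z) in R for (x, y) in R for (w, z) in R if x == w)

def is_equivalence(R, X):
    return is_reflexive(R, X) and is_symmetric(R, X) and is_transitive(R, X)

X = {1, 2, 3}
# Congruence mod 2 restricted to X: an equivalence relation.
R = {(a, b) for a, b in product(X, repeat=2) if a % 2 == b % 2}
print(is_equivalence(R, X))    # True
print(is_antisymmetric(R, X))  # False: 1 R 3 and 3 R 1 but 1 ≠ 3
```

These brute-force checks are quadratic or cubic in |R|, which is fine for lookup-table purposes like this list.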
