Designated Values
In logic, a designated value is a value that sentences can be assigned and that is preserved by entailment. In other words, if we define a logic L=⟨P,F,⊢⟩ with atomic sentences in P and operations in F, then the entailment relation ⊢ relates (a set of) input sentences to an output sentence of the language, subject to the following condition: if the input sentences all have the designated value, then the output sentence does too. When the designated value isn’t “preserved” in this way, the entailment relation doesn’t obtain between the sentences.
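Put a bit more formally, in the standard way of making this precise (writing D for the set of designated values, v for a valuation, Γ for the input sentences, and φ for the output; the symbols are mine, not part of the definition above):

```latex
% Entailment as preservation of designated values:
% Gamma entails phi just in case every valuation that
% designates all of Gamma also designates phi.
\Gamma \vdash \varphi
  \quad\Longleftrightarrow\quad
  \forall v \,\big[\, (\forall \gamma \in \Gamma)\; v(\gamma) \in D
    \;\Rightarrow\; v(\varphi) \in D \,\big]
```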
Typically, we think of the designated value as “Truth” and, in the bivalent case, the other value is “Falsity.” Falsity is undesignated, so it isn’t preserved by the entailment relation.
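To make the bivalent case concrete, here is a minimal Python sketch that checks preservation by brute force over all valuations (the encoding of sentences as functions and the name entails are mine, purely for illustration):

```python
from itertools import product

# Bivalent case: the values are True and False; only True is designated.
# A sentence is encoded as a function from a valuation (a dict assigning
# values to atoms) to a value.
def entails(premises, conclusion, atoms):
    """Check that every valuation designating all premises
    also designates the conclusion."""
    for values in product([True, False], repeat=len(atoms)):
        v = dict(zip(atoms, values))
        if all(p(v) for p in premises) and not conclusion(v):
            return False
    return True

P = lambda v: v["P"]
Q = lambda v: v["Q"]
P_implies_Q = lambda v: (not v["P"]) or v["Q"]

print(entails([P, P_implies_Q], Q, ["P", "Q"]))  # True: modus ponens
print(entails([P_implies_Q], P, ["P", "Q"]))     # False: not an entailment
```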
But Truth isn’t the only thing we might want to preserve across some entailment-like relation. There are a few ways we might extend this idea.

First, there might be other values that we could assign to sentences in our logic. If the sentences aren’t declaratives but instead imperatives or interrogatives, then we might preserve the force of the commands or the answerability(?) of the questions. Another way of taking the idea forward would be to preserve something other than, or in addition to, truth in declarative sentences. Truth might not be the only valuation we can give to declarative sentences, and other valuations might have their own structure. We could preserve their (pragmatic or evidential) assertability, or their believability or justifiedness. Beyond broadly epistemic values, we might care about preserving the beauty or pith or pragmatic value of some sentences. We could also restrict our attention to sentences of a certain kind, such as descriptions of moves in a game, and identify those sets that preserve the possibility of a win.
One question about this extension is whether entailments can mix. If we have two orthogonal valuations, each with its own designated value, do we need a distinct entailment relation for each? Could the two mix in interesting (i.e. non-additive) ways?
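Here is one toy illustration of non-additive mixing, with pair-values ⟨truth, assertability⟩ and a deliberately artificial coordinate-swapping connective; everything in it (the designated sets, the swap operator) is invented for the example:

```python
from itertools import product

# Each sentence takes a pair-value (truth, assertability).
PAIRS = [(t, a) for t in (0, 1) for a in (0, 1)]
D_TRUE   = {(1, 0), (1, 1)}   # designated for truth
D_ASSERT = {(0, 1), (1, 1)}   # designated for assertability
D_BOTH   = {(1, 1)}           # jointly designated

def entails(premise, conclusion, designated):
    """Single-premise entailment: every pair-value designating
    the premise must also designate the conclusion."""
    return all(conclusion(v) in designated
               for v in PAIRS if premise(v) in designated)

atom = lambda v: v                # the bare atom
swap = lambda v: (v[1], v[0])     # artificial connective crossing the coordinates

print(entails(atom, swap, D_BOTH))    # True: joint designation is preserved
print(entails(atom, swap, D_TRUE))    # False: truth alone is not
print(entails(atom, swap, D_ASSERT))  # False: assertability alone is not
```

The joint relation validates an argument that neither single-valuation relation does, so designating both values at once is not simply the intersection of the two separate entailments.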
A problem with this first approach, however, is that identifying alternative valuations and designated values may just be a roundabout way of creating a kind of modal logic. We could just as well introduce a “believable” modal operator and read “◻P” as saying that it is true that P is believable. All these other modalities and moods can then collapse back into a single designated value: Truth.
A second extension we could try is to assign a gradient of distinct measures of a single value to the sentences in our logic. If we invent multiple possible valuations and think that the values all measure the same thing, then we’re faced with a decision. Should we fix a threshold such that that value and the ones ‘above’ it (in some sense) are designated, or should we treat designation (and entailment) as something that comes in degrees? Both directions make sacrifices. The former essentially reproduces the division between Truth and Falsity, just with extra steps. Furthermore, it permits us to treat a relation between sentences that is ‘lossy’ as if it were an entailment. This might be a desirable property for a logic of a process that is lossy in this sense; perhaps we can never be certain of X, so any inference to X (even from X itself) will only entail an ‘above-the-threshold’ valuation.
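A small sketch of this threshold option, with degrees in [0, 1] and an arbitrary threshold of 0.8: the min conjunction (the Gödel conjunction of fuzzy logic) preserves designation, while the product conjunction is ‘lossy’ and can drag two designated inputs below the threshold.

```python
# Degrees in [0, 1]; designated = everything at or above a threshold.
# The threshold and the sample grid are illustrative choices.
THRESHOLD = 0.8
designated = lambda x: x >= THRESHOLD

def preserves(rule, samples):
    """Does `rule` output a designated degree whenever both inputs are designated?"""
    return all(designated(rule(a, b))
               for a in samples for b in samples
               if designated(a) and designated(b))

samples = [i / 100 for i in range(101)]

min_conj  = min                     # Goedel ("min") conjunction
prod_conj = lambda a, b: a * b      # product conjunction

print(preserves(min_conj, samples))   # True: min never drops below its inputs
print(preserves(prod_conj, samples))  # False: e.g. 0.85 * 0.9 = 0.765 < 0.8
```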
The latter approach is more suggestive, although it’s hard to see how designation and entailment could come in degrees. It could be that, under certain conditions, the entailment relation returns an output that is no less designated than the least designated input. This would be a deductive or even super-deductive entailment relation, in the sense that we could actually improve on the level of designation we started with. Or we could have an entailment relation that returns the mean, or some other function, of the designations of the inputs. Perhaps instead the entailment relation returns an output that is at best the level of the most designated input; this looks more inductive. In any case, what entailment preserves on this approach is less clear. The point of the entailment relation seems to be that it functions as a guarantee: if you put good things in and play by the rules, then good things will come out. But weakening entailment this way yields a relation that only guarantees the result is ‘minimally bad’ in some way, rather than really preserving the designated value.
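Schematically, the three options just floated place different constraints on the degree of the conclusion (the notation is mine, not standard):

```latex
\begin{align*}
  v(\varphi) &\;\geq\; \min_{\gamma \in \Gamma} v(\gamma)
      && \text{(deductive: a guaranteed floor, possibly improved on)}\\
  v(\varphi) &\;=\; \tfrac{1}{|\Gamma|} \sum_{\gamma \in \Gamma} v(\gamma)
      && \text{(averaging)}\\
  v(\varphi) &\;\leq\; \max_{\gamma \in \Gamma} v(\gamma)
      && \text{(inductive: only a ceiling)}
\end{align*}
```

Only the first reads as a guarantee; the averaging rule fixes the output outright, and the inductive rule offers a ceiling rather than a floor.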
The last thing to consider about extending designated values and entailment relations is other contexts in which we care about preserving values across some operation or relation. Take cooking, for example. Abstracted from any specific content, we can think of cooking as just “take some ingredients, apply some operations to them, produce some food.” Not only is “edibility” preserved across (some) cooking relations, but another valuation of food that we care about, tastiness, should increase when the relation successfully obtains. This points to a limitation of the classical entailment relation: the output can never be any better than the inputs. There are lots of cases of practical reasoning (morality, economic or political strategy) where we want inference relations of this sort to be ampliative, that is, to improve upon or go beyond their inputs. Whatever alternative valuations we can imagine, it seems to me that ampliative inferences, while valuable, are much more context-dependent. Whether they are so context-dependent that they no longer merit the term “logical,” I’m not sure.
- - -