Sunday, February 14, 2010

Symbolic logic and ontology

[image: theorem from Russell and Whitehead, Principia Mathematica]

In what ways do the abstract features of symbolic logic reflect characteristics of thought?

The syntax of symbolic logic is illustrative.  First-order predicate theory provides syntactic categories for individuals (a, b, c; x, y, z), properties (Fx, Gx), relations (xRy), n-place relations (R(x1, ..., xn)), logical connectives (∧, ∨, ~, ⊃), and quantifiers (∀x, ∃x).  We also need to introduce the notation of mathematical functions (x = f(w,y,z)) -- though this represents a significant expansion of the formal power of symbolic logic.  Individuals are objects that possess properties and stand in relations with other objects.  Properties and relations may be interpreted intensionally or extensionally: in terms of a verbal definition or in terms of the class of objects possessing the property or relation.  Functions are mathematical relations specifying the value of the dependent variable for every setting of the independent variables.

This syntax permits us to formulate statements about individuals and their properties and relations (a small evaluation sketch follows these examples):
  • Bj (John is bald)
  • jTa  (John is taller than Alice)
  • ∃x(xTa)  (there exists some x such that x is taller than Alice; someone is taller than Alice)
  • ∀x(Cx ⊃ xTa) (for all x, if x is C then x is taller than Alice; all members of the choir are taller than Alice)
  • p = nRT/V (pressure equals n times R times temperature divided by volume)
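As a concrete illustration of the extensional reading, here is a minimal sketch in Python (not part of the original argument) that models individuals as a finite domain and properties and relations as sets, and then evaluates the statements above; all of the particular facts (who is bald, who is taller than whom, who is in the choir) are invented for the example.

    # Minimal extensional model: individuals form a finite domain;
    # properties and relations are sets.  All data here are invented.
    domain = {"john", "alice", "bill", "carol"}

    Bald = {"john"}                       # B: "x is bald"
    Choir = {"bill", "carol"}             # C: "x is a member of the choir"
    Taller = {("john", "alice"),          # T: "x is taller than y"
              ("bill", "alice"),
              ("carol", "alice")}

    print("john" in Bald)                                # Bj
    print(("john", "alice") in Taller)                   # jTa
    print(any((x, "alice") in Taller for x in domain))   # ∃x(xTa)
    print(all(x not in Choir or (x, "alice") in Taller
              for x in domain))                          # ∀x(Cx ⊃ xTa)

    # A mathematical function such as p = nRT/V is simply a rule assigning a
    # value of the dependent variable to each setting of the independent ones.
    def pressure(n, R, T, V):
        return n * R * T / V

On this reading the quantifiers amount to iteration over the domain, which is possible only because the domain is finite.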
So the syntax of symbolic logic corresponds to a small subset of English syntax: nouns (singular and generic), adjectives, and relation terms.  What this syntax lacks is a direct way of expressing "doing" or becoming -- verbs or process terms.  To express a thought like "The Roman Empire was becoming more corrupt over time" we would need to do some gymnastics.  If we restrict ourselves to the predicate core of symbolic logic, then we would need to introduce a new predicate:
  • Cx = x is becoming more corrupt over time
  • r = Roman Empire
  • Cr = The Roman Empire is becoming more corrupt over time
This solution is unsatisfactory because it leaves invisible in the logical paraphrase an element of the statement that is inferentially relevant in English -- precisely the idea of change over time.  If we make use of the conceptual machinery of mathematical functions and differential equations, then we can provide a more refined analysis of the sentence:
  • r = Roman Empire
  • C(x,t) = the degree of corruption possessed by x at time t
  • dC(r,t)/dt > 0 = the value of C for r is increasing over time
Here we have been able to represent change or process as a derivative: the rate of change of the value of the function with respect to time.  This gives us a way of representing action, process, and change; though it is an open question whether this formalism will suffice for all types of change.  Consider this statement: "Robert's personality has changed a lot over the past decade."  In order to capture this idea we need a set of characteristics that constitute personality -- that is, we need a theory or definition of personality; and we need to represent these characteristics as functions of time.  Suppose this is our working analysis of "personality":
  • A(x,t) = degree of agreeableness x shows at time t
  • E(x,t) = degree of extroversion x shows at time t
  • O(x,t) = degree of openness x shows at time t
  • N(x,t) = degree of neuroticism x shows at time t
  • C(x,t) = degree of conscientiousness x shows at time t
To say that Robert's personality is unchanging is the simplest case:
  • dA(r,t)/dt = dE(r,t)/dt = dO(r,t)/dt = dN(r,t)/dt = dC(r,t)/dt = 0
And to say that Robert's personality is changing in that he is becoming less agreeable and more neurotic might be paraphrased this way:
  • dA(r,t)/dt < 0 ∧ dN(r,t)/dt > 0
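To make the derivative representation concrete, here is a small numerical sketch in Python; the particular trait trajectories for "Robert" and the finite-difference step are invented for the illustration and are not part of the analysis above.

    # Invented trait trajectories: agreeableness and neuroticism as functions
    # of time t (in years over the decade in question).
    def A(t):          # agreeableness, slowly declining
        return 0.8 - 0.02 * t

    def N(t):          # neuroticism, slowly rising
        return 0.3 + 0.03 * t

    def derivative(f, t, h=1e-6):
        # central finite-difference estimate of df/dt at time t
        return (f(t + h) - f(t - h)) / (2 * h)

    t = 5.0
    becoming_less_agreeable = derivative(A, t) < 0        # dA(r,t)/dt < 0
    becoming_more_neurotic = derivative(N, t) > 0         # dN(r,t)/dt > 0
    print(becoming_less_agreeable and becoming_more_neurotic)   # True

The same pattern extends to the other three trait functions, and to the claim that Robert's personality is unchanging (all five derivatives equal to zero).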
It is a premise of logical positivism -- shared by Bertrand Russell in Principia Mathematica, Ludwig Wittgenstein in the Tractatus, and Rudolf Carnap in The Logical Structure of the World -- that all the knowledge claims of science can be formulated using only these syntactic elements.  (In fact, it is possible to reduce the logical connectives to a single connective, the Sheffer stroke ("not and"), and the quantifiers to a single quantifier plus the negation sign.)  The vocabulary of a science consists of a finite number of primitive (undefined) terms and a number of terms defined as logical compounds of primitive terms.  For example, mass, time, and location might be primitive terms in classical mechanics; velocity and momentum are then defined as logical compounds of these primitives.  And this view of the adequacy of first-order predicate logic for the whole of science implies something like a claim of "concept neutrality": the vocabulary of any science can be reformulated in terms of these simple logical elements.
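The parenthetical claim about the Sheffer stroke can be checked mechanically.  The sketch below defines negation, conjunction, disjunction, and the conditional purely in terms of NAND and verifies, by exhaustive truth tables, that they agree with the usual connectives.

    from itertools import product

    def nand(p, q):                  # the Sheffer stroke: "not both"
        return not (p and q)

    # The usual connectives defined solely in terms of NAND
    def neg(p):        return nand(p, p)
    def conj(p, q):    return nand(nand(p, q), nand(p, q))
    def disj(p, q):    return nand(nand(p, p), nand(q, q))
    def implies(p, q): return nand(p, nand(q, q))

    # Exhaustive check against the standard truth tables
    for p, q in product([True, False], repeat=2):
        assert neg(p) == (not p)
        assert conj(p, q) == (p and q)
        assert disj(p, q) == (p or q)
        assert implies(p, q) == ((not p) or q)
    print("~, ∧, ∨, ⊃ are all definable from the Sheffer stroke")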

Does this formulation help when we inquire about our "conceptual schemes" as we attempt to categorize the social world?  Not very much.  The hard questions that arise when we attempt to articulate a conceptual system for analyzing personality, social movements, revolutions, or ideologies are not made easier by the putative fact that we could represent any adequate scheme of concepts in terms of the formalism of first-order predicate theory plus mathematical functions.  If we were willing to treat revolutions as discrete historical individuals with fixed properties, then of course it is true that we can formulate our theories of revolution in terms like these (illustrated in the sketch after the list):
  • "All revolutions involve either widespread social unrest or defeat in war."
  • ∀x(Rx ⊃ (Ux ∨ Dx)) [all historical things are such that if they are revolutions then either they possess social unrest or experience defeat in war]
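If we did treat revolutions that way, the generalization could be checked mechanically against a set of cases.  Here is a toy Python sketch, with invented cases and property assignments, just to show what the quantified claim amounts to on that reading.

    # Invented cases, each treated as an individual with fixed properties
    cases = [
        {"name": "case-1", "revolution": True,  "unrest": True,  "defeat_in_war": False},
        {"name": "case-2", "revolution": True,  "unrest": False, "defeat_in_war": True},
        {"name": "case-3", "revolution": False, "unrest": False, "defeat_in_war": False},
    ]

    # ∀x(Rx ⊃ (Ux ∨ Dx))
    claim_holds = all((not c["revolution"]) or c["unrest"] or c["defeat_in_war"]
                      for c in cases)
    print(claim_holds)   # True for these invented data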
The ontological problem is simply this: revolutions are not uniform across instances or across time.  Revolutions do not have fixed, invariant properties.  So we cannot really treat revolutions as individuals over which we can quantify.  And the syntactical choice of representing "revolutions" as unchanging individuals is unsatisfactory.

Two observations seem justified.  First, the syntax of first-order predicate theory plus functions probably succeeds as a "grammar" within which we can express any knowledge claim in science.  (The most obvious exception is modal logic: claims that express necessity and possibility.  But many philosophers and scientists would argue that modal claims are not necessary to the vocabulary of science.)  But second, this logical syntax does not help to solve the deep conceptual and theoretical problems that must be addressed in real exercises of social science.  To define "revolution," "corruption," or "anomie" requires conceptual work that goes beyond the task of representing a given system of scientific statements in terms of a set of predicates and functions.  So symbolic logic is simply a scheme of representation, not a master system of concepts and syntax.

(See an earlier posting on these issues under the rubric of "Knowledge Claims in the Social Sciences".)

5 comments:

Vanitas said...

Another awesome post.

That being said, I think the next move in this game is for the arch-positivist to insist that the concepts used in the social sciences are ontologically problematic BECAUSE they can't eventually be reduced to predicates in first-order logic.

Such a thinker will point to the near-impossibility of citing causal relations between such entities/concepts, and to the inherent instability of any "science" of social relations which uses them. What would you say in response?

Dan Little said...

Nick, it's true that it is challenging to express a realist interpretation of a causal assertion in 1st-order predicate logic. We can't express a counterfactual with these resources -- if Franco hadn't returned to Malaga, the Civil War would have ended quickly. And a realist interpretation of causation asserts the existence of an underlying real causal mechanism or power -- which seems to require some kind of modal resource -- natural necessity, for example. We wouldn't accept the paraphrase

F causes G = ∀x(Fx ⊃ Gx)

or its probabilistic equivalent. These merely capture constant conjunction. So maybe we need a bit more in the way of formal resources.

Dan Little said...

Here is one way of representing activities and doings. We might represent an activity as a 2-place relation: x is doing A to y. So we might represent "John is building a house" in this way:

xBy : x is building y
Hx : x is a house
j : John

(Ex)(Hx & jBx) : there exists an x such that x is a house and John is building x

Is this a satisfactory way of representing activities and doings?

Alex Tolley said...

If you use binary logic, what you say is true, although even here any x can be redefined as a collection of properties.

However, you can use other logics, e.g. fuzzy logic, to handle the identity of concepts. This largely solves the toy problem in your argument.

Jacob said...

The field of artificial intelligence has invested a great deal of effort in augmenting first-order logic to model real-world phenomena successfully. It might be useful to consult a major textbook introduction to the field to get a handle on how well it actually succeeds.

Without appropriate modification, first-order logic is not particularly well equipped to handle reasoning about propositional attitudes. Hence the development of epistemic logic.