Wednesday, August 26, 2015

Guest post by Doug Porpora on social structures


Here is a response to my earlier post on social ontology and structure from Doug Porpora, professor of sociology, Drexel University. Doug is the author of Reconstructing Sociology: The Critical Realist Approach. Thanks, Doug, for this thoughtful and considered reflection!

FROM DOUG PORPORA:

I have four comments in response. First, while I am happy to stand somewhere alongside John Levi Martin, one important difference between us is that I attach reality – and even an emergent mind-independence – to the relations connecting social positions whereas, I believe, he would not.

Second, sticking close to an ordinary language sense of the word, I would confine the word structure to those connecting relations and not to the higher level things you speak of. I would rather call the higher level entities institutions.

Third, as you suggest, I certainly do not deny the existence of families, social movements, clubs and states. But each seems to me to be a nexus of connections among social positions.

My fourth reflection is more uncertain. Do those things – states, etc. – represent some kind of emergent entities and as such a higher level of thing? You are not alone in pushing me in this direction. Most fellow critical realists would do so, and Ruth Groff has recently been trying to get me to do the same.

I certainly believe in emergence rather than reductionism. I believe life is a level emergent from non-life, consciousness a yet higher emergent level, and self-consciousness an even further emergent level.

I believe emergence is more general than just these levels, but let us stay there. In the case of each of these levels, a new kind of causality emerges not found on the level below – replication and natural selection in the case of life, speech acts in the case of linguistic consciousness. The emergence of these new causal powers can be explained by the level below, but their functioning cannot.

Is there anything like that going on with the putative emergent entities of families and states and such? I suppose I would say that something like Durkheimian social cohesion is an example of a new causal property not associated with individual people. So, as you suggest, do new causal properties emerge from people in social life? Yes.

Do these wholes and their properties constitute a level in the same way as does life or consciousness? I would say no for two reasons. First, as new causal kinds, speech acts and natural selection act directly from whole to whole. We can explain the causal logic connecting these holistic behaviors without micro-analysis of the parts.

I do not see anything like that going on with social cohesion. Any effect it has on other wholes cannot be explained without manifesting through individual behavior. There is no new causal logic here. I think you would agree, no? Second, any new, putative causal logic is always too penetrated by acts of what Hegel called “world historical individuals” to constitute an autonomous level.

So socially emergent entities, okay. An autonomous level of them as per sociological holism? In my opinion, no. Thanks again for the reflection.

Doug



Sunday, August 23, 2015

A flat social reality?


I've been inclined to talk about the social world in terms of levels or layers, with a few provisos -- multiple layers, causation across layers, fuzzy boundaries (link, link). But is this perhaps a misleading ontology? Would we be better served by thinking of the social world as "flat" -- involving processes and relations all at the same level? It sometimes appears that John Levi Martin has such an ontology in mind in Social Structures (link), and Doug Porpora envisions such a possibility in "Four Concepts of Structure" (link). So this is the idea I'd like to explore here.

What would that flat world look like? Here is one effort at formulating a flat social ontology.
The social world exists as the embodiment of sets of individual persons with powers, capacities, and actions and interactions, and who stand in a vast range of concrete social relationships with each other.
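To fix ideas, that formulation can be rendered as a toy data model (my own sketch, with invented names; nothing here comes from Martin or Porpora). The entire flat world is two collections -- persons with powers, and concrete relations among them -- with no further levels posited:

    from dataclasses import dataclass

    @dataclass
    class Person:
        name: str
        powers: list  # capacities and potential actions

    # The flat social world: persons plus concrete relations, nothing else.
    persons = [Person("alice", ["vote", "hire"]), Person("bob", ["vote"])]
    relations = {("alice", "bob"): "employer-of"}

    # On the flat view, anything else -- a family, a firm, a state -- must be
    # picked out as a pattern within `persons` and `relations`, not added as
    # a further object.

On this rendering, the question of this post becomes: is anything lost if the ontology stops here?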
Here is a snippet from Porpora's article about structure mentioned above that seems to have this view in mind:
In contrast with the previous conception of social structure, this one is not a version of sociological holism. It does not portray social structure as something that operates over the heads of human actors. Instead, social structure is a nexus of connections among them, causally affecting their actions and in turn causally affected by them. The causal effects of the structure on individuals are manifested in certain structured interests, resources, powers, constraints and predicaments that are built into each position by the web of relationships. These comprise the material circumstances in which people must act and which motivate them to act in certain ways. As they do so, they alter the relationships that bind them in both intended and unintended ways. (200)
What does this description leave out? For starters, it leaves out things we would have said were higher levels of social reality: families, organizations, social movements, institutions, economies, clubs, and states. And of course these are legitimate social constructs. But are they inherently "higher level"? Or are they compounds and extended aggregates of the lower-level stuff just mentioned -- individuals with powers, actions, and relations?

Playing this idea out, we might consider that a social movement is a partially ordered group of individuals in association with each other. The organizations that call them forward are other groups of individuals, including their deliberative bodies and executives. The repressive organs of the state? -- yet other organized groups of individuals with powers and agency. And in fact the theory of strategic action fields seems to lean in this direction (Fligstein and McAdam, A Theory of Fields; link).

One important consideration that might be offered for rejecting the flat ontology is the idea that there are causal properties at a higher level that do not attach to entities at the base level.

This view corresponds to the idea some sociologists have of emergence. It is sometimes maintained that social structures have properties at the structural level that cannot be reduced to the properties of the components of the structure. These are emergent properties. If this is so, then we will miss important explanations if we decline to recognize the reality of social structures. And a social structure so conceived is plainly a higher-level social entity than a group of coordinated individuals. Its higher-level standing is a result of this fact: it is composed of objects at the base level, but it has properties that cannot be explained or derived from the properties of those base-level objects.

A related reason for rejecting the flat ontology is the idea that structures, institutions, or value systems -- higher level social things -- may have legitimate causal properties that can be adequately discovered through study of these social things without more information about the base level (individuals in relations). This possibility doesn't necessarily imply that these are emergent properties, only that they are relatively autonomous from the base level. Here again, it seems reasonable to call these higher-level social entities -- and therefore the flat ontology isn't quite enough.

Another important consideration is the evident fact that social compounds have compositional structure. A fish is more than a collection of living cells; it has a stable structure and an internal organization that serves the needs of the fish organism. So it is entirely appropriate to refer to fish as well as living cells. And it seems correct to observe that something like this is true of some social entities as well -- government agencies, worship organizations, corporations.

Finally, it is hard to dispute that social things like kinship systems, business firms, and armies have stable and knowable characteristics that can be studied empirically. We shouldn't adopt an ontology that excludes legitimate topics for empirical research.

So it seems that the parsimonious social ontology doesn't work. It forces us to overlook explanatory factors that are important for explaining social outcomes. And it unreasonably asks us to ignore important features of the social world of which we have reasonably good understanding. In fact, the flat ontology is not far removed from the ontology associated with spare versions of methodological individualism.

So how might a bounded conception of higher-level social entities look? A formulation of a minimal multi-layer alternative to the flat ontology might go along these lines:
  • The social world consists of individuals and relations at the base level PLUS stable compounds of items at this level which have quasi-permanent properties and non-reducible causal powers that have effects on items at the base level.
Here the criterion of higher-level standing in use is --
  • possession of causal properties not reducible to [or needing reduction to] properties at the base level.
By analogous reasoning, we might consider whether there are more complex configurations of base and level 1 entities which themselves have properties that are emergent from or autonomous from base and level 1. And so forth iteratively.
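To make the criterion concrete, here is a minimal sketch (my own toy model, with invented names and numbers; none of this comes from the authors discussed). It represents a base level of individual actors plus a level-1 compound whose candidate causal property -- a crude cohesion measure -- is defined over the whole configuration of relations rather than over any single actor, and which feeds back into base-level behavior:

    from dataclasses import dataclass, field
    from itertools import combinations

    @dataclass
    class Actor:
        name: str
        effort: float  # a base-level property of the individual

    @dataclass
    class Organization:
        members: list
        ties: set = field(default_factory=set)  # concrete relations among members

        def cohesion(self) -> float:
            # Candidate level-1 property: density of ties across the whole
            # membership. No single actor "has" cohesion; it attaches to
            # the configuration.
            possible = len(list(combinations([a.name for a in self.members], 2)))
            return len(self.ties) / possible if possible else 0.0

        def step(self) -> None:
            # Downward causation, toy version: the configuration-level
            # property influences base-level behavior.
            c = self.cohesion()
            for a in self.members:
                a.effort += 0.1 * c

    actors = [Actor("a1", 1.0), Actor("a2", 1.0), Actor("a3", 1.0)]
    org = Organization(actors, ties={("a1", "a2"), ("a2", "a3")})
    org.step()
    print(round(org.cohesion(), 2), [round(a.effort, 2) for a in actors])

Whether such a configurational property is genuinely non-reducible is of course the point at issue; the sketch only illustrates what a level-1 property defined over a whole configuration might look like.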

Are there level 2 entities by this criterion? For example, might the state be a level-2 entity, in that it encompasses organizations and individuals, and it possesses new causal properties not present at level 1? In principle this seems possible. The state is a complex network of organizations and individuals. And it is logically possible that new causal powers emerge that depend on both base and level 1, but that do not require reduction to those lower-level properties.

So the language of levels of the social appears to be legitimate after all. It gives us a conceptual vocabulary that captures composition and complexity, and it allows us to identify important social causal powers that would not be accessible to us on the flat ontology.

Saturday, August 15, 2015

Niiniluoto on scientific realism


Debates over realism have been at the center of the philosophy of science for at least seventy-five years. The fundamental question is this: what exists in the world? And how do we best gain knowledge about the nature and properties of these real things? The first question is metaphysical, while the second is epistemic.

Scientific realism is the view that “mature” areas of science offer theories of the nature of real things and their properties, and that the theories of well-confirmed areas of science are most likely approximately true. So science provides knowledge about reality independent of our ideas; and the methods of science justify our belief in these representations of the real world. Scientific methods are superior to other forms of belief acquisition when it comes to successful discovery of the entities and properties of the world in which we live.

But this statement conceals a number of difficult issues. What is involved in asserting that a theory is true? We have the correspondence theory of truth on the one hand — the idea that the key concepts of the theory succeed in referring to real entities in the world independent of the theory. And on the other hand, we have the pragmatist theory of truth — the idea that “truth” means “well confirmed”. A further difficulty arises from the indisputable fallibility of science; we know that many well-confirmed scientific theories have turned out to be false. Finally, the idea of “approximate truth” is problematic, since it seems to imply “not exactly true,” which in turn implies “false”. Hilary Putnam distinguished two kinds of realism based on the polarity of correspondence and justification: metaphysical realism and internal realism; and it seems plain enough that “internal realism” is not a variety of realism at all.

Another central issue in the metatheory of realism is the question, what kinds of considerations are available to permit us to justify or refute various claims of realism? Why should we believe that the contents of current scientific theories succeed in accurately describing unobservable but fundamental features of an independent world? And the strongest argument the literature has produced is that offered by Putnam and Boyd in the 1970s: the best explanation of the practical and predictive successes of the sciences is the truth of the theoretical assumptions on which they rest.

Ilkka Niiniluoto’s 1999 Critical Scientific Realism proceeds from the general orientation of Roy Bhaskar’s critical realism. But it is not so much a synthesis of the philosophy of critical realism as an analytical dissection of the logic and plausibility of various claims of scientific realism. As such it is an excellent and rigorous introduction to current discussions of scientific realism. Niiniluoto analyzes the metatheory of realism into six areas of questions: ontological, semantical, epistemological, axiological, methodological, and ethical (2). And he provides careful and extensive discussions of the issues that arise under each topic. Here is a useful taxonomy that he provides for the many variants of realism (11):

[Figure: Niiniluoto's taxonomy of the variants of realism, Critical Scientific Realism, p. 11]


Here is how Niiniluoto distinguishes “critical scientific realism” from other varieties of realism:
  • R0: At least part of reality is ontologically independent of human minds.
  • R1: Truth is a semantical relation between language and reality (correspondence theory).
  • R2: Truth and falsity are in principle applicable to all linguistic products of scientific enquiry.
  • R3: Truth is an essential aim of science.
  • R4: Truth is not easily accessible or recognizable, and even our best theories can fail to be true.
  • R5: The best explanation for the practical success of science is the assumption that scientific theories in fact are approximately true.
These are credible and appealing premises. And they serve to distinguish this version of realism from other important alternatives -- for example, Putnam's internal realism. But it is evident that Niiniluoto's "critical scientific realism" is not simply a further expression of "critical realism" in the system of Bhaskar. It is a distinctive and plausible version of scientific realism; but its premises equally capture the realisms of other philosophers of science whose work is not within the paradigm of standard critical realism. As the diagram indicates, other philosophers who embrace R0-R5 include Popper, Sellars, Bunge, Boyd, and Nowak, as well as Niiniluoto himself. (It is noteworthy that Bhaskar's name does not appear on this list!)

So how much of a contribution does Critical Scientific Realism represent in the evolving theory of scientific realism within philosophy of science? On my reading it is an important step in the evolution of the arguments for and against realism. Niiniluoto's contribution is a synthetic one. He does an excellent job of tracking down the various assumptions and disagreements that exist within the field of realism and anti-realism debates, and the route that he traces through these debates under the banner of "critical scientific realism" represents (for me, anyway) a particularly plausible combination of answers to these various questions. So one might say that the position that Niiniluoto endorses is a high point in the theory of scientific realism -- the most intellectually and practically compelling combination of positions in metaphysics, epistemology, semantics, and methodology available for assessing the truth claims of science.

What it is not, however, is the apotheosis of "critical realism" in the sense intended by the literature extending from Bhaskar to the current generation of critical realist thinkers. Niiniluoto's approach is appealingly eclectic; he follows the logic of the arguments he entertains, rather than seeking to validate or extend a particular view within this complicated field of realist arguments. And this is a good thing if our interest is in making the most sense possible of the idea of scientific realism as an interpretation of the significance of science in face of the challenges of constructivism, conceptual and theoretical underdetermination, and relativism.

Monday, August 10, 2015

Critical realism and social heterogeneity


Is the metaphysics of critical realism compatible with the idea of a highly heterogeneous social world?

Here is what I mean by heterogeneity in this context. First, social causation is inherently multiple, with many kinds and tempos of social causation at work. It is therefore crucial that we avoid the impulse to reduce social change to a single set of underlying causal factors. The occurrence of a race riot at a time and place is partly caused by the instigating incident, partly caused by long-simmering background conditions, partly caused by the physical geography of the city in question, and partly caused by a legal and political context far from the site of rioting. We sometimes describe this fact as the conjunctural nature of social causation. Second, social events, changes, and forms of stability depend on contingent alignments of forces and causes, which do not recur in regular sequences of Humean causation. Third, social causes are generally historically conditioned, with the result that we do not have general statements of the form "same cause, same effect." I characterize these points by saying that social causation is contingent, contextual, and conjunctural.

Another important aspect of heterogeneity in the social world has to do with the status of social kinds or social types. I take the view that social entities do not constitute social kinds, in that there is substantial and deep variation across the instances of items that we classify under "riot," "revolution," or "state." Another way to put this point is to observe that social things do not have essential natures. Being Muslim is not an essential social or cultural or religious identity. Being a late industrial city is not an essential characteristic of a group of cities. And there is no essential underlying set of characteristics shared by the Chinese, French, and Russian revolutions. Rather, in each of these examples there is broad variation across the instances that are embraced by the term.

So my question here is a simple one. Is Bhaskar's version of realism consistent with this treatment of heterogeneous social entities and heterogeneous social causes, or does Bhaskar presuppose social essences and universal causes in ways that are inconsistent with heterogeneity?

There are elements of Bhaskar's theory that point in both directions on this question.

His emphasis on the logic of experimentation is key to his transcendental argument for realism. But oddly enough, this analysis cuts against the premise of heterogeneity because it emphasizes exceptionless causal factors. He emphasizes the necessity of postulating underlying causal laws, which are themselves supported by generative causal mechanisms, and the implication is that the natural world unfolds as the expression of these generative mechanisms. Here is a clear statement from The Possibility of Naturalism: A philosophical critique of the contemporary human sciences:
Once made, however, the ontological distinction between causal laws and patterns of events allows us to sustain the universality of the former in the face of the non-invariance of the latter. Moreover, the actualist analysis of laws now loses all plausibility. For the non-invariance of conjunctions is a condition of an empirical science and the non-empirical nature of laws a condition of an applied one. (PON p. 11)
And his account sometimes seems to rest upon a kind of "mechanism fundamentalism" -- the idea that there is a finite set of non-reducible mechanisms with essential properties:
On the transcendental realist system a sequence A, B is necessary if and only if there is a natural mechanism M such that when stimulated by A, B tends to be produced. (PON p. 11)
Concerns about mechanism fundamentalism are allayed, however, because Bhaskar notes that it is always open to the scientist to ask the new question, how does this mechanism work? (PON 13) So mechanisms are not irreducible.

These are a few indications that Bhaskar's realism might be uncongenial to the idea of social heterogeneity.

More compelling considerations are to be found on the other side of the issue, however. First, his introduction of the idea of the social world as an "open" system of causation leaves space for causal heterogeneity. Here is a relevant passage from A Realist Theory of Science, deriving from an example of historical explanation:
In general as a complex event it will require a degree of what might be called 'causal analysis', i.e. the resolution of the event into its components (as in the case above). (RTS kl 2605)
For the different levels that mesh together in the generation of an event need not, and will not normally, be typologically locatable within the structures of a single theory. In general the normic statements of several distinct sciences, speaking perhaps of radically different kinds of generative mechanism, may be involved in the explanation of the event. This does not reflect any failure of science, but the complexity of things and the multiplicity of forms of determination found in the world. (RTS kl 2613)
Here is how Bhaskar conceives of social and historical things in The Possibility of Naturalism:
From this perspective, then, things are viewed as individuals possessing powers (and as agents as well as patients). And actions are the realization of their potentialities. Historical things are structured and differentiated (more or less unique) ensembles of tendencies, liabilities and powers; and historical events are their transformations. (PON 20)
The phrase "more or less unique" is crucial. It implies the kind of heterogeneity postulated here, reflecting the ideas of contingency and heterogeneity mentioned above.

Another reason for thinking Bhaskar is open to heterogeneity in the social realm is his position on reductionism.
But, it might be objected, is not the universe in the end nothing but a giant machine with inexorable laws of motion governing everything that happens within it? I want to say three things: First, that the various sciences treat the world as a network of 'machines', of various shapes and sizes and degrees of complexity, whose proper principles of explanation are not all of the same kind as, let alone reducible to, those of classical mechanics. Secondly, that the behaviour of 'machines', including classical mechanical ones, cannot be adequately described, let alone understood, in terms of the 'whenever x, then y' formula of regularity determinism. Thirdly, that even if the world were a single 'machine' this would still provide no grounds for the constant conjunction idea, or a fortiori any of the theories of science that depend upon it. Regularity determinism is a mistake, which has been disastrous for our understanding of science. (RTS kl 1590)
Here Bhaskar is explicit in referring to multiple kinds of causal processes ("machines"). And, indeed, Bhaskar affirms the conjunctural nature of social causation:
Now most social phenomena, like most natural events, are conjuncturally determined. And as such in general have to be explained in terms of a multiplicity of causes. (PON p. 54)
Similar ideas are expressed in Scientific Realism and Human Emancipation:
Social phenomena must be seen, in general, as the product of a multiplicity of causes, i.e. social events as 'conjunctures' and social things as (metaphysically) 'compounds'. (107)
Finally, his discussion of social structures in PON as the social equivalent of natural mechanisms also implies heterogeneity over time:
(3) Social structures, unlike natural structures, may be only relatively enduring (so that the tendencies they ground may not be universal in the sense of space-time invariant). (PON 49)
So on balance, I am inclined to think that Bhaskar's philosophy of social science is indeed receptive to social heterogeneity. And this in turn makes it a substantially more compelling contribution to the philosophy of social science than it would otherwise be, and superior to many of the positivist variants of philosophy of science that he criticizes.

Monday, August 3, 2015

Social construction of technical knowledge


After there was the sociology of knowledge (link), before there was a new sociology of knowledge (link), and more or less simultaneous with science and technology studies (link), there was Paul Rabinow's excellent ethnography of the invention of the key tool in recombinant DNA research -- PCR (polymerase chain reaction). Rabinow's monograph Making PCR: A Story of Biotechnology appeared in 1996, after the first fifteen years of the revolution in biotechnology, and it provides a profound narrative of the intertwinings of theoretical science, applied bench work, and material economic interests, leading to substantial but socially imprinted discoveries and the development of a powerful new technology. Here is how Rabinow frames the research:
Making PCR is an ethnographic account of the invention of PCR, the polymerase chain reaction (arguably the exemplary biotechnological invention to date), the milieu in which that invention took place (Cetus Corporation during the 1980s), and the key actors (scientists, technicians, and business people) who shaped the technology and the milieu and who were, in turn, shaped by them. (1)
This book focuses on the emergence of biotechnology, circa 1980, as a distinctive configuration of scientific, technical, cultural, social, economic, political, and legal elements, each of which had its own separate trajectory over the preceding decades. It examines the "style of life" or form of "life regulation" fashioned by the young scientists who chose to work in this new industry rather than pursue promising careers in the university world.... In sum, it shows how a contingently assembled practice emerged, composed of distinctive subjects, the site in which they worked, and the object they invented. (2)
There are several noteworthy features of these very exact descriptions of Rabinow's purposes. The work is ethnographic; it proceeds through careful observation, interaction, and documentation of the intentionality and practices of the participants in the process. It is focused on actors of different kinds -- scientists, lab technicians, lawyers, business executives, and others -- whose interests, practices, and goals are distinctly different from each other's. It is interested in accounting for how the "object" (PCR) came about, without any implication of technological or scientific inevitability. It highlights both contingency and heterogeneity in the process. The process of invention and development was a meandering one (contingency), and it involved a wide range of heterogeneous influences (scientific, cultural, economic, ...).

Legal issues come into this account because the fundamental question -- what is PCR and who invented it? -- cannot be answered in narrowly technical or scientific terms. Instead, it was necessary to go through a process of practical bench-based development and patent law to finally be able to answer both questions.

A key part of Rabinow's ethnographic finding is that the social configuration and setting of the Cetus laboratory was itself a key part of the process leading to the successful development of PCR. Hierarchy is the norm in traditional scientific research settings (universities) -- senior scientists at the top, junior technicians at the bottom. But Cetus had developed a local culture that was relatively un-hierarchical, and Rabinow believes this cultural feature was crucial to the success of the undertaking.
Cetus's organizational structure was less hierarchical and more interdisciplinary than that found in either corporate pharmaceutical or academic institutions. In a very short time younger scientists could take over major control of projects; there was neither the extended postdoc and tenure probationary period nor time-consuming academic activities such as committees, teaching, and advising to divert them from full-time research. (36)
And later:
Cetus had been run with a high degree of organizational flexibility during its first decade. The advantages of such flexibility were a generally good working environment and a large degree of autonomy for the scientists. The disadvantages were a continuing lack of overall direction that resulted in a dispersal of both financial and human resources and in continuing financial losses. (143)
A critical part of the successful development of PCR techniques in Rabinow's account was the highly skilled bench work of a group of lab technicians within the company (116 ff.). Ph.D. scientists and non-Ph.D. lab technicians collaborated well throughout the extended period during which the chemistry of PCR needed to be perfected; and Rabinow's suggestion is that neither group by itself could have succeeded.

So some key ingredients in this story are familiar from the current wisdom of tech companies like Google and Facebook: let talented people follow their curiosity; use space (physical and social) to elicit strong positive collaboration; and don't try to over-manage the process through a rigid authority structure.

But as Rabinow points out, Cetus was not simply an anarchic setting in which smart people discovered things. Priorities were established to govern research directions, and there were sustained efforts to align research productivity with revenue growth (almost always unsuccessful, it must be said). Here is Rabinow's concluding observation about the company and the knowledge environment:
Within a very short span of time some curious and wonderful reversals, orthogonal movements, began happening: the concept itself became an experimental system; the experimental system became a technique; the techniques became concepts. These rapidly developing variations and mutually referential changes of level were integrated into a research milieu, first at Cetus, then in other places, then, soon, in very many other places. These places began to resemble each other because people were building them to do so, but were often not identical. (169)
And, as other knowledge-intensive businesses from VisiCalc to Xerox to HP to Microsoft to Google have discovered, there is no magic formula for joining technical and scientific research to business success.



Saturday, August 1, 2015

Microfoundations 2.0?


Figure. An orderly ontological hierarchy (University of Leeds; link)


Figure. Complex non-reductionist social outcome -- blight

The idea that hypotheses about social structures and forces require microfoundations has been around for at least 40 years. Maarten Janssen’s New Palgrave essay on microfoundations documents the history of the concept in economics; link. E. Roy Weintraub was among the first to emphasize the term within economics, with his 1979 Microfoundations: The Compatibility of Microeconomics and Macroeconomics. During the early 1980s the contributors to analytical Marxism used the idea in an attempt to give firmer grounding to some of Marx's key explanations (falling rate of profit, industrial reserve army, tendency towards crisis). Several such strategies are represented in John Roemer's Analytical Marxism. My own The Scientific Marx (1986) and Varieties of Social Explanation (1991) took up the topic in detail and relied on it as a basic tenet of social research strategy. The concept is strongly compatible with Jon Elster's approach to social explanation in Nuts and Bolts for the Social Sciences (1989), though the term itself does not appear in this book or in the 2007 revised edition.

Here is Janssen's description in the New Palgrave of the idea of microfoundations in economics:
The quest to understand microfoundations is an effort to understand aggregate economic phenomena in terms of the behavior of individual economic entities and their interactions. These interactions can involve both market and non-market interactions.  
In The Scientific Marx the idea was formulated along these lines:
Marxist social scientists have recently argued, however, that macro-explanations stand in need of microfoundations; detailed accounts of the pathways by which macro-level social patterns come about. (1986: 127)
The requirement of microfoundations is both metaphysical -- our statements about the social world need to admit of microfoundations -- and methodological -- it suggests a research strategy along the lines of Coleman's boat (link). This is a strategy of disaggregation, a "dissecting" strategy, and a non-threatening strategy of reduction. (I am thinking here of the very sensible ideas about the scientific status of reduction advanced in William Wimsatt's "Reductive Explanation: A Functional Account"; link).

The emphasis on the need for microfoundations is a very logical implication of the position of "ontological individualism" -- the idea that social entities and powers depend upon facts about individual actors in social interactions and nothing else. (My own version of this idea is the notion of methodological localism; link.) It is unsupportable to postulate disembodied social entities, powers, or properties for which we cannot imagine an individual-level substrate. So it is natural to infer that claims about social entities need to be accompanied in some fashion by an account of how they are embodied at the individual level; and this is a call for microfoundations. (As noted in an earlier post, Brian Epstein has mounted a very challenging argument against ontological individualism; link.)

Another reason that the microfoundations idea is appealing is that it is a very natural way of formulating a core scientific question about the social world: "How does it work?" To provide microfoundations for a high-level social process or structure (for example, the falling rate of profit), we are looking for a set of mechanisms at the level of a set of actors within a set of social arrangements that result in the observed social-level fact. A call for microfoundations is a call for mechanisms at a lower level, answering the question, "How does this process work?"

In fact, the demand for microfoundations appears to be analogous to the question, why is glass transparent? We want to know what it is about the substrate at the individual level that constitutes the macro-fact of glass transmitting light. Organization type A is prone to normal accidents. What is it about the circumstances and actions of individuals in A-organizations that increases the likelihood of normal accidents?

One reason why the microfoundations concept was specifically appealing in application to Marx's social theories in the 1970s was the fact that great advances were being made in the field of collective action theory. Then-current interpretations of Marx's theories were couched at a highly structural level; but it seemed clear that it was necessary to identify the processes through which class interest, class conflict, ideologies, or states emerged in concrete terms at the individual level. (This is one reason I found E. P. Thompson's The Making of the English Working Class (1966) so enlightening.) Advances in game theory (assurance games, prisoners' dilemmas), Mancur Olson's demonstration of the gap between group interest and individual interest in The Logic of Collective Action: Public Goods and the Theory of Groups (1965), Thomas Schelling's brilliant unpacking of puzzling collective behavior onto underlying individual behavior in Micromotives and Macrobehavior (1978), Russell Hardin's further exposition of collective action problems in Collective Action (1982), and Robert Axelrod's discovery of the underlying individual behaviors that produce cooperation in The Evolution of Cooperation (1984) provided social scientists with new tools for reconstructing complex collective phenomena based on simple assumptions about individual actors. These were very concrete analytical resources that promised to advance explanations of complex social behavior. They provided a degree of confidence that important sociological questions could be addressed using a microfoundations framework.
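To illustrate the kind of analytical machinery these works supplied, here is a minimal Axelrod-style round-robin tournament (a toy sketch of my own; the payoff values are the standard prisoner's dilemma payoffs Axelrod used, but the code itself is purely illustrative):

    # Payoffs for (my_move, their_move); C = cooperate, D = defect.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def always_defect(my_history, their_history):
        return "D"

    def tit_for_tat(my_history, their_history):
        # Cooperate first, then mirror the opponent's last move.
        return their_history[-1] if their_history else "C"

    def play(s1, s2, rounds=200):
        h1, h2, score1, score2 = [], [], 0, 0
        for _ in range(rounds):
            m1, m2 = s1(h1, h2), s2(h2, h1)
            score1 += PAYOFF[(m1, m2)]
            score2 += PAYOFF[(m2, m1)]
            h1.append(m1)
            h2.append(m2)
        return score1, score2

    strategies = {"tit_for_tat": tit_for_tat, "always_defect": always_defect}
    totals = {name: 0 for name in strategies}
    for n1, s1 in strategies.items():
        for n2, s2 in strategies.items():
            if n1 < n2:  # each distinct pairing once
                a, b = play(s1, s2)
                totals[n1] += a
                totals[n2] += b
    for n, s in strategies.items():  # include self-play, as Axelrod did
        a, _ = play(s, s)
        totals[n] += a
    print(totals)

Run as written, tit-for-tat loses its head-to-head pairing with unconditional defection yet wins the tournament overall -- a macro-level outcome (the viability of conditional cooperation) reconstructed from nothing but individual strategies, which is precisely the micro-to-macro move the microfoundations program generalized.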

There are several important recent challenges to aspects of the microfoundations approach, however.

First, there is the idea that social properties are sometimes emergent in a strong sense: not derivable from facts about the components. This would seem to imply that microfoundations are not possible for such properties.

Second, there is the idea that some meso entities have stable causal properties that do not require explicit microfoundations in order to be scientifically useful. (An example would be Perrow's claim that certain forms of organizations are more conducive to normal accidents than others.) If we take this idea very seriously, then perhaps microfoundations are not crucial in such theories.

Third, there is the idea that meso entities may sometimes exert downward causation: they may influence events in the substrate which in turn influence other meso states, implying that there will be some meso-level outcomes for which there cannot be microfoundations exclusively located at the substrate level.

All of this implies that we need to take a fresh look at the theory of microfoundations. Is there a role for this concept in a research metaphysics in which only a very weak version of ontological individualism is postulated; where we give some degree of autonomy to meso-level causes; where we countenance either a weak or strong claim of emergence; and where we admit of full downward causation from some meso-level structures to patterns of individual behavior?

In one sense my own thinking about microfoundations has already incorporated some of these concerns; I've arrived at "microfoundations 1.1" in my own formulations. In particular, I have put aside the idea that explanations must incorporate microfoundations and instead embraced the weaker requirement of availability of microfoundations (link). Essentially I relaxed the requirement to stipulate only that we must be confident that microfoundations exist, without actually producing them. And I've relied on the idea of "relative explanatory autonomy" to excuse the sociologist from the need to reproduce the microfoundations underlying the claim he or she advances (link).

But is this enough? There are weaker positions that could serve to replace the MF thesis. For now, the question is this: does the concept of microfoundations continue to do important work in the meta-theory of the social sciences?

Tuesday, July 28, 2015

Supervenience, isomers, and social isomers


A prior post focused on the question of whether chemistry supervenes upon physics, and I relied heavily on R. F. Hendry's treatment of the way that quantum chemistry attempts to explain the properties of various molecules based on fundamentals of quantum mechanics. That post prompted a good deal of valuable discussion, from people who agree with Hendry and from those who disagree.

It occurs to me that there is a simpler reason for thinking that chemistry fails to supervene upon the physics of atoms, however, which does not involve the subtleties of quantum mechanics. This is the existence of isomers for various molecules. An isomer is a molecule with the same chemical composition as another but a different geometry and different chemical properties. From the facts about the constituent atoms we cannot infer uniquely what geometry a molecule consisting of these atoms will take. Instead, we need more information external to the physics of the atoms involved; we need an account of the path of interactions that the atoms took in "folding" into one isomer or the other. Therefore chemistry does not supervene upon the quantum-mechanical or physical properties of atoms alone.
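The point can be made concrete with n-butane and isobutane, which share the composition C4H10 but differ in geometry. Here is a small illustrative sketch (my own, not from Hendry; the boiling points are approximate real values): no function of composition alone can return the property that distinguishes the two molecules:

    from collections import Counter

    # Same composition...
    butane    = {"composition": Counter({"C": 4, "H": 10}), "geometry": "straight-chain"}
    isobutane = {"composition": Counter({"C": 4, "H": 10}), "geometry": "branched"}

    # ...different properties: boiling points in degrees Celsius (approximate).
    BOILING_POINT = {"straight-chain": -0.5, "branched": -11.7}

    assert butane["composition"] == isobutane["composition"]
    assert BOILING_POINT[butane["geometry"]] != BOILING_POINT[isobutane["geometry"]]

Any mapping from composition to properties would have to assign both molecules the same boiling point; geometry is the extra determinant.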

For example, the properties of the normal prion protein and its isomer, the infectious prion protein, are not fixed by the constituent elements; the geometries associated with these two compounds result from other causal influences. The constituent elements are compatible with both non-equivalent expressions. The prion molecules do not supervene upon the properties of the constituent elements. Which isomer emerges is a matter of a contingent, path-dependent process.

It is evident that this is not an argument that chemistry does not supervene upon physics more generally, since the history of interactions through which a given isomer emerges is itself a history of physical interactions. But it does appear to be a rock-solid refutation of the idea that molecules supervene upon the atoms of which they are constituted.

Significantly, this example appears to have direct implications for the relation between social facts and individual actors. If we consider the possibility of "social isomers" -- social structures consisting of exactly similar actors but different histories and different configurations and causal properties in the present -- then we also have a refutation of the idea that social facts supervene upon the actors of which they are constituted. Instead, we would need to incorporate the "path-dependent" series of interactions that led to the formation of one "geometry" of social arrangements rather than another, as well as to the full suite of properties associated with each individual actor. So QED -- social structures do not supervene on the features of the actors. And if some of the events that influence the emergence of one social structure rather than another are stochastic or random -- one social isomer instead of its compositional equivalent -- then at best social structures supervene on individuals conjoined with chance events in a path-dependent process.
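A toy model makes the idea of a "social isomer" vivid (this is a hypothetical sketch of my own, not anything from the supervenience literature): the same set of actors, run through two different interaction histories, ends up in two different relational configurations:

    import random

    def grow_network(arrival_order, seed=1):
        # Each arriving actor forms a tie to a randomly chosen earlier
        # arrival, so the final structure depends on the path taken,
        # not just on the membership set.
        rng = random.Random(seed)
        present = [arrival_order[0]]
        ties = set()
        for actor in arrival_order[1:]:
            ties.add((actor, rng.choice(present)))
            present.append(actor)
        return ties

    actors = ["a", "b", "c", "d", "e"]
    history1 = grow_network(actors)
    history2 = grow_network(list(reversed(actors)))
    print(history1 == history2)  # False: same actors, different "geometry"

Both configurations are built from exactly the same actors; what differs is the path of interactions -- the social analogue of the folding history that selects one isomer rather than another.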

There has been much discussion of the question of multiple realizability -- that one higher-level structure may correspond to multiple underlying configurations of components and processes. But so far as I have been able to see, there has been no discussion of the converse possibility -- multiple higher-level structures corresponding to a single underlying configuration. And yet this is precisely what is the case in chemistry for isomers and in the hypothetical but plausible possibility sketched here for "social isomers". This is indeed the key finding of the discovery of path-dependencies in social outcomes.

Sunday, July 26, 2015

Is chemistry supervenient upon physics?


Many philosophers of science and physicists take it for granted that "physics" determines "chemistry". Or in terms of the theory of supervenience, it is commonly supposed that the domain of chemistry supervenes upon the domain of fundamental physics. This is the thesis of physicalism: the idea that all causation ultimately depends on the causal powers of the phenomena described by fundamental physics.

R. F. Hendry takes up this issue in his contribution to Davis Baird, Eric Scerri, and Lee McIntyre's very interesting volume, Philosophy of Chemistry. Hendry takes the position that this relation of supervenience does not obtain; chemistry does not supervene upon fundamental physics.

Hendry points out that the dependence claim depends crucially on two things: what aspects of physics are to be considered? And second, what kind of dependency do we have in mind between higher and lower levels? For the first question, he proposes that we think about fundamental physics -- quantum mechanics and relativity theory (174). For the second question, he enumerates several different kinds of dependency: supervenience, realization, token identity, reducibility, and derivability (175). In discussing the macro-property of transparency in glass, he cites Jaegwon Kim in maintaining that transparency in glass is "nothing more" than the features of the microstructure of glass that permit it to transmit light. But here is a crucial qualification:
But as Kim admits, this last implication only follows if it is accepted that “the microstructure of a system determines its causal/nomic properties” (283), for the functional role is specified causally, and so the realizer’s realizing the functional property that it does (i.e., the realizer–role relation itself) depends on how things in fact go in a particular kind of system. For a microstructure to determine the possession of a functional property, it must completely determine the causal/nomic properties of that system. (175)
Hendry argues that the key issue underlying claims of dependence of B upon A is whether there is downward causation from the level of chemistry (B) to the physical level (A); or, on the contrary, whether physics is "causally complete". If the causal properties of the higher level are fully fixed by the causal properties of the underlying level, then supervenience holds; but if the higher level has causal powers that influence the lower level, then supervenience fails.

In order to gain insight into the specific issues arising concerning chemistry and physics, Hendry makes use of the "emergentist" thinking associated with C.D. Broad. He finds that Broad offers convincing arguments against "Pure Mechanism", the view that all material things are determined by the micro-physical level (177). Here are Broad's two contrasting possibilities for understanding the relations between higher levels and the physical micro-level:
(i) On the first form of the theory the characteristic behavior of the whole could not, even in theory, be deduced from the most complete knowledge of the behavior of its components, taken separately or in other combinations, and of their proportions and arrangements in this whole . . .
(ii) On the second form of the theory the characteristic behavior of the whole is not only completely determined by the nature and arrangements of its components; in addition to this it is held that the behavior of the whole could, in theory at least, be deduced from a sufficient knowledge of how the components behave in isolation or in other wholes of a simpler kind (1925, 59). [Hendry, 178]
The first formulation describes "emergence", whereas the second is "mechanism". In order to give more contemporary expression to the two views Hendry introduces the key concept of quantum chemistry, the Hamiltonian for a molecule. A Hamiltonian is an operator describing the total energy of a system. A "resultant" Hamiltonian is the operator that results from identifying and summing up all forces within a system; a configurational Hamiltonian is one that has been observationally adjusted to represent the observed energies of the system. The first version is "fundamental", whereas the second version is descriptive.
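For concreteness, the resultant Hamiltonian for an isolated molecule has the standard textbook Coulomb form: kinetic-energy terms for each electron i and nucleus A, plus the pairwise electrostatic interactions among them:

    \hat{H} = -\sum_{i}\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
              -\sum_{A}\frac{\hbar^{2}}{2M_{A}}\nabla_{A}^{2}
              +\sum_{i<j}\frac{e^{2}}{4\pi\epsilon_{0}r_{ij}}
              -\sum_{i,A}\frac{Z_{A}e^{2}}{4\pi\epsilon_{0}r_{iA}}
              +\sum_{A<B}\frac{Z_{A}Z_{B}e^{2}}{4\pi\epsilon_{0}R_{AB}}

Nothing in this operator mentions molecular geometry; it is fixed entirely by the charges and masses of the constituent particles. A configurational Hamiltonian, by contrast, clamps the nuclei at an assumed geometry (the Born-Oppenheimer picture) and solves only for the electronic motion -- and that assumed geometry is information imported from chemistry rather than derived from the resultant operator.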

Now we can pose the question of whether chemistry (the behavior of molecules) is fixed by the resultant Hamiltonian for the components of the atoms involved (electrons, protons, neutrons) and the forces that they exert on each other. Or, on the other hand, does quantum chemistry achieve its goals by arriving at configurational Hamiltonians for molecules, and deriving properties from these descriptive operators? Hendry finds that the latter is the case for existing derivations; and this means that quantum chemistry (as it is currently practiced) does not derive chemical properties from fundamental quantum theory. Moreover, configuring those Hamiltonians requires an abstract description of the hypothesized geometry of the molecule and the assumption that the nuclei move relatively slowly. But this is information at the level of chemistry, not fundamental physics. And it implies downward causation from the level of chemical structure to the level of fundamental physics.
Furthermore, to the extent that the behavior of any subsystem is affected by the supersystems in which it participates, the emergent behavior of complex systems must be viewed as determining, but not being fully determined by, the behavior of their constituent parts. And that is downward causation. (180)
So chemistry does not derive from fundamental physics. Here is Hendry's conclusion, supporting pluralism and anti-reductionism in the case of chemistry and physics:
On the other hand is the pluralist version, in which physical law does not fully determine the behavior of the kinds of systems studied by the special sciences. On this view, although the very abstractness of the physical theories seems to indicate that they could, in principle, be regarded as applying to special science systems, their applicability is either trivial (and correspondingly uninformative), or if non-trivial, the nature of scientific inquiry is such that there is no particular reason to expect the relevant applications to be accurate in their predictions.... The burden of my argument has been that strict physicalism fails, because it misrepresents the details of physical explanation (187)
Hendry's argument has a lot in common with Herbert Simon's arguments about system complexity (link) and with Nancy Cartwright's arguments about the limitations of (real) physics' capability of representing and calculating the behavior of complex physical systems based on first principles (link). In each case we get a pragmatic argument against reductionism, and a weakened basis for assuming a strict supervenience relation between higher-level structures and a limited set of supposedly fundamental building blocks. What is striking is that Hendry's arguments undercut the reductionist impulse at what looks like its most persuasive juncture -- the relationship between quantum physics and quantum chemistry.


Thursday, July 23, 2015

Microfoundations for rules and ascriptions




One of the more convincing arguments for the existence of social facts that lie above the level of individual actors is the social reality of rules and ascriptive identities. Bob and Alice are married by Reverend Green at 7 pm, July 1, 2015. The social fact that Bob and Alice are now married is not simply a concatenation of facts about their previous motions, beliefs, and utterances. Rather, it depends also on several trans-individual circumstances: first, that their behaviors and performances conform to a set of legal rules governing marriage (e.g., that neither was married at the time of their marriage to each other, or that they had secured a valid marriage license from the county clerk); and second, that various actors in the event possess a legal identity and qualification that transcend the psychological and observational properties they possess. (Reverend Green is in fact a legally qualified agent of a denomination that gives him the legal authority to perform the act of marriage between two qualified adults.) If Bob has permanently forgotten his earlier marriage to Francine, contracted in a moment of intoxication, or if Reverend Green is an imposter, then the correct performance of each of the actions of the ceremony nonetheless fails to secure the legal act of "marriage". Bob and Alice are not married if these prior conditions are not satisfied. So the social fact that Bob and Alice are married does not depend exclusively on their performance of a specific set of actions and utterances.

Is this kind of example a compelling refutation of the thesis of ontological individualism (as Brian Epstein believes it is; link)? John Searle thinks that facts like these are fundamentally important in the social world; he refers to them as "status functions" (link). And Epstein's central examples of supra-individual social facts have to do with membership and ascriptive status. However, several considerations suggest to me that the logical status of rules and ascriptions does not have a lot of importance for our understanding of the ontology of the social world.

First, ascriptive properties are ontologically peculiar. They are dependent upon presuppositions and implicatures that cannot be fully validated in the present. Consider the contrast between these two statements about Song Taizu, founder of the Song Dynasty: "Song was a military and political mastermind" and "Song was legitimate emperor of China." The former statement is a factual statement about Song's embodied characteristics and talents. The latter is a complex historical statement with debatable presuppositions. The truth of the statement turns on our interpretation of the legal status of the seven-year-old "Emperor" whom he replaced. It is an historical fact that Song ruled long and effectively as chief executive; it is a legal abstraction to assert that he was "legitimate emperor".

Second, it is clear that systems of rules have microfoundations if they are causally influential. There are individuals and concrete institutions who convey and interpret the rules; there are prosecutors who take offenders to task; there are libraries of legal codes and supporting interpretations that constitute the ultimate standard of adjudication when rules and behavior come into conflict. And individuals have (imperfect) grasp of the systems of rules within which they live and act -- including the rule that specifies that ignorance is no excuse for breach of law. So it is in fact feasible to sketch out the way that a system of law or a set of normative rules acquires social reality and becomes capable of affecting behavior.

Most fundamentally, I would like to argue that our interest is not in social facts simpliciter, but in facts that have causal and behavioral consequences. We want to know how social agglomerates behave, and in order to explain these kinds of facts, we need to know how the actors who make them up think, deliberate, and act. Whether Alice and Bob are really married is irrelevant to their behavior and that of the individuals who surround them. Instead, what matters is how they and others represent themselves. So the behaviorally relevant question is this: do Alice, Bob, Reverend Green, and the others with whom they interact believe that they are married? The behaviorally relevant content of "x is married to y" is thus restricted to the beliefs and attitudes of the individuals involved -- not the legalistic question of whether their marriage satisfied current marriage laws.

To be sure, if a reasonable doubt is raised about the legal validity of their marriage, then their beliefs (and those of others) will change. Assuming they understand marriage in the same way as we do -- "two rationally competent individuals have undertaken legally specified commitments to each other, through a procedurally qualified enactment" -- then doubts about the presuppositions will lead them to recalculate their current beliefs and status as well. They will now behave differently than they would have behaved absent the reasonable doubts. But what is causally active here is not the fact that they were not legally married after all; it is their knowledge that they were not legally married.

So is the fact that Bob and Alice are really married a social fact? Or is it sufficient to refer to the fact that they and their neighbors and family believe that they are married in order to explain their behavior? In other words, is it the logical fact or the epistemic fact that does the causal work? I think the latter is the case, and that the purely ascriptive and procedural fact is not itself causally powerful. So we might turn the tables on Epstein and Searle, and consider the idea that only those social properties that have appropriate foundations at the level of socially situated individuals should be counted as real social properties.


Wednesday, July 15, 2015

Supervenience and the social: Epstein's critique



Does the social world supervene upon facts about individuals and the physical environment of action? Brian Epstein argues that it does not in several places, most notably in "Ontological Individualism Reconsidered" (2009; link). (I plan to treat Epstein's more recent arguments in his very interesting book The Ant Trap: Rebuilding the Foundations of the Social Sciences in a later post.) The core of his argument is the idea that there are other factors influencing social facts besides facts about individuals. Social facts then fail to supervene in the strict sense: they depend on facts other than facts about individuals. There are indeed differences at the level of the social that do not correspond to a difference in the facts at the level of the individual. Here is how Epstein puts the core of his argument:
My aim in this paper is to challenge this [the idea that individualism is simply the denial of spooky social autonomy]. But ontological individualism is a stronger thesis than this, and on any plausible interpretation, it is false. The reason is not that social properties are determined by something other than physical properties of the world. Instead it is that social properties are often determined by physical ones that cannot plausibly be taken to be individualistic properties of persons. Only if the thesis of ontological individualism is weakened to the point that it is equivalent to physicalism can it be true, but then it fails to be a thesis about the determination of social properties by individualistic ones. (3)
And here is how Epstein formulates the claim of weakly local supervenience of social properties upon individual properties:
Social properties weakly locally supervene on individualistic properties if and only if for any possible world w and any entities x and y in w, if x and y are individualistically indiscernible in w, then they are socially indiscernible in w. Two objects are individualistically- or socially-indiscernible if and only if they are exactly alike with respect to every individualistic property or every social property, respectively. (9)
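Rendered in the quantified form standard in this literature (my own transcription for clarity, not Epstein's symbolism), the definition amounts to:

$$\forall w \; \forall x, y \in w: \Big[ \forall P \in \mathbf{I} \; \big( P(x) \leftrightarrow P(y) \big) \Big] \rightarrow \Big[ \forall Q \in \mathbf{S} \; \big( Q(x) \leftrightarrow Q(y) \big) \Big]$$

where I is the set of individualistic properties and S the set of social properties; the antecedent expresses individualistic indiscernibility in w and the consequent social indiscernibility in w. Supervenience fails if there is even one world containing a pair of entities that agree on every property in I but differ on some property in S.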
The causal story for supervenience of the social upon the individual perhaps looks like this:

[diagram: individual-level facts (I) → social-level facts (S)]
The causal story for non-supervenience that Epstein tells looks like this:

[diagram: individual-level facts (I) and other facts (O) → social-level facts (S)]
In this case supervenience fails because there can be differences in S without any difference in I (because of differences in O).

But maybe the situation is even worse, as emergentists want to hold:

[diagram: individual-level facts (I), other facts (O), and prior social-level facts (S) → social-level facts (S)]
Here supervenience fails because social facts may be partially "auto-causal" -- social outcomes are partially influenced by differences in social facts that do not depend on differences in individual facts and other facts.

In one sense Epstein's line of thought is fairly easy to grasp. The outcome of a game of baseball between the New York Yankees and the Boston Red Sox depends largely on the actions of the players on the field and in the dugout; but not entirely and strictly. There are background facts and circumstances that also influence the outcome but are not present in the motions and thoughts of the players. The rules of baseball are not embodied on the field or in the minds of the players; so there may be possible worlds in which the same pitches, swings, impacts of bats on balls, catches, etc., occur, and yet the outcome of the game is different. The Boston pitcher may subsequently be found to have been ineligible to play that day, and the Red Sox are held to forfeit the game. The rule in our world holds that "tie goes to the runner," whereas in the alter-world the tie may go to the defensive team; and this means that the two-run homer in the ninth does not result in two runs, but rather the final out. So the game does not depend on the actions of the players alone, but also on distant and abstract facts about the rules of the game.
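To see the bare logic of the counterexample, here is a minimal toy sketch in Python (my illustration, not Epstein's; the names and numbers are invented). Two "worlds" contain exactly the same individual-level facts but differ in a background rule, and the social-level outcome differs -- which is precisely the failure of supervenience on individual facts:

```python
# Toy model of supervenience failure (hypothetical illustration).
# The same individual-level facts yield different social-level outcomes
# in two worlds that differ only in an extra-individual rule.

individual_facts = {"runner_time": 4.1, "throw_time": 4.1}  # a tie at first base

def social_outcome(facts, rules):
    """The social fact (safe/out) depends on individual facts AND the rules."""
    if facts["runner_time"] < facts["throw_time"]:
        return "safe"
    if facts["runner_time"] > facts["throw_time"]:
        return "out"
    return "safe" if rules["tie_goes_to"] == "runner" else "out"

our_world = {"tie_goes_to": "runner"}
alter_world = {"tie_goes_to": "defense"}

print(social_outcome(individual_facts, our_world))    # -> safe
print(social_outcome(individual_facts, alter_world))  # -> out
```

The two worlds are individualistically indiscernible but socially discernible, so the conditional in Epstein's definition is violated.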

So what are some examples of "other facts" that might be causally relevant to social outcomes? The scenario offered here captures some of the key "extra-individual" facts that Epstein highlights, and that play a key role in the social ontology of John Searle: situating rules and interpretations that give semantic meaning to behaviors. Epstein highlights facts that determine "membership" in meaningful social contexts: being President, being the catcher on the Boston Red Sox. Both Epstein and Searle emphasize that there is a wide range of dispersed facts that must be true in order for Barack Obama to be President and Ryan Hanigan to be catcher. This is not a strictly "individual-level" fact about either man. Epstein quotes Gregory Currie on this point: "My being Prime Minister ... is not just a matter of what I think and do; it depends on what others think and do as well. So my social characteristics are clearly not determined by my individual characteristics alone" (11).

So, according to Epstein, local supervenience of the social upon the individual fails. What about global supervenience? He believes that this relation fails as well, because, for Epstein, "social properties are determined by physical properties that are not plausibly the properties of individuals" (20). These are the "other facts" in the diagrams above. His simplest illustration is this: without cellos there can be no cellists (24). Without hanging chads, George W. Bush would not have been President. And, later in the paper, one can be an environmental criminal because of a set of facts that were both distant and unknown to the individual at the time of a certain action (33).

Epstein's analysis is careful and convincing in its own terms. Given the modal specification of the meaning of supervenience (as offered by Jaegwon Kim and his successors), Epstein makes a powerful case for believing that the social does not supervene upon the individual in a technical and specifiable sense. However, I'm not sure that very much follows from this finding. Researchers within the general school of thought of "actor-centered sociology" are likely to retain a research strategy that seeks to sort out the mechanisms through which social outcomes of interest are created as a result of the actions and interactions of individuals. If Epstein's arguments are accepted, that implies that we should not couch that research strategy in terms of the idea of supervenience. But this does not invalidate the strategy, or the broad intuition on which it rests about the relation between the social and the actions of locally situated actors. These are the intuitions that I try to express through the idea of "methodological localism"; link, link. And since I also want to argue for the possibility of "relative explanatory autonomy" for facts at the level of the social (for example, features of an organization; link), I am not too troubled by the failure of a view that requires strict determination of the social by the individual. (Here is an earlier post where I wrestled with the idea of supervenience; link.)

Sunday, July 5, 2015

Goffman on close encounters



image: GIF from D. Witt (link)

George Herbert Mead's approach to social psychology is an important contribution to the new pragmatism in sociology (link). In Mind, Self, and Society: From the Standpoint of a Social Behaviorist, Mead puts forward a conception of the self that is inherently social; in his understanding, the social environment is prior to the individual. What this means is that individuals acquire habits, attitudes, and ways of thinking through their interactions in the social environments in which they live and grow up. The individual's social conduct is built up out of the internalized traces of the practices, norms, and orientations of the people around him or her.

Erving Goffman is one of the sociologists who have given the greatest attention to the role of social norms in ordinary social interaction. One of his central themes is face-to-face interaction, the central topic of his book Interaction Ritual: Essays on Face-to-Face Behavior. So rereading Interaction Ritual is a good way to gain some concrete exposure to how some sociologists think about the internalized norms and practices that Mead describes.

Goffman's central concern in this book is how ordinary social interactions develop. How do the participants shape their contributions in such a way as to lead to a satisfactory exchange? The ideas of "line" and "face" are the central concepts in this volume. "Line" is the performative strategy the individual adopts within the interaction. "Face" is the way in which the individual perceives himself, and the way he believes others in the interaction perceive him. Maintaining face invokes pride and honor, while losing face invokes shame and embarrassment. So a great deal of the effort expended by the actor in social interactions has to do with maintaining face -- what Goffman refers to as "face-work". Here are several key descriptions of the role of face-work in ordinary social interactions:
By face-work I mean to designate the actions taken by a person to make whatever he is doing consistent with face. (12)
The members of every social circle may be expected to have some knowledge of face-work and some experience in its use. In our society, this kind of capacity is sometimes called tact, savoir-faire, diplomacy, or social skill. (13)
A person may be said to have, or be in, or maintain face when the line he effectively takes presents an image of him that is internally consistent, that is supported by judgments and evidence conveyed by other participants, and that is confirmed by evidence conveyed through impersonal agencies in the situation. (6-7)
So Goffman's view is that the vast majority of face-to-face social interactions are driven by the logic of the participants' conceptions of "face" and the "lines" that they assume for the interaction. Moreover, Goffman holds that in many circumstances, the lines available to the person in the circumstance are defined by convention and are relatively few. This entails that most interactional behavior is scripted and conventional as well. This line of thought emphasizes the coercive role played by social expectations in face-to-face encounters. And it dovetails with the view Goffman often expresses of action as performative, and self as dramaturgical.

The concept of self is a central focus of Mead's work in MSS. Goffman too addresses the topic of self:
So far I have implicitly been using a double definition of self: the self as an image pieced together from the expressive implications of the full flow of events in an undertaking; and the self as a kind of player in a ritual game who copes honorably or dishonorably, diplomatically or undiplomatically, with the judgmental contingencies of the situation. (31)
Fundamentally, Goffman's view inclines against the notion of a primeval or authentic self; instead, the self is a construct dictated by society and adopted and projected by the individual.
Universal human nature is not a very human thing. By acquiring it, the person becomes a kind of construct, built up not from inner psychic propensities but from moral rules that are impressed upon him from without. (45)
Moreover, Goffman highlights the scope of self-deception and manipulation that is a part of his conception of the actor:
Whatever his position in society, the person insulates himself by blindnesses, half-truths, illusions, and rationalizations. He makes an "adjustment" by convincing himself, with the tactful support of his intimate circle, that he is what he wants to be and that he would not do to gain his ends what the others have done to gain theirs. (43)
One thing that is very interesting about this book is the concluding essay, "Where the Action Is". Here Goffman considers people making choices that are neither prudent nor norm-guided. He considers hapless bank robbers, a black journalist mistreated by a highway patrolman in Indiana, and other individuals making risky choices contrary to the prescribed scripts. In this setting, "action" is an opportunity for risky choice, counter-normative choice, throwing caution to the wind. And Goffman thinks there is something inherently attractive about this kind of risk-taking behavior.

Here Goffman seems to be breaking his own rules -- the theoretical ones, anyway. He seems to be allowing that action is sometimes not guided by prescriptive rules of interaction, and that there are human impulses towards risk-taking that make this kind of behavior relatively persistent in society. But this seems to point to a whole category of action that is otherwise overlooked in Goffman's work -- the actions of heroes, outlaws, counter-culture activists, saints, and ordinary men and women of integrity. In each case these actors are choosing lines of conduct that break the norms and that proceed from their own conceptions of what they should do (or want to do).  In this respect the pragmatists, and Mead in particular, seem to have the more complete conception of the actor, because they leave room for spontaneity and creativity in action, as well as a degree of independence from coercive norms of behavior. Goffman opens this door with his long concluding essay here; but plainly there is a great deal more that can be said on this subject.

The 1955 novel and 1956 film The Man in the Gray Flannel Suit seem to illustrate both parts of the theory of action in play here -- a highly constrained field of action presented to the businessman (played by Gregory Peck), punctuated by occasional episodes of behavior that break the norms and expectations of the setting. Here is Tom Rath speaking honestly to his boss. (The whole film is available on YouTube.)


Thursday, July 2, 2015

Deliberative democracy and the age of social media


Several earlier posts have focused on the theory of deliberative democracy (link, link, link). The notion is that political decision-making can be improved by finding mechanisms that permit citizens extended opportunities for discussion and debate over policies and goals. The theory appeals to liberal democratic theorists in the tradition of Rousseau: people's political preferences and values can become richer and more adequate through reasoned discussion in a conversation of equals, and political decisions will be improved through such a process. This idea doesn't quite equate to the wisdom of the crowd; rather, individuals become wiser through their interactions with other thoughtful and deliberative people, and the crowd's opinions improve as a result.

Here is the definition of deliberative democracy offered by Amy Gutmann and Dennis Thompson in Why Deliberative Democracy? (2004):
Most fundamentally, deliberative democracy affirms the need to justify decisions made by citizens and their representatives. Both are expected to justify the laws they would impose on one another. In a democracy, leaders should therefore give reasons for their decisions, and respond to the reasons that citizens give in return... The reasons that deliberative democracy asks citizens and their representatives to give should appeal to principles that individuals who are trying to find fair terms of cooperation cannot reasonably reject. (3)
All political reasoning inherently involves an intermingling of goals, principles, and facts. What do we want to achieve? What moral principles do we respect as constraints on political choices? How do we think about the causal properties of the natural and social world in which we live? Political disagreement can derive from disagreements in each of these dimensions; deliberation in principle is expected to help citizens to narrow the range of disagreements they have about goals, principles, and facts. And traditional theorists of deliberative democracy, from the pre-Socratics to Gutmann, Thompson, or Fishkin, believe that it is possible for people of good will to come to realize that the beliefs and assumptions they bring to the debate may need adjustment.

But something important has changed since the 1990s, when many discussions of deliberative democracy took place: the emergence of social media -- blogs, comments, Twitter discussions, Facebook communities. Here we have millions of people interacting with each other and debating issues -- but we don't seem to have a surge of better or more informed thinking about the hard issues. On the one hand, we might hope that the vast bandwidth of debate and discussion of issues, involving enormous numbers of the world's citizens, would have the effect of deepening the public's understanding of complex issues and policies. On the other hand, we seem to have evidence of continuing superficial thinking about issues, hardening of ideological positions, and reflexive habits of racism, homophobia, and xenophobia. The Internet seems to lead as often to a hardening and narrowing of attitudes as it does to a broadening and deepening of people's thinking about the serious issues we face.

So it is worth reflecting on the implications that the infrastructure of social media holds for our ideas about democracy. It was observed during the months of the Arab Spring that Twitter and other social media platforms played a role in mobilizing groups of people sharing an interest in reform. And Guobin Yang describes the role that the Internet has played in some areas of popular activism in China (link). This is a little different from the theory of deliberative democracy, however, since mobilization is different from deliberative value-formation. The key question remains unanswered: can the quality of thinking and deliberation of the public be improved through the use of social media? Can the public come to a better understanding of issues like climate change, health care reform, and rising economic inequalities through the debates and discussions that occur on social media? Can our democracy be improved through the tools of Twitter, Facebook, or Google? So far the evidence is not encouraging; it is hard to find evidence suggesting a convergence of political or social attitudes deriving from massive use of social media. And the most dramatic recent example of change in public attitudes, the sudden rise in public acceptance of same-sex marriage, does not seem to have much of a connection to social media.

Here is a very interesting report by the Pew Research Center on the political segmentation of the world of Twitter (link). The heart of its findings is that Twitter discussions of politics commonly segment into largely distinct groups of individuals and websites (link).
Conversations on Twitter create networks with identifiable contours as people reply to and mention one another in their tweets. These conversational structures differ, depending on the subject and the people driving the conversation. Six structures are regularly observed: divided, unified, fragmented, clustered, and inward and outward hub and spoke structures. These are created as individuals choose whom to reply to or mention in their Twitter messages and the structures tell a story about the nature of the conversation.
If a topic is political, it is common to see two separate, polarized crowds take shape. They form two distinct discussion groups that mostly do not interact with each other. Frequently these are recognizably liberal or conservative groups. The participants within each separate group commonly mention very different collections of website URLs and use distinct hashtags and words. The split is clearly evident in many highly controversial discussions: people in clusters that we identified as liberal used URLs for mainstream news websites, while groups we identified as conservative used links to conservative news websites and commentary sources. At the center of each group are discussion leaders, the prominent people who are widely replied to or mentioned in the discussion. In polarized discussions, each group links to a different set of influential people or organizations that can be found at the center of each conversation cluster.
And here is the authors' reason for thinking that the clustering of Twitter conversations is important:
Social media is increasingly home to civil society, the place where knowledge sharing, public discussions, debates, and disputes are carried out. As the new public square, social media conversations are as important to document as any other large public gathering. Network maps of public social media discussions in services like Twitter can provide insights into the role social media plays in our society. These maps are like aerial photographs of a crowd, showing the rough size and composition of a population. These maps can be augmented with on the ground interviews with crowd participants, collecting their words and interests. Insights from network analysis and visualization can complement survey or focus group research methods and can enhance sentiment analysis of the text of messages like tweets.
Here are examples of "polarized crowds" and "tight crowds":

[images: Pew network maps illustrating "polarized crowd" and "tight crowd" conversation structures]
There is a great deal of research underway on the network graphs that can be identified within social media populations. But an early takeaway is that segmentation, rather than convergence, appears to be the most common pattern. This seems to run contrary to the goals of deliberative democracy. Rather than exposing themselves to challenging ideas from people and sources in the other community, people tend to stay in their own circle.
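As a rough indication of how such segmentation is quantified in this kind of research, here is a minimal sketch using the networkx library (the toy graph and the interpretive comments are my illustrative assumptions, not the Pew team's methodology):

```python
# Sketch: measuring segmentation in a reply/mention network.
# Each edge means "account A replied to or mentioned account B".
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
G.add_edges_from([
    # A hypothetical polarized conversation: two dense clusters, one bridge.
    ("a1", "a2"), ("a2", "a3"), ("a1", "a3"), ("a3", "a4"), ("a4", "a1"),
    ("b1", "b2"), ("b2", "b3"), ("b1", "b3"), ("b3", "b4"), ("b4", "b1"),
    ("a4", "b4"),  # the lone cross-cluster interaction
])

# Partition the graph into communities by greedy modularity maximization.
clusters = community.greedy_modularity_communities(G)
score = community.modularity(G, clusters)

print(f"{len(clusters)} clusters, modularity = {score:.2f}")
# High modularity means participants interact mostly within their own
# cluster -- the "polarized crowds" pattern; low modularity would suggest
# a more unified conversation.
```

Applied to the real reply/mention graph of a political hashtag, the same computation is what reveals the "two distinct discussion groups that mostly do not interact with each other" described above.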

So this is how social media seem to work if left to their own devices. Are there promising examples of more intentional uses of social media to engage the public in deeper conversations about the issues of the day? Certainly there are political organizations across the spectrum that are making large efforts to use social media as a platform for their messages and values. But this is not exactly "deliberative". What is more intriguing is whether there are foundations and non-profit organizations that have specifically focused on creating a more deliberative social media community that can help build a broader consensus about difficult policy choices. And so far I haven't been able to find good examples of this kind of effort.

(Josh Cohen's discussion of Rousseau's political philosophy is interesting in the context of fresh thinking about deliberation and democracy; link. And Archon Fung and Erik Olin Wright's collection of articles on democratic innovation, Deepening Democracy: Institutional Innovations in Empowered Participatory Governance (The Real Utopias Project) (v. 4), is a very good contribution as well.)

Monday, June 29, 2015

Quantum mental processes?


One of the pleasant aspects of a long career in philosophy is the occasional experience of a genuinely novel approach to familiar problems. Sometimes one's reaction is skeptical at first -- "that's a crazy idea!". And sometimes the approach turns out to have genuine promise. I've had that experience of moving from profound doubt to appreciation several times over the years, and it is an uplifting learning experience. (Most recently, I've made that progression with respect to some of the ideas of assemblage and actor-network theory advanced by thinkers such as Bruno Latour; link, link.)

I'm having that experience of unexpected dissonance as I begin to read Alexander Wendt's Quantum Mind and Social Science: Unifying Physical and Social Ontology. Wendt's book addresses many of the issues with which philosophers of social science have grappled for decades. But Wendt suggests a fundamental switch in the way that we think of the relation between the human sciences and the natural world. He suggests that an emerging paradigm of research on consciousness, advanced by Giuseppe Vitiello, John Eccles, Roger Penrose, Henry Stapp, and others, may have important implications for our understanding of the social world as well. This is the field of "quantum neuropsychology" -- the body of theory that maintains that puzzles surrounding the mind-body problem may be resolved by examining the workings of quantum behavior in the central nervous system. I'm not sure yet which category to put the idea of quantum consciousness in, but it's interesting enough to pursue further.

The familiar problem in this case is the relation between the mental and the physical. Like all physicalists, I work on the assumption that mental phenomena are embodied in the physical infrastructure of the central nervous system, and that the central nervous system works according to familiar principles of electrochemistry. Thought and consciousness are somehow the "emergent" result of the workings of the complex physical structure of the brain (in a safe and bounded sense of emergence). The novel approach is the idea that quantum physics may play a strikingly different role in this topic than had ever been imagined. Theorists in the field of quantum consciousness speculate that perhaps the peculiar characteristics of quantum events at the sub-atomic level (e.g. quantum randomness, complementarity, entanglement) are close enough to the action of neural networks that they serve to give neural structures radically different properties from those expected on a classical-physics view of the brain. (This idea isn't precisely new; when I was an undergraduate in the 1960s it was sometimes speculated that freedom of the will was possible because of the indeterminacy created by quantum physics. But this wasn't a very compelling idea.)

Wendt's further contribution is to immerse himself in some of this work, and then to formulate the question of how these perspectives on intentionality and mentality might affect key topics in the philosophy of society. For example, how do the longstanding concepts of structure and agency look when we begin with a quantum perspective on mental activity?

A good place to start in preparing to read Wendt's book is Harald Atmanspacher's excellent article in the Stanford Encyclopedia of Philosophy (link). Atmanspacher organizes his treatment into three large areas of application of quantum physics to the problem of consciousness: metaphorical applications of the concepts of quantum physics; applications of the current state of knowledge in quantum physics; and applications of possible future advances in knowledge in quantum physics.
Among these [status quo] approaches, the one with the longest history was initiated by von Neumann in the 1930s.... It can be roughly characterized as the proposal to consider intentional conscious acts as intrinsically correlated with physical state reductions. (13)
A physical state reduction is the event that occurs when a quantum system in a superposition of states resolves into a definite state upon being measured. Some theorists (e.g. Henry Stapp) speculate that conscious human intention may influence the physical state reduction -- thus a "mental" event causes a "physical" event. And some process along these lines is applied to the "activation" of a neuronal assembly:
The activation of a neuronal assembly is necessary to make the encoded content consciously accessible. This activation is considered to be initiated by external stimuli. Unless the assembly is activated, its content remains unconscious, unaccessed memory. (20)
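Stepping back to the formal core of these proposals: the standard textbook statement of state reduction is the projection postulate (given here generically; it is not specific to Stapp's account). If an observable with projectors $P_k$ onto its eigenspaces is measured on a system in state $|\psi\rangle$, then

$$|\psi\rangle \;\longrightarrow\; \frac{P_k|\psi\rangle}{\lVert P_k|\psi\rangle \rVert} \quad \text{with probability} \quad \Pr(k) = \lVert P_k|\psi\rangle \rVert^2.$$

The speculative move in this literature is to suggest that intentional acts somehow influence which reduction occurs, or when it occurs.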
Also of interest in Atmanspacher's account is the idea of emergence: are mental phenomena emergent from physical phenomena, and in what sense? Atmanspacher specifies a clear but strong definition of emergence, and considers whether mental phenomena are emergent in this sense:
Mental states and/or properties can be considered as emergent if the material brain is necessary but not sufficient to explore and understand them. (6)
This is a strong conception in a very specific way; it specifies that material facts are not sufficient to explain "emergent" mental properties. This implies that we need to know some additional facts beyond facts about the material brain in order to explain mental states; and it is natural to ask what the nature of those additional facts might be.

The reason this collection of ideas is initially shocking to me is the difference in scale between the sub-atomic level and macro-scale entities and events. There is something spooky about postulating causal links across that range of scales. It would be wholly crazy to speculate that we need to invoke the mathematics and theories of quantum physics to explain billiards. It is pretty well agreed by physicists that quantum mechanics reduces to Newtonian physics at this scale. Even though the component pieces of a billiard ball are quantum entities with peculiar properties, as an ensemble of roughly 10^25 of these particles the behavior of the ball is safely classical. The peculiarities of the quantum level wash out for systems with multiple Avogadro's numbers of particles through the reliable workings of statistical mechanics. And the intuitions of most people comfortable with physics would lead them to assume that neurons enjoy the same independence from quantum effects; the scale of activity of a neuron (both spatial and temporal) is orders of magnitude too large to reflect quantum effects. (Sorry, Schrödinger's cat!)

In a recent article in Science magazine, "Cold Numbers Unmake the Quantum Mind" (link), Charles Seife reports a set of fundamental physical computations conducted by Max Tegmark intended to demonstrate this. Tegmark's analysis focuses on the speculations offered by Penrose and others on the possible quantum behavior of "microtubules." Tegmark purports to demonstrate that the time and space scales of quantum effects are too short by orders of magnitude to account for the neural mechanisms that can be observed (link). Here is Tegmark's abstract:
Based on a calculation of neural decoherence rates, we argue that the degrees of freedom of the human brain that relate to cognitive processes should be thought of as a classical rather than quantum system, i.e., that there is nothing fundamentally wrong with the current classical approach to neural network simulations. We find that the decoherence time scales (∼10^−13–10^−20s) are typically much shorter than the relevant dynamical time scales (∼10^−3–10^−1s), both for regular neuron firing and for kinklike polarization excitations in microtubules. This conclusion disagrees with suggestions by Penrose and others that the brain acts as a quantum computer, and that quantum coherence is related to consciousness in a fundamental way. (link)
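The force of the argument lies in the sheer magnitude of the gap between the two ranges of timescales. A quick back-of-the-envelope computation (my arithmetic on the figures quoted in the abstract) makes it vivid:

```python
# Comparison of Tegmark's quoted timescales (in seconds).
decoherence_slowest = 1e-13   # longest decoherence time in the quoted range
dynamics_fastest = 1e-3       # fastest relevant neural dynamics in the range

# Even in the pairing most favorable to quantum consciousness, coherence
# would have to survive about ten orders of magnitude longer than the
# decoherence calculation allows.
gap = dynamics_fastest / decoherence_slowest
print(f"minimum gap: {gap:.0e}")  # -> 1e+10
```

On Tegmark's numbers, then, quantum coherence in the brain decays at least ten billion times faster than the neural processes it would need to influence.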
I am grateful to Atmanspacher for providing such a clear and logical presentation of some of the main ideas of quantum consciousness; but I continue to find myself skeptical. There is a risk in this field of succumbing to the temptation of unbounded speculation: "Maybe if X's could influence Y's, then we could explain Z" -- without any knowledge of how X, Y, and Z are related through causal pathways. And the field seems sometimes to be prey to this impulse: "If quantum events were partially mental, then perhaps mental events could influence quantum states (and from there influence macro-scale effects)."

In an upcoming post I'll look closely at what Alex Wendt makes of this body of theory in application to the level of social behavior and structure.