Tuesday, March 21, 2017

The soft side of critical realism


Critical realism has appealed to a range of sociologists and political scientists, in part because of the legitimacy it confers on the study of social structures and organizations. However, many of the things sociologists study are not "things" at all, but rather subjective features of social experience -- mental frameworks, identities, ideologies, value systems, knowledge frameworks. Is it possible to be a critical realist about "subjective" social experience and formations of consciousness? Here I want to argue in favor of a CR treatment of subjective experience and thought.

First, let's recall what it means to be realist about something. It means to take a cognitive stance towards the formation that treats it as being independent from the concepts we use to categorize it. It is to postulate that there are facts about the formation that are independent from our perceptions of it or the ways we conceptualize it. It is to attribute to the formation a degree of solidity in the world, a set of characteristics that can be empirically investigated and that have causal powers in the world. It is to negate the slogan, "all that is solid melts into air" with regard to these kinds of formations. "Real" does not mean "tangible" or "material"; it means independent, persistent, and causal.  

So to be realist about values, cognitive frameworks, practices, or paradigms is to assert that these assemblages of mental attitudes and features have social instantiation, that they persist over time, and that they have causal powers within the social realm. By this definition, mental frameworks are perfectly real. They have visible social foundations -- concrete institutions and practices through which they are transmitted and reproduced. And they have clear causal powers within the social realm.

A few examples will help make this clear.

Consider first the assemblage of beliefs, attitudes, and behavioral repertoires that constitute the race regime in a particular time and place. Children and adults from different racial groups in a region have internalized a set of ideas and behaviors about each other that are inflected by race and gender. These beliefs, norms, and attitudes can be investigated through a variety of means, including surveys and ethnographic observation. Through their behaviors and interactions with each other they gain practice in their mastery of the regime, and they influence outcomes and future behaviors. They transmit and reproduce features of the race regime to peers and children. There is a self-reinforcing discipline to such an assemblage of attitudes and behaviors, which shapes the behaviors and expectations of others, both internally and coercively. This formation has causal effects on the local society in which it exists, and it is independent from the ideas we have about it. It is, by these criteria, a real part of local society. (It is also a variable and heterogeneous reality, across time and space.) We can trace the sociological foundations of the formation within the population, the institutional arrangements through which minds and behaviors are shaped. And we can identify many social effects of specific features of regimes like this. (Here is an earlier post on the race regime of Jim Crow; link, link.)

Here is a second useful example -- a knowledge and practice system like Six Sigma. This is a bundle of ideas about business management. It involves some fairly specific doctrines and technical practices. There are training institutions through which individuals become expert at Six Sigma. And there is a distributed group of expert practitioners across a number of companies, consulting firms, and universities who possess highly similar sets of knowledge, judgment, and perception.  This is a knowledge and practice community, with specific and identifiable causal consequences. 

These are two concrete examples. Many others could be offered -- working-class solidarity, bourgeois modes of dress and manners, the social attitudes and behaviors of French businessmen, the norms of Islamic charity, the Protestant Ethic, Midwestern modesty. 

So, indeed, it is entirely legitimate to be a critical realist about mental frameworks. More, the realist who abjures study of such frameworks as social realities is doomed to offer explanations with mysterious gaps. He or she will find large historical anomalies, where available structural causes fail to account for important historical outcomes.

Consider Marx and Engels' words in the Communist Manifesto:
All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned, and man is at last compelled to face with sober senses his real conditions of life, and his relations with his kind.
This is an interesting riff on social reality, capturing both change and persistence, appearance and reality. A similar point of view is expressed in Marx's theory of the fetishism of commodities: beliefs exist, they have social origins, and it is possible to demystify them on occasion by uncovering the distortions they convey of real underlying social relations. 

There is one more perplexing twist here for realists. Both structures and features of consciousness are real in their social manifestations. However, one goal of critical philosophy is to show how the mental structures of a given class or gender are in fact false consciousness. It is a true fact that British citizens in 1871 had certain ideas about the workings of contemporary capitalism. But it is an important function of critical theory to demonstrate that those beliefs were wrong, and to more accurately account for the underlying social relations they attempt to describe. And it is important to discover the mechanisms through which those false beliefs came into existence.

So critical realism must both identify real structures of thought in society and demystify these thought systems when they systematically falsify the underlying social reality. Decoding the social realities of patriarchy, racism, and religious bigotry is itself a key task for a critical social science.

Dave Elder-Vass is one of the few critical realists who have devoted attention to the reality of a subjective social thing, a system of norms. In The Causal Power of Social Structures: Emergence, Structure and Agency he tries to show how the idea of a "norm circle" helps explicate the objectivity, persistence, and reality of a socially embodied norm system. Here is an earlier post on E-V's work (link).




Friday, March 17, 2017

Mechanisms according to analytical sociology


One of the distinguishing characteristics of analytical sociology is its insistence on the idea of causal mechanisms as the core component of explanation. Like post-positivists in other traditions, AS theorists specifically reject the covering law model of explanation and argue for a "realist" understanding of causal relations and powers: a causal relationship between x and y exists only insofar as there exist one or more causal mechanisms generating y given the occurrence of x. Peter Hedström puts the point this way in Dissecting the Social:
A social mechanism, as defined here, is a constellation of entities and activities that are linked to one another in such a way that they regularly bring about a particular type of outcome. (kl 181)
A basic characteristic of all explanations is that they provide plausible causal accounts for why events happen, why something changes over time, or why states or events co-vary in time or space. (kl 207)
The core idea behind the mechanism approach is that we explain not by evoking universal laws, or by identifying statistically relevant factors, but by specifying mechanisms that show how phenomena are brought about. (kl 334)
A social mechanism, as here defined, describes a constellation of entities and activities that are organized such that they regularly bring about a particular type of outcome. (kl 342)
So far so good. But AS adds another requirement about causal mechanisms in the social realm that is less convincing: that the only real or credible mechanisms are those involving the actions of individual actors. In other words, causal action in the social world takes place solely at the micro level. This assumption is substantial, non-trivial, and seemingly dogmatic. 
Sociological theories typically seek to explain social outcomes such as inequalities, typical behaviours of individuals in different social settings, and social norms. In such theories individuals are the core entities and their actions are the core activities that bring about the social-level phenomena that one seeks to explain. (kl 356)
Although the explanatory focus of sociological theory is on social entities, an important thrust of the analytical approach is that actors and actions are the core entities and activities of the mechanisms explaining such phenomena. (kl 383)
The theory should also explain action in intentional terms. This means that we should explain an action by reference to the future state it was intended to bring about. Intentional explanations are important for sociological theory because, unlike causalist explanations of the behaviourist or statistical kind, they make the act 'understandable' in the Weberian sense of the term. (kl 476)
Here is a table in which Hedström classifies different kinds of social mechanisms; significantly, all are at the level of actors and their mental states.


The problem with this "action-level" requirement on the nature of social mechanisms is that it rules out as a matter of methodology the possibility that there could be social causal processes involving factors at higher social levels -- organizations, norms, or institutions, for example. (For that matter, it also rules out the possibility that some individual actions take place in ways inaccessible to conscious knowledge -- through impulse, emotion, or habit.) And yet it is common in sociology to offer social explanations invoking causal properties of things at precisely these "meso" levels of the social world -- fairly ordinary statements of social causation in which the primary causal factor is an organization, an institutional arrangement, or a normative system.

It is true, of course, that such entities depend on the actions and minds of individuals. This is the thrust of ontological individualism (link, link): the social world ultimately depends on individuals in relation to each other and in relation to the modes of social formation through which their knowledge and action principles have been developed. But explanatory or methodological individualism does not follow from the truth of ontological individualism, any more than biological reductionism follows from the truth of physicalism. Instead, it is legitimate to attribute stable causal properties to meso-level social entities and to invoke those entities in legitimate social-causal explanations. Earlier arguments for meso-level causal mechanisms can be found here, here, and here.

This point about "micro-level dogmatism" leads me to believe that analytical sociology is unnecessarily rigid when it comes to causal processes in the social realm. Moreover, this rigidity leads it to be unreceptive to many approaches to sociology that are perfectly legitimate and insightful. It is as if someone proposed to offer a science of cooking but would only countenance statements at the level of organic chemistry. Such an approach would preclude the possibility of distinguishing different cuisines on the basis of the palette of spices and flavors that they use. By analogy, the many approaches to sociological research that proceed on the basis of an analysis of the workings of mid-level social entities and influences are excluded by the strictures of analytical sociology. Not all social research needs to take the form of the discovery of microfoundations, and reductionism is not the only scientifically legitimate strategy for explanation.

(The photo above of a moment from the Deepwater Horizon disaster is relevant to this topic, because useful accident analysis needs to invoke the features of organization that led to a disaster as well as the individual actions that produced the particular chain of events leading to the disaster. Here is an earlier post that explores this feature of safety engineering; link.)

Thursday, March 9, 2017

Moral limits on war


World War II raised great issues of morality in the conduct of war. These were practical issues during the war, because that conflict approached "total war" -- the use of all means against all targets to defeat the enemy. So the moral questions could not be evaded: are there compelling reasons of moral principle that make certain tactics in war completely unacceptable, no matter how efficacious they might be said to be?

As Michael Walzer made clear in Just and Unjust Wars: A Moral Argument with Historical Illustrations in 1977, we can approach two rather different kinds of questions when we inquire about the morality of war. First, we can ask whether a given decision to go to war is morally justified given its reasons and purposes. This brings us into the domain of the theory of just war--self-defense against aggression, and perhaps prevention of large-scale crimes against humanity. And second, we can ask whether the strategies and tactics chosen are morally permissible. This forces us to think about the moral distinction between combatant and non-combatant, the culpable and the innocent, and possibly the idea of military necessity. The principle of double effect comes into play here -- the idea that unintended but predictable civilian casualties may be permissible if the intended target is a legitimate military target, and the unintended harms are not disproportionate to the value of the intended target.

We should also notice that there are two ways of approaching both issues -- one on the basis of existing international law and treaty, and the other on the basis of moral theory. The first treats the morality of war as primarily a matter of convention, while the latter treats it as an expression of valued moral principles. There is some correspondence between the two approaches, since laws and treaties seek to embody shared norms about warfare. And there are moral reasons why states should keep their agreements, irrespective of the content. But the rationales of the two approaches are different.

Finally, there are two different kinds of reasons why a people or a government might care about the morality of its conduct of war. The first is prudential: "if we use this instrument, then others may use it against us in the future". The convention outlawing the use of poison gas may fall in this category. So it may be argued that the conventions limiting the conduct of war are beneficial to all sides, even when there is a short-term advantage in violating the convention. The second is a matter of moral principle: "if we use this instrument, we will be violating fundamental normative ideals that are crucial to us as individuals and as a people". This is a Kantian version of the morality of war: there are at least some issues that cannot be resolved based solely on consequences, but rather must be resolved on the basis of underlying moral principles and prohibitions. So executing hostages or prisoners of war is always and absolutely wrong, no matter what military advantages might ensue. Preserving the lives and well-being of innocents seems to be an unconditional moral duty in war. But likewise, torture is always wrong, not only because it is imprudent, but because it is fundamentally incompatible with treating people in our power in a way that reflects their fundamental human dignity.

The means of war-making chosen by the German military during World War II were egregious -- for example, shooting hostages, murdering prisoners, performing medical experiments on prisoners, and unrestrained strategic bombing of London. But hard issues arose on the side of the alliance that fought against German aggression as well. Particularly hard cases were the campaigns of "strategic bombing" against cities in Germany and Japan, including the firebombing of Dresden and Tokyo. These decisions were taken in the context of fairly clear data showing that strategic bombing did not substantially impair the enemy's ability to wage war industrially, and in the context of the fact that its primary victims were innocent civilians. Did the Allies make a serious moral mistake by making use of this tactic? Did innocent children and non-combatant adults pay the price, in these most horrible ways, for the decision to incinerate cities? Did civilian leaders fail to exercise sufficient control to prevent their generals from inflicting pet theories -- like the presumed efficacy of strategic bombing -- on whole urban populations?

And how about the decision to use atomic bombs against Hiroshima and Nagasaki? Were these decisions morally justified by the rationale that was offered -- that they compelled surrender by Japan and thereby avoided tens of thousands of combatant deaths ensuing from invasion? Were two bombs necessary, or was the attack on Nagasaki literally a case of overkill? Did the United States make a fateful moral error in deciding to use atomic bombs to attack cities and the thousands of non-combatants who lived there?

These kinds of questions may seem quaint and obsolete in a time of drone strikes, cyber warfare, and renewed nuclear posturing. But they are not. As citizens we have responsibility for the acts of war undertaken by our governments. We need to be clear and insistent in maintaining that the use of the instruments of war requires powerful moral justification, and that there are morally profound reasons for demanding that war tactics respect the rights and lives of the innocent. War, we must never forget, is horrible.

Geoffrey Robertson's Crimes Against Humanity: The Struggle for Global Justice poses these questions with particular pointedness. Also of interest is John Mearsheimer's Conventional Deterrence.

Saturday, March 4, 2017

The atomic bomb


Richard Rhodes' history of the development of the atomic bomb, The Making of the Atomic Bomb, is now thirty years old. The book is crucial reading for anyone who has the slightest anxiety about the tightly linked, high-stakes world we inhabit in the twenty-first century. The narrative Rhodes provides of the scientific and technical history of the era is outstanding. But there are other elements of the story that deserve close thought and reflection as well.

One is the question of the role of scientists in policy and strategy decision making before and during World War II. Physicists like Bohr, Szilard, Teller, and Oppenheimer played crucial roles in the science, but they also played important roles in the formulation of wartime policy and strategy as well. Were they qualified for these roles? Does being a brilliant scientist carry over to being an astute and wise advisor when it comes to the large policy issues of the war and international policies to follow? And if not the scientists, then who? At least a certain number of senior policy advisors to the Roosevelt administration, international politics experts all, seem to have badly dropped the ball during the war -- in ignoring the genocidal attacks on Europe's Jewish population, for example. Can we expect wisdom and foresight from scientists when it comes to politics, or are they as blinkered as the rest of us on average?

A second and related issue is the moral question: do scientists have any moral responsibilities when it comes to the use, intended or otherwise, of the technologies they spawn? A particularly eye-opening part of the story Rhodes tells is the research undertaken within the Manhattan Project about the possible use of radioactive material as a poisonous weapon of war against civilians on a large scale. The topic seems to have arisen as a result of speculation about how the Germans might use radioactive materials against civilians in Great Britain and the United States. Samuel Goudsmit, scientific director of the US military team responsible for investigating German progress towards an atomic bomb following the Normandy invasion, refers to this concern in his account of the mission in Alsos (7). According to Rhodes, the idea was first raised within the Manhattan Project by Fermi in 1943, and was realistically considered by Groves and Oppenheimer. This seems like a clear case: no scientist should engage in research like this, research aimed at discovering the means of the mass poisoning of half a million civilians.

Leo Szilard played an exceptional role in the history of the quest for developing atomic weapons (link). He more than other physicists foresaw the implications of the possibility of nuclear fission as a foundation for a radically new kind of weapon, and his fear of German mastery of this technology made him a persistent and ultimately successful advocate for a major research and industrial effort towards creating the bomb. His recruitment of Albert Einstein as the author of a letter to President Roosevelt underlining the seriousness of the threat and the importance of establishing a full scale effort made a substantial difference in the outcome. Szilard was entirely engaged in efforts to influence policy, based on his understanding of the physics of nuclear fission; he was convinced very early that a fission bomb was possible, and he was deeply concerned that German physicists would succeed in time to permit the Nazis to use such a weapon against Great Britain and the United States. Szilard was a physicist who also offered advice and influence on the statesmen who conducted war policy in Great Britain and the United States.

Niels Bohr is an excellent example to consider with respect to both large questions (link). He was, of course, one of the most brilliant and innovative physicists of his generation, recognized with the Nobel Prize in 1922. He was also a man of remarkable moral courage, remaining in Copenhagen long after prudence would have dictated emigration to Britain or the United States. He was more articulate and outspoken than most scientists of the time about the moral responsibilities the physicists undertook through their research on atomic energy and the bomb. He was farsighted about the implications for the future of warfare created by a successful implementation of an atomic or thermonuclear bomb. Finally, he is exceptional, on a par with Einstein, in his advocacy of a specific approach to international relations in the atomic age, and was able to meet with both Roosevelt and Churchill to make his case. His basic view was that the knowledge of fission could not be suppressed, and that the Allies would be best served in the long run by sharing their atomic knowledge with the USSR and working towards an enforceable non-proliferation agreement. The meeting with Churchill went particularly badly, with Churchill eventually maintaining that Bohr should be detained as a security risk.

Here is the memorandum that Bohr wrote to President Roosevelt in 1944 (link). Bohr makes the case for public sharing of the scientific and technical knowledge each nation has gained about nuclear weapons, and the establishment of a regime among nations that precludes the development and proliferation of nuclear weapons. Here are a few key paragraphs from his memorandum to Roosevelt:
Indeed, it would appear that only when the question is raised among the united nations as to what concessions the various powers are prepared to make as their contribution to an adequate control arrangement, will it be possible for any one of the partners to assure himself of the sincerity of the intentions of the others.

Of course, the responsible statesmen alone can have insight as to the actual political possibilities. It would, however, seem most fortunate that the expectations for a future harmonious international co-operation, which have found unanimous expressions from all sides within the united nations, so remarkably correspond to the unique opportunities which, unknown to the public, have been created by the advancement of science.
These thoughts are not put forward in the spirit of high-minded idealism; they are intended to serve as sober, fact-based guides to a more secure future. So it is worth considering: do the facts about international behavior justify the recommendations? In fact, the world has settled on a hybrid set of approaches: the doctrine of deterrence based on mutual assured destruction, and a set of international institutions to which nations are signatories, intended to prevent or slow the proliferation of nuclear weapons. Another brilliant thinker and 2005 Nobel Prize winner, Thomas Schelling, provided the analysis that expresses the current theory of deterrence in his 1966 book Arms and Influence (link).

So who is closer to the truth when it comes to projecting the behavior of partially rational states and their governing apparatuses? My view is that the author of Micromotives and Macrobehavior has the more astute understanding of the logic of disaggregated collective action and the ways that a set of independent strategies aggregates to the level of organizational or state-level behavior. Schelling's analysis of the logic of deterrence and the quasi-stability that it creates is compelling -- perhaps more so than Bohr's vision, which depends at critical points on voluntary compliance.
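Schelling's core insight -- that individually mild preferences can aggregate into starkly different collective outcomes -- can be illustrated with a toy simulation. The sketch below is a bare one-dimensional variant inspired by the segregation model of Micromotives and Macrobehavior, not Schelling's own specification; every parameter (population size, tolerance threshold, neighborhood size) is illustrative only.

```python
import random

def same_type_share(line, i, k=2):
    """Fraction of agent i's 2k nearest ring neighbors sharing its type."""
    n = len(line)
    neighbors = [line[(i + d) % n] for d in range(-k, k + 1) if d != 0]
    return sum(1 for t in neighbors if t == line[i]) / len(neighbors)

def run(n=200, threshold=0.4, steps=5000, seed=7):
    """Agents of two types on a ring. An agent is unhappy only if fewer
    than `threshold` of its neighbors share its type (i.e., it tolerates
    being in the local minority); unhappy agents swap places with a
    randomly chosen other agent, happy agents stay put."""
    rng = random.Random(seed)
    line = [rng.choice("AB") for _ in range(n)]
    for _ in range(steps):
        i = rng.randrange(n)
        if same_type_share(line, i) < threshold:
            j = rng.randrange(n)
            line[i], line[j] = line[j], line[i]
    return line

line = run()
# Average local homogeneity; random mixing gives roughly 0.5, and the
# mild tolerance rule tends to push this figure well above that baseline.
clustering = sum(same_type_share(line, i) for i in range(len(line))) / len(line)
```

Because swaps preserve the overall composition of the population, any rise in clustering comes entirely from the micro-level tolerance rule -- which is exactly Schelling's point about independent strategies aggregating into macro-level patterns that no individual intended.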


This judgment receives support from international relations scholars of the following generation as well. For example, in an extensive article published in 1981 (link) Kenneth Waltz argues that nuclear weapons have helped to make international peace more stable, and his argument turns entirely on the rational-choice basis of the theory of deterrence:
What will a world populated by a larger number of nuclear states look like? I have drawn a picture of such a world that accords with experience throughout the nuclear age. Those who dread a world with more nuclear states do little more than assert that more is worse and claim without substantiation that new nuclear states will be less responsible and less capable of self-control than the old ones have been. They express fears that many felt when they imagined how a nuclear China would behave. Such fears have proved unfounded as nuclear weapons have slowly spread. I have found many reasons for believing that with more nuclear states the world will have a promising future. I have reached this unusual conclusion for six main reasons.

First, international politics is a self-help system, and in such systems the principal parties do most to determine their own fate, the fate of other parties, and the fate of the system. This will continue to be so, with the United States and the Soviet Union filling their customary roles. For the United States and the Soviet Union to achieve nuclear maturity and to show this by behaving sensibly is more important than preventing the spread of nuclear weapons.

Second, given the massive numbers of American and Russian warheads, and given the impossibility of one side destroying enough of the other side’s missiles to make a retaliatory strike bearable, the balance of terror is indestructible. What can lesser states do to disrupt the nuclear equilibrium if even the mighty efforts of the United States and the Soviet Union cannot shake it? The international equilibrium will endure. (concluding section)
The logic of the rationality of cooperation, and the constant possibility of defection, seems to undermine the possibility of the kind of quasi-voluntary nuclear regime that Bohr hoped for -- one based on unenforceable agreements about the development and use of nuclear weapons. The incentives in favor of defection are too great. So this seems to be a case where a great physicist has a less than compelling theory of how an international system of nations might work. And if the theory is unreliable, then so are the policy recommendations that follow from it.
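The defection logic can be made concrete with a toy payoff matrix. The numbers below are purely illustrative ordinal payoffs -- a standard prisoner's dilemma structure, not an estimate of any actual strategic situation:

```python
# Payoffs (row, column) for a stylized arms-control game: each state
# either Cooperates (honors the agreement) or Defects (covertly arms).
payoff = {
    ("C", "C"): (3, 3),   # mutual restraint
    ("C", "D"): (0, 5),   # unilateral restraint against a defector
    ("D", "C"): (5, 0),   # defect against a cooperator
    ("D", "D"): (1, 1),   # mutual arming
}

def best_response(opponent_move):
    """Row player's payoff-maximizing reply to a fixed opponent move."""
    return max("CD", key=lambda m: payoff[(m, opponent_move)][0])

# Defecting is the best reply whatever the other side does, so mutual
# defection is the unique equilibrium -- even though mutual cooperation
# would leave both sides strictly better off.
assert best_response("C") == "D"
assert best_response("D") == "D"
```

An agreement without enforcement changes none of these payoffs, which is the structural difficulty with the quasi-voluntary regime Bohr envisioned: each signatory's best reply remains defection regardless of what the others do.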