Wednesday, June 28, 2017

Guest post by Guus Duindam

Guus Duindam is a J.D./Ph.D. student in philosophy at the University of Michigan. His primary areas of interest are Ethics and Kant. Thanks, Guus, for providing this rigorous treatment of Bhaskar's philosophical argument for critical realism.

Bhaskar contra Kant: Why Critical Realism is not Transcendental Realism

Let me start by thanking Dan Little for inviting me to write this guest post. I’d like to take the opportunity to examine Roy Bhaskar’s arguments for critical realism, in particular those presented in his A Realist Theory of Science (RTS). The aim of that work is remarkable: to establish by transcendental argument the mind-independence and structured nature of the objects of science.

Bhaskar’s views are explicitly grounded in Kantian arguments. But the rejection of Kantian transcendental idealism is a central feature of Bhaskar’s critical realism. For Bhaskar, critical realism is also transcendental realism, a position he posits as an alternative to both Kantian and (neo-)Humean philosophy of science.

Transcendental idealism is, at minimum, the idea that the conditions on human cognition – especially space and time, the forms of human intuition – in part determine the objects of knowledge. According to transcendental idealism, we cannot know things as they are ‘in themselves’, but rather only as they appear to beings like us. Kant thus distinguishes between things-in-themselves, the epistemically inaccessible noumena, and phenomena, things as they appear to us given the conditions on human cognition. The former are transcendentally real – unknowable but entirely mind-independent. The latter are empirically real – knowable, but in part dependent on the conditions on cognition. For Kant, science can study only the empirically real: to study the transcendentally real would require that we transcend the conditions on our own cognition – that we erase the distinction between the knower and the object of knowledge – a mystical feat of which we are evidently incapable.

Bhaskar makes a different distinction, between the intransitive and the transitive. Intransitive objects do not depend on human activity; they are entirely mind-independent (RTS 21). To say that some object is intransitive is therefore equivalent to saying that it is transcendentally real (this is clear throughout RTS; see also The Possibility of Naturalism 6). Hence, it is Bhaskar’s aim to prove the transcendental reality (intransitivity) of the objects of science and perception. According to Bhaskar, we can know the objects of science as they are in themselves.

Bhaskar defends this ambitious thesis by means of transcendental arguments. An argument is transcendental insofar as it shows that some commonly accepted claim x necessarily presupposes a controversial claim y, where y is the conclusion of the argument. A transcendental argument thus claims that its conclusion is the only possible way to account for the uncontroversial phenomenon which it takes as its premise. Unlike other arguments for scientific realism, then, Bhaskar’s arguments make a claim to necessity.
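Schematically (this is my own gloss, not Bhaskar's notation), a transcendental argument has the following form, where the necessity operator marks the claim that y is a condition of the possibility of x:

```latex
% P1: the uncontroversial phenomenon (e.g., differential perception occurs)
% P2: necessarily, the phenomenon obtains only if y does
%     (y is a condition of the possibility of x)
% C:  the controversial philosophical conclusion
\begin{align*}
&\text{(P1)}\quad x\\
&\text{(P2)}\quad \Box\,(x \rightarrow y)\\
&\text{(C)}\quad\ \therefore\ y
\end{align*}
```

The burden of such an argument falls entirely on (P2): the claim that y is the *only* possible explanation of x. This is exactly where, I will argue, Bhaskar's arguments fail.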

Bhaskar’s analysis of perception contains the first of his transcendental arguments: call it the argument from perception. It has roughly the following form: multiple agents can, at the same time, perceive the same object in different ways (x). This could be possible only given the mind-independence of the object (y). Therefore, given the occurrence of differential perception, the objects of perception must be transcendentally real.

Here’s Bhaskar himself making the argument:
If changing experience of objects is to be possible, objects must have a distinct being in space and time from the experience of which they are the objects. For Kepler to see the rim of the earth drop away, while Tycho Brahe watches the sun rise, we must suppose that there is something they both see. (RTS, 31)
Earlier, he appears to be making the even stronger claim that perception simpliciter presupposes the intransitivity of the perceived:
The intelligibility of sense-perception presupposes the intransitivity of the object perceived. For it is in the independent occurrence or existence of such objects that the meaning of ‘perception’, and the epistemic significance of perception, lies. (Ibid.)
Let’s take the argument from perception to involve the weaker claim that differential experience by different agents necessarily presupposes the intransitive nature of the object perceived. If the argument fails to ground this claim, we know a fortiori that it fails to ground the stronger conclusion.

If it is possible for Brahe and Kepler to have different perceptions of the same object, there must be an object which they both see: this much seems clear. But the inference from this to the object’s intransitivity is fallacious, for the presupposition that the objects of sense-perception are empirically real is sufficient to explain differential perception. For the transcendental idealist, there is something which Brahe and Kepler both see: they both see the sun. The sun is empirically real, i.e., it partially depends on the conditions on human cognition. But Brahe and Kepler, being human, share the conditions on cognition and interact with the same mind-independent reality. Thus, there is nothing unintelligible about their different perceptions under the assumption that what they perceive is empirically real (partially mind-dependent). Bhaskar supposes that we must assume it is also transcendentally real (i.e., that Brahe and Kepler see the sun ‘as it is in-itself’) but does nothing to establish this. The argument from perception does not show that the objects of knowledge must be intransitive given the occurrence of (differential) perception. It fails as a transcendental argument for critical realism.

Bhaskar’s second argument is much more central to the critical realist endeavor, and it is presented in his analysis of experimental activity. Call it the argument from experimentation. For Bhaskar, “two essential functions” are involved in an experiment:
First, [the experimental scientist] must trigger the mechanism under study to ensure that it is active; and secondly he must prevent any interference with the operation of the mechanism. […] Both involve changing or being prepared to change the ‘course of nature’, i.e. the sequence of events that would otherwise have occurred. […] Only if the mechanism is active and the system in which it operates is closed can scientists in general record a unique relationship between the antecedent and consequent of a lawlike statement. (RTS, 53)
Bhaskar notes that the experimenter who sets up a causally closed system thereby becomes causally responsible for a constant conjunction of events, but not for the underlying causal mechanism. Contra Humean accounts of law, Bhaskar’s account of experimentation entails an ontological distinction between constant conjunctions and causal mechanisms.

For Bhaskar, the intelligibility of such experimental activity can be used to transcendentally establish the intransitivity of the objects of science. “As a piece of philosophy,” he claims, “we can say (given that science occurs) that some real things and generative mechanisms must exist (and act),” where by ‘real’ Bhaskar means ‘intransitive’ (RTS 52). In “Transcendental Realisms in the Philosophy of Science: On Bhaskar and Cartwright,” Stephen Clarke provides the following helpful gloss on the argument:
Premise 1: Scientific explanatory practice (in particular the practice of exporting explanations from laboratory circumstances to general circumstances) is experienced by us as intelligible. 
Premise 2: Scientific explanatory practice could not be experienced by us as intelligible unless causal powers exist and those causal powers are governed by universal laws of nature.
______________________________________________________ 
Conclusion: causal powers exist and are governed by universal laws of nature. (Clarke 302)
Clarke calls this an “attack on idealism” (303) but Bhaskar explicitly frames it as an attack on transcendental idealism (RTS 27). Clarke’s gloss is telling, for it is indeed unclear how the argument could work as an attack on the latter view.

Bhaskar argues that we must suppose the world to be intransitively ordered if scientific explanatory practice is to be intelligible. But, he claims, “transcendental idealism maintains that this order is actually imposed by men in their cognitive activity” (RTS 27). And if order were imposed in cognitive activity, all experience would be ordered, eliminating the need for explanatory export from the closed causal systems of experimentation to the open causal systems of uncontrolled experience (RTS 27, Clarke 303).

This argument is invalid. It does not follow from the premise that all experience is ordered that there is no need for explanatory export from closed to open causal systems. To the contrary: the very occurrence of such export presupposes that experience is ordered. After all, the aim of experimentation is to discover causal mechanisms and universal laws of nature. But to suppose that the causal mechanism discovered in a replicable scientific experiment generalizes to open causal systems is to suppose that the same laws operate in open causal systems, even if other mechanisms sometimes obscure them. And to presuppose that there are such things as knowable universal laws of nature – operative in closed and open causal systems alike – just is to presuppose that all experience is ordered. The ordered nature of experience is, therefore, a necessary presupposition for experimentation.

Now there are at least two ways in which experience could be thus ordered: because order is imposed on it in cognitive activity, or because the order is intransitive. Bhaskar supposes the former would render experimentation superfluous. This is a flummoxing claim to make. Surely Bhaskar does not mean to accuse the transcendental idealist of the view that the projection of order onto the world is somehow a conscious activity – that we already know every scientific truth. That would render experimentation superfluous, but I don’t think it is a view anybody defends. Science is as much a process of gradual discovery for the Kantian as it is for everyone else.

Maybe confusion arises from the fact that for Kantians genuinely universal scientific laws must be synthetic a-priori. Perhaps Bhaskar supposes that, because positing a universal law involves making a claim to synthetic a-priori knowledge, we should be able to derive the laws of nature by a-priori deduction, rendering experimentation superfluous. But this would be a misunderstanding of transcendental idealism. Suppose that because my perceptions of sparks and wood are frequently followed by perceptions of conflagration, I come to associate sparks and wood with fire. I can ask whether this association is subjective or objective. To claim that it is objective is, for the Kantian, to apply one of the Categories. For instance, one way of taking my association of sparks and dry wood with fire to be objective is to make a claim like “sparks and wood cause fire,” applying the Category of causation. This claim is a-priori insofar as it involves the application of an a-priori (pure) concept, a-posteriori insofar as it is about the objects of experience.

Transcendental idealism entails we are entitled to make causal claims, but it does not entail the empirical truth of our claims. Experimentation with sparks and wood may lead me to modify my claim. For instance, I may discover that sparks and wet wood do not jointly give rise to fire, and adjust my claim to “sparks and dry wood cause fire.” Further experimentation may lead to further refinements. I could not have deduced any of these conclusions about sparks and wood a-priori. The thesis that scientific claims have an a-priori component does not render experimentation either superfluous or unintelligible.

As it turns out, Bhaskar supposes that, for the Kantian, causal mechanisms are mere “figment[s] of the imagination” (RTS 45). If true, this would provide an independent argument against the intelligibility of experimentation on a transcendentally idealist account. But, as should by now be clear, this is an incorrect characterization of transcendental idealism. It is only for skeptics and solipsistic idealists that causal mechanisms are figments of the imagination. Kantians and transcendental realists agree causal mechanisms exist: they disagree only about whether they are transcendentally or empirically real.

Bhaskar’s transcendental arguments for critical realism fail, and the Kantian view to which Bhaskar opposes his own is frequently misinterpreted. Most problematically, the meaning of the Kantian distinction between the transcendentally and empirically real is ignored, and the latter category is treated as if it contained only figments of our imagination. Bhaskar maintains that epistemic access to the transcendentally real is a necessary condition for science and perception. But, as we have seen, it is merely epistemic access to the empirically real that is necessary. Bhaskar does not prove that we have knowledge of things as they are in-themselves. Critical realism is not transcendental realism.

Tuesday, June 27, 2017

Sociology of life expectations


Each individual has a distinctive personality and orienting set of values. It is intriguing to wonder how these features take shape in the individual's development through the experiences of childhood, adolescence, and early adulthood. But we can also ask whether there are patterns of mentality and orienting values across many or most individuals in a cohort. Are there commonalities in the definition of a good life across a cohort? Is there such a thing as the millennial generation or the sixties generation, in possession of distinctive and broadly shared sets of values, frameworks, and dispositions?

These are questions that sociologists have attempted to probe using a range of tools of inquiry. It is possible to use survey methodology to observe shifts in attitudes over time, thereby pinpointing some important cohort differences. But qualitative tools seem the most appropriate for this question, and in fact sociologists have conducted extensive interviews with selected individuals from the cohorts in question, using qualitative methods to analyze and understand the results.

A very interesting example of this kind of research is Jennifer Silva's Coming Up Short: Working-Class Adulthood in an Age of Uncertainty. Silva is interested in studying the other half of the millennial generation -- the unemployed and underemployed young people, mostly working class, whom the past fifteen years have treated harshly. What she finds in this segment of the cohort born in the late 1970s and early 1980s is an insecure and precarious set of life circumstances, and new modes of transition to adulthood that don't look very much like the standard progress of family formation, career progress, and rising affluence that was perhaps characteristic of this same social segment in the 1950s.

Here is how Silva frames the problem she wants to better understand:
What, then, does it mean to “grow up” today? Even just a few decades ago, the transition to adulthood would not have been experienced as a time of confusion, anxiety, or uncertainty. In 1960, the vast majority of women married before they turned twenty-one and had their first child before twenty-three. By thirty, most men and women had moved out of their parents’ homes, completed school, gotten married, and begun having children. Completing these steps was understood as normal and natural, the only path to a complete and respectable adult life: indeed, half of American women at this time believed that people who did not get married were “selfish and peculiar,” and a full 85 percent agreed that women and men should get married and have children (Furstenberg et al. 2004). (6)
Silva is interested in exploring in detail the making of "working class life adulthood" in the early twenty-first century. And her findings are somewhat bleak:
Experiences of powerlessness, confusion, and betrayal within the labor market, institutions such as education and the government, and the family teach young working-class men and women that they are completely alone, responsible for their own fates and dependent on outside help only at their peril. They are learning the hard way that being an adult means trusting no one but yourself. (9)
At its core, this emerging working-class adult self is characterized by low expectations of work, wariness toward romantic commitment, widespread distrust of social institutions, profound isolation from others, and an overriding focus on their emotions and psychic health. Rather than turn to politics to address the obstacles standing in the way of a secure adult life, the majority of the men and women I interviewed crafted deeply personal coming of age stories, grounding their adult identities in recovering from their painful pasts—whether addictions, childhood abuse, family trauma, or abandonment—and forging an emancipated, transformed, and adult self. (10)
Key to Silva's interpretation is the importance and coherence of the meanings that young people create for themselves -- the narratives through which they make sense of the unfolding of their lives and where they are going. She locates the context and origins of these self-stories in the structural circumstances of the American economy of the 1990s; but her real interest is in finding the recurring themes in the stories and descriptions these young people tell about themselves and their lives.

For Silva, the bleakness of this generation of young working class adults has structural causes: economic stagnation, dissolution of safety nets, loss of decent industrial-sector jobs and the rise of insecure service-sector jobs, and neoliberalism as a guiding social philosophy that systematically turns its back on under-class young people. Her research is based on interviews carried out in a few cities in the United States, but the findings seem valid for many countries in western Europe as well (Britain, Germany, France). And this in turn may have relevance for the rise of populism in many countries.

What is most worrisome about Silva's account is the very limited opportunity for social progress that it implies. As progressives we would like to imagine that our democracy has the potential of evolving towards greater social dignity and opportunity for all segments of society. But what Silva describes is unpromising for this hopeful scenario. The avenues of higher education, skills-intensive work, and better life circumstances seem unlikely as a progressive end to this story. And the Sprawl of William Gibson's grim anti-utopian novels (Neuromancer, Count Zero) seems to fit the world Silva describes better than the usual American optimism about the inevitability of progress. Significantly, the young people whom Silva interviews have very little interest in political engagement or in supporting candidates who are committed to real change; they do not really believe in the possibility of change.

It is worth noticing the parallel in findings and methodology between Silva's work on young working class men and women and Al Young's studies of inner city black men (The Minds of Marginalized Black Men: Making Sense of Mobility, Opportunity, and Future Life Chances). Both fall within the scope of cultural sociology. Both proceed on the basis of extensive interviews with 50-100 subjects, both make use of valuable tools of qualitative analysis to make sense of the interviews, and both arrive at important new understandings of the mentalities of these groups of young Americans.

(Here is a prior post on cultural sociology and its efforts to "get inside the frame" (link); and here is a post on "disaffected youth" that touches on some of these themes in a different way; link.)

Thursday, June 22, 2017

Explanation and critical realism


To explain something is to provide a true account of the causes and circumstances that brought it about. There is of course more to say on the subject, but this is the essential part of the story. And this normative account of explanation should work as well for investigations conducted within the framework of critical realism as for any other scientific framework.

Moreover, CR is well equipped with intellectual resources to produce explanations of social outcomes based on this understanding. In particular, CR emphasizes the reality of causal mechanisms in the social world. To explain a social outcome, then -- perhaps the rise of Trumpism -- we are instructed to identify the causal mechanisms and conditions that were in play such that a novice from reality television would gain the support of millions of voters and win the presidency. So far, so good.

But a good explanation of an outcome is not just a story about mechanisms that might have produced the outcome; instead, we need a true story: these mechanisms existed and occurred, they brought about the outcome, and the outcome would not have occurred in the absence of this combination of mechanisms. Therefore we need to have empirical methods to allow us to evaluate the truth of these hypotheses.

There is also the important and interesting point that Bhaskar makes to the effect that the social world involves open causal configurations, not closed causal configurations. This appears to me to be an important insight into the social world; but it makes the problem of validating causal explanations even more challenging.

This brings us to a point of contact with the theme of much current work in critical realism: a firm opposition to positivism and an allegiance to post-positivism. For a central thrust of positivism was the demand for empirical confirmation or verification of substantive claims; and that is precisely where we have arrived in this rapid analysis of explanation as well. In fact, it is quite obvious that CR theories and explanations require empirical validation no less than positivistic theories do. We cannot dispense with empirical validation and continue to believe we are involved in science.

Put the point another way: there is no possible avenue of validation of substantive explanatory hypotheses that proceeds through purely intuitive or theoretical avenues. At some point a good explanation requires empirical assessment.

For example, it is appealing in the case of Trumpism to attribute Trump's rise to the latent xenophobia of the disaffected lower working class. But is this true? And if true, is it critical as a causal factor in his rise? How would we confirm or disconfirm this hypothetical mechanism? Once again, this brings us into proximity to a few core commitments of empiricism and positivism -- confirmation theory and falsifiability. And yet, a rational adherence to the importance of empirical validation takes us in this direction ineluctably.

It is worth pointing out that the social and historical sciences have indeed developed empirical methods that are both rigorous and distinctive to the domain of the social: process tracing, single-case and small-N studies, comparative analysis, paired comparisons, and the like. So the demand for empirical methods does not imply standard (and simplistic) models of confirmation like the H-D model. What it does imply is that it is imperative to use careful reasoning, detailed observation, and discovery of obscure historical facts to validate one's hypotheses and claims.

Bhaskar addresses these issues in his appendix on the philosophy of science in RTS. He clearly presupposes two things: that rigorous evidence must be used in the assessment of explanatory hypotheses in social science; and that flat-footed positivism fails to provide an appropriate account of what that empirical reasoning ought to look like. And, as indicated above, the open character of social causation presents the greatest barrier to the positivist approach. Positivism asserts that the task of confirmation and refutation concerns only the empirical correspondence between hypothesis and observation.

Elsewhere I have argued for the piecemeal validation of social theories and hypotheses (link). This is possible because we are not forced to adopt the assumption of holism that generally guides philosophy in the consideration of physical theory. Instead, hypotheses about mechanisms and processes can be evaluated and confirmed through numerous independent lines of investigation. Duhem may have been right about physics, but he is not right about our knowledge of the social world.

Sunday, June 18, 2017

Cacophony of the social


Take a typical day in a major city -- a busy street with a subway stop, a park, a coffee bar, and a large consumer financial office. There are several thousand people in view, mostly in ones and twos. Some people are rushing to an appointment with a doctor, a job interview, a drug dealer in the park. A group of young men and women are beginning to chant in a demonstration in the park against a particularly egregious announcement of government policy on contraception.

There is a blooming, buzzing confusion to the scene. And yet there are overlapping forms of order -- pedestrians crossing streets at the crosswalks, surges of suits and ties at certain times of day, snatch and grab artists looking for an unguarded cell phone. The brokers in the financial office are more coordinated in their actions, tasked to generate sales with customers who walk in for service. The demonstrators have assembled from many parts of the city, arriving by subway in the previous hour. Their presence is, of course, coordinated; they were alerted to the demo by a group text from the activist organization they belong to. 

What are the opportunities for social science investigation here? What possibilities exist for explanation of some of the phenomena on display?

For one thing there is an interesting opportunity for ethnographic study presented here. A micro-sociologist or urban anthropologist may find it very interesting to look closely to see what details of dress and behavior are on display. This is the kind of work that sociologists inspired by Erving Goffman have pursued.

Another interesting possibility is to see what coordinated patterns of behavior can be observed. Do people establish eye contact as they pass? Are the suits more visibly comfortable with other suits than with the street people and panhandlers with whom they cross paths? Is there a subtle racial etiquette at work among these urban strangers?

These considerations fall at the "micro" end of the spectrum. But it is clear enough that the snapshots we gain from a few hours on the street also illustrate a number of background features of social structure. There is differentiation among actors in these scenes that reflects various kinds of social inequalities. There are visible inequalities of income and quality of life that can be observed. These inequalities in turn can be associated with current activities -- where the various actors work, how much education they have, what schools they attended, their overall state of health. There are spatial indicators of interest as well -- what kinds of neighborhoods, in what parts of the city, did these various actors wake up in this morning?

And for all of these structural differentiators we can ask the question, what were the social mechanisms and processes that performed the sorting of new-borns into affluent/poor, healthy/sick, well educated/poorly educated, and so forth? In other words, how did social structure impose a stamp on this heterogeneous group of people through their own distinctive histories?

We can also ask a series of questions about social networks and social data about these actors. How large are their personal social networks? What are the characteristics of other individuals within various individual networks? How deep do we need to go before we begin to find overlap across the networks of individuals on the street? This is where big data comes in; Amazon, credit agencies, and Verizon know vastly more about these individuals, their habits, and their networks than a social science researcher is likely to discover through a few hundred interviews. 
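The question of how deep we must go before two strangers' networks overlap is a concrete graph computation. As a toy illustration (the names and ties below are entirely invented; a real study would use observed contact data), one can compute the smallest number of network steps at which two egos' personal networks share a member:

```python
# A hypothetical contact graph of strangers on the same street.
graph = {
    "ana": {"bo", "cy"}, "bo": {"ana", "dee"}, "cy": {"ana", "ed"},
    "dee": {"bo", "fay"}, "ed": {"cy", "fay"}, "fay": {"dee", "ed"},
}

def neighborhood(graph, ego, depth):
    """Everyone reachable from `ego` within `depth` network steps."""
    seen = {ego}
    frontier = {ego}
    for _ in range(depth):
        frontier = {nbr for person in frontier for nbr in graph[person]} - seen
        seen |= frontier
    return seen

def overlap_depth(graph, a, b, max_depth=6):
    """Smallest depth at which the two personal networks intersect."""
    for d in range(1, max_depth + 1):
        if neighborhood(graph, a, d) & neighborhood(graph, b, d):
            return d
    return None

print(overlap_depth(graph, "ana", "fay"))  # → 2
```

In this toy graph, ana's and fay's immediate circles are disjoint, but their two-step networks already intersect -- the kind of small-world structure that large-scale commercial datasets can map at a resolution no interview study can approach.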

I'd like to think this disorderly ensemble of purposive but uncoordinated action by several thousand people is highly representative of the realities of the social world. And this picture in turn gives support to the ontology of heterogeneity and contingency that is a core theme here. 


Wednesday, June 14, 2017

Organizational learning



I've posed the question of organizational learning several times in recent months: are there forces that push organizations towards changes leading to improvements in performance over time? Is there a process of organizational evolution in the social world? So where do we stand on this question?

There are only two general theories that would lead us to conclude affirmatively. One is a selection theory. According to this approach, organizations undergo random changes over time, and the environment of action favors those organizations whose changes are functional with respect to performance. The selection theory itself has two variants, depending on how we think about the unit of selection. It might be hypothesized that the firm itself is the unit of selection, so firms survive or fail based on their own fitness. Over time the average level of performance rises through the extinction of low-performance organizations. Or it might be maintained that the unit is at a lower level -- the individual alternative arrangements for performing various kinds of work, which are evaluated and selected on the basis of some metric of performance. On this approach, individual innovations are the object of selection. 
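The firm-level variant of the selection theory can be sketched as a minimal simulation. This is an illustrative toy, not a calibrated model: the performance scale, mutation size, and 20% extinction cutoff are all stipulated for the sake of the example.

```python
import random

def selection_round(firms, rng, mutation_sd=0.05, cutoff=0.2):
    """One generation of firm-level selection: each firm's performance
    drifts randomly (undirected variation), the bottom `cutoff` fraction
    goes extinct, and entrants copy randomly chosen survivors."""
    mutated = sorted(p + rng.gauss(0, mutation_sd) for p in firms)
    n_extinct = int(len(mutated) * cutoff)
    survivors = mutated[n_extinct:]
    entrants = [rng.choice(survivors) for _ in range(n_extinct)]
    return survivors + entrants

rng = random.Random(0)
firms = [0.5] * 100          # every firm starts at the same performance
for _ in range(50):
    firms = selection_round(firms, rng)

# Mean performance rises through extinction alone; no individual firm
# ever intends to improve.
print(sum(firms) / len(firms))
```

The point of the sketch is that average performance rises even though every individual change is random: selection, not intention, does the work. The lower-level variant would apply the same loop to competing arrangements within a single firm rather than to whole firms.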

The other large mechanism of organizational learning is quasi-intentional. We postulate that intelligent actors control various aspects of the functioning of an organization; these actors have a set of interests that drive their behavior; and actors fine-tune the arrangements of the organization so as to serve their interests. This is a process I describe as quasi-intentional to convey that the organization itself has no intentionality, but its behavior and arrangements are under the control of a loosely connected set of actors who are individually intentional and purposive. 

In a highly idealized representation of organizations at work, these quasi-intentional processes may indeed push the organization towards higher functioning. Governance processes -- boards of directors, executives -- have a degree of influence over the activities of other actors within and adjacent to the organization, and they are able to push some subordinate behavior in the direction of higher performance and innovation if they have an interest in doing so. And sometimes these governance actors do in fact have an interest in higher performance -- more revenue, less environmental harm, greater safety, gender and racial equity. Under these circumstances it is reasonable to expect that arrangements will be modified to improve performance, and the organization will "evolve".

However, two forms of counter-intentionality arise. First, the interests of the governing actors are not perfectly aligned with increasing performance. Substantial opportunities for conflict of interest exist at every level, including the executive level (e.g. Enron). So the actions of executives are not always in concert with the goal of improving performance. Second, other actors within the organization are often beyond the control of executive actors and are motivated by interests that are quite separate from the goal of increasing performance. Their actions may often lead to status quo performance or even degradation of performance.

So the question of whether a given organization will change in the direction of higher performance is highly sensitive to (i) the alignment of mission interest and personal interest for executive actors, (ii) the scope of control executive actors are able to exercise over subordinates, and (iii) the strength and pervasiveness of personal interests among subordinates within the organization and the capacity these subordinates have to select and maintain arrangements that favor their interests.

This represents a highly contingent and unpredictable situation for the question of organizational learning. We might regard the process as an ongoing struggle between local private interests and the mission-defined interests of the organization. And there is no reason at all to believe that this struggle is biased in the direction of enhancement of performance. Some organizations will progress, others will be static, and yet others will decline over time. There is no process of evolution, guided or invisible, that leads inexorably towards improvement of arrangements and performance.

So we might formulate this conclusion in a fairly stark way. If organizations improve in capacity and performance over time in a changing environment, this is entirely the result of intelligent actors undertaking to implement innovations that will lead to these outcomes, at a variety of levels of action within the organization. There is no hidden process that can be expected to generate an evolutionary tendency towards higher organizational performance. 

(The images above are of NASA headquarters and Enron headquarters -- two organizations whose histories reflect the kinds of dysfunctions mentioned here.)


Thursday, June 1, 2017

Social change and leadership


Historians pay a lot of attention to important periods of social change -- the emergence of new political movements, the development of a great city, the end of Jim Crow segregation. There is an inclination to give a lot of weight to the importance of leaders, visionaries, and change-makers in driving these processes to successful outcomes. And, indeed, history correctly records the impact of charismatic and visionary leaders. But consider the larger question: are large social changes amenable to design by a small number of actors?

My inclination is to think that the capacity of calculated design to produce large, complex social changes is very much more limited than we often imagine. Instead, change more often emerges from the independent strategies and actions of numerous actors, only loosely coordinated with one another and proceeding from their own interests and framing assumptions. Large outcomes -- the emergence of Chicago as the major metropolis of the Midwest, the forging of the EU and the monetary union, the coalescence of nationalist movements in France and Germany -- are the resultant of multiple actors and causes. Big outcomes are contingent products of multiple streams of action, mobilization, business decisions, political parties, etc.

There are exceptions, of course. Italy's political history would have been radically different without Mussolini, and the American Civil War would probably have had a different course if Douglas had won the 1860 presidential election. 

But these are exceptions, I believe. More common is the history of Chicago, the surge of right-wing nationalism, or the collapse of the USSR. These are all multi-causal and multi-actor outcomes, and there is no single, unified process of development. And there is no author, no architect, of the outcome. 

So what does this imply about individual leaders and organizations who want to change the social and political environment facing them? Are their aspirations for creating change simply illusions? I don't think so. To deny that single visionaries can write the future is not to say that they cannot nudge it in a desirable direction. An anti-racist politician can influence voters and institutions in ways that inflect the arc of his or her society in a less racist direction. This doesn't permanently solve the problem, but it helps. And with good fortune, other actors will have made similar efforts, and gradually the situation of racism changes.

This framework for thinking about large social change raises large questions about how we should think about improving the world around us. It seems to imply the importance of local and decentralized social change. We should perhaps adjust our aspirations for social progress around the idea of slow, incremental change through many actors, organizations, and coalitions. As Marx once wrote, "men make their own history, but not in circumstances of their own choosing." And we can add a qualification Marx would not have appreciated: change makers are best advised to construct their plans around long, slow, and incremental change instead of blueprints for unified, utopian change. 



Friday, May 26, 2017

Proliferation of hate and intolerance


Paul Brass provides a wealth of ethnographic and historical evidence on the causes of Hindu-Muslim violence in India in The Production of Hindu-Muslim Violence in Contemporary India. His analysis here centers on the city of Aligarh in Uttar Pradesh, and he believes that his findings have broad relevance in many parts of India. His key conclusion is worth quoting:
It is a principal argument of this book that the whole political order in post-Independence north India and many, if not most of its leading as well as local actors -- more markedly so since the death of Nehru -- have become implicated in the persistence of Hindu-Muslim riots. These riots have had concrete benefits for particular political organizations as well as larger political uses. Hindu-Muslim opposition, tensions, and violence have provided the principal justification and the primary source of strength for the political existence of some local political organizations in many cities and towns in north India linked to a family of militant Hindu nationalist organizations whose core is an organization founded in 1925, known as the Rashtriya Swayamsevak Sangh (RSS). Included in this family, generally called the Sangh Parivar, are an array of organizations devoted to different tasks: mass mobilization, political organization, recruitment of students, women, and workers, and paramilitary training. The leading political organization in this family, originally called the Jan Sangh, is now the Bharatiya Janata Party (BJP), currently (2001) the predominant party in India's governing coalition. All the organizations in the RSS family of militant Hindu organizations adhere to a broader ideology of Hindutva, of Hindu nationalism that theoretically exists independently of Hindu-Muslim antagonisms, but in practice has thrived only when that opposition is explicitly or implicitly present. (6-7)
Brass provides extensive evidence, that is, for the idea that a key cause and stimulant of ethnic and religious conflict is the activity of political entrepreneurs and organizations who have a political interest in furthering conflict among groups.

Let's think about the mechanics of the spread of attitudes of intolerance, distrust, and hate throughout a population. What kinds of factors and interactions lead individuals to increase the intensity of their negative beliefs and attitudes towards other groups? What drives the spread of hate and intolerance through a population? (Donatella della Porta, Manuela Caiani and Claudius Wagemann's Mobilizing on the Extreme Right: Germany, Italy, and the United States is a valuable recent effort at formulating a political sociology of right-wing extremism in Italy, Germany, and the United States. Here is an earlier post that also considers this topic; link.)

Here are several mechanisms that recur in many instances of extremist mobilization.

Exposure to inciting media. Since the Rwandan genocide, the role of radio, television, and now the internet in the proliferation and intensification of hate has been widely recognized. The use of fake news, incendiary language, and unfounded conspiracy theories seems to have accelerated the formation of constituencies for the beliefs and attitudes of hate. Breitbart News is a powerful example of a media channel specifically organized around conveying suspicion, mistrust, disrespect, and alienation among groups. ("Propaganda and conflict: Evidence from the Rwandan genocide" is a fine-grained study of Rwandan villages that attempts to estimate the impact of a radio station on villagers' violent participation; link.)

Incidents. People who have studied the occurrence of ethnic violence in India have emphasized the role played by various incidents, real or fictitious, that have elevated emotions and antagonisms in one community or another. An assault or a rape, a house or shop being burned, even an auto accident can lead to a cascade of heightened emotions and blame within a community, communicated by news media and word of mouth. These sorts of incidents play an important role in many of the conflicts Brass describes.

Organizations and leaders. Organizations like white supremacist clubs and their leaders make deliberate attempts to persuade outsiders to adopt their beliefs. Leaders make concerted and intelligent attempts to craft messages that will appeal to potential followers, deliberately cultivating the themes of hate and racism that they advocate. Young people are recruited at the street level into groups and clubs that convey hateful symbols and rhetoric. Political entrepreneurs take advantage of the persuasive power of mobilization efforts based on divisiveness and intolerance. In Brass's account of Hindu-Muslim conflict, that role is played by the RSS, the BJP, and many local organizations motivated by this ideology.

Music, comics, and video games. Anti-hate organizations like the Southern Poverty Law Center have documented the role played by racist and anti-Semitic or anti-Muslim themes in popular music and other forms of entertainment (link). These media help to create a sense of shared identity among members as they enjoy the music or immerse themselves in the comics and games. Blee and Creasap emphasize the importance of popular culture forms in the mobilization strategies of the extreme right in "Conservative and right-wing movements"; link.

The presence of a small number of "hot connectors". It appears to be the case that attitudes of intolerance are infectious to some degree. So the presence of a few outspoken bigots in a small community may spread their attitudes to others, and the density of local social networks appears to be an important factor in the spread of hateful attitudes. The broader the social network of these individuals, the more potent the infective effects of their behavior are likely to be. (Here is a recent post on social-network effects on mobilization; link.)
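The joint effect of a few "hot connectors" and network density can be illustrated with a toy contagion model. The sketch below is my own illustrative simulation, not anything drawn from Brass or the network literature; all of its parameters (tie counts, adoption probability, number of steps) are arbitrary assumptions chosen only to make the density effect visible.

```python
import random

def simulate_spread(n=200, degree=4, n_hot=3, hot_degree=40,
                    p_adopt=0.1, steps=6, seed=42):
    """Toy contagion model: a few initially intolerant 'seeds' try to
    convert each network neighbor with probability p_adopt per step.
    Returns the final fraction of the population converted."""
    rng = random.Random(seed)
    # Build an undirected contact network: ordinary members draw
    # `degree` random ties; the first n_hot members ("hot connectors")
    # draw `hot_degree` ties.
    ties = {i: set() for i in range(n)}
    for i in range(n):
        k = hot_degree if i < n_hot else degree
        for j in rng.sample([x for x in range(n) if x != i], k):
            ties[i].add(j)
            ties[j].add(i)
    converted = set(range(n_hot))  # hot connectors hold the attitude at t=0
    for _ in range(steps):
        newly = set()
        for i in converted:
            for j in ties[i]:
                if j not in converted and rng.random() < p_adopt:
                    newly.add(j)
        converted |= newly
    return len(converted) / n

# The same few outspoken bigots, embedded in narrow vs. broad networks:
narrow = simulate_spread(hot_degree=4)   # no better connected than anyone else
broad = simulate_spread(hot_degree=40)   # widely connected hot connectors
```

Under these assumptions the broadly connected seeds convert a substantially larger fraction of the population than the narrowly connected ones, which is the intuition in the paragraph above: the breadth of the bigots' social network, not just their presence, drives the infective effect.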

There is a substantial degree of orchestration in most of these mechanisms -- deliberate efforts by organizations and political entrepreneurs to incite and channel the emotions of fear, hostility, and hate among their followers and potential followers. Strategies of recruitment for extremist and hate-based parties deliberately cultivate the mindset of hate among young people and disaffected older people (link). And the motivations seem to be a mix of ideological commitment to a worldview of hate and more prosaic self-interest -- power, income, resources, publicity, and influence. 

But the hard questions remaining are these: how does intolerance become mainstream? Is this a "tipping point" phenomenon? And what mechanisms and forces exist to act as counter-pressures against these mechanisms, and promulgate attitudes of mutual respect and tolerance as affirmative social values?

*          *          *

Here is a nice graphic from Arcand and Chakraborty, "What Explains Ethnic Violence? Evidence from Hindu-Muslim Riots in India"; link. Gujarat, Maharashtra, and Uttar Pradesh show the largest concentration of riots over the period 1960-1995. There appears to be no correlation by time in the occurrence of riots in the three states.


And here is a 1996 report on the incidence of religious violence in India by Human Rights Watch; link.

Wednesday, May 24, 2017

Democracy and the politics of intolerance


A democracy allows government to reflect the will of the people. Or does it? Here I would like to understand a bit better the dynamics through which radical right populism has come to have influence, even dominance, in a number of western democracies -- even though the percentage of citizens with radical right populist attitudes generally falls below 35% of the electorate.

There are well known bugs in the ways that real democracies work, leading to discrepancies between policy outcomes and public preferences. In the United States, for example, we find:
  • Gerrymandered Congressional districts that favor Republican incumbents
  • Over-representation of rural voters in the composition of the Senate (Utah has as many senators as California)
  • Organized efforts to suppress voting by poor and minority voters
  • The vast influence of corporate and private money in shaping elections and public attitudes
  • An electoral-college system that easily permits the candidate winning fewer votes to nonetheless win the Presidency
So it is evident that the system of electoral democracy institutionalized in the United States is far from a neutral, formal system conveying citizen preferences onto outcomes in a fair and equal way. The rules as well as the choices are objects of contention.

But to understand the ascendancy of the far right in US politics we need to go beyond these defects. We need to understand the processes through which citizens acquire their political attitudes -- thereby explaining their likelihood of mobilization for one party or candidate or another. And we need to understand the mechanisms through which elected representatives are pushed to the extreme positions that are favored by only a minority of their own supporters.

First, what are the mechanisms that lead to the formation of political attitudes and beliefs in individual citizens? That is, of course, a huge question. People have religious values, civic values, family values, personal aspirations, bits of historical knowledge, and so on, all of which come into play in a wide range of settings through personal development. And all of these value tags may serve as a basis for mobilization by candidates and parties. That is the rationale for "dog-whistle" politics -- to craft messages that resonate with small groups of voters without being noticed by larger groups with different values. So let's narrow it a bit: what mechanisms exist through which activist organizations and leaders can promote specific hateful beliefs and attitudes within a population with a range of existing attitudes, beliefs, and values? In particular, how can radical-right populist organizations and parties increase the appeal of their programs of intolerance to voters who are not otherwise pre-disposed to the extremes of populism?

Here the potency of appeals to division, intolerance, and hate is of particular relevance. Populism has almost always depended on a simplistic division between "us" and "them". The rhetoric and themes of nationalism and racism represent powerful tools in the arsenal of populist mobilization, preying upon suspicion, resentment, and mistrust of "others" in order to gain adherents to a party that promises to take advantages away from those others. The right-wing media play an enormous role in promulgating these messages of division and intolerance in many countries. The conspiracy theories and false narratives conveyed by right-wing media and commentators are powerfully persuasive in setting the terms of political consciousness for millions of people. Fox News sets the agenda for a large piece of the American electorate. And the experience of having been left out of a fair share of economic advantages leaves some segments of the population particularly vulnerable to these kinds of appeals. Finally, the under-currents of racism and prejudice are of continuing importance in the political and social identities of many citizens -- again leaving them vulnerable to appeals that cater to these prejudices. This is how Breitbart News works. (An earlier post treated this factor; link.)

Let's next consider the institutional mechanisms through which activist advocacy can be turned into disproportionate effects in legislation. Suppose Representative Smith has been elected on the Republican ticket in a close contest over his Democratic opponent with 51% of the vote. And suppose his constituency includes 15% extreme-right voters, 20% moderate-right voters, and 16% conservative-leaning independents. Why does Smith go on to support the agenda of the far right, who after all make up less than a third of his own supporters in his district? This results from a mechanism that political scientists understand well: the dynamics of the primary system. The extreme right is highly activated, while the center is significantly less so. A candidate who moves to the center is in danger of losing his seat in the next primary to a far-right challenger who can depend upon the support of an activist base. So the 15% of extreme-right voters determine the behavior of the representative. (McAdam and Kloos consider these dynamics in Deeply Divided: Racial Politics and Social Movements in Postwar America; link.)
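The arithmetic of this primary mechanism is worth making explicit. The calculation below uses the electorate shares from the Smith example; the turnout rates are my own illustrative assumptions (not figures from McAdam and Kloos), chosen only to show how differential mobilization lets a minority bloc dominate a primary.

```python
# Shares of the whole district electorate, from the Smith example.
extreme_right = 0.15
moderate_right = 0.20

# Hypothetical primary turnout rates (assumptions, not data): the
# activated extreme votes at a far higher rate than the moderate wing.
turnout_extreme = 0.60
turnout_moderate = 0.25

# Suppose a far-right challenger captures the extreme vote and the
# centrist incumbent captures the moderate vote in the primary.
challenger_votes = extreme_right * turnout_extreme    # 9% of the electorate
incumbent_votes = moderate_right * turnout_moderate   # 5% of the electorate

challenger_share = challenger_votes / (challenger_votes + incumbent_votes)
print(round(challenger_share, 2))  # the challenger wins the primary
```

Under these assumed turnout rates, a bloc of only 15% of the district delivers roughly 64% of the primary vote to the far-right challenger, which is exactly why Smith cannot afford to move to the center.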

Gerrymandering plays an important role in these dynamics as well. Smith doesn't have to moderate his policy choices out of concern that he will lose the general election to a more moderate Democrat, because the Republican legislature in his state has ensured that this is a safe seat for the candidate chosen by the party.

So here we are -- in a nation governed by an extreme-right party in control of both House and Senate, with a President espousing xenophobic and anti-immigrant intentions and a budget that severely cuts back on the social safety net, and dozens of state governments dominated by the same forces. And yet the President is profoundly unpopular, confidence in Congress is at an abysmal low point, and the majority of Americans favor a more progressive set of policies on women's health, health policy, immigration, and international security than the governing party is proposing. How did democratic processes bring us to this paradoxical point?

In 1991 political scientist Sam Popkin published a short book called The Reasoning Voter: Communication and Persuasion in Presidential Campaigns. The title captures Popkin's central hypothesis: that voters make choices on the basis of rational assessment of available evidence. What he adds to this old theory of democratic behavior is the proviso that often the principle of reasoning in question is what he calls "low-information rationality". Unlike traditional rational-choice theories of political behavior, Popkin proposes to make use of empirical results from cognitive psychology -- insights into how real people make practical decisions of importance. It is striking how much the environment of political behavior has changed since Popkin's reflections in the 1980s and 1990s: "Most Americans watch some network television news and scan newspapers several times every week" (25). In a 2015 New Yorker piece on the populism of Donald Trump, Evan Osnos quotes Popkin again -- but this time in a way that emphasizes emotions rather than evidence-based rationality (link). The passage is worth quoting:
“The more complicated the problem, the simpler the demands become,” Samuel Popkin, a political scientist at the University of California in San Diego, told me. “When people get frustrated and irritated, they want to cut the Gordian knot.” 
Trump has succeeded in unleashing an old gene in American politics—the crude tribalism that Richard Hofstadter named “the paranoid style”—and, over the summer, it replicated like a runaway mutation. Whenever Americans have confronted the reshuffling of status and influence—the Great Migration, the end of Jim Crow, the end of a white majority—we succumb to the anti-democratic politics of absolutism, of a “conflict between absolute good and absolute evil,” in which, Hofstadter wrote, “the quality needed is not a willingness to compromise but the will to fight things out to a finish. Nothing but complete victory will do.” Trump was born to the part. “I’ll do nearly anything within legal bounds to win,” he wrote, in “The Art of the Deal.” “Sometimes, part of making a deal is denigrating your competition.” Trump, who long ago mastered the behavioral nudges that could herd the public into his casinos and onto his golf courses, looked so playful when he gave out Lindsey Graham’s cell-phone number that it was easy to miss just how malicious a gesture it truly was. It expressed the knowledge that, with a single utterance, he could subject an enemy to that most savage weapon of all: us. (link)
The gist is pretty clear: populism is not primarily about rational consideration of costs and benefits, but rather the political emotions of mistrust, intolerance, and fear.

Saturday, May 20, 2017

Is there a new capitalism?



An earlier post considered Dave Elder-Vass’s very interesting treatment of the contemporary digital economy. In Profit and Gift in the Digital Economy Elder-Vass argues that the vast economic significance of companies like Google, Facebook, and Amazon in today's economy is difficult to assimilate within the conceptual framework of Marx’s foundational ideas about capitalism, constructed as they were around manufacturing, labor, and ownership of capital, and that we need some new conceptual tools in order to make sense of the economic system we now confront. (Elder-Vass responded to my earlier post here.)

A new book by Nick Srnicek looks at this problem from a different point of view. In Platform Capitalism Srnicek proposes to understand the realities of our current “digital economy” according to traditional ideas about capitalism and profit. Here is a preliminary statement of his approach:
The simple wager of the book is that we can learn a lot about major tech companies by taking them to be economic actors within a capitalist mode of production. This means abstracting from them as cultural actors defined by the values of the Californian ideology, or as political actors seeking to wield power. By contrast, these actors are compelled to seek out profits in order to fend off competition. This places strict limits on what constitutes possible and predictable expectations of what is likely to occur. Most notably, capitalism demands that firms constantly seek out new avenues for profit, new markets, new commodities, and new means of exploitation. For some, this focus on capital rather than labour may suggest a vulgar economism; but, in a world where the labour movement has been significantly weakened, giving capital a priority of agency seems only to reflect reality. (Kindle Locations 156-162)
In other words, there is not a major break from General Motors, with its assembly lines, corporate management, and vehicles, to IBM, with its products, software, and innovations, to Google, with its purely abstract and information-intensive products. All are similar in their basic corporate navigation systems: make decisions today that will support or increase profits tomorrow. In fact, each of these companies falls within the orbit of the new digital economy, according to Srnicek:
As a preliminary definition, we can say that the digital economy refers to those businesses that increasingly rely upon information technology, data, and the internet for their business models. This is an area that cuts across traditional sectors – including manufacturing, services, transportation, mining, and telecommunications – and is in fact becoming essential to much of the economy today. (Kindle Locations 175-177).
What has changed, according to the economic history constructed by Srnicek, is that the creation and control of data has suddenly become a vast and dynamic source of potential profit, and capitalist firms have adapted quickly to capture these profits.

The restructuring associated with the rise of information-intensive economic activity has greatly changed the nature of work:
Simultaneously, the generalised deindustrialisation of the high-income economies means that the product of work becomes immaterial: cultural content, knowledge, affects, and services. This includes media content like YouTube and blogs, as well as broader contributions in the form of creating websites, participating in online forums, and producing software. (Kindle Locations 556-559)
But equally it takes the form of specialized data-intensive work within traditional companies: design experts, marketing analysis of “big data” on consumer trends, the use of large simulations to guide business decision-making, the use of automatically generated data from vehicles to guide future engineering changes.

In order to capture the profit opportunities associated with the availability of big data, something else was needed: an organizational basis for aggregating and monetizing the data that exist around us. This is the innovation that comes in for Srnicek's greatest focus of attention: the platform.
This chapter argues that the new business model that eventually emerged is a powerful new type of firm: the platform. Often arising out of internal needs to handle data, platforms became an efficient way to monopolise, extract, analyse, and use the increasingly large amounts of data that were being recorded. Now this model has come to expand across the economy, as numerous companies incorporate platforms: powerful technology companies (Google, Facebook, and Amazon), dynamic start-ups (Uber, Airbnb), industrial leaders (GE, Siemens), and agricultural powerhouses (John Deere, Monsanto), to name just a few. (Kindle Locations 602-607).
What are platforms? At the most general level, platforms are digital infrastructures that enable two or more groups to interact. They therefore position themselves as intermediaries that bring together different users: customers, advertisers, service providers, producers, suppliers, and even physical objects. More often than not, these platforms also come with a series of tools that enable their users to build their own products, services, and marketplaces. Microsoft’s Windows operating system enables software developers to create applications for it and sell them to consumers; Apple’s App Store and its associated ecosystem (XCode and the iOS SDK) enable developers to build and sell new apps to users; Google’s search engine provides a platform for advertisers and content providers to target people searching for information; and Uber’s taxi app enables drivers and passengers to exchange rides for cash. (Kindle Locations 607-616)
Srnicek distinguishes five large types of digital data platforms that have been built out as business models: advertising, cloud, industrial, product, and "lean" platforms (the latter exemplified by Uber).

Srnicek believes that firms organized around digital platforms are subject to several important dynamics and tendencies: "expansion of extraction, positioning as a gatekeeper, convergence of markets, and enclosure of ecosystems" (kl 1298). These tendencies are created by the imperative of the platform-based firm to generate profits. Profits depend upon monetizing data; and data has little value in small volume. So the most fundamental imperative is mass collection of data from individual consumers.
If data collection is a key task of platforms, analysis is the necessary correlate. The proliferation of data-generating devices creates a vast new repository of data, which requires increasingly large and sophisticated storage and analysis tools, further driving the centralisation of these platforms. (kl 1337-1339)
So privacy threats emerging from the new digital economy are not a bug; they are an inherent feature of design.

This appears to lead us to Srnicek's most basic conclusion: the new digital economy is just like the old industrial economy in one important respect. Firms are wholly focused on generating profits, and they design intelligent strategies to permit themselves to appropriate ever-larger profits from the raw materials they process. In the case of the digital economy the raw material is data, and the profits come from centralizing and monopolizing access to data, and deploying data to generate profits for other firms (who in turn pay for access to the data). And revenues and profits have no correspondence to the size of the firm's workforce:
Tech companies are notoriously small. Google has around 60,000 direct employees, Facebook has 12,000, while WhatsApp had 55 employees when it was sold to Facebook for $19 billion and Instagram had 13 when it was purchased for $1 billion. By comparison, in 1962 the most significant companies employed far larger numbers of workers: AT&T had 564,000 employees, Exxon had 150,000 workers, and GM had 605,000 employees. Thus, when we discuss the digital economy, we should bear in mind that it is something broader than just the tech sector defined according to standard classifications. (Kindle Locations 169-174)
Marx's theory of capitalism fundamentally originates in a theory of conflict of interest and a theory of exploitation. In Capital that conflict exists between capitalists and workers, and consumers are essentially ignored (except when Marx sometimes refers to the deleterious effects of competition on public health; link). But in Srnicek's reading of the contemporary digital economy (and Elder-Vass's as well) the focus shifts away from labor and towards the consumer. The primary conflict in the digital economy is between the platform firm that seeks to acquire our data and the consumers who want the digital services but are largely unaware of the cost to their privacy. And here it is more difficult to make an argument about exploitation. Are consumers being exploited in this exchange? Or are they getting fair value through extensive and valuable digital services, for the surrender of their privacy in the form of data collection of clicks, purchases, travel, phone usage, and the countless other ways in which individual data winds up in the aggregation engines?

In an unexpected way, this analysis leads us back to a question that seems to belong in the nineteenth century: what after all is the source of value and wealth? And who has a valid claim on a share? What principles of justice should govern the distribution of the wealth of society? The labor theory of value had an answer to the question, but it is an answer that didn't have a lot of validity in 1850 and has none today. But in that case we need to address the question again. The soaring inequalities of income and wealth that capitalism has produced since 1980 suggest that our economy has lost its control mechanisms for equity; and perhaps this has something to do with the fact that a great deal of the money being generated in capitalism today comes from control of data rather than the adding of value to products through labor. Oddly enough, perhaps Marx's other big idea is relevant here: social ownership of the means of production. If there were a substantial slice of public-sector ownership of big data firms, including financial institutions, the resulting flow of income and wealth might be expected to begin to correct the hyper-inequalities our economy is currently generating.

Friday, May 12, 2017

Brian Epstein's radical metaphysics


Brian Epstein is adamant that the social sciences need to think very differently about the nature of the social world. In The Ant Trap: Rebuilding the Foundations of the Social Sciences he sets out to blow up our conventional thinking about the relation between individuals and social facts. In particular, he is fundamentally skeptical about any conception of the social world that depends on the idea of ontological individualism, directly or indirectly. Here is the plainest statement of his view:
When we look more closely at the social world, however, this analogy [of composition of wholes out of independent parts] falls apart. We often think of social facts as depending on people, as being created by people, as the actions of people. We think of them as products of the mental processes, intentions, beliefs, habits, and practices of individual people. But none of this is quite right. Research programs in the social sciences are built on a shaky understanding of the most fundamental question of all: What are the social sciences about? Or, more specifically: What are social facts, social objects, and social phenomena—these things that the social sciences aim to model and explain? 
My aim in this book is to take a first step in challenging what has come to be the settled view on these questions. That is, to demonstrate that philosophers and social scientists have an overly anthropocentric picture of the social world. How the social world is built is not a mystery, not magical or inscrutable or beyond us. But it turns out to be not nearly as people-centered as is widely assumed. (p. 7)
Here is one key example Epstein provides to give an intuitive grasp of the anti-reductionist metaphysics he has in mind -- the relationship between "the Supreme Court" and the nine individuals who make it up.
One of the examples I will be discussing in some detail is the United States Supreme Court. It is small— nine members— and very familiar, so there are lots of facts about it we can easily consider. Even a moment’s reflection is enough to see that a great many facts about the Supreme Court depend on much more than those nine people. The powers of the Supreme Court are not determined by the nine justices, nor do the nine justices even determine who the members of the Supreme Court are. Even more basic, the very existence of the Supreme Court is not determined by those nine people. In all, knowing all kinds of things about the people that constitute the Supreme Court gives us very little information about what that group is, or about even the most basic facts about that group. (p. 10)
Epstein makes an important observation when he notes that there are two "consensus" views of the individual-level substrate of the social world, not just one. The first is garden-variety individualism: it is individuals and their properties (psychological, bodily) involved in external relations with each other that constitute the individual-level substrate of the social. In this case it is reasonable to apply the supervenience relation to the relation between individuals and higher-level social facts (link).

The second view is more of a social-constructivist orientation towards individuals: individuals are constituted by their representations of themselves and others; the individual-level is inherently semiotic and relational. Epstein associates this view with Searle (50 ff.); but it seems to characterize a range of other theorists, from Geertz to Goffman and Garfinkel. Epstein refers to this approach as the "Standard Model" of social ontology. Fundamental to the Standard Model is the idea of institutional facts -- the rules of a game, the boundaries of a village, the persistence of a paper currency. Institutional facts are held in place by the attitudes and performances of the individuals who inhabit them; but they are not reducible to an ensemble of individual-level psychological facts. And the constructionist part of the approach is the idea that actors jointly constitute various social realities -- a demonstration against the government, a celebration, or a game of bridge. And Epstein believes that supervenience fails in the constructivist ontology of the Standard Model (57).

Both views are anti-dualistic (no inherent social "stuff"); but on Epstein's approach they are ultimately incompatible with each other.

But here is the critical point: Epstein doesn't believe that either of these views is adequate as a basis for social metaphysics. We need a new beginning in the metaphysics of the social world. Where to start this radical work? Epstein offers several new concepts to help reshape our metaphysical language about social facts -- what he refers to as "grounding" and "anchoring" of social facts. "Grounding" facts for a social fact M are lower-level facts that help to constitute the truth of M. "Bob and Jane ran down Howe Street" partially grounds the fact "the mob ran down Howe Street" (M). The fact about Bob and Jane is one of the features of the world that contributes to the truth and meaning of M. "Full grounding" is a specification of all the facts needed in order to account for M. "Anchoring" facts are facts that characterize the constructivist aspect of the social world -- conformance to meanings, rules, or institutional structures. An anchoring fact is one that sets the "frame" for a social fact. (An earlier post offered reflections on anchor individualism; link.)

Epstein suggests that "grounding" corresponds to classic ontological individualism, while "anchoring" corresponds to the Standard Model (the constructivist view).
What I will call "anchor individualism" is a claim about how frame principles can be anchored. Ontological individualism, in contrast, is best understood as a claim about how social facts can be grounded. (100)
And he believes that a more adequate social ontology is one that incorporates both grounding and anchoring relations. "Anchoring and grounding fit together into a single model of social ontology" (82).

Here is an illustrative diagram of how the two kinds of relations work in a particular social fact (Epstein 94):


So Epstein has done what he set out to do: he has taken the metaphysics of the social world as seriously as contemporary metaphysicians do on other important topics, and he has teased out a large body of difficult questions about constitution, causation, formation, grounding, and anchoring. This is a valuable and innovative contribution to the philosophy of social science.

But does this exercise add significantly to our ability to conduct social science research and theory? Do James Coleman, Sam Popkin, Jim Scott, George Steinmetz, or Chuck Tilly need to fundamentally rethink their approach to the social problems they attempted to understand in their work? Do the metaphysics of "frame", "ground", and "anchor" make for better social research?

My inclination is to think that this is not an advantage we can attribute to The Ant Trap. Clarity, precision, surprising conceptual formulations, yes; these are all virtues of the book. But I am not convinced that these conceptual innovations will actually make the work of explaining industrial actions, rebellious behavior, organizational failures, educational systems that fail, or the rise of hate-based extremism more effective or insightful.

In order to do good social research we do of course need to have a background ontology. But after working through The Ant Trap several times, I'm still not persuaded that we need to move beyond a fairly commonsensical set of ideas about the social world:
  • individuals have mental representations of the world they inhabit
  • institutional arrangements exist through which individuals develop, form, and act
  • individuals form meaningful relationships with other individuals
  • individuals have complicated motivations, including self-interest, commitment, emotional attachment, political passion
  • institutions and norms are embodied in the thoughts, actions, artifacts, and traces of individuals (grounded and anchored, in Epstein's terms)
  • social causation proceeds through the substrate of individuals thinking, acting, re-acting, and engaging with other individuals
These are the assumptions that I have in mind when I refer to "actor-centered sociology" (link). This is not a sophisticated philosophical theory of social metaphysics; but it is fully adequate to ground a realist and empirically informed effort to understand the social world around us. And nothing in The Ant Trap leads me to believe that there are fundamental conceptual impossibilities embedded in these simple, mundane individualistic ideas about the social world.

And this leads me to one other conclusion: Epstein argues that the social sciences need to think fundamentally differently. But actually, I think he has shown at best that philosophers can usefully think differently -- in ways that may in the end have little impact on how inventive social theorists conceive of their work.

(The photo at the top is chosen deliberately to embody the view of the social world that I advocate: contingent, institutionally constrained, multi-layered, ordinary, subject to historical influences, constituted by indefinite numbers of independent actors, demonstrating patterns of coordination and competition. All these features are illustrated in this snapshot of life in Copenhagen -- the independent individuals depicted, the traffic laws that constrain their behavior, the polite norms leading to conformance to the crossing signal, the sustained effort by municipal actors and community based organizations to encourage bicycle travel, and perhaps the lack of diversity in the crowd.)

Tuesday, May 9, 2017

Generativism



There is a seductive appeal to the idea of a "generative social science". Joshua Epstein is one of the main proponents of the idea, most especially in his book, Generative Social Science: Studies in Agent-Based Computational Modeling. The central tool of generative social science is the construction of an agent-based model (link). The ABM is said to demonstrate the way in which an observable social outcome or pattern is generated by the properties and activities of the component parts that make it up -- the actors. The appeal comes from the notion that it is possible to show how complicated or complex outcomes are generated by the properties of the components that make them up. Fix the properties of the components, and you can derive the properties of the composites. Here is Epstein's capsule summary of the approach:
The agent-based computational model -- or artificial society -- is a new scientific instrument. It can powerfully advance a distinctive approach to social science, one for which the term "generative" seems appropriate. I will discuss this term more fully below, but in a strong form, the central idea is this: To the generativist, explaining the emergence of macroscopic societal regularities, such as norms or price equilibria, requires that one answer the following question: 
The Generativist's Question 
How could the decentralized local interactions of heterogeneous autonomous agents generate the given regularity?
The agent-based computational model is well-suited to the study of this question, since the following features are characteristic: [heterogeneity, autonomy, explicit space, local interactions, bounded rationality] (5-6)
And a few pages later:
Agent-based models provide computational demonstrations that a given microspecification is in fact sufficient to generate a macrostructure of interest. . . . To the generativist -- concerned with formation dynamics -- it does not suffice to establish that, if deposited in some macroconfiguration, the system will stay there. Rather, the generativist wants an account of the configuration's attainment by a decentralized system of heterogeneous autonomous agents. Thus, the motto of generative social science, if you will, is: If you didn't grow it, you didn't explain its emergence. (8)
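The "grow it" motto can be made concrete with a toy simulation. The sketch below is a minimal Schelling-style segregation model -- a classic example in the generativist literature, though the grid size, tolerance threshold, and movement rule here are illustrative assumptions of mine, not taken from Epstein's own models. Heterogeneous agents follow a purely local rule (move if too few neighbors are of your own type), and a macro regularity (spatial clustering) is "grown" that no agent intends:

```python
import random

# Toy Schelling-style model: decentralized local rules "grow" a macro
# pattern (clustering). Parameters are illustrative assumptions.
random.seed(0)
SIZE = 20          # 20x20 toroidal grid
EMPTY = 0.1        # fraction of empty cells
THRESHOLD = 0.3    # agent is unhappy if < 30% of neighbors share its type

def make_grid():
    cells = []
    for _ in range(SIZE * SIZE):
        r = random.random()
        cells.append(None if r < EMPTY else (1 if r < (1 + EMPTY) / 2 else 2))
    return [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def neighbors(grid, x, y):
    """Types of the (up to 8) occupied neighboring cells, with wraparound."""
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = (x + dx) % SIZE, (y + dy) % SIZE
            if grid[nx][ny] is not None:
                out.append(grid[nx][ny])
    return out

def unhappy(grid, x, y):
    agent = grid[x][y]
    nbrs = neighbors(grid, x, y)
    if agent is None or not nbrs:
        return False
    return sum(1 for n in nbrs if n == agent) / len(nbrs) < THRESHOLD

def step(grid):
    """Move every unhappy agent to a random empty cell; return mover count."""
    movers = [(x, y) for x in range(SIZE) for y in range(SIZE) if unhappy(grid, x, y)]
    empties = [(x, y) for x in range(SIZE) for y in range(SIZE) if grid[x][y] is None]
    for (x, y) in movers:
        if not empties:
            break
        ex, ey = empties.pop(random.randrange(len(empties)))
        grid[ex][ey] = grid[x][y]
        grid[x][y] = None
        empties.append((x, y))
    return len(movers)

def mean_similarity(grid):
    """Average share of like-typed neighbors across all occupied cells."""
    vals = []
    for x in range(SIZE):
        for y in range(SIZE):
            if grid[x][y] is None:
                continue
            nbrs = neighbors(grid, x, y)
            if nbrs:
                vals.append(sum(1 for n in nbrs if n == grid[x][y]) / len(nbrs))
    return sum(vals) / len(vals)

grid = make_grid()
before = mean_similarity(grid)
for _ in range(50):
    if step(grid) == 0:   # stop once no agent is unhappy
        break
after = mean_similarity(grid)
# Clustering typically rises well above the mild 30% tolerance threshold.
print(round(before, 2), round(after, 2))
```

The point of the toy is exactly Epstein's: the macro clustering is not stipulated anywhere in the code; it emerges from the repeated application of a local rule. Whether such a demonstration counts as an *explanation* of any real-world pattern is the question taken up below.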
Here is how Epstein describes the logic of one of the most extensive examples of generative social science, the attempt to understand the disappearance of Anasazi population in the American Southwest nearly 800 years ago.
The logic of the exercise has been, first, to digitize the true history -- we can now watch it unfold on a digitized map of Longhouse Valley. This data set (what really happened) is the target -- the explanandum. The aim is to develop, in collaboration with anthropologists, microspecifications -- ethnographically plausible rules of agent behavior -- that will generate the true history. The computational challenge, in other words, is to place artificial Anasazi where the true ones were in 800 AD and see if -- under the postulated rules -- the simulated evolution matches the true one. Is the microspecification empirically adequate, to use van Fraassen's phrase? (13)
Here is a short video summarizing the ABM developed under these assumptions:



The artificial Anasazi experiment is an interesting one, and one to which the constraints of an agent-based model are particularly well suited. The model simulates residence-location decision-making based on mapped environmental information about the valley.

But this does not imply that the generativist interpretation is equally applicable as a general approach to explaining important social phenomena.

Note first how restrictive the assumption of "decentralized local interactions" is as a foundation for the model. A large proportion of social activity is neither decentralized nor purely local: the search for muons in an accelerator lab, the advance of an armored division into contested territory, the audit of a large corporation, preparations for a strike by the UAW, the coordination of voices in a large choir, and so on, indefinitely. In all these examples and many more, a crucial part of the collective behavior of the actors is the coordination that occurs through some centralized process -- a command structure, a division of labor, a supervisory system. And by their design, ABMs appear to be incapable of representing these kinds of non-local coordination.

Second, all these simulation models proceed from highly stylized and abstract modeling assumptions. And the outcomes they describe capture at best some suggestive patterns that might be said to be partially descriptive of the outcomes we are interested in. Abstraction is inevitable in any scientific work, of course; but once we recognize that fact, we must abandon the idea that the model demonstrates the "generation" of the empirical phenomenon. Neither premises nor conclusions are fully descriptive of concrete reality; both are approximations and abstractions. And it would be fundamentally implausible to maintain that the modeling assumptions capture all the factors that are causally relevant to the situation. Instead, they represent a particular stylized hypothesis about a few of the causes of the situation in question. Further, we have good reason to believe that introducing more details at the ground level will sometimes lead to significant alteration of the system-level properties that are generated.

So the idea that an agent-based model of civil unrest could demonstrate that (or how) civil unrest is generated by the states of discontent and fear experienced by various actors is fundamentally ill-conceived. If the unrest is generated by anything, it is generated by the full set of causal and dynamic properties of the set of actors -- not the abstract stylized list of properties. And other posts have made the point that civil unrest or rebellion is rarely purely local in its origin; rather, there are important coordinating non-local structures (organizations) that influence mobilization and spread of rebellious collective action. Further, the fact that the ABM "generates" some macro characteristics that may seem empirically similar to the observed phenomenon is suggestive, but far from a demonstration that the model characteristics suffice to determine some aspect of the macro phenomenon. Finally, the assumption of decentralized and local decision-making is unfounded for civil unrest, given the important role that collective actors and organizations play in the success or failure of social mobilizations around grievances (link).

The point here is not that the generativist approach is invalid as a way of exploring one particular set of social dynamics (the logic of decentralized local decision-makers with assigned behavioral rules). On the contrary, this approach does indeed provide valuable insights into some social processes. The error is one of over-generalization -- imagining that this approach will suffice to serve as a basis for analysis of all social phenomena. In a way the critique here is exactly parallel to that which I posed to analytical sociology in an earlier post. In both cases the problem is one of asserting priority for one specific approach to social explanation over a number of other equally important but non-equivalent approaches.

Patrick Grim et al provide an interesting approach to the epistemics of models and simulations in "How simulations fail" (link). Grim and his colleagues emphasize the heuristic and exploratory role that simulations generally play in probing the dynamics of various kinds of social phenomena.


Friday, May 5, 2017

Snippets from the Roman world

image: Arch of Septimius Severus, Forum (203 CE)

The history of Rome has a particular fascination for twenty-first century readers, especially in the West. Roman law, Roman philosophy, Roman legions, and Roman roads all have a powerful resonance for our imaginations today. Mary Beard's SPQR: A History of Ancient Rome is an interesting recent synthesis of the long sweep of Rome's history.

Beard affirms the continuing importance of Roman history in these terms:
Ancient Rome is important. To ignore the Romans is not just to turn a blind eye to the distant past. Rome still helps to define the way we understand our world and think about ourselves, from high theory to low comedy. After 2,000 years, it continues to underpin Western culture and politics, what we write and how we see the world, and our place in it.... The layout of the Roman imperial territory underlies the political geography of modern Europe and beyond. The main reason that London is the capital of the United Kingdom is that the Romans made it the capital of their province Britannia – a dangerous place lying, as they saw it, beyond the great Ocean that encircled the civilised world. Rome has bequeathed to us ideas of liberty and citizenship as much as of imperial exploitation, combined with a vocabulary of modern politics, from ‘senators’ to ‘dictators’. (15)
Much of Beard's treatment is deflationary: she demonstrates that Rome's reality in the first five hundred years was substantially more ordinary and less grand than Roman historians from the time of the Republic and Empire wanted to believe. The population of the city was in the tens of thousands; the armies were more often the retainers of local "big men" (and as often as not ran away when confronted with superior forces); and law and political institutions were very little developed. And yet by the end of the period of the Republic in the first century BCE, Rome had in fact become grand: grand in population (more than a million inhabitants), grand in military power, and grand in the scope of control it exercised over other parts of the known world.

One of the events that Beard deflates is the slave rebellion of Spartacus.
In 73 BCE, under the leadership of Spartacus, fifty or so slave gladiators, improvising weapons out of kitchen equipment, escaped from a gladiatorial training school at Capua in southern Italy and went on the run. They spent the next two years gathering support and withstanding several Roman armies until they were eventually crushed in 71 BCE, the survivors crucified in a grisly parade along the Appian Way. 
It is hard now to see through the hype, both ancient and modern, to what was really going on. Roman writers, for whom slave uprisings were probably the most alarming sign of a world turned upside down, wildly exaggerate the number of supporters Spartacus attracted; estimates go as high as 120,000 insurgents. Modern accounts have often wanted to make Spartacus an ideological hero, even one who was fighting the very institution of slavery. That is next to impossible. Many slaves wanted freedom for themselves, but all the evidence from ancient Rome suggests that slavery as an institution was taken for granted, even by slaves. If they had a clearly formulated aim, the best guess is that Spartacus and his fellow escapees wanted to return to their various homes – in Spartacus’ case probably Thrace in northern Greece; for others, Gaul. One thing is certain, though: they managed to hold out against Roman forces for an embarrassingly long time. 
What explains that success? It was not simply that the Roman armies sent out against them were ill trained. Nor was it just that the gladiators had discipline and fighting skills developed in the arena and were powered by the desire for freedom. Almost certainly the rebel forces were stiffened with the discontented and the dispossessed among the free, citizen population of Italy, including some of Sulla’s ex-soldiers, who may well have felt more at home on military campaign, even against the legions in which they had once served, than on the farm. Seen in these terms, Spartacus’ uprising was not only an ultimately tragic slave rebellion but also the final round in a series of civil wars that had started twenty years earlier with the massacre of Romans at Asculum that marked the beginning of the Social War. (pp. 248-249). 

Her view is, apparently, that there was a great deal of hype surrounding the revolt of Spartacus even among the ancients -- embellishment of the size and ideological purposes of the revolt and the heroism of the gladiators. She wants us to understand the ordinary significance of the uprising. But in turn she gives the revolt a larger social significance -- it was a part of the "civil wars" that had wracked Rome for twenty years prior.

From the tragic to the comic -- Beard spends a few pages on the bar scene in the early Empire.
Elite Romans were often even more dismissive – and anxious – about what the rest of the population got up to when they were not working. Their keenness for shows and spectacles was one thing, but even worse were the bars and cheap cafés and restaurants where ordinary men tended to congregate. Lurid images were conjured up of the types of people you were likely to meet there. Juvenal, for example, pictures a seedy drinking den at the port of Ostia patronised, he claims, by cut-throats, sailors, thieves and runaway slaves, hangmen and coffin makers, plus the occasional eunuch priest (presumably off duty from the sanctuary of the Great Mother in the town). And writing later, in the fourth century CE, one Roman historian complained that the ‘lowest’ sort of person spent the whole night in bars, and he picked out as especially disgusting the snorting noise the dice players made as they concentrated on the board and drew in breath through their snotty noses.  
There are also records of repeated attempts to impose legal restrictions or taxes on these establishments. Tiberius, for example, apparently banned the sale of pastries; Claudius is supposed to have abolished ‘taverns’ entirely and to have forbidden the serving of boiled meat and hot water (presumably to be mixed, in the standard Roman fashion, with wine – but then why not ban the wine?); and Vespasian is said to have ruled that bars and pubs should sell no form of food at all except peas and beans. Assuming that all this is not a fantasy of ancient biographers and historians, it can only have been fruitless posturing, legislation at its most symbolic, which the resources of the Roman state had no means to enforce.
Elites everywhere tend to worry about places where the lower orders congregate, and – though there was certainly a rough side and some rude talk – the reality of the normal bar was tamer than its reputation. For bars were not just drinking dens but an essential part of everyday life for those who had, at best, limited cooking facilities in their lodgings. As with the arrangement of apartment blocks, the Roman pattern is precisely the reverse of our own: the Roman rich, with their kitchens and multiple dining rooms, ate at home; the poor, if they wanted much more than the ancient equivalent of a sandwich, had to eat out. Roman towns were full of cheap bars and cafés, and it was here that a large number of ordinary Romans spent many hours of their non-working lives. (pp. 455-456)
There is much to enjoy and to reflect upon in Beard's narrative. It is a thought-provoking book. But it is worth asking -- what kind of history is SPQR? Essentially it is the product of a deeply learned historian, a distinguished classicist, who has set out to write an engaging narrative telescoping the history of a thousand years of Roman life into a single volume, necessarily providing a very selective set of stories and themes. It is not a detailed work of scholarship itself; rather, it is a selective narrative presenting description and commentary on some of the outlines of this world-historical tapestry. A large portion of the book takes the form of stories and snippets of ordinary life, intended to give the reader a more vivid engagement with the lives of these long-dead Romans. And the reader can bring a degree of confidence to the reading, knowing that Beard is a genuine expert bringing the most recent historical and archaeological evidence to bear on the main questions of interpretation. Many of the powerful themes that have interested observers for a century are there -- the social conflicts, the emerging institutions of governance, the arrangements of military power -- but none are treated in the detail that would be expected of a monograph. Instead, the reader is offered a story with many strands, interesting and engaging, but an appetizer rather than the main course for a thorough study of Roman history.