Taking Conceptual Analyses Seriously
Willis F. Overton
We do not take the conceptual component of our science as seriously as we take the empirical component. The reason for this failure to take conceptual analyses seriously is partially historical. Beginning with 19th-century positivism, followed by early 20th-century neo-positivism, and later conventionalism or instrumentalism, the Cartesian paradigm dictated the avoidance of any deep conceptual analysis. The reason is also partially due to inertia. Beginning in the 1950s the rules concerning "good science" underwent transformational changes, and these new rules brought conceptual analyses into science as constitutive, such that the conceptual and the empirical form a relational, indissociable complementarity. However, the structure of our PhD training programs generally continues to hold on to the outmoded anchors of neo-positivist and conventionalist doctrines. The author argues for the redesign of graduate training with the aim of moving toward parity between training in the conceptual components and training in the methods/statistical components of science.
My wish is that the behavioral sciences generally, and developmental science specifically, take conceptual analyses seriously. This wish has been framed by two observations that, though made some time ago, continue to be relevant. Wittgenstein (1953), in his Philosophical Investigations, argued that "in psychology there are experimental methods and conceptual confusion. … The existence of the experimental method makes us think we have the means of solving the problems that trouble us; though problem and method pass one another by" (p. 232). Robert Hogan (2001), in a more recent brief commentary titled "Wittgenstein was right," elaborated and updated Wittgenstein's sentiment, noting that "Our training and core practices concern research methods; the discipline is … deeply skeptical of philosophy. We emphasize methods for the verification of hypotheses and minimize the analysis of the concepts entailed by the hypotheses" (p. 27). But, Hogan continued, "All the empiricism in the world can't salvage a bad idea" (p. 27).
The refusal to take conceptual analyses seriously and the impact of this refusal can be demonstrated through many examples, some of which I explore in a recent publication (Overton, 2015). In this brief space one illustration will suffice. In a recent book titled Misbehaving Science, Aaron Panofsky (2014) explored the question of how the field of behavior genetics, and particularly its reductionist stance, has survived decades of devastating conceptual critiques. Although his answer is complex, its core is that the behavior geneticists have been able to focus attention on tractable empirical and technical critiques, and have avoided responding to deeper epistemological issues. In other words, they avoid any conceptual analysis and pay no penalty for this error of omission. As Panofsky said, "Behavior geneticists did not convince their opponents, settle controversies, and resolve the critiques of their paradigm, instead, they buried their opponents under a pile of repetitive results" (p. 145). Further, Panofsky argued that even the peer review process conspires to facilitate the avoidance of conceptual analysis: "Peer review is particularly constrained in its reach or effectiveness. In behavior genetics it tends to become focused on a narrow range of technical matters while deeper critical questions are not raised" (p. 147).
Taking conceptual analyses seriously entails demonstrating in any research project as strong a concern for epistemological and ontological issues as for issues of study design, sample characteristics, measurement, data collection, and statistical analysis. Deep conceptual issues need to be understood as integral to, and constitutive of, the entire research process, not as peripheral addenda or convenient heuristics.
HISTORICAL REASONS FOR THE MARGINALIZATION OF CONCEPTUAL ANALYSES
In advancing this wish it is important to understand why conceptual analyses have been and, to a significant extent, continue to be marginalized and trivialized in our science. It began with the 18th-century radical empiricist movement's (John Locke, David Hume) insistence that reason be split off from pristine sensory experience and understood as derivative: a mindless inductive generalization from that sensory experience. The splitting of reason and pristine observation was itself predicated on René Descartes' earlier introduction of the very idea of splitting, and was further advanced by Isaac Newton's introduction of "mechanical explanation" (i.e., reduce the phenomenon to its foundational fixed atoms, observe the forces acting on the atoms, induce the law) as well as his assertion that he formed no hypotheses, as the laws of nature stood forth as observed correlations (Prosch, 1964). Thus, what was formed in the early period of what came to be known as the Cartesian-split-mechanistic paradigm (Lakatos, 1978; Overton, 2015) was a view according to which conceptual clarification through conceptual analysis was replaced by the reduction of concepts to pristine observation. The irony in all of this reduction is that a great deal of conceptual analysis went into the denial of the value of conceptual analyses.
The reductionist theme continued through Auguste Comte's 19th-century articulation of "positivism" and its elaboration in the works of John Stuart Mill, Richard Avenarius, and Ernst Mach (Overton, 1998). This original form of positivism faded toward the end of the 19th century, due primarily to the criticisms of the neo-Kantians (von Wright, 1971). However, between the two world wars a new form of positivism arose as a part of analytic philosophy. This new form, termed neo-positivism, logical positivism, or logical empiricism (all equivalent terms), was developed by the group of philosophers known as the Vienna Circle, which included Moritz Schlick (its organizer and personal center), Rudolf Carnap, Philipp Frank, Kurt Gödel, Hans Hahn, Gustav Bergmann, and Herbert Feigl, among others.
This neo-positivist focus on reductionism as analysis and inductive logic as synthesis led to the postulation of two complementary criteria designed to demarcate science from nonscience. The first criterion was that a proposition (e.g., a concept, a hypothesis, a theoretical statement) was acceptable as scientifically meaningful if and only if it could be reduced to words whose meaning could be directly observed and pointed to. "The meaning of the word must ultimately be shown, it has to be given. This takes place through the act of pointing or showing" (Schlick, 1991, p. 40). The phrase "whose meaning could be directly observed" constituted a "neutral observation language," completely objective and free from subjective or mind-dependent interpretation. Thus, all theoretical language required reduction to pristine observations framed by the neutral observational language. The second, complementary criterion of demarcation was that a proposition was scientifically meaningful if and only if it could be shown to be a strictly inductive generalization, drawn directly from the pristine sensory observations. Therefore, a scientifically meaningful universal concept was nothing more than a summary statement of the pristine observations themselves.
It must be quite clear that the principles of neo-positivism negate any possibility of taking conceptual analyses seriously, and though neo-positivism ultimately died under a barrage of withering criticism (reviewed in Overton, 1998), its ghostly hand continues to shape the minds of many behavioral and developmental scientists. The case for conceptual analysis improved only minimally with the advent in the 1950s of what has come to be called "conventionalism" or "instrumentalism," or sometimes "postpositivism," which is best exemplified in the work of Karl Popper (1959). Conventionalism, like neo-positivism, maintains that ideally all scientific knowledge should be based on pristine observations. However, it permits into science, as a kind of stop-gap measure, theoretical terms (theories) that are not reducible to such observations. The feature of fundamental importance here is that theoretical terms and propositions are allowed to operate only as convenient and conventional ways of ordering and organizing hard data (i.e., pristine observations). A fundamental assumption of conventionalism is that these theoretical concepts do not influence the data base itself; rather, they operate like pigeonholes or coat racks to classify, arrange, and organize hard data into coherent units. Thus, theories generally serve a merely heuristic function. This attitude is nicely captured in a statement the behaviorist B. F. Skinner (1971) made, that "no theory changes what it is a theory about" (p. 206). In this context any conceptual analysis might be nice, but it would hardly be necessary or taken seriously as integral to the science.
CONTEMPORARY ARGUMENTS FOR THE CENTRALITY OF CONCEPTUAL ANALYSES
Were this the end of the historical narrative, my wish that conceptual analyses be taken seriously would be simply silly, or a fantasy. Scientists would, as many do, merely continue to admonish each other, and their students, to do "good science by following the scientific method," where "scientific method" is often given the simple (minded?) Wikipedia-like definition of "doing experiments and letting reality speak for itself." Fortunately, at least for my wish, transformational events in the history, philosophy, and sociology of science also began in the 1950s and continue today. Major contributors to the transformation of how science is understood included, among many others, Hans-Georg Gadamer (1989, 1993), N. R. Hanson (1958), Thomas Kuhn (1970, 1977), Imre Lakatos (1978), Bruno Latour (1993), Larry Laudan (1977), Hilary Putnam (1983), Paul Ricoeur (1984), Stephen Toulmin (1953), and Ludwig Wittgenstein (1958).
At the core of this transformed, more relational, understanding of science is the idea that nonreducible concepts do, in fact, enter the scientific process as indissociable, complementary features of that process, not as merely heuristic devices. Nonreducible concepts are as necessary to the scientific process as are empirical observations; concepts and observations form a relational complementarity (←→). This core idea has been expressed in many forms, including Hanson's (1958) fortuitous phrase "all data are theory laden," Taylor's (1995) "background ideas," Wittgenstein's (1958) "language games," Gadamer's (1989) "preunderstanding," Kuhn's (1970, 1977) "scientific paradigm," Lakatos' (1978) "scientific research program," and Laudan's (1977) "scientific research tradition."
THE ROLE OF THE METATHEORETICAL IN TAKING CONCEPTUAL ANALYSES SERIOUSLY AND THE STRUCTURE OF SCIENTIFIC DISCOURSE
In my own work (e.g., Overton, 1998, 2015) I have tried to demonstrate the necessary centrality of universal bedrock concepts and the relational to-and-fro movement between these concepts and empirical observations. In these demonstrations I rely on the notion of the "metatheoretical" to capture concepts whose scope is broader than any particular theory, and which form the essential conceptual core within which scientific theory and observation function. Any broad scientific research program is composed of coherent nested layers of metatheoretical concepts, along with the specific theories they contextualize, and the empirical hypotheses drawn from the theories. Figure 1a illustrates this structure of scientific discourse, including the empirical observational/experimental level. Metatheories of the broadest scope are generally termed worldviews; Lerner (2015) and I (2015) have generally termed these "paradigms." Figure 1b illustrates two contrasting paradigms, the Cartesian-split-mechanistic worldview and the process-relational worldview, each of which contains the bedrock epistemological and ontological commitments of a paradigm. For example, the classic Cartesian-split-mechanistic worldview is ontologically committed to a fixity of substance, where change is considered the result of mechanical forces. On the other hand, the process-relational worldview (see Lerner, 2015, and Overton, 2015) is committed to an ontology of change. Epistemologically, the Cartesian paradigm is committed to reductionism, whereas the process-relational paradigm is committed to a relational holism.
Nested within and narrower than worldviews, but more general than specific theories, are metatheories of a middle range (see Figure 1a; also termed "metamodels"). These conceptual systems are consistent with, but less general than, worldviews, and entail principles that are identifiably more specific to the observational domain of interest. Thus, for example, nested within the Cartesian paradigm a middle-range metatheory represents the human organism as an input-output computational recording device. In contrast, nested within the process-relational paradigm is a middle-range metatheory that Lerner (2015) and I (2015) have referred to as the relational-developmental-systems metatheory, or metamodel (see Lerner, Agans, DeSouza, & Hershberg, 2014; Overton, 2014), which ...