Every snowflake is unique, as is every wave in the sea.
Yet all snowflakes, like all waves,
are governed by the same immutable laws of nature.
Human beings are like this too.
(Marinoff 2003)
In social science, a longstanding debate has developed around the methodological choices that drive empirical research, as Marinoff’s epigraph poetically evokes (Marinoff 2003). Much has been written about what should count as the realm of observation, what constitutes a unit of analysis, how we should measure it, how to abstract these measurements into formalised schemas that can represent social reality, and how to build theories that can be transposed across settings and explain or predict social phenomena. Luckily for someone who wants to write a book about mixed methods, these problems are far from solved, leaving space for more discussion and, possibly, a small step forward in the debate. Unluckily for science, though, the argument has also witnessed some harsh and fiery moments, building boundaries around methods and disciplines that are sometimes so hard to cross that they seem more like solid walls than matters of scientific discussion.
Network science has always spanned boundaries, both disciplinary and methodological. Despite being stereotyped as a hard, mathematical, arid and abstract quantitative approach, or alternatively as a fancy way to visualise data, this perspective has long proved fertile and versatile across many fields and in combination with many methods.
Network science is a scientific approach with clear definitions of what needs to be studied and how to study it: it is defined by the type of data it aims to analyse (Brandes et al. 2013) in order to discover the mechanisms that regulate empirical phenomena. Being empirical, network science is not simply an abstract and formal exercise in logic and mathematical thinking: it produces theories, but these emerge from observations and are in turn tested against them. And the work of observation and abstraction, especially when dealing with the complexities of the human world, is a challenging task. Sometimes network science observes empirical objects with definable properties, for which the process of formalising and testing theories is relatively straightforward. But more often, social scientists do not know much about the phenomena they want to explain; they thus need to dedicate hard and detailed work to describing such phenomena before they can attempt any measurement and formalisation.
As such, network science is no different from any other science. In a similar way, physics needs to dedicate many resources to measuring the trajectories of invisible particles; astronomy requires many hours to observe untouchable objects; and archaeology has to deal with sparse and incomplete sources of information. Once they have robust observations, these sciences can build theories, and once they have theories, they have to show empirically that those theories can withstand attempts at falsification.
Thus, what is unique to network science is not its method of enquiry but the specific assumptions that define the field and the type of empirical data to be observed. That is, network science is interested in studying associations, dependencies and relations. Whatever the phenomena it wants to explain, the foundational elements that characterise them need to be related to each other, and the pattern of those relations is what distinguishes them and what is interesting to study (Brandes et al. 2013). Defining, observing, measuring and modelling associations and dependencies is not an easy task. But the main implication of this assumption is that we cannot consider the world (any kind of world: the physical and the social, the past and the present, the one we can directly observe and the one we can only indirectly infer) as a linear aggregation of independent elements whose properties sum up into a meaningful whole. Meaning emerges from the relational texture, and we need theories and methods that allow us to explain such texture.
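To make this relational premise concrete, consider a minimal sketch contrasting a variable-based representation, in which actors are independent rows of attributes, with a relational one, in which what we record are the ties among those same actors. The example is purely illustrative: the actor names and ties are hypothetical, and Python with the networkx library is used only as a convenient notation, not as anything drawn from the sources cited above.

```python
# A minimal, hypothetical sketch of the contrast drawn in the text:
# attribute data treat actors as independent rows, while relational data
# record who is connected to whom, so that structure itself becomes measurable.

import networkx as nx

# Variable-based view: each actor is an independent row of attributes.
actors = {
    "Anna":  {"age": 34, "occupation": "teacher"},
    "Bruno": {"age": 29, "occupation": "nurse"},
    "Carla": {"age": 41, "occupation": "engineer"},
}

# Relational view: the same actors, but what we record are the ties among them.
G = nx.Graph()
G.add_nodes_from(actors)
G.add_edges_from([("Anna", "Bruno"), ("Bruno", "Carla")])

# Properties that exist only because of the pattern of relations,
# not as a sum of individual attributes.
print(nx.density(G))                         # how connected the group is overall
print(dict(G.degree()))                      # each actor's number of ties
print(nx.shortest_path(G, "Anna", "Carla"))  # indirect reachability via Bruno
```

Nothing in the attribute table tells us that Anna can reach Carla only through Bruno; that fact exists solely in the pattern of ties, which is precisely the kind of property that an aggregation of independent elements cannot recover.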
Defining associations and dependencies entails taking into account that elements are interlocked in a spatial way: they form networks of relationships by virtue of being connected with each other synchronically. However, they are also interlocked in a temporal way: what happens in the present depends on what happened in the past and influences what will happen in the future. Interlocking mechanisms are therefore highly contextual. Despite this localised nature, quantitative research in social science generally studies mechanisms by developing either formal models that are too generic to take context dependencies into account or statistical approaches that try to fit data obtained from several contexts into a single model at a time (Edmonds 2012). Against these tendencies, qualitative research has had the invaluable merit of bringing local properties into focus. However, as we will see in Chapter 2, the fruitful debate between general models and contextualised observations has historically drifted toward a paradigmatic opposition claiming the ontological, epistemological and methodological incompatibility between natural and social sciences on the one hand and between quantitative and qualitative methods on the other.
This book has the ambitious aim of overcoming this unfruitful drift. Instead of simply discussing the technical requirements that arise when methods are mixed in social research, it rejects incompatible paradigmatic stances and looks at the peculiar ontological, epistemological and methodological dimensions of network science. It starts from the observation that network analysis is not, strictly speaking, a quantitative method. Although it formalises data into numbers, it radically departs from the classic statistical approaches of categorical and variable analysis, with...