Introduction
This chapter reflects on the granular nature of discreteness that results from the application of extensive computational power (for example, in the case of big data) to digital processes, understood as a logical evolution of the most recent computer-based advancements.
The numerous studies on the notion of post-Cartesian mathematics have been beneficial for the development of the idea that any system (from the social to the mathematical) can be seen as a discrete entity characterised by the aggregation of its elements. Such discrete systems are defined by the modularity (and equipollence) of their parts: every element is considered as important as every other, with no differentiation in role, weight or impact on the overall system. Following a statistical approach, every element counts for the same as every other within the totality of the system. If N represents a generic element, in all discrete systems we can observe that N₁ = N₂ = N₃ = … = Nₙ, where n is the (possibly infinite) number of discrete elements that constitute the system.
The substantial difference that the use of big data brings to this discussion is that, in scenarios where large datasets are involved, elements in a system are unique and therefore different from each other. The system is still describable as the summation of its parts but, contrary to approaches based on the notion of the model, the final description of the system is a richer, variable and multifaceted entity that embodies every idiosyncrasy of each individual element. By the same token, in a big data discrete system we have N₁ ≠ N₂ ≠ N₃ ≠ … ≠ Nₙ, since every N is unique.
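The contrast can be made concrete with a minimal Python sketch (the values and the system size are illustrative assumptions, not drawn from the text):

```python
import random

# Model-based discrete system: elements are equipollent (N1 = N2 = ... = Nn),
# so the whole is fully described by one representative value and a count.
modular_system = [1.0] * 1000
assert len(set(modular_system)) == 1  # one value stands for every element

# Big-data discrete system: every element is unique (N1 != N2 != ... != Nn),
# so the description must retain each individual element's idiosyncrasy.
random.seed(0)  # reproducible illustration
unique_system = [random.random() for _ in range(1000)]
assert len(set(unique_system)) == len(unique_system)  # all elements distinct
```

In the first case a single value and a multiplier exhaust the description of the system; in the second, nothing short of the full list of elements does.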
This chapter describes the rise of discrete logic in general, and its gradual evolution as a consequence of the increasing use of approaches that exploit the growing computational power and capacity of today's computers. These changes are then used, in Chapter 2, to observe how the public realm is changing when public space is considered as the aggregate form resulting from the uniqueness of individuals, in opposition to the statistical concept of the model.
The atomists and the synechists
Around 400 BC, Democritus and his mentor Leucippus elaborated the first known atomic theory, conceptualising the entire world as made of indivisible elements that move freely within a limitless space. Atoms are indestructible and constitute the “elementary grain of reality” (Rovelli 2018:8). As the smallest possible fraction of any object in the world, atoms are finite in number and size. Democritus’s theory can be considered a solid starting point for the idea that the world (and, by extension, the universe) is not a continuous entity but is characterised by granularity (Bailey 1929; Pyle 1995). His view of the world can also be seen as the origin of the tension between the ideas of continuity and discreteness that persists today. Before Democritus, when the idea of indivisible finite elements was not available to humankind, the world and all phenomena were conceptualised as a sequence of events that fluidly connected to each other, perhaps related by extra- or intra-terrestrial forces that bound them all. Aristotle offers a good example of this: in the Categories (Ackrill 1961) he provided definitions for the notions of continuous and discrete elements, understood as quantitative attributes of objects. Objects for which a point of contact with other objects, namely a beginning and an end, is identifiable are continuous (Bell 2005:21). For example, a line whose two extreme points are easily found is continuous, and can therefore be joined up with other lines. By the same token, surfaces have edges (lines) as contact elements, and bodies have surfaces at which points of contact can be found. Aristotle includes time and space in this category too: the present is in constant contact with the past and the future, and objects in space can have common boundaries at which they can be joined. Where contact points are impossible to find, objects are characterised by discrete quantity. This can be seen in numbers or in language, where a number is considered an individual entity with no possibility of joining other numbers; even when numbers appear in a sequence, they must by their nature be regarded as separate entities.
The atomistic view of the world proposed by Democritus and Leucippus and the existence of continua, of which Aristotle was one of many promoters, share a common interest in the notions of divisibility and the infinite. For the atomists, any object in the world is divisible into smaller elements a finite number of times, until one reaches the smallest and irreducible elements: the atoms. This view implies the idea of a plurality contained within each object. For the advocates of continua, objects are divisible ad infinitum and without limit, a view that elevates the idea of unity over that of plurality. However, the distinction between elements that are infinitely or finitely divisible adds another layer of complexity to the debate between the two positions. Zeno’s paradoxes provide an illustrative example here, in particular the Achilles and the tortoise puzzle: according to Zeno’s rigorous logic of dividing the distance between the two an infinite number of times, Achilles will never be able to reach the tortoise, as he would require an infinite time to do so. Rovelli (2018:14) explains this paradox by discussing the example of the length of a piece of string. If we imagine cutting a piece of string into two halves, then cutting each piece into two halves again, and so on ad infinitum, we obtain an infinite number of pieces of finite length. Their sum will be equal to a finite length of string, namely the original length before the experiment. An infinite number of pieces will result in a string of finite length and, by the same token, an infinite number of time divisions will be equal to a finite time. This is easily representable through the notion of a convergent series, whereby the sum of an infinite number of values converges to 1.1
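Rovelli’s string-cutting experiment corresponds to the familiar geometric series; stated as a worked equation (standard mathematics, added here for illustration), the repeated halving of a string of unit length gives:

```latex
\sum_{k=1}^{\infty} \frac{1}{2^{k}}
  = \frac{1}{2} + \frac{1}{4} + \frac{1}{8} + \cdots
  = 1
```

Each cut halves what remains, and the partial sums never exceed the original length, which is why an infinite number of divisions can still amount to a finite whole.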
The contrast between the two interpretations of the world endures to this day, and many mathematicians, thinkers and philosophers have contributed for centuries to a more sophisticated understanding of the two camps. In particular, a number of positions are helpful for the discussion of the theoretical framework presented in this chapter. In the fifteenth century, Nicolaus Cusanus (Idiota de mente/The Layman on Mind 1450) proposed an in-between position whereby it is possible to divide a continuum on two levels: in the ideal sense, division progresses to infinity, while in the actual sense a finite number of divisions concludes in a large number of atoms (Bell 2013). Boyer (1956:91) associates Cusanus’s notion of the actual infinite with the quadrature of the circle, whereby a circle is approximated by a mathematical series of polygons that approach its curvature. This experiment is explained in more detail in the following section. Almost two centuries later, Gottfried Wilhelm (von) Leibniz explored the notion of continua and their divisibility in his work The Labyrinth of the Continuum (2001). For Leibniz, there are entities that are neither single elements nor aggregations of several. Starting from the axiom that each real entity is either a single one or an aggregation of many, he noticed that there are a number of cases where continua cannot exist. The line, for example, can be considered as the repetition of several dimensionless points, and Aristotle had already demonstrated that points cannot produce a continuous entity (Bell 2013). This led Leibniz to conclude that continua do not belong to the realm of real entities and that they are, by their nature, ideal. Continua thus fall outside the initial axiom and do not have to obey the principle of being either a simple entity or an aggregation of simple entities. By this move, Leibniz was able to discern ideal entities, such as lines or space and time, from real entities such as matter, which are purely discrete, i.e. made of monads, single substances with no further parts and therefore indivisible (Leibniz 1989), or of a multiplicity of them (compounds, in Leibniz’s terms).
At the other end of the scale we find the synechists: those who consider the world to be truly a continuum. Peirce derived the term synechism from the ancient Greek synechismos (from syneches, continuous) to refer to the interpretation of the world that
exists as a continuous whole of all of its parts, with no part being fully separate, determined or determinate, and continues to increase in complexity and connectedness through semiosis and the operation of an irreducible and ubiquitous power of relational generality to mediate and unify substrates.
(Esposito 2007)
The limit of series in topological space
Within the context of this study, the notion of liminal spatiality stems from the idea of approximation and the limit of geometries (Cornu 1991; Edwards 2012). This notion was first identified by classical Greek mathematicians as the outcome of the application of the method of exhaustion. In Euclid Book XII,2 proposition 2, there is a clear account of the extent to which this method can be used to approximate a circle by inscribed polygons. A demonstration of the emergence of the liminal space as the difference between the area of the circle and that of the inscribed polygon can be found in Casselman (2003), where Figure 1.1 is also provided.3
McFarlane (2004) provides an extensive description of the development of the concept of exhaustion in relation to the ideas of the limit and the potential infinite. He traces these notions back to Aristotle, Eudoxus and Archimedes, whose work provides a powerful method for approximating the area of a circle (understood as a polygon with an infinite number of sides) by calculating the areas of the polygons inscribed in the circle using triangulation.
The area of the circle can thus be found with any desired precision by selecting a sufficiently large value of n and calculating the areas of the two polygons. This method, however, does not provide a precise value for the area of the circle. To arrive at a precise formula for the actual area of the circle, one would need to take n equal to infinity. But this would require one to add up an infinite number of triangles, which is impossible.4
(McFarlane 2004)
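McFarlane’s procedure can be condensed into a closed form: the following is the standard formula for the area of a regular n-sided polygon inscribed in a circle of radius r (added here for illustration, not quoted from McFarlane):

```latex
A_{n} = \frac{n}{2}\, r^{2} \sin\!\left(\frac{2\pi}{n}\right),
\qquad
\lim_{n \to \infty} A_{n} = \pi r^{2}
```

For any finite n the residual area πr² − Aₙ remains strictly positive: it is precisely this residue that the following discussion treats as liminal space.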
Another way to visualise the emergence of liminality is through a simple piece of code, where the computational power of a computer automates the process of increasing the number of sides of a polygon approximating a circle. Figure 1.2 was obtained using Python for Rhino, where a loop ran a number of iterations, generating a series of polygons with an incremental number of sides, from 4 to 50. The more sides the polygon has, the closer to a circle it is, and the less evident the difference between the two becomes. The system allows the number of sides of the polygon to be increased at will, so that the polygon can theoretically come incrementally closer to the circle ad infinitum, yet never coincides with it.
Figure 1.1 The surface between the circle and the polygon represents the excess space between the two
Figure 1.2 Code-generated incremental approximation of polygons to the curve (entire generation on the left-hand side, and a close-up of the approximation on the right)
The space generated at each iteration between the polygon and the circle underpins, here, the idea of liminal spatiality. The more sides the polygon acquires in the process, the smaller the area between the circle and the polygon becomes. By the same token, the higher the resolution of the polygon (namely, the more information used to define it), the smaller and more accurate the liminal space will be.
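The original Rhino script is not reproduced here; the following plain-Python sketch (the function name and unit radius are my own assumptions) reproduces the same loop numerically, using the inscribed-polygon formula given above instead of drawing geometry:

```python
import math

RADIUS = 1.0  # unit circle; any radius behaves the same way

def inscribed_polygon_area(n_sides, radius=RADIUS):
    """Area of a regular polygon with n_sides vertices inscribed in a circle."""
    return 0.5 * n_sides * radius ** 2 * math.sin(2 * math.pi / n_sides)

circle_area = math.pi * RADIUS ** 2

# Increase the number of sides from 4 to 50, as in the Rhino experiment,
# and measure the residual ("liminal") area left between circle and polygon.
for n in range(4, 51):
    residual = circle_area - inscribed_polygon_area(n)
    print(f"n = {n:2d}   liminal area = {residual:.6f}")
```

Running the loop shows the residual falling from roughly 1.14 for the square to under 0.01 at fifty sides (well below 1 per cent of the circle’s area), while remaining strictly positive at every step, which is exactly the behaviour described above.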