1
Digital Society’s Technological Network
From Saying Goodbye to Analogue to Intelligent Automation
Josep-Lluís Micó-Sanz, Berta García-Orosa and Eva Campos-Domínguez
Introduction
The comparison between the tradition of the printing press and the innovation of the Internet is as obvious as it is recurrent: both are mechanisms for transmitting information that established a new order by multiplying and democratizing the scope of previous production methods. Nevertheless, it seems more appropriate to equate the appearance of the web with the discovery of a new continent: a milestone that has had continuity and projection, quite unlike the moon landing, which had neither. At the time, both the Internet and the discovery of America were emerging worlds, novelties that expanded a hitherto almost closed and exhausted universe. The Internet is not physical, but few would disagree that this innovation, however intangible, has conspicuously broadened the field of action of individuals, companies and administrations.
In the same way as in the 15th century, when Christopher Columbus arrived at Guanahani in the Bahamas, first, millions of virtual navigators set out to explore while keeping the same ethical and moral basis they had followed until then; secondly, there have been those who adapted their beliefs, attitudes and habits to the new reality; and a third group was stripped of its former values in favour of incoming ones, never seen before (Chalmers, 2010, pp. 7–65). Nevertheless, there have been other, even more profound adaptations, such as those of the fourth industrial revolution, or Industry 4.0 as it is being called, with cyber-physical systems that interface and combine infrastructures with computer software, sensors, nanotechnology and so on to form a super-sphere that surrounds us completely (Eden, Moor & Soraker, 2013). This is what we might call digital absolutism: the structure, often invisible, that constantly surrounds us from when we get up until we go to bed: at home, on the street, at school, at work, when we go shopping, while we travel … (Kurzweil, 2000).
In its 2013 government agenda, Germany established the so-called Industry 4.0 as a high-tech strategy. The deployment of robotics, artificial intelligence, the Internet of Things, 3D printing, cloud computing, big data, Blockchain … has led other countries to adopt the same standpoint. The Internet, regarded as a new continent, has grown and branched out in so many directions that it pervades everything (Akhtar, Khan, Rao-Nicholson & Zhang, 2016, pp. 7–20). This transfiguration is not easy to explain, nor are its social, economic, political and cultural connections easy to manage (Dubravac, 2015, pp. 229–258).
The combination of fourth industrial revolution technologies is giving rise to a generation of flexible and receptive companies that do not take decision-making lightly (Ahmad, Basir & Hassanein, 2008, pp. 321–364). Industry 4.0 is bringing in just as many business modifications as strictly digital ones. International references such as Google, Samsung, Sony, LG and Huawei link artificial intelligence with the Internet of Things, robotics with analytics, deep learning with Blockchain, and big data with cloud computing (Galli, 2019, pp. 53–72). The budget needed to advance in these areas, and the breadth of the actions through which the transformation spreads, force us to look at business differently than we conventionally have (Saphiro, 2013).
Data for Industry 4.0
Data and artificial intelligence are the building blocks of the fourth industrial revolution. Information is the raw material of the digital economy and the fuel of machine learning. Organizations from all sectors use these advances to boost their current and future growth. Not surprisingly, machine and deep learning are expected to increase return on investment by up to 30% (Broussard, 2015, pp. 814–831; Micó, 2018, pp. 137–138).
Few notions give rise to more reflection than this: thanks to digital technology, there is now more information available than ever, instantly and anywhere. Of course, this comprises not only good, useful data but also useless, annoying data. Unethical “bad data” practices include the mass gathering of data about citizens and the increased use of that data in unaccountable and discriminatory forms of algorithmic decision-making (Mann et al., 2018). Continuous radio broadcasting and television programming are amplified by mobile telephones, the Internet of Things and virtual reality (Geraci, 2010). The postmodern identification that crosses barriers manifests itself on the Internet when the same channel transmits a criticism of the government alongside the latest hit song. The first industrial revolution came at the end of the 18th century; driven by the steam engine, it brought with it factory mechanization. The second revolution happened one hundred years later; this time it was electricity, which fostered the division of tasks and mass production. Yet another century later, with the third revolution and information technologies, the automation of these tasks was completed (Napoli, 2014, pp. 340–360). The fourth has popularized drones, driverless vehicles, smart homes and cities, virtual assistants and all kinds of robots (Hammi, Khatoun, Zeadally, Fayad & Khoukhi, 2018, pp. 1–13). What place do people occupy in this new landscape?
Most people in many countries live on the web. It is one of the privileged means they use to communicate with friends, relatives and colleagues. In addition, it is the support through which professionals manage and carry out both public and private tasks. They do so on mobile devices, on home and work computers, with consoles, with connected devices such as watches and cars, and so on. In spite of this reality, most of these people have never received, nor will ever receive, any type of training on these platforms. One day they simply started to use these devices and move around this environment (Kurzweil, 2005). While they got some decisions wrong, they were in parallel discovering possibilities and solutions to the problems that confronted them. Almost no one received help in interpreting what the Internet is or might be, what companies and institutions want from it, how best to connect with other private users, why it is appropriate to be part of certain communities or why it is imprudent to grant certain permissions. The ethical and moral basis of the physical world – offline – is just as valid on the web. However, this in itself is not enough. Neither schools nor universities nor administrations, associations or families resolve this deficiency. In the end, everyone is self-taught on the web.
Many inventions – the bow, the pulley, the compass, reading glasses, vehicles, the steam engine, the cotton gin, asphalt, Henry Ford’s Model T, lifts, structural steel, the atomic bomb, the personal computer, the smartphone … – had an impact that went beyond the activity for which they were originally designed. The history of technology is full of unexpected consequences (Scribner & Cole, 1973, pp. 553–559). Quite often the destruction caused by these artefacts and machines in the cultural, economic and political surroundings in which they emerged was far greater than the beneficial results of their initial use. Ironically, investors in technology centres such as Silicon Valley in the United States have a name for them: killer applications. Today, we could define them as products or services that establish a new category and supplant now-outdated advances: they destroy and remake entire industries, bringing confusion to their competitors, to market regulators and even to their future customers and users (Lanier, 2013, p. 2). Big data, artificial intelligence, robotics and the Internet of Things – the connection to the network of all kinds of objects that have the capacity to be linked digitally to one another – are currently in the public eye, viewed either condescendingly or critically depending on the observer’s standpoint, which ranges widely from irrational technophobia to uncritical technophilia (Bostrom & Cirkovic, 2008). In this sense, as early as the year 2000, Red analysed this period to explore how the personal computer was successfully introduced into the middle-class family and was transformed from a frightening and distant (Cold) war machine into a socially (and family) “friendly” machine between the 1960s and 1990s.
Experience accumulated over the centuries shows that killer apps tend to create fabulous wealth and revitalize areas that were previously paralysed, but the regeneration they drive can also go hand in hand with uncontrollable devastation. The speed and trajectory of the fourth industrial revolution are provoking more frequent – and more harmful – reactions than first-generation technologies did.
The People–Machine Relationship
The computing capacity of the human brain is limited by two primeval requirements: survival and procreation. That is to say, our hardware – our physiology – and its built-in software – our psychology – needed to develop only far enough to allow a limited set of elementary actions: distinguishing between friends and foes, finding the appropriate place in the social hierarchy, obtaining food, and finding a partner to perpetuate the species (Moravec, 1999). From an evolutionary point of view, everything that goes beyond this limited repertoire might be regarded as redundant (Asaro, 2008, pp. 50–64). The human brain developed in response to these challenges, typical of the African savannah, to the point that it contains an average of one hundred thousand million neurons with seven thousand synaptic connections each. It was not necessary to continue evolving because, at that time, far from the demands of the present day, our brains were not really needed for anything else (Boden, 1998, pp. 347–356).
We can say that today, more or less, people co-exist with machines that, thanks to those same humans, can learn almost without limit, given the right equipment. Computers and algorithms are at the centre of the global economy: in the production of goods, in the management of services, in transport and logistics, in security, in healthcare, in education, in information, in entertainment … (Dörr, 2016, pp. 700–722), with companies such as Google, Facebook, Apple, Amazon, Microsoft, and so on. Many of the professionals who take part in leading international congresses such as the Mobile World Congress in Barcelona have taken on board that computers are already doing what people could never have dreamed of. In this respect, traumas and complexes are as dangerous as unbridled enthusiasm. The most sensible solution is to assimilate naturally what, in both substance and essence, is artificial (More & Vita-More, 2013).
The technological impulse is not a guarantee of freedom, democracy or quality. For example, let us consider the conspiracy being concocted to eliminate messengers and intermediaries; in this case, teachers and journalists are dispensed with in favour of direct communication and the incessant flow of diverse material (Carlson, 2014, pp. 416–431). However, without the intervention of these go-betweens, it would be impossible for data to be converted into information, for that information to produce knowledge, and for that knowledge to be consolidated as wisdom (Wilson, 1998, p. 294). The explanation remains qualitative. In certain spheres of society, it is imperative that there are trained professionals who position and evaluate what circulates in cyberspace and what hops from the real world into the virtual one (Broussard, 2015, pp. 814–831).
In 2004, Tim O’Reilly used the term Web 2.0 to refer to an environment dominated by applications that enhance collaboration and the sharing of information. Two years later, Twitter came on the scene, now the most popular micro-blogging service in the world. Similar media and networks such as Facebook, YouTube, Instagram, Snapchat, and so on have since grown exponentially (Flew, Spurgeon, Daniel & Swift, 2012, pp. 157–171; Boyd, 2014, p. 24). It took radio almost four decades to muster 50 million listeners in the United States. For television, this milestone took just 13 years. Facebook, Mark Zuckerberg’s popular platform, reached 100 million users in just nine months. The virtual space has shattered the monopoly of communication companies and political agents when it comes to producing and distributing content (Anderson, 2013, pp. 1005–1021). Classic calculations about who – and how and when – generates such data and who – and how and when – consumes it are no longer valid.
The proliferation of Web 2.0 led to the transformation of the roles of both producers and consumers (Dusi, 2017). Several authors consider that, after the technological bubble of the year 2000, the digital business environment needed to restore confidence in the digital economy, and that Web 2.0, together with social networks, created this new climate of trust. Fuchs (2014), one of the authors who has studied the prosumer the most, links the proliferation of participatory web technologies and mobile platforms to the need to reinvent the digital economy.
When people themselves enter into action, conventional rules and protocols lose validity (Smajuk & Zanutto, 1997, pp. 63–129). In 2008, Barack Obama’s successful election campaign was a turning point (Farrell, 2012, pp. 35–52; Bimber, 2014; Lilleker, Tenscher & Stetka, 2015). The use of technology in campaign strategies is growing, and it is already common in political contexts as varied as Mexico (Orcutt, 2012), Venezuela (Forelle et al., 2015), Colombia (López-Urrea, Páez-Valdez & Cuellar-Rodríguez, 2016), the United Kingdom (Murthy et al., 2016) and the United States (Bessi & Ferrara, 2016).
Nevertheless, people at large are still far from joining the public debate on equal terms. Only the most dedicated – or, at least, the most motivated – live in the hope that, with some luck, one day they might talk directly with their leaders.
Community and Culture
In general, we can affirm that every community has a culture, but there are notable differences between cultures and communities. The former is a series of perceptions, conventions, language, history and similar concepts. It is compiled and stored in collective memory, books, songs … and also in web pages. Culture can be learned, although some collectives consider it necessary to h...