The Information Challenge
Introduction
The tsunami of big data flooding through organisations is overwhelming us. Organisations, processes and decision-making commonly adhere to norms developed when instant messaging meant sending a telegram and the telephone was a rare and exotic instrument. We are beyond the once seemingly impossible idea of the paperless office (rarely realised) and are heading towards a world of blockchain (Tapscott, 2018): distributed systems under no single ownership, with data stored securely and anonymously (or so it is claimed). Emerging generations of technology will exacerbate the organisational, informational, practical, legal and moral challenges we face both in sustaining organisations and in preserving some semblance of privacy for the individual. We cannot continue as we are and expect a good outcome.
The revolution in our organisations since the 1940s has been technological, not informational. The development of data processing machines from Colossus in 1943, the first digital programmable computer, to powerful and highly portable devices, contemporary communications capability and emerging fifth generation wireless has been astounding. Jackson (2015), citing Forrester Research, highlighted how the growth in our ability to process, retrieve, store and transmit data is stupendous, while our collective ability to make effective use of it lags far behind: '90% of data stored within a company is not available for analysis'.
While, as Kaplan suggests (2015), we are seeing the emergence of 'a new generation of systems that rival or exceed human capabilities', for a significant proportion of organisations the old rules are still being applied. They continue to capture and store data in unstructured, difficult-to-search formats and to file or archive using, let us be gentle for now in what we call them, 'flexible' approaches and methods that, in some cases, render the data irretrievable for practical purposes. Such approaches make it ever more difficult for organisations to extract value from the data that they hold, let alone claim compliance with legislation such as the Data Protection Act 2018 (UK), the EU GDPR (General Data Protection Regulation) and other privacy requirements that underpin them (webref 2).
Pagnamenta, writing in The Daily Telegraph (2018), meanwhile speculated on the death of email and its replacement by messaging systems. He stated that 269 billion emails are posted every day, many of them generated by machines, damaging productivity and leaving many unanswered. While he reports an average of 121 emails per worker per day, 48% being spam and only a few actually warranting a reply, he also cites a colleague claiming an unread total of over 100,000. To even attempt to read or process that volume of data would be pointless.
Data, as Silver (2012) suggests, is just that. We have too much data when what we need is information, although even an excess of information has been cautioned against in the psychology literature on risk. Information is data which has been filtered, integrated, assimilated, aggregated and contextualised. Critically, data enables confusion, but information enables decisions.
Thinking inside the box?
Technology and technologists are very good data providers; organisations and managers must become very good data converters, interpreters and users. For many people and purposes the delivery technology is often quite irrelevant. What is relevant is the information. It is information that allows us to comprehend things, to understand them, to decide what to do. We need both thinking tools and intelligent organisations to do this; the 'T' in information technology (IT) is predominantly the means of capturing then conveying the valuable thing, the information. Value is what is derived from what we, people, do with the data and information, that is, the thinking, the processes, problem-solving and analytical tools that we apply to and with information. There is vastly more potential information available to us and our organisations than ever before, and that availability will increase, possibly exponentially, in the coming years. It seems to me, though, that we are ill equipped to exploit this potential, either through the software tools of business intelligence (BI) or our human skills and ability in interpreting and understanding it. Collectively we seem neither to appreciate the value of information nor to design our organisations to generate value from it.
Relative to the potential offered by information many of our organisations are deeply dysfunctional. Their operating models are rooted in the idea of command and control, the mechanistic, bureaucratic, functional, centralising structures and managers who, frequently, secure decisions through bureaucratic means asserting positional power: 'It's my decision. I am in charge'. While they are mainly kidding themselves in that respect, for such organisations, much of the money spent on IT has been wasted. Structured and organised in line with 'traditional models of organisation' (Beckford, 2017), they are not able to exploit their investment in technologies. Both the hardware and software work; the machines operate with extremely high levels of reliability (greater than six sigma uptime – 99.999%); parts and components are exchangeable, hot-swappable; data is backed-up, mirrored and replicated; millions of messages are transmitted and received with almost no losses.
So, if that is all right, where is the failure?
IT has been attempting to deliver organisational value since the 1960s with the implementation of computerised accounting. Some substantial progress has been made, but, typically, the IT has been retrofitted to the established organisations and structures, not used to create a new organisational paradigm. Technology has commonly been applied to automate tasks previously carried out by people, tasks which can be represented in machine logic (an algorithm or programme) as routine, logical, methodical, number crunching and, relatively, unchanging. Those tasks are not characterised as needing 'ideation, creativity and innovation' (Brynjolfsson & McAfee, 2014). Computers can, so far, only work inside the box.
Automation has delivered substantial efficiency gains but has often deliberately and, even more often, unconsciously removed discretion from people in the organisation, particularly those who directly deal with customers. Decision-making travels further up the hierarchy as technology makes more data more available and more rapidly to decision makers. This does not always lead to better decisions being taken but to more decisions being taken further away from the customer, problem source or need. Many organisations are developing IT-enabled, dysfunctionally over-centralised structures, not through intent, desire or need but simply because the information systems enable it. No one notices it happening or thinks to stop it. Collectively we have not re-examined the notion of what it means to 'control' an organisation. We have not grasped that, particularly for service organisations, performance is subjective, an interpretation of events. When the service is created and consumed on the fly, the quality of the service rests in the human interaction not in the machine process. That cannot be controlled by an automaton; it requires people and judgement. Revising my position from the previous edition, I suggest performance is, ultimately, perhaps more a function of the customer than it is a function of the organisation.
Beynon-Davies (2013) determined that 67% of UK organisations have suffered at least one 'systems' project that has failed to deliver expected benefits or experienced time and cost overruns, while Gartner (webref 3) state that 80% of SAP clients are disappointed in benefits realised, the measurability of those benefits and the competency of system users. They argue that 90% of IT projects do not return real benefit to the organisation and that 40% fail completely. Meanwhile, McKinsey are reputed to have stated that, historically, two-thirds of chief information officers have not had to defend their budgets, because nobody else knew enough of the arcane language of IT to ask them the right questions. Morgan Stanley (webref 4) estimated that as long ago as 2002 companies threw away billions of dollars of their IT capital expenditure on 'shelfware' (software licences and systems never used), a situation that has certainly deteriorated in the intervening years. I regularly encounter CIOs proudly boasting of the number of software licences they have cancelled, never apologising for having acquired them in the first place! In 2003 Nicholas Carr, writing in Harvard Business Review, suggested that 'IT doesn't matter'; he did not see IT as a source of strategic advantage. In this new age of data science, online retailers and other information-intensive organisations might not agree. Universities meanwhile continue to produce computer science graduates who rely on 'geek speak' (Times Higher Education, 14th August 2014), not having the communication or business skills to render themselves useful to organisations. If all this is true then somebody, somewhere must be doing something wrong – or maybe we are collectively valuing and focusing on the wrong things.
The continuing convention in commissioning an information system or information technology project is to identify a problem to be solved, to identify a technological means of addressing it, estimate the potential payback and measure the cost of solving it (that is, the hardware, software, configuration, customisation, training, backfilling and business disruption). The cost is capitalised; because such projects have a value over time, the accountants can depreciate the investment. The payback is then measured through productivity gains estimated through reduced headcount, increased system availability, better compliance with regulators, improved reporting, reduced 'clicks' to use the system, improved appearance and better toys. Still, most organisations hold nobody properly accountable for any difference. Instead they consider IT as a necessary evil, an infrastructure cost to be minimised rather than a productivity-enabling tool to be, at least, optimised. Organisations are often seduced into IT projects with the prospect of better technology, more data and information which is faster rather than more valuable, failing to appreciate the difference. This mentality drives underinvestment in what really matters: the information derived from the system.
The epiphenomena of an IT system are its gadgets: artefacts connected in the 'Internet of Things', 'home hubs' and other digital personal assistant devices, mobile devices, smartphones and all the other physical, commoditised ephemera. Software houses have modified their licensing models, lowering the initial cost while, often, increasing the cost of support, configuration and upgrades – the total life cost of the product increasing overall. The business opportunity to monetise client data, dressed up as 'reducing the capital required for new investment', has accelerated the trend towards 'cloud-based' approaches, 'software as a service', and 'online everything'. While this approach can undoubtedly offer some benefit, it introduces a new set of challenges, dependencies, interdependencies and risks which many organisations fail to adequately comprehend and address.
I'm going to pick up my email
An organisation with a highly distributed network of locations, many in rural areas with no access to high speed broadband services, decided to implement an 'online' suite of workplace applications. In a number of locations, employees requiring access to those applications would leave the workplace several times each day to drive to a location with good mobile connectivity to send and receive emails and access other systems. Whatever financial performance gains were achieved at the centre of the organisation, they were more than offset by losses in the distributed locations.
Many 'upgrades' add little value; of themselves they often do not make the user more productive, efficient or effective in their role. They do not, in general, 'serve the customer better', and they do not make individuals better at their jobs. Often the result is a faster, more efficient way of making the same mistakes. Individually each mistake is cheaper and faster – is that an increase in productivity we want to celebrate when the total cost of all the mistakes is often greater than it was before?
The various integrated 'enterprise wide' software packages in widespread use throughout the world still largely reflect the traditional, functional and siloed structures of the organisations that use them. This is partly a reflection of the preferences of the individual buyers – 'I need a better finance package' – and partly a reflection of the challenges of developing applications that are truly comprehensive. It would be fatuous to deny the challenge of creating completely comprehensive programmes that 'do everything'. The proliferation of functional applications and the need (and it is a need) to use the same data in more than one functional silo often leads to replication of data across those silos. This generates a requirement to synchronise the data and maintain its integrity. However, not only is data often shared through unstable, insecure transfer and integration methods and taken from a context in which it has meaning to one in which that meaning is lost, but in being duplicated or replicated many times, it loses its integrity, definition and meaning. Integrity becomes almost impossible when even minor changes are applied to the arrangement or order of data or where it is merged with other data. Even where there is good intent it is difficult to sustain a data maintenance routine, and, anyway, 'it won't have changed much. Let's use last month's data'. Organisations have accumulated more and more applications with more and more versions of the data so that it becomes nigh on impossible to determine which data set contains the 'truth'. Each (whether accurately maintained or not) is applied to a particular, often functionally partial or siloed decision. Meanwhile reliance is placed on often unverified, untested data. In one organisation there were more than 30 different versions of a particular 'truth' with consequently inadequate decision-making and many arguments about which was 'right'.
Of course, in this situation none were absolutely wrong or right; rightness (or not) depended upon the underlying assumptions and the question to be answered.
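The mechanism behind those competing 'truths' can be sketched in a few lines of code. The record, silo names and values below are invented for illustration: once a record is copied into independent silos and each silo updates its own copy, there is no longer a single authoritative version.

```python
import copy

# A single customer record, replicated into two functional silos.
# All names and values here are invented for illustration only.
master = {"customer": "Acme Ltd", "credit_limit": 10000, "status": "active"}

finance = copy.deepcopy(master)  # finance silo takes its own copy
sales = copy.deepcopy(master)    # sales silo takes its own copy

# Each silo then updates its copy independently, with good local reasons.
finance["credit_limit"] = 5000   # finance cuts the limit after a late payment
sales["status"] = "prospect"     # sales re-tags the account in a campaign

# Three versions of 'the same' record now coexist; none is wrong locally,
# but the organisation can no longer say which one is 'the truth'.
distinct_versions = {str(v) for v in (master, finance, sales)}
print(len(distinct_versions))  # prints 3
```

Scaled up to dozens of applications and backups, this is exactly how an organisation arrives at 30 versions of one 'truth'.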
The challenges of integration arise because
- there is no agreed information architecture;
- there is inadequate understanding of the desired or needed outcome;
- the organisational and behavioural implications have not been addressed;
- system design is poor;
- project control is inadequate;
- the business benefits are not properly invested in; or
- things are done with haste to meet arbitrary, usually budgetary, deadlines.
These inadequacies compromise the integrity of the system, the data and, ultimately, the organisation itself, because it is relying on inadequate information to make customer- and business-critical decisions. This reflects a weak understanding of the value of information.
Multifold replication of data carries with it the likelihood of error. When we couple to that the absolute logical precision of algorithms, we discover potential for further amplification of those errors. When somebody, anybody, searches for stored data on the Internet or the corporate intranet they find, and here I am simplifying massively from MacCormick (2012), all the possible data related to the question they asked, ranked in order of the number of connected pages and the number of links to that page. The question they ask will probably not use the precise words or have the same precise meaning (to them) as the person, people or machines that populated the data sources being searched. The 'Internet of data' is a global data proliferation engine, massively increasing its data storage requirements every day and, very often, doing so by storing even more copies of things that are already there – and, as yet, it doesn't forget.
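A deliberately naive sketch makes both points above concrete: results are ranked by how heavily linked a page is, and a query whose wording does not match the stored text finds nothing, however relevant the data. This is an illustration in the spirit of the simplification from MacCormick, not any real search engine's algorithm; the pages, links and text are invented.

```python
# Tiny invented 'corpus': each page has some text and outbound links.
pages = {
    "a": {"text": "annual sales report", "links": ["b", "c"]},
    "b": {"text": "sales figures archive", "links": ["c"]},
    "c": {"text": "company revenue data", "links": []},
}

def inbound_links(pages):
    """Count how many other pages link to each page."""
    counts = {name: 0 for name in pages}
    for page in pages.values():
        for target in page["links"]:
            counts[target] += 1
    return counts

def search(query, pages):
    """Return pages whose text contains every query word, most-linked first."""
    counts = inbound_links(pages)
    words = query.lower().split()
    hits = [n for n, p in pages.items() if all(w in p["text"] for w in words)]
    return sorted(hits, key=lambda n: counts[n], reverse=True)

print(search("sales", pages))    # ['b', 'a']: ranked by links, not relevance
print(search("revenue", pages))  # ['c']: the 'sales' pages are missed entirely
```

A searcher who asks for 'revenue' never sees the 'sales' pages, even though they describe the same thing: the ranking is mechanical, and the meaning mismatch between questioner and author is invisible to it.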
Perhaps the Internet is a Borgesian Library (Borges, 1962), a repository of wrong answers to poor questions? The simplicity and ease of use of web browsers and other search tools, coupled to the inadequacy of the ways that data is stored and archived, attenuate our ability to ask good questions – if we let them.
Data has cost but is the raw material of information. We can use it many times. Information has value; we derive it from data, calculate it, present it, use it, exploit it. However, our poor discipline in the management of data, coupled to multiple applications and devices, compounded by the use of the Internet (especially for 'cloud' data storage), fuels this highly effective data proliferation engine. We capture and store ever more copies of, approximately, the same data but have less and less useful information to make decisions with. This leads us to Beckford's Law of Data, which is, as revised, 'Information availability is inversely proportional to data availability'. Data proliferation is exponential in two dimensions, volume and frequency; information declines in proportion. Because of the rate of data proliferation we probably have more potential information in absolute terms, but the rate of growth of information is much smaller than that of data. But we need information to make decisions – not data – while data proliferates as a function of
- the number of users
- times the number of devices
- times the number of applications
- times the number of backups
- times the ease of transmission (the propagation rate)
Information availability decreases accordingly.
This is because generating information relies on our ability to source correct data, to structure, interpret and present it to convey meaning to a recipient. If we cannot rely on the data, we cannot meaningfully communicate.
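Beckford's Law and the multiplicative factors listed above can be sketched as a toy model. The functions and the specific numbers below are invented for illustration; the point is only that the copy count is a product of the factors, so it grows explosively, while information availability, being inversely proportional, collapses.

```python
def data_copies(users: int, devices: int, apps: int, backups: int,
                propagation: int) -> int:
    """Copies of 'the same' data: the product of the factors listed above."""
    return users * devices * apps * backups * propagation

def information_availability(copies: int, baseline: float = 1.0) -> float:
    """Beckford's Law: inversely proportional to the circulating copies."""
    return baseline / copies

# Invented figures for a small and a larger organisation.
small = data_copies(users=10, devices=2, apps=3, backups=2, propagation=2)
large = data_copies(users=1000, devices=3, apps=10, backups=3, propagation=5)

print(small, information_availability(small))  # 240 copies
print(large, information_availability(large))  # 450000 copies
```

Growing the organisation a hundredfold multiplies the copy count nearly two-thousandfold in this toy model, which is the sense in which proliferation outruns any growth in usable information.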
If data were treated, in accounting terms, as a 'material good', it would be acquired, stored, compiled, applied, used in a manner that respected its cost and value just like a washer, nut, press or other physical element. It would be regarded as part of the assets of the business. Failure to do this (and I am not suggesting it is easy) undervalues those businesses whose stock in trade is data and which m...