Collaborative Intelligence

Using Teams to Solve Hard Problems

  1. 240 pages
  2. English
  3. ePUB (mobile friendly)

About this book

This practical guide draws on cognitive science and work with Fortune 500 companies to help readers develop essential collaborative skills.
 
Collaborative intelligence is a measure of our ability to think with others on behalf of what matters to us all. It is emerging as a new professional currency at a time when influence is more important than power, and success relies on the ability to inspire. 
 
Through a series of practices and strategies, this book helps us develop our own collaborative intelligence. The authors teach us how to value intellectual diversity and recognize our own mind patterns. By mapping the talents of our teams, we're able to embark together on an aligned course of action and influence.
Collaborative Intelligence is the culmination of more than fifty years of original research that draws on Dawna Markova's background in cognitive neuroscience and her most recent work, with Angie McArthur, as a "Professional Thinking Partner" to some of the world's top CEOs and creative professionals. In their experience, managers who appreciate intellectual diversity will lead their teams to innovation; employees who understand it will thrive because they are in touch with their strengths; and an entire team who understands it will come together to do their best work in a symphony of collaboration.

Frequently asked questions

Yes, you can cancel anytime from the Subscription tab in your account settings on the Perlego website. Your subscription will stay active until the end of your current billing period. Learn how to cancel your subscription.
At the moment all of our mobile-responsive ePub books are available to download via the app. Most of our PDFs are also available to download and we're working on making the final remaining ones downloadable now. Learn more here.
Perlego offers two plans: Essential and Complete
  • Essential is ideal for learners and professionals who enjoy exploring a wide range of subjects. Access the Essential Library with 800,000+ trusted titles and best-sellers across business, personal growth, and the humanities. Includes unlimited reading time and Standard Read Aloud voice.
  • Complete: Perfect for advanced learners and researchers needing full, unrestricted access. Unlock 1.4M+ books across hundreds of subjects, including academic and specialized titles. The Complete Plan also includes advanced features like Premium Read Aloud and Research Assistant.
Both plans are available with monthly, semester, or annual billing cycles.
We are an online textbook subscription service, where you can get access to an entire online library for less than the price of a single book per month. With over 1 million books across 1000+ topics, we’ve got you covered! Learn more here.
Look out for the read-aloud symbol on your next book to see if you can listen to it. The read-aloud tool reads text aloud for you, highlighting the text as it is being read. You can pause it, speed it up and slow it down. Learn more here.
Yes! You can use the Perlego app on both iOS or Android devices to read anytime, anywhere — even offline. Perfect for commutes or when you’re on the go.
Please note we cannot support devices running on iOS 13 and Android 7 or earlier. Learn more about using the app.
Collaborative Intelligence by J. Richard Hackman is available in PDF and ePUB format, catalogued under Business & Business Communication.

PART I Teams in Intelligence

WHAT MAKES FOR A GREAT INTELLIGENCE TEAM? The three chapters that follow set the stage for answering that question. We will see how intelligence teams actually deal with hard problems, the different ways members can collaborate with one another, and what it means to say that an intelligence team has been “effective.” The first chapter (“Teams That Work and Those That Don’t”) opens with an extended example of two teams—one planning a terrorist act, the other trying to head it off. Among the reasons one team succeeded and the other failed are the inherent advantages of playing offense vs. defense; team dynamics that inhibit the full use of members’ resources; and the ways that stereotypes of other groups (including groups embedded within one’s own team) can cripple team processes and performance.
The second chapter (“When Teams, When Not?”) lays out the many different kinds of collaboration that exist within the intelligence community, ranging from communities of interest whose members never actually meet to teams whose members work together face to face over an indefinite period. We will see that teams are not always an appropriate means for accomplishing a particular piece of work, and that certain kinds of tasks are better done by solo performers. And even when a team is called for, there remains the question of the type of team that should be created. The chapter identifies five different types of teams and discusses the circumstances under which each of them is and is not appropriate.
The final chapter in this part of the book (“You Can’t Make a Team Be Great”) digs into what team “effectiveness” means and how it can be assessed. Although one cannot make a final judgment about a team’s performance until its work is completed, three team processes can be monitored in real time to assess how a team is doing. These processes are: (1) the level of effort a team is applying to its work, (2) the appropriateness of its performance strategy for the task it is performing, and (3) the degree to which the team is using well the full complement of its members’ knowledge, skill, and experience. When a team shows signs of slipping on one or more of these three process criteria, a coaching intervention may be appropriate. Or, more frequently, it turns out that the conditions under which the team is operating—how it is structured and the context within which it operates—are flawed in some way. The second part of the book is devoted to those conditions: what favorable conditions are, how they help, and what is needed to get them in place and help a team take full advantage of them.

CHAPTER 1
Teams That Work and Those That Don’t

It was not all that different from his regular work. Jim, an analyst at the Defense Intelligence Agency (DIA), looked around at the other members of his team. He knew two of them—another analyst from DIA and an FBI agent he had once worked with; the rest were strangers. The team’s job, the organizer had said, was to figure out what some suspected terrorists were up to—and to do it quickly and completely enough for something to be done to head it off. Okay, Jim thought, I know how to do that kind of thing. If they give us decent data, we should have no problem making sense of it.
For Ginny, it was quite a bit different from her regular work as a university-based chemist. She had been invited to be a member of a group that was going to act like terrorists for the next few days. Ginny had not known quite what that might mean, but if her day of “acculturation” into the terrorist mindset was any indication, it was going to be pretty intense. She had never met any of her teammates, but she knew that all of them were specialists in some aspect of science or technology. She was eager to learn more about her team and to see what they might be able to cook up together.
Jim and Ginny were participating in a three-day run of a simulation known as Project Looking Glass (PLG). The brainchild of Fred Ambrose, a senior CIA intelligence officer, PLG simulations pit a team of intelligence and law enforcement professionals (the “blue team”) against a “red team” of savvy adversaries intent on harming our country or its interests. A “white team”—a group of intelligence and content specialists—plays the role of the rest of the intelligence community. The charge to the red team was to use everything members knew or could find out to develop the best possible plan for doing the greatest possible damage to a target specified by the organizers—in this case, a medium-sized coastal city that was home to a large naval base. Members could supplement their own knowledge by consulting open sources such as the Internet and by seeking counsel from other individuals in their personal or professional networks. But what they came up with was to be entirely the product of team members’ own imagination and ingenuity.
To help them adopt the perspectives of those who really are intent on doing damage to our country, red team members spent a day of acculturation. It was like an advanced seminar on terrorism, Ginny thought. Team members heard lectures from both scholars and practitioners on everything from the tenets of radical Islamic philosophy to the strategy and tactics of terrorist recruitment. By the end of the day, Ginny was surprised to find herself actually thinking and talking like a terrorist. Her red teammates seemed to be doing the same.
Ginny and her teammates were aware that the blue team would have access to a great many of their activities—they would be able to watch video captures of some of the red team’s discussions, tap into some of their electronic communications and Internet searches, and actively seek other data that might help them crack whatever plot they were hatching. The blue team also had heard lectures and briefings about terrorists, including specific information on the backgrounds and areas of expertise of red team members. Jim found these briefings interesting, but mostly he was eager to get beyond all the warm-up activities and into the actual simulation. And, by the beginning of the second day, the game was afoot.
The start-up of the red and blue teams could hardly have been more different. The red team began by reviewing its purpose and then assessing its members’ resources—the expertise, experience, and outside contacts that could be drawn upon in creating a devastating attack on the coastal city. Members then launched into a period of brainstorming about ways the team could use those resources to inflict the greatest damage possible and, moreover, do so in a way that would misdirect members of the blue team, who they knew would be watching them closely.
The blue team, by contrast, began by going around the room, with each member identifying his or her back-home organization and role. Once that was done, it was not clear what to do next. Members chatted about why they had chosen to attend the simulation, discussed some interesting issues that had come up in the previous day’s lectures, and had some desultory conversations about what it was that they were supposed to be doing. There were neither serious disagreements nor signs of a struggle for leadership, but also no discernible forward movement.
Then the first video capture of the red team at work arrived. The video made little sense. It showed the team exchanging information about each member’s special expertise and experience, but nothing they said was about what they were actually planning to do. Assured that nothing specific was “up,” at least not yet, blue team members relaxed a little. But it was frustrating not to have any hard data in hand that they could assess and interpret using their analytic skills and experience.
As blue team members’ frustrations mounted, they turned to the white team—the broader intelligence community. To obtain data needed for their analytic work, including information about some of the activities of the red team they had seen on the video, blue team members were allowed to submit requests for information (RFIs) to the white team. Some RFIs were answered, sometimes immediately and sometimes after a delay; others were ignored. It was, Jim thought, just like being back at work.
By early in the second day of the simulation, the red team had turned the corner and gone from exploring alternatives to generating specific plans for a multipronged attack on the coastal city and its environs. Now blue team members were getting worried. They finally realized that they had no idea what the red team was up to, and they became more and more frustrated and impatient—with each other, certainly, but especially with the unhelpfulness of the white team. So the team did what intelligence analysts often do when frustrated: they sought more data, lots and lots of it. Eventually the number of RFIs became so large that a member of the white team, experiencing his own frustration, walked into the blue team conference room and told members that they were acting like “data junkies” and that they ought to slow down and figure out what they actually needed to know to make sense of the red team’s behavior.
That did not help. Indeed, as accurate as the accusation may have been, it served mainly to increase blue team members’ impatience. As tension escalated, both negative emotions and reliance on stereotypes also increased—stereotypes of their red team adversaries, to be sure (“How could that weird set of people possibly come up with any kind of serious threat?”), but also stereotypes of other blue team members. Law enforcement and intelligence professionals, for example, fell into a pattern of conflict that nearly incapacitated the team: When a member of one group would offer a hypothesis about what might be going on, someone from the other group would immediately find a reason to dismiss it.
Things finally got so difficult for the blue team that members could agree on only one thing—namely, that they should replace their assigned leader, who was both younger and less experienced than the other members, with someone more seasoned. They settled on a navy officer who was acceptable to both the law enforcement and the intelligence contingents, and she helped the group prepare a briefing that described the blue team’s inferences about the red team’s plans. The briefing would be presented the next day when everyone reconvened to hear first the blue team’s analysis, and then a presentation by the red team describing what they actually intended to do.
The blue team’s briefing showed that members had indeed identified some aspects of the red team’s plan. But blue team members had gotten so caught up in certain specifics of that plan that they had failed to see their adversaries’ elegant two-stage strategy. First there would be a feint intended to misdirect first responders’ attention, followed by a technology-driven attack that would devastate the coastal city, its people, and its institutions. The blue team had completely missed what actually was coming down.
Participants were noticeably shaken as they reflected together on their three-day experience, a feeling perhaps best expressed during the debriefing by one blue team member who worked in law enforcement: “What we saw here,” he said, “is almost exactly the kind of behavior that we’ve observed among some people we are tracking back home. It’s pretty scary.”
The scenario just described is typical of many PLG simulations that have been conducted in recent years. Fred Ambrose developed the idea for this unique type of simulation in response to a congressional directive to create a paradigm for predicting technology-driven terrorist threats. The simulation is an upside-down, technology-intensive version of the commonly used red team methodology, with the focus as much on detecting the red team’s preparatory activities as on determining its actual attack plans. Again and again, the finding is replicated: The red team surprises and the blue team is surprised. The methodology has proven to be so powerful and so unsettling to those who participate in PLG simulations that it now is being adopted and adapted by a number of organizations throughout the U.S. defense, intelligence, and law enforcement communities.1
What accounts for the robust findings from the PLG simulations, what might be done to help blue teams do better, and what are the implications for those whose real jobs are to detect and counter terrorist threats? We turn to those questions next.

Why Such a Difference between Red and Blue Teams?

How are we to understand the striking differences between what happens in red and blue teams in PLG simulations? Although there is no definitive answer to this question, there are at least four viable possibilities: (1) it is inherently easier to be on the offense than on the defense, (2) red teams are better at identifying and using the special expertise of both their members and outside experts, (3) prior stereotypes compromise the ability of blue teams to take what they are observing seriously and deal with it competently, and (4) red teams develop and use more task-appropriate performance strategies.2
OFFENSE VS. DEFENSE.
An obstacle that many intelligence teams must overcome is that they are, in effect, playing defense whereas their adversaries are playing offense. Data from PLG simulations affirm the observations of intelligence professionals that offense usually is considerably more motivating than defense. It also is much more straightforward for those on offense to develop and implement an effective way of proceeding. Even though offensive tasks can be quite challenging, they require doing just one thing well. Moreover, it usually is not that difficult to identify the capabilities needed for success. Those on defense, by contrast, have to cover all reasonable possibilities, which can be as frustrating as it is difficult.3
The relative advantage of offense over defense is seen not just in intelligence work but also in a wide variety of other activities. A football team on offense need merely execute well a play that has been prepared and practiced ahead of time, whereas the defenders must be ready for anything and everything. A military unit on offense knows its objective and has an explicit strategy for achieving it, whereas defenders cannot be certain when the attack will come, where it will occur, or what it will involve. As physicist Steven Weinberg has pointed out, it is impossible to develop an effective defense against nuclear missiles precisely because the defenders cannot prepare for everything that the attackers might do, such as deploying multiple decoys that appear to be warheads.4
Because athletic coaches and military strategists are intimately familiar with the difference between offensive and defensive dynamics, they have developed explicit strategies for dealing with the inherent difficulties of being on the defensive. The essential feature of these strategies is converting the defensive task into an opportunity to take the offense. According to a former West Point instructor, cadets are taught to think of defense as a “strategic pause,” a temporary state of affairs that sometimes is necessary before resuming offensive operations. And a college football coach explained that a good defense is one that makes your opponents “play with their left hand.” A “prevent” defense, he argued, rarely is a good idea, even when you are well ahead in the game; instead, you always should prefer an “attack” defense. These sentiments were echoed by a military officer: “Good defense is arranging your forces so your adversaries have to come at you in the one place where they least want to.”
In the world of intelligence, there is an enormous difference between “How can we cover all the possibilities?” and “How can we reframe our task so that they, rather than we, are more on the defensive?” For all its motivational and strategic advantages, however, such a reframing ultimately would require far better coordination among collection, analytic, and operational staff than one typically sees in the intelligence community. Even with the creation of a single Director of National Intelligence, organizational realities are such that this level of integration may not develop for some time. In the interim, simulations such as PLG offer at least the possibility of helpin...

Table of contents

  1. Cover Page
  2. Title Page
  3. Copyright Page
  4. Dedication
  5. Contents
  6. Preface
  7. Introduction: The Challenge and Potential of Teams
  8. Part I: Teams in Intelligence
  9. Part II: The Six Enabling Conditions
  10. Part III: Implications for Leaders and Organizations
  11. Notes
  12. References
  13. Index
  14. About the Author