As we've discussed, a particular set of cognitive abilities differentiates star executives from their peers, but until now, the only cognitive skills that could be measured were those originally identified to predict the academic performance of schoolchildren. These are the skills commonly assessed by traditional IQ tests, and though these instruments do, to some extent, predict work performance, they assess only a fraction of the cognitive abilities that exist. And, more important, the aptitudes measured by traditional IQ tests are not necessarily the ones most crucial to business success.
Historically, we've had to rely upon IQ measures to sort executive candidates, either through formal testing or by hiring individuals with elite degrees, because these were the only indicators that existed. In other words, we've been relying on academic aptitude to predict work intelligence. To some extent this has been appropriate, since the cognitive skills necessary for academic performance overlap somewhat with those required for executive work.
However, as Jim Kilts of Gillette explains, more than just academic thinking skills are necessary.
As Kilts points out, the additional abilities that star business performers possess are not the same as those that determine success in academics. Clearly, we must create a measure that is specific to managerial work. The solution seems simple, but until the cognitive skills that make up Executive Intelligence had been identified, no such measure could be built. Now that the skills essential to business leadership have been recognized, we can finally set about creating such a test. The first step in that process is to understand how intelligence tests are created.
Cognitive abilities, like all psychological traits, are invisible. To create measures of intelligence, researchers have to choose observable qualities that signal the presence of underlying aptitudes. In other words, since we cannot actually see a cognitive ability, we have to find some outward sign of its existence. It is like detecting the wind by observing wheat bending in a field. Although we cannot physically see a person's intelligence, we can measure its presence and strength by observing activities that require it.
This task is complicated by the fact that there is little agreement about what specific activities denote intelligence. In fact, there still exists no universally accepted definition of this concept. It turns out that defining intelligence has been a contentious, intensely debated topic ever since scientists became interested in the subject.
When IQ testing first came into widespread use, the lack of consensus was an acknowledged problem. One of the original creators of IQ tests, Harvard University psychologist Edwin Boring, when pressed for his definition, responded, "Intelligence is what intelligence tests measure."1
Psychologists have resorted to surveying experts in an effort to produce some agreement on the subject. The first and most famous such study was published in the Journal of Educational Psychology in 1921. Responses ranged from "sensation, perception, association, memory, imagination, discrimination, judgment, and reasoning" to "the capacity to acquire capacity."2 Surveys of expert opinion continue to be conducted today, but still little agreement has been achieved.
As a result, the choice of abilities to be included in an intelligence measure is left to the creator of the test and is, therefore, highly subjective. IQ tests tend to be designed to detect whatever combination of abilities the test's creator believes displays intelligence.
And that is how IQ tests have become so misleading. They are referred to as "intelligence" assessments, implying a complete measure of this concept. But a virtually unlimited range of cognitive abilities could be considered for inclusion. Any instrument claiming to be a global measure is inevitably inadequate.
These tests evaluate candidates' capacities in a very narrow range of activities; the tests are relevant only to the degree that the specific skills they measure are the same skills that will be called for later. For instance, if you are trying to assess how someone will perform in a math class, having him or her solve word-analogy problems will yield scores far less predictive than will arithmetic problem-solving questions.
There is some overlap between these activities; the type of overarching ability that makes someone good at math also helps in verbal subjects. So predicting someone's math aptitude from a verbal test score would be more accurate than, say, basing it on how fast he or she can run a mile. But verbal scores remain significantly less accurate than math scores as predictors of performance in mathematics. The inaccuracy inherent in any IQ measure is largely determined by how irrelevant or incomplete its measured skills are relative to the purpose for which the scores are being used.
In the popular movie Rain Man, Dustin Hoffman's character, Raymond Babbitt, suffered from autism. Although he displayed severe limitations when it came to the most basic tasks (like confusing hot and cold water when running a bath), he was incredibly gifted at determining numerical sequences. If one were to define intelligence as one's ability to perform that particular mathematical task, Raymond would be considered a genius. Yet a more reasonable view of his capabilities would classify him as a savant, a person with outstanding skills within a very narrow range of activities.
The Rain Man example shows that the genius label has as much to do with the test maker's selection of cognitive skills as it does with the abilities of the individual being tested. When it comes to business, the failure to build an intelligence measure that focuses on specific, relevant skills has created something of a quandary. Companies are told to rely upon intelligence to judge the quality of a candidate, yet the available measures were not constructed for that purpose.
Most of us have had experience with individuals who are considered "geniuses" from an IQ perspective yet who lack some of the most basic skills of effective executives, such as differentiating crucial priorities from secondary concerns, or recognizing when a particular statement would needlessly offend colleagues.
To understand and correct the limitations of current intelligence tests, it is important to understand how they have evolved along with the changing notions about what it means to be smart.
The first "intelligence tests" were created in 1883 by Sir Francis Galton, who theorized that two core qualities characterized human intelligence: energy and sensitivity to stimuli.3 Galton, a renowned scientist and one of the world's first psychologists, observed that intellectually capable people appeared to have a higher capacity for labor. Therefore, he presumed that individuals who possessed more energy, and who consequently could labor longer, must be smarter. Galton also believed that individuals with higher intelligence were more sensitive to physical stimuli. In his view, they had superior physical dexterity and a heightened awareness of the surrounding physical environment.
Galton's definition of intelligence was grounded in physical attributes and had clearly been influenced by the theories of evolution and competitive survival introduced by his cousin Charles Darwin.4 For Galton, intelligence was represented by observable physical qualities that promoted the survival of one individual over another. He believed that intelligence was grounded not in the academic criteria that are central to today's tests, but in those abilities that gave one person an advantage over another in the natural struggle for survival.
To measure intelligence, Galton would put subjects through a battery of tests, including hearing measurements and physical challenges, that determined the speed of a subject's reflexes and the range of his or her senses. Between 1884 and 1890, people curious to test their abilities visited Galton at the South Kensington Museum in London and paid to have their intelligence measured.5 Although Galton was the first to attempt a measure of human intelligence, his theory was never widely accepted, and his methods remained nothing more than an amusing footnote in history.
Subsequent intelligence tests were influenced less by evolutionary theory than by the need to determine a child's academic potential. The cognitive ability tests we recognize today originated with the minister of public instruction in Paris, who, in 1904, created a commission to identify students who were struggling academically because of their mental defects. He wanted to ensure that children were sent to classes for the mentally retarded only if they could not profit from ordinary instruction. In the fall of 1904, Alfred Binet was appointed to this commission.6
Alfred Binet and his colleague Theodore Simon created a test to suit the education minister's needs. They felt that a child's academic potential could best be measured using problem-solving tests containing material of the kind he or she would confront in school. Binet's concepts were soon imported to the United States, where Stanford University's Lewis Terman, a professor of psychology, created the Stanford-Binet test, whose current version is still a leader in the intelligence-testing market.7
These tests focused on vocabulary and arithmetic skills, as well as mechanical and spatial reasoning (topics taught in school), and they required candidates to solve problems in the written, multiple-choice format also common in academic settings. Over time, these tests have proven to be exceptional predictors of school success, enabling administrators to identify exceptionally gifted or severely handicapped students and place them in environments well suited to their abilities. But society's desire to categorize people soon led to the use of IQ tests outside the scope for which they were originally intended.
During World War I, American psychologists, eager to contribute to the war effort, adapted Binet's intelligence tests for the U.S. Army as an inexpensive way to differentiate among large numbers of recruits.8 This was the first large-scale use of intelligence testing on a group of adults, and it suggested that these school-based tests could predict performance in nonacademic situations.
Consequently, the high-profile role psychologists played in the job assignment of recruits created tremendous interest in the private sector. It was during this time that the Journal of Applied Psychology, the oldest and most prestigious journal in the field of industrial psychology, began publication, starting with a series of articles about real-world applications of intelligence testing.
After World War I several research institutes were founded in an effort to broaden the benefits of psychological testing. One of these, the Psychological Corporation, established by psychologist James Cattell, still exists today as one of the largest publishers of psychological tests for employee selection. By the 1950s, the use of intelligence testing by employers had become commonplace.
Since then, IQ testing has become one of the most widely applied and extensively researched of all psychological tools. Although its predictive validity is highest for its original purpose, school success, IQ has proven to be a powerful predictor of performance for virtually any occupation.9
In 1998, two of the most respected researchers in assessment methodologies, Professors Frank Schmidt of the University of Iowa and John Hunter of Michigan State University, published in the American Psychological Association's Psychological Bulletin a landmark study analyzing over eighty-five years of research ...