Assessment and Development Centres
eBook - ePub

  1. 238 pages
  2. English
  3. ePUB (mobile friendly)
  4. Available on iOS & Android

About this book

Since the first edition of this book, the level of interest in, and the varied forms of, assessment and development centres have mushroomed. Iain Ballantyne and Nigel Povah's book looks at the entire process, from the underlying concepts to the most effective methods of validation - not forgetting the organizational politics involved. The main objectives of the book are:

  • to establish a thorough understanding of the principles and practice of assessment and development centres;
  • to provide sufficient knowledge to enable practitioners to run their own events in a professional manner;
  • to help readers to recognise when they may need to call on outside expertise; and
  • to equip readers to ask pertinent questions of any prospective advisers.

This second edition includes guidance to reflect the significant developments within the technology, along with further advice on quality control, process improvements and further refinements to the increasingly popular development centre concept. Assessment and Development Centres represents a practical approach which is sure of a warm welcome from HR professionals.

Information

Publisher: Routledge
Year: 2017
Print ISBN: 9781138270428
eBook ISBN: 9781351956802

Chapter 1 What is an Assessment Centre?

What is assessment centre technology?

As almost every published paper will say, an assessment centre is an event, not a location. The term derives from the location used by AT&T in the United States to assess the management potential of hundreds of its staff.
We have used the term assessment centre technology because we believe it accurately reflects two key points. Firstly, the whole event is an integrated process built from key components. Like many processes, there are alternative routes to the same outcome, but there are also critical steps which, if overlooked, can lead to an unsatisfactory outcome.
Secondly, like most process technologies, there is a degree of flexibility over which tools you use to get the job done. This degree of flexibility can lead to two events having quite a different feel.
To start with, the events themselves may last anywhere from a few hours to a few days. Sometimes assessment centres are known as development centres, although this is usually because the information is gathered with the specific intent of supporting personal development. Most centres will use simulations of different kinds, but this is not universally so. Indeed the original British model was really a series of interviews with some pencil-and-paper tests of ability. The simulations used were originally a relatively small part of a three-day procedure, although their significance was quickly recognised. Some assessment centres may include some form of feedback from peers, some may include an element of self-assessment, some may include psychometric tests, while others include none of these features. Almost as confusing is the plethora of language used to describe the target of the assessment, variously known as attributes, competencies, performance dimensions or criteria.
Whatever the complexities are, any definition must encapsulate the essential or universal aspects of all these events. All assessment centres attempt to assess how competent a person is at present, either in their current role or, more usually, compared to the demands of some future job. All assessment centres focus on behaviour in two ways. Firstly, what is observed at an assessment centre is behaviour, since what someone says or does cannot be anything else. Secondly, behaviour is the start of the design process since what you are trying to do is assess the behaviours that are important to function well in the prospective job.

Defining assessment centre technology: the key features

The main feature of assessment centres as we now understand them is that they are a multiple assessment process, and there are five main ways in which that is so. A group of participants takes part in a variety of exercises, observed by a team of trained assessors, who evaluate each participant against a number of predetermined, job-related behaviours. Decisions are then made by pooling shared data.

Multiple Participants

There are some events called assessment centres in which there is only one participant. These are usually for very senior appointments, where the object is to give the participant a thorough final check before an appointment is negotiated between the parties. More often than not these are conducted by a search consultant or by psychologists attached to a search consultancy. However, as understood in general use, one of the features of an assessment centre (and all its variants) is that a number of participants will be brought together for the event. Although there are no absolute rules, the practical constraints of designing an assessment centre tend to demand multiples of four or six participants. Beyond 12 participants, the logistics can very easily get out of hand.

Combination of Methods

The focal point of most assessment centres is the use of work sample tests or simulations. The principle of their design is to replicate, so far as is possible, the kind of tasks that a participant would be required to do in the job for which they are being considered. To gain a full understanding of a person’s range of capabilities, one simulation is not usually enough to develop anything like a complete picture. If, for example, we were interested in selecting future salespeople it is clear that a useful simulation would be to ask the participant to make a formal presentation. While this may suffice to assess some aspects of the job, it is also clear that effective salespeople are well organised and that a presentation would not of itself give adequate evidence of organising skills. To build the complete picture one needs to use other means of assessing the ability to organise, which could include another kind of simulation, possibly an interview or maybe a psychometric test. Without pre-empting the principle of design (see Chapter 4), we look for at least two sources of evidence of a particular skill, competency or capability, which in turn implies that no single method or instrument will fit the bill.
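The "at least two sources of evidence" principle described above lends itself to a simple coverage check. The sketch below is purely illustrative (the exercise and criterion names are invented, not taken from the book): given a mapping of exercises to the criteria they measure, it flags any criterion that would rest on a single source of evidence.

```python
# Illustrative sketch only: exercise and criterion names are invented.
# Flags criteria measured by fewer than two exercises, per the
# "two sources of evidence" design principle.

matrix = {
    "presentation":   ["oral communication", "persuasiveness"],
    "in-basket":      ["planning and organising", "decision making"],
    "interview":      ["planning and organising", "persuasiveness"],
    "group exercise": ["oral communication", "decision making"],
}

def under_covered(matrix, minimum=2):
    """Return the criteria measured by fewer than `minimum` exercises."""
    counts = {}
    for criteria in matrix.values():
        for criterion in criteria:
            counts[criterion] = counts.get(criterion, 0) + 1
    return [c for c, n in counts.items() if n < minimum]

print(under_covered(matrix))  # an empty list: every criterion has two sources
```

Run against the design matrix for a proposed centre, an empty result confirms that no criterion depends on a single exercise; anything returned needs another simulation, interview or test added to the design.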

Team of Assessors

To escape the difficulties associated with the one-on-one interview, used either as a means of selection or in some aspects of performance measurement, it is important to use a team of assessors. There are endless debates about ratios in the literature; but the important points of principle are that each assessor should be able to observe each participant in one of the various situations in which participants are asked to perform. Ideally assessors should observe every participant, but not more than once. The reasoning behind these principles will be outlined in Chapter 6. The team of assessors should include a balance between experts – that is psychologists and human resource managers – and line managers, all of whom need appropriate training.
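As a purely illustrative sketch (the numbers and the round-robin rule are our assumption, not a prescription from the book), the observation principle above, that each assessor should see every participant but never the same participant twice, can be satisfied by shifting each assessor one participant along at every exercise:

```python
# Illustrative sketch only: a round-robin observation schedule.
# In exercise e, assessor a observes participant (a + e) % n, so no
# assessor-participant pairing ever repeats. Seeing *every* participant
# requires at least as many exercises as participants.

def rotation(n_assessors, n_participants, n_exercises):
    """Map (exercise, assessor) -> participant using a round-robin shift."""
    return {(e, a): (a + e) % n_participants
            for e in range(n_exercises)
            for a in range(n_assessors)}

# Four assessors, four participants, four exercises: each assessor
# observes each participant exactly once across the event.
schedule = rotation(4, 4, 4)
pairs = [(a, p) for (e, a), p in schedule.items()]
assert len(set(pairs)) == len(pairs)  # no pairing repeats
```

The same shift logic scales to the multiples of four or six participants mentioned earlier, which is one reason those group sizes keep the logistics manageable.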

Behaviourally Based, Founded on Job Analysis

As with any other method of assessment, the starting point has to be some analysis of the job to determine what it is that discriminates between successful job incumbents and those who perform less successfully in the same job. There is a wide variety of terms for the things that discriminate; among them are attributes, dimensions, criteria and, most recently, competencies.
Although it is quite clear in a wide range of management/professional jobs that specific knowledge is a component that has some importance, it is not usually a significant indicator of career success. To take a very obvious example, all doctors study medicine yet few become consultants. At the other end of the scale there are a few who should not be practising – so job knowledge in itself is not always sufficient to guarantee career success. Many lay people, particularly in management, will say that success is really a matter of personality but again no personality acts in a vacuum; it has to have a context.
Successful performance in any job is likely to be founded on a combination of factors, some of which may be to do with disposition, some to do with attitudes, some with particular skills that have been developed over time, some to do with energy levels, some to do with particular ways of thinking or problem-solving and some may be to do with knowing about particular things. The objective of a job analysis is to determine which of these things are most important. Russell (1985) identifies two groups of criteria for management jobs: problem-solving and aspects of the way managers relate to other people (more of this in Chapter 3).
In determining what to call the behaviours, we prefer to use the word ‘criteria’. This is a neutral term meaning no more or less than the things against which performance is judged in an assessment centre and elsewhere. Although there will be exceptions, we will attempt to use ‘criteria’ throughout the rest of this book.

Shared Data

From the earliest days an essential feature of the design of assessment centres was that data about participants are shared between the assessors. In the case of a selection decision, no final decision is made until all the evidence has been gathered from observations of participants in all the various situations and the assessors have conferred to agree a final rating. This process of conferring is variously known as the consensus meeting, the wash-up or the assessor discussion; for the sake of consistency we will use the last term.
Whatever the title the objective is the same. A team of assessors meets to consider all the evidence at one time having had no previous discussions. In the case of a development centre, it is less likely that any kind of mark or score will be allocated as the objective of the data-sharing is to collect information together to feed back to participants on their comparative strengths and weaknesses.
Once again there are few absolute rules because a contemporary trend is to give more detailed feedback to participants even where the primary objective is to make a pass/fail decision. The most significant point is that, in a well-designed assessment centre, the individual assessor should not have all the data on any single participant until the assessor discussion has taken place.
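To make the pooling idea concrete, here is a purely illustrative sketch. The criteria, the 1-5 scale and the ratings are invented, and the mechanical average is only a stand-in for the consensus that assessors actually negotiate in the discussion; the point is that no single assessor's observations cover every criterion, and only the pooled view does.

```python
# Illustrative sketch only: pooling partial observations at the
# assessor discussion. Criteria, scale and values are invented;
# in practice final ratings are agreed by discussion, not averaged.

from statistics import mean

observations = [  # (assessor, exercise, criterion, rating on a 1-5 scale)
    ("A", "group exercise", "decision making", 4),
    ("B", "in-basket",      "decision making", 3),
    ("A", "group exercise", "oral communication", 5),
    ("C", "interview",      "oral communication", 4),
]

def pooled_ratings(observations):
    """Combine every assessor's ratings into one view per criterion."""
    by_criterion = {}
    for _, _, criterion, rating in observations:
        by_criterion.setdefault(criterion, []).append(rating)
    return {c: mean(rs) for c, rs in by_criterion.items()}

print(pooled_ratings(observations))
# {'decision making': 3.5, 'oral communication': 4.5}
```

Note that assessor A alone holds no rating for "decision making" from the in-basket, and assessor B holds none for "oral communication": exactly the situation the text describes, where no individual has all the data on a participant before the assessor discussion.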

Where did these centres come from?

It is a little known fact that assessment centres were invented in Europe not, as is commonly supposed, in America. Although it is probably true to say that most of what is available from consultants in the private sector is heavily influenced by America, there is a parallel European tradition that still exists to a large extent in the public sector. More of this later. For reasons connected with the subsequent careers of the researchers involved, it is probable that the best known precursor to what we now know as assessment centres is the Admiralty Interview Board. The Board started in 1942 and followed similar developments that took place in other branches of the armed forces, particularly the War Office Selection Board (WOSB) in the army, itself preceded by a similar approach to officer selection in Germany.
The same period saw significant development take place in the field of psychometric testing. Both of these techniques were found to give significantly better results than the almost universal method of selection: the interview. The words ‘significantly better’ will be elaborated on in later chapters. For the moment we’ll consider why these developments took place under the exigencies of war.
One of the ironies of war is that technology advances in leaps and bounds propelled, as it were, by the extreme and urgent need to overcome one’s adversary by applying science. In the Second World War at least, the same was true for the application of the nascent science of psychology, to some extent because of the advances in other technologies.
When conscription started in earnest, all branches of the forces were faced with two key facts in relation to the ranks of serving men. The first was that while there was still a need for infantrymen and deck-hands, many more people would be involved in operating advanced equipment. Secondly, it was clearly too expensive and too time-consuming to wait until a course of training was over to find that someone could not successfully operate a radio transmitter. A way had to be found of identifying people who could at the very least benefit from a course of training and had the potential to develop their skills. Hence the rapid development of what we now know as aptitude tests.
In relation to the selection of officers, the situation was somewhat different. The prevailing culture relied on an assumption that a person's background was adequate preparation to lead other people. Although that background was often understood to be a matter of social class, it was also true that success in the ranks was a significant route to a commission. Although officers did receive some training, very little thought was given to understanding the capabilities that were required to lead other people in the prevailing conditions. In short, there was an embarrassingly high incidence of inexperienced officers being ‘returned to unit’ because of some perceived or actual failure in the field. The psychologist's contribution here was to develop a selection mechanism which considerably reduced that problem. There are two key features that mark this procedure as the forerunner of the modern assessment centre: the study of behaviour as an indicator of success and the use of multiple inputs of evidence to the selection decision.

Tracing the growth of assessment centres

Although the War Office Selection Boards were not universally accepted at first, they were able to demonstrate improvements in the selection of officer candidates for training versus the previously existing selection boards. The first significant validation study (Vernon and Parry, 1949) was able to comment that the ‘Army was led to believe it was . . . getting the best possible officers’.
The next recognisable step takes us to the United States, again in a military context, where in 1944 the Office of Strategic Services (OSS), the forerunner of the CIA, adopted the method for the selection of intelligence agents. It is this event that is often thought to be the birth of modern assessment centres. Although this is not strictly true, later developments in methodology pioneered by the CIA provided the kind of simulations and content that are now commonly practised.
The difference between the British and American approaches still influences the style and content of assessment centres. To a considerable extent, assessment centres conducted in the public sector are identifiable as direct descendants of the WOSB or Civil Service Selection Board (CSSB) approach (described below). In the private sector the style is more like that developed by the OSS.
The ‘British’ approach would involve a number of interviews carefully constructed to avoid overlap, unstructured discussions/debate on a topic, a piece of lengthy written work and a number of practical/physical exercises in which each candidate is assigned to lead the others in solving a problem. By contrast, the ‘American’ approach does not assign leadership; discussions are prestructured and often require a candidate to take on an assigned role, bargaining or negotiating resources in some way. American assessment centres are more likely to include one-on-one roleplays and to use an in-basket rather than a lengthy written exercise.
On this side of the Atlantic Ocean the first civilian application was in the creation of the CSSB, which was used to assess the suitability of candidates for the fast-stream appointments in both the Home Civil Service and the Diplomatic Service. Initiated in 1945, the CSSB has operated continuously since that time. It was originally set up because the previous selection procedure relied heavily on educational attainment, clearly inappropriate for a generation of people who had been engaged in fighting the war rather than studying. Naturally there have been developments since then but the main components remain much the same. Exercises were designed to resemble the work of a senior civil servant, including sitting on a committee, writing an appreciation of a dossier, giving a short talk, handling a problem in committee. In addition candidates complete a battery of ability tests, are assessed by their peers, complete questionnaires and are interviewed by three different people.
The next noticeable development in civilian application was the use of assessment centres by the US telephone company, AT&T, which developed a longitudinal study of management progress. Starting in the early 1950s, the company’s objective was to identify those people who would have the capability of progressing to a managerial career, regardless of educational attainment and previous background. This work has been heavily influential in two directions. Firstly, it has been a substantial source of data for validating the utility of the method. Amazingly, the data gathered at the time of the centre, in the form of a prediction of the grade the participant would ultimately achieve, were never released into the organisation. At periodic intervals comparisons were made between predicted grade and what was actually attained. Following publication of the results, other companies, notably in the US, started flocking to AT&T to find out what was going on and to adopt the method themselves.
The other development was that various people from AT&T decided to set up on their own to answer this demand and gave birth to the forerunner of DDI, a consultancy company that specialises in the identification and development of people’s potential. As the commercial pioneer, initially in the US and soon after in Europe, DDI’s influence is very muc...

Table of contents

  • Cover
  • Half Title
  • Title
  • Copyright
  • Contents
  • List of Figures
  • Preface to the Second Edition
  • Preface to the First Edition
  • 1 What is an Assessment Centre?
      - What is assessment centre technology?
      - Defining assessment centre technology: the key features
      - Where did these centres come from?
      - Tracing the growth of assessment centres
      - Estimated current usage
      - Where has the growth come from?
      - Costs and benefits
      - Spin-off benefits
      - Strategic use of assessment centre technology
  • 2 Getting Started
      - Why an assessment centre?
      - Implementing an assessment centre
      - Validity of assessment centres
      - Justifying the cost: utility analysis
      - Selling the concept
      - Using consultants
  • 3 Defining the Job Needs
      - Job analysis
      - Why is the job analysis being conducted?
      - Structure of a typical job analysis
      - Who to involve in the job analysis
      - Common queries about job analysis
  • 4 Designing the Assessment Centre Content
      - General issues
      - The criteria–exercise matrix
      - The exercise effect
      - Types of exercise
      - Variations in exercise design
      - Developing the exercises
      - Bespoke versus off-the-shelf exercises
      - Additional assessment methods
  • 5 Planning for the Assessment Centre
      - The variety of people to be brought together
      - The number of people involved
      - Scheduling the exercises
      - The master schedule
      - Room allocation
      - Equipment needs
      - Services at the facility
      - Briefing procedures
      - Checklists
  • 6 Assessor Training
      - Choosing assessors
      - Assessor training: objectives and duration
      - Assessor training strategies
      - Assessor training: content
      - The benefits of assessor training
      - Roleplayer training
  • 7 Running the Assessment Centre
      - Starting the event
      - Administering exercises
      - Quality control
      - Closing remarks to participants
      - Preparing for the assessor discussion
      - The assessor discussion
  • 8 Life after an Assessment Centre
      - The assessment centre report
      - Feedback
      - Actioning the development plan
  • 9 Validating the Assessment Centre
      - Qualitative validation
      - Quantitative research
      - Reliability
      - Assessment centre validity
  • 10 Development Centres
      - Assessment centres versus development centres
      - Key features of development centres
      - Growth of development centres
      - Evolution of development centres
      - Ongoing challenges for development centres
      - Validating development centres
      - The role of 360° feedback in development centres
      - Future prospects for development centres
  • 11 Current Issues and Future Trends
      - The use of technology
      - New ingredients
      - The construct validity debate
      - Equal opportunities and diversity
      - Cultural issues and international centres
      - What next?
  • Appendix: Design, Implementation and Evaluation of Assessment and Development Centres
  • Bibliography
  • Index

Frequently asked questions

Yes, you can cancel anytime from the Subscription tab in your account settings on the Perlego website. Your subscription will stay active until the end of your current billing period. Learn how to cancel your subscription
No, books cannot be downloaded as external files, such as PDFs, for use outside of Perlego. However, you can download books within the Perlego app for offline reading on mobile or tablet. Learn how to download books offline
Perlego offers two plans: Essential and Complete
  • Essential is ideal for learners and professionals who enjoy exploring a wide range of subjects. Access the Essential Library with 800,000+ trusted titles and best-sellers across business, personal growth, and the humanities. Includes unlimited reading time and Standard Read Aloud voice.
  • Complete: Perfect for advanced learners and researchers needing full, unrestricted access. Unlock 1.4M+ books across hundreds of subjects, including academic and specialized titles. The Complete Plan also includes advanced features like Premium Read Aloud and Research Assistant.
Both plans are available with monthly, semester, or annual billing cycles.
We are an online textbook subscription service, where you can get access to an entire online library for less than the price of a single book per month. With over 1 million books across 990+ topics, we’ve got you covered! Learn about our mission
Look out for the read-aloud symbol on your next book to see if you can listen to it. The read-aloud tool reads text aloud for you, highlighting the text as it is being read. You can pause it, speed it up and slow it down. Learn more about Read Aloud
Yes! You can use the Perlego app on both iOS and Android devices to read anytime, anywhere — even offline. Perfect for commutes or when you’re on the go.
Please note we cannot support devices running on iOS 13 and Android 7 or earlier. Learn more about using the app
Yes, you can access Assessment and Development Centres by Iain Ballantyne,Nigel Povah in PDF and/or ePUB format, as well as other popular books in Business & Business General. We have over one million books available in our catalogue for you to explore.