Judging Exhibitions

A Framework for Assessing Excellence

eBook - ePub

  • 198 pages
  • English
  • ePUB (mobile friendly)
  • Available on iOS & Android

About this book

Renowned museum consultant and researcher Beverly Serrell and a group of museum professionals from the Chicago area have developed a generalizable framework by which the quality of museum exhibitions can be judged from a visitor-centered perspective. Using criteria such as comfort, engagement, reinforcement, and meaningfulness, they have produced a useful tool for other museum professionals to better assess the effectiveness of museum exhibitions and thereby improve their quality. The downloadable resources include a brief video demonstrating the Excellent Judges process and provide additional illustrations and information for the reader. Tested in a dozen institutions by the research team, this step-by-step approach to judging exhibitions will be of great value to museum directors, exhibit developers, and other museum professionals.



PART I

Introduction

Accountability is a challenging issue for institutions that offer educational exhibitions. Assessing, judging, evaluating, critiquing, and reviewing: Each of these activities requires a different set of definitions, criteria, and methods for measuring the exhibition’s accomplishments, success, excellence, and effectiveness, or its missed opportunities. Each situation also varies in the number of people involved in the process, the duration of the activity, and the fate or usefulness of the product (e.g., the report, article, discussion, or award). Too often, the processes do not provide enough input from enough people or allow enough time for reflection, leading to ephemeral rather than long-lasting results. All types of assessments are useful; many are underused; and a few have unqualified benefits for the users. Doing something, anything, is better than doing nothing. This book presents the methods, reasons, and benefits of doing one type of assessment that can have widespread positive effects on individuals who work for museums and the people who visit them.

How and Why the Framework Got Developed

Since the 1970s I’ve worked as an exhibition evaluator, served as a judge for the American Association of Museums (AAM) exhibition awards, reviewed exhibitions for museum journals, and spoken as a panel member for the exhibition critique session at AAM’s annual conference. Living in Chicago, with a sizable pool of museum practitioners close by, I wanted to create an opportunity for a group of peers to meet repeatedly over a long period of time to review and critique exhibitions and then to develop a shared set of standards for making excellent museum exhibitions. I thought that if we could have enough time to think together, we could come up with a truly new and interesting way to judge excellence through a process or product that could have a lasting and positive impact on ourselves and on our work.
In July 2000, I sent a letter to the 100-plus members of the Chicago Museum Exhibitors Group (CMEG) outlining some of the issues with the current methods of judging excellence and inviting them to volunteer to become part of an ongoing discussion. There was no client, no schedule, and no money to do this. Still, I thought it would be fun: Would anyone else like to participate?
The Research Question
If different museum professionals used the same set of standards to review the same group of exhibitions, would their reviews agree on the degree of excellence for each of the exhibitions? And if not, why?
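To make the research question concrete, here is a minimal sketch, in Python, of one way agreement among judges could be quantified. Everything in it is hypothetical: the judge names, the four exhibitions, the 1-4 excellence scale, and the choice of simple percent agreement are illustrative assumptions, not a method prescribed by the book.

```python
# Hypothetical illustration of inter-judge agreement (not from the book).
# Each judge rates the same four exhibitions on a 1-4 excellence scale;
# we compute the percent agreement for every pair of judges.
from itertools import combinations

# Hypothetical ratings: judge -> one rating per exhibition (1 = poor, 4 = excellent).
ratings = {
    "judge_a": [4, 2, 3, 1],
    "judge_b": [4, 3, 3, 1],
    "judge_c": [3, 2, 3, 2],
}

def percent_agreement(r1, r2):
    """Fraction of exhibitions on which two judges gave the identical rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

for (name_1, r_1), (name_2, r_2) in combinations(ratings.items(), 2):
    print(f"{name_1} vs {name_2}: {percent_agreement(r_1, r_2):.0%} agreement")
```

A real reliability study would likely use a chance-corrected statistic such as Cohen's kappa, but even this toy calculation shows what "agreeing on the degree of excellence" means operationally, and why disagreement invites the follow-up question, "And if not, why?"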
Twenty-one people responded to the invitation, and 13 showed up for the first meeting. Over the next four months we had seven two-hour evening meetings.
The Framework for Assessing Exhibitions from a Visitor-Centered Perspective did not emerge fully formed from our heads. When we first sat down together in July 2000, I acted as the facilitator and attempted to direct the discussion. We didn’t know exactly where we were going with the topic of judging excellence or what to call ourselves. At times our discussions veered off in many directions. At one point, I asked everyone to use three different sets of criteria to judge one exhibition and report back on which one worked best. Instead, they came back with five more tools! It was like herding cats.
The process was very open-ended. We discussed the issues in a rambling, free-flowing way. There was no schedule and no deadline. We had our research question (see sidebar above), but we did not know what the answer would be or how we would answer it. The agenda evolved as we went along.
From August to November 2000, we visited eight exhibitions located in or near Chicago. They were diverse in subject, size, and presentation technique. Seven of the eight we chose were permanent exhibitions, so we could easily revisit them. If we were to publish our comments about an exhibition, we wanted other people to be able to see the exhibition themselves and compare their reactions to ours.

Eight exhibitions in five different museums were visited by Chicago judges from July to November 2000

Jade at Field Museum
What Is An Animal? at Field Museum
The Endurance at Field Museum
Otters and Oil Don’t Mix at Shedd Aquarium
Amazon Rising at Shedd Aquarium
A House Divided at Chicago Historical Society
Petroleum Planet at Museum of Science and Industry
Children’s Garden at Garfield Park Conservatory

What We Built On

Our earliest versions of exhibition standards focused on presentation issues such as design, content, and accessibility, but they did not seem to substantially improve on the existing AAM Standards for Museum Exhibitions. After struggling with different versions of AAM criteria that were primarily related to what the museum had presented in the exhibition, we took a different tack. By the end of September 2000, we had narrowed our focus to criteria that related only to the visitor’s experience. We began arguing about how to use the criteria rather than what the criteria should be. By eliminating judgments about the quality of the design and the accuracy or importance of the content, and by not attempting to judge intent, we made our task manageable and leveled the playing field: We were all visitors. We would judge exhibitions by how it felt to be in them, not by what they said about themselves in a review or a binder of PR materials, or by what they showed in colorful slides.
I ended up just going back and rereading the AAM Standards, and I must say that I have greatly increased respect for them after struggling with the issues myself and hearing and reading everyone’s comments.
Hannah Jennings
Rather than try to make AAM’s Standards “better” or replace them, we came up with something very different, something missing from AAM’s criteria: a stronger focus on the visitor experience. We decided to focus exclusively on visitor-centered issues: what visitors could experience, where we could see evidence of visitors’ needs and expectations, and how the exhibitions might affect them. Our criteria asked: What did the exhibition afford visitors in the way of informal, educationally engaging exhibit experiences?
At the end of November 2000, the group had a prototype tool with fairly well-developed criteria and a protocol for using it. An article introducing it to the museum practitioner community was published in Exhibitionist, the magazine of the National Association for Museum Exhibition (Serrell 2001), and I presented it as a workshop at the 2001 Visitor Studies Association conference.
There still seems to be a lot of overlap between the categories, though I’m not sure how to make them distinct without thinking about it further. Also, I like the idea of using the Criteria as points of discussion rather than as a means of calculating a score.
Workshop participant, July 2001

Funding from NSF Helped

Feedback and wider scrutiny by our colleagues reinforced my idea to request a grant from the National Science Foundation (NSF) to support further development of the tool and its integration with broader educational theory and practice. We needed funding to move the tool to the next phase–doing more research on the questions of validity and reliability, getting help from an advisory board, and achieving the goal of broader acceptance, distribution, and use of the tool.
From April 2002 to September 2003, our project was supported by an NSF Small Grant for Exploratory Research. The grant allowed us to meet regularly in a central location in Chicago, pay for participants’ parking, give them a stipend for their time, help support my part as project director, fund the development and maintenance of a new Web site, and contribute toward publication of the final results.
During the 18 months of NSF-funded research (plus a six-month unfunded extension), we reviewed, revised, tested, and clarified the framework’s Criteria and Aspects and the layout of the handout. In total, there were about a dozen iterations of the text and the design.
The most important difference between what we started out to do and what we completed was a shift in focus from measuring and comparing ratings of

NSF Grant Request Summary

Serrell & Associates requests an 18-month Small Grant for Exploratory Research (SGER) totaling $95,500 to conduct research that seeks a valid and reliable way for museum professionals to judge the excellence of science exhibitions in museums from a visitor-experience point of view. This is a novel and untested idea for practitioners of exhibition development in science museums. The need for this research arises from a lack of agreed-upon standards of excellence (or even competence) for science museum exhibitions. Museums that receive funding from the National Science Foundation are called upon to document the effectiveness and merit of their exhibit projects, yet they have few shared, standardized methods to help them do so. An SGER would enable Serrell & Associates to conduct a series of meetings and seminars with local (Chicago) museum professionals and a national advisory panel to facilitate the development and testing of a set of audience-based, peer-reviewed criteria for recognizing excellence through empirical definitions and exempla...

Table of contents

  1. Cover
  2. Half Title
  3. Dedication Page
  4. Title Page
  5. Copyright Page
  6. Table of Contents
  7. Foreword
  8. Part I. Introduction
  9. Part II. What Is the Framework?
  10. Part III. How to Use the Framework
  11. Part IV. Theoretical Underpinnings
  12. Part V. Future of the Framework
  13. Glossary
  14. Bibliography
  15. Photo Credits
  16. Who Were the Excellent Judges, Anyway?
  17. About the Chapter Contributors
  18. Index
  19. About the Author