eBook - ePub

Judging Exhibitions

A Framework for Assessing Excellence

Beverly Serrell

About This Book

Renowned museum consultant and researcher Beverly Serrell and a group of museum professionals from the Chicago area have developed a generalizable framework by which the quality of museum exhibitions can be judged from a visitor-centered perspective. Using criteria such as comfort, engagement, reinforcement, and meaningfulness, they have produced a useful tool for other museum professionals to better assess the effectiveness of museum exhibitions and thereby to improve their quality. The downloadable resources include a brief video demonstrating the Excellent Judges process and provide additional illustrations and information for the reader. Tested in a dozen institutions by the research team, this step-by-step approach to judging exhibitions will be of great value to museum directors, exhibit developers, and other museum professionals.

Information

Publisher
Routledge
Year
2017
ISBN
9781315425795
Edition
1
Pages
198
Language
English
Category
Archaeology

PART I

Introduction

Accountability is a challenging issue for institutions that offer educational exhibitions. Assessing, judging, evaluating, critiquing, and reviewing: Each of these activities requires a different set of definitions, criteria, and methods for measuring the exhibition’s accomplishments, success, excellence, effectiveness–or missed opportunities. Each situation also varies in the number of people involved in the process, the duration of the activity, and the fate or usefulness of the product (e.g., the report, article, discussion, or award). Too often, the processes do not provide enough input from enough people or allow enough time for reflection, leading to ephemeral rather than long-lasting results. All types of assessments are useful; many are underused; and a few have unqualified benefits for the users. Doing something–anything–is better than doing nothing. This book presents the methods, reasons, and benefits of doing one type of assessment that can have widespread positive effects on individuals who work for museums and the people who visit them.

How and Why the Framework Got Developed

Since the 1970s I’ve worked as an exhibition evaluator, served as a judge for the American Association of Museums (AAM) exhibition awards, reviewed exhibitions for museum journals, and spoken as a panel member for the exhibition critique session at AAM’s annual conference. Living in Chicago, with a sizable pool of museum practitioners close by, I wanted to create an opportunity for a group of peers to meet repeatedly over a long period of time to review and critique exhibitions and then to develop a shared set of standards for making excellent museum exhibitions. I thought that if we could have enough time to think together, we could come up with a truly new and interesting way to judge excellence through a process or product that could have a lasting and positive impact on ourselves and on our work.
In July 2000, I sent a letter to the 100-plus members of the Chicago Museum Exhibitors Group (CMEG) outlining some of the issues with the current methods of judging excellence and inviting them to volunteer to become part of an ongoing discussion. There was no client, no schedule, and no money to do this. Still, I thought it would be fun: Would anyone else like to participate?
The Research Question
If different museum professionals used the same set of standards to review the same group of exhibitions, would their reviews agree on the degree of excellence for each of the exhibitions? And if not, why?
Twenty-one people responded to the invitation, and 13 showed up for the first meeting. Over the next four months we had seven two-hour evening meetings.
The Framework for Assessing Exhibitions from a Visitor-Centered Perspective did not emerge fully formed from our heads. When we first sat down together in July 2000, I acted as the facilitator and attempted to direct the discussion. We didn’t know exactly where we were going with the topic of judging excellence or what to call ourselves. At times our discussions veered off in many directions. At one point, I asked everyone to use three different sets of criteria to judge one exhibition and report back on which one worked best. Instead, they came back with five more tools! It was like herding cats.
The process was very open-ended. We discussed the issues in a rambling, free-flowing way. There was no schedule and no deadline. We had our research question (see the sidebar above), but we did not know what the answer would be or how we would answer it. The agenda evolved as we went along.
From August to November 2000, we visited eight exhibitions located in or near Chicago. They had diverse subjects, sizes, and presentation techniques. Seven of the eight we chose were permanent exhibitions, so we could easily revisit them. If we were to publish our comments about an exhibition, we wanted other people to be able to see the exhibition themselves and compare their reactions to ours.

Eight exhibitions in five different museums were visited by Chicago judges from July to November 2000

Jade at Field Museum
What Is An Animal? at Field Museum
The Endurance at Field Museum
Otters and Oil Don’t Mix at Shedd Aquarium
Amazon Rising at Shedd Aquarium
A House Divided at Chicago Historical Society
Petroleum Planet at Museum of Science and Industry
Children’s Garden at Garfield Park Conservatory

What We Built On

Our earliest versions of exhibition standards focused on presentation issues such as design, content, and accessibility, but they did not seem to substantially improve on the existing AAM Standards for Museum Exhibitions. After struggling with different versions of AAM criteria that were primarily related to what the museum had presented in the exhibition, we took a different tack. By the end of September 2000, we had narrowed our focus to criteria that related only to the visitor’s experience. We began arguing about how to use the criteria rather than what the criteria should be. By eliminating judgments about the quality of the design and the accuracy or importance of the content, and by not attempting to judge intent, we made our task manageable and leveled the playing field: We were all visitors. We would judge exhibitions by how it felt to be in them, not by what they said about themselves in a review or in a binder of PR materials, or showed in colorful slides.
I ended up just going back and rereading the AAM Standards, and I must say that I have greatly increased respect for them after struggling with the issues myself and hearing and reading everyone’s comments.
Hannah Jennings
Rather than try to make AAM’s Standards “better,” or replace them, we came up with something that was very different (and missing from AAM’s criteria)–more focus on the visitor experience. We decided to focus exclusively on visitor-centered issues: what visitors could experience, where we could see evidence for visitors’ needs and expectations, how the exhibitions might impact them. Our criteria reflected a single question: What did the exhibition afford visitors for informal, educationally engaging exhibit experiences?
At the end of November 2000, the group had a prototype tool with fairly well-developed criteria and a protocol for using it. An article about it was published for the museum practitioner community in the National Association for Museum Exhibition’s magazine, Exhibitionist (Serrell 2001), and I presented it as a workshop at the 2001 Visitor Studies Association conference.
There still seems to be a lot of overlap between the categories, though I’m not sure how to make them distinct without thinking about it further. Also, I like the idea of using the Criteria as points of discussion rather than as a means of calculating a score.
Workshop participant, July 2001

Funding from NSF Helped

Feedback and wider scrutiny by our colleagues reinforced my idea to request a grant from the National Science Foundation (NSF) to support further development of the tool and its integration with broader educational theory and practice. We needed funding to move the tool to the next phase–doing more research on the questions of validity and reliability, getting help from an advisory board, and achieving the goal of broader acceptance, distribution, and use of the tool.
From April 2002 to September 2003, our project was supported by an NSF Small Grant for Exploratory Research. The grant allowed us to meet regularly in a central location in Chicago, pay for participants’ parking, give them a stipend for their time, help support my part as project director, fund the development and maintenance of a new Web site, and contribute toward publication of the final results.
During the 18 months of NSF-funded research (plus a six-month nonfunded extension), we reviewed, revised, tested, and clarified the Criteria and Aspects of the Framework and the layout of the handout. There were, in total, about a dozen iterations of the text and the design.
The most important difference between what we started out to do and what we completed was a shift in focus from measuring and comparing ratings of

NSF Grant Request Summary

Serrell & Associates requests an 18-month Small Grant for Exploratory Research (SGER) totaling $95,500 to conduct research that seeks a valid and reliable way for museum professionals to judge the excellence of science exhibitions in museums from a visitor-experience point of view. This is a novel and untested idea for practitioners of exhibition development in science museums. The need for this research arises from a lack of agreed-upon standards of excellence (or even competence) for science museum exhibitions. Museums that receive funding from the National Science Foundation are called upon to document the effectiveness and merit of their exhibit projects, yet they have few shared, standardized methods to help them do so. An SGER would enable Serrell & Associates to conduct a series of meetings and seminars with local (Chicago) museum professionals and a national advisory panel to facilitate the development and testing of an audience-based, peer-reviewed criteria for recognizing excellence through empirical definitions and exempla...
