
Evaluation of Reference Services
About this book
Library authorities address the growing significance of reference services and the corresponding need to evaluate those services in order to ensure professionalism and efficiency.
EVALUATING REFERENCE SOURCES
Developing Criteria for Database Evaluation: The Example of Women's Studies
Since the advent of computerized databases as major sources of information, the evaluation of these tools has been a significant area of investigation by librarians and information specialists. The proliferation of both individual databases and search systems and the high costs associated with utilizing these services have necessitated careful review and comparison to ensure the best use of one's resources. Such studies have typically followed two directions: the evaluation of information retrieval systems as a whole and the detailed comparison of individual databases within a subject field. In the former case explicit criteria and methods have been devised and consistently applied, whereas in the latter there is greater variation in methodology and narrower use of formal criteria. This paper will review the literature addressing the second problem, the evaluation of database content. Using broad principles identified in that literature, the paper describes the development and application of specific criteria for determining the usefulness of various bibliographic databases for searches in the field of women's studies.
The need for criteria development became apparent during discussions among women's studies librarians and educators interested in creating an automated file of women's studies materials. A series of approaches to improve access to these materials was being considered in special libraries and research centers. Gaps in coverage of the field and inadequacies in the indexing of existing print and automated sources were the focus of librarians within the American Library Association (ALA), while the need for a coordinated listing of curricular materials was the impetus for proposals from university-based educators. At the same time, the libraries of the Business & Professional Women's Foundation and of Catalyst, Inc., were taking steps to automate their catalogs, which in both cases provide access to collections in the area of women and work and women's economic issues. These diverse groups are now working on several fronts with the support of the National Council for Research on Women (NCRW), an umbrella organization of both independent and academically-based institutions devoted to research and programming in the field of women's studies.1
The basic assumptions underlying these proposals are that traditional reference tools do not provide adequate access to information in the growing interdisciplinary domain commonly called women's studies, and that specialized tools are at a scattered and early stage of formation. An excellent analysis of the problems encountered in women's studies and feminist research is given by Detlefsen.2 She elaborates on five aspects: first, the conceptual differences between materials on women, feminist materials, and women's studies materials; second, the need for highly interdisciplinary approaches to the topic; third, the serious terminology and language biases and barriers; fourth, the lack of computer access to the indexes of choice for women's and feminist information; fifth, the hoped-for advent of new projects in this area, as alluded to above.
To understand the nature of current gaps in coverage and to establish a framework for developing new systems of access, an ad-hoc task force within ALA decided to evaluate existing bibliographic databases for their indexing of topics related to women's studies. Reviewers needed a uniform set of guidelines to be able to synthesize and compare the reports. At issue was not only coverage of core literature, but indexing language and certain policies of database producers. This paper represents the author's effort to prepare appropriate guidelines for evaluation, drawing upon her previous research and committee work in the evaluation of reference services and sources. Individual database critiques written in accordance with these guidelines could then be used to support grant proposals for new computerized services in this field.
DATABASE REVIEW LITERATURE
References to content evaluation of databases are found throughout the information science literature, but there have been few codifications of the principles. System-level studies usually consider factors like indexing but are not as concerned with individual files, and descriptions of the files rely on implicit evaluation standards which themselves are rarely examined. Conventional guidelines for reviewing reference books are also useful but need to be supplemented to address features unique to automated sources. A brief survey of this literature shows the varied approaches to content evaluation, leading to a group of documents that define certain core areas. Generally communicated as a series of questions or considerations, these core areas are what is meant by this author's use of the words “guidelines” or “criteria.” Rarely couched in a quantitative fashion, such considerations cannot really be used as standards but rather as categories to assess the value of a file.
From the outset, librarians must be aware of the service issues raised by automated reference sources. Nichol identifies a number of concerns, for example standardization, vendor contracts, training, and duplication.3 She discusses criteria for selecting a retrieval system, some of which apply to databases: data reliability, currency, form of displays. Although the librarian's responsibility to apply subject knowledge in choosing files is stressed, no specific guidelines are given. A logical companion to Nichol is a bibliography by Shroder listing basic citations in areas such as finance, equipment, training, and outreach.4 The section on comparing and rating databases includes references demonstrating the two trends described at the outset of this paper but does not cite a basic source for content evaluation methods or criteria.
From these overviews of automated reference services one moves to the specific study of database products. Stern's lengthy summary partially approaches content evaluation, but tends to focus on narrow technical issues or direct database comparisons, skirting the problem of developing common guidelines.5 He concentrates on citation analysis in a broad sense and technical design factors for database creators, highlighting methods such as overlap studies or the use of test documents. Stern deplores the lack of reliable experimental design for database evaluation but does not discuss the existence of guidelines or criteria for content evaluation. Of more practical use is Pugh and John's listing of the many overlap studies and system comparisons.6 These apply various evaluation techniques but do not critically define them. Most of the studies look at journals covered and type of indexing, but none attempts to generalize the underlying criteria.
Not only are there choices among databases in the same discipline, but interdisciplinary research such as that done in women's studies may require using several files to provide a complete response. Goodyear and Gardner searched the topic of abortion in four print and computerized indexes to see if journals outside the primary field of the index were being covered.7 The results show very poor cross-discipline coverage for this topic which encompasses medicine, psychology, sociology, politics, law, and philosophy. Yerkey used the cross-file capabilities of the BRS system to identify clusters of databases relevant to particular topics, but remained within the scope of traditional disciplines.8 Both articles are based on coverage as a single criterion of evaluation, as determined by noting which journals are cited or which files produce the most “hits.” More detailed definitions of coverage are explored by Lancaster, who outlines a systematic way to compare databases using review articles as a source of sample citations.9 This technique can be applied to a single subject or to an interdisciplinary field across several files. Tenopir compares Lancaster's “bibliography” approach with the use of a term-oriented search profile to see which method of evaluation is more effective.10 Finding that both generate similar results, she recommends the profile method as easier and cheaper to use.
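To make Lancaster's bibliography approach concrete, the following minimal sketch (in Python, with entirely hypothetical citation identifiers and database contents) scores each file by the fraction of a review article's citations it indexes; Tenopir's profile method would instead match records against a term-oriented search profile.

```python
# Minimal sketch of Lancaster's bibliography-based coverage comparison:
# citations drawn from a review article serve as a test set, and each
# database is scored by the fraction of that set it indexes. All names
# and records here are hypothetical; a real study would search each
# vendor's system for the sample citations.

SAMPLE_CITATIONS = {
    "smith-1980-abortion-law",
    "jones-1981-women-work",
    "lee-1979-health-policy",
    "davis-1982-family-sociology",
}

# Hypothetical indexing records for three files.
DATABASE_CONTENTS = {
    "File A (sociology)": {"davis-1982-family-sociology", "jones-1981-women-work"},
    "File B (psychology)": {"lee-1979-health-policy"},
    "File C (medicine)": {"lee-1979-health-policy", "smith-1980-abortion-law"},
}

def coverage(db_citations: set[str], sample: set[str]) -> float:
    """Fraction of the sample citation set indexed by the database."""
    return len(db_citations & sample) / len(sample)

for name, contents in DATABASE_CONTENTS.items():
    print(f"{name}: {coverage(contents, SAMPLE_CITATIONS):.0%} of sample covered")
```

However the sample is gathered, the arithmetic of the comparison reduces to this set intersection, which is why an interdisciplinary topic such as abortion can expose poor cross-discipline coverage so starkly: no single file intersects much of the sample.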
The tendency to focus on coverage as a criterion may be due to the difficulty of comparing other factors. The lack of standardization among databases makes it awkward to devise a set of evaluation guidelines while at the same time making their use even more important. However, an overall approach to content evaluation has been suggested by several principal authorities in the development of automated library services.
SOURCES
Taken chronologically, Williams' 1975 article is the first of these sources to give a general basis for establishing a series of comparable database reviews.11 She discusses retrieval systems, individual databases, and service centers. Although our concern is database content, it is clear that system features and auxiliary services affect the usefulness of those contents. Williams outlines the following areas of evaluation:
1. general subject scope and orientation of the database, types of materials covered, and completeness of coverage;
2. time lapse before citations appear in the database as compared to the hard-copy sources;
3. indexing and coding practices, such as keyword searching, controlled vocabulary, enhanced titles, or other special codes;
4. size and growth rate of the database and its corresponding print version (if any).
In addition, Williams questions the services provided by database producers/vendors: document delivery, availability of manuals and thesauri, form of print-outs or displays, number and types of access points, and forms of search logic. These services are central to effective searching but may not always be under the control of the database producer. If the retrieval system (e.g., BRS, DIALOG) determines certain capabilities, these must be noted when making pure "content" comparisons.
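One way guidelines like Williams' could yield the uniform, comparable reports the ALA task force needed is as a structured review record. The sketch below is a hypothetical illustration only; the field names are invented here, and it simply arranges her four content areas, the producer services, and the system-dependent features into a single record per database.

```python
# Hypothetical structured review form reflecting Williams' four content
# areas plus producer/vendor services. Separating producer-controlled
# features from system-determined ones keeps "content" comparisons pure.

from dataclasses import dataclass, field

@dataclass
class DatabaseReview:
    name: str
    subject_scope: str             # 1) scope, orientation, completeness of coverage
    update_lag_months: float       # 2) time lapse vs. the hard-copy source
    indexing_practices: list[str]  # 3) keywords, controlled vocabulary, enhanced titles
    size_records: int              # 4) size of the file
    annual_growth_records: int     # 4) growth rate
    producer_services: list[str] = field(default_factory=list)  # manuals, thesauri, delivery
    system_dependent: list[str] = field(default_factory=list)   # set by BRS/DIALOG, not producer

review = DatabaseReview(
    name="Example File",
    subject_scope="sociology; partial coverage of women's studies journals",
    update_lag_months=3.0,
    indexing_practices=["controlled vocabulary", "keyword searching"],
    size_records=250_000,
    annual_growth_records=20_000,
    producer_services=["printed thesaurus", "document delivery"],
    system_dependent=["search logic", "display formats"],
)
print(review.name, "-", review.subject_scope)
```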
F.W. Lancaster has thoroughly treated the evaluation of library systems and services in a number of works. In the latest edition of his book on information retrieval systems, he devotes chapters to criteria for evaluating information services, techniques for conducting evaluations, and the evaluation of machine-readable databases.12 Lancaster places databas...
Table of contents
- Cover
- Half Title
- Title Page
- Copyright Page
- Table of Contents
- Introduction
- Overview of Evaluation
- Question, Answer and Librarian
- Those Who Are Served
- Other Approaches
- Evaluating Reference Sources