1 Introduction
The essential aim of science is to produce and communicate scientific knowledge. As Merton stated:
“… for science to be advanced, it is not enough that fruitful ideas be originated or new experiments developed or new problems formulated or new methods instituted. The innovations must be effectively communicated to others. That, after all, is what we mean by a contribution to science – something given to the common fund of knowledge. In the end, then, science is a socially shared and socially validated body of knowledge. For the development of science, only work that is effectively perceived and utilized by other scientists, then and there, matters.”
Scientific research is an information-producing activity (Nalimov and Mulchenko, 1969), the essence of which is communication (F. Crick, in Garvey, 1979). The factors acting in scientific communication form a highly complex system.
The definition of scientometrics focused on the study of scientific information is given by Braun et al. (1987): “Scientometrics analyzes the quantitative aspects of the generation, propagation, and utilization of scientific information in order to contribute to a better understanding of the mechanism of scientific research activities.”
In my definition, scientometrics is a field of science dealing with the quantitative aspects of people or groups of people, matters and phenomena in science, and their relationships, but which do not primarily belong within the scope of a particular scientific discipline (Vinkler, 2001). The aim of scientometrics is to reveal characteristics of scientometric phenomena and processes in scientific research for more efficient management of science. Kepler (1597) stated that “The mind comprehends a thing the more correctly the closer the thing approaches toward pure quantity as its origin”, underlining the importance of the application of scientometrics in practice.
Scientometrics may belong to the discipline of “the science of science” (Bernal, 1939; Price, 1963; Merton, 1973). The term “the science of science” may be understood, however, as indicating a discipline that is superior to others. In this respect, the relationships between scientometrics and other disciplines would be similar to that of philosophy, as had been assumed earlier. But scientometrics should not be regarded as a field “above” other scientific fields: scientometrics is not the science of sciences but a science on science for science.
As with all scientific disciplines, scientometrics involves two main approaches: theoretical and empirical. Both theoretical and empirical studies are concerned primarily with the impact of scientific information.
The term “evaluative bibliometrics” was coined by Narin (1976). He was the first to summarise research performance indicators based on previous publications. The processes in science and scientific research, however, involve non-bibliometric data as well, such as human capacity, grants, the cost of equipment, etc. Therefore, I argue for the application of the term “evaluative scientometrics”, which may be regarded as a special field within scientometrics. The term “bibliometrics” here is concerned primarily with measuring the quantitative aspects of publications, whereas scientometrics represents a broader view.
An important step on the road to the development of evaluative scientometrics was made by Martin and Irvine, who applied several input and output indicators and developed the method of converging partial indicators for evaluating research performance of large research institutes (Martin and Irvine, 1983, 1984; Irvine and Martin, 1984; Martin, 1996). I agree with the conclusion drawn by Martin: “…all quantitative measures of research are, at best, only partial indicators – indicators influenced partly by the magnitude of the contribution to scientific progress and partly by other factors. Nevertheless, selective and careful use of such indicators is surely better than none at all. Furthermore, the most fruitful approach is likely to involve the combined use of multiple indicators.” (Martin, 1996).
Braun et al. (1995) introduced several sophisticated indicators for studying publications of particular countries. Moed et al. (1985a, 1985b) and van Raan (2004) provided a standardised method for evaluating publications of research teams at universities. I have also developed several indicators and methods for assessing the publications of research institutes and teams (Vinkler, 2000b).
According to Kostoff (1995) the “… bibliometric assessment of research performance is based on one central assumption: scientists who have something important to say do publish their findings vigorously in the open international journal (‘serial’) literature.” In his opinion: “Peer review undoubtedly is and has to remain the principal procedure of quality judgment.” This may be true, but we can easily prove that most indicators of evaluative scientometrics are based directly or indirectly on particular expert reviews (e.g. acceptance or rejection of manuscripts, referencing or neglecting publications).
Scientific information may be regarded as goods (Koenig, 1995) with features characteristic of goods, namely value and use value. Here, “value” may be understood as scientific value, referring to the innate characteristics of information (i.e. originality, validity, brightness, generality, coherence, etc.). “Use value” refers to the applicability of information in generating new information or to its immediate application in practice. References may be considered as manifested signs of the use value of information.
Scientific information produced by individual scientists or teams is the object of an absolute competition, regardless of possible handicaps of the producers of the information, such as poor equipment, low salaries and lack of grants. In this respect, the evaluation of scientific results could be regarded as unfair.
There are no absolute scientometric methods for assessing scientific eminence. However, there are methods and indicators according to which the relative position of scientists, teams or countries can be determined within an appropriately selected system. For assessing value or use value of scientific information we have to apply reference standards that are independent of personal or institutional relations of the persons or institutes analysed.
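To make this concrete, a relative indicator of this kind can be sketched as the mean citation rate of the papers assessed divided by the mean citation rate of an appropriately chosen reference set (e.g. the papers of the field or of the journals used as the standard). The following Python sketch uses invented figures and a deliberately simple standardisation; it illustrates the principle only and is not a prescribed method.

```python
def mean_citation_rate(citations_per_paper):
    """Mean number of citations per paper for a set of papers."""
    return sum(citations_per_paper) / len(citations_per_paper)

def relative_citation_rate(assessed, reference):
    """Mean citation rate of the assessed set divided by that of the
    reference standard; values above 1.0 suggest above-standard impact."""
    return mean_citation_rate(assessed) / mean_citation_rate(reference)

# Hypothetical data: citations received by a team's papers and by the
# papers of the field chosen as reference standard
team_papers = [12, 0, 3, 7, 1, 25, 4]
field_papers = [5, 2, 0, 9, 3, 1, 6, 4, 2, 8]
print(round(relative_citation_rate(team_papers, field_papers), 2))  # prints 1.86
```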
Science is multidimensional. Consequently, the performance of individuals, teams or countries should be analysed from several aspects. The different aspects of eminence can be selected, and several indicators can be found to represent these individual aspects. Therefore, composite indexes are of increasing importance.
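A composite index can be formed, for instance, as a weighted sum of partial indicators that have each been normalised to a common reference standard. The indicator names and weights in the sketch below are purely illustrative assumptions; in practice they must be chosen to reflect the aspects of eminence considered relevant.

```python
def composite_index(indicators, weights):
    """Weighted sum of partial indicators, each already expressed
    relative to an appropriate standard (1.0 = the standard level)."""
    assert set(indicators) == set(weights), "every indicator needs a weight"
    return sum(weights[name] * indicators[name] for name in indicators)

# Hypothetical partial indicators for a team (relative to field standards)
partial = {"publications per researcher": 1.3,
           "relative citation rate": 0.9,
           "share of papers in leading journals": 1.1}
weights = {"publications per researcher": 0.3,
           "relative citation rate": 0.5,
           "share of papers in leading journals": 0.2}
print(round(composite_index(partial, weights), 2))  # prints 1.06
```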
One of the main paradoxes of scientometric evaluation methods is that scientometric relationships prove to be valid statistically only for large (whole) sets of publications, but they will be applied for assessing part-sets with significantly lower numbers of papers. The relationships that hold, for example, for a set of journal papers of a whole field or subfield will be applied to papers on a particular topic. Therefore, “microscientometrics”, i.e. studies on the level of a team or laboratory, subfield or journal, are highly relevant. Characteristics of “whole” scientometric systems may not be identical to those of sub-systems. Consequently, each assessment may be regarded as a special exercise. There are no detailed recommendations on how to assess, for example, the publications of a team. There are, however, general relationships and conclusions based on case studies that could be taken into account.
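The statistical nature of this problem can be illustrated with a simple simulation (the data are entirely hypothetical; the only assumption is a skewed citation distribution): the mean citation rate of small part-sets scatters widely around the mean of the whole set, so indicators calculated for a single team or topic must be interpreted with caution.

```python
import random

random.seed(1)

# A skewed "whole set" of citation counts: many lowly cited papers, few highly cited
whole_set = [int(random.paretovariate(1.5)) - 1 for _ in range(10000)]
whole_mean = sum(whole_set) / len(whole_set)

# Mean citation rates of randomly drawn part-sets of different sizes
for size in (20, 200, 2000):
    part = random.sample(whole_set, size)
    print(f"part-set of {size:5d} papers: mean = {sum(part) / size:.2f} "
          f"(whole set: {whole_mean:.2f})")
```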
Science policy-makers, both on a national and on an institutional level, frequently demand more than scientometricians can offer. Therefore, experts in scientometrics have a responsibility to draw attention to the applicability, limitations and uncertainties of the methods applied. Nevertheless, aestimare necesse est (assessing is necessary), both for the scientists themselves and for the leaders of scientific organisations, but it is also required of society.
Scientometrics covers different areas and aspects of all sciences. Therefore, its laws, rules or relationships cannot be regarded as being as exact (“hard”) as those of the natural sciences, but also not as lenient (“soft”) as those of some social science disciplines. Scientometric relationships may be considered as statistical relationships, which are primarily valid for larger sets but with necessary limitations. The Lotka and Bradford laws, for example, may be regarded as trends rather than strict rules. The “constants” applied in the corresponding equations depend strongly on the individual systems analysed.
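As an example of what “trend rather than strict rule” means: Lotka’s law states that the number of authors producing n papers is roughly proportional to n to the power of minus a, with a close to 2. Fitting the exponent to a concrete data set (here an invented productivity distribution) typically yields a value, and a proportionality constant, that differ from one system to another. A minimal fitting sketch:

```python
import math
from collections import Counter

def lotka_exponent(papers_per_author):
    """Least-squares fit of log(number of authors with n papers) against
    log(n); returns the fitted Lotka exponent a in  authors(n) ~ C / n**a."""
    counts = Counter(papers_per_author)
    xs = [math.log(n) for n in counts]
    ys = [math.log(a) for a in counts.values()]
    k = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (k * sxy - sx * sy) / (k * sxx - sx * sx)
    return -slope

# Invented data: number of papers written by each author of a hypothetical field
data = [1] * 600 + [2] * 150 + [3] * 70 + [4] * 35 + [5] * 25 + [6] * 15 + [8] * 8 + [10] * 5
print(round(lotka_exponent(data), 2))  # close to, but not exactly, 2
```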
The factors influencing scientometric indicators are interdependent. Consequently, separating the effects of, for example, the developmental rate of fields and the mean number of references on the citedness of papers or on the growth of the Garfield (Impact) Factor (GF) seems hardly possible. One of the main difficulties of scientometric assessments is that both different and similar bibliometric factors may act in different ways in the different scientific fields or subfields.
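For reference, the GF mentioned here corresponds to Garfield’s journal impact factor: the citations received in a given year by the papers a journal published in the two preceding years, divided by the number of those papers. The sketch below computes it from invented figures.

```python
def garfield_factor(citations_to_cited_year, papers_by_year, year):
    """Garfield (Impact) Factor of a journal for `year`: citations received
    in `year` by papers published in the two preceding years, divided by
    the number of papers published in those two years."""
    cited_years = (year - 1, year - 2)
    citations = sum(citations_to_cited_year.get(y, 0) for y in cited_years)
    papers = sum(papers_by_year.get(y, 0) for y in cited_years)
    return citations / papers

# Invented figures: citations received in 2010, broken down by the
# publication year of the cited papers, and papers published per year
citations_2010 = {2009: 480, 2008: 450}
papers = {2009: 215, 2008: 195}
print(round(garfield_factor(citations_2010, papers, 2010), 2))  # prints 2.27
```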
The basic assumptions of evaluative scientometrics – that the information unit of the sciences is the scientific paper and the unit of impact is the citation – are only crude, statistical approximations. Nevertheless, by applying these simple assumptions, we may reveal basic features of communication processes in science that are of both theoretical and practical significance.
The numbers of publications and citations of individuals, laboratories or countries are easily available via international data banks [e.g. Science Citation Index, Institute for Scientific Information (SCI, ISI), Thomson Reuters]. These numbers resemble a gun on the stage of a theatre, which must sooner or later be fired. Similarly, publications and citations, as they are available, will sooner or later be objects of some kind of calculation. The aim and method of application depend on the potential of the corresponding actor.
Several researchers in various disciplines and science policy-makers at different levels assume that dealing with scientometrics needs no previous education, and that practical experience obtained in a relatively narrow field is sufficient. This contradicts the most recent view that scientometrics can be regarded as an institutionalised scientific discipline. This may be verified by the establishment of journals devoted to the discipline, namely Scientometrics in 1978 and the Journal of Informetrics in 2007, and of the International Society for Scientometrics and Informetrics in 1993 (Berlin). The regular and successful conferences and meetings held on scientometrics and informetrics, the distinction (Price medal) awarded biennially to one eminent scientist in the field, the several monographs published on different scientometric topics, and the scientometric and informetric teams working at universities worldwide all indicate the successful institutionalisation of research activities on scientometrics and informetrics.
Several scientometric indicators have been suggested by authors publishing in scientometric and non-scientometric journals. Most researchers prefer writing papers to reading them, and therefore several indices, although they are essentially similar, have been renamed and reintroduced. I have tried here to trace back all original publications, and apologise if any relevant reference has been inadvertently omitted.
I will try to describe scientometric phenomena from the viewpoint of evaluative scientometrics, using indicators calculated by simple arithmetical and statistical methods, and to avoid sophisticated mathematics in order to offer a clear presentation to all those interested in the field.
My main aim is to provide information on elaborating publication assessment methods that are easily applicable in practice. The work was initiated by the directors of the Chemical Research Center of the Hungarian Academy of Sciences, Professors János Holló, Ferenc Márta and Gábor Pálinkás, who wished to promote the application of quantitative scientometric indicators for establishing science management measures. I am grateful for their support.
It is clear that any publication assessment method must cover the amount of scientific information produced by the scientists evaluated, the eminence of the publication channels used, and international acknowledgement of the results published. Consequently, scientometric methods and indicators for evaluating the aspects mentioned were elaborated. In support of the methods and indicators suggested, studies on the general and specific bibliometric characteristics of information in the pertinent scientific fields were made. Application of scientific journals and papers, references and citations requires the study of the characteristics and relationships between these items. Therefore, the following topics are tackled here: basic categories of scientometrics, type and characteristics of scientometric indicators, publication growth of science, scientific eminence of journals, ageing of scientific information, scientometric indicators for the assessment ...