Part I
Coin debasement: approaches and explanations
1
The scientific analysis of coinage: expectations, realities, problems and potential
Matthew Ponting
The first aim of any piece of research is to define the question: what do we want to know?
When that question concerns debasement, it is assumed that what we need to know is simply the silver or gold content of the coinages under scrutiny. To do this, however, we need to define what we mean by gold or silver content. Since we are mainly concerned here with the debasement of silver coins, I will limit the discussion to them, but similar observations can be made for gold. Firstly, what do we understand by ‘silver’? Silver is a chemical element that appears next to gold, palladium and copper in the periodic table; it has a melting point of 962°C and a standard electrode potential of +0.8 V. It is also soft, with a Mohs hardness of only 2.5. Silver occurs very rarely as a native metal, and it is generally accepted that whatever native silver had originally existed in the Mediterranean world had all been used up by the early first millennium BC at the latest.1 So, for coinage production, the use of native silver can be largely ruled out.

Simple silver ores, the so-called dry ores, and silver reclaimed from the parting of native electrum are possible sources, although there is some debate regarding how common the reclamation of silver from the parting process would have been in antiquity.2 Jarosite ores and argentiferous lead ores are the most likely primary sources of silver from the inception of coinage to the early medieval period. Silver metal extracted from any of these sources would not have been elementally pure: it would have contained traces of other metallic elements that are geochemically associated with it, primarily gold, lead and bismuth, although other elements may be present depending on the ores exploited. Consequently, when trying to measure the amount of ‘silver’ in a coin, the figure should include not only the elemental silver but also the associated gold, lead and bismuth, and the presence of any other elements associated with specific ore types should be noted. This ‘silver bullion’ is what would have been considered pure silver in antiquity and the medieval period.
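To make the definition concrete, here is a minimal sketch of assembling a ‘bullion’ figure from elemental results; all concentrations below are invented purely for illustration:

```python
# Hypothetical elemental concentrations (wt%) for a single analysed coin;
# real values would come from the instrument, not from this script.
coin = {"Ag": 92.1, "Au": 0.45, "Pb": 0.80, "Bi": 0.12, "Cu": 6.5}

# 'Silver bullion' in the ancient sense: elemental silver plus the
# gold, lead and bismuth that accompany it from the ore.
bullion = sum(coin[el] for el in ("Ag", "Au", "Pb", "Bi"))

print(f"Elemental silver: {coin['Ag']:.2f} wt%")
print(f"Bullion (Ag + Au + Pb + Bi): {bullion:.2f} wt%")
```

On these invented figures the ‘bullion’ content (93.47 wt%) is appreciably higher than the elemental silver alone (92.10 wt%), which is precisely the distinction the paragraph above draws.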
There are many analytical techniques that can measure the silver, gold, lead and bismuth in ancient silver; some can be applied directly to the surface of a coin, whilst others require a small sample of metal to be removed from the coin and then dissolved. Most analytical techniques used today are forms of spectroscopy, in which different energies are used to excite the atoms of a sample. This usually involves the movement of electrons from a higher to a lower energy state and the emission of the excess energy as the atom returns to the ground state.3 Such techniques rely on the acquisition of emitted energy over a period of time, or on the repeated measurement of emissions a set number of times. These measurements will inevitably have some degree of error associated with them, and it is important that this error is quantified.

In fact, two standard metrics are crucial if a judgement is to be made about the quality of the measurements being taken: precision and accuracy. Precision is the reproducibility of any given measurement: if you measure the same thing several times, by how much will the readings differ? Instrument manufacturers usually provide estimates of the precision of a single reading, which is usually the average of three readings, expressed as the relative standard deviation (RSD). Where some idea of precision is needed over a longer period, say the duration of a project, it is usual to include the measurement of an alloy of known composition in every batch of samples analysed. These reference alloys are known as standard reference materials (SRMs) and are available for most materials; some specialist SRMs have also been commissioned to match alloy compositions no longer found in the modern repertoire. When several measurements of an SRM have been made across the life of a project, it is possible to estimate the degree of variation to which the data are subject and, therefore, to judge where a small difference in composition may be significant and where it is not.
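A minimal sketch of the precision calculation follows, assuming hypothetical repeat readings of an SRM taken one per analytical batch; the values and batch structure are invented for illustration:

```python
import statistics

# Hypothetical repeat readings (wt% Ag) of the same SRM, one per
# analytical batch across the life of a project.
srm_readings = [93.9, 94.3, 94.1, 94.0, 94.4, 93.8]

mean = statistics.mean(srm_readings)
sd = statistics.stdev(srm_readings)   # sample standard deviation
rsd = 100 * sd / mean                 # relative standard deviation, in %

print(f"mean = {mean:.2f} wt%, SD = {sd:.2f}, RSD = {rsd:.2f}%")
# A difference between two coin groups smaller than this spread
# cannot safely be read as a real difference in composition.
```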
The same SRMs can be used to estimate the accuracy of the analyses, since they have a known composition that is rigorously checked by the manufacturers (usually by several different accredited laboratories worldwide). They can be used to calculate the difference between the measured values obtained by the instrument and the values quoted by the manufacturer. This provides an estimate of the accuracy of the analyses that can be used alongside the estimate of precision to judge the quality of the data and, therefore, the confidence that can be placed in their interpretation.
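On the same footing, accuracy can be sketched as the difference between the project mean for the SRM and the manufacturer’s certified value; both figures below are invented:

```python
# Hypothetical certified value for the SRM and the mean of the
# measured values obtained over the project (wt% Ag).
certified = 94.20
measured_mean = 94.08

bias = measured_mean - certified          # absolute bias
relative_bias = 100 * bias / certified    # bias as % of certified value

print(f"bias = {bias:+.2f} wt% ({relative_bias:+.2f}% of certified value)")
```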
Data tables sometimes include zeros for elemental concentrations, a figure that is unlikely to be correct: a zero reading means only that the instrument cannot detect the element in question. It does not mean that none of that element is present; it may indeed be present, but at a concentration that is not detectable by the instrument being used. For this reason, it is important that an estimate of the limits of detection (LoD) be calculated. The way this is done differs by technique, but the level below which a concentration will no longer be detected by an instrument must be known. If a low value appears to be present in one group of samples and not in another, it is important to determine whether this reflects a real difference and not simply different LoDs between batches.
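As noted, the calculation differs by technique; one widely used convention estimates the LoD as three standard deviations of repeated blank measurements. A sketch under that convention, with invented blank readings:

```python
import statistics

# Hypothetical repeated readings of a blank (expressed in wt%).
blank_readings = [0.003, 0.005, 0.002, 0.004, 0.006, 0.003]

# A common convention: LoD = 3 x standard deviation of the blank.
lod = 3 * statistics.stdev(blank_readings)

def report(value: float) -> str:
    """Report a concentration without ever writing zero: below the
    LoD the honest statement is '< LoD', not 'absent'."""
    return f"{value:.3f} wt%" if value >= lod else f"< LoD ({lod:.3f} wt%)"

print(report(0.012))   # above the LoD: reported as measured
print(report(0.002))   # below the LoD: reported as '< LoD', never 0
```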
Different analytical techniques provide results of different quality, so the choice of technique is crucial. If results from different techniques are to be compared, it is vital that comparable SRM data and LoDs are available for each to ensure comparability: a concentration that is measurable with good precision by one technique could well be below the LoD of another.
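A trivial sketch of that comparability check, with invented LoDs for two unnamed techniques:

```python
# Hypothetical LoDs (wt%) for the same element on two instruments;
# the technique labels and figures are illustrative only.
lod = {"technique_A": 0.05, "technique_B": 0.0001}

def comparable(value: float, a: str, b: str) -> bool:
    """A value is only safely comparable across two techniques if it
    sits above the detection limit of both."""
    return value >= max(lod[a], lod[b])

# 0.01 wt% is measurable by technique_B but invisible to technique_A.
print(comparable(0.01, "technique_A", "technique_B"))   # False
```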
All these things were well understood by David Walker and his colleagues in the 1970s, who ran the then-new Oxford X-ray fluorescence (XRF) milliprobe used for Walker’s seminal three-volume work, The Metrology of the Roman Silver Coinage.4 These are standard scientific ‘checks and balances’ that any competent analyst knows are important and will carry out, and they ensure the accuracy of the results obtained for the material presented to the instrument. XRF was then considered the new ‘holy grail’ of archaeological science (rather as laser ablation is today), and the fact that it can be used non-destructively made it much easier for scholars to obtain permission to analyse museum objects, including coins. Walker, for example, analysed over 5,000 Roman silver coins by this method, and for many years his results were accepted as an accurate account of the levels of debasement in Roman silver coinage.
The most important issue, however, is not the analytical technique itself, but how it is used and on what part of the coin. By 1993 there was a substantial body of literature discussing the fact that the interior chemical composition of coins was often quite different from that of the surface. This was largely put down to the fact that, during burial, the chemical constituents of the coin alloy react with the surrounding environment in various ways depending on their chemical reactivity. This is, of course, natural corrosion; however, evidence was beginning to emerge suggesting that many types of coin had thick layers of altered metal, too deep to be the result of normal corrosion processes. For the details of this debate and many of the relevant publications the reader is directed to two of our previous publications.5 To explo...