1
Introduction:
Approaches to Touch
and Blindness
Morton A. Heller
Eastern Illinois University
Soledad Ballesteros
Universidad Nacional de Educación a Distancia, Madrid, Spain
Research on touch has blossomed, with recent years seeing startling growth in the field. This renewed interest reflects a growing realization that fundamental perceptual problems may be solved in this arena. Researchers have approached the problems of touch and haptics from a great many directions, and this volume aims to bring a number of important contemporary approaches to the forefront. The contributors study touch and blindness from the perspectives of psychological methodology and sophisticated, state-of-the-art techniques in neuroscience.
The traditional investigation of touch and blindness has involved psychophysics; however, the historical roots of interest in the area are philosophical, psychological, and medical. The problems posed are important for a variety of theoretical reasons, since they bear on issues of intersensory equivalence and intermodal relations. Psychologists and philosophers have long wondered if we can get equivalent information through the senses of vision and touch (Freides, 1974, 1975; Ryan, 1940; Streri & Gentaz, 2003). The ecological answer is affirmative (Gibson, 1966), but not everyone has agreed with this appraisal (Revesz, 1950). Philosophers have been interested in cases where congenitally blind persons have had sight restored (e.g., Gregory & Wallace, 1963). Gregory and Wallace (1963) published a fascinating account of the restoration of sight in a congenitally blind person (also see Morgan, 1977). The assumption is that the individual's initial responses after the restoration of sight will provide some reasonable answer to questions about the equivalence of vision and touch. Thus, should persons immediately understand the configuration of a seen object that was only felt in the past, that would be an indication of intersensory equivalence. The problem, of course, is that psychologists are rarely able to gain immediate access to persons when their sight is restored. Moreover, a failure to recognize objects by sight would not be a crucial test of the notion of intersensory equivalence, since sight is rarely normal immediately after the removal of surgical bandages.
Researchers have been interested in studying haptics (touch perception) in blind people for a number of important reasons. Research with blindfolded sighted individuals may frequently underestimate the potential of touch for pattern perception (Heller, 1989, 2000; Heller, Brackett, Scroggs, Allen, & Green, 2001; Heller et al., 2002; Heller, Wilson, Steffen, Yoneyama, & Brackett, 2003). Blindness may be accompanied by increased tactile sensitivity (Sathian & Prather, chap. 8, this volume; Van Boven, Hamilton, Kaufman, Keenan, & Pascual-Leone, 2000). Blind individuals have increased haptic skill, and very low vision and late blind persons may show advantages over the sighted when pattern familiarity is controlled for (see Heller, 2000; Heller et al., 2001). Blindfolded persons lack visual guidance of haptic exploration (Heller, 1993), and it is known that sight of hand motion helps touch in a variety of tasks involving pattern perception and texture judgments (Heller, 1982; Heller et al., 2002). We also know that directing vision toward the location of a target may help a person attend to touch. A blind person told Morton Heller that it helps him concentrate on touch perception if he "looks" at his hands while feeling objects. This person is totally blind, and does not have any light perception. Researchers report empirical evidence that gaze direction can aid haptics: spatial resolution improves and reaction time is speeded when persons look at their hands (e.g., Kennett, Taylor-Clarke, & Haggard, 2001).
Visual guidance may aid touch in many ways. The benefits could be attentional, and this would be consistent with the previously mentioned comments by the late blind individual. In addition, very blurry vision may provide spatial reference information that is often lacking in blind and blindfolded conditions (see Millar, 1994). Spatial frame of reference information can help subjects interpret patterns that are defined by orientation, namely Braille patterns (Heller, 1985). Vision can be so blurry (as with visual impairment or the use of stained glass) that it is impossible to see surface features (e.g., Heller 1982, 1985, 1989). Nonetheless, this blurry vision of hand motion can aid haptic perception of form. Moreover, it is conceivable that some form information is obtained by watching one’s hand feel patterns, even when the patterns themselves are not visible. Thus, sighted subjects were able to name Braille patterns by viewing another individual touch them (Heller, 1985). The patterns themselves could not be seen, because of the effect of interposed stained glass. However, subjects were able to glean useful pattern information by observing the finger motion of another person.
Touch is an accurate and fast modality in detecting salient attributes of the spatial layout of tangible unfamiliar displays, especially their bilateral symmetry (see Ballesteros, Manga, & Reales, 1997; Ballesteros, Millar, & Reales, 1998; Locher & Simmons, 1978). Active touch is an accurate perceptual system in discriminating this spatial property in shapes and objects. Although touch is quite sensitive in dealing with flat displays, it is sometimes far more accurate and faster with 3-D objects. Moreover, studies on the discrimination between symmetrical and asymmetrical patterns underscored the reference frame hypothesis. Accuracy in the perception of symmetric two-dimensional raised-line shapes improved under bimanual exploration (Ballesteros et al., 1997; Ballesteros et al., 1998). Bimanual exploration proved superior to unimanual exploration due to the extraction of parallel shape information and to the use of the observer's body midline as a body-centered reference frame. The findings suggest that providing a reference frame in relation to the body midline helps one to perceive that both sides of a bilaterally symmetrical shape coincide. The finding was further supported in another study (Ballesteros & Reales, 2004b) designed to assess human performance in a symmetry discrimination task using new two-dimensional (raised-line shapes and raised surfaces) and three-dimensional shapes (short and tall objects). These stimuli were prepared by extending the 2-D shapes in the z-axis. The hypothesis under test was that the elongation of the stimulus shapes in the third dimension should permit a better and more informative exploration of objects by touch, facilitating symmetry judgments. The idea was that providing reference information should be more effective for raised shapes than for objects, since reference information is very poor for those stimuli when they are explored with one finger.
The results confirm this hypothesis, since unimanual exploration was more accurate for asymmetrical than for symmetrical judgments, but only for 2-D shapes and short objects. Bimanual exploration at the body midline facilitated the discrimination of symmetrical shapes without changing performance with asymmetrical ones. Accuracy for haptically explored symmetrical stimuli improved as they were extended in the third dimension, while no such trend was found for asymmetrical stimuli.
PERCEPTION IS NORMALLY MULTIMODAL
In sighted individuals, objects and space are normally perceived through multisensory input. We see the world, feel it, hear it, and smell it. It is rare that we are limited to one sense when we seek information about the world. Thus, we may use vision to guide tactual exploration (Heller, 1982), or for pattern perception. People are able to localize objects in space by looking at them and by feeling them. We may use peripheral vision for guidance of haptic exploration, and simultaneously use foveal vision for pattern perception when looking at objects that are at a different location in the world. Moreover, vision can be used to orient objects for more efficient haptic exploration. The two senses of vision and touch may cooperate to allow us to move objects in space so that we can see them more effectively.
Of course, there are instances in which vision and touch yield contradictory information about the world, rather than redundant information. We may look at a snake and it appears slimy and wet, but it feels cool, very smooth, and dry. Visible textures are not invariably tangible, particularly when they are induced by changes in coloration that do not include alterations in surface configuration. For example, one cannot feel the print on a page of this volume, but these changes in brightness and contrast are certainly visible. While the senses may yield conflicting input about objects in the world, we learn when to rely on vision or touch, and when to attempt to compensate for these apparent perceptual errors. If two senses yield contradictory information, they cannot both be correct. Fortunately, it is more often the case that the senses provide redundant information that is accurate and reliable, leading to veridical percepts (see Ernst & Banks, 2002; Gibson, 1966; Millar, 1994).
There has been a recent trend toward the view that perception is typically multimodal, and this movement has roots in psychology and in neuroscience. From the psychological perspective, there has been an increasing interest in intermodal interactions from Spence and his colleagues, and many others (e.g., Spence, Kingstone, Shore, & Gazzaniga, 2001). For example, Reales and Ballesteros (1999) studied the architecture of memory representations underlying implicit and explicit recollection of previous experiences with visual and haptic familiar 3-D objects. They found that cross-modal priming as a measure of implicit recovery of object information (vision to touch and touch to vision) was equal to within-modal priming (vision to vision and touch to touch). The interpretation was that the same or very similar structural descriptions mediate perceptual priming in both modalities (see also Easton, Greene, & Srinivas, 1997). This issue is discussed in more detail later in this chapter.
There has been a renewed interest in intermodal relations from a neuroscience perspective (e.g., James et al., 2002; James, James, Humphrey, & Goodale, chap. 7, this volume; Millar, chap. 2, this volume; Millar & Al-Attar, 2002; Pascual-Leone, Theoret, Merabet, Kauffmann, & Schlaug, chap. 9, this volume; Pascual-Leone & Hamilton, 2001; Röder & Rösler, 1998; Sadato et al., 1998; Sathian, 2000; Sathian & Prather, chap. 8, this volume; Stein & Meredith, 1993). An examination of many of these contributions shows this increased emphasis on the relationship between the senses, and a very interesting blurring of the lines between psychology and neuroscience. This issue is taken up again very shortly in this introduction (also see Milner, 1998).
IMAGERY AND VISUAL EXPERIENCE IN TOUCH
An important motive for research on blind individuals derives from interest in the roles of visual imagery and visual experience in the development of spatial awareness, pattern perception, and memory (e.g., Ballesteros & Reales, chap. 5, this volume). We take the first two issues in turn, although they are intimately related. Note, also, that some persons in our society appear to believe that vision is the only adequate sense for spatial perception (see Millar & Al-Attar, 2002; Millar, this volume). There is little doubt that vision is an excellent spatial sense. However, there also can be no doubt that spatial perception can be superb in the absence of vision, as with some persons who are congenitally blind or early blind.
A lack of visual experience has implications for the nature of the imagery that one possesses. Individuals who are born without sight are presumably incapable of using visual imagery in understanding space. While their imagery could be spatial, it must derive from different sensory experiences than imagery that is specifically visual in nature. Visual imagery is known to aid memory (e.g., Paivio, 1965), and may be especially useful in coping with complex imagery tasks (Cornoldi & Vecchi, 2000). Of course, color may be relevant to object recognition. Late blind persons recall how things look, and report using visual imagery. Sighted persons certainly report experiencing visual images while feeling objects (Lederman, Klatzky, Chataway, & Summers, 1990), but it is not likely that this visualization process is needed for the perception of 2-D forms. Current evidence suggests that while it may be helpful, it is not necessary, since congenitally blind subjects perform very much like blindfolded sighted individuals in picture perception and matching tasks, and in a variety of tasks that assess spatial reasoning (see Heller, chap. 3, this volume; Kennedy, 1993; Millar, 1994). Of course, this does not mean that visual imagery is not useful; it surely is. However, the evidence suggests that other forms of representation may often substitute for a lack of vision (Millar, 1994). Moreover, the nature of processing might differ between the sighted and the congenitally blind. Thus, one should probably not attempt to draw inferences from the blind to the sighted, nor vice versa.
Some researchers have argued that haptics is limited in blind persons, since they are not likely to be able to think of objects from a particular vantage point (Hopkins, 2003), nor are they able to think in terms of perspective (Arditi, Holtzman, & Kosslyn, 1988). The representation of depth relations is a problem for haptic pictures, and has only recently received much interest (Holmes, Hughes, & Jansson, 1998; Jansson & Holmes, 2003). When asked to draw pictures of a slanted board to show the tilt, congenitally blind persons did not spontaneously draw rectangular shapes using foreshortening and perspective cues to depth (Heller, Calcaterra, Tyler, & Burson, 1996). They did not use a reduction in drawing height as the board was tilted, nor did they use converging lines in their depictions of the board. Their drawings were all the same height, despite inclination of the board in depth. Moreover, more than one blind person has indicated (to Morton Heller) that blind people spontaneously tend to imagine objects as a whole, and not from one side. However, blind people are able to adopt a vantage point, and have demonstrated this in a number of studies (Heller & Kennedy, 1990; Heller, Kennedy, & Joyner, 1995; Heller et al., 2002). Per...