1
A Very Long Perspective
William C. Stokoe
Gallaudet University
One perspective on deafness has changed a great deal even in the short time since the first volume of Psychological Perspectives on Deafness was published. As long as speech and language were taken to be alternate names for the same phenomenon, psychologists and educators could see deafness only as disease or a severely handicapping physical condition. A new perspective on the origin and evolution of language (Armstrong, Stokoe & Wilcox, 1995), however, revealed that sign languages have all the essential features of language (syntax, semanticity, and creativity) and that only visible gestures can connect a sign naturally to a great many things people talk about. An arbitrary social convention is necessary to link vocal sounds to meanings, and unless it was supernaturally established, such a convention could only have arisen after gestures and their meanings provided a set of paired forms and meanings for vocalizations (which may have accompanied the pairs) to represent.
This is a revolutionary change in perspective: To be effective, teachers must stop pretending they can make deaf children into hearing children and instead learn to see how ideas relate, how gesture directly expresses basic oppositions like up-down, in-out, close-far, and dozens of others on which cognition, the power of thought, depends. With sign language, deaf children are not handicapped. Teachers also need the perspective such change provides: (a) the arbitrary vocal expressions of spoken languages make understanding difficult; (b) hearing children have to acquire the word-meaning pairs of their caretakers; and (c) deaf children have to be recognized as very effective at using their eyes, upper bodies, and brains to grow language again from its roots as they pair visible signs with meanings.
In the first volume of Psychological Perspectives on Deafness, Lillo-Martin (1993) presented the following account of how language acquisition is accomplished:
A child is exposed to his or her native language daily, in the home, from birth on. As early as 6 months, the child begins babbling, using the sounds (or sign pieces) of the language to form meaningless syllables. … After about a year of input and output, the child begins using words (whether spoken or signed) in a systematic, meaningful way. Later, the child combines words into phrases and short sentences, beginning at around 18 months. … Although these early utterances are short and often devoid of grammatical morphemes, they display a consistent word order that represents various grammatical relations.
By around the age of 3 years, the child uses sentences of many types to describe a vast array of experiences and feelings. (p. 312)
An entirely different perspective on language acquisition can be found in the work of Volterra (Volterra & Erting, 1990; Volterra & Iverson, 1995). Although it is true that an infant is exposed to language from birth, the hearing child does not begin babbling (nor the deaf child with deaf parents begin finger babbling) until about 6 months of age. This alone suggests that exposure to language (and/or an internal set of universal grammar rules) is exceedingly slow at making enough of an impression to elicit attempts at imitation. Volterra and Iverson reported that from a very early age all children, hearing and deaf alike, communicate with gestures. Even at 16 to 20 months, the many children they observed (in homes where different languages were in use) were still engaging in meaningful communication in the gestural-visual channel. This communication begins more than a year before the child begins to combine words (or sign language signs) into phrases and short sentences. Yet, the physiological ability to make the sounds of speech and the movements of sign has been apparent since the vocal or manual babbling began.
If language acquisition begins with passive exposure, if there is input from birth onward, but output begins to emerge only at 6 months or later and has reached virtual completion at about age 3,1 it should be apparent that the language experience of a child in the first few months is crucial. This, of course, was the conclusion of Hart and Risley (1995), whose book Meaningful Differences in the Everyday Experience of Young American Children presented striking disparity in development measured at age 3 and at age 10 as the effect of just one variable: the amount and kind of language experiences the children had between birth and 3 years of age. The children scoring in the top third, compared with those in the bottom third, were exposed to several times as much language generally, and to language directed specifically at them in utterances of the kind that encourage and build a child's self-confidence and do not turn off the inquiring mind by commanding silence or discouraging questions.
When this new light on language acquisition is applied to the circumstance of childhood deafness, there can be no mistaking the implication. All children, hearing and deaf alike, communicate gesturally before they communicate linguistically. It is therefore imperative that they be communicated with in the gestures they naturally use and have the sensory equipment to perceive unimpeded. There is not the slightest advantage to be gained (quite the reverse) in withholding the perfectly natural use of gesture from deaf infants and children. Hart and Risley supplied copious unequivocal evidence that it is imperative to communicate with young deaf children gesturally, insofar as possible in a natural sign language.
OUR SPECIES ALSO HAD TO ACQUIRE LANGUAGE
This brief look at the acquisition of human communicative competence shows that it begins at birth, attains expressive power shortly thereafter, and is employed effectively in gestural form many months before recognizable linguistic forms emerge. Another perspective, broader in scope and deeper in time, is called for: a view of acquisition by the human genus, first of effective gestural interaction, and from that, genuine language in gestural form.
Hard evidence is missing to show how gestural interaction began and became language. Nevertheless, human physiology and epistemology (the nature of cognition and knowing) make it impossible to disprove the hypothesis that language began with gesture and eventually changed from mainly addressing the eyes to mainly addressing the ears. Examination of circumstantial evidence well grounded in physiology began essentially with Kimura (1993), who pointed out that the brain centers controlling the sequential and parallel activities of the speech tract are the very centers, or lie adjacent to the centers, that control similar timing of upper limb movement. Although spoken language is addressed to hearing and signed language is addressed to vision, these activities are not separated in the human body and brain. In the same work, Kimura reviewed much of the literature on aphasia and apraxia and showed that brain damage in a specific location is often said to cause certain kinds of language impairment, but the claim is made without any assessment of other motor activities. In a nutshell, Kimura's findings implied that language is movement. Whether the movement is mainly within the vocal tract or out where it can be seen, language is disrupted by any damage to the brain that impairs the ability to make complex simultaneous and sequential movements. Language is not some wholly unique brain activity but the normal working of an evolved brain and body (Edelman, 1989).
The evidence, or argument, from epistemology is more fully explained later; it shows that signs of different kinds are linked, in the mind or behavior of some interpretant, in different ways to what they signify.2 This excursion into physiology, epistemology, and semiotics (the branch of philosophy that deals with signs and signification) may seem to take a reader some distance away from deafness, but distance between observer and observed is a requirement of perspective. Besides, if it were once again recognized that humans have always used both vision and hearing to get and exchange information, and once it is seen that language is most likely to have evolved from gestures, the public and professional view of deafness would change, improving the education and other treatment of those who cannot hear.
Although more than 99% of earth's people use spoken languages, there are also primary sign languages that deaf people use, and alternative sign languages used by tribal people whose contact with modern industrial cultures was relatively recent (Farnell, 1995; Kendon, 1988). Gesture, or gesticulation, is universally used with speech or alone in human interaction (McNeill, 1992). Moreover, gesture is useful and often necessary to make spoken language utterances understood. For example, the following order overheard at a delicatessen counter was immediately and correctly filled: "A half-pound of this and a quarter-pound of that, please." And yet, although gesturing, with or without speaking, is part of every known culture, it has been standard practice ever since writing was invented to divide communication into what is redundantly called verbal language,3 and its assumed categorical opposite, nonverbal communication. A better perspective, aided by recent research, makes it possible to see both language and deafness in a different light.
A difference in perspective can often change the viewer's understanding substantially. For example, without previous knowledge, no one looking at caterpillars and butterflies would know that butterflies are the transformed imagoes of caterpillars. Before their metamorphosis they were crawling caterpillars, with nothing to suggest that they would become delicate airborne creatures. It is becoming more and more apparent that language before its metamorphosis was gesturally signed. What makes this likely is that only visible signs can relate naturally to what they denote.
A sign, as the semiotician Peirce defined it, is anything that can be perceived, by some interpretant, to denote something besides itself. Moreover, all of life works by virtue of signs (Sebeok, 1994). Pheromones are signs to insects that have the chemical senses to detect them, but the pheromone-triggered response is built into the organism. Organisms of greater neural complexity interpret various kinds of signs with well-differentiated sensory systems, and their brains provide them with a repertory of responses that is lacking in simpler life forms. Among mammals, especially in the primate order, vision is highly developed, as it must have been for living in the treetops. Vision is also the main channel for social information transfer, which is more sophisticated in apes than in monkeys and is likely to have been even more so in hominids (King, 1994).
What this glimpse of semiotics and primatology reveals is that, for intelligent interpretants like anthropoids, seeing and interpreting one another's movements as signs is what makes their social existence possible. This fact is not disputed by those who think language is only vocal-auditory activity, but they consider it irrelevant to their theories of language and generally dismiss it as nonverbal communication.
This dualism is as untenable as the Cartesian myth that mind and body are separate. If mind and body were the complete strangers Descartes took them to be, psychology could have nothing to do with the mind, because as a science it must be based in the natural world. If verbal and nonverbal communication were essentially separate, verbal and nonverbal messages would have to be managed by totally different and separate parts of, or modules in, the brain. The best neuroscientists cannot find any such separation and do not look for it because that is not the way brain cells and circuits work (Edelman, 1988, 1989, 1991). Moreover, what is essential to language can be shown to derive from visible signs, as is demonstrated next.
The converse proposition, that language originated with vocal sounds, is a hypothesis easily disproved. Sounds by themselves are signs with very limited signifying power. Thunder is an exception; it signifies to sophisticated interpretants that lightning has flashed somewhere. (Long ago it was interpreted as a sign that a god had spoken.) Vocal sounds, if they signify directly, are a specific kind of sign: symptoms. We know that an infant's cries may be symptoms of hunger or some other discomfort. Yet, it is only by observing much more than the sound an infant makes that we can determine what caused it, what it means. Although we teach young children words like meow, moo, and bow-wow while showing them pictures of animals, these sounds suggest but do not precisely reproduce the sounds domestic animals make. As signs to denote meanings on a more sophisticated level than nursery games, vocal sounds are almost completely useless. Even involuntary sounds caused by sneezing, yawning, or flatulence are not allowed in most societies to stay in their natural state; they must either be suppressed or expressed in socially acceptable form.
How then could vocally produced sounds become bearers of the meanings we find in language? Ferdinand de Saussure (1967), the Swiss linguist, had a half-grasped answer to that question. He declared that words, as linguistic signs, had to be arbitrary, could not be motivated; that is, could not be naturally related to what they mean. Then who are the arbiters? Who makes the arbitrary decisions linking the words of a language to their meanings? Why, it is we ourselves and all the users of a particular language who do that. We all have to use words to mean what others mean by them, or else we become like the character Humpty Dumpty that Alice met in Looking-Glass Land. There is a loophole here, however. Creative, innovative people, poets, teenagers, and others are continually changing word-to-meaning linkages in small and subtle ways. Nevertheless, both what makes sense and the acceptable way to phrase it are determined less by rigorous abstract rules than by negotiated social convention.
Saussure seemed not to have realized that the meanings, the relationships, the concepts into which we sort our world come to us only partly from our language. Much of our basic understanding of the world and ourselves comes directly from our senses and from our exploration and manipulation of things, not words. When we understand, however, that Saussure was talking only about spoken languages, not language as a whole, as he thought, his observation is quite correct: In order for spoken words to have meanings there must be a convention, a whole society or community agreeing to understand that certain words, patterns of sounds, denote certain meanings.
This is necessary because spoken words, like the sounds they are made of, have no natural connection to anything beyond what made them. Saussure failed to observe that signs of another kind (i.e., gestures) are not so completely dependent on convention, and that many languages are systems using just these kinds of signs. He, like others of his time and ours, did not think of language as anything except what people speak and hear. Visible gestures, however, show what they mean naturally, and they continue to do so until cult...