Sunday, February 17, 2013

Cued Speech and Cued Language: What's the Difference?

You see Cued Speech. You understand cued language. 

That is what people need to understand about why cued language has become a permanent fixture of Cued Speech. There is no going back on this issue; the term is here to stay. Certified instructors of Cued Speech will be explaining the difference between Cued Speech and cued language because that has become the standard. 

Yes, there was a tough period when people went back and forth on this issue, and it did drive some people apart because of their personal views. However, the issue has already been resolved professionally, and the NCSA has adopted cued language as part of its official definition.

Furthermore, we have the title "Cued Speech and Cued Language for Deaf and Hard of Hearing Children" for a research publication. 

In the eyes of American law (IDEA, ADA, Section 504), we use the term cued language services, which honestly sounds better than Cued Speech services in my opinion, because we are not really learning speech. We are learning language. 

Claire Klossner made a good point a year ago about the concept of cued language being a language of its own. She said cued language could not exist on its own because it is based on spoken language, but she also added that a language base has to have a way in and a way out. Before Cued Speech came to be, spoken language and written language were those two ways in and out of one language. Now we have three different ways of expressing and understanding a language base. 

In my opinion, the majority of the cueing community has accepted cued language as part of the identity of Cued Speech, so efforts to "take back" Cued Speech would be unfounded. We still use the term Cued Speech to identify the system and cued language to identify what people understand. 

One way to think about it is to imagine a hypothetical community of deaf people who have no vocal cords. They all communicate using Cued Speech (the way hearing people use speech). If they are not able to articulate sounds and they are not able to process sounds, are they processing spoken language or are they processing cued language? Both modes are built on the same language base. 

We need to consider what is going on inside the cuer's brain. The cuer is processing three different markers of cued language - hand shapes, hand placements, and lip shapes. Those three markers are the equivalents of spoken language's markers - manner, place, and voicing. However, those cuers cannot perceive place and voicing directly, because those are hidden or obscured behind the lips. 

The cuer is able to process cued language because it is an "intact" language with all the components a language needs in order to be understood and expressed. Cued language fits the linguistic definition of a language that is both expressive and receptive, a definition that includes non-verbal methods. 

Cued language and spoken language can be understood simultaneously by those who have access to auditory input as well as the hand cues, which could be one reason why some people struggle to separate the two. 

Another point is that cued language is not a "natural" language developed over time, because it was designed to represent the phonemes of spoken language - thereby creating an artificial representation of a language base. 

However, what's interesting though is that children can acquire cued language naturally in the same way they can acquire spoken language and sign language. So can we not use cued language to describe what children are acquiring in the absence of auditory input?

Cued language gave me access to written language and spoken language. It wasn't until I had my cochlear implant that I could access spoken language independently of cued language. 

This brings me to another point about language bases. It is possible for someone to be fluently expressive and receptive in cued and written English, but not in spoken English, due to the absence of auditory input. I can argue that because that person cannot hear at all, he would not be receptively fluent in spoken language. However, that person will still have access to English through cueing and writing. 

Cued Speech is the concept of visualizing spoken language with hand cues, just as writing is the concept of visualizing spoken language with print. What you understand is the language form they convey. However, we cannot use the term Cued Speech to describe the different cued languages expressed, such as Cued American English or Cued Mandarin Chinese. 

Yes, there are multiple cued languages. I cannot understand Cued Spanish, even though I can tell I am seeing a different cued language. Cued American English is not the same as Cued British English, so cuers from either side of the pond have to work harder, using context clues and other strategies to help them understand what they see. Each cued language has its own set of rules reflecting the linguistic nature of its spoken language counterpart. 

Language comes in different forms: computer language (which is artificial, by the way), written language, spoken language, tactile language, sign language, and cued language. All these languages are understood by those who are receptive in them. 

I will say this again to make my point clear. You see Cued Speech. You understand cued language(s).
