Me, Myself and iPad
When Apple introduced the iPad in 2010, it sparked a revolution in technology for children with autism spectrum disorder, as developers focused on applications designed to enhance their learning and communication.
“Literally, therapists and children dragged computers around.”
In the past, educators have found it challenging to develop instructional approaches that address the particular needs of children with ASD, who typically struggle with processing verbal information, remembering a sequence of instructions and engaging in social interaction. Some have trouble learning to speak, and the window of time for developing spoken language may be limited.
Researcher Ann Kaiser is achieving remarkable results in helping minimally verbal children with autism develop speech by using interventions that include iPads and applications that mimic speech.
She is participating in a five-year study led by UCLA education and health sciences professor Connie Kasari, supported by the National Institutes of Health’s Autism Centers of Excellence. The study is a continuation of a recently completed study funded by Autism Speaks.
The Power of Technology
These interventions, Kaiser has found, have a unique advantage in teaching children with autism to communicate.
“When a person says ‘apple,’ it sounds a little different each time, particularly in a sentence, where sounds may blend together,” explained Kaiser, Susan W. Gray Professor of Education and Human Development. “But every time these devices say the word ‘apple,’ it sounds exactly the same, whether spoken alone or in phrases. That might be important for kids with autism, who seem to prefer sameness and who can discriminate among small changes in their environments.”
Kaiser’s research has benefited from the tech explosion in the past few years, in which touch-screen devices are ubiquitous and affordable, and a slew of autism-specific apps are available for download.
Not so for MaryAnn Romski and Rose Sevcik, early pioneers in using speech-generating devices as an intervention for children who could not speak. In the early ’90s, they conducted one of the first studies with speech-generating devices at Georgia State University’s Language Research Center in Atlanta.
“Those first studies used a computer on a luggage rack with a monitor,” Kaiser said. “I can’t remember what they had for touch-screen technology, but it wasn’t much. Literally, the therapists and children dragged computers around.”
“Social language is all about commenting.”
The DynaVox tablet, one of the devices used in Kaiser’s past studies, represented an advance over earlier devices but was not particularly portable. Julie Bryant, a Peabody doctoral student on Kaiser’s team, recalls working with a preschooler who transported his device on a cart. “He couldn’t hold it up himself,” Bryant said.
Since then, the DynaVox tablet has become more streamlined, but still weighs almost twice as much as the iPad. And at $6,000 or more, the DynaVox carries a much heftier price tag.
Using the iPad in conjunction with Proloquo2Go, one of the most popular speech-generating education applications available for tablets, Kaiser’s research includes two teaching approaches. In the direct-teaching approach, the child is prompted to use the iPad to communicate choices and to respond to an adult therapist during instruction sessions that teach the foundations of spoken language—imitation, labeling and understanding words.
Learning Through Play
In the naturalistic-teaching approach, the adult models the use of the iPad by commenting with the device during play and conversation, and provides a limited number of prompts to use the iPad to make choices or requests.
When an iPad is used in either approach, children can touch the symbols on the screen, hear the device repeat the words, and then say the words themselves. In each procedure, children are encouraged to use both spoken words and the iPad to communicate, and the therapist uses both spoken words and the speech-generating device to communicate throughout instruction.
“Our goal isn’t just to give kids more words but to really help them be communicators with words,” Kaiser said. “If you prompt children who are minimally verbal, they can repeat after you. But they don’t use language socially with eye contact or with a gesture or to comment. They don’t say, ‘Wow, this juice is terrific,’ or ‘I like your red shirt.’ As they learn to play with materials and engage in this sort of back-and-forth exchange with an adult, it becomes easier to teach social use of language, and it becomes much more likely that they will learn to comment. Social language is all about commenting.”
The Speech Window
At age 5 to 8, it is usually clear if children are preverbal and will learn to talk later, or are minimally verbal and unlikely to use spoken language beyond a few words. Data suggest that minimally verbal children in this age window can still learn to use spoken language, but after about age 8, it is increasingly unlikely they will begin using phrase speech or sentences.
The Peabody researchers found that the children whose language intervention included speech-generating devices used more social communication and more spoken words at the intervention’s end than children who did not have such a device.
There are more apps for autism than ever before. The Autism Speaks website lists more than 30 pages of apps and ratings in categories that include accessibility, behavioral intervention, communication, functional skills, language and organization. For example, there are apps to help children keep up with their schedules and for learning to share during playtime.
“Teaching communication occurs most easily when the child is engaged in an interesting activity with the adult,” research associate Courtney Wright said. “Turn-taking, for example, is the foundation of social communication, but children with significant communication impairments are likely to be low-rate turn-takers. Balancing turns in interactions introduces children to the structure of conversation and extends social communicative exchanges.”
Kaiser observed that all of the children participating in her study learned new spoken words, and several learned to produce short sentences as they moved through the training and became more fluent speakers. In the second phase of the study, parents were taught to use the speech-generating devices and naturalistic-communication strategies with their children as the therapists had.
“Having the parents hear their child for the first time is amazing.”
For some parents, it was the first time they’d been able to converse with their children. Study participant Krishonda Lanier is experiencing a new level of communication with her son Candin. “Now he says things to me, things he never did before,” she said, “like, ‘I want to go to school,’ or he tells me something that he likes, or he says, ‘no.’ He says ‘no’ really, really well.”
“With older children, parents haven’t given up, but I think they’re just so used to their children not being all that successful,” researcher Jennifer Nietfeld said. “Having the parents hear their child communicating for the first time is amazing.”
The iPad can be useful for children and teens with autism outside of the laboratory setting as well. Bryant observed young people with autism can use tablets to generate speech when communication becomes a struggle at school or on the playground.
“You can get Proloquo2Go for your iPhone or your iPod Touch,” Bryant said. “I worked with a student who used it. It was great because he was carrying around an iPod Touch just like all the other kids, but he used his as a communication device. It’s really normalizing for these kids.”