Applying Technology to Visually Support Language and Communication in Individuals with Autism Spectrum Disorders

Howard C. Shane, Emily H. Laubscher, Ralf W. Schlosser, Suzanne Flynn, James F. Sorce, Jennifer Abramson

H. C. Shane, E. H. Laubscher, R. W. Schlosser, S. Flynn, J. F. Sorce, J. Abramson: Children's Hospital Boston, Center of Communication Enhancement, 9 Hope Avenue, Waltham, MA 02453, USA (e-mail: howard.shane@childrens.harvard.edu)
R. W. Schlosser: Northeastern University, Boston, MA, USA
S. Flynn: Massachusetts Institute of Technology, Boston, MA, USA

Published online: 21 June 2011
Springer Science+Business Media, LLC 2011
J Autism Dev Disord (2012) 42:1228–1235. DOI 10.1007/s10803-011-1304-z
Abstract
The burgeoning role of technology in society
has provided opportunities for the development of new
means of communication for individuals with Autism
Spectrum Disorders (ASD). This paper offers an organizational framework for describing traditional and emerging
augmentative and alternative communication (AAC) technology, and highlights how tools within this framework can
support a visual approach to everyday communication and
improve language instruction. The growing adoption of
handheld media devices along with applications acquired
via a consumer-oriented delivery model suggests a potential paradigm shift in AAC for people with ASD.
Keywords: Applying technology · Computer-based instruction
Introduction
Augmentative and alternative communication (AAC)
strategies began to be explored for individuals with autism
spectrum disorders (ASD) in the 1980s and early 1990s.
Since that time, developments in technology have played a
crucial role in improving the quality, affordability and
accessibility of AAC strategies for members of this population. Currently, we are in the midst of a potential paradigm shift in AAC for people with ASD, in large part due
to the growing adoption of handheld media devices along
with applications acquired via a consumer-oriented delivery model that are not only affordable but also transportable, socially acceptable and ubiquitous. These new tools
support implementation of a visual approach to everyday
communication and language instruction in ways that were
impossible prior to the digital technology revolution by
enabling both access to visual content and creation of better
instructional materials.
The Evolution of Technology Used in AAC Applications
for Individuals with ASD
AAC originally developed from a challenge to provide
expressive communication tools for persons with little or
no functional speech and/or insufficient manual dexterity to
write or manipulate a keyboard. Such difficulties are generally associated with neuromotor disorders such as cerebral palsy (Zangari et al. 1994). In the 1980s and early
1990s, the use of AAC for persons with ASD began to be
explored. Since that time, a wide variety of strategies and
tools have been developed, and AAC users within this
population now have a plethora of low-tech tools, special
and general purpose hardware, and special and general
purpose software at their disposal that may be combined as
needed to help meet their communication requirements.
The first strategies used with the ASD population were
no-tech in nature, focusing primarily on the use of manual
signs in the context of total communication training
(e.g., Carr et al. 1978). Following the early adoption of
manual signs for individuals with ASD, forms of low-tech
special-purpose AAC tools emerged. These low-tech special-purpose AAC tools do not involve an integrated circuit
(Quist and Lloyd 1997) and have been developed exclusively as AAC systems to enhance communication. They
include non-electronic communication boards and graphic
symbols (Mirenda and Iacono 1988; Wendt 2009). The
technology involved in the use of these tools can be either
pointing-based (i.e., individual points to a symbol in order
to communicate) or exchange-based (i.e., individual hands
over a graphic symbol in exchange for an object or activity
delivered by the communication partner) (Sigafoos et al.
2007). The popularity of an exchange-based approach, the
so-called Picture Exchange Communication System
(PECS) (Bondy and Frost 1998; Frost and Bondy 1994,
2002), led to further increased acceptance of low-tech AAC
approaches with this population.
Building upon this increased acceptance of AAC approaches for individuals with ASD, high-tech special-purpose AAC hardware and software began to be adopted. These tools were used primarily for
expressive communication purposes (Mirenda and Iacono
2009; Zangari et al. 1994). They use an integrated circuit
(Quist and Lloyd 1997) and were developed for the special
needs population. Among these were portable speech
generating devices (SGDs) that produce synthetic and/or
digitized speech; the use of these tools increased communication options for individuals with ASD in new and
unprecedented ways (Schlosser and Blischak 2001; Schlosser and Sigafoos 2008; Schlosser et al. 2009).
SGDs can be grouped into two classes: dynamic-display
devices (e.g., Dynavox V) and static-display devices (e.g.,
GoTalk). Although static-display devices were initially the
norm, both types are currently available. These devices
typically use specialized software (e.g., Boardmaker with
Speaking Dynamically Pro, Viking), have considerable
expansion capability, can be text and/or symbol-based, and
often can incorporate multimedia content.
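To make the distinction between static- and dynamic-display devices concrete, the following Python sketch may help; it is hypothetical, and the class and field names are our own rather than those of any particular product. It models a static display as a single fixed page of symbol cells that each speak a stored message, and a dynamic display as linked pages in which a cell may either speak or navigate to another page.

from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Cell:
    """One button on a communication display."""
    label: str                       # symbol or text shown to the user
    message: Optional[str] = None    # utterance spoken when selected
    goto_page: Optional[str] = None  # dynamic displays only: page to open

@dataclass
class Page:
    name: str
    cells: Dict[str, Cell] = field(default_factory=dict)

class SpeechGeneratingDevice:
    """Minimal model of an SGD: a static device has one fixed page,
    a dynamic-display device navigates among many linked pages."""

    def __init__(self, pages: Dict[str, Page], start: str, dynamic: bool):
        self.pages = pages
        self.current = start
        self.dynamic = dynamic

    def select(self, cell_key: str) -> Optional[str]:
        cell = self.pages[self.current].cells[cell_key]
        if self.dynamic and cell.goto_page:
            self.current = cell.goto_page   # change what is on screen
            return None
        return cell.message                 # hand the message to speech output

# Example: a tiny dynamic display with a home page and a "food" page.
home = Page("home", {"food": Cell("food", goto_page="food"),
                     "help": Cell("help", message="I need help")})
food = Page("food", {"juice": Cell("juice", message="I want juice"),
                     "back": Cell("back", goto_page="home")})
sgd = SpeechGeneratingDevice({"home": home, "food": food}, "home", dynamic=True)
sgd.select("food")                 # navigates to the food page
print(sgd.select("juice"))         # -> "I want juice"

In this sketch the only behavioral difference is whether selecting a cell can change what is on screen, which is essentially what separates the two classes of SGD described above.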
Furthermore, as the use of AAC for persons with ASD became more widespread, it expanded beyond its traditional use for expressive communication and
was used to augment comprehension (Drager et al. 2006)
and to provide organization of time and sequences—a
framework outlined in Shane and Weiss-Kapp (2008).
Despite these advances, the AAC systems described
above are often expensive, cumbersome, and time-consuming to program and personalize. Often, they serve to
stigmatize the user. These barriers may prevent many
individuals from using their systems on a regular basis,
leading to impoverished input and lost opportunities for
language learning and natural interaction. More recently,
however, the relatively expensive, dedicated hardware
platforms and software programs described above have
been subject to competition from less costly, consumer-level tools developed for the general market. For some individuals, communication needs can be met by using consumer-level hardware (e.g., personal laptop computer, tablet computer) to run special-purpose software designed to support communication [1]. For others, special-purpose software is not necessary, and communication needs can be met through a combination of consumer-level hardware and general-purpose software [2]. Low-cost, widely available
peripheral devices (e.g., cameras, camcorders, DVD players) offer even more opportunities for AAC users to create,
edit, store and present content in ways that do not rely on
expensive, dedicated tools. Thus, relatively affordable,
consumer-level hardware, software and peripherals now
afford opportunities for greater flexibility in selection and
design of AAC systems for users with ASD.
With the current widespread availability of general-purpose portable hardware (e.g., Apple iPad™, Google Android™) running specialized AAC "apps", new
opportunities now exist for AAC users with ASD. In fact,
the adoption of the new portable hardware and software
may suggest a significant paradigm shift in AAC: what is
now available to consumers is a device that is small, low
cost, easy to obtain and transport, readily available, and
socially acceptable. Many of the apps designed for these
devices (e.g., Proloquo2go, MyTalk) may serve as full
AAC systems, similar in many ways to the dedicated SGDs
described above, while others (e.g., Steps, First-Then,
MyChoiceBoard, PicCalendar) may also provide support
for organization and enhance the efficiency of simple
functions such as choice-making. Apps are often easily
obtainable, affordable, customizable, and user-friendly. For
further information regarding the technology outlined in
this section, the reader is referred to the Appendix.
Importantly, some of these new technologies are
beginning to be evaluated as tools to be used in intervention studies (e.g., Van der Meer et al., in press). Despite
these innovative opportunities, however, caution must
continue to be exercised to ensure that the dazzle of this
impressive technology does not replace a methodical,
clinical process that matches a person with communication
assistance needs with the optimal communication technology available—a process that has come to be known as "clinical feature matching" (e.g., Blischak and Ho 2000;
Shane and Bashir 1980). Furthermore, even technology that
has been carefully selected for an individual based on this
feature-matching process does not itself enhance communication unless combined with an appropriate instructional
approach. The remainder of this paper will describe one
such approach developed at Children’s Hospital Boston for
learners with ASD, and will outline ways in which current
technology has helped to make implementation of this
approach a reality.
Application of Technology Within a Visual
Communication Approach for Learners with ASD
One clinical approach that makes significant use of AAC is
the Visual Immersion Program (VIP) (Shane et al. 2009a).
The VIP has developed in response to research findings and
clinical observations suggesting that learners with ASD
have relative natural strengths in visual processing (Althaus et al. 1996) and tend to be highly interested in visual
content delivered via electronic screen (Shane and Albert
2008). The VIP is built upon several core principles. First,
learners should be immersed in a visually symbol-rich
environment across home, school, and community. Spoken
language acquisition relies on constant exposure to rich
linguistic input by native speakers along with frequent
opportunities for use; individuals with ASD should similarly be provided with these same opportunities for language development, although "spoken" language is supplemented with a rich visual linguistic environment.
Second, visuals should support both comprehension and
expression. Third, there should be a focus on multiple
communicative functions beyond protesting and requesting
(e.g., directing, questioning, commenting). Adherence to
these principles requires tools that allow for quick, convenient creation and presentation of visual content on an
almost constant basis in order to facilitate effortless
everyday communicative exchanges. It also requires tools
that allow mentors and learners to use visual content in
unique ways that enhance language learning. The following
discussion outlines ways in which the VIP takes advantage
of current technology to create and present visual materials
that accomplish both of these goals, thereby making the
principles of the VIP a reality in ways that were impossible
prior to the digital technology revolution.
Using Technology to Facilitate Effortless Everyday
Communicative Exchanges
Successful implementation of the VIP begins with the
recognition that not all learners with ASD are yet able to
communicate using spoken language. Their linguistic deficits, both in terms of form and content, often result in
difficulties with comprehension and expression of
directives, comments, questions, and other communicative
functions, and make it difficult for learners to participate in
daily exchanges. For many of these individuals, visual
scenes can be used to represent complex ideas in a way that
bypasses the need for aural language processing. Because
these scenes offer a means of delivering symbolic content
that is simultaneously interesting and meaningful, scene-based visual supports hold great potential for improving the
effectiveness of everyday interactions when used frequently (Light and Drager 2007). The VIP uses visual
scene cues in two different forms. Dynamic scene cues are
full-motion video clips that depict an entire scene, event or
concept that unfolds over time. Each scene is a perceptually integrated whole that illustrates not only objects and
spatial relationships, but also change over time. Static scene
cues are photographs that capture a single prototypical
moment in the scene, event, or depiction of a concept (refer
to Fig. 1).
Often, static scene cues are introduced after the learner
is familiar with the corresponding dynamic scene cue.
Together, these tools can offer an effective means of
communication for learners with ASD who are not yet able
to separate language elements from the scene as a whole.
Prior to the introduction of mobile smartphone technology, regular use of scene cues to enhance the effectiveness of everyday communicative exchanges was
extremely difficult. Use of dynamic scene cues was largely
limited to the tabletop environment where they were presented on high-tech, general-purpose hardware platforms
(e.g., personal desktop, laptop, tablet computers) that were
difficult to transport. Static scene cues offered greater
potential for use in the natural environment, as they were
not bound to hardware platforms that could display animation, but they required a time-consuming process of
printing, laminating, and organizing that frequently proved
to be difficult for families and clinicians alike, and therefore impractical as a reliable means of supporting communication during everyday exchanges.

Fig. 1 Static scene cue

Currently,
however, the availability of consumer-level, handheld
alternatives (e.g., iPhone™, Blackberry Storm™, Android™, iPod touch™, iPad™) has made regular use of
scene cues more of a reality. These small, portable devices
can store, organize and present large numbers of images,
including both static images and videos. This reduces the
need to limit communication opportunities to the tabletop
environment or to keep track of cumbersome low-tech
materials, and increases the ability of mentors and learners
alike to quickly access scene cues "just in time" to take
full advantage of each communication opportunity. In
many cases, the devices themselves can also capture images for on-the-spot creation of materials. These tools have
allowed clinicians, parents, teachers, and other mentors to
use both dynamic and static scene cues with greater frequency and across multiple contexts, bringing creation of a
visually immersive environment closer to a reality for
individuals with ASD.
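As a rough illustration of the "just in time" access described above, the minimal Python sketch below uses assumed file names and context tags rather than the behavior of any specific app: scene cues (photographs or video clips) are tagged with everyday contexts so a mentor can pull up matching cues on a handheld device at the moment a communication opportunity arises.

from dataclasses import dataclass
from typing import List

@dataclass
class SceneCue:
    """A stored visual support: a photo (static) or video clip (dynamic)."""
    path: str            # media file on the handheld device
    dynamic: bool        # True for a video clip, False for a photograph
    contexts: List[str]  # everyday situations the cue supports

LIBRARY = [
    SceneCue("videos/brushing_teeth.mp4", True,  ["bathroom", "bedtime"]),
    SceneCue("photos/brushing_teeth.jpg", False, ["bathroom", "bedtime"]),
    SceneCue("videos/getting_on_bus.mp4", True,  ["school", "morning"]),
]

def just_in_time(context: str, prefer_static: bool = False) -> List[SceneCue]:
    """Return cues for the current situation, optionally favoring static
    photographs once the learner knows the corresponding video."""
    matches = [cue for cue in LIBRARY if context in cue.contexts]
    return sorted(matches, key=lambda c: c.dynamic == prefer_static)

# At bedtime, show the dynamic cue first; later, set prefer_static=True.
for cue in just_in_time("bedtime"):
    print(cue.path)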
Using Technology to Improve Language Instruction
Communicating via dynamic and static scene cues is a
momentous achievement, but a transition from scene cues
to the elements of language is the ultimate goal. Within a
visual communication approach, language elements are
represented by visual element cues (see Fig. 2). These are
graphic symbols or photographs that represent each of the
individual linguistic components that comprise a sentence
(e.g., noun, verb, preposition, adjective, and object) (Shane
et al. 2009a). These element cues may be displayed in low-tech form or on most hardware platforms using either special-purpose or general-purpose software.

Fig. 2 Language element cue
Within the VIP, element cues form the building blocks
of generative language. Once learners understand the
symbolic meaning of individual symbols and the rules
governing their syntactic configurations, they have the
capacity to both comprehend and generate novel utterances
(Light et al. 2003). Clinical experience indicates that
children with mild-to-severe autism often have little difficulty learning that these graphic elements can correspond
to the grammatical notions of "subjects" (Dave) and "direct objects" (water), but these same learners often
seem to have great difficulty understanding the graphic
symbols used for the more abstract depiction of verbs (e.g.,
pour), prepositions (e.g., in front of), and adjectives (e.g.,
cold). In addition, combining element cues in a way that
demands an understanding of abstract configurational rules
for forming grammatical sentences is often difficult for
those with ASD. Thus, instruction in use of element cues
within the VIP is a twofold process involving first the
learning of individual language concepts and their graphic
representations, and second the learning of rules governing
element cue combination (i.e., syntax). As described in the
following sections, the VIP takes advantage of technological innovations whenever possible to maximize the
effectiveness of instruction in both of these domains.
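To make this twofold process concrete, the sketch below is a hypothetical Python illustration; the vocabulary beyond the examples named above and the simple agent-verb-object patterns are assumptions, not the VIP's actual rule set. It pairs each element cue with its linguistic category and checks a left-to-right sequence of cues against allowed syntactic patterns.

# Each element cue pairs a graphic symbol with the category it represents.
ELEMENT_CUES = {
    "Dave":  "agent",
    "Mom":   "agent",       # illustrative addition
    "pour":  "verb",
    "drink": "verb",        # illustrative addition
    "cold":  "adjective",
    "water": "object",
    "juice": "object",      # illustrative addition
}

# Illustrative configurational rules: agent, verb, optional adjective, object.
PATTERNS = [
    ("agent", "verb", "object"),
    ("agent", "verb", "adjective", "object"),
]

def is_grammatical(sequence):
    """Return True if the left-to-right sequence of element cues matches
    one of the allowed syntactic patterns."""
    categories = tuple(ELEMENT_CUES.get(symbol) for symbol in sequence)
    return categories in PATTERNS

print(is_grammatical(["Dave", "pour", "water"]))          # True
print(is_grammatical(["Dave", "pour", "cold", "water"]))  # True
print(is_grammatical(["water", "Dave", "pour"]))          # False (wrong order)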
Using Technology to Improve Graphic Representations
of Difficult Concepts
Language elements are represented in the VIP, as they are
in most aided systems, by graphic symbols; accordingly,
effective use of the system relies on knowledge of the
individual concept represented and recognition of the graphic that represents it. Some researchers have argued that
nouns are easiest to understand via graphics whereas verbs
and prepositions are more difficult to understand (Schlosser
and Sigafoos 2002). Yet, if AAC systems are to support
multiple communicative functions, as the VIP strives to do,
learners must be able to comprehend more than noun
symbols. Actions involve movement and this change may
be particularly difficult to represent through static symbols.
Although some limited sets of animated symbols have been
available for many years (Zangari et al. 1994), they have
received limited usage, and until recently no investigation of their efficacy had been conducted. In a recent study,
Mineo et al. (2008) reported that non-disabled preschoolers
identified symbols for actions better when they were animated rather than static. Recently, Schlosser et al. (2010)
completed a study with nondisabled preschoolers across
three age groups to determine the effects of animation on
guessing, naming, and identifying graphic symbols for
verbs and prepositions. Results indicated that (a) animated
symbols were more guessable than their static counterparts;
(b) animated verbs were named more accurately than static
verbs but there was no difference for the prepositions; and
(c) older children were more effective than younger children in terms of guessing, naming, and identifying symbols. Whether these results will replicate with children with
ASD is currently being investigated (Schlosser et al.,
2010).
Based on clinical observation, some learners with ASD can only comprehend the meaning of a symbol when it is represented in animated form. Accordingly, a set of animated symbols for over 110 verbs and prepositions (ALP Animated Graphics) was developed by the Center for Communication Enhancement of Children's Hospital Boston and incorporated into the VIP-TLC software (see below) [3]. Obviously, only a high-tech system can portray an animated or dynamic representation. Using a high-tech system in this way runs counter to how instruction is traditionally delivered: typically, in clinical practice, low-tech methods are introduced first and high-tech devices and approaches are utilized later. Because some individuals with ASD require dynamic images in order to comprehend meaning, however, beginning intervention with static images may not be effective. In addition to use of animated graphics during tabletop instruction for concepts, the widespread availability of consumer-level mobile technology now enables use of animated graphics in the natural environment. Additionally, once the meaning of a dynamic symbol is
recognized, it can be faded and replaced by its static image
(Shane et al. 2009b). Mobile devices may also be used to
store and organize static images.
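A minimal sketch of this fading strategy follows, under assumed names and an arbitrary recognition criterion rather than the actual VIP materials: each concept stores both an animated and a static graphic, and the static form is presented once the learner has repeatedly identified the animated one.

from dataclasses import dataclass

@dataclass
class ConceptSymbol:
    """A verb or preposition with both animated and static representations."""
    concept: str
    animated_file: str
    static_file: str
    correct_identifications: int = 0

    def record_trial(self, correct: bool) -> None:
        if correct:
            self.correct_identifications += 1
        else:
            self.correct_identifications = 0   # require consecutive successes

    def display_file(self, criterion: int = 5) -> str:
        """Show the animated form until the learner reliably recognizes it,
        then fade to the static graphic."""
        faded = self.correct_identifications >= criterion
        return self.static_file if faded else self.animated_file

pour = ConceptSymbol("pour", "animations/pour.gif", "graphics/pour.png")
for _ in range(5):
    pour.record_trial(correct=True)
print(pour.display_file())   # -> "graphics/pour.png" (static form after fading)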
Using Technology to Improve Concept Knowledge
Within the Context of Various Syntactic Structures
A central tenet of the VIP, as in language development for
typically developing children, is that knowledge of difficult
language concepts is most effectively acquired when the
learner is provided with multiple, varied opportunities to
experience the concept within meaningful exchanges.
Children's Hospital Boston has developed a special-purpose software program that enables learners to experience difficult concepts such as verbs and prepositions
within various syntactic constructions. The program uses
visual scenes to make language more interesting and
accessible to the learner in order to facilitate not only
development of concept knowledge, but acquisition of
syntactic knowledge as well. This Visual Immersion Program (VIP)—Teaching Language Concepts (TLC) software
presents learners with color-coded language elements
arranged in a left-to-right orientation (See Fig. 3). Learners
are encouraged to combine elements creatively to generate
sentences in order to demonstrate how sequential placement of specific language elements can result in the activation of a video that depicts a created sentence. With
repeated explorations, learners can often come to understand the meaning of the individual symbols and begin to
internalize the abstract rules for generating meaningful,
grammatically correct sentences.

Fig. 3 Visual Immersion Program (VIP)—Teaching Language Concepts (TLC) software
TLC is perhaps the most streamlined in a collection of
many software programs (e.g., Microsoft PowerPoint,
Boardmaker with Speaking Dynamically Pro, Viking) that
can be used to create personalized materials for language
instruction. This speaks to yet another benefit of consumer-level, high-end technology, which is the ability to easily
customize content and program software to improve the
language learning experience.
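The following sketch loosely illustrates the interaction described for the TLC software; it is not the actual implementation, and the color coding, vocabulary, and video file names are assumptions. A completed left-to-right sequence of element cues either maps to a video clip depicting the constructed sentence or fails to activate anything, mirroring the feedback a learner receives.

# Assumed color coding by grammatical role (illustrative only).
ROLE_COLORS = {"agent": "yellow", "verb": "green", "object": "blue"}

# A completed left-to-right sequence maps to a video depicting that sentence.
SENTENCE_VIDEOS = {
    ("Dave", "pour", "water"): "clips/dave_pours_water.mp4",
    ("Dave", "drink", "juice"): "clips/dave_drinks_juice.mp4",
}

def build_sentence(*cues: str) -> str:
    """Return the video to play for the constructed sentence, or report
    that no depiction exists for that combination."""
    video = SENTENCE_VIDEOS.get(tuple(cues))
    if video is None:
        return "No video depicts: " + " ".join(cues)
    return video

print(build_sentence("Dave", "pour", "water"))   # plays the matching clip
print(build_sentence("water", "pour", "Dave"))   # ungrammatical order: no clip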
Using Personalization to Enhance Language Instruction
Traditionally, the libraries of visual symbols available for
AAC purposes have consisted of generic sets and systems of
static graphic images (Fuller et al. 1992; Lloyd et al. 1997).
Based on clinical observations, many young children with
moderate to severe ASD often have difficulty understanding
the meaning of many of the graphics within these symbol
sets and systems. In fact, many individuals with ASD need
instruction on the fundamental symbolic nature of visual
images; i.e., that an image stands for something other than
itself (Kozleski 1991). The use of highly iconic images
(e.g., video clips, digital photographs) of familiar people
and objects can usually improve comprehension of symbols as well as help maintain attention and interest. Research has shown support for the hypothesis that symbols that look more like what they represent are more guessable and, if they are not guessable, more easily learned (for reviews see
Fuller and Lloyd 1990; Stephenson 2009).
A noteworthy trend in AAC is the ease with which non-technical individuals can now create personalized content
that can be applied to both AAC systems and in language
instruction software. An important outcome of such ready
access to low cost, easy-to-use consumer electronics,
mainstream software, and free access to large inventories of
visual content is the opportunity to introduce more motivating and meaningful content. For example, digital cameras are now commonplace in most households; they
provide a straightforward way to create visual content
representing people, places, and things. Photographs often
provide a more concrete and iconic means of representing
information related to organization and transitions for
learners who have difficulty interpreting more abstract
visuals (e.g., black and white line drawings, colored picture
symbols) (Kozleski 1991; Mirenda and Locke 1989). In
addition, general-purpose photo-editing programs (e.g.,
iPhoto, Picasa) can be used to store, organize, and modify personalized visual supports, making them easier to customize and maintain. Similarly, general-purpose video recording equipment (e.g., camcorders, smartphones) allows
the relatively effortless recording of videos that can in turn
be used as part of video modeling instruction to teach both
behavioral and language skills (see below). These videos
can be edited with software that is usually bundled with
home computers (e.g., Windows Movie Maker, Apple iMovie), and played on simple free video players (e.g., Windows Media Player, QuickTime Player). Finally, there now exist vast online inventories of videos, photographs, and graphics
(e.g., YouTube, Google Images) that offer free, quick and
easy access to a multitude of video content that can be
downloaded and used to represent common objects, activities, events, and language concepts.
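As a small, hypothetical example of this kind of personalization, the short script below gathers a family's own photos and video clips into a simple categorized library of visual supports; the folder layout and file names are assumptions, and no particular photo-editing product is implied.

from pathlib import Path
from collections import defaultdict

MEDIA_EXTENSIONS = {".jpg": "photo", ".png": "photo", ".mp4": "video", ".mov": "video"}

def build_symbol_library(root: str) -> dict:
    """Index personal media by category, taking the category from the
    sub-folder name (e.g., family_media/people/dad.jpg -> 'people')."""
    root_path = Path(root)
    if not root_path.is_dir():
        return {}
    library = defaultdict(list)
    for path in root_path.rglob("*"):
        kind = MEDIA_EXTENSIONS.get(path.suffix.lower())
        if kind:
            library[path.parent.name].append({"file": str(path), "kind": kind})
    return dict(library)

# Example: index everything under a (hypothetical) "family_media" folder.
if __name__ == "__main__":
    for category, items in build_symbol_library("family_media").items():
        print(category, [item["file"] for item in items])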
It is interesting to note that the creation of visual content, once considered low-tech (e.g., graphics printed on
paper), is now typically the product of a high-tech production process; individuals now start with a process that
includes either creating or downloading digital content,
editing it, then displaying it on a digital device. Only after
the last step—printing—does the high-tech process produce
a low-tech product—a paper graphic.
Thus, instruction for language form and content within the
VIP has been vastly improved through use of both consumer-level, general-purpose hardware and special-purpose software that, in combination, allow for creation and easy presentation of more effective visual supports. Although the
VIP emphasizes use of visual content to support knowledge
and improve instruction in the form and content of language,
as described above, attention is also given to language use
(e.g., social pragmatic skills), behavioral management, and
daily living skills. A growing body of literature suggests that
instruction within these domains may be advanced through
use of video modeling (Bellini and Akullian 2007; Shukla-Mehta et al. 2010); accordingly, the VIP attempts to maximize the effectiveness of this strategy by taking advantage of
current technology.
Using Technology to Enable Video Modeling
Video modeling (a method that traditionally falls under
what is known as observational learning; Bandura 1977) is
a form of learning that results from imitation of observed
behaviors. A growing body of literature reveals that persons with ASD not only effectively imitate a model’s
behaviors after observing them on a screen, but also generalize these behaviors across models and settings (Bellini
and Akullian 2007; Shukla-Mehta et al. 2010). Based on
this body of literature, there appear to be five instructional
domains for which video modeling is well suited. The first
of these domains is language concepts, which, as described
above, is of primary interest within the VIP. The others
include (2) social pragmatics, (3) activities of daily living,
(4) life skills, and (5) behavior management. Clearly, the
ease with which video content can now be created or
located (see personalization above) has stimulated the
expansion of this instructional technique into an effective,
mainstream learning strategy. While video modeling examples can be played on any number of general-purpose computer media players, the instructional requirements of logical segmenting, highlighting, and replay could be better supported. Accordingly, this process has
been streamlined in a special-purpose software application,
called the Visual Immersion Program (VIP)—Video
Observational Learning (VOL) [4]. In this application, an
instructional video can be quickly and intuitively divided
into logical segments, with each segment representing a
step in a multi-step routine. In addition, static scene cues
representing each of the video segments can be easily
created and then shown with or without its accompanying
video. Video segments can be played in a continuous
fashion, or step-by-step depending on learning needs. The
major benefit of this tool is that practitioners can now
readily create lessons to teach language concepts by
incorporating highly motivating video content that includes
familiar people, settings, objects, favorite activities, etc.
Additionally, videos may be segmented as needed to more
effectively highlight behavioral skills, social pragmatic
skills, or activities of daily living and life skills.
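A rough sketch of the segmenting behavior described above follows, using assumed Python structures rather than the actual VOL application: an instructional video is divided into labeled time segments, each optionally paired with a static scene cue, and a lesson can then be presented either continuously or one step at a time.

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Segment:
    """One step of a multi-step routine within an instructional video."""
    label: str
    start_s: float
    end_s: float
    static_cue: Optional[str] = None   # photo representing this step

@dataclass
class VideoLesson:
    video_file: str
    segments: List[Segment]

    def play_continuous(self) -> None:
        print(f"Playing {self.video_file} from start to finish")

    def play_step(self, index: int, show_cue: bool = True) -> None:
        seg = self.segments[index]
        if show_cue and seg.static_cue:
            print(f"Showing static cue: {seg.static_cue}")
        print(f"Playing '{seg.label}' ({seg.start_s}s to {seg.end_s}s)")

lesson = VideoLesson("handwashing.mp4", [
    Segment("turn on water", 0.0, 4.0, "cues/turn_on_water.jpg"),
    Segment("use soap", 4.0, 9.0, "cues/use_soap.jpg"),
    Segment("rinse and dry", 9.0, 15.0, "cues/rinse_and_dry.jpg"),
])
lesson.play_step(0)          # step-by-step presentation
lesson.play_continuous()     # or the whole routine at once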
Conclusions
In this paper we have provided an overview of several categories of communication hardware and software and described current trends in communication technology as it applies to
individuals with ASD. While in the past most technologies
for persons with special needs were developed by companies and organizations focused exclusively on such populations, the contemporary landscape is quite different as
hardware and software created for mainstream markets are
readily adaptable to special needs populations. As a result,
new possibilities exist for use of technology to effectively
support and improve everyday communicative exchanges
for learners with ASD and their communication partners;
additionally, technological developments are leading to the
creation and more widespread use of innovative teaching
tools that may be more effective for language instruction
than their predecessors. The widespread use of innovative technology across every segment of society bodes well for persons with ASD.

Notes

[1] Examples of consumer-level hardware running special-purpose software include personal laptop computers with Speaking Dynamically Pro, tablet computers running Clickit!, and personal laptops with Pogo Boards.

[2] Examples of consumer-level hardware running general-purpose software include tablet computers running Microsoft Word and personal desktop or laptop computers running Microsoft PowerPoint to present dynamic displays.

[3], [4] These authors are involved in the design and creation of the described software projects but do not receive royalties for these activities.
Appendix: Referenced Technology

Hardware

GoTalk: Special Purpose hardware by Attainment Company
DynaVox V: Special Purpose hardware by DynaVox Mayer-Johnson

Software

Boardmaker with Speaking Dynamically Pro: Special Purpose software produced by DynaVox Mayer-Johnson
Clickit!: Software produced by IntelliTools, Inc.
First-Then: Special Purpose application by Good Karma Applications, Inc.
Google Images: General Purpose software by Google, available at http://images.google.com/
iMovie: General Purpose software produced by Apple Inc.
iPad AAC software applications: http://www.miasapps.com/icomm.html
iPhoto: General Purpose software produced by Apple Inc.
Microsoft PowerPoint: General Purpose software produced by Microsoft
Microsoft Word: General Purpose software produced by Microsoft
My Choice Board: Special Purpose application by Good Karma Applications, Inc.
MyTalk: Special Purpose application by 2nd Half Enterprises LLC, available at http://www.mytalktools.com/pDownloadMyTalk.htm
PicCalendar: Special Purpose application by INZENYR LLC
PogoBoards: Special Purpose software produced by Talk To Me Technologies, LLC, available at http://www.pogoboards.com
Proloquo2Go: Special Purpose application by AssistiveWare, available at http://www.proloquo2go.com/
Picasa: General Purpose software produced by Google
QuickTime Player: General Purpose software produced by Apple Inc.
Steps: Special Purpose application by Adastrasoft
Windows Movie Maker: General Purpose software produced by Microsoft
Windows Media Player: General Purpose software produced by Microsoft
Viking: Special Purpose software produced by Viking Software
Visual Immersion Program (VIP)—ALP Animated Graphics (AAG): Special Purpose software produced by Children's Hospital Boston
Visual Immersion Program (VIP)—Teaching Language Concepts (TLC): Special Purpose software produced by Children's Hospital Boston
Visual Immersion Program (VIP)—Video Observational Learning (VOL): Special Purpose software produced by Children's Hospital Boston
YouTube: General Purpose software by Google Inc., available at http://www.youtube.com

Mobile Media Devices

Android: Smartphone currently owned by Open Handset Alliance (OHA)
Blackberry Storm: Smartphone developed by Research In Motion (RIM)
iPhone: Smartphone produced by Apple Inc.
iPad: Handheld media device by Apple Inc.
References
Althaus, M., de Sonneville, L. M., Minderaa, R. B., Hensen, L. G., &
Til, R. B. (1996). Information processing and aspects of visual
attention in children with the DSM-III-R Diagnosis "Pervasive Developmental Disorder Not Otherwise Specified" (PDD-NOS):
II. Child Neuropsychology, 2, 17–29.
Bandura, A. (1977). Social learning theory. New York: General
Learning Press.
Bellini, S., & Akullian, J. (2007). A meta-analysis of video modeling
and video self-modeling interventions for children and adolescents with Autism spectrum disorders. Exceptional Children, 73,
264–287.
Blischak, D. M., & Ho, K. (2000). School-based augmentative and
alternative communication evaluation reports. Contemporary
Issues in Communication Sciences and Disorders, 27, 70–81.
Bondy, A. S., & Frost, L. A. (1998). The picture exchange
communication system. Seminars in Speech and Language, 19,
373–398.
Carr, E. G., Binkoff, J. A., Kologinsky, E., & Eddy, M. (1978).
Acquisition of sign language by autistic children. I: Expressive
labeling. Journal of Applied Behavior Analysis, 11, 489–501.
Drager, K. D. R., Postal, V. J., Carrolus, L., Castellano, M., Gagliano,
C., & Glynn, J. (2006). The effect of aided language modeling on
symbol comprehension and production in 2 preschoolers with
autism. American Journal of Speech-Language Pathology, 15,
112–125.
Frost, L., & Bondy, A. S. (1994). PECS: The picture exchange
communication system training manual. Cherry Hill, N. J.:
Pyramid Educational Consultants.
Frost, L., & Bondy, A. (2002). Picture exchange communication
system training manual (2nd ed.). Newark, DE: Pyramid
Education Products.
Fuller, D., & Lloyd, L. L. (1990). The role of iconicity in
augmentative and alternative communication symbol learning.
In W. Fraser (Ed.), Key issues in mental retardation research
(pp. 295–306). London: Routledge.
Fuller, D. R., Lloyd, L. L., & Schlosser, R. W. (1992). The further
development of an augmentative and alternative communication
(AAC) symbol taxonomy. Augmentative and Alternative Communication, 8, 67–73.
Kozleski, E. B. (1991). Visual symbol acquisition by students with
autism. Exceptionality, 2, 173–194.
Light, J. C., Beukelman, D. R., & Reichle, J. (Eds.). (2003).
Communicative competence for individuals who use AAC: From
research to effective practice. Baltimore: Paul H. Brookes
Publishing Co.
Light, J. C., & Drager, K. (2007). AAC technologies for young
children with complex communication needs: State of the
science and future research directions. Augmentative and
Alternative Communication, 23, 204–216.
Lloyd, L. L., Fuller, D. R., & Arvidson, H. (1997). Augmentative and
alternative communication: A handbook of principles and practices. Needham Heights: Allyn & Bacon Publishing Company.
Mineo, B. A., Peischl, D., & Pennington, C. (2008). Moving targets:
The effect of animation on identification of action word
representations. Augmentative and Alternative Communication,
24, 162–173.
Mirenda, P., & Iacono, T. (1988). Strategies for promoting augmentative and alternative communication in natural contexts with
students with autism. Focus on Autistic Behavior, 3, 1–16.
Mirenda, P., & Iacono, T. (2009). Autism Spectrum Disorders and
AAC. Baltimore, MD: Paul H. Brookes.
Mirenda, P., & Locke, P. A. (1989). Comparison of symbol
transparency in nonspeaking persons with intellectual disabilities. Journal of Speech and Hearing Disorders, 54, 131–140.
Quist, R. W., & Lloyd, L. L. (1997). High technology. In L. L. Lloyd,
D. R. Fuller, & H. H. Arvidson (Eds.), Augmentative and
alternative communication: A handbook of principles and practices (pp. 137–168). Needham Heights, MA: Allyn & Bacon.
Schlosser, R. W., & Blischak, D. M. (2001). Is there a role for speech
output in interventions for persons with autism? A review. Focus
on Autism and Other Developmental Disabilities, 16, 170–178.
Schlosser, R. W., Koul, R., Shane, H. C., & Luiselli, J. (2010). Do
animations facilitate symbol understanding in children with
autism? Grant funded by U.S. Department of Education, National
Institute on Disability and Rehabilitation Research, CFDA
84.133G-1 Field-Initiated Research, 10/1/2009–9/30/2012.
Schlosser, R. W., Shane, H., Sorce, J., Koul, R., Bloomfield, E., &
Debrowski, L., et al. (2010). Animation of graphic symbols
representing actions and prepositions: Effects on transparency,
name agreement, and identification. (manuscript under review).
Schlosser, R. W., & Sigafoos, J. (2002). Selecting graphic symbols
for an initial request lexicon: Integrative review. Augmentative
and Alternative Communication, 18, 102–123.
Schlosser, R. W., & Sigafoos, J. (2008). Communication intervention
for children with Autism Spectrum Disorders. In J. L. Matson
(Ed.), Autism spectrum disorders: Evidence-based assessment
and intervention across the lifespan (pp. 299–325). Amsterdam,
New York: Elsevier.
Schlosser, R. W., Sigafoos, J., & Koul, R. K. (2009). Speech output
and speech generating devices in autism spectrum disorders. In
P. Mirenda & T. Iacono (Eds.), Autism Spectrum Disorders and
AAC (pp. 141–170). Baltimore, MD: Paul H. Brookes.
Shane, H. C., & Albert, P. D. (2008). Electronic screen media for
persons with autism spectrum disorders: Results of a survey.
Journal of Autism and Developmental Disorders, 38, 1499–1508.
Shane, H. C., & Bashir, A. S. (1980). Election criteria for the adoption of
an augmentative communication system: Preliminary considerations. Journal of Speech and Hearing Disorders, 45, 408–414.
Shane, H. C., O’Brien, M., & Sorce, J. (2009a). Use of a visual
graphic language system to support communication for persons
on the autism spectrum. Perspectives on Augmentative and
Alternative Communication, 18, 130–136.
Shane, H., Schlosser, R. W., Sorce, J., Duggan, M., O’Brien, M.,
Flynn, S., et al. (2009b). The efficacy of teaching language
concepts to children with autism. New Orleans, LA: Seminar
presented at the national convention of the American Speech-Language-Hearing Association.
Shane, H. C., & Weiss-Kapp, S. (2008). Visual language in autism.
San Diego: Plural Publishing.
Shukla-Mehta, S., Miller, T., & Callahan, K. J. (2010). Evaluating the
effectiveness of video instruction on social and communication
skills training for children with Autism Spectrum Disorders: A
review of the literature. Focus on Autism and Other Developmental Disabilities, 25, 23–36.
Sigafoos, J., O’Reilly, M., Ganz, J., Lancioni, G. E., & Schlosser, R.
W. (2007). Assessing correspondence following acquisition of
an exchange-based communication system. Research in Developmental Disabilities, 28, 71–83.
Stephenson, J. (2009). Iconicity in the development of picture skills:
Typical development and implications for individuals with
severe intellectual disabilities. Augmentative and Alternative
Communication, 25, 187–201.
Van der Meer, L., Kagohara, D., Achmadi, D., Green, V. A.,
O’Reilly, M. F., & Lancioni, G. E., et al. (in press). Teaching
functional use of an iPod-based speech-generating device to
students with developmental disabilities. Journal of Special
Education Technology.
Wendt, O. (2009). Research on the use of graphic symbols and
manual signs. In P. Mirenda & T. Iacono (Eds.), Autism
spectrum disorders and AAC (pp. 83–139). Baltimore: Paul H.
Brookes.
Zangari, C., Lloyd, L. L., & Vicker, B. (1994). Augmentative and
alternative communication: An historic perspective. Augmentative and Alternative Communication.
