A leitmotif in the reading for this class so far has been the justification of technologically driven, multimodal modes of composition as alternatives to (mostly) text-based essays as student projects. This is understandable given the reluctance of some faculty to move away from the textual modes of composition that they produced as students and feel comfortable evaluating. Many also fear that new modalities rely so heavily on technology that they themselves will have to learn a new literacy.
I find Pamela Takayoshi and Cynthia Selfe’s “Thinking about Multimodality” a convincing and productive account of multimodality and its value because the chapter grounds its message in five reasons to teach multimodal composition and addresses five questions that instructors may have, not dismissing them but giving each a thoughtful response. The chapter also places multimodal composition on a timeline that includes text-based composition, showing that composition has evolved before; though the form has changed, the art remains significant. Takayoshi and Selfe also maintain that multimodality in composition does not require computers or new technologies, addressing the fear some instructors have of learning to use new instruments or programs.
Takayoshi and Selfe begin their chapter with an introduction to multimodality in the form of a question: “Why [m]ultimodal [c]omposition?” (1). I’ve added the italics to highlight that their introduction does not attempt to define multimodal composition so much as put it into a classroom context. I think it’s important that their definitions do not limit multimodality to images, web sites, PowerPoints, and so forth, because such open-endedness affords student composers creative control; students must make choices. In fact, multimodality succeeds when students set their own terms and conditions (Shipka 299) because they can base those conditions on what they aim to achieve rather than on completing a checklist of requirements an instructor assigns.
Takayoshi and Selfe also note that “teachers of composition are beginning to sense the inadequacy of texts—and composition instruction—that employs only one semiotic channel (the alphabetic) to convey meaning” (2). Arranged in boxes on the page, set apart from the rest of the chapter, are quotations from composition scholars such as Selfe, the New London Group, Wysocki, and Gee, giving the reader a snapshot of the conversation surrounding multimodality and composition. Readers both willing and reluctant to embrace the effects of multimodality on pedagogy can see that this conversation exists beyond the text they are reading.
And multimodality challenges boundaries in other ways. Takayoshi and Selfe argue first that by teaching expression outside of (or not limited to) the alphabetic semiotic channel, teachers prepare students to “communicate successfully within the digital communication networks that characterize workplaces, schools, civic life, and span traditional cultural, national, and geopolitical borders” (3). The text goes on to list professions where such skills prove necessary, including game design, archeology, the military, and, less surprisingly, comics (3). Multimodality is so woven into the creation of cultural products that when I tell a friend, an SVA graduate student, that I’m writing about multimodal composition this weekend, he has no idea what I’m talking about. His degree program focuses on the marriage of narrative and image, and he aims to write children’s books, but to him and those in his industry, multimodality has so long been the norm that they don’t need the term; they just call it being good at their jobs.
President Obama, too, set himself apart from his opponent during his re-election campaign by spending an hour or two on reddit doing an AMA (ask me anything) thread, in which anyone could write him short messages and he responded. Though the medium was largely text-based, reddit threads leave room for hyperlinks to photos or other texts, and the platform made it possible for people to communicate anonymously with the President of the United States.
Because of the ubiquity of multimodal communication, not only must students learn to consume and create multimodal texts, but composition departments must also learn to teach them, or they, too, risk falling behind. Takayoshi and Selfe understand that to academics this may sound like “technological determinism” (3), but they add that such moves in composition instruction will keep the discipline relevant to the people composing and consuming and to their expectations, which to me humanizes composition: the focal point becomes the people interacting with texts.
And why would this have caught on in the first place? Alphabetic semiotic channels have worked for hundreds of years, after all. This question touches on Takayoshi and Selfe’s third point: creating multimodal compositions may prove challenging, but it also engages the author. Yes, multimodal composition presents the challenge of learning not only the tools of creation but also the aesthetic sensibility to use them effectively. But that learning can be fun, and besides, many “students bring to the classroom a great deal of implicit, perhaps previously unarticulated, knowledge about what is involved in composing multimodal texts” (4), so the learning process for them may mean applying what they already know, or building on it, to create academic projects. The conversation about pedagogy also often highlights an ideal in which the teacher/student relationship is reciprocal, meaning that teachers can learn from students as students learn from teachers, and students teaching teachers how to use web applications challenges traditional classroom hierarchies.
But this challenge to complete authority does not undermine the composition teacher’s expertise, as “[a]udio and visual composing requires attention to rhetorical principles of communication” (5). The literacy needed for multimodal communication springs from previous notions of literacy, and composition instructors with knowledge of rhetoric still have much to teach students. In fact, a thorough education in rhetoric prepares instructors well, because written rhetoric as we know it arose out of the study of oral communication, and returning to oral or visual elements in a rhetoric course therefore grounds the study of rhetoric in its history (5). The classicists have nothing to worry about, unless, of course, they are worried about having to learn PowerPoint.

Finally, Takayoshi and Selfe note the pedagogical benefits of multimodality in composition courses, arguing that such literacy can achieve “long-valued pedagogical goals” (5), and they give instructors agency in that achievement. Citing John Dewey, Takayoshi and Selfe argue for the importance of involving students in setting goals for the learning process, which complements their earlier assertion that multimodality engages students as producers as well as consumers. More importantly, Takayoshi and Selfe note that technology and multimedia access will not change composition instruction by themselves; teachers must employ these elements to catalyze change in composition (6). That means nobody is being left behind: teachers are being asked to participate in new and exciting ways to make progress toward long-valued educational goals.
Of course, such changes can still make some uncomfortable, and Takayoshi and Selfe anticipate that, finishing their chapter with responses to common concerns. For example, some wonder whether multimodal composition is really composition at all. The authors respond by noting that “[t]he classical basis for composition instruction involves teaching students how to use all available rhetorical means of communicating effectively” (6). Communication has always relied on the media available, and the rules governing it (grammar, etiquette, or logic, to name a few) depend on the affordances of the semiotic channels at hand.
Some wonder whether shifting the focus of composition curricula to include multimodality will require giving something up, to which Takayoshi and Selfe respond that teaching students to make “sound rhetorically based…video, still images, animations, and sound can actually help them better understand the particular affordances of written language” (9). If students develop a solid understanding of how to communicate visually, they can set that understanding against the rules of written communication, reinforcing their grasp of both literacies by creating foils that operate as mnemonic devices.
Takayoshi and Selfe also note that concern over what is lost in a shift to teaching multimodality is nothing new. The sixteenth-century church, for example, worried that easy access to print would threaten its authority as the holder of information and would promote vernacular speech. And Plato’s Socrates worried that putting ideas into written words would damage memory, leaving people without immediate access to memorized information such as how to get to Athens or that hemlock is bad for you. (Too soon?)
As a reader fairly well convinced that multimodal composition instruction is increasingly valuable to students and teachers alike, I think this chapter articulates much of the argument for such instruction in a way that could persuade instructors to experiment with technology. Throughout the chapter, Takayoshi and Selfe maintain that technology need not play a governing role in multimodality, as students can still create visual “texts” by hand. This is a strong move, as I think much resistance to multimodality arises from teachers’ fears that they will not be able to keep up with technology and will be left behind.


