
Editorial: annual conference report

by Geoff Hart

Previously published as: Hart, G. 2008. Editorial: Annual conference report. the Exchange 15(2): 2, 11–24.

As I did last year, I'll be using this space to report on the sessions I attended at this year's annual conference in Philadelphia. Not all were directly relevant to scientific communication, but with a few twists of perspective, I found that all the speakers had something useful to say about how we do our work. You can obtain copies of the speaker handouts at the conference Web site.

Keynote speech: Howard Rheingold

Howard Rheingold (hlr@well.com, http://www.rheingold.com/) started his career back in the days when the Xerox Palo Alto Research Center (PARC) was still working on the first word processor, leading to his book Tools for Thought. Since then, he's been a kind of knowledge butterfly, flitting among the more attractive intellectual flowers of our age and doing a lot of thinking about them, while simultaneously cross-pollinating a great many ideas.

One of his recent enthusiasms began with observations of Finnish teens and their cell phones; apparently, the Finnish word for cell phone is the diminutive form of the word hand, which says something interesting about the value they place on their phones and how close they keep them at all times. Rheingold noted that "We are human because we use communication to do things together in new ways." He speculated that this ability to use communication to organize ourselves is what helped our primitive forebears survive when other hominids died out. Listening to his description of how these kids were constantly networked reminded me of the buzz among computer scientists about the notion of "pervasive computing", in which computers will be embedded in everyday objects and found everywhere in the world around us. Possibly these folk should get out of the lab and keep an eye on the real world, since in many ways, we're already there. Cell phones may not yet be the world's largest pervasive computing network, but with most new phones now permitting Web browsing, they're clearly version 1.0 of that world. Consider a device like Apple's iPhone, which combines an iPod (something that seems as firmly attached to teens like my children as any of the devices used by the Borg in Star Trek), a small computer (including a nifty Web browser), and a decent cell phone with (as of the latest release) GPS capability—and all of it integrated seamlessly with their desktop Macs now that Apple is transitioning from their Mac.com service, which already permits sharing of calendars and other information, to MobileMe. Other competitors aren't quite there yet, but they'll catch up eventually.

One of the many interesting things about this sea change in how we're using technology is just how much it changes the way our world works. For instance, kids in Finland and Japan tend to flock together, drawn to interesting happenings by means of text messages transmitted over their phones. This is a fairly innocent, if still intriguing, social phenomenon, but it also has both darker and more practical sides. The darker side is what science fiction author Larry Niven referred to as a "flash crowd" in his 1973 story of the same name: cheap communication technology combined with essentially instantaneous transportation led to the "flash" emergence of crowds of people, often leading to riots and looting. The practical side is a sudden ability of otherwise powerless people to self-organize in a great hurry. Examples include the mass demonstrations against Philippines president Joseph Estrada in 2001, similar political demonstrations in Korea and Spain, and the Muslim protests against the infamous Mohammed cartoons in Denmark. These and related thoughts led Rheingold to write Smart Mobs. I've seen this sudden appearance of communities described as a "collective emergent response": something that emerges, often without any prior coordination, from the collectivity.

There's been an interesting sequence of communication revolutions over time. Probably the first preservation of collective memory in fixed form, preserved across time and space rather than purely as oral history, was cuneiform writing on clay tablets. The subsequent development of paper and ink made the process easier, but probably did not change it much beyond that, and the development of the printing press had a similar effect: a quantitative change (vastly improved reproduction speed) rather than anything truly qualitative. In all these cases, fixing information in tangible form remained the province of experts. But now, widespread literacy combined with the creation of the Internet has produced an unimaginable acceleration of this process: not only has publishing become open to anyone with access to a computer (even if only via a public library's shared terminals), but those authors are now distributing their creations to ever larger global audiences, accompanied by huge amounts of collaboration on and reworking of the information via blogging, wikis, Facebook, Del.icio.us, and others. All of these trends have facilitated knowledge sharing and collaboration, giving rise to the modern technological and social explosions, which have been accelerating faster in recent decades than in all previous centuries. In pondering this, I found it interesting and ironic that at the same time this has been happening, phenomena such as cell phone culture and blogs are once again reinvigorating the old notion of oral history and reinventing how we communicate.

Many companies and groups have been taking advantage of this paradigm shift, nowhere more obviously to technical communicators than in "open source" software. IBM, for instance, has created an "open commons", releasing many of its patents into the public domain to spur innovation and create a market for its open-source products, such as Linux. In so doing, IBM turned a potentially serious competitor for its main software business into a lucrative source of consulting and services income. Open-source software is all about harnessing the power of communities and the information they're eager to share; the successes of Linux and of the Mozilla Foundation (developers of the free Firefox Web browser and Thunderbird e-mail software) are familiar examples. Sites such as eBay work because of their "reputation management system", which helps buyers find vendors they can trust and helps vendors find buyers by demonstrating their trustworthiness. Wikipedia, which combines peer creation and review of knowledge with open access to that knowledge, is perhaps the most familiar example to technical communicators, but there are others. ThinkCycle, for example, uses the open collaboration concept to harness the minds of people all over the world to solve thorny design challenges. Many other endeavors use "borrowed" computing time from computers all over the world to perform calculations that would be impossible for any one agency: science-related examples include SETI@home, a project to analyze interstellar radio signals in the hope of discovering other civilizations, and the protein folding project, a tool for analyzing protein structures and speeding up the development of new drugs while improving our understanding of crucial biological processes.
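To make the "borrowed computing time" idea concrete, here is a minimal sketch in Python (with invented function names; it illustrates the work-unit concept, not how SETI@home is actually implemented). A coordinating program splits one huge job into small, independent work units; in volunteer projects those units travel over the Internet to participants' computers, whereas here a local process pool stands in for the volunteers:

    from multiprocessing import Pool

    def analyze_work_unit(unit):
        """Stand-in for the expensive per-unit computation
        (e.g., scanning one slice of radio-telescope data)."""
        return sum(x * x for x in unit)  # trivial placeholder calculation

    def split_into_work_units(data, unit_size):
        """Divide one large dataset into small, independent chunks."""
        return [data[i:i + unit_size] for i in range(0, len(data), unit_size)]

    if __name__ == "__main__":
        dataset = list(range(1_000_000))  # stand-in for the full dataset
        units = split_into_work_units(dataset, unit_size=10_000)
        # Locally, a process pool plays the role of the volunteers' machines;
        # in a real project, each unit would travel over the Internet instead.
        with Pool() as volunteers:
            results = volunteers.map(analyze_work_unit, units)
        print(len(units), "work units processed; combined result:", sum(results))

Because the work units are independent, adding more "volunteers" speeds the job almost linearly, which is precisely what makes this approach attractive for problems too large for any single agency.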

There are several key characteristics that permit communities to form and collaborate. The tools must be easy to use, enable connections, be open to all, promote collaboration, be self-instructing (easy to learn), and leverage the power of self-interest. The last point is particularly interesting because, as in economic theory, selfish (self-interested) individuals working together can accomplish great things for everyone. In this context, you may be interested in "the cooperation project", which is designed to encourage interdisciplinary study about cooperation and collective action. Technical communicators will be familiar with this concept in the guise of Web 2.0. Michael Wesch of Kansas State University has explicated this brilliantly in a 5-minute video (http://www.youtube.com/watch?v=NLlGopyXT_g) that shows how things are changing; for all the hype, Web 2.0 really does represent something new and exciting.

All this ferment has also led to what is commonly known as a "creative commons", most familiar in the form of the group of the same name. One goal of this group is to provide information creators with a way to provide more nuanced access to their information than is permitted by conventional copyright, thereby facilitating collaboration and conversation and co-creation using an author's materials. A direct example of how this can affect scientific communicators is discussed at some length in a recent Scientific American article on "Science 2.0". If you have any experience in this area, and particularly in how it affects scientific communication, please drop me a line to discuss the possibility of writing about your work for this newsletter.

All of these trends will develop increasing importance for us in years to come. Communication is changing at a phenomenal rate, and although our traditional means of communication remain important and valid tools, clinging too tightly to them will stop us from taking advantage of many new possibilities. If you're currently taking advantage of any of these trends to improve your scientific communication, drop me a line; I'm sure most readers of this newsletter will be interested to learn what you're doing.

Distance education

Science has always been about global communication, and for those of us who must communicate complex science to the public, the notion of distance education should be quite familiar. However, I attended this session to get a sense for how educational technology is changing, since many of us are also educators and may have things we can learn from the academics in our midst.

David Lumerman and Robert Krull (krullr@rpi.edu) discussed various aspects of their experiences with distance education in the form of a case study, revealing both the challenges and the promise of these methods. One common theme was that despite advances in the technology, the human aspects of the communication were still the most important, and any strategy that neglects these aspects will fail or be greatly compromised. Though most of us still communicate in static form (print, online help, Web pages), this will change gradually or perhaps even suddenly, as Howard Rheingold noted during his keynote. It will also change as our roles transform from one-way information delivery to more interactive roles, in which we have much to learn from what educators are doing. Distance educators are already grappling with these changes.

Many distance education programs involve a mixture of online and in-person learning, often with both proceeding simultaneously. Technological limitations (e.g., excessive transmission delays, too-slow site responses) and a lack of adequate interactivity were the two biggest problems reported by the presenters (each 33% of the total); latency (delays) was a particular problem for audio and video, with occasional delays of 10 seconds or even longer. Other problems included difficulty in achieving effective interactions (15%), in human networking (13%), and in dynamically defining learning roles (6%). The best learning experiences were achieved when everyone participated, but it was difficult to manage "handoffs" (taking turns speaking) and to juggle different streams of information (audio, video, chat) simultaneously. Using graduate students as moderators and facilitators during lectures helped keep the interactions on track, but did not entirely resolve these problems. For example, remote students could click a button to sound a chime indicating that they wanted to say something, but this was not particularly effective (the chime was often missed). Students preferred face-to-face interactions and conference calls over videoconferencing, preferred phone calls over chat software, and preferred chat software over whiteboard software, though this may have resulted from limitations in the available technology rather than inherent problems in the approach. The study also revealed the importance of testing technology carefully under realistic conditions to ensure that it performs as advertised.

In this study, teachers tended to be most strongly concerned with reliable delivery of information, whereas students were more concerned with their ability to control the multiple channels of information they were receiving and to engage in peer-to-peer learning. A majority of the on-campus students (mostly grad students in Krull's case study) engaged in peer learning during class, whereas in previous reports, variable but often large proportions of students weren't paying close attention to the class, being distracted by the ability to engage in other online activities (e.g., Twitter, music downloads). Distance students were more tolerant of problems than on-campus students, perhaps because they had no alternatives, but interestingly, a significant number of on-campus students chose to take the course online rather than in person, perhaps because doing so allowed them to multitask and accomplish other activities. Both groups favored synchronous interaction with their peers over asynchronous interactions.

Jennifer Cote (jennifer_cote@credence.com) and Mariann Foster (mariann_foster@credence.com) discussed how their company, which produces quality control equipment used by engineers, transitioned from classroom-based learning to online learning. Since they initially had no experience with this form of instruction, they chose a contractor to produce their learning management system and create the final instructional materials, but they retained control over the actual course content because of their expertise in creating this material. To acquire some expertise, both took William Horton's course "E-learning by design". One problem with their initial approach was a lack of incremental reviews; rather than approving information at several stages (e.g., storyboard, prototype), they reviewed only the final product, leading to considerable rework when the contractor did not successfully interpret their needs. Once they had been through the process with their contractor and began to feel comfortable with the technology, they gradually began taking over more of the production themselves. Over time, they developed various useful heuristics for their lessons. For example: state the learning objectives, test that those objectives were attained, test what was absorbed, and "do" (actually perform the activities, which sometimes was equivalent to the test part of the heuristic). One useful rule of thumb, adopted from an Elearn.com article they did not cite: "Keep lessons no longer than a sitcom." They also noted the importance of keeping lessons interactive, because without interaction, you might as well just send someone a PDF of the information you want them to learn. This fits with what I've read about adult learning, in which engagement can be significantly increased through interaction even when, unlike in the case study by Lumerman and Krull, the interaction is not with humans.

In developing lessons, they found storyboarding techniques particularly useful. A typical storyboard combined screen captures with narration text in a Word document, thereby showing how the two related. In my own work, I began with this technique but discovered a more effective alternative: combining the information in Dreamweaver to create a working prototype that is easy to revise and republish before committing efforts to something more final, like a Flash presentation. I've found that this makes the actual images and interactions clearer than is possible with a static storyboard, particularly when you're dealing with managers and subject-matter experts (SMEs) who aren't good visual thinkers. A big advantage of storyboarding, whatever approach you use, is that it provides a quick way to perform stakeholder reviews, including reviews by SMEs, before actually committing time and effort to produce a working prototype. One cool trick they discovered was to use text-to-speech software (now built into most operating systems) to read the narration, since this quickly produces a usable prototype of the narration; the problem with recording actual voices during the early design stages is that narration takes a long time to do right, and changes in the script would force lesson developers to re-record the narration with each change, a poor use of their time. Rather than hiring professional actors to do the voiceovers, they used their own colleagues, and found that students appreciated the resulting diversity of voices. I've used this approach successfully too, and it illustrates that you can achieve surprisingly good results without the time and expense of hiring professionals.
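As a minimal illustration of the text-to-speech trick (my own sketch, not the presenters' actual workflow; it assumes the third-party pyttsx3 Python library, which wraps the speech engines built into most operating systems), a few lines of code can turn a draft script into a placeholder narration file:

    import pyttsx3  # third-party wrapper around the OS's built-in speech engines

    # Draft narration script; in practice, this would come from the storyboard.
    script = (
        "Welcome to lesson one. "
        "In this module, you will learn how to calibrate the test equipment."
    )

    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # slow the default speaking rate slightly

    # Save a placeholder narration file; it can be regenerated in seconds
    # whenever the script changes, instead of re-recording a human narrator.
    engine.save_to_file(script, "lesson01_narration_draft.wav")
    engine.runAndWait()

Because regenerating the file takes only seconds, the script can keep evolving through reviews without wasting anyone's recording time; human voices need to be recorded only once the script is stable.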

How scientists communicate

I often find that when you know something far too well to have any critical distance from it, listening to someone else discuss it provides many interesting insights. In this session, Joseph Harmon (harmon@cmt.anl.gov) of the Argonne National Laboratory, coauthor of Communicating Science: The Scientific Article from the 17th Century to the Present, presented a summary of the trends he and his coauthor observed during an intensive study of how scientific communication has changed over the past 400 years. He reported three clear trends and one emerging trend.

The first trend involves the increased use of visuals: 88% of modern journal papers contain at least one graphic, versus only 39% in the 17th century, probably due to the technical difficulty of creating such materials during the early history of science writing and the current ease of doing so. During this time, visuals have evolved from illustrations with varying degrees of photorealism (the dominant form in the 17th century) to data-driven graphics. For example, as late as the 19th century, Harmon found only a single Cartesian graph in his sample of the research literature, versus 60% of modern papers; data tables, being easier to create, were more common than graphs initially, but still only appeared in 10% of papers, versus 50% of papers today. (And these figures seem low based on the papers I edit, probably due to the different populations being sampled.) Modern graphs have also become increasingly complex, often with multiple graphs in a single figure, juxtaposed to facilitate comparisons of the simultaneous trends in several parameters. In the 21st century, graphics are increasingly moving online, as "online supplemental material", where they can make lavish use of color (still difficult and expensive to use for printed matter) and include sound files, video, and interactive tools such as modeling software and databases. Amidst these changes, I noted the evolution from first-person, anecdotal evidence provided by nominally credible authorities in a field to an increasingly heavy reliance on quantitative data and replication of results. Lost in this evolution is much of the qualitative information that is often equally important, but more difficult to "sell" to journal reviewers.

The second trend is the emergence of English as the international language of science, accompanied by certain changes in writing style. From the 17th century to the present, there has been a continuing evolution from active to passive voice, from personal references to the excision of any personal component in the writing, from anecdotal and qualitative data to replicated and quantitative data, from broad statements to increasingly qualified (hedged) statements, from long and dense sentences to shorter sentences with fewer clauses, and from simple descriptive phrases to complex compound adjectives. Nominalization (turning verbs into nouns) and the creation of complex acronyms and abbreviations have also greatly increased in frequency. Formerly poetic and visually descriptive phrases have been largely lost from the literature. What interests me about these changes is how they have been inspired by attempts to make scientific writing focus on objective science rather than subjective impressions, ignoring the highly subjective, often irrational (highly emotional) personalities of the real scientists who produce this information. In many cases, the joy of reading the material has been lost, and the resulting materials are of interest only to scientists, irrespective of the inherent interest of the topic. For those of us who must transform science into popular science, this is a problem; it can be very difficult to persuade authors to abandon these hard-learned habits and adopt a style that will communicate effectively with a new audience, particularly since this form of communication is often devalued by the scientific community.

The third trend is that the schema for a scientific document has become increasingly standardized, with increasing use of headings (versus the scrawled marginalia of the 17th century), integration of graphics and tables directly within the text (versus the "plates" at the end of a paper or the center of a book that were traditionally used in the 17th century), greatly increased use of literature citations (often dozens per paper today, rather than a few scrawled marginalia in the 17th century), and insertion of equations within the text but on separate lines for ease of reading (rather than inline within paragraphs). Perhaps more significantly, the rhetorical structure has changed from an almost folksy description of the author's voyage of personal discovery to the rigidly structured AIMRDR schema (Abstract, Introduction, Methods and Materials, Results, Discussion, and References). These sections, in turn, have their own schemata. For example, the Introduction usually follows a pattern of describing the research domain, framing the research problem to be solved, and proposing a possible solution to be tested by research. This schema was adopted by only 60% of papers in the 17th century, versus more than 85% today. In contrast, the Conclusion section must fulfill the promises made in the Introduction by presenting answers to the questions raised at the start of the paper, while also discussing the wider significance of these results and calling for specific future research; though only 15% of papers have all three components, 60% have at least one of them. (Again, this seems low in my experience.) The Methods and Materials section has its own schema: preparation for the experiment, details of the experimental procedures, and generation and analysis of the data. The goal of this section is to provide a "warrant" for the Results (i.e., to justify their validity). The Results and Discussion sections, which are often combined, present the results of the study (often visually or by means of tables), then attempt to explain the meaning of these results, often supported by citing results from other studies, and provide any necessary qualifications of the results (uncertainties, future research, etc.). Interestingly, though the Abstract is now a key component of all journal articles, and is often read instead of the full paper (a guilty admission of most scientists), it is a relatively recent innovation (possibly as late as the 20th century); it presents the overall article in microcosm.

Journal articles are increasingly evolving onto the Web. Although the current form of the article itself may be identical to the printed form, length is no longer a restriction, so potentially huge amounts of supplemental supporting material may be provided. Visual and auditory information are increasingly available, as are interactive tools such as modeling software and databases such as the Encyclopedia of Life and many specialized genetics databases. And as my mention of Science 2.0 earlier in this article reveals, we are only seeing the beginnings of this evolution.

Interestingly, although the unified modern journal paper schema is highly efficient (see, for example, The scientific method: technical communicators learning from scientists), it sacrifices some of the pleasure of reading that came with the variety of older texts to achieve this efficiency. Some of this loss arises from the focus on abstract science rather than concrete human endeavor. Some of this arises from the modern peer review process, which began in the late 1700s when John Hill publicly criticized the many ludicrous research findings that were making their way into the research literature, controlled only by the diligence (and personal biases) of the publishers. Today, truly rigorous peer review provides a much higher degree of quality control, though the human fallibilities (prejudice, competition for research funding, personal animosities) have by no means been completely eliminated.

Pictures and profits

In this presentation, designer Patrick Hofmann (patrick@designph.com) presented examples of how he has redesigned information to take advantage of the power of visual communication. One of the more interesting things about Hofmann's approach is not so much the graphics, but rather how he manages to consistently think outside the box, something all of us should strive for. (The title of his presentation refers to how much money he has been able to save for clients through his designs; here, I'll focus on the design strategies rather than the financials.)

In one example, Hofmann was responsible for developing training aids for a company that produced laser projection machines for leather cutters. The handheld control for these machines was a typical engineering nightmare, requiring complex combinations of button presses with a three-button control, and errors in learning and applying these shortcuts were expensive in terms of lost time and wasted leather. The factory workers in this environment were recent immigrants from multiple ethnic backgrounds, with weak English skills and few words and phrases in common, suggesting the need for visual aids. Although strongly discouraged by his employer from visiting the actual users of the product, he nonetheless obtained permission to visit the factory and observed something crucial: that most of the workers had created their own visual aids to help them remember the correct combinations. By observing these aids and how they were used in the workplace, he was able to design a comparable work aid that made the work much easier and more effective for all the workers. In scientific communication, we might follow much the same approach by studying how scientists use equipment in the laboratory or the field.

In another example, Hofmann redesigned instructions for Sprint Canada's new telephony services, many of which were specifically intended to be sold to recent immigrants. Here, the goal was to eliminate as many translation costs as possible, and Sprint was willing to test their many audiences to confirm that the results were successful. Rather than hiring test participants via a recruiter, Hofmann discovered that participants could be hired much more cheaply (by an order of magnitude) from a temp agency such as Kelly Services. Such agencies offer a very powerful tool for usability testing: they maintain detailed records on all of their employees, including their education, skills and skill levels (e.g., computer experience), ethnicity, language skills, and so on, thereby allowing testers to be as specific as they want in recruiting test participants. By testing various combinations ranging from purely abstract (pictures) to purely literal (text) instructions, Hofmann arrived at a combination of words and pictures that communicated effectively (an 85% success rate in task completion tests), allowing Sprint to reduce its languages to two (English and French, Canada's two official languages). This is precisely the kind of inexpensive approach we could use to test the effectiveness of scientific technology transfer.

In a third example, Hofmann set out to help Hewlett-Packard reduce their documentation costs for an installation guide. This effort occurred during the redesign of a computer terminal, with the goal of developing a wordless setup guide, much like what Ikea provides for their products. Problems with the traditional approach included the need to localize 200-page manuals into more than 16 languages, with obviously huge translation costs and many less-obvious costs, such as the need to create a separate part number for each manual and maintain and manage inventories of these manuals. To assist in the redesign of both the product and the manual, they brought in their inexpensive video cameras from home and recorded details of how engineers performed the setup, including obtaining documentary evidence of how dangerous some aspects of the installation were (e.g., sharp edges of parts caused blood loss in some instances), and how frustrating others were (e.g., audible cursing during the assembly of difficult parts). The video made a strong case to management for modifying the design, such as covering sharp parts and labeling other parts directly so that the label information could be excluded from the documentation. One interesting insight was that although some linguistic groups read information from left to right and others from right to left, all groups read from top to bottom on a page and from top page to bottom page; binding the manuals at the top edge thus neatly eliminated the problem of having two different sets of manuals for the two different reading patterns. Again, this example illustrates how observing real users of a product and how they use the product can provide important design insights.

Hofmann provided a few additional useful tricks. Using small text boxes (2×3 inches) in storyboards is a useful way to force yourself to create concise descriptions. In some cases, and particularly for abstract concepts, words are more effective than visuals, or must be added to visuals for clarity. As Jakob Nielsen has noted, any feedback is better than none, even if that feedback can only be provided by your officemates or family members, or even yourself. (This is a particularly useful tactic when, as happens distressingly often, employers forbid their technical communicators to contact users of their product.) As Hofmann's examples illustrate, guerrilla tactics such as field visits to users are a powerful way to gain insights, often for not much money. When it's not possible to arrange these visits, sometimes thinking laterally reveals the solution. In one case where it was necessary to test products with a Chinese audience, there was no budget to travel to China to conduct tests, but there was enough of a budget to purchase inexpensive webcams that could be couriered to each test participant. This solved the problem nicely: combining the video feeds with chat software provided an elegant substitute for being physically present during testing.

Information visualization

In this session, Phylise Banner (pbanner@skidmore.edu) provided a philosophical take on how we humans process visual information, with an emphasis on teaching us how to think visually and think about visual thinking rather than providing predigested design solutions. (She introduced her presentation by recommending the book Visual Intelligence, by Ann Marie Seward Barry, which nicely complements what she was about to discuss.) This is an approach I strongly favor, since I share her belief that it's better to learn how to think through a problem than to memorize rote responses that often have limited applicability.

Visualization is the process of transforming observations into communication, even though the communication is inherently fictional; after all, ink on paper or dots on a computer screen are not the same thing as the object they portray. The process of communicating visually is complicated by what viewers bring to the dialogue: they not only perceive the dots, but also infer information about how and why a visual was created and what the designer was attempting to communicate through their design.

Visual perception is tightly related to how the brain processes visual signals from the eyes. Some features of this processing are consistent both between and within cultures. For example, if you draw any closed, curved shape, then add a circle inside the shape near one of its borders, then attach a triangle to the outside of the shape adjacent to the circle, it is nearly impossible to create an image that doesn't resemble a bird, even though it's unlikely that any real bird looks anything like what you've drawn. Similarly, if you draw two horizontal lines side by side (– –), add a vertical line below and between them ( | ), add a third horizontal line ( – ) below the vertical line, and then enclose this image within any shape, it's essentially impossible to produce a design that doesn't resemble a human face. However, our interpretation of many other images is strongly shaped by our history (education, experience) and our cultural interpretations of certain images. For example, different cultures assign different meanings to the same color; white is the color of death in China, but the color of purity and innocence in Western culture. Those who indulge in cross-cultural communication need to be keenly aware of the risks of using symbols without a deep understanding of the other culture.

Visual perception is always composed of three factors: a distal stimulus (what you're looking at), a proximal stimulus (how that image is detected by your eyes), and a percept (what you imagine those sensory signals to represent). When we see something, we attempt to match our internal representation of the image to the image's context, and in so doing, use our prior experience to help us assign meaning to the image. This process involves classification, an attempt to discover order or patterns in nature. An important role of design is to facilitate this process by using familiar visual symbols in such a way that the matching process becomes easier. Maps are also ways to link internal knowledge with the outside environment. In that context, I've always liked Richard Saul Wurman's treatment of the word map as an acronym; I paraphrase his explanation as "Making Able to Perceive".

Because every graphic is an interpretation of reality, successful communication requires that the designer and the viewer share enough knowledge to establish a connection between their world views. (This is why "art appreciation" courses exist: our culture is sadly impoverished in visual literacy, and the education these courses provide enables even naïve viewers to understand something of what an artist was trying to say or achieve.) An often neglected component of visual information is the emotion it is intended to evoke (and sometimes emotions that are unintentionally evoked). As technical communicators, constrained by the modern Western scientific mindset, we tend to forget about this and in so doing, fail to take advantage of the power of affect (emotional response) in a well-crafted visual. Some additional insights can be gained from the book Imagination and the Meaningful Brain by Arnold Modell.

Unclogging brain bandwidth by reducing cognitive load

In this session, Jane Bozarth (info@bozarthzone.com) discussed how information can be presented in such a way as to avoid overloading the recipient's ability to receive, process, and understand it (i.e., their "brain bandwidth"). Overload occurs when too much information is presented in too little space or time. To avoid this problem, a useful design trick is to identify the 20% of the information that is most important and eliminate the remaining 80%. (The actual numbers are less important than their relative magnitudes.) The goal is to reduce the "cognitive load", a term that can be translated simplistically as how hard the brain must work to deal with incoming information; heavier loads represent a more difficult task.

Humans have two main sensory channels for receiving information from our environment: an auditory–verbal channel and a visual–pictorial channel. Both channels have a limited capacity, and a powerful design strategy involves dividing information between the two channels rather than overloading a single channel. Consider, for example, how difficult it is to read a book while someone is talking to you: both streams of information compete for limited space in the auditory–verbal channel. Contrast this with how easy it is to examine a technical drawing while someone explains what you should be looking at and why it is relevant: the oral information enters the brain via one channel while the visual information enters via another, thereby avoiding competition for the limited bandwidth. This is why, for example, reading lengthy PowerPoint slides to your audience is less effective than simplifying the slides and letting your voice carry most of the content: the information is divided between two channels rather than one. An additional complication is that we tend to read faster than people talk; in addition to the two forms of words (written and spoken) vying for space in the same auditory–verbal channel, the lack of synchronization between the two complicates the task of processing the information. I've used this knowledge successfully in my own presentations by producing short bullet points that take only a second or two to read; by the time I've had a sip of water or taken a deep breath, my audience has finished reading and is ready to pay attention to what I have to say, their minds already primed by the bullet point.

To understand how cognitive load affects communication, it's helpful to distinguish between working memory (sometimes referred to as representational or short-term memory) and long-term memory. Working memory is where you hold information while you work to understand and respond to it, whereas long-term memory is where you store pre-existing information and is the source of connections between old memories and the new information being held in working memory; once the connections are established, the information can be transferred to long-term memory, where it becomes permanently available. The goal of design is to help your audience make this transfer. High cognitive loads are a problem because they can overload working memory, leading to a loss of information (just like overfilling a cup of coffee) and leaving too little time to process information and move it into long-term memory.

Many cognitive processes interfere with receiving and processing information. One of the better-known examples is the split-attention effect: when you are forced to divide your attention between two sources of information, you can't devote your full attention to either. This can be a relatively simple problem, such as when you must glance back and forth between printed documentation and the computer screen or when you must constantly refer back to a key or legend to understand a graphic, or something considerably more complex and dangerous, such as talking on your cell phone while driving. In both cases, we have only limited attention, and being forced to divide it among too many streams of information simultaneously can compromise the communication. This is why, for instance, PowerPoint presentations with sound tracks, multiple graphical animations, text-heavy slides with animated text fly-ins, and the speaker's voice become impossible to comprehend: there are too many signals vying for too little attention.

A famous design dictum is that the design is complete when there is nothing left to remove; I'm familiar with this from Antoine de Saint-Exupéry, who observed that "A designer knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away." Edward Tufte is another well-known advocate of visual minimalism. Diagrams are an interesting example of efficient communication, since the process of abstraction (preserving only the key details) both reduces the cognitive load (fewer distracting details to ignore) and takes advantage of the powerful visual processing channel.

Another problem relates to what Bozarth called compellingness: when something is so distracting that it incessantly draws your attention, you focus on the distraction rather than the information you should really be focusing on. Apparently, a major cause of airplane accidents on the ground is that pilots become so distracted by their instrument displays that they forget to actually look out the window and watch where they're going. (Having recently purchased a new Toyota Prius, I can testify that this is a real effect: the urge to keep an eye on the fuel-consumption display is nearly irresistible, even after several weeks with the car.)

Cognitive loads can be intrinsic, and related to the difficulty of the concept being communicated; we have no control over this, other than through our efforts to simplify and clarify. Loads can also be extrinsic, and related to extraneous details; we have considerable control over this aspect of a design. Loads can also be what Bozarth described as germane, meaning that they relate to the relevance of a communication effort to the audience and thus to their motivation to pay attention; we have some control over this because we can design simple, attractive communications that are directly relevant to the audience's needs and desires, and can engage them by turning dictation into dialogue (e.g., by creating a game). This can pose problems when designing a communication for both beginners and experts; the two audiences have different needs and desires, and thus require different communication strategies (e.g., simple, unchallenging exercises for beginners versus complex, demanding exercises for experts). Various strategies exist for meeting both needs in documentation (e.g., providing "more info." buttons and optional complex problems for the experts), but in the classroom, it's hard to strike the right balance; sometimes all you can do is to try enlisting the experts to mentor the beginners.

Conceptual diagrams for science communication

In this presentation, Joanna Woerner (jwoerner@umces.edu) and Caroline Wicks (caroline.wicks@noaa.gov) discussed their efforts to create conceptual diagrams that use symbols such as icons to present the essential details of a concept. Their goal is to synthesize and abstract complex information using the power of graphics to simplify the communication. They distinguished between conceptual diagrams and cartoons (which rely on humor and context), box-and-arrow drawings (e.g., flowcharts), and data graphics. One goal of creating conceptual diagrams is to help clarify thought processes, and thereby improve understanding, identify gaps in a body of knowledge, generate ideas, reveal priorities, identify key elements, and help groups synthesize information along the way to reaching a consensus. This can work, in part, because the same information may be presented in mutually complementary ways, as when text and graphics combine to make the meaning of an image clearer. Because of the simplification process, conceptual diagrams are a useful way to bridge the gap between scientists and the general public, thereby creating a shared vision.

Symbols can be used to represent both tangibles (e.g., aquatic organisms) and intangibles (e.g., flows, processes). Because visual symbols are inherently equivocal, standardizing symbols goes a long way toward ensuring accurate communication: as with the letters we use in written language, symbols communicate very efficiently once they have been learned. To support this goal, the speakers are part of a group striving to create a standard symbol and image library that can be used by others in their own conceptual diagrams. This collection currently comprises more than 1500 icons (soon to reach a total of 3000) and more than 2000 photos, and everyone is welcome to contribute new images.

As part of the presentation, Woerner and Wicks provided examples of nested diagrams, such as smaller, more detailed images zoomed out from a larger image that provides less detail but more context (sometimes referred to as blowouts), and sequential flows that show spatial or temporal changes. They also presented examples of hybrid diagrams that layer data of various forms on top of a conceptual diagram. To make the process of diagram creation more concrete, they led us through two class exercises in which volunteers were asked to play a version of Pictionary that they call "Conceptionary" with the people sitting at their table. The goal was to create crude prototype diagrams that could be used to communicate often-subtle scientific concepts, such as acid rain causing the death of trees or the bleaching of coral reefs. The power of this technique to create understanding and consensus in real time among the participants in a meeting was revealed by the fact that most volunteers were able to successfully communicate their concepts to the rest of the table in less than 2 minutes, even though no one admitted to being a professional artist. Even I succeeded, and my lack of artistic skills is legendary!

Knowledge transfer between academics and practitioners

Knowledge transfer (often called "technology transfer" in the sciences) is of keen interest to readers of this newsletter, as bridging the gap between scientists and the general public is an increasingly important part of our work. Thus, we have much to learn from any other group that faces the same challenge.

Joel Kline (jkline@lvc.edu) presented the results of his study of communication between university professors (academics) and technical communication practitioners in New Zealand. New Zealand has a two-level university system, in which research universities focus on academic work and train graduate students, whereas polytechnic institutes train practitioners. As in North America, many professors of technical communication have never actually practiced their trade, and therefore don't adequately understand the nature of the work and the challenges that practitioners face. On the flip side of the coin, practitioners don't appreciate the work done by academics because they often see it as too detached from the real world and irrelevant to their concerns. This is a great shame because, as I've always believed and as Kline confirms, both groups benefit greatly when they make efforts to discuss their mutual concerns and learn from each other. From the academic side of the divide, the problem is that there appears to be no unified model of knowledge transfer in technical communication, and possibly no clear perception of the benefit of such transfer; from the practitioner side, there's no clear return on the time investment required to listen to the academics, and also no clear model for how this dialogue should occur. As a result, each community is asking research questions that don't interest the other community. Kline refers to this as the WhoGARA problem: "who gives a rat's ass?"

Interactions between academics and practitioners come in various forms: face to face, via publications, and online. In my own experience, these forms fail for a variety of reasons. Face-to-face meetings most often arise in learning environments, such as when a practitioner attends university to attain a degree, or at conferences. But relatively few practitioners seek a degree, even though salary surveys show that this can improve their future earning potential, and you'll often see a single conference become two conferences, in which academics talk to academics, practitioners talk to practitioners, and neither group crosses the floor to talk to the other because their perceived interests don't overlap. Publications are another obvious common ground, except for the "common" part of that name: academics see little value in publications such as Intercom because the theoretical sophistication is usually low and there is no career-related incentive (e.g., tenure) to publish in such venues; practitioners, on the other hand, recognize the potential value of articles that appear in peer-reviewed journals, but simply lack the time to extract that value from papers that are often forbidding, theoretically dense, and turgidly written. These stereotypes raise a nearly impenetrable barrier between the two communities. Last but not least, there are the virtual communities established online. These suffer from all of the above problems, plus problems unique to the medium, combined with a lack of time to sort through the often-high level of traffic in such forums. All three modes of interaction suffer from a form of learned contempt for the other community: as with the feud between the Montagues and the Capulets, the original reasons for the feud may be long forgotten.

Those who responded to Kline's survey of New Zealand practitioners provided some interesting insights into the nature of the problem. They did not favor any specific form of virtual interaction with academics, though e-mail and discussion groups received the highest ratings (used by 30 and 25% of practitioners, respectively). Peer-reviewed journals were an unpopular source of information, with Technical Communication receiving the highest rating but not a truly high rating (only 28% had "ever" read a paper in the journal), and other journals received even less attention; though low, this response rate actually compares favorably with that of the most highly rated non-peer-reviewed publication, Intercom (at only 33%). Clearly, publications are not a high priority for practitioners, and academic resources (the teachers and their journals) were rated lowest of all possible sources of information; they received the lowest "useful or very useful" rating, and the highest "never or rarely useful" rating. Practitioners rated colleagues and seminars (including conferences and 1-day workshops) as their most useful sources of information. Professional associations were seen as another valuable source of knowledge. Colleagues, seminars, and associations collectively accounted for nearly 60% of the total sources of knowledge. Despite these pessimistic results, practitioners acknowledged—at least in theory—the value of contacts with the academy, but most felt a significant disconnect between the two worlds. Echoing my own experience, they generally did not believe that creating an online community to facilitate this dialogue would be effective. As a result, Kline notes that "we cannot simply provide a technological channel between the communities and expect it to work".

I attended Kline's presentation because I've been doing knowledge transfer work for nearly 20 years, and because understanding this work is an interest near and dear to my heart. For scientific communicators, many of whom engage in knowledge transfer activities between audiences separated by a gap as wide as that between the academics and practitioners in Kline's study, the lessons of this study are clear. First and foremost, steps must be taken to break down the barriers that separate the two communities: communication cannot happen if neither party can hear the other party's voice, or will listen to it if they do hear it. In scientific knowledge transfer, as in technical communication, the technical communicator's role must become that of mediator or translator between the two communities, helping each to understand the other and helping to identify ways of making each party's message audible and comprehensible. Various possibilities suggest themselves to me: Academics should be rewarded equally for theoretical and practical research, particularly when the practical research occurs in the workplace, in close cooperation with practitioners. Practitioners should be given strong incentives to join the academic world through opportunities such as adjunct professorships and joint research projects. Papers published in peer-reviewed journals should include a section entitled "implementation" or "practical considerations" to make practitioners more interested in reading them; if the implications of the research are clear, more practitioners will be willing to invest the time to read the full paper. Conversely, "popular" publications such as Intercom should consider routinely publishing the abstracts or "implications" sections of relevant journal articles, giving practitioners a reason to consult the journal to learn more. Simple steps such as these would go a long way toward bridging the divisive gap between academics and practitioners, whether in scientific or technical communication.

Closing keynote speech: Richard Saul Wurman

I've been a fan of Richard Saul Wurman (rsw@wurman.com) ever since I stumbled across a copy of Information Anxiety, which ignited my passion for information design. Though now in his 70s, Wurman appears as energetic as ever, and every bit as syncretic; in a long, entertaining, rambling presentation, he flitted between concepts like a bee visiting flowers to collect pollen, creating innumerable useful cross-pollinations along the way. (Indeed, I'm not surprised to have found his speech much like his books: nuggets of important information floating in a sea of fascinating distractions.) It's hard to unite these disparate thoughts into a coherent narrative, since much of what he said was almost a "greatest hits" collection of bon mots and followed his own observation that "nobody has ever wanted anything I've ever done"; Wurman does it anyway because it interests him, and in satisfying his own curiosity, he's produced more than 80 books that proved very interesting indeed to a great many readers. Rather than trying to assemble his talk into a narrative, I'll simply present a few of the thoughts that struck home as I listened:

Wurman concluded with an overview of his "19.20.21 project", which is designed to explore 19 cities around the world that will have more than 20 million people in the 21st century.

