
Editorial: And now for something completely different—the 2007 conference report

by Geoff Hart

Previously published as: Hart, G. 2007. Editorial: And now for something completely different—the 2007 conference report. the Exchange 14(2):2, 11–18.

This time, I've decided to take a break from the usual editorial essay and try something a bit different: a report on some sessions I attended at the STC annual conference in Minneapolis. Because of various SIG-related business, ad hoc meetings, and the inevitable scheduling conflicts, I didn't get to attend all the sessions I wanted to see. Plus, there weren't many sessions specifically of interest to scientific communicators (a topic I'll return to at the end of this essay), so I was forced to pick and choose a fairly eclectic group of topics. Herewith, my selections and some of the more important and relevant points:

Keynote speech: Simon Singh

Simon Singh (http://simonsingh.net) is that rarest of birds: someone equally at home working in a particle physics lab, writing popular science books, and directing documentary films. As the keynote speaker, Simon spoke about his documentary on Fermat's last theorem (http://simonsingh.net/The_TV_Film.html), and although I'll bet that very few audience members (possibly some of you!) had any prior interest in or understanding of the subject, he still managed to captivate the audience. This was due in large part to Simon's own considerable charisma, but he also made several points relevant to all technical communicators.

For example, he emphasized how important it is for a film-maker to establish a relationship and rapport with the person being interviewed—a point anyone who has interviewed scientists and engineers to get the facts will appreciate. Some of the key scenes obtained during filming would have been impossible without that relationship. Getting close to your collaborator is particularly important in film because a typical hour-long documentary may have room for no more than about 8000 words; most of the remaining content will have to be carried by the interview subjects, and they'll carry that burden much better if they can interact naturally with you. You'll still have to cut a great deal and focus on the essentials, and that's a task all of us face. To satisfy his own fascination with the subject and provide the details that were inevitably cut from the documentary, Simon used his research to produce a book entitled Fermat's Last Theorem in the U.K. and Fermat's Enigma in the U.S.—an interesting difference in title resulting from someone's perception of the trans-Atlantic cultural and language barrier. For more information on Simon's books, visit his Web site (http://simonsingh.net/Books.html).

A somewhat thornier issue involved Simon's conscious choice to edit the words expressed by a speaker during an interview. In this specific case, a mathematician being interviewed used the word "primes" to refer to prime numbers. Given the lack of time and space to explain the meaning of this term, Singh replaced the word with "numbers" while editing the film. The ethics of this situation are of interest to scientific communicators because we often face similar issues on the job when we must simplify complex scientific information for a more general audience. Singh's take on this was that his edit did not substantially change the meaning, and thus did not misquote the speaker; more importantly, the final words conveyed the most important part of the meaning (i.e., numbers, not the specific type of numbers). Though I'm not personally comfortable with the ethics of his choice (changing a quote) and don't consider the change to have been necessary, I have no problem whatsoever with the larger principle of conveying only the key information to a non-technical audience, even if this requires more simplification than a scientist or a purist like me might prefer.

Case studies of publishing in special environments

Nicoletta Bleiel (nickyb@componentone.com) started this session with a talk on the use of templates as an aid in single sourcing. If you've already grappled with the issue of standardization, much of the material in this presentation will be familiar and relevant to you. Bleiel made some strong points that I've seen escape many other technical communicators. For example, even the best-designed template in the world may not be quite as clever as you'd like to think; the only way to tell for sure is to test the template using real sample text—an often neglected step.

David Caruso (dcaruso@cdc.gov), a health communications specialist with the National Institute for Occupational Safety and Health (http://www.cdc.gov/niosh/), spoke about his efforts in risk communication. Caruso's challenge was to present a wide variety of safety-related technical information to a widely varied audience in the mining industry. He faced many constraints in this work, including the need for multiple modes and media of communication (e.g., journals for mining engineers vs. videos for miners) and the ubiquitous problems of "my subject-matter expert (SME) is too busy to speak to me" and shrinking budgets. Many of the SMEs he works with must publish in peer-reviewed journals and present at professional conferences to advance their own careers, yet the audience that actually benefits from their research (miners) will rarely if ever have access to this information. Thus, alternative means of information dissemination were required. At the time, NIOSH produced many printed reports and some very old-fashioned VHS videos that contained good information, but which clearly did not excite their audience enough to be used routinely; moreover, the reports and videos were packaged separately and often became separated. His solution was packaging that kept the video and text together, following the commercial model of the time (think Disney videos): an oversized VHS case with a space for printed materials. This has since been replaced by the standard Amray DVD case, which has clips to hold printed materials alongside the disc.

Caruso found it challenging to obtain the necessary content within the constraints of the NIOSH research project framework, and to convince SMEs that the technical content would not be unacceptably diluted by the need to simplify it for a non-SME audience. Thus, there was considerable need to educate SMEs about audience needs, basic communications principles, and information design, and to gain enough credibility with them to permit a change to the new approach. I won't go into more detail, since I'm hoping to coax Caruso into contributing an article for this newsletter, but his initial results have been highly promising.

Gluing their eyes to your screen

STC Fellow Karen Schriver (kschriver@earthlink.net) has gained a well-deserved reputation for performing insightful, comprehensive research and literature reviews, and presenting that research in a format that's palatable to notoriously theory-averse practitioners. Her talk built on her successful 1997 book, Dynamics in Document Design, by exploring how more recent research has improved our understanding of best practices in visual design. Some key points to keep in mind:

A viewer's emotional response to a design precedes their interpretation of the design, and is almost instantaneous (it takes about 50 milliseconds). For example, if a page looks dense and intimidating, viewers may be strongly demotivated to keep reading. This initial impression (positive or negative) can be difficult to change, and tends to be very consistent across a range of viewers; thus, it's really true that "you never get a second chance to make a first impression".

Writing for the Web—2007 edition

STC Fellow Ginny Redish (ginny@redish.net, http://redish.net/) reviewed the published literature and her own research on "best practices" in writing text for Web sites. Her presentation focused initially on the topic of designing a Web page to increase the ease of finding information, and continued with information on increasing the effectiveness of a given presentation once the information has been found and will be read.

Redish started by reminding us of the need to put the audience first: an effective site focuses on the most important things visitors have set out to accomplish, not necessarily on our goals for creating the site. We often forget that their goal is never "visit our Web site"; instead, the goal is to accomplish something specific. As Redish noted, "People don't come to the Web for documents: they come for information." In this context, good content can be defined as content that is easy to find and reach, that answers the reader's questions, and that thereby solves their problem. Readers also prioritize currency (information that is updated with an appropriate frequency), ease of use, and high quality. For any Web page that is time-sensitive, it's important to assign an owner who will ensure that the page remains up to date—or who will remove it from your site when it is no longer relevant.

People achieve their goals by scanning, skimming, and selecting what to read in more detail: statistics show that people spend an average of 25 to 30 seconds on a home page, 45 to 60 seconds on interior pages reached from a home page, less than 2 minutes in total before deciding whether to abandon a site or keep looking, and about 4 minutes actually doing the reading once they find what they're seeking. (These data come from Nielsen and Loranger's 2006 Prioritizing Web Usability.) Clearly, we don't have much time to attract and hold their attention! Readers often won't scroll below the bottom edge of the screen unless we give them a good reason to do so; thus, it's important to keep as much of the important information as possible in the first screenful, at least until we reach a page designed for extended reading. Pathway pages take advantage of this principle: they serve the primary goal of getting people to their next destination in a hurry, and should thus emphasize information that accomplishes this goal. Experience has increasingly shown that we shouldn't expect people to read extensively before they reach their goal, and that we must design to support browsing instead. Redish's new book Letting Go of the Words (www.redish.net/writingfortheweb) will be of particular interest to those of us who are word geeks yet must also design Web pages: it provides many strategies for minimizing the quantity of text and thereby increasing the effectiveness of your Web content.

Redish noted an important distinction between information on the Web and in print: even though readers always interact with text to some extent, reading on the Web is much more clearly a dialogue between the reader and the Web site because the Web site responds to questions (i.e., by letting us follow links or use search tools). This suggests that the kind of thought process used in think-aloud usability studies can be applied to Web design: if you can understand the kind of question a reader will ask, you can write to answer that question and the questions that follow logically from that first answer. In such an interaction, it's important to "give people the scent", a metaphor borrowed from the use of bloodhounds to track people: in the context of the Web, the "scent" is the ensemble of clues that reassure browsers they're following the right path and should keep going. People will keep browsing and clicking and scrolling only so long as they're confident they're headed in the right direction. One complication in such dialogues is that you're not there to explain and revise your text; thus, you must carefully manage the reader's initial interpretation: readers actively guess what you mean when they read headings and links, and those guesses are often biased by their current mental state.

One commonly advocated best practice for Web design is to turn titles into links rather than using "click here" links; thus, online titles must be clearer than their on-paper equivalents, particularly if there won't be much context beyond the title to convince a reader to follow a link. This means that greater care is required when choosing link text. "Hover text" that appears when you hold the mouse cursor over a link can provide additional information that clarifies the meaning of the link, but because the presence of this text is not obvious and because displaying it requires an additional step, it's not as effective a solution as choosing a clear title in the first place. The choice of link text can be even more important for certain audiences. For example, blind users "scan with their ears", and sometimes choose to "display" (i.e., read aloud) only the links on a page when using screen-reader software. (Interestingly, this parallels the skimming approach used by sighted readers, but through a different sense.) Thus, links should be distinct: they should not all begin with the word "link", and should emphatically not all say "click here".

Copies of the slides from this presentation can be obtained from the STC Web site: <http://www.stc.org/54thConf/sessions/sessionMaterials05.asp?ID=67>.

Information architecture for mobile devices

Ever-popular speaker Bogo Vatovec (bogo_stc@bovacon.com, http://bovacon.com/) discussed some of his research and design work for portable devices such as cell phones. Given the small screen size in much scientific and technical equipment, and the increasing use of portable devices such as handheld dataloggers, his presentation provided some potentially important lessons for scientific communicators.

Challenges faced by designers start, most obviously, with the small screen size: the devices wouldn't be portable if they had a large screen! But designers also face challenges that arise from differences in keyboard features and operating systems. Worse yet, market forces lead to endless revisions and updating of portable devices to include the latest and greatest features. So even within a given company's product line, you may have severe backward-compatibility issues between versions of the same hardware. Memory and processing power are also severely limited, even for current-generation equipment, so minimalist design becomes especially important.

Because we rarely have any control over the display device, except in the limited situation of dedicated hardware and software combinations, we must emphasize content over form. This may mean, for example, that we must adopt "fluid design" principles, such as designing text to reflow to fit the screen size rather than hardwiring text and layout for a specific screen size. Our lack of control over the output device, combined with the relatively low processing power of these devices, means that some products simply won't work on a portable device: there will be no HD-ready Flash animations available for your cell phone any time soon, particularly if the file must be downloaded over an erratic cell connection rather than uploaded to the device over a fast USB connection. Where graphics are important, they must be redesigned to fit the screen; resampling is rarely satisfactory, particularly when middleware (software that lies between a server and the portable device) makes its own attempts to optimize a graphic. For some special-purpose applications, it may be better to write your own software for the mobile instead of relying on built-in software (such as Web browsers) so that you can control both the content and the presentation.
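As a minimal sketch of the "fluid design" idea above (my own illustration in Python, not anything Vatovec showed; the sample text is invented), the same content can be re-wrapped to whatever line width the target screen supports rather than hardwired to a single layout:

```python
import textwrap

# Hypothetical content; in practice this would come from your single source.
content = ("Check the pressure gauge before starting the pump, and record "
           "the reading in the maintenance log.")

def reflow(text: str, chars_per_line: int) -> str:
    """Re-wrap the same content to fit the width the device reports."""
    return "\n".join(textwrap.wrap(text, width=chars_per_line))

print(reflow(content, 60))  # a desktop browser window
print(reflow(content, 20))  # a small phone screen: same words, new layout
```

The content itself never changes; only its presentation adapts to the device.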

Because portable devices do not permit large pages with multiple columns, it becomes important to abstract information into its simplest functional components, and design around those components so you can more easily remap the information onto the available screen space—essentially, all we have available is a one-column display that supports simple text better than complicated layouts. In converting tabular or multi-column information from a typical, overcrowded Web page, we may find that it's necessary to convert each cell or column in the grid into a single screen on the mobile device. Sometimes the content must be entirely different because it's not possible to map a single chunk of content between two radically different uses; the in-office and on-the-road tasks and the contexts in which those tasks are performed may simply differ too greatly. This suggests that some information can be single-sourced, but other information must be custom-designed for mobile use. Although tools exist to automatically convert content between in-office and mobile devices, they don't yet work well or consistently (particularly if the designer did not think of the possibility of using the information on a mobile device), and may not do so for a long time to come. Some things convert better than others: a paragraph of text works well, for example, but a complex home page will probably have to be redesigned from scratch.
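To make the cell-per-screen remapping concrete, here's a hedged Python sketch (again my own, using invented datalogger data) that flattens a multi-column grid into a sequence of single-column screens:

```python
# A two-column comparison table as it might appear on a desktop Web page.
table = {
    "Datalogger A": {"Range": "-40 to 85 °C", "Battery life": "2 years"},
    "Datalogger B": {"Range": "-20 to 70 °C", "Battery life": "6 months"},
}

def to_mobile_screens(table: dict) -> list[str]:
    """Remap each cell of the grid onto its own single-column screen."""
    screens = []
    for row, columns in table.items():
        for column, value in columns.items():
            # The "row > column" heading doubles as a breadcrumb for context.
            screens.append(f"{row} > {column}\n{value}")
    return screens

for screen in to_mobile_screens(table):
    print(screen)
    print("---")  # each chunk would occupy one small screen
```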

Before developing or repurposing information for a mobile device, it's important to think carefully about your goals and the constraints that might stop you from reaching those goals. Creating a business case is a useful way to support this thought process because it forces you to consider whether your ideas are economically feasible. If you are forced to create two entirely separate sets of information, that can effectively double the effort required to generate and maintain the information. (Vatovec did not say "double"; I chose that word to dramatize the potential magnitude of the cost increase, and the actual increase may be even greater.) If you decide to proceed anyway, you must be confident there are enough users to justify the high cost of creating and maintaining a new body of information, particularly if you must support a diverse range of mobile devices. Here, more than in many other applications, you must have a deep understanding of the task you're designing the information to support and the context in which that task will be performed. Then, given the limitations of a typical portable device, you must design based on the absolute minimum amount of detail that will support that task; whereas superfluous details can be tolerated on a 17- or 19-inch monitor, they must be ruthlessly eliminated for a portable device. The HTML Dog site (http://HTMLdog.com) offers a report on how CSS and HTML work on a wide range of mobile devices, including different models and versions of a given device, and is a great resource for this kind of design project.

It's important to remember that portable devices are much less usable than office computers: the input devices are smaller and harder to use, and the small screen size creates a need for considerably more scrolling and clicking. Where possible, you can minimize the burden on the user by identifying the most likely choices and using them as default values, and choosing options such as pick-lists to minimize the need to type. Providing clear feedback on the status of an operation is even more important than on an office computer, since the response latency may be much higher for mobiles due to a combination of low CPU power and low bandwidth for transferring data. Bandwidth constraints also suggest that it may be more efficient to download relatively large pages and use scrolling rather than breaking the information into many chunks, each of which must be downloaded separately; however, we face a tradeoff because older devices may have insufficient memory to receive a single longer page, forcing us to break information into multiple pages.

Because the small screen provides much less context, it's easier to forget where you are, greatly increasing the risk of becoming "lost in cyberspace". Thus, it helps to repeat navigation elements so that users don't need to scroll back to the top of a page just to remind themselves where they are, and to make the context explicit (e.g., by using breadcrumbs). In this setting, a zoom-in architecture (with few options per screen, but a deep hierarchy of screens reached from those few options) works better than the broad, shallow architecture that has been proven to work well on the Web. Flowcharts are a great way to test such an information architecture before you actually try to implement it.

Information retrieval

Kristine Henke (khenke@us.ibm.com) and Korin Bevis (kbevis@us.ibm.com) talked about developing ways to organize and categorize information so it will be easier to find. They noted that search tools are becoming the premier means of finding information online, but that people still do navigate using hierarchies. To support both approaches, they talked about classifying information to support progressive refinement of a search, gradually narrowing down the list of hits, or (when following a series of links) to ensure that the hierarchy is clear throughout the navigation. If you've studied biology before, you can think of this as akin to working your way through a dichotomous key to identify an organism. (And if you're not familiar with keys, here's one useful primer: http://en.wikipedia.org/wiki/Dichotomous_key).

Good classification begins with an appropriate division of information into higher-level categories. A familiar model for online help might be a division of topics into tasks, concepts, reference material, examples, and scenarios. Each of these categories can then be subdivided into subcategories, and then into further subcategories, eventually reaching a single topic or small group of topics, with each topic presenting only one idea. Consistency in defining the various categories is an important part of making this approach usable. For example, all tasks could begin with gerunds (printing, finding, saving, etc.), whereas all examples could begin with the word "Example:" and all references could be written as noun phrases. This consistency helps readers recognize more quickly where a given topic belongs among the categories. Topics can also be organized hierarchically into "topic collections". The problem with hierarchies that aren't based on a taxonomy is that even when they're logical and defensible, they may simply not match the hierarchy assumed by users of the information.

Taxonomies help, since they clarify the relationships among categories and subcategories well enough that users can figure out where they are and where they're going. (This clearly relates to the concept of "scent", discussed earlier in this article under the Redish presentation.) A good taxonomy requires clear parent–child relationships, and consistent use of controlled terminology to assist in the recognition of those relationships. For example, you could classify animals as pets vs. non-pets, subclassify pets into cats and dogs, then subclassify cats into long-haired and short-haired. You can design a hierarchy to support a specific purpose, borrow and modify someone else's hierarchy, or evolve a hierarchy based on the actual content. At IBM, Henke and Bevis and their colleagues chose the last of these approaches, which required careful analysis of the details of every document in the collection being classified so as to create mutually exclusive subject groups (also called "facets"). As I listened to their description, it occurred to me that such an analysis resembles the thought process underlying indexing in many ways, and must be similarly systematic and consistent—indeed, indexers can provide good guidance in this analysis because they have considerable expertise in this kind of analytical process.
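As a sketch of what such a structure might look like in practice (the pet example is from the talk; the Python is my own hypothetical illustration), each term only needs to record its parent to support both classification and the breadcrumb-style "scent" mentioned earlier:

```python
# Each term points to its parent; None marks the root of the taxonomy.
taxonomy = {
    "animal": None,
    "pet": "animal",
    "non-pet": "animal",
    "cat": "pet",
    "dog": "pet",
    "long-haired cat": "cat",
    "short-haired cat": "cat",
}

def path_to_root(term: str) -> list[str]:
    """Return the term and all of its ancestors, nearest first."""
    chain = [term]
    while taxonomy[term] is not None:
        term = taxonomy[term]
        chain.append(term)
    return chain

# A breadcrumb built from the taxonomy:
print(" > ".join(reversed(path_to_root("short-haired cat"))))
# animal > pet > cat > short-haired cat
```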

Once you have an initial collection of topic keywords, it's important to begin testing them with future users of the taxonomy: apart from ensuring that your taxonomy will be usable, it's also much easier to revise a taxonomy at this early stage than after the information has been classified, because reclassifying information requires re-examining every document to account for the proposed changes. Usability experts can help in this phase of your analysis, since they have considerable expertise in identifying common problems. However, before taking up anyone else's time, it's important to do your own tests so you can eliminate the problems that are only obvious once you've created them. Parsing a taxonomy (thinking through the meaning of the relationship between two levels in the hierarchy) is a useful tool for accomplishing this. Using the animal example above, you could parse the hierarchy as follows: "A short-haired cat is a kind of cat, a cat is a kind of pet, and a pet is a kind of animal." All information in the hierarchy must follow a similar logic; illogical results identified by this type of analysis suggest a problem with the hierarchy. Taxonomies should also permit inheritance: if you define something as a cat, it can automatically be classified as a pet and an animal. As in all usability testing, it's important not to focus exclusively on your own artificial tasks: real users inevitably reveal things that you've missed or misunderstood. When testing their performance, pay close attention to the point at which they would give up on a search: real-world users give up long before people who know they are being tested and watched.
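Both the parsing test and the inheritance rule can be checked mechanically against a parent–child structure like the one sketched above; this simply continues my earlier hypothetical Python sketch rather than describing any tool the speakers mentioned:

```python
def parse(term: str) -> str:
    """Read the hierarchy aloud, as in the parsing exercise above."""
    sentences = []
    while taxonomy[term] is not None:
        parent = taxonomy[term]
        sentences.append(f"a {term} is a kind of {parent}")
        term = parent
    return ", ".join(sentences)

def inherits_from(term: str, ancestor: str) -> bool:
    """Inheritance: anything classified as a cat is also a pet and an animal."""
    return ancestor in path_to_root(term)

print(parse("short-haired cat"))
# a short-haired cat is a kind of cat, a cat is a kind of pet,
# a pet is a kind of animal
assert inherits_from("short-haired cat", "pet")  # inherited automatically
```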

This kind of hierarchical taxonomic classification works best with software that lets you progressively narrow down your results by repeating the search within each set of results. For example, after finding all animals in your initial search, you could narrow your search to include only animals that are pets, then only pets that are cats, then only short-haired cats. The taxonomy you have developed can also be used to create the kind of expanding and contracting tree structure found at most online stores: as you select a category, the site automatically presents all relevant subcategories, allowing you to dynamically narrow your options until you obtain a manageable number of products to compare. An advantage of this approach is that it facilitates backtracking if you've gone too far down the wrong path. In such structures, displaying status indicators such as "page 2 of 12" or "203 hits" helps users recognize when they have narrowed their results enough that they can stop searching and start reading. The granularity of the classification (how many subclassifications are permitted or required) depends both on the nature of the content and how it will be accessed. At very fine levels of granularity, where the hierarchy can grow fairly deep, it's helpful to ensure that the context is clearly visible (e.g., use breadcrumbs to show the path followed to reach a certain point).
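The progressive narrowing described above maps naturally onto repeated filtering of a result set. Continuing the same hypothetical sketch (the documents below are invented for illustration), each pass keeps only the hits that fall under the chosen facet and reports a running count, much like the "203 hits" indicator:

```python
documents = [
    {"title": "Caring for long-haired cats", "category": "long-haired cat"},
    {"title": "Choosing a dog breed",        "category": "dog"},
    {"title": "Basic pet first aid",         "category": "pet"},
    {"title": "Short-haired cat grooming",   "category": "short-haired cat"},
]

def narrow(hits: list[dict], facet: str) -> list[dict]:
    """Keep only the hits classified under the chosen facet."""
    kept = [hit for hit in hits if inherits_from(hit["category"], facet)]
    print(f"{facet}: {len(kept)} hits")  # status indicator for the user
    return kept

hits = narrow(documents, "animal")       # 4 hits
hits = narrow(hits, "pet")               # 4 hits: everything here is a pet
hits = narrow(hits, "cat")               # 2 hits
hits = narrow(hits, "short-haired cat")  # 1 hit: time to stop searching
```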

Maintenance is also an issue. Taxonomies must be reviewed periodically to ensure that they remain relevant and correct, to account for new information that doesn't fit into the existing categories, and to revise those categories when necessary. Long-term monitoring of how your taxonomy is used will also reveal whether users are learning to use it, or whether problems you hoped would disappear are stubbornly persisting, possibly suggesting the need for a redesign. Documenting your decision processes and the decisions that resulted from them will help future workers understand what you've done and how to use that knowledge to classify new information or revise the taxonomy.

A call for papers!

I started this essay with the complaint that there wasn't much specifically related to scientific communication at the conference. In my view, there's only one way to fix this: for us to participate. If you're interested in speaking at next year's conference, drop me a line and tell me which topics you'd be interested in presenting on; if you're only interested in attending someone else's presentation, send me a list of topics that would encourage you to attend the conference and I'll try to find suitable speakers. The process for selecting the conference program has changed radically in the few years since I served on the program committee, so I can't make any promises; proposals are still screened in (or out!) based on their quality, but the program committee also excludes otherwise-suitable topics if they simply don't fit into the larger theme of the conference or the overall "user experience" the committee hopes to achieve for a particular theme (such as usability).



