Editorial: Learning from Scientific American
by Geoff Hart
Previously published as: Hart, G. 2009. Editorial: Learning from Scientific American. the Exchange 16(2): 2, 7–8.
In their December 2006 editorial, the editors of Scientific American reported a vexing problem that should sound familiar to many technical communicators: like most print magazines, Scientific American has an average publication delay of about 3 months between the time an issue is "put to bed" and the time it appears in your mailbox. That delay is common to most traditional printed periodicals, but its magnitude varies among publications (newspapers, for instance, have much shorter delays). The delays arise from many factors that are common to all publishing processes, but for a glossy monthly such as Scientific American, the likely culprits are the production and distribution steps that follow the final edit: layout and proofing, printing and binding, and delivery by mail.
When part of your goal in publishing is to be timely, these delays pose clear problems, since your nimbler competitors—including daily newspapers, radio and television, and Web sites—have the option of publishing "breaking news" within hours or days. It was this problem that spurred Scientific American to a noteworthy innovation.
The news story that prompted their innovation was the discovery of a 3.3-million-year-old hominid skull, nicknamed "Selam", which caused much ado in the news media that September. Rather than waiting 3 months for their story to appear in print, when it would be "old news", Scientific American's editors chose an unusual alternative strategy: they published an early draft of the article on their Web site, where they treated it as a work in progress and invited ongoing commentary. Based on feedback about how the article should be expanded and refined for the print edition, the author (Kate Wong) was able to continue her research and produce a deeper, richer, more accurate article by the time the December issue finally arrived in mailboxes around the world.
If you occasionally produce software or other documentation, this approach should excite you. Imagine for a moment being able to post the skeletal forms of our user manuals and online help on a Web site and invite users of the software to provide ongoing feedback as the documentation set matures. Users of our product, whether timid neophytes or brash power users, would have a chance to point out holes in our documentation, highlight confusing or inaccurate writing, propose indexing keywords, and report errors. (This could easily be done during the beta-testing phase that most software undergoes.) Based on that feedback, we could produce an increasingly accurate, complete, and high-quality documentation set by the time the product was ready to ship. Best of all, working in this manner would replace the conventional one-way dictation that documentation represents with a sense of ongoing dialogue and partnership between writers and their audience. Although I chose to illustrate this approach for documentation, any other type of information (including Scientific American articles and other science writing) could undergo a similar evolutionary process.
This isn't an entirely novel concept, since the approach I've described is how most open source software (www.opensource.org) is developed. It's also the fundamental model that underlies blogs ("Web logs", in case you wondered where that word came from) such as LiveJournal, wikis such as Wikipedia, and various other emerging tools for online collaboration, such as Google Docs. The idea is nonetheless valuable because it reminds us that, as communicators, we require an audience, and that audiences are not passive sponges that exist solely to soak up our words of wisdom. On the contrary, communication always involves both a speaker and a listener, and the most interesting and effective communication involves a dialogue between both participants.
Of course, as with any elegant theory, there are potential traps we can fall into if we don't pay careful attention to the dialogue and maintain a skeptical eye. Selection bias is a particularly serious problem, since the people most likely to respond by providing feedback may not be broadly representative of our overall audience. This means that we must be very careful indeed to subject the feedback to a reality check to confirm that it is broadly useful. Ignoring that problem can lead us to design information that does a great job of meeting the needs of only a small subset of our audience: specifically, those who aggressively use the Internet to seek information about how to use products. Those people may still be a minority of our audience. Solving the problem of representativeness may require us to actively recruit representatives of each major category of user rather than relying solely on those who volunteer to provide feedback.
Version control and quality control are two other potentially serious problems. If anyone is allowed to modify the material we have produced, those who have an axe to grind or who simply desire to sow chaos can modify our information at a whim, leading to the loss of good work accomplished through previous iterations. Version control provides backups we can use to recover previous versions, but it's not trivial deciding what old information to discard and what new information to retain. A robust backup strategy is also crucial, since Web sites can be lost when the hardware that supports them is damaged, whether due to flooding during a hurricane or a fire. Sabotage, whether it represents an active attempt to target us or purely random damage inflicted by the latest Windows virus or worm, is also a possibility.
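The value of keeping previous versions is easiest to see concretely. The essay names no particular tool, so the following is only a minimal illustrative sketch (the `VersionedDocument` class and its methods are hypothetical inventions for this example; a real project would rely on a dedicated version-control system or a wiki's built-in revision history). The point it demonstrates is simply that retaining every saved revision makes rolling back a bad edit routine:

```python
# Minimal sketch of version-controlled content with rollback.
# Hypothetical API for illustration only, not a real tool.

class VersionedDocument:
    def __init__(self, text=""):
        self._history = [text]  # every saved revision, oldest first

    @property
    def text(self):
        """The current (most recent) revision."""
        return self._history[-1]

    def save(self, new_text):
        """Record a new revision without discarding earlier ones."""
        self._history.append(new_text)

    def revert(self, steps=1):
        """Discard the last `steps` revisions, restoring an earlier version."""
        if steps >= len(self._history):
            raise ValueError("cannot revert past the initial revision")
        del self._history[-steps:]
        return self.text

doc = VersionedDocument("Draft v1 of the user manual.")
doc.save("Helpful community correction.")
doc.save("Vandalized content.")  # an axe-grinder strikes
doc.revert()                     # roll back the vandalism
print(doc.text)                  # prints "Helpful community correction."
```

Note that reverting here simply deletes the newest revisions; deciding which reader contributions to keep and which to discard, as the paragraph above observes, is the genuinely hard part that no tool automates.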
Quality control requires both active monitoring of the dialogue and a means of validating new information. Well-intentioned fools can easily damage our information by "correcting" things that required no correction, thereby creating misinformation. Wikipedia was forced to implement access controls to mitigate the problem of tampering, while still retaining an ability for readers to comment on entries in the encyclopedia. Allowing readers to comment on our information without such supervision can introduce factual errors, whether through simple ignorance or active malice. Although we have not yet seen widespread industrial sabotage, in which a competitor or other adversary publishes information that attacks us, makes us look foolish, or introduces offensive and legally actionable statements on our Web site, there have been a few isolated incidents that give cause for concern. As publishers of online information, we are responsible for the consequences of that information, just as we are for our printed publications, even if someone else created the problematic information on our Web site. Thus, someone must take responsibility for vetting comments on our information and determining whether a comment is worthy of display and worthy of incorporation into the evolving body of information.
There are undoubtedly other problems with using the Internet to collaboratively evolve documentation or bodies of scientific information, and these will be discovered over time. Nonetheless, this phenomenon represents a promising new way to communicate with our audiences, and potentially a true revolution in the way we communicate. If you've tested this approach to communication, contact me and let me know about your experiences. Better still, submit a full article that describes what you've done so others can learn from your work.
My essays on scientific communication have now been collected in the following book:
Hart, G. 2011. Exchanges: 10 years of essays on scientific communication. Diaskeuasis Publishing, Pointe-Claire, Que. Printed version, 242 p.; eBook in PDF format, 327 p.
©2004–2013 Geoffrey Hart. All rights reserved