
Accentuate the negative: obtaining effective reviews through focused questions

by Geoff Hart

Previously published, in a different form, as: Hart, G.J. 1997. Accentuate the negative: obtaining effective reviews through focused questions. Technical Communication 44(1):52–57.

Abstract

How you ask a question strongly determines the type of answer that you will obtain. For effective documentation reviews, conducted either in-house or as part of usability testing, it's important to use precise questions that will provide concrete information on which to base revisions. This paper proposes an approach to obtaining useful feedback that emphasizes negative, "what did we do wrong?" questions. This approach focuses limited resources on areas that need improvement rather than on areas that already work well and that don't require immediate improvement.

Introduction

It's well known that how you ask a question can determine the answers you obtain; indeed, entire textbooks have been written on the subject (e.g., Sudman and Bradburn 1982). For technical communicators, this phenomenon has an important influence on how we control the quality of our communication and how we obtain the feedback necessary to improve this quality. Most organizations use peer or other reviews to improve the quality of their publications, but for effective reviews of documentation or reports, you must ask questions whose answers direct you efficiently towards improvements.

Traditional review forms such as reader-response cards are typically based on multiple-choice questions or numerical rankings of various factors (e.g., the quality of the illustrations). These questions provide positive, "feel good" feedback ("our documentation received a good rating") as well as an apparently quantitative evaluation of the quality of the product ("we scored 8 out of 10 on user satisfaction"), but offer no objective basis for improvement. A purely numerical or multiple-choice approach can only identify the problem areas, not the causes of those problems. In addition, the approach fails to detect important details such as the ensemble of minor problems that together undermine successful communication or problems that readers eventually solved, but only after spending some effort to come up with a solution.

Results provided by the contrasting approach, which is based largely on "essay questions", can be difficult to analyze objectively because the responses are unconstrained, subjective, and highly variable in their context and their intent. As a result, compiling and analyzing the data is time-consuming. Moreover, carelessly framed questions make it easy for reviewers to answer simplistically without providing suggestions that can lead you towards improvement. For example, asking "is the writing clear and efficient?" encourages reviewers to answer "yes" or "no" and write nothing further; I've encountered this behavior so often that I've learned to rely on direct, personal communication and explicit instructions when I need to ensure a good peer review.

Even if a question specifically prevents simple answers, many reviewers will provide a vague, qualitative response that is too general to identify specific examples of problems. Essay questions also require more work from reviewers, who must write a response instead of simply checking off a box or circling a number on the review form. Since most reviewers are too busy to provide a full, detailed review, they will generally take any opportunity to minimize their work.

The most productive use of limited time and personnel involves solving problems, not trying to marginally improve parts of a manuscript that are already satisfactory. In this sense, the most useful approach involves asking questions that concentrate only on the problems, the "negative feedback" I alluded to in the title of this paper. To do so, you must ask readers to identify anything that interferes with their use of the information. In this approach, you don't worry about what you're doing right, because you don't have to improve this as urgently as you need to fix the things you're doing wrong. Accentuating (emphasizing) the negative has three overwhelming advantages:

  1. It builds a sense of partnership with the reviewers by focusing on their needs (by specifically asking "how did we make things difficult for you?") rather than on your own needs.
  2. It concentrates your attention on problems (i.e., things that require improvement) rather than on things that readers don't consider to be serious.
  3. Each reply provides the basis for a concrete revision to the manuscript or even to your style manual, so that you won't make the same mistake in future efforts.

Accentuating the negative is a commonsense approach precisely because it concentrates limited resources (time and personnel) on correcting the most serious problems. Since reviewers almost inevitably detect problems more readily than they detect examples of effective communication, this strategy draws on that strength while simultaneously focusing it on areas that are important to you. Metaphorically, a negative approach fills in the potholes and levels the bumps in the road to understanding rather than trying to determine how smooth the road is or to perfect the smooth parts of the road. The end result is communication that may have no perfect components, but that lacks serious impediments to comprehension. Subsequent revisions can aim for perfection.

Designing negative questions

To keep this paper short while still illustrating the power of the approach, I'll provide a few examples that address the three main components of printed information (text, illustrations, and page layout). A far more extensive list is certainly possible; for example, it should be possible to ask a question for each rule in your current style guide, and existing audience information might lead you to additional questions. I've chosen sample questions solely to illustrate the approach well enough that you can design more specific questions that address your unique needs. Comparable questions are certainly possible for online information or, indeed, for any information design exercise. With each example, I've also provided parallel examples of ineffective traditional questions to illustrate the difference in the sort of information that you would collect.

It's interesting to note that the sample questions I've provided, in addition to focusing on identifying problems rather than assigning a rating, are all framed in the form of instructions. Strictly speaking, you might wonder whether these are true questions at all, since they lack a terminal question mark. Practically speaking, each sentence seeks an answer, and this is the goal (if not the literal definition) of a question.

In the examples, I've also omitted the word "please" at the beginning of each question, because over-repetition of such words begins to appear ingratiating rather than solicitous. In an actual questionnaire, you could retain a polite tone by including a short cover letter that explains the importance of the reviewer's feedback. For short questions, you could introduce subsets of the list of questions with an introductory phrase such as "please answer...".

Questions about text

Questions about illustrations

Questions about page layout

Applying the principle

Each of the preceding sample questions produces an answer that you can act upon, which is the final test of a question's success and usefulness. To be truly useful, however, the examples must also lead to an understanding of how to design questions that produce quantifiable results. Following a specific structure can greatly improve the effectiveness of your questions:

  1. Start by defining your purpose specifically and clearly enough that you can frame a question. The last question in the previous section had a simple goal: to identify legibility problems that could be fixed by layout changes.
  2. Define a prototype question that appears to provide an appropriate answer.
  3. Ask several colleagues to answer the question. When they're done, evaluate the answers. You can greatly improve the effectiveness of this step by first explaining what information you hope to collect and then asking your colleagues how they could deliberately misinterpret the questions and thus provide useless answers.
  4. Determine whether you collected the answers you need. If any significant proportion of the answers fails to serve the purpose you defined in step 1, then you must reword the question. Repeat steps 3 and 4 until you get a sufficiently high proportion of useful answers. (The size of this proportion is up to you to determine; see the sketch after this list.)
  5. If another goal is to provide a quality metric, evaluate whether the answers generated in step 4 can provide a quantitative measure of quality or an improvement in quality. An answer may be useful to you if it provides an indication of how to change your practices, but useless to your manager if there is no way to prove that the changes were effective.
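
As a minimal sketch of step 4, you could tally which pilot answers actually served the purpose defined in step 1 and compare that proportion against a threshold of your choosing. In the Python sketch below, the pilot answers and the 70% threshold are purely hypothetical.

    # Sketch of step 4: decide whether a prototype question needs rewording.
    # The pilot answers and the 70% threshold are hypothetical examples.

    pilot_answers = [
        {"reviewer": "A", "serves_purpose": True},   # named a specific legibility problem
        {"reviewer": "B", "serves_purpose": False},  # answered only "looks fine to me"
        {"reviewer": "C", "serves_purpose": True},
        {"reviewer": "D", "serves_purpose": False},
        {"reviewer": "E", "serves_purpose": True},
    ]

    THRESHOLD = 0.70  # the "sufficiently high proportion" is yours to choose

    useful = sum(1 for answer in pilot_answers if answer["serves_purpose"])
    proportion = useful / len(pilot_answers)

    print(f"{useful} of {len(pilot_answers)} answers were useful ({proportion:.0%})")
    if proportion < THRESHOLD:
        print("Reword the question and repeat steps 3 and 4.")
    else:
        print("The question is ready to use.")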

This process is clearer in the form of an actual example. My employer conducts research to solve operational problems for the forest industry, and publishes reports that contain the results of our research and our recommendations on how to proceed. Certain of our reports reach our audience with an enclosed reader-response card. Under several headings (e.g., "writing style"), the cards provide three possible responses: good, average, and poor. This card was designed solely to alert us to any gross problems and to provide readers with an opportunity to vent, and was not intended to collect specific feedback; in fact, the cards generate a very low level of response and no specific suggestions for improvement, although we have received reassuringly few "poor" responses and a gratifying number of "good" responses.

Recently, we decided to conduct a more focused survey of our readership. Although Management restricted the survey to obtaining a broad initial evaluation of our effectiveness and to identifying problems, our communications team managed to insert some more pointed questions that would provide more detailed feedback on where we should focus our efforts. We broke up the evaluation of "writing style" into several more focused categories: writing style, level of detail, clarity of results, and a separate rating of quality for each report component (e.g., abstract, research methods, conclusions). In addition to asking about quality, we also asked respondents to tell us how important each category was to them. To conclude, we asked an open-ended question that let them identify any aspects of our publications that they wanted us to improve.

We're still analyzing the results of the survey, but overall, 80% or more of respondents rated the various aspects of writing style as "excellent" or "good", and no aspect was rated as less than "satisfactory". Moreover, 25% felt that there was no need to do anything to improve report quality. However, 14% suggested that we should state our conclusions more clearly. Since 99% of respondents rated the conclusions as very or quite important, versus (for example) only 73% for the methodology section, this suggests that we should devote more of our improvement efforts towards the conclusions than towards the methodology. In the upcoming second phase of our survey, we hope to contact respondents to discover what specific aspects of the conclusions we should improve. We will probably not ask about how to improve our methodology section, which has an acceptable rating and a much lower level of importance.
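
As a minimal sketch of the arithmetic behind that kind of prioritization, you could weight each component's demand for improvement by its importance to readers. In the Python sketch below, only the 99%, 73%, and 14% figures come from the survey described above; the methodology "needs improvement" share and the scoring rule itself are assumptions made purely for illustration.

    # Sketch: prioritize report components by combining how important readers
    # say a component is with how often they ask for it to be improved.
    # Only the 0.99, 0.73, and 0.14 figures come from the survey described
    # above; the 0.05 figure and the scoring rule are hypothetical.

    components = {
        # component: (share rating it very/quite important, share asking for improvement)
        "conclusions": (0.99, 0.14),
        "methodology": (0.73, 0.05),  # 0.05 is a placeholder value
    }

    # Assumed scoring rule: importance multiplied by demand for improvement.
    priority = {name: importance * needs_work
                for name, (importance, needs_work) in components.items()}

    for name, score in sorted(priority.items(), key=lambda item: item[1], reverse=True):
        print(f"{name}: priority score {score:.3f}")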

The review cycle

Early in the document development cycle, general (overview) reviews can help you to identify and resolve broad, general problems. As you move closer to the final product, however, reviews must become progressively more specific. Since each reviewer brings a unique perspective or philosophy to the review process, there is no guarantee that you will get the results that you desire if you simply hand over a manuscript and ask for comments. One important review strategy is to explicitly ask reviewers to look for certain specific types of problems. Typical types of review include technical reviews (are all the facts correct?), stylistic reviews (is the manuscript easy to read?), and editorial reviews (are spelling, punctuation, and grammar correct?). In addition to targeting the review more precisely to meet your specific needs, restricting the scope of a review reduces the amount of work that the reviewers must perform; for example, if an editor will review grammar and style, say so and tell the reviewers that they needn't waste their valuable time second-guessing the editor. Better still, edit the review copies first so that few such problems will remain to distract reviewers.

To further minimize the amount of work that reviewers must do, fill in as much information on the review questionnaire as possible before you submit the manuscript for review. This lets reviewers concentrate on finding problems rather than filling in bookkeeping information. For example, preprint the title of the manuscript, the date, and the name of the reviewer directly on the questionnaire; it's surprising how many reviewers forget to supply this information. Print this information on each page in case the reviewer inadvertently separates the pages.

For questions that address layout issues, suggest that reviewers photocopy any pages on which they identify problems, write numbers in the margin to identify the location of the problem, and write the number and its explanation on a separate sheet of paper. Since most near-final layouts provide inadequate room to write comments, this lets reviewers explain the problem at whatever length they consider appropriate rather than trying to fit a terse explanation into limited space. For online information, a comparable problem arises when the reviewer's software doesn't permit direct annotation of the screen display; here, the solution would be to provide a printout of each screen so that reviewers can mark their comments directly on the printout rather than having to spend time describing the screen so that you'll understand which one they're referring to.

Using the results

Every negative comment that you receive indicates a concrete problem (or problems) that you can analyze and take steps to resolve. Comments such as "I don't like turning my head to read, so label the vertical axes of graphs with horizontal text rather than vertical text" and "say slope instead of grade because the latter is too technical" form the basis for an effective style guide for future documents.

Since modern quality improvement techniques emphasize the use of quality metrics (quantitative measures of some factor that influences usability), it's important to develop a strategy that lets you create such metrics and use them to track your progress towards improving quality. At the beginning of this paper, I asserted that purely numerical ratings, though obviously quantitative, fail to provide specific suggestions for improvement, but the corresponding problem with my sample questions is that answers based on words are inherently non-numerical. One approach that resolves this problem is to assign each response to a specific category of comment, and to count the frequency of comments in each category. A quality improvement occurs if the number of problems in a given category decreases from one version of your information to the next.
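
As a minimal sketch of that counting approach, assuming each free-form comment has already been assigned to a category by hand, you could tally the categories and compare the tallies between versions; the categories and comments in the Python sketch below are hypothetical examples.

    # Sketch: turn categorized comments into per-category counts, then compare
    # counts between two versions of the same document. The categories and
    # comments below are hypothetical examples.

    from collections import Counter

    def tally(categorized_comments):
        """Count how many comments fall into each problem category."""
        return Counter(category for category, _comment in categorized_comments)

    version_1 = [
        ("unfamiliar word", "what does 'grade' mean?"),
        ("unfamiliar word", "'stand tending' was new to me"),
        ("graph labels", "I had to turn my head to read the vertical axis"),
    ]
    version_2 = [
        ("unfamiliar word", "'silviculture' was not defined"),
    ]

    before, after = tally(version_1), tally(version_2)
    for category in sorted(set(before) | set(after)):
        trend = "improved" if after[category] < before[category] else "not improved"
        print(f"{category}: {before[category]} -> {after[category]} ({trend})")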

Consider an explicit example based on one of my sample questions, "identify any words that were unfamiliar to you", and the kind of report to the quality committee that might result.
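
As a minimal sketch of how the two metrics in such a report could be tallied, the Python fragment below uses hypothetical reviewer responses and a hypothetical page count rather than figures from any actual review.

    # Sketch: the two metrics for "identify any words that were unfamiliar to you".
    # The responses and page count are hypothetical placeholders.

    responses = {
        "reviewer 1": ["grade", "stand tending"],  # unfamiliar words this reviewer listed
        "reviewer 2": [],                          # no problems reported
        "reviewer 3": ["grade"],
        "reviewer 4": [],
    }
    PAGES = 40  # length of the reviewed manuscript

    reviewers_with_problem = sum(1 for words in responses.values() if words)
    total_reports = sum(len(words) for words in responses.values())

    pct_reviewers = reviewers_with_problem / len(responses)  # how common among readers
    reports_per_page = total_reports / PAGES                 # how common among writers

    print(f"{pct_reviewers:.0%} of reviewers listed at least one unfamiliar word")
    print(f"{reports_per_page:.2f} unfamiliar-word reports per page")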

I used two quantitative metrics in this example: the percentage of the reviewers who detected the problem suggests how common the problem is among our readers, and the frequency of the problem per page tells how common the problem is among our writers. A common problem for readers tells us that we must change our style in that area, whereas a common problem for the writers tells us that we must adjust our editing practices. In both cases, there is not only a quantitative measure of our success in resolving the problem, but also a specific solution to implement.

Asking for negative feedback doesn't completely replace other approaches to assessing and improving usability. Watching someone use documentation in a usability lab can provide critical information that reviewers might not otherwise remember to record while focusing on the manuscript itself. For example, direct observation may reveal that the documentation doesn't lie flat (e.g., the book keeps closing, thus losing the user's place), that the documentation is too large to hold comfortably and must instead be left on the desk, and that the reviewer must glance back and forth frequently between the book and the computer screen. The latter is also a clue that some of the printed information should become online information so that it will be visible simultaneously with the screen that it describes.

One potentially serious drawback of this approach is that it won't provide many positive comments. If one goal of your review process is to identify the value that you have added to the publishing process by creating more usable documentation, collecting only negative responses will provide a misleading impression. Taken out of context, this could get the entire documentation team fired because senior managers see only an overwhelmingly negative report without understanding that you intentionally excluded any positive comments. Since your quality improvement efforts may be evaluated by someone unfamiliar with your review approach, and since a steady diet of negative comments can lower your own morale, you should also try to collect a few positive comments. For example, you could include traditional "how highly would you rate this product (1 = excellent, 10 = unacceptably bad)?" questions at the end of the review forms, as well as generic "what did you like about this manuscript?" questions. As always, define your questions based on the purpose they will serve.

We've all been told at some point not to dwell on the negative, but in the context of obtaining effective reviews, accentuating the negative will provide information that may not be easily available from any other approach. Moreover, it leads to a precise focus on the things that actually need improvement and thus offers the greatest potential for dramatic improvements in usability.

Reference

Sudman, S.; Bradburn, N.M. 1982. Asking questions: a practical guide to questionnaire design. Jossey-Bass Publishers, Washington, D.C. 396 p.

Acknowledgments

I thank Deborah Andrews and two anonymous reviewers for their important contributions to improving the quality of this manuscript.

