by Geoff Hart
Previously published as: Hart, G. 2012. Revising the review-and-revision process: a case study of improving the speed and accuracy of technology transfer. Intercom February:22–27.
The phrase “it seemed like a good idea at the time” tends to pop up whenever organizations embark upon process improvement initiatives and discover a choking web of formerly good ideas that, years or even decades later, have fossilized to the point that they strangle innovation and no longer match the organization’s reality. At a former employer, I discovered this problem firsthand: a review and revision process that had evolved over 25 years had become increasingly inefficient, and increasingly frustrating to both managers and their authors. As an editor, and therefore a focal point of the publishing process, I became part of a team that set out to revamp the process by critically examining every step to determine whether it continued to serve a useful purpose or should be discarded or revised to match the modern context.
The results were impressive: we reduced the time to publication for our flagship lines of reports by more than 50%, while maintaining or improving the quality of our reports. Although I can’t promise you’ll achieve equally impressive results, my description of how we went about our review will help you see ways you can use a similar process to critique your own workplace processes, while simultaneously building a new process that your colleagues will be eager to embrace.
My former employer does operational research and solution development for a range of industrial and government clients. The primary output of this research is what’s referred to as technology transfer: explaining the solutions to clients well enough that they can implement them, thereby solving problems, improving their operational efficiency, and reducing their costs. Because much of the transfer was oral, via meetings and discussions, the tangible output was the reports that documented our research results. The specific exercise I’ll tell you about focused on these printed publications. At the time of the exercise, we had recently consolidated more than a dozen different report types into only four categories. I believe that the success of this major change created an openness to consider additional changes, and inspired the review exercise.
My division employed roughly 30 researchers plus their research assistants. The communication group comprised only four people: our manager, me (as editor), a graphic artist, and a desktop publisher. We produced on the order of 60 short (4 to 20 pages) reports annually, while performing other duties such as supporting our researchers in preparing presentations, software documentation, Web pages, and other communication products. The overall research process was efficient and powerful: each year, advisory committees composed of members from all client groups identified and prioritized key operational problems, and we then developed a research program to focus on these priorities. Researchers developed solutions, communicated them orally to the primary clients who faced these problems, then documented the solutions for other clients in the form of reports. Internal and external reviewers provided quality control, and authors revised reports—repeatedly—in response to their reviews. Subsequent management reviews led to additional revision.
Many facets of a researcher’s life slowed the publication process. In addition to the time spent performing the research, much of which was performed over weeks or months, researchers met with clients, often with little notice, to discuss issues that concerned them. In addition, they attended regional and national advisory committee meetings, where they presented research results, learned client needs, and provided ongoing liaison with clients. Because our success relied heavily on this ongoing, highly personalized and hands-on approach, these activities had a higher priority than writing reports. Because the researchers were primarily engineers, writing was not their primary skill. Thus, reports were often greatly delayed, and took longer to write than would have been the case with professional writers whose only responsibility was to write.
In the original process, early drafts underwent internal review by an author’s supervisor, then by the research director and editor, to produce something considered suitable for external review by at least two reviewers. The process was sequential, iterative, and inefficient: in addition to the three aforementioned reviews, there were at least two additional external reviews. After each review, authors revised their manuscript to address the reviewer’s concerns; then came final reviews by the editor, research director, and division manager, each of which led to further revisions. The approved manuscript was then sent for layout, proofreading, approval by our head office, and then distribution. For various reasons (not all good), most of the process was performed on paper, even though Microsoft Word was available to everyone in the office to permit onscreen editing. In part, this approach persisted because laptop computers were not universally available, and not all authors used them once they became available. A typical manuscript endured at least 12 review phases: 3 by the author, 3 by the editor, and a management review after each of those 6 reviews (another 6, for 12 in total). During each review, a manuscript might be revised repeatedly until both the author and the reviewer were satisfied.
The result was a series of high-quality reports that were well respected by our clients and by other research agencies. Nonetheless, the process clearly required far too much re-handling of manuscripts. Because of a researcher’s many other responsibilities, there was little control of author or reviewer deadlines. What had formerly seemed logical and effective had slowly evolved into an increasingly inefficient process: despite the excellent quality, production times sometimes exceeded 6 months. Surprisingly, this was for reports numbering at most tens of pages, not the hundreds of pages many technical communicators produce during a comparable period. Again, it’s important to note that producing reports was only a small proportion of a researcher’s job description; for most technical communicators, writing is the primary or only responsibility.
To make our process more efficient, we began by examining and clearly describing every step. This was not an ISO 9000 exercise, since an ISO consultant determined that this certification was not useful in our context. However, ISO 9000 techniques can clearly guide or provide data for any such exercise. For each step, we calculated the mean time requirement and the number of times a manuscript was handled. We obtained this data from tracking sheets attached to each manuscript; each person who handled the manuscript signed the sheet and added a date. This analysis let us estimate the completion time for each step, thereby revealing and quantifying problems. Discussions with all stakeholders let us determine how long each step should ideally take (after accounting for each stakeholder’s many responsibilities and work constraints). We then brainstormed solutions to narrow the gap between actual and ideal times. One key goal was to eliminate the repeated re-handling of manuscripts, so we rigorously evaluated whether each step was truly necessary. Finally, we implemented and tested the proposed solutions, adjusting them as necessary based on the results.
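The analysis of sign-off dates can be sketched as follows. This is a hypothetical illustration, not the organization’s actual records: the step names, dates, and the convention of attributing each interval to the step that ended with the later sign-off are all assumptions.

```python
from collections import defaultdict
from datetime import date
from statistics import mean

# Hypothetical tracking sheets: one list per manuscript, each entry a
# (step, sign-off date) pair in the order the manuscript was handled.
tracking_sheets = [
    [("internal review", date(2011, 1, 3)),
     ("author revision", date(2011, 1, 20)),
     ("external review", date(2011, 2, 28))],
    [("internal review", date(2011, 2, 1)),
     ("author revision", date(2011, 2, 10)),
     ("external review", date(2011, 3, 25))],
]

def mean_days_per_step(sheets):
    """Attribute the elapsed days between consecutive sign-offs to the
    step that closed with the later sign-off, then average across manuscripts."""
    durations = defaultdict(list)
    for sheet in sheets:
        for (_, prev_signed), (step, signed) in zip(sheet, sheet[1:]):
            durations[step].append((signed - prev_signed).days)
    return {step: mean(days) for step, days in durations.items()}

step_means = mean_days_per_step(tracking_sheets)
```

Comparing these measured means against the ideal times agreed with stakeholders is what reveals which steps most need improvement.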
We chose the kaizen approach, a Japanese form of continuous quality improvement: during a series of meetings, participants decide what works and should be retained, what works inefficiently enough to require improvement, and what provides no useful value and should therefore be discarded or replaced.
Our kaizen exercise revealed many problems, which we addressed with the changes described below.
To create effective outlines before beginning to write, authors worked with the editor. We then held a planning meeting to ensure that all stakeholders who could potentially reject a manuscript, in whole or in part, later in the review process had a chance to identify problems and propose solutions. At this meeting, the author, their supervisor, the research director (as a management representative), and the editor or leader of the communications team rigorously critiqued the outline. Only after everyone agreed that the flow of information was effective and the proposed content was complete and logical did the author begin writing. This eliminated a major cause of revisions: stakeholders now identified problems at the beginning, when revision was easy, rather than late in the review process, when revisions were most painful. Resolving those problems right from the start eliminated the need to solve them later, when changes required time-consuming backtracking to gain the approval of previous reviewers.
To track manuscripts once they entered the review process, we used the task-management system provided by the combination of Microsoft Outlook and Microsoft Exchange Server. This let us pass reports from one review phase to the next, automatically assigning a task and deadline to the person responsible. The system let us see the status of each report at a glance, monitor progress towards deadlines, and send reminders when necessary. To support this part of the process, managers at all levels agreed to enforce deadlines and make them part of the annual performance appraisal.
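The hand-off logic that the Outlook and Exchange tasks provided can be sketched roughly as follows. The `Report` class, phase names, and deadlines are illustrative inventions, not the actual system:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Illustrative phase names; the real process defined its own.
PHASES = ["planning", "internal review", "external review", "layout", "proofreading"]

@dataclass
class Report:
    title: str
    phase: int = 0                 # index into PHASES
    owner: str = ""                # person currently assigned the task
    deadline: Optional[date] = None

    def advance(self, next_owner: str, next_deadline: date) -> None:
        """Pass the report to the next phase, assigning the task and
        deadline to the person now responsible."""
        self.phase += 1
        self.owner = next_owner
        self.deadline = next_deadline

    def is_overdue(self, today: date) -> bool:
        """True when a reminder is warranted."""
        return self.deadline is not None and today > self.deadline

report = Report("Sample report", owner="author", deadline=date(2012, 5, 1))
report.advance("reviewer A", date(2012, 5, 15))
```

Listing every report’s current phase, owner, and deadline in one table is what gave the at-a-glance status view described above.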
To eliminate the inefficiency of working on paper, we insisted that everyone write their reports in Microsoft Word. All review and revision was subsequently performed using the onscreen editing tools provided by Word. To ensure smooth implementation of this approach, I provided instruction and ongoing handholding for all authors; I already had strong relationships with my authors, and I strengthened these relationships by working directly with authors and reviewers to help them become comfortable with the technology. Where possible, we demonstrated flexibility by helping everyone to develop custom approaches that worked best for them, including allowing some authors to review manuscripts on paper and then transfer their changes into Word.
We managed reviewers in several ways. First, we carefully selected reviewers during the planning meeting. Rather than assuming they were both qualified and available, we asked them to confirm this. In particular, we told them when the manuscript would be ready for review and asked whether they could commit to reviewing it at that time. If not, we chose another reviewer and confirmed their availability. About 2 weeks before the scheduled review, we reconfirmed each reviewer’s availability, since life sometimes throws unexpected changes into anyone's schedule. We asked how best to meet each reviewer’s needs; for example, not all had reliable e-mail access (particularly those living in remote areas) or used Word, so we occasionally sent manuscripts on a CD or as a faxed printout. Once a review was underway, we gently reminded each reviewer of their deadline a few days before the review was actually due.
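The reviewer-contact schedule described above reduces to two dates per review. A minimal sketch, in which the 14- and 3-day offsets are rough interpretations of “about 2 weeks before” and “a few days before,” not fixed rules:

```python
from datetime import date, timedelta

def reviewer_checkpoints(review_start: date, review_due: date) -> dict:
    """Dates on which to contact a reviewer: reconfirm availability about
    two weeks before the review begins, and send a gentle reminder a few
    days before the review is due."""
    return {
        "reconfirm_availability": review_start - timedelta(days=14),
        "deadline_reminder": review_due - timedelta(days=3),
    }

checkpoints = reviewer_checkpoints(date(2012, 3, 1), date(2012, 3, 15))
```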
Importantly, we streamlined the review process by encouraging each participant to do the job right the first time instead of doing a cursory job just to get the manuscript off their desk, thereby guaranteeing that the manuscript would return for additional work. We convinced everyone that investing time to do the job right was worthwhile by demonstrating (using the data we collected) that it took less time to do the job right once than it used to take in the old system. In particular, we asked reviewers and authors to discuss changes and reach consensus rather than allowing authors to unilaterally reject a review suggestion; such rejections routinely led a reviewer to repeat the same suggestions, this time with a bit of anger, until the author gave in and accepted them. Whereas manuscripts used to bounce back and forth between author and reviewer multiple times, problems were now resolved by a short phone call or office visit. This drastically reduced the delays created by repeated re-handling of a manuscript at each stage of the review. We also greatly reduced the number of management reviews. Because a management representative participated in the planning meeting and approved the content and approach for each report, and because managers trusted the communication group’s competence, they were willing to “let go” and perform fewer reviews.
Before implementing these changes across the entire organization, we included a trial and exploration stage in which several volunteers from each stakeholder group tested the new process and provided detailed feedback. Our goals were to confirm that our theoretically superior process worked in practice, and to detect and solve problems before anyone but the volunteers encountered them. By limiting the test to a few people who fully expected to encounter and solve problems, we avoided the common error of implementing a solution before it was debugged. No matter how good a process looks on paper, eliminating testing creates a serious risk of losing staff support when problems that could have been prevented arise. (Remember: everyone has their own work to do, and they can’t afford to delay that work because of problems we could have prevented.) Careful testing before implementation let us present a proven solution that was developed and rigorously reality-checked by their peers. As a result, the new process was quickly and broadly accepted.
We went to great lengths to ensure that all stakeholder concerns were identified so that everyone could meet their responsibilities within the new process. One point deserves particular emphasis: the goal of tracking problems was to help staff find ways to avoid them in the future, not to punish anyone. The acerbic phrase “beatings will continue until morale improves” makes it clear why this is important.
The new process began with a rigorous planning meeting to ensure that all stakeholders defined the nature and content of the deliverables. Internal and external reviews were then performed, simultaneously rather than sequentially whenever possible, followed by layout, proofreading, publication, and distribution of the reports. This only slightly reduced the overall number of phases, but dramatically decreased rehandling of manuscripts during each phase. The philosophy of “do it right the first time” was generally honored, and the philosophy of discussing rather than imposing changes was embraced.
Several of our proposals worked very well. The planning meetings let authors write an effective document in a single step, instead of requiring multiple major revisions late in the review process, and minimized management’s involvement. Editing before the reviews made the reviews more efficient because reviewers could focus on substantive issues rather than structural and grammatical issues. Onscreen editing was universally adopted. Because most authors wrote few reports each year, they did not develop proficiency with the editing tools, and required frequent assistance. This took much of my time, but reinforced the sense of teamwork between editor and author. Even when authors disliked the intensity of my edits, they appreciated my efforts and knew they could always rely on me for help.
The tracking system let us quickly see each report’s status at any time, so no reports slipped through the cracks and were forgotten. Because we defined reasonable deadlines for each phase of the process in cooperation with the stakeholders rather than imposing artificial deadlines, the overall time allowed for each phase of the process was reasonable, with flexibility to account for an author’s constraints. For example, if an author told us they would be away from the office doing field research or meeting clients around the time of a deadline, we used that information to define a more reasonable deadline. Because we enforced deadlines once they were negotiated, everyone generally met their deadlines. A nice surprise was how reviewers became more responsive and routinely met their deadlines, even when they were external to our organization and we had no authority over them. The decreased number of review phases and the greatly decreased re-handling of manuscripts during each phase correspondingly decreased overall time requirements; by tracking times at every stage, we proved that doing the job right the first time, even if it took longer, took less time overall than repeating the job several times.
Though we did not quantify the time savings for each participant, the overall results were impressive: total time from the completion of the research to publication decreased from the original range of 6 to 12 months to a consistent maximum of less than 3 months in the new process! Though we did not quantify the quality of the reports, informal discussions suggested that we at least maintained and potentially improved the quality. In short, we decreased production times, decreased the drain on staff time, and maintained or improved the quality of our reports. Best of all, the process was still working with only minor modifications more than 5 years after its implementation. That's important, because process improvement most often fails when it doesn’t reflect how people actually work; if the process doesn’t fit the needs of those who use it, people quickly revert to their old habits.
Of course, some of our ideas didn’t work out. However, testing the process with a small group before full implementation provided evidence of successes and opportunities to correct failures. For example, some participants in the kaizen exercise believed that the “do it right the first time” philosophy would let us eliminate proofreading of reports after layout. I strenuously objected, and although the participants did not agree with my advice, they were willing to let me track and quantify the kinds of errors that occurred during layout. I found too many errors to let us eliminate proofreading, so we restored a formal proofreading stage to our new process. Although the homegrown task management system worked, it was kludgy and had many problems. It was subsequently replaced with a more refined system based on Microsoft SharePoint.
Why did this process work so well? How might it be generalized to other organizations, other process reviews, and forms of continuous improvement other than the kaizen method? My readings of the performance improvement literature and my experience with our kaizen exercise suggest several key points.
For those who are interested in trying this approach, chapters 15 to 18 of my book Effective Onscreen Editing provide a detailed overview of how to implement onscreen editing that can easily be modified for application to many other process improvement exercises.
Hart, G. 2000. The style guide is dead: long live the dynamic style guide! Intercom, March:12–17.
Hart, G. 2000. Persuading reviewers to review. Intercom, April:44.
Hart, G. 2000. Why edit on screen? Intercom, September/October:34–35.
Hart, G. 2003. Redesigning to make better use of screen real estate. Pages 337–350 in M.J. Albers and B. Mazur, editors. Content and complexity: information design in technical communication. Lawrence Erlbaum Associates, Mahwah, NJ. 368 p.
Hart, G. 2004. Practical and effective metrics. Intercom, February:6–8.
Hart, G. 2006. Designing an effective review process. Intercom, July/August:18–21.
Hart, G. 2006. Effective outlining: designing workable blueprints for writing. Intercom, September/October:18–19.
Hart, G. 2010. Effective onscreen editing: new tools for an old profession. 2nd ed. Diaskeuasis Publishing, Pointe-Claire. 507 p.
©2004–2017 Geoffrey Hart. All rights reserved