Usability in the Development Process

 

Microsoft Corporation

October 2000

Summary: This paper discusses the iterative, cyclical design process, including the four tenets of user-centered design, two approaches to the product design process, and how usability activities fit into and benefit every phase of product development.

Contents

Introduction
Using an Iterative, Cyclical Design Process
Envisioning Phase
Planning Phase
Developing Phase
Stabilizing Phase
Preparing for the Next Version
References & Resources

Introduction

How Usability Works For You

To put it simply, including usability testing from the very beginning of the product development cycle, and through every phase of your project, will save you redevelopment during the final crunch.

This paper begins by discussing the iterative, cyclical design process. The first part reviews the four tenets of user-centered design identified by Gould, Boies, and Lewis. A description of two approaches to the product design process follows: the waterfall method and the spiral method. The remainder of the paper briefly describes each phase of product development and explains how usability activities fit into and benefit every phase. The phases of development as defined here are: Envisioning, Planning, Developing, Stabilizing, and Preparing for the Next Version.

As you read each section, notice how often the user is brought into the process. The point of involving the user at every phase is to help you avoid expensive rework at the end of the project and to create a product that users enjoy and find easy to learn and use over the long term.

Using an Iterative, Cyclical Design Process

An iterative, cyclical design process lends itself easily to user-centered design. User-centered design includes four important tenets identified by Gould, Boies, and Lewis (1991):

  • Early focus on users. Designers should concentrate on understanding the needs of users early in the design process.
  • Integrated design. All aspects of the design should evolve in parallel, rather than in sequence. Keep the internal design of the product consistent with the needs of the user interface.
  • Early and continual testing. The only currently feasible approach to software design is an empirical one: the design works if real users decide it works. Incorporating usability testing throughout the development process gives users a chance to deliver feedback on the design before the product is released.
  • Iterative design. Big problems often mask small problems. Designers and developers should revise the design iteratively through rounds of testing.

For many years, the waterfall process for product design was standard. In this method a project proceeds through linear, sequential phases. The method uses milestones as transition and assessment points and assumes that each phase is complete before the next one begins. The waterfall method can be effective for a complex project in which multiple vendors are responsible for different aspects of the project (for example, one vendor does the requirements analysis, another the specification, and so on). However, this method makes it very difficult to respond to changes you discover you need to make as you go along.

In contrast, the spiral product design process is iterative and cyclical (Software Engineering Economics, Barry W. Boehm, 1981). This process allows for more creativity and makes it easier to make changes as you go along. When you follow the spiral design process you will find that you can be in different phases for different functional areas of the product. This method lends itself easily to the user-centered design approach to product development.

The spiral product design process has six phases: envisioning, planning, prototyping, developing, stabilizing, and preparing for the next version. (In this paper, prototyping activities are covered as part of the planning phase.)

Envisioning Phase

The envisioning phase of product development is where you define the goals and scope of the project. In this phase, a vision statement, design goals, risk assessment, and project structure are produced.

The following usability activities are typically done during the envisioning phase.

Contextual Research

Based on the methods explained in Beyer and Holtzblatt’s 1997 book, Contextual Design, this type of research involves observing users in context, getting as close to the actual activity as possible. If you haven’t decided what you will build yet, but you think a market opportunity exists, use contextual research to explore the activity. You can find out what you can help the user do and how easy it will be to implement. Don’t look for specific features; rather, look for design opportunities.

Contextual research helps provide focus for the project. It is most successful when the project is a major upgrade or a brand new product. With a major upgrade or new product you won’t have a thorough understanding of what people are doing, how they are doing it, or the problems or obstacles they face. With a minor upgrade you will more likely have this information from product support, previous research, and so on. In this case you’re basically perfecting an existing design, so contextual research is not as necessary.

Contextual research works best when it is carried out by a cross-disciplinary team led by a usability engineer.

Competitive Testing

Competitive usability testing allows you to set quantitative usability goals for the product—speed of task completion, number of errors per task, and so on. This results in a quantitative measure of success, even if your competition is a manual process you are automating. Competitive testing is often done in conjunction with marketing. When marketing representatives do a competitor evaluation, they only compare product features. Usability testing is more concerned with task performance using those features.

Competitive testing might not seem appropriate for products only used inside your company. However, if you think about it, you are also theoretically competing with your own previous version of a product or process. Internal products might be competing with a manual process—the product must be more efficient and better than the existing process.

One way to do competitive testing is to conduct a benchmark study that compares the performance of competing products on the same tasks. When choosing the competitors to test, think of products beyond computers: if your product involves online transactions, one of your competitors might be electronic cash. From the results of the study you can determine the most frequently used and most important features of the competing products.
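
Because these goals are quantitative, checking them can be reduced to a small calculation over your session data. The following sketch is a minimal example of that comparison; the products, tasks, numbers, and goal rule are all hypothetical:

    # Minimal sketch: comparing measured task metrics against a competitor baseline.
    # All product names, tasks, and numbers are hypothetical.

    from statistics import mean

    # Observed results from usability sessions: seconds to complete and errors made.
    our_product = {
        "create invoice": {"seconds": [95, 110, 102], "errors": [1, 0, 2]},
        "find customer":  {"seconds": [40, 38, 55],   "errors": [0, 1, 0]},
    }

    # Baseline collected earlier against the competing product (or manual process).
    competitor = {
        "create invoice": {"seconds": 120, "errors": 2.0},
        "find customer":  {"seconds": 45,  "errors": 0.5},
    }

    for task, results in our_product.items():
        avg_time = mean(results["seconds"])
        avg_errors = mean(results["errors"])
        base = competitor[task]
        print(f"{task}: {avg_time:.0f}s vs {base['seconds']}s, "
              f"{avg_errors:.1f} vs {base['errors']} errors -> "
              f"{'meets goal' if avg_time <= base['seconds'] else 'needs work'}")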

User/Audience Analysis

Know your users! Do everything you can to understand the characteristics of your users. Consider how support calls might decrease if you build your product around the characteristics of its end users. Imagine that your users find the product easy to use and that it contains the features they need. Ask yourself, “What are the relevant characteristics of my users for the products I am going to build?” For example:

  • Computer experience
  • Age
  • Amount of training
  • Social relationships between groups of users
  • Special needs (accessibility)

You can gain some of this information through contextual research. For example, you can observe a few people to develop assumptions and then validate your assumptions through a survey or a sampling. Your Human Resources or training department might have relevant information; for example, how much training new employees receive. A market researcher might also have this information. Gathering this information is sometimes easier for internal applications than retail applications since your users are a more specific group than the general public.

Planning Phase

The planning phase is where the first real designing takes place. In this phase early user interface ideas are created in prototype form, drawing on the knowledge uncovered during the previous phase. A prototype can be anything from cards describing concepts or functions, simple paper sketches of screens, bitmaps of screens printed on paper, online versions created in a program like Macromedia Director with limited interactivity (also called click-throughs), to online versions with substantial interaction created with HTML or Microsoft Visual Basic®. Most of the time, you will find that the more fidelity a prototype has, the less likely a user is to suggest major changes, so it is well worth the effort to start testing with paper prototypes.

Depending on what kind of product you are designing, you might do some or all of the activities described below. If you spend the time doing these tasks in the planning phase and with a prototype, you should encounter far fewer usability problems during the developing phase.

User Scenarios

Create your own user scenarios that list what typical users for your product can and cannot do. With user scenarios, you create a “story” about how your users use the software you’re designing based on high-level design decisions from your earlier contextual research and user/audience analysis. These scenarios can be storyboards, online Macromedia Director movies, simple flowcharts, or simply narrative text. An elaborate form of user scenario is the “day-in-the-life” video. This type of video shows actors as “users” interacting with a simulated system during their daily activities. User scenarios lead into the more specific details you look for in task analysis.

Task Analysis

Task analysis determines how a task will be performed in the new product, and you must do it before you can write a specification. It is important to use task analysis to determine whether the tasks you are planning to support actually reflect reality. Also analyze each task for fidelity: given the attributes of the product, how completely is the task supported? Analyzing for fidelity can mean either looking in depth at everything the user must do to complete one task, or taking a surface-level look at everything a user must do across all tasks or features. Don’t worry about being exhaustive; focus on the essentials.

Some questions and activities to consider:

  • What is a task in this context? Contextual research should help you identify and describe tasks people perform.
  • Create sequence diagrams that describe the interaction between tasks done by the users, as well as between users and the product.
  • The functional areas were decided during the envisioning phase; now pose the question, “What specific tasks will we support?”
  • Create storyboards or sequence schematics with a product designer.

Heuristic Evaluations

Heuristic evaluations involve a small set of evaluators who look at the interface and judge it based on basic usability principles. Heuristic evaluations allow you to find and fix usability problems throughout the iterative design process. If you fix the problems as you go along, you will save yourself a lot of work during crunch time when it is much more difficult and expensive to change live code.

As detailed by Jakob Nielsen in Usability Engineering (1994), a heuristic evaluation consists of:

  1. Each evaluator goes through the interface several times independently, inspects the various dialog elements, and compares them with a list of recognized usability principles.
  2. After every evaluator has performed a heuristic evaluation individually, the evaluators come together and consolidate their findings.
  3. The consolidated output is a list of usability problems in the user interface, annotated with references to the usability principles that the design violates.

In the early stages of development heuristic evaluation can be a very effective method for discovering usability problems.
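
Because the output of a heuristic evaluation is a consolidated, annotated problem list, even a small script can help merge the individual evaluations. The sketch below is a minimal, hypothetical example; the evaluators, problems, heuristic assignments, and 0-4 severity ratings are illustrative only:

    # Minimal sketch: consolidating individual heuristic-evaluation findings.
    # Evaluator names, problems, and severity ratings (0-4) are hypothetical.

    from collections import defaultdict

    findings = [  # (evaluator, problem, violated heuristic, severity 0-4)
        ("A", "No confirmation after deleting a record", "Visibility of system status", 3),
        ("B", "No confirmation after deleting a record", "Visibility of system status", 2),
        ("B", "Jargon in error messages", "Match between system and the real world", 3),
        ("C", "No way to cancel a long-running import", "User control and freedom", 4),
    ]

    consolidated = defaultdict(list)
    for evaluator, problem, heuristic, severity in findings:
        consolidated[(problem, heuristic)].append(severity)

    # One consolidated list, worst problems first (average severity decides the order).
    report = sorted(consolidated.items(), key=lambda kv: -sum(kv[1]) / len(kv[1]))
    for (problem, heuristic), severities in report:
        print(f"[{sum(severities)/len(severities):.1f}] {problem}  (violates: {heuristic})")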

Cognitive Walkthroughs

A cognitive walkthrough means carefully reviewing the number and type of steps the interface requires the user to go through to accomplish a task, including those the user has to do in his or her head. What you want to focus on is what users have to recall or what they have to calculate—cognitive tasks that can make your product either easy or difficult to learn and use. The cognitive walkthrough helps you identify potential usability problems as well as holes in your specifications!

According to Gregory Abowd’s Performing a Cognitive Walkthrough, to do a cognitive walkthrough you need four things:

  1. A detailed description of the prototype of the system such as a preliminary specification would provide. It doesn’t have to be complete, but it should be fairly detailed. Details such as the location and wording of a menu can make a big difference.
  2. A description of the task the user is to perform on the system. This should be a representative task that most users will want to do.
  3. A complete, written list of the actions needed to complete the task with the given prototype.
  4. An indication of who the users are and what kind of experience and knowledge the evaluators can assume about them.

Given this information the evaluators step through the action sequence (item 3 above) to determine if users can be reasonably expected to perform those steps.
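
It can help to write the four inputs down in a structured form and then ask the same questions of every action in the sequence. The sketch below is a minimal, hypothetical example; the system, task, action list, and question wording are illustrative rather than prescriptive:

    # Minimal sketch: the four inputs to a cognitive walkthrough as one record.
    # The system, task, and action list are hypothetical examples.

    walkthrough = {
        "system": "Expense-report prototype, build 0.3 (preliminary spec, screens 1-4)",
        "task": "Submit a travel expense report for approval",
        "actions": [
            "Choose New Report from the File menu",
            "Enter each expense line and amount",
            "Click Submit and confirm the approver's name",
        ],
        "users": "Accounting staff; experienced with Windows, new to this product",
    }

    # Questions the evaluators answer for every action in the sequence.
    QUESTIONS = [
        "Will the user know this is the right thing to try to do?",
        "Will the user notice that the action is available?",
        "Will the user connect the action with what they are trying to achieve?",
        "Will the user understand the feedback and see that progress was made?",
    ]

    for step, action in enumerate(walkthrough["actions"], start=1):
        print(f"Step {step}: {action}")
        for question in QUESTIONS:
            print(f"  - {question}")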

GOMS

GOMS is a method for describing a task and the user's knowledge of how to perform the task in terms of Goals, Operators, Methods, and Selection rules.

Card, Moran and Newell proposed the original GOMS formulation. They also created a simplified version, the Keystroke-Level Model (KLM). Bonnie E. John developed a parallel-activity version, CPM-GOMS, and David Kieras developed a more rigorously defined version, Natural GOMS Language (NGOMSL). All of these techniques are based on the same GOMS concept.

  • Goals are simply the user's goals, as defined in layman's language. What does he or she want to accomplish by using the software? In the next day, the next few minutes, the next few seconds?
  • Operators are the actions that the software allows the user to take.
  • Methods are well-learned sequences of subgoals and operators that can accomplish a goal, such as cut and paste.
  • Selection rules are the decision rules that users are to follow in deciding what method to use in a particular circumstance.

A GOMS model consists of descriptions of methods necessary to accomplish desired goals. The methods are steps consisting of operators that the user performs. If more than one method is available to accomplish a goal, then selection rules are used to decide the appropriate method in this circumstance.
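
The Keystroke-Level Model mentioned above makes this concrete: you estimate the execution time of a method by summing approximate times for its operators. The sketch below uses commonly cited approximate operator times and a hypothetical action sequence; treat the numbers as rough planning figures, not measurements:

    # Minimal sketch: a Keystroke-Level Model (KLM) estimate for one method.
    # Operator times are commonly cited approximations; the task sequence is hypothetical.

    OPERATOR_SECONDS = {
        "K": 0.28,  # press a key or button (average typist)
        "P": 1.10,  # point with the mouse to a target on screen
        "B": 0.10,  # press or release a mouse button
        "H": 0.40,  # move hands between keyboard and mouse
        "M": 1.35,  # mental preparation for the next step
    }

    # Hypothetical method: delete a file by selecting it and pressing Delete.
    # M P B B (point and click the file), H (back to keyboard), M K (press Delete).
    sequence = ["M", "P", "B", "B", "H", "M", "K"]

    estimate = sum(OPERATOR_SECONDS[op] for op in sequence)
    print(f"Estimated execution time: {estimate:.2f} seconds")  # about 4.7 seconds

Comparing such estimates for two candidate methods gives you an early, inexpensive indication of which design is likely to be faster for a practiced user.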

Card Sort

Card sorting is one usability technique used early in this phase to understand users’ conceptual models of information. The basic task during a card sort is for participants to organize cards, each describing a single item or concept, into piles of items that belong together. After creating the piles, the participants can also generate names, labels, or descriptions for the piles they create.

Card sorting is used to:

  • Reveal users’ conceptual models of a task domain.
  • See how users group or classify items.
  • See how users think about the relationship and similarity between items.
  • Translate the users’ conceptual models into a design.
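
One common way to summarize card-sort results is a co-occurrence count: how often participants placed each pair of cards in the same pile. Pairs that are grouped together frequently are good candidates to sit together in the design. The sketch below is minimal and uses hypothetical card names and piles:

    # Minimal sketch: summarizing card-sort results as a co-occurrence count.
    # Card names and the participants' piles are hypothetical.

    from itertools import combinations
    from collections import Counter

    # Each participant's sort: a list of piles, each pile a set of card names.
    sorts = [
        [{"Print", "Page Setup"}, {"Save", "Save As", "Export"}],
        [{"Print", "Page Setup", "Export"}, {"Save", "Save As"}],
        [{"Print"}, {"Page Setup"}, {"Save", "Save As", "Export"}],
    ]

    co_occurrence = Counter()
    for piles in sorts:
        for pile in piles:
            for a, b in combinations(sorted(pile), 2):
                co_occurrence[(a, b)] += 1

    # Pairs grouped together most often suggest which items users see as related.
    for (a, b), count in co_occurrence.most_common():
        print(f"{a} / {b}: grouped together by {count} of {len(sorts)} participants")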

Iterative Usability Test

Iterative usability testing of a prototype design provides another valuable way early in the product cycle to find out how easy or difficult users find the interface to use. Making changes at this phase is much easier and much less expensive than waiting until after development has started.

The amount of data you can collect in a usability lab from a prototype depends on the robustness of the prototype. For paper prototype testing, the usability engineer is the computer and sits with the user during the test.

In many cases rigorous usability testing is overkill. During the prototyping phase, you can still conduct valid usability testing using simplified methods, often called “discount” usability testing.

As described by Jakob Nielsen, an iterative usability test incorporates:

  1. User and task observation—watching users, keeping quiet, and letting users do what they would normally do.
  2. Scenarios—using a kind of prototyping that reduces the number of features and level of functionality.
  3. Simplified thinking-aloud testing—one user at a time on a set of tasks and asking them to “think out loud.”
  4. Heuristic evaluation—judging the interface based on basic usability principles.

Developing Phase

The developing phase is where the product is implemented in real code. During this phase you can begin usability testing early builds of the actual product. You might still be working with prototypes quite a bit in this phase, but more of the product will be finished as time passes. Not all of the features will be in development at the same time, so you might switch back and forth between prototypes and real code.

Ideally, you will be able to spend most of your time polishing, having worked out the major problems in the prototyping phase.

Live Code Test

Having users test a live code version might be useful in discovering problems specific to using the product on a computer. These problems usually have less to do with conceptual issues than with the details of the interaction design. They typically involve rather low-level issues such as selecting items onscreen, dragging and dropping, and dynamic graphics that are only available in the actual product. For most aspects of your product, live code doesn’t necessarily have more fidelity than a paper or other prototype, so don’t delay usability testing until you have live code.

Usability Lab Test

In the developing phase, you can conduct usability lab testing, similar to the iterative usability testing in the planning phase. However, since more of the product is complete, you can measure more tasks. You might still use a mockup in Director, or you might work with a slightly changed build for the usability test. As time goes on, and more and more of the product is finished, it will feel less like a prototype. However, the problem with testing with a “finished” product is that since so much work has been done and so little time is left, you should not expect too much to change based on your findings.

Note   As part of this task you might also conduct focus groups or cluster analysis to usability test the product.

Stabilizing Phase

The stabilizing phase occurs when development ends and bugs are fixed to create a stable product that’s ready to ship. The usability focus in this phase is fine-tuning. New features and expensive usability enhancements should be documented for the next version.

Benchmark Testing

Benchmark usability testing is similar to integration testing in Quality Assurance. The goal of a benchmark test is to provide reliable quantitative data on the usability of a product across the top tasks users will want to accomplish. The object of these tests is less to identify problems (as is the case with most usability tests) and more to assess the state of the product’s usability.

For a benchmark test, look at the features of a product and break the features down task by task. In the stabilizing phase, especially for a complex product, you will not be able to make changes to the UI that improve every single task. Ideally you want to determine what the top tasks are and make sure they are the most usable. The lower priority tasks can be less usable and go on the list to be worked on for a later version.
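
Because the goal is reliable quantitative data, benchmark results are usually reduced to a few measures per task, such as completion rate, mean time on task, and errors, and then compared against stated goals. The sketch below shows that reduction; the tasks, goals, and session data are hypothetical:

    # Minimal sketch: turning benchmark sessions into per-task usability measures.
    # Task names, goals, and session data are hypothetical.

    from statistics import mean

    sessions = [  # (task, completed, seconds, errors) for each participant
        ("open shared calendar", True, 48, 0),
        ("open shared calendar", True, 62, 1),
        ("open shared calendar", False, 180, 3),
        ("schedule recurring meeting", True, 95, 1),
        ("schedule recurring meeting", True, 81, 0),
    ]

    goals = {"open shared calendar": 60, "schedule recurring meeting": 120}  # seconds

    tasks = {t for t, *_ in sessions}
    for task in sorted(tasks):
        rows = [r for r in sessions if r[0] == task]
        completion = sum(1 for _, done, *_ in rows if done) / len(rows)
        avg_time = mean(seconds for _, _, seconds, _ in rows)
        avg_errors = mean(errors for *_, errors in rows)
        status = "meets goal" if avg_time <= goals[task] else "below goal"
        print(f"{task}: {completion:.0%} completed, {avg_time:.0f}s avg "
              f"(goal {goals[task]}s, {status}), {avg_errors:.1f} errors")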

Preparing for the Next Version

Think of this phase as starting the process over. You go through many of the same tasks you did in the Envisioning and Planning phases. For example, you will conduct:

  • Competitive testing—During the stabilizing phase, this means testing your own product so that you can compare it to data previously collected on the competition.
  • Field studies—Like contextual research (which helps answer "What do we build?"), use what you have built to find out what problems exist that can be fixed for the next version.
  • Instrumented version studies of events—An instrumented version of the software basically spies on itself and logs data on events. You instrument a product to look for usage trends across large numbers of sessions and users, as in the sketch following this list.
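
What instrumentation looks like varies by product. The sketch below is one minimal, hypothetical approach in which the product appends timestamped event records to a local log file that analysis code later aggregates across sessions; the event names, fields, and log location are assumptions for illustration:

    # Minimal sketch: logging usage events from an instrumented build.
    # The event names, fields, and log location are hypothetical.

    import json
    import time
    from pathlib import Path

    LOG_PATH = Path("usage_events.log")

    def log_event(session_id: str, event: str, **details):
        """Append one timestamped event record as a line of JSON."""
        record = {"ts": time.time(), "session": session_id, "event": event, **details}
        with LOG_PATH.open("a", encoding="utf-8") as f:
            f.write(json.dumps(record) + "\n")

    # The instrumented product calls log_event at interesting points.
    log_event("s-001", "command_invoked", command="Print", source="toolbar")
    log_event("s-001", "dialog_cancelled", dialog="Page Setup")

    # Later, analysis code aggregates the logs from many users to find usage trends.
    counts = {}
    for line in LOG_PATH.read_text(encoding="utf-8").splitlines():
        event = json.loads(line)["event"]
        counts[event] = counts.get(event, 0) + 1
    print(counts)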

References & Resources

Articles and Books

  • Boehm, Barry W. Software Engineering Economics. NY: Prentice Hall, 1981. (ISBN: 0138221227)
  • Dumas, Joseph S., and Janice C. Redish. A Practical Guide to Usability Testing. London: Intellect Books, 1999. (ISBN: 1841500208)
  • Helander, Martin, Thomas K. Landauer, and Prasad V. Prabhu, eds. Handbook of Human-Computer Interaction. North-Holland, 1997. (ISBN: 0444818766)
  • John, B. E. "Why GOMS?" ACM Interactions, vol. 2, no. 4 (1995): 80-89.
  • Microsoft Site Server Deployment & Administration.
  • Nielsen, Jakob. Usability Engineering. Boston: AP Professional, 1994. (ISBN: 0125184069)
