How to Develop Basic User Personas

“Before I can walk in another person’s shoes,
I must first remove my own.” – Brian Tracy

This article discusses the basics of developing a set of user personas. User personas are not a research method but a communication tool: a visual, creative way to convey the results of research about the users of your software or website. Personas are fictional characters built from aggregate statistical averages to profile a user group. They do not capture every type of user or every concern, but they can give a reasonable representation of who might be using your product and what some of their goals or issues might be. Personas are not fixed, precise definitions of your users; they are more of an empathy-creating tool. They can help you get into the mindset of a user type and communicate that mindset to others.

Once created, personas are a way of humanizing your users and providing a canvas on which you can superimpose ideas about how a given user type might interact with your system. A little like criminal profiling (minus the crime and vilification), you can think of it as role playing with the intention of gaining new insights into the other party. User personas should not be confused with engineering use cases or marketing demographic profiling.

Why develop user personas?

  • A key benefit of developing user personas is that they provide user-centric objectivity during product design, reinforcing the idea that your and your colleagues’ version of a great user experience is not necessarily the same as your end users’.
  • User personas can dispel common stereotypes about users, e.g., “the site needs to be so easy to use that the VP’s grandmother can use it.”
  • Later in the design process, user personas can help you communicate the usefulness of a specific design to stakeholders.

While I was once somewhat limited in my use of personas, I did create them a few years ago for a top-five business school. The project was in the context of a site redesign that included an overhaul of the site’s information architecture. I conducted a fairly exhaustive audit of the MBA space, which included a competitive review, an expert review, and stakeholder interviews. At the time, I didn’t create personas for myself; I just used the underlying audit data to restructure the site’s content and define a simplified navigational model. Later, however, when preparing to present the new information architecture to the client, I took the time to create a set of user personas based on the original data I had used to make my design decisions. They were created out of a need to illustrate to the client how the new structure better addressed the needs of their prospective students, and to assure them that users’ concerns were being addressed. After realizing how effective personas can be for stakeholder buy-in and alignment, I have since used them much more frequently in a range of UX projects.

When are personas most useful?

  • User personas are useful when there are a number of unsubstantiated assumptions floating around about who your users are, e.g., early adopters, soccer moms, the VP’s grandma.
  • When there is a tendency for developers and product managers to make design decisions based on their own personal preferences, technical constraints, chasing short-term sales, or any other reasoning that does not consider the end user.
  • When there is a general lack of clarity about who your end user is, or when the data you have regarding your users does not seem to directly translate into insights about their behavior.
  • Whenever you need a communication tool to advocate and encourage empathy for end users.

Development Lifecycle

Persona creation is best done during the requirements-gathering stage. However, getting the most out of your user personas means they should also be referenced during development and testing as questions and changes arise. They should likewise be compared against, and updated with, actual user testing and post-release feedback. If properly evolved and maintained, personas can be an effective guidepost throughout the development and product lifecycle.

Limitations of User Personas

  • It’s easy for personas to be taken too literally or to become a stereotype. Personas should never be considered a definitive archetype of your users; doing so runs the risk of turning your definitions into pseudoscience, akin to phrenology. Direct user testing and feedback insights should take precedence over anything derived from a user persona.
  • User personas are not precise; they are limited by the innate prejudices, interpretations, and reference points of the authors of the personas and any subsequent user scenarios.
  • User personas are not an appropriate communication tool for everyone; some people prefer a well-crafted presentation of the source data over a persona, which can be perceived as containing superfluous data.

How to implement user personas

With a little research, a set of basic personas is actually fairly easy to put together. The first step is to pull research from multiple sources to gather data about potential users. Next, synthesize this data to form user groups and select representative data from each group to form a user data profile. Finally, add supplemental character information to create a believable persona. As a general rule, you should start by trying to define three to five user types, adding more as appropriate. I would recommend against defining more than about eight or ten; beyond that number, you start to exceed people’s ability to reasonably keep track of the information.

Consider This…

If you think you need more than ten personas, or have already been relying heavily on personas to drive development, consider investing in an ethnographic study as a next step. Real profiles and insights derived from the results of an ethnographic study will provide more grounded, accurate information than fictitious personas.

  1. Conduct Research. Pull together any reliable data you can find that might describe your users or give insight into their goals, concerns, or behaviors. Collect any potentially relevant demographics, such as gender, age, race, occupation, and household income. Let’s, for example, take the case of developing personas around prospective MBA students. In addition to client interviews, I was able to pull a large amount of key data from sources on the Internet. Some references I found were:
    • Graduate Management Admissions Council
    • GMAC: 2004 mba.com Registrants Survey
    • The Black Collegian: ‘The MBA: An Opportunity For Change’
    • The Princeton Review: ‘Study of Women MBAs’
    • National Center for Education Statistics (NCES)
  2. Form Relevant User Groups. To continue with the business school case study, research on MBA students showed that a growing number of minority women were enrolling in MBA programs. In addition, the client indicated a desire to further attract this type of student. Armed with that initial seed data, I defined a user group of minority women who were considering, had applied to, or were attending business school. Further research allowed me to compile statistical information about this group (such as average age, current occupation, and family status) as well as formulate some ideas about their interests, concerns, and expectations. Compared to more “traditional” MBA students, these women were a few years older on average, more likely to be married with children, and highly focused on the specific opportunities this type of degree offered post-graduation.
  3. Create and Embellish the Persona. Start by turning the data you have for the user group into a single individual. The “Minority Women” user group was converted into a thirty-six-year-old African American woman named Christine. At this point it’s okay, and actually preferable, to fill in the gaps with plausible details that bring this person to life a bit; e.g., “Christine Barnhart is a marketing communications manager who balances her time between work and family…” Since this is in large part a tool to facilitate empathy for the user group, as much realistic information as possible should be incorporated to develop the character. As long as the “filler” is not negated by other data, it should be fine. Even so, after you have finished writing your persona, take a step back and ask yourself: “Is it reasonable to expect this person to exist?” Alter your description if necessary, or move on to the next persona.
  4. Develop and Test User Scenarios. Now that you have your personas and an idea of what your user types are, you can make some assumptions about what functionality or information they might find useful. You will need to develop some user scenarios to help test any hypotheses you have. Unlike a use case, which is a description of how system functionality interacts (e.g., contact data is retrieved, an error message is returned), user scenarios focus on the user’s perspective. A simplified user scenario for our “Christine” persona might look like:
    • While successful in her career, Christine is not entirely sure she has the background and experience to be admitted to a top business school. In addition, she has a lot of specific questions about the program’s culture. She is concerned that it may be too aggressively competitive for her to get the most out of her business school experience. Christine goes to the program website and immediately looks at the eligibility and admission requirements. She takes her time looking through the information and prints out a few pages. Next, she starts to browse through the site, trying to get a feel for the culture. She finds a section with video interviews of students, watches a few, and begins to get a sense of whether this school is a fit for her.

User Stories – Not to be confused with a user scenario (or use case), a user story is a short, two- or three-sentence statement of a customer requirement. It is a customer-centered way of eliciting and processing system requirements within Agile development methodologies[1].

Basic Persona Template

There is a wide range of user persona templates, and they can get rather complicated. Considering that the intention of this type of tool is effective communication, I tend to gravitate toward clean, simple layouts that fit on one page or screen. Below is a basic example template. Don’t get too focused on templates, however; if you find information you think is useful or relevant but it doesn’t fit your template, rework the template to fit the data.

Persona Layout - Template
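To give a sense of the kind of information such a template captures, here is a minimal sketch of a persona as a simple data structure, using the “Christine” example from earlier. The field names, and the quote text, are illustrative assumptions rather than any standard schema:

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One persona, roughly mirroring a one-page template layout."""
    name: str
    age: int
    occupation: str
    family_status: str
    goals: list = field(default_factory=list)      # what they want to accomplish
    concerns: list = field(default_factory=list)   # worries and pain points
    quote: str = ""                                # a humanizing one-liner (fictional)

christine = Persona(
    name="Christine Barnhart",
    age=36,
    occupation="Marketing communications manager",
    family_status="Married, with children",
    goals=["Assess fit with a top MBA program"],
    concerns=["Overly competitive culture", "Admission eligibility"],
    quote="Is this program worth the time away from my family?",
)
```

However the template is laid out visually, keeping the underlying fields consistent across personas makes them easier to compare side by side.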



[1] More information on user stories can be found at: http://www.extremeprogramming.org/rules/userstories.html

Card Sorting: A Primer

“Change your language and you change your thoughts.” – Karl Albrecht

Card sorting is a specialized type of user test that is useful for assessing how people group related concepts and what common terminology they use. In its simplest form, a card sort is the process of writing the name of each item you want to test on a card, giving the cards to a user, and asking him or her to group like items into piles. There are, however, a number of advanced options and different types of card sorting techniques.

Open Card Sort – An “open” card sort is one in which you do not provide users with an initial structure. Users are given a stack of index cards, each with the name of an item or piece of content written on it. They are then asked to sort through and group the cards, putting them into piles of like items on a table. Users are then asked to label each pile with the name they think best represents that group. Open card sorts are usually conducted early in the design process because they tend to generate a large amount of wide-ranging information about naming and categorization.

Given the high burden on participants and researchers, I personally find an open card sort to be the least attractive method for most contexts. It is, however, the most unbiased approach. As a general rule, I would reserve this method for testing users with a high degree of expertise in the area being evaluated, or for large scale exploratory studies when other methods have already been exhausted.

Closed Card Sort – The opposite of an open sort, a “closed” card sort is when the user is provided with an initial structure. Users are presented with a set of predefined categories (usually on a table) and given a stack of index cards of items. They are then asked to sort through the cards and place each item into the most appropriate category. A closed sort is best used later in a design process. Strictly speaking, participants in a closed sort are not expected to change, add, or remove categories. However, unless the context of your study prevents it, I would recommend allowing participants to suggest changes and have a mechanism for capturing this information.

Reverse Card Sort – Also called a “seeded” card sort. Users are shown information in an existing structure, such as a full site map laid out on index cards on a table. They are then asked to review the structure and suggest changes, moving the cards around and renaming items as they see fit. A reverse card sort has the highest potential for bias; however, it’s still a relatively effective means of validating (or invalidating) a taxonomic structure. The best structures to use are ones defined by an information architect or someone with a high degree of subject-matter expertise.

Modified Delphi Card Sort (Lyn Paul, 2003) – Based on the Delphi research method[1], in which a respondent is asked to modify information left by a previous respondent. The idea is that over multiple test cycles, the information will evolve toward a near consensus, with only the most contentious items remaining. In a Modified Delphi card sort, an initial user is asked to complete a card sort (open, closed, or reverse), and each subsequent user is asked to modify the card sort of their predecessor. This process is repeated until there is minimal fluctuation, indicating a general consensus. One significant benefit of this approach is ease of analysis: the researcher is left with one final site structure and notes about any issue areas.
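The stopping rule (“minimal fluctuation”) can be made concrete by counting how many items moved between successive rounds. A minimal sketch, where the threshold of one move and the category names are illustrative choices, not part of the published method:

```python
def changes_between(prev, curr):
    """Count items whose category assignment changed from one round to the next.
    Each round is a dict mapping item name -> category name."""
    return sum(1 for item in prev if curr.get(item) != prev[item])

def has_converged(rounds, threshold=1):
    """Stop the Delphi sort once the last two rounds differ by fewer
    than `threshold` item moves."""
    if len(rounds) < 2:
        return False
    return changes_between(rounds[-2], rounds[-1]) < threshold

# Successive participants' structures for three items
rounds = [
    {"dolls": "Toys", "bikes": "Outdoor", "kites": "Outdoor"},
    {"dolls": "Toys", "bikes": "Outdoor", "kites": "Toys"},   # one move
    {"dolls": "Toys", "bikes": "Outdoor", "kites": "Toys"},   # no moves: consensus
]
```

In practice the threshold is a judgment call; contentious items that keep moving between rounds are exactly the “issue areas” worth noting for further study.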

Online Card Sort – As the name implies, this refers to a card sort conducted online with a card sorting application. An online card sort allows for the possibility of gathering quantitative data from a large number of users. Most card sorting tools facilitate analysis by aggregating data and highlighting trends.
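To give a sense of the aggregation such tools perform, here is a minimal sketch (with made-up item names) that builds a pairwise co-occurrence count: how often two items landed in the same pile across participants. Clustering this matrix is a common next step in card sort analysis:

```python
from itertools import combinations
from collections import Counter

def co_occurrence(sorts):
    """Count, across all participants, how often each pair of items landed
    in the same pile. `sorts` is a list of sort results; each result is a
    list of piles; each pile is a list of item names."""
    counts = Counter()
    for piles in sorts:
        for pile in piles:
            # sort the pile so each pair is keyed in a consistent order
            for a, b in combinations(sorted(pile), 2):
                counts[(a, b)] += 1
    return counts

# Three participants sorting four items
results = [
    [["apples", "pears"], ["hammers", "saws"]],
    [["apples", "pears", "saws"], ["hammers"]],
    [["apples", "hammers"], ["pears", "saws"]],
]
matrix = co_occurrence(results)
```

Here, “apples” and “pears” were grouped together by two of three participants, a stronger signal than any single session provides.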

Paper Card Sort – A paper sort is done in person, usually on standard index cards or sticky notes. Unlike an online sort, there is opportunity to interact with participants and gain further insight into why they are categorizing things as they are.

Why Use Card Sorting?

  • Card sorting is a relatively quick, low-cost, and low-tech method of getting input from users.
  • Card sorting can be used to test the efficacy of a given taxonomic structure for a website or application. While commonly used for website navigation, the method can be applied to understanding data structures for standalone applications as well.

When is Card Sorting Most Useful?

  • When designing new products or undertaking major redesign efforts.
  • When creating a filtered or faceted search solution, or evaluating content tags.
  • For troubleshooting, when other data sources indicate users might be having a hard time finding content.

Development Life-cycle

Card sorts are useful in the requirements-gathering and design stages. Depending on where you are in the design process, you may get more or less value from a given method (open, closed, reverse, etc.).

Limitations of Card Sorting

  • The results of card sorting can be difficult and time-consuming to analyze; results are rarely definitive and can raise more questions than answers.
  • The results of a card sort will not provide you with a final architecture; they will only give you insight into possible directions and problem areas.

How to Conduct a Card Sort

Card sorts are one of those things that are somewhat easier to conduct than to explain. Because there are so many variations, I’ve decided to illustrate the concept with a walkthrough of an actual project case study. I was recently brought into a card sorting study by a colleague of mine[2] who was working on a complex taxonomy evaluation. The project was for a top toy retailer’s e-commerce site. After weeks of evaluating traffic patterns and other data, my colleague had developed what he hoped would be an improved site structure. He wanted to use card sorting techniques to validate and refine what he had developed.

  1. Define your research plan. Our research plan called for online closed card sorts to gather statistically relevant quantitative data, as well as the rather innovative idea of going to one of the retail locations and recruiting shoppers to do card sorting onsite. The in-store tests would follow a reverse sort using the modified Delphi method; that is, shoppers would be shown the full site structure and asked to make changes, with each shopper building off the modifications of the previous shopper until a reasonable consensus was achieved.
  2. Prepare your materials. In the case of in-store card sorts, we needed to take the newly defined top and second level navigation categories and put each on its own index card. The cards would be laid out on two long banquet tables so participants could see the structure in its entirety. Single page reference sheets of the starting navigation were printed up so we could take notes on each participant and track progressive changes. We had markers and blank index cards for modifications. A video camera would be used to record results, and a participant consent form was prepared.
  3. Recruit Participants. Unlike lab-based testing, where you have to recruit participants in advance, the goal for the in-store testing was to approach shoppers directly. The idea was that not only would they be a highly targeted user group, but we would be approaching them in a natural environment that closely approximated their mindset when on the e-commerce site, i.e., shopping for toys. Because we would be asking shoppers to take about 10–20 minutes of their time, the client provided us with gift cards, which we offered as an incentive/thank-you. Recruitment was straightforward; we would approach shoppers, describe what we were doing, and ask if they would like to participate. We attempted to screen for users who were familiar with the client’s website or at least had some online shopping experience.
  4. Conduct the Card Sort. After each participant agreed to take part and signed the consent form, we explained that the information on the table represented potential naming and structure for content on the e-commerce site. Participants were asked to look through the cards and call attention to anything they didn’t understand or would change. They could move items, rename them, or even take them off the table. Initially we let the participant walk up and down the table looking at the cards. Some would immediately start editing the structure, while others needed encouragement (while we tried not to bias results), such as being asked what they had come into the store for and where they might find that item. After an initial pass, we would point out some of the changes made by previous participants and describe any recurring patterns to elicit further opinions. After about 15 participants, the site structure stabilized and any grey areas were fairly clearly identified.
    Figure 3: Sample Card Sort Display: Cut Index Cards on Table

  5. Prepare the Analysis. At the end of the study, there was a reference sheet with notes on each participant’s session, video of the full study, and a relatively stable final layout. With this data, it was fairly easy to identify a number of recurring themes, i.e., issues that stood out as confusing, contentious, or as notable deviations from the original structure. As in any card sort, the results were not directly translatable to a final information structure. Rather, they provided insights that could be combined with other data (such as the online sorting results) to create the final taxonomy.



[1] The Delphi Research Method http://www.iit.edu/~it/delphi.html

[2] David Cooksey, Founder & Principal, saturdave, Philadelphia, PA

How to Prototype for User Testing

“If I have a thousand ideas and only one
turns out to be good, I am satisfied.”
– Alfred Bernhard Nobel

UXD prototyping is a rich topic that is difficult to cover adequately in this type of overview guide. I therefore do so with the caveat that I am only providing the tip of the iceberg to get you started, along with some references to where you can learn more.

Prototyping is not a research method; it is a research tool. An important thing to understand about UXD prototypes is that the term “prototype” itself can mean something slightly different to the UXD community than to the software development community at large. For example, one of the more common engineering usages of the term refers to an operational prototype, sometimes called a proof of concept. This is a fully or partially functional version of an application. UXD prototypes, however, are not usually operational. Most often they are simulations focused on how a user might interact with a system. In the case of the “paper prototype,” for example, there is no functionality whatsoever, just paper printouts of software screens.

There are many different names for types of prototypes in software engineering, most of which describe the same (or very similar) concepts. Here are some of the most common terms:

  • Operational – Refers to a fully or partially functional prototype that may or may not be further developed into a production system. User testing prototypes can be operational, such as during late-stage validation testing, though a beta is more commonly used at this stage.
  • Evolutionary – As the name implies, an evolutionary prototype is developed iteratively with the idea that it will eventually become a production system. User testing prototypes are not “evolutionary” in the strictest sense if they do not become production systems. However, they can be iterative and evolve through different design and testing cycles.
  • Exploratory – Refers to a simulation that is intended to clarify requirements, user goals, interaction models, and/or alternate design concepts. An exploratory prototype is usually a “throw-away” prototype, meaning it will not be developed into a final system. User testing prototypes would usually be considered exploratory.

Semantics aside, prototypes are probably the single most powerful tool for the researcher to understand user behavior in the context of the product being developed. Some commonly used prototypes in user testing are:

  • Wireframes – A wireframe is a static structural description of an interface without graphic design elements. Usually created in black, white and grey, a wireframe outlines where the content and functionality is on a screen. Annotated wireframes are wireframes with additional notes that further describe the screens’ content or interactivity.
  • Design Mockups – Similar to wireframes, a design mockup is a static representation of an interface screen. However, design mockups are full color descriptions that include the intended graphical look and feel of the design.
  • HTML Mockups – Used in website design, HTML mockups are interface screens created in the Hypertext Markup Language so they can be viewed in a browser. Most often, HTML mockups simulate basic functionality such as navigation and workflows. HTML mockups are usually developed from wireframes or a simplified version of the intended look and feel.
  • Paper Prototype – A paper prototype is literally a paper print-out of the designed interface screen. A paper prototype could be of wireframes or design mockups. In addition, it could be one page to get feedback on a single screen, or a series of screens intended to represent a user task or workflow.
  • PDF Prototype – A PDF prototype consists of a series of designed interface screens converted into the Adobe Acrobat (.pdf) file type. Like HTML prototypes, PDF prototypes can simulate basic functionality such as navigation and workflows. However, they take less time to create than HTML prototypes and can be created by someone without web development skills.
  • Flash Prototype – A prototype developed using Adobe’s Flash technology. Flash prototypes are usually highly interactive, simulating not only buttons and workflows but the system’s interaction design as well. In addition to being a quick prototype development method, Flash prototypes can be run via the web or as a desktop application, making them a very portable tool.

Why Use Prototyping

  • It saves money by allowing you to test and correct design flaws before a system is developed.
  • It allows more freedom to explore risky, envelope-pushing ideas without the cost and complexity of developing them.
  • Since prototypes are simulations of actual functionality, they are theoretically bug-free. Test results are less likely to be altered or impeded by implementation issues.

When is a Prototype Most Useful?

  • When you are trying to articulate a new design or concept
  • When you want to test things in isolation (i.e. graphic design separate from information layout separate from interaction design)
  • To gather user feedback when requirements are still in a state of flux and/or can’t be resolved
  • To evaluate multiple approaches to the same user task or goal to see which users prefer

Limitations of Prototyping

  • A prototype will never be as accurate as testing on a live system; there is always some deviation between the real world and the simulation.
  • Depending on where you are in the iterative research process, there is a point of diminishing returns where the amount of effort to create the prototype is better put into building a beta.

Creating a UXD Prototype

Because the actual prototype creation process is highly specific to what you are creating and what tools you are using (paper napkins, layout tool, whiteboard, etc) I’ve included a few considerations for defining a prototype instead of detailing the mechanics of creating one.

  1. Consider your Testing Goals. – Are you looking to understand how users perform a specific set of tasks? Do you need to watch users interact as naturally as possible with the system? Or are you trying to get users’ responses to various experimental ideas and concepts? The answers to these questions will help you make some key decisions about the structure of your prototype.
  2. Decide on Degree of Fidelity. What level of fidelity will the prototype achieve? Here, the term fidelity refers to the degree to which a prototype accurately reproduces the intended final system. A low-fidelity prototype might be a PowerPoint deck of wireframes. A high-fidelity prototype could be an interactive simulation with active buttons and representative content. A good rule of thumb is to develop the lowest-fidelity prototype possible to achieve the goals of your study. This avoids over-commitment to the ideas presented and allows more time and money for the recommended iterative process. If a significant amount of time is taken to create an initial prototype with all the bells and whistles, designers are less willing to see when the concept is not working and less likely to change their designs. In addition, the amount of time that goes into building one high-fidelity prototype is often better spent building multiple lower-fidelity versions that allow for more testing in between each revision.
  3. Scripted Tasks or Natural Exploration? – Another consideration when defining your testing prototype is what content and functionality should be included. Will a preset walkthrough of key screens be sufficient, or do the goals of your study require that users be able to find their own way around the system? On average, I tend to think that enough insight can be gleaned from a series of walkthrough tests and other research methods to warrant using these, leaving the open-ended, user-directed tests to be conducted on a product beta or via A/B testing[1] on a live system. With a sufficiently complex system, you can quickly hit a point of diminishing returns regarding the amount of effort it takes to simulate functionality versus actually building it.
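The A/B split mentioned above is commonly implemented by hashing a stable user identifier into a bucket, so each visitor consistently sees the same variant across sessions. A minimal sketch, assuming a string user ID; the function name and the 10 percent split are illustrative, not any particular tool’s API:

```python
import hashlib

def assign_variant(user_id: str, experiment_pct: int = 10) -> str:
    """Deterministically map a user to a bucket 0-99; users whose bucket
    falls below experiment_pct see the experimental design."""
    digest = hashlib.sha256(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    return "experiment" if bucket < experiment_pct else "control"
```

Because the assignment is a pure function of the identifier, no per-user state needs to be stored, and analysis can re-derive each user’s bucket after the fact.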

Additional Resources

  • Microsoft Visio (office.microsoft.com/en-us/visio) – The “old school” standby for wireframes and workflows.
  • Adobe InDesign (www.adobe.com/products/indesign) – Arguably the industry-standard tool: a layout tool with key functionality conducive to prototype development.
  • Balsamiq – (www.balsamiq.com) – Good low-cost alternative when you only need the basics.
  • Further Reading – Rettig, Marc. “Prototyping for Tiny Fingers.” Communications of the ACM. Vol. 37, No.4. April, 1994.


[1] A/B Split Testing refers to a testing method where in a live system an alternate, experimental design is shown to a percentage of users while the baseline control design is shown to the rest. Analysis comes from observing any notable differences in user behavior between the experimental design and the control.

An Introduction to User Testing

“You can see a lot by observing.” – Yogi Berra

User testing is an established method for evaluating the effectiveness of an application or set of design concepts. It involves interviewing and observing users as they interact with a system or prototype. User testing is commonly used to help validate an interface, resulting in a set of insights to improve a specific design. However, depending on the goals of your research, user testing can also be used to elicit feedback from users about concepts, which can then inform additional research. From an implementation (and cost) standpoint, user testing can be as simple as a handful of users discussing design sketches on paper, or as formal as a sophisticated lab study with a dozen carefully screened users and a panel of observers.

The value and accuracy of a user testing study is measured not by the technology used, but by the appropriateness of the interview techniques and the competence of the moderation. That said, there are a number of ways to conduct user testing.

Lab Testing – User testing conducted with users in a lab environment. While lab configurations vary, the basic components are most often a computer setup with an additional chair for an interviewer, some method of capturing the session such as a video camera, and a two-way mirror for observers in an observation area. Some labs approximate a home environment with couches and furniture. However, a sophisticated setup with an observation area is not necessary. The first user testing lab I created was in an extra office that had been left vacant because it was an odd, triangular space, which made an observation window or other advanced amenities impractical. We got by with a desk, a computer, two chairs, a webcam, and screen-capture software.

Remote Testing – Remote testing is conducted with users in their home or office environment. The interviewer conducts the study remotely via testing software. Depending on the software used, the interviewer can speak with the user, see his or her actions on the computer screen, and view the user via webcam. In addition to the sometimes significant cost savings compared to lab testing, remote testing allows users to stay in their own environment using their own setup, and so can provide more true-to-life observations. It also allows you to recruit users from diverse populations and markets without having them come to a set location.

Field Testing – The most accurate type of user testing is conducted by the researcher in the user’s environment. The interviewer may go to users’ homes or offices and sit with them as they use the system, or even ride with them in their cars to monitor mobile phone use. Field testing may not be practical or economical in all cases, especially when testing consumer-oriented products. However, it can provide insights that are otherwise impossible to obtain. Field research is particularly useful in a business context for testing operational systems such as billing or call center applications. It can be exceedingly difficult for users to give substantive feedback on transactional systems without using them in the context of their day-to-day work.

In addition to these ways of testing, there are a few different interview techniques.

Task-Oriented Inquiry – As the name implies, task-oriented inquiry asks the user to perform a specific task or set of tasks, e.g., “How would you…?” The researcher can then observe the user and ask follow-up questions about what they are thinking and how they perceive the process (sometimes called the “think-aloud” protocol). When conducting the study, it is valuable both to observe and to ask the user to evaluate the tasks. Often, the user’s perception of the process deviates from what the observational data shows; for example, a user might in reality struggle with a task, but then report that it was easy. Having both types of information provides a clearer window into the user’s mindset and what’s actually going on. Task-oriented inquiry is particularly useful for evaluating a system design and for validating against standard usability metrics.

Contextual Inquiry – Contextual inquiry is observational data collected as users use a system, i.e., a user “show-and-tell”. In the strictest sense, it is a field study technique in which the researcher observes users in their own home or office, interacting not only with the system but with their environment (answering the phone, talking with co-workers, etc.). However, the basics of a contextual inquiry can be used in a lab or remote testing scenario. The researcher asks the user to use an application or website the way they naturally would, and then observes them interacting with the system, for example: “Please show me how you normally access your favorite shopping sites.” The researcher may ask clarifying questions as needed, for example: “I noticed you went to a search engine first; why is that?”

Ethnographic Interviewing – A variation on contextual inquiry, ethnographic interviewing is where, instead of directly observing the user interact with a system, the researcher asks questions about the environmental issues around system usage. While this type of information is considered most accurate when directly observed in the user’s environment (i.e., in an ethnographic study), ethnographic interviewing can offer substantive insights when direct observation is impractical or impossible. For example: “We’re interested in how you shop online. Tell me, when and where do you usually do your online shopping? You said you shop from your desk at work; how is your desk set up?”

Why Conduct User Testing?

  • User testing allows you to gather direct feedback from users and collect observational data that will help you improve your designs.
  • User testing will reveal the majority of usability problems before you release the software.

When is User Testing Most Useful?

  • When you want to validate the success of your system design.
  • When you want to explore the concepts and contexts of a potential system.

Development Lifecycle

User testing is often used during the early design stages to test concepts and in later design stages to drill down into the most successful designs for intended use. User testing is also helpful in the quality assurance phase to evaluate implementation details.

Limitations of User Testing

  • Tests are limited by the quality of your test materials and methodology.
  • User testing will never be as accurate as beta testing, nor will it identify 100% of the issues that will occur in the field.

How to Conduct User Testing

  1. Define your research plan. A research plan for user testing includes considerations such as the goals of your research, what you will be testing, who you will be testing, how many people you will test, how you will recruit participants, and the mechanics of how you will conduct the research itself. At a high level you’ll need: a participant recruiting screener, a script (more formally called a test protocol), a testing location, a moderator, participants, and your prototype or live system. Your study can be a simple “friends and family” paper prototype test or a formal study. Either way, having a thought-out, documented plan will facilitate the process and lend credibility to your study once complete.
  2. Develop a Screener. A recruiting screener is the set of criteria by which you will select your participants. The screener is used to determine whether a potential participant matches the characteristics and demographics defined by your research plan. It should not only disqualify users based on their answers, but should also indicate how many of each type of user (such as the number of users in each age range) need to be recruited for the study.
  3. Develop a Test Script. A test script is the outline of the discussion and the questions or topics the researcher or moderator will cover with the participant when conducting the test. It should include a full walkthrough of the session: the welcome, the purpose of the study, ice-breaker topics, permission requests, evaluation scenarios or questions, closing feedback, and handing out any incentive once the test is complete.
  4. Moderator & Location. You’ll need to identify your moderator and testing location in accordance with your research plan. You want the user to be comfortable and feel free to respond honestly. A usability testing session is usually an artificial environment so it is important to put users at ease so they will behave as naturally as possible. Moderators should be able to be objective and ask questions to elicit feedback without swaying results.
  5. Recruit Participants. There are a number of ways you can recruit; for larger formal studies it is common to hire a market research firm to source participants. However, you can also build your own list of participants. Normally, you do not need more than five participants for most user research tests (Nielsen 2000). However, anticipate that people will back out or not show up, and recruit a few alternates.
  6. Conduct Testing. Before you get started, make sure the participant is familiar with the environment and understands that you are not testing them, but the system. Follow the test script, but be open to actions that may fall outside the predefined activities. You may need to balance letting users go off on their own, with reining them back to predefined tasks.
  7. Analyze Results. Categorize your findings and bubble up relevant insights for your report. If you outsource testing, personally view all the interviews or review the video. Summaries are helpful but are only one interpretation; you’ll miss a lot if you don’t see for yourself.
  8. Schedule Readout. As with all user research methods, conducting the study is only half the process; you need to evangelize the results. After conducting the readout, publish your documentation and let people know where you’ve placed the information.
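The five-participant guideline in step 5 (Nielsen 2000) comes from a simple problem-discovery model: if a single participant reveals, on average, a proportion L of the usability problems, then n participants reveal roughly 1 − (1 − L)^n of them, with L averaging about 31% in Nielsen’s data. A quick sketch of the math (Python; the 31% figure is Nielsen’s published average, not a universal constant, and your own discovery rate will vary by product and task):

```python
# Problem-discovery model behind the "five users" guideline:
# found(n) = 1 - (1 - L)^n, where L is the share of usability
# problems a single participant reveals (~0.31 on average in
# the data Nielsen reported).

def problems_found(n, L=0.31):
    """Expected share of usability problems found by n participants."""
    return 1 - (1 - L) ** n

for n in (1, 3, 5, 10, 15):
    print(f"{n:2d} participants -> {problems_found(n):.0%} of problems")
```

Under this model, five participants already surface roughly 85% of the problems, which is one reason practitioners often prefer several small rounds of testing over a single large study.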

Figure 6: User Testing with Paper Prototypes. Users are asked to describe what they see as well as to expound on how they would expect to perform certain tasks.

Additional Resources

  • Ethnio (www.ethnio.com) – recruit people from your website for research
  • UserVue (www.techsmith.com/uservue) – Web-based remote user testing tool
  • Morae (www.techsmith.com/morae) – User testing application
  • Craigslist (www.craigslist.org) – Popular community board to recruit participants

User Experience Design Myths

This article attempts to dispel some common myths you may have heard about adopting user research or a user-centered design approach.

We Don’t Need UXD Because…

  • Our users are early adopters, or are our employees…
  • We are just trying to create a technical proof of concept…
  • We work iteratively in an Agile development environment…

Whatever the reason, consider this: there is no demographic that likes a poorly designed product. UXD foundations are arguably the most direct and efficient means to a well-designed product, and they can be adapted to any type of application or product lifecycle stage. Therefore, while your product may not need a complex UXD process or a specialized UXD team, the basics of user-centric design are always appropriate and will consistently result in a better product when appropriately applied.

UXD is Too Expensive

UXD methods can easily be modified to fit your budget. Each research method in this guide can be adapted to be implemented inexpensively and with very meager resources. In fact, an appropriately defined research plan should reduce costs by getting you much closer to what your end users want, with far fewer problems post-release. Maintenance costs related to unmet or unforeseen user needs can be as high as 80% of overall development lifecycle costs (Pressman, 1992). There is good reason why UXD is a growing field within software design: it shows a strong ROI.[1]

UXD Slows Down the Development Cycle

UXD can reduce and simplify the product development process. A common misconception about user-centric design is that it adds to, and slows down, the development cycle. And yes, UXD can be incorporated in such a way that it needlessly adds time to the process, but it does not have to. In fact, as early as 1991, a study found that usability engineering reduced product development cycles by 33 to 50% (Bosert, 1991). A well-integrated, process-appropriate UXD effort will not only produce a more successful product, but will also reduce development time and costs.

UXD is Mostly Useful for Consumer Products

Some form of UXD is useful for any system a user can interact with. Can users easily find what they need? Are common tasks simplified rather than an unnecessary drudgery? Are the labels clear, and do they use commonly accepted terms? All these UX-related questions are as relevant to a consumer-focused e-commerce site as they are to a billing operations system. UX methods can be adapted and applied to informational and marketing interfaces as well as transactional applications. The main difference is the goal: successful UXD on a consumer product usually drives more sales, while success in an operations system usually takes the form of higher adoption rates and increased productivity.

UXD Only Affects the Presentation Layer

UXD is more than skin deep. Don’t get me wrong: with a background in the arts, I see the value in making things look good, and most people respond to a pleasing visual design. But thinking of UXD as a presentation-layer process will substantially limit its ability to improve your product. A good example of how UXD has an impact on functionality is the case of faceted categorization for parametric searches[1]. User research can not only help you determine how these fields should be laid out, but also how to categorize the data, what to name categories, and what kinds of search groupings users want.
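To make the faceted-search example concrete, here is a minimal sketch (Python, with an invented toy catalog; the field names, price bands, and labels are illustrative assumptions, and they are exactly the kinds of choices user research would inform):

```python
# Minimal faceted (parametric) search over a toy product catalog.
# User research informs which facets exist, what they are called,
# and how values are grouped (e.g. price *bands* vs. raw prices).
from collections import Counter

CATALOG = [
    {"name": "Camera A", "brand": "Acme",  "price": 199},
    {"name": "Camera B", "brand": "Acme",  "price": 549},
    {"name": "Camera C", "brand": "Zenix", "price": 325},
    {"name": "Camera D", "brand": "Zenix", "price": 89},
]

# A grouping like this is a design decision research can validate.
PRICE_BANDS = [(0, 100, "Under $100"),
               (100, 400, "$100-$399"),
               (400, 10**9, "$400 and up")]

def price_band(price):
    for lo, hi, label in PRICE_BANDS:
        if lo <= price < hi:
            return label

def facet_counts(items, facet):
    """Count how many items fall under each value of a facet."""
    return Counter(facet(item) for item in items)

def filter_by(items, facet, value):
    return [item for item in items if facet(item) == value]

brand = lambda item: item["brand"]
band = lambda item: price_band(item["price"])

print(facet_counts(CATALOG, brand))   # items per brand
print(facet_counts(CATALOG, band))    # items per price band
print([item["name"] for item in filter_by(CATALOG, band, "Under $100")])
```

The code is trivial; the hard part, and the part research addresses, is choosing facets, labels, and groupings that match how users actually think about the data.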

User Research Will Help Us Define the Right User Experience

Well, you can try. But it is important to understand that there is no single “correct” user experience for a product. The process of interpreting the results of user research, and deciding how the resulting insights should be translated into final designs, is probably best considered an art informed by science. It’s quite possible (and common) to have two totally different, yet viable, directions that both address the same user goals and requirements. (In these cases, additional criteria can be used to determine which direction to take, e.g., budget, time, brand alignment, or other features.) Despite its inherent lack of absolutes, user research can, however, give you the best educated guess possible regarding your users’ behavior, addressing upwards of 80% of issues before taking a product to market.

User Research is Market Research

User research is not market research. While there is a strong brand/marketing component to user experience design, the research methods are distinct, with different methodologies, considerations, and results. Unlike market research, UX research is less concerned with what features are available or what the marketing messages are than with how successfully a specific design articulates its features and how usable and accessible the product is for the end user.

Market research is business-centric; it analyzes data in an attempt to move people to action. User research is (you guessed it) user-centric; its goal is to analyze user behaviors and preferences in order to design for them better. Often, these two are means to the same end. Sometimes, however, there is a conflict. An e-commerce website could have a promotional pop-up screen that most users find annoying, despite the fact that it generates a good deal of revenue. The UXD practitioner should be free to advocate for the users’ goals, and the marketer for the business’s goals. Any resulting compromises should be considered in the broader context of the company’s brand.

User Research is Only Useful During Requirements Gathering

While user research is very useful during requirements gathering, there are real benefits to incorporating it throughout the software development lifecycle. Validation is a big part of user research: what sounded like a good idea and tested well in the design phase might not work as well in its final implementation. A number of changes and revisions inevitably occur during development, so it’s important to retest and validate your release after all the pieces of the puzzle have been put together. This can be achieved with participant-based user testing or by providing structured feedback mechanisms in a beta or limited pilot program.

Here are some sample User Research methods for each phase of the software development lifecycle:

  • High-Level Requirements
    o Ethnographic Studies
    o Concept Prototypes & Testing
    o Surveys with a Broad Focus
    o Competitive Reviews
  • Detailed Requirements
    o Validation Tests for Screens and Workflows
    o Tactical Surveys
    o Graphic Design Reviews
  • Development & QA Testing
    o Validate Changes and Workarounds
  • Release/Deployment
    o Feedback Forms
    o User-Centric Beta or Pilot Testing
  • Maintenance
    o A/B Split Testing
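A/B split testing, listed above under Maintenance, is one research method that reduces to straightforward arithmetic: route a share of live traffic to each of two designs and compare conversion rates. A hedged sketch (Python, standard library only; the traffic figures are invented) using a two-proportion z-test to judge whether an observed difference is more than chance:

```python
# Two-proportion z-test for an A/B split test: did design B's
# conversion rate differ from design A's more than chance allows?
import math

def ab_test(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return p_a, p_b, z, p_value

# Invented example: 2,400 visitors per variant.
p_a, p_b, z, p = ab_test(conv_a=120, n_a=2400, conv_b=168, n_b=2400)
print(f"A: {p_a:.1%}  B: {p_b:.1%}  z={z:.2f}  p={p:.4f}")
```

With these invented numbers, the lift from 5.0% to 7.0% is comfortably past the conventional 0.05 significance threshold; in practice you would also fix the sample size and test duration before looking at the data.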

The Politics of the Artichoke: Selling your ideas in an organization, one stakeholder at a time

PHILADELPHIA, May 3, 2010 – Software strategist Dorothy M. Danforth will give a presentation May 5 at 1 p.m. on “The Politics of the Artichoke: Selling Your Ideas in a Large Organization, One Stakeholder at a Time” at J. Boye Philadelphia 2010, the premier Northeast conference for online professionals both inside and outside the firewall, May 4-6.

“The politics of the artichoke (or “la politica del carciofo”) is an Italian expression referring to a savvy strategy that deals with your opponents one at a time,” Danforth said. “In this case study, I’ll discuss how—as a consultant to Comcast working with a small internal team—our group was able to successfully give our interactive design ideas a broad, far-reaching life of their own.”

Success lies in a group’s ability to evangelize a plan, socialize it throughout an organization, evolve the plan, and allow others to take the ownership needed to see it to fruition, said Danforth. “As we look at this case study, we’ll go over the details of how we made that happen, step by step.”

Danforth will also be hosting a discussion on “Delivering on Your Brand’s Promise through User Experience Design”. The roundtable will focus on how to develop on-brand user experiences across multiple platforms and how UXD as a practice can promote better brand alignment through its methodologies.

The conference is organized by J. Boye, an international, independent networking and knowledge-sharing firm with more than 250 member organizations. For more information about the conference, go to http://www.jboye.com/conferences/Philadelphia2010.

About Danforth Media
Danforth Media is a Philadelphia-based software design consultancy specializing in User Experience Design (UXD) for desktop, Web, mobile, and set-top devices. Services include user-centered research and design strategy. Dorothy M. Danforth, the company’s founder and principal, has fifteen years’ experience in software usability design and research, working with Fortune 500 and emerging technology companies. For more information, go to www.danforthmedia.com.

Easy-to-Read Guide Turns User Experience Research into Practical Tool for Any Business
New Book Gives Comprehensive Overview, How-To Tips

FOR IMMEDIATE RELEASE

CONTACT:

Dorothy M. Danforth
Danforth Media
215-439-8173
dorothy@danforthmedia.com

PHILADELPHIA, PA, Dec. 17, 2009 – For software professionals who want to dip a toe into user experience research, Dorothy M. Danforth has produced a comprehensive yet highly readable guide that relies on real-world examples from Fortune 500 companies to highlight key concepts and outline practical applications.

Written in plain English, User Experience Research: Stories from the Field uses examples, anecdotes, resources, and practical templates from completed and on-going research efforts to provide an easy-to-understand overview of the field and its usefulness in software design.

“Professionals can read this on a plane or in a day or so and come away with not only a foundational understanding of the methods, but also ideas, tips and tricks to help them start using these techniques in their own organizations,” said Danforth.

The easy-to-read guide provides a framework for using multiple types of insight-generating research to reveal a more holistic and realistic view of how users will likely respond to a system. It includes an overview of the most common user experience research methods, each supplemented with context for when and how to use the method and what insights it might offer.

Software developers, graphic designers, information architects, product managers, and other information-technology professionals who produce, design, or develop software can purchase the eBook, published by the IEEE Computer Society, at http://www.computer.org/portal/web/readynotes.

So far, the guide has garnered good reviews. One independent reviewer called it “a very interesting read with well-presented positions.” Another wrote, “There’s a lot of good content in there, and I really like that [it summarizes] each technique with strengths, weaknesses, and further references.”

About Danforth Media

Danforth Media is a Philadelphia-based software design consultancy specializing in User Experience Design (UXD) for desktop, Web, mobile, kiosk, and set-top devices. Services include user-centric research, interface design, prototyping, and software vendor analysis. Dorothy M. Danforth, founder and principal consultant for Danforth Media, has fourteen years’ experience in software design and usability research for Fortune 500 and emerging technology companies. An experienced speaker and UX evangelist, Dorothy has authored an eBook on user research methods through the IEEE Computer Society. In addition to research methods, the guide offers a number of vital tips and tricks for fostering UX best practices within an organization. For more information, go to www.danforthmedia.com.

International User Experience Conference Draws 316, Showcases 47 Speakers

Philadelphia, PA (PRWEB) November 25, 2009

More than 300 programmers, information architects, and designers from around the globe met in Moscow to discuss emerging trends and best practices in User Experience Design, an approach that gives the needs, wants, and limitations of end users top priority at each stage of the design process.

“It was interesting to see the complementary approaches taken by different designers around the world,” said Dorothy M. Danforth, a keynote speaker at the conference. “The Russian presentations tended to focus on hard data points, while the U.S. designers looked a bit more at accounting for intangibles.”

Danforth spoke at the conference on the foundational elements of user-focused research strategies for new products and ventures. She outlined various low-cost, high-impact methods available to Web designers and UX professionals when creating new products, scenarios for when and how to use these methods, and insights on how to get the most out of early-stage R&D processes.

Other speakers included Bill Buxton, Microsoft; Dmitry Satin, UsabilityLab, Russia; Silvia Zimmermann, UPA International; Andrew Sebrant, Yandex; Theo Mandel, Consultant; Thyra Rauch, IBM; and Alexander Oboznov, Russian Academy of Science Institute of Psychology.

Moscow hosted UPA Europe, the 3rd annual User Experience Russia on Oct. 26-28. With a theme of “User experience design: the journey from discovery to advocacy”, the conference drew 316 attendees to the main conference sessions and 44 participants in specialty workshops that were transmitted as webinars.

“The conference pulled in top names from around the world to assess the current state of User Experience Design and talk about the future possibilities of focusing on the user,” said Danforth. “While still a growing field, over the past ten years or so user-centered design has emerged as the predominant approach to software design. With a user-focused approach, we are able to maximize ease-of-use when we roll out new products, reducing transition time and increasing productivity.”

About Danforth Media
Danforth Media is a Philadelphia-based software design consultancy specializing in User Experience Design (UXD) for desktop, Web, mobile, kiosk, and set-top devices. Services include user-centric research, interface design, prototyping, and software vendor analysis. Dorothy M. Danforth is founder and principal consultant for Danforth Media. An experienced speaker and UX evangelist, Dorothy is currently authoring an eBook on user research methods through the IEEE Computer Society. In addition to research methods, the guide will offer a number of vital tips and tricks for fostering UX best practices within an organization. For more information, go to http://www.danforthmedia.com/about.

###