Soliciting Quantitative Feedback

“A successful person is one who can lay a firm foundation
 with the bricks that others throw at him or her.”  – David Brinkley

User surveys and feedback forms are arguably the easiest and least expensive methods of gathering information about your users and their preferences. Like competitive analysis, surveys are also used in a marketing context. UXD surveys, however, contain more targeted questions about how usable a website or application is, and the relative ease with which people can access content. There are a few different types of user surveys: some use different questioning methods, while others approach the user at a different point in time.

Feedback forms – Feedback forms are the most common way to elicit input from users on websites, but they can also be implemented in desktop applications. A feedback form is simply a request for input on an existing application or set of functionality. It can be a single, passively introduced questionnaire that is globally available on a website for users to fill out, or a short one- or two-question mini-form placed at relevant points throughout an application, such as a help/support topic or when an application unexpectedly terminates.

Tips for Feedback:

  • Keep it simple. A shorter, simpler survey will be completed more frequently.
  • Always offer a comment field or other means for users to type free-form feedback. Comments should be monitored and categorized regularly.
  • If your marketing team has already established a feedback form, try to piggyback on their efforts. If not, consider collaboration. Much of the information collected in a feedback mechanism will be useful to both groups.

Surveys – A survey is not all that different from a feedback form, except that surveys are usually offered for a limited amount of time, involve some sort of targeted recruitment effort, and ask questions that are not necessarily related to an existing site or application. Surveys can be implemented in a few ways, depending on the goals of your research.

  • Intercept Surveys – Intercept surveys are commonly used on websites. An intercept survey attempts to engage users at a particular point in a workflow, such as while viewing a certain type of content. Intercept surveys usually take the form of a pop-up window or overlay message. Because they are offered at a specific point in a process (e.g., the jewelry section of an e-commerce department store), the results are highly targeted.
  • Online Surveys – Online surveys can be conducted to learn about a website, application, kiosk, or any other type of system; the survey itself, however, is conducted online. There are many advantages over traditional phone or in-person surveys, not the least of which is ease of implementation. A number of online survey tools are available that make it easy to implement your survey in minutes, and most offer real-time data aggregation and analysis.
  • Traditional Surveys – There is definitely still a place for surveys conducted in person or on the phone. While a bit more costly and time-consuming, these methods can reach users who would not otherwise be reachable online, or who normally will not take the time to fill out a survey on their own.

Why Conduct User Surveys?

  • User surveys are an effective, economical method of gathering quantitative input.
  • You can poll a very large number of users when compared to the number of users who participate in user testing or other research methods.
  • Since surveys (online versions in particular) are usually anonymous, you are likely to get more honest and forthright responses than with other forms of research.
  • Surveys can be implemented quickly, allowing you to test ideas in an iterative process.

When is a Survey Most Useful?

  • When you have specific, targeted questions that you would like answered by a statistically relevant number of people, e.g., polling attitudes.
  • When there is a specific problem that you want to investigate.
  • When you want to show the difference between internal attitudes and perceptions of an issue versus the attitudes and perceptions of your target market.
  • When you want direct feedback on a live application or web site.
  • When you want to use ongoing feedback to monitor trends over time, and gauge changes and unexpected consequences.

Development Life-cycle

Surveys are useful in the requirements gathering stage, as well as for follow-up feedback elicitation during the maintenance stage, i.e., post-release.

Limitations of User Surveys

  • User surveys are entirely scripted. So while skip logic[1] and other mechanisms can offer a degree of sophistication, there is little interactive or exploratory questioning. (A minimal sketch of skip logic follows this list.)
  • Users usually provide a limited amount of information in open-ended questions.
  • Because there is little room to explain or clarify your questions, surveys should be limited to gathering information about tangible, somewhat simplified concepts.
  • In the case of passive feedback forms, most people won’t think to offer positive feedback. In addition, not everyone with a complaint will take the time to let you know.
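
To make skip logic concrete, here is a minimal sketch of how branching rules can be expressed, written in Python. Everything in it (the question IDs, wording, answers, and the single branching rule) is invented for illustration; it is not the behavior of any particular survey tool.

  # Minimal, hypothetical sketch of skip logic: each rule maps a
  # (question, answer) pair to the next question to show. All IDs,
  # wording, and answers below are invented for illustration.
  questions = {
      "q1": "Have you used our search feature? (yes/no)",
      "q2": "How satisfied were you with the search results? (1-5)",
      "q3": "Any other comments?",
  }
  order = ["q1", "q2", "q3"]

  # If an answer matches a rule, jump to the target question;
  # otherwise fall through to the next question in order.
  skip_rules = {
      ("q1", "no"): "q3",  # non-users skip the satisfaction question
  }

  def run_survey():
      responses = {}
      i = 0
      while i < len(order):
          qid = order[i]
          answer = input(questions[qid] + " ").strip().lower()
          responses[qid] = answer
          target = skip_rules.get((qid, answer))
          i = order.index(target) if target is not None else i + 1
      return responses

  if __name__ == "__main__":
      print(run_survey())

Even this toy version shows the limitation noted above: every branch must be anticipated and scripted in advance, and there is no way to ask an unplanned follow-up question.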

How to Conduct a User Survey

  1. Define your research plan. As in any research effort, the best starting point will be to define and document a plan of action. A documented plan will be useful even if it’s just a bullet list covering a basic outline of what you will be doing. Some considerations for your user survey research plan are:
    • What are the goals of your survey? What are you hoping to learn from participants? Are you looking for feedback on current functionality, to explore concepts and preferences, or to troubleshoot an issue?
    • Who will you survey? Are you polling your existing users? Do you need to hear from businesses, teens, cell phone users, or some other targeted population?
    • How many responses will you target? While even a small number of responses is arguably more valuable than no input at all, you will want to try for a statistically relevant sample size. There are formulas that will tell you how many survey participants are needed for the responses to be representative of the population you are polling (a worked calculation follows the final step below). For most common UXD uses, however, 400-600 participants should be sufficient.[2]
  2. Select a survey tool. There are a number of online survey tools to choose from, so you'll need to select an application that fits your goals and budget. Even if you will be conducting the surveys in person or on the phone, consider using survey software, as it offers substantial time savings in processing and analyzing the results. Once you have selected the survey software, take some time to use the system and get to know what options are available. A basic understanding of your survey tool can help reduce rework as you define your questions.
  3. Develop your survey. This is usually the most time-consuming part of the process. While seemingly straightforward, it's quite easy to phrase questions in ways that bias responses, or that are not entirely clear to participants, resulting in bad or inaccurate data.
    1. Define your Questions. Consider the type of feedback you would like to get and start outlining your topics. Do you want to learn how difficult a certain process is? Do you need to find out if people have used or are even aware of certain functionality? Once you have a basic outline, decide on how to best ask each of your questions to get useful results. Some possible question types include:
      • Scale – Users are asked to rate something by degree, e.g., 1 to 10, or level of agreement. Be sure to include a range of options and a “not applicable” option where relevant.
      • Multiple Choice – Users are asked a question and given a number of pre-set responses to choose from. Carefully review multiple choice answers for any potential bias.
      • Priority – Users are asked to prioritize the factors that are important to them, sometimes relative to other factors. A great exercise to help users indicate priority is “Divide the Dollar,” where users are asked to split up a limited amount of money among a set of features, attributes, etc.
      • Open Ended – Users are asked a question that requires a descriptive response and are allowed to answer freely, usually via text entry. Open-ended questions should be used sparingly in this type of quantitative study; it's difficult to get consistent responses, and they are difficult to track and analyze.
    2. Edit & Simplify. Your goal is to write questions that the majority of people will interpret in the same way. Consider both phrasing and vocabulary. Try to write in clear, concise statements, using plain English and common terms. Your survey should be relatively short, and the questions should not be too difficult to answer. In addition, be sure to check and double-check grammar and spelling. I once had an entire survey question get thrown off due to one missing letter. We had to isolate the data and re-publish the survey.
    3. Review for Bias. Review the entire survey, including instructions or introductory text. Consider the sequence of your questions; are there any questions that might influence future responses? Take care not to provide too much information about the purpose of the survey; it might sway users' responses. Keep instructions clear and to the point. As with all research, you need to be ever vigilant in identifying areas of potential bias.
      Consider, for example, how question order and framing can sway responses:

      Would you like a piece of candy?
      Yes (95%)            No (5%)

      Would you like a piece of caramel?
      Yes (70%)            No (30%)

      The Surgeon General recently came out with a study that shows candy consumption as the most common cause of early tooth loss. Would you like a piece of candy?
      Yes (15%)            No (85%)

      Thinking Ahead
      Recruit for future studies. At the very end of your survey, ask users if they’d like to participate in future studies. If they agree, collect their contact details and some additional demographic information. This will help you build up your own database of participants for more targeted future testing. Many users will want to be assured that their information will not be connected to their survey responses, and that you will not resell their data.

  4. Implement & Internally Test. Implement the survey using the survey tool you selected. Next, test the survey internally; this can mean you and a few colleagues, or company-wide involvement. Testing the survey internally will help ensure your questions are clear and error-free, and that you are likely to get the type of feedback you want.
    Our Users, Ourselves

    Data from an internal test can be additionally helpful. Getting feedback from people in your company or department will likely produce biased results. While it's important to isolate that data before conducting the actual survey, you can use feedback from your internal group to illustrate any differences (or notable similarities) between their responses and those of your users, e.g., 90% of our employees use Twitter, while only 35% of our users do.

  5. Recruit Respondents. Your research plan should have outlined any appropriate demographics for your survey. But now that you are ready to make your survey live, how will you reach those people? If you have a high-traffic website or a large email list, you can use these to recruit participants. If not, you may need to get creative. Consider recruiting participants through social networking sites, community boards, or other venues frequented by your target population.
  6. Prepare the Analysis. One of the nice things about online survey tools is that the analysis is usually a breeze. You may, however, want to put together an overview of some of the insights and interpretations you gathered from the data, e.g., “Since a relatively small number of our customers tend to use social networking sites for business referrals, we may want to lower the priority of getting Twitter integrated onto our website in favor of other planned functionality.” As with all UXD research, present your findings and publish any results.
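
The sample-size figure referenced in step 1 is worth seeing worked out. For estimating a proportion, the standard formula is n = z² × p(1 − p) / e², where z is the z-score for your confidence level, p is the expected proportion (0.5 is the most conservative choice), and e is the margin of error. A quick sketch in Python:

  import math

  def sample_size(z: float = 1.96, p: float = 0.5, e: float = 0.05) -> int:
      """Participants needed to estimate a proportion.

      n = z^2 * p * (1 - p) / e^2, with defaults for a 95%
      confidence level (z = 1.96) and worst-case p = 0.5.
      """
      return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

  # A 95% confidence level with a 4-5% margin of error lands
  # roughly in the 400-600 range cited above:
  print(sample_size(e=0.05))  # 385
  print(sample_size(e=0.04))  # 601

This assumes a large population; for small populations, a finite population correction reduces the required sample, which is what calculators like the one listed under Additional Resources apply.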

Additional Resources

  • Sample Size Calculator (www.surveysystem.com/sscalc.htm) – a sample size calculator that will determine the sample size required for a desired confidence interval.
  • SurveyMonkey (www.surveymonkey.com) – offers a popular online hosted survey tool that works well for basic surveys.
  • SurveyGizmo (www.surveygizmo.com) – is comparable to SurveyMonkey, but offers somewhat less robust reporting at a slightly lower fee.
  • Ethnio (www.ethnio.com) – is primarily a recruitment tool, but offers some basic survey functionality. Ethnio can be used to drive users from your website to another third party survey.

 

 


[1] Skip logic refers to the ability to skip a question or set of questions in a survey based on the user's response to a preceding question.

[2] For a survey of the general population where the data has a 95% confidence level and a 4-5% margin of error.

Card Sorting: A Primer

“Change your language and you change your thoughts.” – Karl Albrecht

Card sorting is a specialized type of user test that is useful for assessing how people group related concepts and what common terminology they use. In its simplest form, a card sort is the process of writing the name of each item you want to test on a card, giving the cards to a user, and asking him or her to group like items into piles. There are, however, a number of advanced options and different types of card sorting techniques.

Open Card Sort – An “open” card sort is one in which you do not provide users with an initial structure. Users are given a stack of index cards, each with the name of an item or piece of content written on it. They are then asked to sort through and group the cards, putting them into piles of like items on a table. Users are then asked to label each pile with the name they think best represents that group. Open card sorts are usually conducted early in the design process because they tend to generate a large amount of wide-ranging information about naming and categorization.

Given the high burden on participants and researchers, I personally find an open card sort to be the least attractive method for most contexts. It is, however, the most unbiased approach. As a general rule, I would reserve this method for testing users with a high degree of expertise in the area being evaluated, or for large scale exploratory studies when other methods have already been exhausted.

Closed Card Sort – The opposite of an open sort, a “closed” card sort is one in which the user is provided with an initial structure. Users are presented with a set of predefined categories (usually on a table) and given a stack of index cards of items. They are then asked to sort through the cards and place each item into the most appropriate category. A closed sort is best used later in the design process. Strictly speaking, participants in a closed sort are not expected to change, add, or remove categories. However, unless the context of your study prevents it, I would recommend allowing participants to suggest changes and having a mechanism for capturing this information.

Reverse Card Sort – Also called a “seeded” card sort. Users are shown an existing structure, such as a full site map laid out on index cards on a table. They are then asked to review the structure and suggest changes, moving the cards around and re-naming items as they see fit. A reverse card sort has the highest potential for bias; however, it's still a relatively effective means of validating (or invalidating) a taxonomic structure. The best structures to use are ones that were defined by an information architect or someone with a high degree of subject matter expertise.

Modified Delphi Card Sort (Lyn Paul 2003) – Based on the Delphi Research Method[1], which in simple terms refers to a research method where you ask a respondent to modify information left by a previous respondent. The idea is that over multiple test cycles, information will evolve toward a near consensus, with only the most contentious items remaining. In a Modified Delphi Card Sort, an initial user is asked to complete a card sort (open, closed, or reverse), and each subsequent user is asked to modify the card sort of their predecessor. This process is repeated until there is minimal fluctuation, indicating a general consensus. One significant benefit of this approach is ease of analysis: the researcher is left with one final site structure and notes about any issue areas.
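
“Minimal fluctuation” can be checked with very little math: measure the fraction of cards each participant moves relative to their predecessor, and stop recruiting once that rate stays near zero. The sketch below is a hypothetical illustration in Python; the cards, categories, and 10% threshold are all invented for the example.

  def fluctuation(before: dict, after: dict) -> float:
      """Fraction of cards whose category changed between two
      successive participants' sorts (card -> category dicts)."""
      cards = set(before) | set(after)
      moved = sum(1 for card in cards if before.get(card) != after.get(card))
      return moved / len(cards)

  # Hypothetical run: each dict is one participant's final sort.
  sorts = [
      {"dolls": "Toys", "batteries": "Toys", "gift cards": "Services"},
      {"dolls": "Toys", "batteries": "Accessories", "gift cards": "Services"},
      {"dolls": "Toys", "batteries": "Accessories", "gift cards": "Services"},
  ]

  THRESHOLD = 0.1  # e.g., stop once fewer than 10% of cards move
  for prev, curr in zip(sorts, sorts[1:]):
      print(f"fluctuation: {fluctuation(prev, curr):.0%}")
  # Prints 33%, then 0% -- by the last pair the structure has
  # stabilized, suggesting a general consensus.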

Online Card Sort – As the name implies, this refers to a card sort conducted online with a card sorting application. An online card sort allows for the possibility of gathering quantitative data from a large number of users. Most card sorting tools facilitate analysis by aggregating data and highlighting trends.
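
The aggregation these tools perform typically starts with a co-occurrence (similarity) matrix: for every pair of cards, the share of participants who placed them in the same pile. A minimal sketch in Python, with invented piles and cards:

  from collections import defaultdict
  from itertools import combinations

  # Each participant's sort: pile name -> list of cards. The piles
  # and cards below are invented for illustration.
  sorts = [
      {"Outdoor": ["kites", "bikes"], "Games": ["chess", "puzzles"]},
      {"Active": ["kites", "bikes", "puzzles"], "Board": ["chess"]},
  ]

  # Count how often each pair of cards lands in the same pile.
  together = defaultdict(int)
  for sort in sorts:
      for pile in sort.values():
          for a, b in combinations(sorted(pile), 2):
              together[(a, b)] += 1

  # Convert counts into a 0-100% similarity score.
  for pair, count in sorted(together.items()):
      print(pair, f"{count / len(sorts):.0%}")
  # ('bikes', 'kites') scores 100% -- a strong grouping signal;
  # ('chess', 'puzzles') scores 50% -- a candidate grey area.

Pairs that score high across many participants are strong candidates to live together in the final structure; pairs that split the audience are exactly the grey areas worth probing in follow-up sessions.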

Paper Card Sort – A paper sort is done in person, usually with standard index cards or sticky notes. Unlike an online sort, there is an opportunity to interact with participants and gain further insight into why they are categorizing things as they are.

Why Use Card Sorting?

  • Card sorting is a relatively quick, low cost, and low tech method of getting input from users.
  • Card sorting can be used to test the efficacy of a given taxonomic structure for a Web site or application. While commonly used for website navigation, the method can be applied to understand data structures for standalone applications as well.

When is Card Sorting Most Useful?

  • When designing new products or planning major redesign efforts.
  • When creating a filtered or faceted search solution, or evaluating content tags.
  • For troubleshooting, when other data sources indicate users might be having a hard time finding content.

Development Life-cycle

Card sorts are useful in the requirements gathering and design stages. Depending on where you are in the design process, you may get more or less value from a given method (open, closed, reverse, etc.).

Limitations of Card Sorting

  • The results of card sorting can be difficult and time consuming to analyze; results are rarely definitive and can reveal more questions than answers.
  • The results of a card sort will not provide you with a final architecture; they will only give you insight into possible directions and problem areas.

How to Conduct a Card Sort

Card sorts are one of those things that are somewhat easier to conduct than to explain. Because there are so many variations, I've decided to illustrate the concept with a walkthrough of an actual project case study. I was recently brought into a card sorting study by a colleague of mine[2] who was working on a complex taxonomy evaluation. The project was for a top toy retailer's e-commerce site. After weeks of evaluating traffic patterns and other data, my colleague had developed what he hoped would be an improved new site structure. He wanted to use card sorting techniques to validate and refine what he had developed.

  1. Define your research plan. Our research plan called for online closed card sorts to gather statistically relevant quantitative data, as well as the rather innovative idea of going to one of the retail locations and recruiting shoppers to do card sorting onsite. The in-store tests would follow a reverse sort using the modified Delphi method, i.e., shoppers would be shown the full site structure and asked to make changes. Each shopper would build off of the modifications of the previous shopper until a reasonable consensus was achieved.
  2. Prepare your materials. For the in-store card sorts, we needed to take the newly defined top- and second-level navigation categories and put each on its own index card. The cards would be laid out on two long banquet tables so participants could see the structure in its entirety. Single-page reference sheets of the starting navigation were printed so we could take notes on each participant and track progressive changes. We had markers and blank index cards for modifications. A video camera would be used to record results, and a participant consent form was prepared.
  3. Recruit Participants. Unlike lab-based testing, where you have to recruit participants in advance, the goal for the in-store testing was to directly approach shoppers. The idea was that not only would they be a highly targeted user group, but we would be approaching them in a natural environment that closely approximated their mindset when on the e-commerce site, i.e., shopping for toys. Because we would be asking shoppers for about 10-20 minutes of their time, the client provided us with gift cards, which we offered as an incentive/thank you. Recruitment was straightforward; we would approach shoppers, describe what we were doing, and ask if they would like to participate. We attempted to screen for users who were familiar with the client's website or at least had some online shopping experience.
  4. Conduct the Card Sort. After a participant agreed to take part and signed the consent form, we explained that the information on the table represented potential naming and structure for content on the e-commerce site. Users were asked to look through the cards and call attention to anything they didn't understand or would change. They could move items, rename them, or even take them off the table. Initially we let the participant walk up and down the table looking at the cards. Some would immediately start editing the structure, while others we needed to encourage (while trying not to bias results) by asking what they had come into the store for and where they might find that item, etc. After an initial pass, we would point out some of the changes made by previous participants, as well as describe any recurring patterns, to elicit further opinions. After about 15 participants, the site structure stabilized and any grey areas were fairly clearly identified.
     Figure 3: Sample Card Sort Display: Cut Index Cards on Table

  5. Prepare the Analysis. At the end of the study, there was a reference sheet with notes on each participant's session, video of the full study, and a relatively stable final layout. With this data, it was fairly easy to identify a number of recurring themes, i.e., issues that stood out as confusing, contentious, or as notable deviations from the original structure. As in any card sort, the results were not directly translatable to a final information structure. Rather, they provided insights that could be combined with other data (such as the online sorting results) to create the final taxonomy.

Additional Resources

 


[1] The Delphi Research Method http://www.iit.edu/~it/delphi.html

[2] David Cooksey, Founder & Principal, saturdave, Philadelphia, PA

An Introduction to User Testing

“You can see a lot by observing.” – Yogi Berra

User Testing is an established method for evaluating the effectiveness of an application or set of design concepts. It involves interviewing and observing users as they interact with a system or prototype. User testing is commonly used to help validate an interface, resulting in a set of insights to improve a specific design. However, depending on the goals of your research, user testing can also be used to elicit feedback from users about concepts, which can then inform additional research. From an implementation (and cost) standpoint, user testing can be as simple as a handful of users discussing design sketches on paper, or as formal as a sophisticated lab study with a dozen carefully screened users and a panel of observers.

The value and accuracy of a user testing study is not measured by the technology used, but by the appropriateness of the interview techniques and the competence of the moderation. That said, there are a number of ways to conduct user testing.

Lab Testing – User testing conducted with users in a lab environment. While lab configurations vary, the basic components are most often a computer setup with an additional chair for the interviewer, some method of capturing the session such as a video camera, and a two-way mirror for observers in an observation area. Some labs approximate a home environment with couches and furniture. However, a sophisticated setup with an observation area is not necessary. The first user testing lab I created was in an extra office that had been vacant because it was an odd, triangular-shaped space. This made an observation window or other advanced amenities impractical. We got by with a desk, computer, two chairs, a webcam, and screen-capture software.

Remote Testing – Remote testing is conducted with users in their home or office environment. The interviewer conducts the study remotely via testing software. Depending on the software used, the interviewer can speak with the user, see his or her actions on the computer screen, and view the user via webcam. In addition to the sometimes significant cost savings compared to lab testing, remote testing allows users to stay in their own environment using their own setup, and so can provide more true-to-life observations. It also allows you to recruit users from diverse populations and markets without having them come to a set location.

Field Testing – The most accurate type of user testing is conducted by the researcher in the user's environment. The interviewer may go to a user's home or office and sit with them as they use the system, or even ride with them in their car to monitor mobile phone use. Field testing may not be practical or economical in all cases, especially when testing consumer-oriented products. However, it can provide additional insights that are otherwise impossible to obtain. Field research is particularly useful in a business context for testing operational systems such as billing or call center applications. It can be exceedingly difficult for users to give substantive feedback on transactional systems without using them in the context of their day-to-day work.

In addition to ways of testing, there are a few different types of interview techniques.

Task-Oriented Inquiry – As the name implies, task-oriented inquiry is when a user is asked to perform a specific task or set of tasks, i.e., “How would you…?” The researcher can then observe the user and ask follow-up questions about what they are thinking and how they perceive the process (sometimes called the “think out loud” methodology). When conducting the study, it is valuable both to observe and to ask the user to evaluate tasks. Often, the user's perception of the process deviates from what the observational data shows. For example, a user might in reality struggle with a task, but then indicate that it was easy. Having both types of information provides a clearer window into the user's mindset and what's actually going on. Task-oriented inquiry is particularly useful for evaluating a system design and for validating against standard usability metrics.

Contextual Inquiry – Contextual inquiry is observational data collected as users use a system, i.e., user “show-and-tell”. In the strictest sense, it is a field study technique by which the researcher observes users in their own home or office, interacting not only with the system but with their environment (answering the phone, talking with co-workers, etc.). However, the basics of a contextual inquiry can be used in a lab or remote testing scenario. The researcher asks the user to use an application or website the way they would naturally, and then observes the user interacting with the system, for example: “Please show me how you normally access your favorite shopping sites.” The researcher may ask clarification questions as needed, for example: “I noticed you went to a search engine first; why is that?”

Ethnographic Interviewing – A variation on contextual inquiry, ethnographic interviewing is where, instead of directly observing the user interact with a system, the researcher asks questions about the environmental issues around system usage. While this type of information is considered most accurate when directly observed in the user's environment (i.e., in an ethnographic study), ethnographic interviewing can offer substantive insights when direct observation is either impractical or impossible. For example: “We're interested in how you shop online. Tell me, when and where do you usually do your online shopping? You said you shop from your desk at work; how is your desk set up?”

Why Conduct User Testing?

  • User testing allows you to gather direct feedback from users and collect observational data that will help you improve your designs.
  • User testing will reveal the majority of usability problems before you release the software.

When is User Testing Most Useful?

  • When you want to validate the success of your system design.
  • When you want to explore the concepts and contexts of a potential system.

Development Life-cycle

User testing is often used during the early design stages to test concepts and in later design stages to drill down into the most successful designs for intended use. User testing is also helpful in the quality assurance phase to evaluate implementation details.

Limitations of User Testing

  • Tests are limited by the quality of your test materials and methodology.
  • User testing will never be as accurate as beta testing or identify 100% of the issues that will occur in the field.

How to Conduct User Testing

  1. Define your research plan. A research plan for user testing includes considerations such as the goals of your research, what you will be testing, who you will be testing, how many people you will test, how you will recruit participants, and the mechanics of how you will conduct the research itself. At a high level you'll need: a participant recruiting screener, a script (more formally called a test protocol), a testing location, a moderator, participants, and your prototype or live system. Your study can be a simple “friends and family” paper prototype test or a formal study. Either way, having a thought-out, documented plan will facilitate the process and lend credibility to your study once complete.
  2. Develop a Screener. A recruiting screener is the criteria by which you will select your participants. The screener is used to determine whether a potential participant matches the characteristics and demographics defined by your research plan. The screener should not only disqualify users based on their answers to questions, but also indicate how many of each type of user (such as the number of users in each age range) need to be recruited for the study.
  3. Develop a Test Script. A test script is the outline of the discussion and the questions or topics the researcher or moderator will cover with the participant when conducting the test. It should include a full walkthrough of the test, such as the welcome, purpose of the study, ice-breaker topics, permission requests, evaluation scenarios or questions, closing feedback, and handing out any incentive once the test is complete.
  4. Moderator & Location. You'll need to identify your moderator and testing location in accordance with your research plan. You want the user to be comfortable and feel free to respond honestly. A usability testing session is usually an artificial environment, so it is important to put users at ease so they will behave as naturally as possible. Moderators should be objective and able to ask questions that elicit feedback without swaying results.
  5. Recruit Participants. There are a number of ways you can recruit; for larger formal studies it is common to hire a market research firm to find people. However, you can build your own list of participants. Normally, you do not need more than five participants for most user research tests (Nielsen 2000; see the worked example after these steps). However, anticipate that some people will back out or not show up, and recruit a few alternates.
  6. Conduct Testing. Before you get started, make sure the participant is familiar with the environment and understands that you are not testing them, but the system. Follow the test script, but be open to actions that may fall outside the predefined activities. You may need to balance letting users go off on their own, with reining them back to predefined tasks.
  7. Analyze Results. Categorize your findings and bubble up relevant insights for your report. If you outsource testing, personally view all the interviews or review the video. Summaries are helpful but are only one interpretation; you’ll miss a lot if you don’t see for yourself.
  8. Schedule Readout. As with all user research methods, conducting the study is only half the process; you need to evangelize the results. After conducting the read-out, publish your documentation and let people know where you’ve placed the information.
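
The five-participant guideline in step 5 rests on a simple discovery model: if a typical usability problem affects a proportion L of users, the share of problems found by n test users is 1 − (1 − L)^n; Nielsen and Landauer estimated L at roughly 0.31 per user. A quick worked example in Python:

  # Expected share of usability problems found by n test users,
  # using the 1 - (1 - L)^n model with Nielsen and Landauer's
  # estimate of L = 0.31 per user.
  L = 0.31
  for n in (1, 3, 5, 10):
      found = 1 - (1 - L) ** n
      print(f"{n:2d} users: {found:.0%} of problems found")

Five users already surface roughly 85% of problems, and each additional user adds less, which is why several small, iterative rounds of testing tend to beat one large study.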

Figure 6: User Testing with Paper Prototypes. Users are asked to describe what they see as well as to expound on how they would expect to perform certain tasks.

Additional Resources

  • Ethnio (www.ethnio.com) – recruit people from your website for research
  • UserVue (www.techsmith.com/uservue) – Web-based remote user testing tool
  • Morae (www.techsmith.com/morae) – User testing application
  • Craigslist (www.craigslist.org) – Popular community board to recruit participants