Tips for Establishing User Research in an Organization
Q. How was God able to create the world in seven days?
A. No legacy system.
Unlike “God” in this corny systems-integrator joke, our realities constantly require us to deal with legacy: systems, processes, and attitudes. This article discusses some critical success factors for getting credible, valuable research data. It also provides some ideas on project management and obtaining stakeholder buy-in. Many of these tips come directly from real project experiences, so examples are provided where applicable.
Be Prepared for Setbacks
A good friend and mentor of mine once told me—after a particularly disappointing professional setback—that it’s quite possible to do everything the “right” way and for things to still not work out. What he was saying, in short, is that not everything is in our control. This understanding liberated me to start looking at each of my efforts as a single attempt in an iterative process. As a result, when starting something new, I tend to try out a range of things to gauge a baseline for what works, what doesn’t, and what might have future potential. Anyone who has adopted this approach knows that as time passes, a degree of mastery is achieved and your failure percentage decreases. This doesn’t occur because you are better at “going by the book”; it happens because you get better at identifying and accounting for things outside your control.
That said, whether you are trying to introduce a UXD practice into your organization or are just looking to implement some user research, the most realistic advice I can offer is the Japanese adage “nanakorobi yaoki”: “fall down seven times, get up eight.”
Account for Organizational Constraints
You don’t need a cannon to shoot down a canary. Be realistic; consider the appropriateness of your research methods in the context of your organization’s stage and maturity. It will not matter how well designed or “best practice” your research is if the results cannot be adequately utilized. Knowing where you are in the lifecycle of a company, product, and brand will help set expectations about results. In this way, a product’s user experience will always be a balance between the needs of the user and the capacity of the organization to meet those needs. If you are developing in a small startup with limited resources, your initial research plans may be highly tactical and validation focused. You’ll probably want to include plans that leverage family and friends for testing, and rely heavily on existing third-party or purchased research. Alternatively, a larger organization with a mature product will need to incorporate more strategic, primary research, and will have use for more sophisticated methods of presenting and communicating research to a wide range of stakeholders.
Foster a Participatory Culture
Want buy-in? Never “silo” your user research.
Sometimes, particularly in large corporations, there can be a tendency for different departments to silo or isolate their knowledge. This can be for competitive “Fiefdom Syndrome”[1] reasons or, more often than not, simply from a lack of process to effectively distribute information. Whatever the challenges, there are some very practical, self-serving reasons to actively communicate your UXD processes and research. First, because anyone in your organization who contributes to the software’s design is likely to have an impact on the user experience, it’s your job to ensure that those people “see what you see” and are empowered to use the data you find. Second, UXD is an art, and like anything else with a degree of subjectivity, you’ll need credibility and support if you want your insights and interpretations accepted. Lastly, UXD is a complex process with many components; you will get more done faster if you encourage active company-wide participation.
Tips on fostering participation:
- Identify parts of your UX research that could be performed by other functional areas, e.g., surveys done by Customer Care, usability guidelines for Quality Assurance, or additional focus group questions for Marketing.
- Offer other areas substantive input into user testing, surveys, and other research, e.g., add graphic design mockups into a wireframe testing cycle and test them with users separately from the wireframes.
- Discuss process integration ideas with the engineering, quality assurance, product, editorial, marketing and other functions. Make sure everyone understands what the touch-points are.
- In addition to informing your own design efforts, present user research as a service to the broader organization; schedule time for readouts, publish your findings, and invite people to observe testing sessions.
Understand the Goals of Your Research
Are you looking to explore user behaviors and investigate future-thinking concepts? Or, are you trying to limit exploration and validate a specific set of functionality? There is a time and place for both approaches, but before you set out on any research effort it is important that you determine the overarching goal of your research. There are some distinct differences in how you implement what I’ll call discovery research vs. validation research—each of which will produce different results.
- Discovery Research, which can be compared to theoretical research, focuses on the exploration of ideas and on investigating users’ preferences and reactions to various concepts. Discovery research is helpful for new products, innovations, and some troubleshooting efforts. This type of research can complement market research, but unlike focus groups, UXD discovery research explores things such as unique interaction models, or user behaviors when interacting with functionality specific to search or social media.
- Validation Research, which can be compared to empirical research, focuses more on gauging users’ acceptance of a product already developed, or of a high- or low-fidelity prototype intended to guide development. While a necessary aspect of the UXD process, validation research tends to be more task-based and less likely than discovery research to call attention to false assumptions or superseding flaws in a system’s design. An example of a false assumption that might not be revealed in validation research is the belief that an enhanced search tool is necessary. The tool itself may have tested very well, but the task-specific research method failed to reveal that the predominant user behavior is to access your site’s content through a Google search. Therefore, you might have been better off enhancing your SEO before investing in a more advanced search.
Craft Your Research Strategy
Just as in any project effort, it is vital to first define and document your goals, objectives, and approach. Not only does this process help you make key decisions about how you want to move forward, it will serve as your guidepost throughout the project, helping you communicate activities to others. After the research is conducted, it lends credibility to your findings by explaining your approach. A well-crafted research strategy provides the appropriate breadth and depth for a more complete understanding of what you observe. Consider small, incremental research cycles using various tactics. An iterative, multi-faceted methodology allows for more cost-efficient project lifecycles. It also mitigates risk, since you only invest in what works.
Figure 2: An example of a “Discovery” research strategy developed for a media company. The strategic plan consisted of audits, iterative prototypes, user testing, and various events.
Consider a Research Calendar
A research calendar can help you manage communication as well as adapt to internal and external changes. It can help track research progress over time, foster collaboration, reduce redundancy, and integrate both team and cross-departmental efforts. A good research calendar should be published, maintained, and utilized by multiple departments. It should include recurring intervals for items such as competitive reviews and audits, calibrated to the needs of your product. Your calendar doesn’t have to be fancy or complicated; you can use an existing intranet, a company-wide Outlook calendar, or even a public event manager such as Google or Yahoo! Calendar. Regardless of the tool, your research calendar can help prevent people from thinking of user research as a one-off effort. User research should be considered a living, evolving, integral part of your development process—a maintained research calendar with a dedicated owner appropriately conveys this. If your company is small or you are just getting started with user research, consider collaborating with other departments to add focus groups, UAT events, and QA testing to the calendar as well. Not only will this foster better communication, it may also result in the cross-pollination of ideas.
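If you want something even lighter-weight than a shared calendar to start with, the bookkeeping can begin as a small script. Below is a minimal sketch in Python of recurring research activities and their next due dates; the activity names, intervals, and completion dates are hypothetical placeholders, not a prescription.

```python
from datetime import date, timedelta

# Hypothetical recurring research activities and their intervals, in days.
recurring_activities = [
    ("Competitive review", 180),   # roughly twice a year
    ("Usability test cycle", 60),
    ("Customer survey", 90),
]

# When each activity was last completed (illustrative offsets from today).
last_done = {
    "Competitive review": date.today() - timedelta(days=120),
    "Usability test cycle": date.today() - timedelta(days=45),
    "Customer survey": date.today() - timedelta(days=10),
}

# Compute and print the next due date for each activity, soonest first.
schedule = sorted(
    (last_done[name] + timedelta(days=interval), name)
    for name, interval in recurring_activities
)
for due, name in schedule:
    print(f"{name}: next due {due.isoformat()}")
```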
Be Willing to Scrap Your “Best” Ideas
It’s easy to become enamored with an idea or concept, something that interests us or “feels” right—despite the fact that the research might be pointing in a different direction. Sometimes this comes from a genuine intuitive belief in the idea; other times it’s simply the result of having invested so much time and/or money into a concept that you’re dealing with a type of loss aversion bias[2]. Even after years of doing this type of research, I have to admit this is still a tough one for me, requiring vigilance. I have seen colleagues, whom I otherwise hold in high regard, hold on to ideas regardless of how clear it is that it’s time to move on. This tendency, to find what we want to find and to structure research to confirm our assumptions, is always a possibility in user research. And while it can be mitigated by process and methodology, it still takes a degree of discipline to step back, play devil’s advocate to your best, most fascinating ideas, and look at them in the harsh light of the data being presented to you. Ask yourself: am I observing or advocating? Am I only looking at data that supports my assumptions, casting a blind eye to anything that contradicts them? It can be a painful process, but if the idea can hold up to objective scrutiny, you might actually be on to something good.
Questions About This Topic?
I’m happy to answer more in-depth questions about this topic or provide further insight into how this approach might work for you in your company. Post a comment or email me at dorothy [at] danforthmedia [dot] com.
ABOUT DANFORTH MEDIA
Danforth is a design strategy firm offering software product planning, user research, and user-centered design (UCD). We provide credible insights and creative solutions that allow our clients to deliver successful, customer-focused products. Danforth specializes in leveraging user experience design (UXD), design strategy, and design research methodologies to optimize complex multi-platform products for the people who use them.
We transform research into smart, enjoyable, and enduring design.
www.danforthmedia.com
[1] Herbold, Robert. The Fiefdom Syndrome: The Turf Battles That Undermine Careers and Companies – And How to Overcome Them. Garden City, NY: Doubleday Business, 2004.
[2] “Loss Aversion Bias is the human tendency to prefer avoiding losses above acquiring gains. Loss aversion was first convincingly demonstrated by Amos Tversky and Daniel Kahneman.” — http://www.12manage.com/description_loss_aversion_bias.html
Conducting a Solid UX Competitive Analysis
“Competition brings out the best in products and the worst in people.” – David Sarnoff
Most people are familiar with the concept of a competitive analysis; it’s a fairly standard business term for identifying and evaluating your competition in the marketplace. In the case of UXD, a competitive analysis is used to evaluate how a given product’s competition stacks up against usability standards and overall user experience. A comparative analysis is a term I’ve often used to describe the review of applications or websites that are not in direct competition with a product but may have similar processes or interface elements that are worth reviewing.
Often, when a competitive review is conducted, the applications or websites are reviewed against a set of fairly standard usability principles (or heuristics) such as layout consistency, grouping of common tasks, link affordance, etc. Sometimes, however, the criteria can be more broadly defined to include highlights of interesting interaction models, notable functionality and/or other items that might be useful in the context of the product being designed and/or goals of a specific release.
The Expert Review
Competitive reviews can be done in conjunction with an “expert” review, which is a usability evaluation of the existing product. If doing both a competitive and an expert review, it’s helpful to start with the competitive review and then conduct the expert review using the same criteria. Completing the competitive review first allows you to judge your own product relative to your competition.
Why Conduct a Competitive Analysis?
- Understand how the major competition in your space is handling usability
- Understand where your product stands in reference to its competition
- Generate ideas for solving various usability issues
- Get an idea of what it might take to gain a competitive edge via usability/UX
When is a Competitive Analysis Most Useful?
Competitive analysis is best done during the early planning and requirements-gathering stages. It can be conducted independent of a specific project cycle, or, with more focused criteria, it can help set the goals for a specific release. It is particularly useful:
- When a thorough competitive review has never been conducted
- When a new product or a major, game-changing rebuild is being considered
- Annually or biannually, to keep an eye on trends in your industry and on the web (such as changes in how social networking sites are integrated)
Limitations of a Competitive Analysis
- A competitive analysis can help you understand what it will take to come up to par with your competitors; however, it cannot show you how you can innovate and lead.
- Insights can be limited by the knowledge level and/or evaluation abilities of the reviewer.
- Competitive analyses can be time-consuming to conduct and need to be redone on a regular basis.
How to Conduct a UXD Competitive Analysis
- Select your competition. As a rule of thumb, I recommend targeting no fewer than five, but no more than ten, of your top competitors. The longer the competitive list, the more difficult it will be to do a sufficiently thorough investigation. In addition, you reach a point of diminishing returns, where little is going on in the space that hasn’t already been brought to light by a previously reviewed competitor.
- Define the assessment criteria. It is important to define your criteria before you get started, to be sure you are consistent in what you look for during your reviews. Take a moment to consider some of the issues and goals your organization is dealing with, and some hypotheses you uncovered during your audit, and try to incorporate these into the criteria where possible. Criteria should be specific and assessable. Here is a short excerpt of the criteria used in an e-commerce site evaluation:
i. Customer reviews and ratings?
ii. Can you wish list your items to refer back to later?
iii. In-store pickup option?
iv. Product availability indication?
v. Can you compare other items against each other?
vi. What general shopping features are available?
- Create a Spreadsheet. Put your entire assessment criteria list in the first column and the names of your competitors along the top header row. Be sure to leave a general comments line at the bottom of your criteria list; you will use this for notes during individual reviews. Some of the evaluation may be relative (e.g., rating the quality of imagery relative to other sites on a 1–10 scale), so it is particularly helpful to keep a single spreadsheet as you work through each of your reviews. (A minimal sketch of this structure appears after this list.)
- Gather your materials. Collect the competitive site URLs, software, or devices that you will be reviewing so they are readily available. A browser with tabbed browsing works great for web reviews. For mobile applications, you can often download simulators that let you run the device software on your computer.
- Start Reviewing. One at a time, go down the criteria list while looking through the application and enter your responses. It can be helpful to use a dual-screen setup with the application in one view and the spreadsheet in the other. Take your time and try to be as observant as possible; you are looking for both good and bad points to report. As you review, write down notes on what you liked, what annoyed you, and any interesting widgets you see. Take screen captures of interesting or relevant screens as you do each review.
- Prepare the Analysis. Create an outline of the review document, including a summary area and a section for each individual review. Paste the assessment results and your notes from the spreadsheet into the document and use them as a starting point for writing the report. You may need to grab additional screen captures of specific things that will appear in your evaluation.
- Summarize your Insights. Now that the reviews are done, you can look back at what data stands out as most relevant. Some of the criteria results can be translated into summary charts and graphs to better illustrate the information.
- Schedule a Read Out. Take time to present your findings; set up a read-out for your colleagues. You may want to create a high-level PowerPoint presentation of some of the more interesting points from your review. After conducting the read-out, publish your documentation and let people know where you’ve placed the information.
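For those who like to script their bookkeeping, here is a minimal sketch of the spreadsheet structure described in the steps above, written in Python with the standard csv module. The criteria echo the e-commerce excerpt earlier; the competitor names and file name are hypothetical placeholders.

```python
import csv

# Hypothetical assessment criteria (rows) and competitors (columns);
# substitute your own from the "Define the assessment criteria" step.
criteria = [
    "Customer reviews and ratings?",
    "Wish list to refer back to later?",
    "In-store pickup option?",
    "Product availability indication?",
    "Compare items against each other?",
    "General shopping features available?",
    "General comments",  # notes line at the bottom, per the steps above
]
competitors = ["Our System", "Company A", "Company B", "Company C"]

# Write an empty review grid to be filled in during each walkthrough.
with open("competitive_review.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Criteria"] + competitors)
    for criterion in criteria:
        writer.writerow([criterion] + [""] * len(competitors))
```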
Consider this…
When selecting a list of competitors, instead of just asking, “Who does what we do?”, think about users’ alternatives. Ask, “Who or what is most likely to keep users from using our software or visiting our website?” While not normally thought of as “competition,” the alternatives to an operations support system could be an Excel spreadsheet or printed paper forms.
Competitive Assessment Rubric
If you don’t have time for a full written competitive analysis, you can evaluate your competition with an assessment rubric. Because it results in a clear ranking, the rubric is a good “at a glance” way of communicating a system’s relative strengths and weaknesses to clients. Some things to note about this evaluation method:
- It is not a scientific analysis; it’s a shortcut for communicating your assessment of the systems reviewed. As in judging a talent competition, subjective ratings (e.g., “eight out of ten”) are inherently imprecise. However, if you use consistent, pre-defined criteria, you should end up with a realistic representation of your comparative rankings.
- When creating the assessment criteria it is important to select attributes that are roughly equivalent in value. In the example below, “Template Layouts” and “Browsing & Navigation” were equally important to the overall effectiveness of the sites reviewed.
The following rubric was created to evaluate mobile phone activation sites, but this approach can be adapted to create ranking metrics for any application.
1. First, create the criteria and rating system by which you will evaluate each system.
| Criterion | 1 – Poor | 2 – Average | 3 – Excellent |
| Marketing & Commerce Integration | The option to “Buy Now” is not offered on marketing pages, or users are sent to a third-party site. | The option to “Buy Now” is available from the marketing pages, but there are some usability issues with layout and transition. | The marketing and commerce sites are well integrated and provide users with an almost seamless transition. |
| Template Layouts (Commerce) | The basic layout is not consistent from page to page, and/or the activity areas within the layout are not clearly grouped by type of user task. | The layout is mostly consistent from page to page and major activity areas are grouped by task type. Some areas with information-heavy content or more complex user tasks deviate from the established layout paradigms. | The site shows a high level of continuity both in page-to-page transitions and task-type groupings. Information-heavy content and complex user tasks are well thought out and intuitive relative to the site’s established layout paradigms. |
| Browsing & Navigation | The site lacks a cohesive Information Architecture. Information is not in a clear top-down hierarchy. There are numerous “orphan” or pop-up pages that do not fit within the site structure. Similar content is duplicated in multiple areas or is presented in multiple navigational contexts. | The site has a structured Information Architecture. Secondary and tertiary navigation items are related to parent elements, but there may be multiple menus unrelated to the broader structure. There may be orphan pages of detailed or less relevant information. | The Information Architecture is highly cohesive. Information is structured with a clear understanding of user goals. Everything has a logical place within the architecture; secondary menus are incorporated into the site structure or clearly transitioned. |
| Terminology & Labeling (Commerce) | Terminology and labeling are inconsistent, confusing, or inaccurate. Different terms are used to represent the same concept. Some terms may not adhere to a common understanding of their meanings. | Terminology and labeling are consistent but could be more intuitive. Some unnecessary industry-specific terminology or uncommon terms are used. | Terminology and naming are both intuitive and consistent. Only necessary industry-specific terminology is used, in context, with help references. |
2. Next, evaluate the competition along with your own system, scoring the results.
| System | Marketing & Commerce Integration | Template Layouts | Browsing & Navigation | Terminology & Labeling | Total |
| Our System | 2 (Average) | 1 (Poor) | 2 (Average) | 3 (Excellent) | 8 |
| Company A | 3 | 3 | 2 | 3 | 11 |
| Company B | 2 | 2 | 2 | 2 | 8 |
| Company C | 1 | 3 | 1 | 2 | 7 |
| Company D | 2 | 3 | 2 | 3 | 10 |
| Company E | 3 | … | | | |
| Company F | | | | | |
| Company G | | | | | |
| Company H | | | | | |
3. Once your table is complete, you can sort on the totals to see your rough rankings.
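As a quick illustration of this totaling-and-sorting step, here is a minimal sketch in Python. The scores mirror the example table above; the dictionary layout is just one possible representation of the rubric.

```python
# Scores per system, in criteria order: marketing & commerce integration,
# template layouts, browsing & navigation, terminology & labeling.
scores = {
    "Our System": [2, 1, 2, 3],
    "Company A":  [3, 3, 2, 3],
    "Company B":  [2, 2, 2, 2],
    "Company C":  [1, 3, 1, 2],
    "Company D":  [2, 3, 2, 3],
}

# Total each row and sort descending to get the rough rankings.
rankings = sorted(scores.items(), key=lambda kv: sum(kv[1]), reverse=True)
for system, row in rankings:
    print(f"{system}: {sum(row)}")
```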
Foundations For A Great User Experience
User Experience Design Myths
This article attempts to dispel some common myths you may have heard about adopting user research or a user-centered design approach.
We Don’t Need UXD Because…
- Our users are early adopters, or are our employees…
- We are just trying to create a technical proof of concept…
- We work iteratively in an Agile development environment…
Whatever the reason, consider this: there is no demographic that likes a poorly designed product. UXD foundations are arguably the most direct and efficient means to a well-designed product. They can be adapted to any type of application or product lifecycle stage. Therefore, while your product may not need a complex UXD process or a specialized UXD team, the basics of user-centric design are always appropriate and will consistently result in a better product when appropriately applied.
UXD is Too Expensive
UXD methods can easily be modified to fit your budget. Each research method in this guide can be adapted for inexpensive implementation with very meager resources. In fact, an appropriately defined research plan should reduce costs by getting you much closer to what your end users want, with far fewer problems post-release. Maintenance costs related to unmet or unforeseen user needs can be as high as 80% of the overall development lifecycle costs (Pressman, 1992). There is good reason why UXD is a growing field within software design; it shows a strong ROI.[1]
UXD Slows Down the Development Cycle
UXD can reduce and simplify the product development process. A common misconception about user-centric design is that it adds to, and slows down, the development cycle. And yes, UXD can be incorporated in such a way that it needlessly adds time to the process. However, it does not have to. In fact, as early as 1991, a study found that usability engineering reduced product development cycles by 33 to 50% (Bossert, 1991). A well-integrated, process-appropriate UXD effort will not only produce a more successful product, but will reduce development time and costs.
UXD is Mostly Useful for Consumer Products
Some form of UXD is useful for any system a user interacts with. Can users easily find what they need? Are common tasks simplified rather than needlessly laborious? Are the labels clear, and do they use commonly accepted terms? All of these UX-related questions are as relevant to a consumer-focused e-commerce site as they are to a billing operations system. UX methods can be adapted and applied to information/marketing interfaces as well as transactional applications. The main difference is the goal: successful UXD on a consumer product usually drives more sales, while success in an operations system usually takes the form of higher adoption rates and increased productivity.
UXD Only Affects the Presentation Layer
UXD is more than skin deep. Don’t get me wrong: with a background in the arts, I see the value in making things look good, and most people respond to a pleasing visual design. But thinking of UXD as a presentation-layer process will substantively limit its ability to improve your product. A good example of how UXD has an impact on functionality is the case of faceted categorization for parametric searches[1]. User research can help you determine not only how these fields should be laid out, but how to categorize the data, what to name categories, and what kinds of search groupings users want.
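To make the faceted-search example concrete, here is a minimal sketch of parametric filtering over a hypothetical product catalog. The facet names and values are illustrative only; after all, choosing the right categories and labels is exactly what the user research would inform.

```python
# Hypothetical product catalog; facet names and values are placeholders.
products = [
    {"name": "Phone A", "brand": "Acme", "price_band": "$100-$199", "color": "black"},
    {"name": "Phone B", "brand": "Acme", "price_band": "$200-$299", "color": "silver"},
    {"name": "Phone C", "brand": "Bolt", "price_band": "$100-$199", "color": "black"},
]

def faceted_filter(items, **facets):
    """Return items matching every selected facet value."""
    return [p for p in items if all(p.get(k) == v for k, v in facets.items())]

# A user narrows the catalog by two facets at once.
print(faceted_filter(products, brand="Acme", price_band="$100-$199"))
```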
User Research Will Help Us Define the Right User Experience
Well, you can try. But it is important to understand that there is no single “correct” user experience for a product. The process of interpreting the results of user research and deciding how the resultant insights should be translated into final designs is probably best considered an art informed by science. It’s quite possible (and common) to have two totally different, yet viable, directions that both address the same user goals and requirements. (In these cases, additional criteria can be used to determine which direction to take, e.g., budget, time, brand alignment, or other features.) Despite its inherent lack of absolutes, user research can, however, give you the best educated guess possible regarding your users’ behavior, addressing upwards of 80% of issues before taking a product to market.
User Research is Market Research
User research is not market research. While there is a strong brand/marketing component to user experience design, the research methods are distinct, with different methodologies, considerations, and results. UX research is less concerned with what features are available or what the marketing messages are than with how successfully a specific design articulates its features and how usable and accessible the product is for the end user.
Market research is business-centric; it uses the analysis of data in an attempt to move people to action. User research is (you guessed it) user-centric; its goal is to analyze user behaviors and preferences in order to design for them better. Often, these two are means to the same end. Sometimes, however, there is a conflict. An e-commerce website could have a promotional pop-up screen that most users find annoying, despite the fact that it generates a good deal of revenue. The UXD practitioner should be free to advocate for the users’ goals, and the marketer for the business’s goals. Any resulting compromises should be considered in the broader context of the company’s brand.
User Research is Only Useful During Requirements Gathering
While user research is very useful during requirements gathering, there are real benefits to incorporating it throughout the software development lifecycle. Validation is a big part of user research. What sounded like a good idea and tested well in the design phase might not work as well in its final implementation. A number of inevitable changes and revisions occur during development, so it’s important to retest and validate your release after all the pieces of the puzzle have been put together. This can be achieved with participant-based user testing or by providing structured feedback mechanisms in a beta or limited pilot program.
Here are some sample User Research methods for each phase of the software development lifecycle:
- High-Level Requirements
o Ethnographic Studies
o Concept Prototypes & Testing
o Surveys with a Broad Focus
o Competitive Reviews
- Detailed Requirements
o Validation Tests for Screen and Workflows
o Tactical Surveys
o Graphic Design Reviews
- Development & QA Testing
o Validate Changes and Workarounds
- Release/Deployment
o Feedback Forms
o User-centric Beta or Pilot Testing
- Maintenance
o A/B Split Testing
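To close with one concrete technique from the list above, here is a minimal sketch of deterministic A/B split assignment, assuming a hypothetical user ID string. Hashing the user ID together with an experiment name keeps each user in the same variant across visits and keeps separate experiments independent of one another.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a user to variant 'A' or 'B'.

    Hashing user_id with the experiment name yields a stable,
    roughly uniform bucket in [0, 1] for the split threshold.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return "A" if bucket < split else "B"

# Example: route a (hypothetical) user to the control or test version.
print(assign_variant("user-12345", "checkout-redesign"))
```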