Moving from Vision to Design: User-Centered Methods for New Product Definition

Seminar by Dorothy M. Danforth for the IEEE Computer Society Leading Professional Seminar Series (30 minutes).

https://www.youtube.com/watch?v=JnymRBYY1vQ&feature=youtu.be

It’s a common scenario: A company is planning a new product or significant redesign. There have been various discussions about how the product should have a “great user experience” and “focus on the user.” But there are also conflicting ideas on what a great experience might entail, along with competing priorities for what the product absolutely must do to be successful in the marketplace.

Where to begin? How do you break through the confusion and move towards a clarified product vision? Whether in a large established corporation or a lean start-up, organizations struggle to progress from early ideation to clear requirements and a tangible design phase. This webinar will explore ways to leverage user experience design methods in the very early stages of the product life cycle.

This session covers the following:

  • An overview of practical user research and design planning methods useful for early stage products and redesigns
  • Strategies for leveraging these methods to refine a product’s vision and ensure features are tied to user goals
  • Examples of how keeping a focused eye on user needs can help resolve conflicting priorities and promote product team alignment

Conducting a User Experience Audit

“If you would understand anything, observe its
beginning and its development.”
– Aristotle

A User Experience (UX) Audit is a secondary research method that pulls together any potentially relevant existing information on your software product and its market, and then reviews what you find in the context of your design goals. It’s a straightforward approach I’ve used in just about every strategy project I’ve completed for clients.

This article is intended to illustrate the type of data commonly available that can be helpful. It is by no means an exhaustive list, but should be enough to point you in the right direction. I recommend starting any strategic effort with this approach because it is vital to have a baseline understanding of the market landscape before learning more about the users within that market. In addition, much of the information and insights gleaned from this type of evaluation can be used to directly inform other user research methods, such as persona or survey development. I usually start the audit process as early as possible via Internet research and by requesting client artifacts even when an engagement does not specifically call for a formal “audit”.

Most organizations have existing structures by which they pull in various usage and marketing metrics. However, this data is not usually evaluated with a user-centric, user-behavior mindset. A UX Audit entails skimming through a large volume of data to unearth a relatively small set of relevant informational nuggets. Even so, it is a worthwhile effort, and the audit’s scope can be defined in a manageable way.

Err on the side of collecting more information rather than less. If a client or stakeholder tells you the content you are requesting is not relevant, it is a good idea to be persistent and review the information for yourself. They might not be looking at it with a “UX” mindset.

A UX Audit can be used to answer the following questions:

  • What are the current user trends and expectations for this industry/market?
  • What have we already tried? Of that, what worked and what didn’t?
  • What do our internal stakeholders think about our UX? What do they think is needed? Why?
  • What customer issues, needs or problems are indicated in the data? Of those, which might be addressed (in whole or part) by the product’s UX?
  • What ongoing metrics are being collected that UX can use in the future?

Why Conduct an Audit?

The most obvious benefit of conducting an audit is that it keeps you from reinventing the wheel, i.e. conducting new primary research when the same information is already available. An equally productive reason is to help you formulate hypotheses about user behavior and/or issues with your product that you can then investigate further. A supplemental benefit is that the process helps you compile an accessible body of UX knowledge for your products that you can build upon over time.

When are Audits Most Useful?

  • When undertaking the development of a new product.
  • Before starting a substantive re-design.
  • When starting a UX practice within an organization.
  • As an exercise for new staff to ramp into product knowledge.
  • If your company has accumulated a large amount of product research data, gathered by different departments for different purposes.

Development Life-cycle

In the context of the software development life-cycle, UX Audits are most useful when conducted during the high-level and detailed requirements gathering stages. Some audit materials (e.g. customer care data, web analytics, and sales data) can be re-evaluated after release as follow-up research to track the effects of the product’s launch.

Limitations of an Audit

  • There is no guarantee that you will find data that addresses any specific questions. Sometimes the data isn’t there or it is too abstract to be useful in the context of UXD.
  • It can be time consuming and somewhat overwhelming for the beginner to process the information, especially if audits are rarely conducted.
  • Because the audit materials are almost entirely secondary research, you are limited to the methodologies, goals, and potential flaws of the existing research.
  • A good audit involves a wide range of information sources. New companies and some industries might have difficulty pulling together sufficiently diverse sources during the first few audits. In some cases research might need to be purchased.

How to Conduct the Audit

The steps to conducting a UX Audit are straightforward. At a high level, you gather the audit materials together, create a spreadsheet for notes, review the materials, document findings, and then develop your insights or hypotheses for further research.

  1. Pull together your audit materials. The start of a UX audit is an excellent time to engage colleagues from other departments; you can solicit their help in gathering information and get different groups involved with tracking data over time.
    • Stakeholder Interviews – Interviews are a great starting point for a UX Audit and can go a long way toward helping you gather the materials you need. You will want to speak (one on one) with internal stakeholders such as department heads, product managers, and lead developers. You might already speak with these individuals, but interviewing them specifically about the market landscape and customer issues may not only provide some good insights, it can also help you gain buy-in and support for your efforts. Be sure to ask each stakeholder for a list of their recommended materials and follow up to get them.
    • Sales Statistics – While primarily used by sales and finance, some of this data can be useful for a UX Audit, particularly if you are reviewing the effectiveness of a lead-generating or e-commerce web site. One thing to look for is information that indicates a problem with the messaging or help functions of the site. For example, if a site selling window curtains has a higher ratio of online customers returning curtains due to “wrong size” than its in-store customers, this might indicate a potential issue with the clarity of size information on the site.
    • Call Center Data – Call centers are a great way to gather information about what ticks people off. While much of the information may not be relevant, you can usually gain some key insights about what is missing or, even better, get ideas on what you can proactively improve. As an example, the online signup process for a broadband company I worked with had functionality that would tell users whether they were eligible for services. A UX Audit of call center data showed that a percentage of customers who were initially told they were eligible were actually ineligible after a closer review of their order details. While we were unable to resolve this programmatically in the short term, armed with this understanding we were able to modify functionality and messaging to more appropriately set expectations for users.
    • Web Analytics – Quantitative web analytics will give you insight into how many people are visiting your web site, where they are coming from, what they are looking at, and some trends over time. Advanced analytical tools can be implemented and mined to give ever-increasing detail about what people are doing once they get to your site, where they tend to drop off, and where they go once they leave. I’ve had at least one corporate client who was not mining their web logs. Luckily, the data was being collected, just not used within an analytics package. We were able to get them set up with an appropriate package that allowed the team to view historical and ongoing site usage.
    • Adoption Metrics – Feature adoption/usage metrics are a good way to assess the efficacy of desktop and/or web-based operational support systems. These metrics can be system-tracked, but in some cases they need to be investigated manually. While this is fairly easy for a SaaS or mobile provider, if you are a desktop application provider you might only be able to get your customer adoption metrics through surveys or interviews if these monitoring touch-points have not been built into your system.
    • Feedback and Survey Results – Many marketing groups put out feedback forms and/or release campaign-specific user surveys. These are usually not UX focused, but they can offer some insight into your users’ preferences, attitudes, and behaviors. Take some time to scan the comment fields and categorize them if possible. You can turn this information into useful statistics with supplemental anecdotes, e.g. “20% of user comments referred to difficulty finding something,” supported by a representative quote such as “I can’t find baby buggies, do you still sell them?”
    • Past Reviews & Studies – Any internal market research, usability studies, ethnographic studies, or expert reviews[1] that have been conducted should be audited. Even if a study was conducted for a previous release or in a different context than your project, it may reveal some key informational gems and is worth scanning through. In addition to your own critical eye, it is wise to find out whether others in the organization valued the research and why.
    • The Twitterverse and Blogosphere – While not relevant for all companies, review sites, blogs, Facebook, Twitter, and other social networking sites can offer a unique and unfiltered view of how customers perceive your software or website. Try searching your company or product’s name on Google and other sites to see what information is returned. Some of the social networking sites even offer functionality that helps you keep track. If people are talking, you may want to add this type of task to your research calendar at consistent intervals.
    • Specifications – Take a look at product functional specifications, roadmaps, and business analyses. Anything generated relatively recently that can give some background insight into why certain features or functions were developed might prove useful. Many of these documents contain relevant facts about users that were researched by the authors. At best it will save you some of your own research time; at worst you’ll have a better understanding of why certain decisions were made for what exists today.
    • Market Research – While market research might give insight into user demographics, this type of research is usually not directly translatable into how you should design your product. However, it can help you develop hypotheses about what might work and provide a framework for user personas and user narratives. These hypotheses can be tested through other research methods. Market research can help you make a reasonable guess at things such as users’ technology skill level, initial expectations, or level of commitment to completing certain tasks.
  2. Create a Spreadsheet. Create a spreadsheet listing all of the materials you will be auditing. You can use this as a means of tracking what was reviewed, and by whom if more than one person is working on the audit. The spreadsheet can also be used as a central place to put your notes, facts, insights, ideas and questions generated by the review of each audit material.
  3. Review the Materials. Review materials for any relevant information, updating your spreadsheet as you progress. While it sounds daunting, the audit process can be a fairly cursory review; you don’t need to examine every bit of detail, as scanning can be sufficient. Remember, you are only trying to pull out the 10-15% of data that will be relevant to the goals of your project.
  4. Categorize Findings. After you’ve completed the review portion of the UX Audit, it’s time to clean up your spreadsheet notes, analyze the information, and categorize your findings. Try to distill what you’ve learned into high-level concepts that are supported by data points and anecdotes, followed by your hypotheses (a minimal sketch of tallying comment data this way appears after this list). An oversimplified example of a categorized finding would be:
     Category – Way Finding (users’ ability to find things)
     Data
     – 20% of feedback comments referred to users not being able to find what they are looking for.
     – A recent study indicates that if users can’t find an item within 3 minutes, they leave the site.
     Anecdote
     – “I can’t find baby buggies, do you still sell them?”
     Hypotheses
     – We might have an issue with our site’s navigation or taxonomy. We might need a search function.

  5. Schedule a Read-out. Set up a read-out for your colleagues and take time to present your findings. After conducting the read-out, publish your documentation on the intranet, to a wiki, in a document management system, or on a file share. Let people know where you’ve placed the information. Now is a good time to tentatively schedule the next audit on your research calendar.
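
If you want to quantify the comment data behind a finding like the Way Finding example in step 4 (e.g. “20% of feedback comments referred to users not being able to find what they are looking for”), a small script can tally comments once each has been tagged with a category in your spreadsheet export. The sketch below is illustrative only: the CSV file name and the “category” and “comment” column names are assumptions, and would need to match however your own audit spreadsheet is organized.

```python
# Tally categorized feedback comments exported from an audit spreadsheet.
# Assumes a CSV (hypothetical name and columns) with one row per comment and a
# manually assigned "category" column, e.g. "way finding", "pricing", "other".
import csv
from collections import Counter

def summarize_feedback(path):
    counts = Counter()
    examples = {}  # one representative quote (anecdote) per category
    total = 0
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            category = row["category"].strip().lower()
            counts[category] += 1
            examples.setdefault(category, row["comment"])
            total += 1
    for category, n in counts.most_common():
        print(f"{category}: {n} of {total} comments ({100 * n / total:.0f}%)")
        print(f'  e.g. "{examples[category]}"')

if __name__ == "__main__":
    summarize_feedback("feedback_comments.csv")  # hypothetical export file
```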

Additional Resources

  1. Pew Internet Life (www.pewinternet.org) – Internet research, ongoing
  2. Omniture (www.omniture.com) – robust analytics package
  3. Web Trends (www.webtrends.com) – mid-range analytics package
  4. Google Analytics (analytics.google.com) – analytics with useful functions
  5. Forrester (www.forrester.com) – Market research
  6. ComScore (www.comscore.com) – Market research

Questions About This Topic?

I’m happy to answer more in-depth questions about this topic or provide further insight into how this approach might work for you in your company. Post a comment or email me at dorothy [at] danforthmedia [dot] com

ABOUT DANFORTH MEDIA
Danforth is a design strategy firm offering software product planning, user research, and user centered design (UCD). We provide credible insights and creative solutions that allow our clients to deliver successful, customer-focused products. Danforth specializes in leveraging user experience design (UXD), design strategy, and design research methodologies to optimize complex multi-platform products for the people who use them.

We transform research into smart, enjoyable, and enduring design.
www.danforthmedia.com

 


[1] Common term used to describe a usability evaluation conducted by a UX specialist.


Conducting a Solid UX Competitive Analysis

“Competition brings out the best in products
and the worst in people.” –  David Sarnoff

Most people are familiar with the concept of a competitive analysis; it’s a fairly standard business term for identifying and evaluating your competition in the marketplace. In the case of UXD, a competitive analysis is used to evaluate how a given product’s competition stacks up against usability standards and overall user experience. A comparative analysis is a term I’ve often used to describe the review of applications or websites that are not in direct competition with a product, but may have similar processes or interface elements that are worth reviewing.

Often, when a competitive review is conducted, the applications or websites are reviewed against a set of fairly standard usability principles (or heuristics) such as layout consistency, grouping of common tasks, link affordance, etc. Sometimes, however, the criteria can be more broadly defined to include highlights of interesting interaction models, notable functionality and/or other items that might be useful in the context of the product being designed and/or goals of a specific release.

The Expert Review
Competitive reviews can be done in conjunction with an “expert” review, which is a usability evaluation of the existing product. If doing both a competitive and an expert review, it’s helpful to start with the competitive review and then conduct the expert review using the same criteria. Completing the competitive review first allows you to judge your own product relative to your competition.

Why Conduct a Competitive Analysis?

  • Understand how the major competition in your space is handling usability
  • Understand where your product stands in reference to its competition
  • Generate ideas on how to solve various usability issues
  • Get an idea of what it might take to gain a competitive edge via usability/UX

When is a Competitive Analysis Most Useful?

Competitive analysis is best done during the early planning and requirements gathering stages. It can be conducted independently of a specific project cycle, or, if used with more focused criteria, it can help with the goals of a specific release. It is particularly useful:

  • If a thorough competitive review has never been conducted.
  • When a new product or a major, game-changing rebuild is being considered.
  • Annually or biannually, to keep an eye on trends in your industry and on the web (such as changes in how social networking sites are integrated).

Limitations of a Competitive Analysis

  • A competitive analysis can help you understand what it will take to come up to par with your competitors; however, it cannot show you how you can innovate and lead.
  • Insights can be limited by the knowledge level and/or evaluation abilities of the reviewer.
  • They can be time consuming to conduct and need to be re-done on a regular basis.

How to Conduct a UXD Competitive Analysis

  1. Select your competition. I recommend targeting no fewer than five, but no more than ten, of your top competitors. The longer the competitive list, the more difficult it will be to do a sufficiently thorough investigation. In addition, there is a point of diminishing returns where there is not much new going on in the space that hasn’t already been brought to light by a previously reviewed competitor.
  2. Consider users’ alternatives. When selecting your list of competitors, instead of just asking “who does what we do?”, think about users’ alternatives. Ask: who or what is most likely to keep users from using our software or visiting our website? While not normally thought of as “competition”, alternatives for an operations support system could be an Excel spreadsheet or printed paper forms.

  3. Define the assessment criteria. It is important to define your criteria before you get started to be sure you are consistent in what you look for during your reviews. Take a moment to consider some of the issues and goals your organization is dealing with, along with hypotheses you uncovered during your audit, and try to incorporate these into the criteria where possible. Criteria should be specific and assessable. Here is a short excerpt of the criteria used in an e-commerce site evaluation:
    i.    Customer reviews and ratings?
    ii.    Can you wish list your items to refer back to later?
    iii.    In-store pickup option?
    iv.    Product availability indication?
    v.    Can you compare other items against each other?
    vi. What general shopping features are available?
  4. Create a Spreadsheet. Put your entire assessment criteria list in the first column, and the names of your competitors along the top header row. Be sure to leave a general comments line at the bottom of your criteria list; you will use this for notes during individual reviews. Some of the evaluation might be relative (e.g. rate the quality of imagery relative to other sites on a 1-10 scale), so it is particularly helpful to work from a single spreadsheet as you move through each of your reviews. A minimal sketch of generating such a worksheet appears after this list.
  5. Gather your materials. Collect the competitive site URLs, software, or devices that you will be reviewing so they are readily available. A browser with tabbed browsing works well for web reviews. For mobile applications, you can often download a simulator that lets you run the mobile software on your computer.
  6. Start Reviewing. One at a time, go down the criteria list while looking through the application and enter your responses. It can be helpful to use two monitors, with the application on one screen and the spreadsheet on the other. Take your time and try to be as observant as possible; you are looking for both good and bad points to report. As you review, write down notes on what you liked, what annoyed you, and any interesting widgets you see. Take screen captures of interesting or relevant screens as you do each review.
  7. Prepare the Analysis. Create an outline of the review document, including a summary area and a section for each individual review. Paste the assessment results and your notes from the spreadsheet into the document and use them as a starting point for writing the report. You may need to grab additional screen captures of specific things that will be in your evaluation.
  8. Summarize your Insights. Now that the reviews are done, you can look back at which data points stand out as most relevant. Some of the criteria results can be translated into summary charts and graphs to better illustrate the information.
  9. Schedule a Read-Out. Set up a read-out for your colleagues and take time to present your findings. You may want to create a high-level PowerPoint presentation of some of the more interesting points from your review. After conducting the read-out, publish your documentation and let people know where you’ve placed the information.
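
As a companion to step 4, here is a minimal sketch of generating a blank review worksheet as a CSV file, with one row per criterion and one column per competitor. The criteria echo the e-commerce excerpt above; the competitor names and output file name are placeholders you would replace with your own.

```python
# Generate a blank competitive-review worksheet as a CSV: one row per
# criterion, one column per competitor, plus a general comments row at the end.
# The criteria, competitor names, and file name below are placeholders.
import csv

criteria = [
    "Customer reviews and ratings?",
    "Can you wish list your items to refer back to later?",
    "In-store pickup option?",
    "Product availability indication?",
    "Can you compare other items against each other?",
    "What general shopping features are available?",
    "General comments",
]
competitors = ["Company A", "Company B", "Company C", "Company D", "Company E"]

with open("competitive_review.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["Criteria"] + competitors)            # header row
    for criterion in criteria:
        writer.writerow([criterion] + [""] * len(competitors))
```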

Competitive Assessment Rubric

If you don’t have time for a full written competitive analysis, you can evaluate your competition with an assessment rubric. Because it results in a clear ranking, the rubric is a good “at a glance” way of communicating a product’s relative strengths and weaknesses to clients. Some things to note about this evaluation method:

  • It is not a scientific analysis; it’s a short-cut for communicating your assessment of the systems reviewed. Like judging a talent competition, subjective ratings (e.g. “8 out of 10”) are inherently imprecise. However, if you use consistent, pre-defined criteria you should end up with a realistic representation of your comparative rankings.
  • When creating the assessment criteria it is important to select attributes that are roughly equivalent in value. In the example below, “Template Layouts” and “Browsing & Navigation” were equally important to the overall effectiveness of the sites reviewed.

The following rubric was created to evaluate mobile phone activation sites, but this approach can be adapted to create ranking metrics for any application.

1. First, create the criteria and rating system by which you will evaluate each system.

Rating scale: 1 – Poor, 2 – Average, 3 – Excellent

Marketing & Commerce Integration
  • 1 – Poor: The option to “Buy Now” is not offered in the marketing pages, or users are sent to a third-party site.
  • 2 – Average: The option to “Buy Now” is available from the marketing pages, but there are some usability issues with layout and transition.
  • 3 – Excellent: The marketing and commerce sites are well integrated and provide users with an almost seamless transition.

Template Layouts (Commerce)
  • 1 – Poor: The basic layout is not consistent from page to page and/or the activity areas within the layout are not clearly grouped by type of user task.
  • 2 – Average: The layout is mostly consistent from page to page and major activity areas are grouped by task type. Some areas with information-heavy content or more complex user tasks deviate from the established layout paradigms.
  • 3 – Excellent: The site shows a high level of continuity both in page-to-page transitions and task-type groupings. Information-heavy content and complex user tasks are well thought out and intuitive relative to the site’s established layout paradigms.

Browsing & Navigation
  • 1 – Poor: The site lacks a cohesive Information Architecture. Information is not in a clear top-down hierarchy. There are numerous “orphan” or pop-up pages that do not fit within the site structure. Similar content is duplicated in multiple areas or is presented in multiple navigational contexts.
  • 2 – Average: The site has a structured Information Architecture. Secondary and tertiary navigation items are related to parent elements, but there may be multiple menus unrelated to the broader structure. There may be orphan pages of detail or less relevant information.
  • 3 – Excellent: The Information Architecture is highly cohesive. Information is structured with a clear understanding of user goals. Everything has a logical place within the architecture; secondary menus are incorporated into the site structure or clearly transitioned.

Terminology & Labeling (Commerce)
  • 1 – Poor: Terminology and labeling is inconsistent, confusing, or inaccurate. Different terms are used to represent the same concept. Some terms may not adhere to a common understanding of their meanings.
  • 2 – Average: Terminology and labeling is consistent but could be more intuitive. Some unnecessary industry-specific terminology or uncommon terms are used.
  • 3 – Excellent: Terminology and naming is both intuitive and consistent. Only necessary industry-specific terminology is used, in context, with help references.

2. Next, evaluate the competition along with your own system, scoring the results.

  • Marketing & Commerce Integration: how well the site handles the user’s transition from browsing product information through making an online purchase decision.
  • Template Layouts: how clear and consistent the overall layout is and how well the layout elements translate to different areas of the site.
  • Browsing & Navigation: relative clarity and consistency of the information architecture and overall ease of browsing.
  • Terminology & Labeling: relative clarity and consistency of the language use and element naming.

 


Scores (criteria order: Marketing & Commerce Integration, Template Layouts, Browsing & Navigation, Terminology & Labeling):

  • Our System: 2 (Average), 1 (Poor), 2 (Average), 3 (Excellent) – Total: 8
  • Company A: 3, 3, 2, 3 – Total: 11
  • Company B: 2, 2, 2, 2 – Total: 8
  • Company C: 1, 3, 1, 2 – Total: 7
  • Company D: 2, 3, 2, 3 – Total: 10
  • Company E: 3, …
  • Company F: …
  • Company G: …
  • Company H: …

 

3. Once your table is complete, you can sort on the totals to see your rough rankings, as in the short sketch below.
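
If your scores live in a simple data structure rather than a spreadsheet, the totals and rankings can be computed the same way. This is a minimal sketch using the completed rows from the example table above; adjust the names and scores to match your own rubric.

```python
# Total each system's rubric scores and sort descending to get rough rankings.
# Criteria order: Marketing & Commerce Integration, Template Layouts,
# Browsing & Navigation, Terminology & Labeling (scores from the example table).
scores = {
    "Our System": [2, 1, 2, 3],
    "Company A": [3, 3, 2, 3],
    "Company B": [2, 2, 2, 2],
    "Company C": [1, 3, 1, 2],
    "Company D": [2, 3, 2, 3],
}

rankings = sorted(scores.items(), key=lambda item: sum(item[1]), reverse=True)
for rank, (name, ratings) in enumerate(rankings, start=1):
    print(f"{rank}. {name}: total {sum(ratings)} {ratings}")
```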